Speech Title: From Computing With Words to Explainable Artificial Intelligence
Abstract: After the inception of fuzzy sets, Lotfi A. Zadeh very early proposed several ways to represent some forms of natural language and meaning. His first attempt was the language PRUF1, followed by several new notions such as fuzzy information granulation and the fuzzy constraint. The most important of his seminal proposals was the concept of the linguistic variable2, defined as “a variable whose values are words or sentences in a natural or artificial language”; he claimed that “its main applications lie in the realm of humanistic systems, especially in the fields of artificial intelligence, linguistics, …”. The culmination of his proposals on this subject was the concept of Computing With Words3, proposed in 1996 as a means to use words instead of numbers in computing and reasoning.
In 2016, twenty years after the emergence of this concept, Explainable Artificial Intelligence was introduced in the DARPA incentives4, with the dual aim of producing more explainable artificial intelligence models while maintaining good learning performance, and of enabling users to interact easily with intelligent systems. The capacity of intelligent systems to understand natural language, at least to some extent, and to interact with humans by means of words is clearly a component of the explainability of artificial intelligence.
Several related concepts, such as the understandability, expressiveness, and interpretability of intelligent systems, are inherent in their quality. Interpretability in particular has been extensively investigated in the construction and analysis of fuzzy intelligent systems.
We will review the importance of Lotfi A. Zadeh’s seminal work in various contributions to Explainable Artificial Intelligence based on fuzzy knowledge representation, and we will highlight the variety and the efficiency of such solutions.
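To make the idea concrete, here is a minimal sketch (illustrative only, not drawn from the talk) of Zadeh's linguistic variable: the variable "temperature" takes word values such as "cold", "warm", and "hot", each defined by a fuzzy membership function over a numeric scale. The term set and the triangular shapes below are hypothetical choices for illustration.

```python
def triangular(a, b, c):
    """Return a triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical term set for the linguistic variable "temperature" (degrees Celsius)
temperature_terms = {
    "cold": triangular(-10, 0, 15),
    "warm": triangular(10, 20, 30),
    "hot":  triangular(25, 35, 50),
}

def describe(x):
    """Map a numeric reading to the word with the highest membership degree."""
    return max(temperature_terms, key=lambda term: temperature_terms[term](x))

print(describe(22))  # a 22 degree reading is best described as "warm"
```

A number is thus translated into a word of the term set, the elementary step on which computing with words builds.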
1 L.A. Zadeh, PRUF - a meaning representation language for natural languages, International Journal of Man-Machine Studies, vol. 10, pp. 395-460, 1978.
2 L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-I, Information Sciences, vol. 8, no. 3, pp. 199-249, 1975.
3 L.A. Zadeh, Fuzzy logic = computing with words, IEEE Transactions on Fuzzy Systems, vol. 4, no. 2, pp. 103-111, May 1996.
5 B. Bouchon-Meunier, Information quality: the contribution of fuzzy methods, in Data Science for Financial Econometrics (Nguyen Duc Trung, Nguyen Ngoc Thach, Vladik Kreinovich, eds.), Studies in Computational Intelligence, Springer, 2020 (to appear).
Prof. Michael Felsberg
Linköping University, Sweden
Michael Felsberg (MSc 1998, PhD 2002) is Full Professor and the Head of the Computer Vision Laboratory, Linköping University. His research interests include learning and modeling of machine perception. He has published more than 150 reviewed conference papers, journal articles, and book contributions, with more than 11,000 citations in total. He has received awards from the German Pattern Recognition Society in 2000, 2004, and 2005, from the Swedish Society for Automated Image Analysis in 2007 and 2010, from the Conference on Information Fusion in 2011 (Honorable Mention), from the CVPR Workshop on Mobile Vision 2014, and from ICPR 2016 (best paper in computer vision). He has achieved top ranks in various challenges (VOT: 3rd 2013, 1st 2014, 2nd 2015; VOT-TIR: 1st 2015; OpenCV Tracking: 1st 2015; KITTI Stereo Odometry: 1st March 2015). He regularly serves as Associate Editor for major journals in the field (e.g. JMIV, IMAVIS) and Area Chair for top-tier conferences (e.g. ECCV, BMVC, CVPR). He was Track Chair of the International Conference on Pattern Recognition 2016, General Co-Chair of the DAGM symposium in 2011, General Chair of CAIP 2017, and Program Chair of SCIA 2019.
Speech Title: Machine Learning for Visual Perception
Abstract: Deep learning with vast amounts of data has revolutionized the field of computer vision. Robot vision, however, requires going beyond these methods due to its specific requirements: the amount of data is limited and needs to be processed on the fly, annotation is incomplete or missing, measurements are sparse and of varying levels of certainty, and detection and segmentation tasks need to be performed in real time. Modern approaches to these visual perception problems will be presented and explained: online learning, weakly supervised and reinforcement learning, certainty-adaptive data densification, anomaly detection, and modulation-based discriminative segmentation. The presented methods achieve state-of-the-art results on major benchmarks, such as VOT, DAVIS, KITTI, and CARLA.
Speech Title: Machine Learning for Real-World Moderate-Size Data
Abstract: Machine learning consists of building models from data. Recently, much attention in the machine learning community has focused on big data, i.e. data that are available more or less without limitation, or at least in amounts that make it possible to use new paradigms for model design, such as deep learning. However, there exist countless application contexts where the limited amount of data is a concern. An obvious example is patient-based data in healthcare: single databases measuring the same information in the same settings for more than a few hundred or thousand patients are rare, while the mathematical approximations behind the learning methods and algorithms would theoretically require millions or billions of samples. There is thus an increasing need for machine learning methods that can handle moderate-size databases of high-dimensional, heterogeneous data. High-dimensional means that the number of samples is "low" with respect to the dimension of the data space. Heterogeneous means here that the same features are not measured, available, or sufficiently accurate on all data, making hypotheses on the availability and accuracy of features an important characteristic that has to be considered in models. This talk will introduce some areas of machine learning that are useful for answering these questions. It will cover fundamental aspects of feature selection, dimensionality reduction, and missing data imputation, and introduce challenges related to combining data from different sources.
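As a minimal sketch (illustrative only, not from the talk) of two of the steps mentioned above, the toy example below applies mean imputation of missing values followed by a crude variance-based feature selection, the kind of filter that is practical when samples are few and dimensions many. The data table and the choice of variance as a ranking criterion are assumptions made for illustration.

```python
def impute_mean(rows):
    """Replace None entries by the column mean (mean imputation)."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if r[j] is None else r[j] for j in range(n_cols)]
            for r in rows]

def select_by_variance(rows, k):
    """Keep the k columns with the highest variance (filter-style feature selection)."""
    n_cols = len(rows[0])
    def var(j):
        col = [r[j] for r in rows]
        m = sum(col) / len(col)
        return sum((x - m) ** 2 for x in col) / len(col)
    keep = sorted(range(n_cols), key=var, reverse=True)[:k]
    keep.sort()
    return [[r[j] for j in keep] for r in rows]

# Toy "patient" table: 4 samples, 3 features, with missing entries (None)
data = [[1.0, None, 5.0],
        [2.0, 3.0, 5.0],
        [None, 4.0, 5.0],
        [4.0, 5.0, 5.0]]
completed = impute_mean(data)
reduced = select_by_variance(completed, 2)  # the constant third column is dropped
```

Real moderate-size, heterogeneous datasets call for more careful choices (e.g. model-based imputation, selection criteria aware of the target variable), which is precisely the territory the talk covers.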