
Keynote Speakers of ISCMI2020

Prof. Bernadette Bouchon-Meunier
Sorbonne Université, France
IEEE Life Fellow; President-Elect, IEEE Computational Intelligence Society (2019)

Bernadette Bouchon-Meunier is a director of research emeritus at the National Centre for Scientific Research and Sorbonne University, and the former head of the department of Databases and Machine Learning in the Computer Science Laboratory of the University Pierre et Marie Curie-Paris 6 (LIP6). She is the Editor-in-Chief of the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems and the Co-Executive Director of the IPMU International Conference, held every other year since 1986. B. Bouchon-Meunier is the (co-)editor of 27 books and the (co-)author of five. She has (co-)authored more than 400 papers on approximate and similarity-based reasoning, as well as on the application of fuzzy logic and machine learning techniques to decision-making, data mining, risk forecasting, information retrieval, user modelling, and sensorial and emotional information processing.
She is currently the President of the IEEE Computational Intelligence Society and the IEEE France Section Computational Intelligence chapter vice-chair. She is an IEEE Life Fellow, an International Fuzzy Systems Association Fellow and an Honorary Member of the EUSFLAT Society. She received the 2012 IEEE Computational Intelligence Society Meritorious Service Award, the 2017 EUSFLAT Scientific Excellence Award and the 2018 IEEE CIS Fuzzy Systems Pioneer Award.


Speech Title: From Computing With Words to Explainable Artificial Intelligence

Abstract: After the inception of fuzzy sets, Lotfi A. Zadeh proposed, very early on, several ways to represent some forms of natural language and meaning. His first attempt was the language PRUF [1], followed by several new notions such as fuzzy information granulation and fuzzy constraints. The most important of his seminal proposals was the concept of the linguistic variable [2], defined as “a variable whose values are words or sentences in a natural or artificial language”; he claimed that “its main applications lie in the realm of humanistic systems, especially in the fields of artificial intelligence, linguistics, …”. The culmination of his proposals on this subject was the concept of Computing With Words [3], proposed in 1996 as a means to use words instead of numbers in “computing and reasoning”.
In 2016, twenty years after the emergence of this concept, Explainable Artificial Intelligence was introduced in the DARPA incentives [4], with the twin goals of producing more explainable artificial intelligence models that maintain good learning performance, and of enabling users to easily interact with intelligent systems. The capacity of intelligent systems to understand natural language, at least to some extent, and to interact with humans by means of words, is clearly a component of the explainability of artificial intelligence.
Several related concepts, such as understandability, expressiveness, and interpretability, are inherent in the quality of intelligent systems. Interpretability, in particular, has been extensively investigated in the construction and analysis of fuzzy intelligent systems.
We will review the importance of Lotfi A. Zadeh’s seminal work in various contributions to Explainable Artificial Intelligence based on fuzzy knowledge representation and we will highlight the variety and the efficiency of such solutions.

[1] L.A. Zadeh, "PRUF - a meaning representation language for natural languages," Int. J. Man-Machine Studies, vol. 10, pp. 395-460, 1978.
[2] L.A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-I," Information Sciences, vol. 8, no. 3, pp. 199-249, 1975.
[3] L.A. Zadeh, "Fuzzy logic = computing with words," IEEE Transactions on Fuzzy Systems, vol. 4, no. 2, pp. 103-111, May 1996.
[5] B. Bouchon-Meunier, "Information quality: the contribution of fuzzy methods," in Data Science for Financial Econometrics (Nguyen Duc Trung, Nguyen Ngoc Thach, Vladik Kreinovich, eds.), Studies in Computational Intelligence, Springer, 2020 (to appear).
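As an illustrative aside, a linguistic variable in Zadeh's sense can be sketched in a few lines of code. The variable name ("temperature"), the word set, and the triangular membership functions below are hypothetical choices for illustration, not taken from the talk:

```python
# A minimal sketch of a linguistic variable: the variable "temperature"
# takes word values, each defined by a fuzzy membership function over a
# numeric universe of discourse.

def triangular(a, b, c):
    """Return a triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical word definitions for the linguistic variable "temperature".
temperature = {
    "cold": triangular(-10.0, 0.0, 12.0),
    "warm": triangular(8.0, 18.0, 28.0),
    "hot":  triangular(24.0, 35.0, 45.0),
}

def describe(x):
    """Map a numeric reading to word/degree pairs -- computing with words."""
    return {word: round(mu(x), 2) for word, mu in temperature.items() if mu(x) > 0}

print(describe(10.0))  # a 10-degree reading is partly "cold" and partly "warm"
```

Reasoning over such word/degree pairs, rather than over the raw numbers, is the starting point of Computing With Words.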


Prof. Michael Felsberg
Linkoping University, Sweden

Michael Felsberg (MSc 1998, PhD 2002) is Full Professor and the Head of the Computer Vision Laboratory, Linköping University. His research interests include learning and modeling of machine perception. He has published more than 150 reviewed conference papers, journal articles, and book contributions, with more than 11,000 citations in total. He has received awards from the German Pattern Recognition Society in 2000, 2004, and 2005; from the Swedish Society for Automated Image Analysis in 2007 and 2010; from the Conference on Information Fusion in 2011 (Honorable Mention); from the CVPR Workshop on Mobile Vision in 2014; and from ICPR 2016 (best paper in computer vision). He has achieved top ranks in various challenges (VOT: 3rd in 2013, 1st in 2014, 2nd in 2015; VOT-TIR: 1st in 2015; OpenCV Tracking: 1st in 2015; KITTI Stereo Odometry: 1st in March 2015). He is regularly Associate Editor for major journals in the field (e.g. JMIV, IMAVIS) and Area Chair for top-tier conferences (e.g. ECCV, BMVC, CVPR). He was Track Chair of the International Conference on Pattern Recognition 2016, General Co-Chair of the DAGM symposium in 2011, General Chair of CAIP 2017, and Program Chair of SCIA 2019.

Speech Title: Machine Learning for Visual Perception

Abstract: Deep learning with vast amounts of data has revolutionized the field of computer vision. Robot vision, however, requires going beyond the established methods because of its specific requirements: the amount of data is limited and needs to be processed on the fly, annotation is incomplete or missing, measurements are sparse and of varying levels of certainty, and detection and segmentation tasks need to be performed in real time. Modern approaches to these visual perception problems will be presented and explained: online learning, weakly supervised and reinforcement learning, certainty-adaptive data densification, anomaly detection, and modulation-based discriminative segmentation. The presented methods achieve state-of-the-art results on major benchmarks such as VOT, DAVIS, KITTI, and CARLA.
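As a generic illustration of the online-learning requirement mentioned above (not the speaker's method), the sketch below updates a model parameter-by-parameter as observations stream in, instead of training on a stored batch. The toy 1-D linear model, learning rate, and data stream are all hypothetical:

```python
# Online stochastic gradient descent for a 1-D linear model y ~ w*x + b:
# each incoming sample triggers one update, so no data needs to be stored.

def sgd_step(w, b, x, y, lr=0.1):
    """One online update from a single (x, y) observation."""
    err = (w * x + b) - y          # prediction error on this sample
    return w - lr * err * x, b - lr * err

w, b = 0.0, 0.0
# A stream of (x, y) observations arriving on the fly; true relation y = 2x.
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 200:
    w, b = sgd_step(w, b, x, y)
print(round(w, 2), round(b, 2))  # w, b approach the true parameters 2.0 and 0.0
```

The same update-per-sample structure underlies practical online trackers, where the model must adapt while the video is being processed.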



Prof. Michel Verleysen
Universite Catholique de Louvain, Belgium

Michel Verleysen was born in 1965 in Belgium. He received the M.S. and Ph.D. degrees in Electrical Engineering from the Université catholique de Louvain (Belgium) in 1987 and 1992, respectively. He was an invited professor at the Swiss E.P.F.L. (Ecole Polytechnique Fédérale de Lausanne, Switzerland) in 1992, at the Université d'Evry Val d'Essonne (France) in 2001, and at the Université Paris I – Panthéon-Sorbonne from 2002 to 2011. He is now a Full Professor at the Université catholique de Louvain, and Honorary Research Director of the Belgian F.N.R.S. (National Fund of Scientific Research). He is an editor-in-chief of the Neural Processing Letters journal (published by Springer), chairman of the annual ESANN conference (European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning), past associate editor of the IEEE Transactions on Neural Networks journal, and member of the editorial board and program committee of several journals and conferences on neural networks and learning. He was the chairman of the IEEE Computational Intelligence Society Benelux chapter (2008–2010), and a member of the executive board of the European Neural Networks Society (2005–2010). He is the author or co-author of more than 250 scientific papers in international journals and books, or communications to conferences with reviewing committees. He is the co-author of a scientific popularization book on artificial neural networks in the French series 'Que Sais-Je?', and of the "Nonlinear Dimensionality Reduction" book published by Springer in 2007. His research interests include machine learning, feature selection, artificial neural networks, self-organization, time-series forecasting, nonlinear statistics, adaptive signal processing, and high-dimensional data analysis.


Speech Title: Machine Learning for Real-World Moderate-Size Data

Abstract: Machine learning consists of building models from data. The machine learning community has recently paid a lot of attention to big data, i.e. data that are available more or less without limitation, or at least in amounts that make it possible to use new paradigms for model design, such as deep learning. However, there exist countless application contexts where the limited amount of data is a concern. An obvious example is patient-based data in healthcare: single databases measuring the same information in the same settings for more than a few hundred or a few thousand patients are rare, while the mathematical approximations underlying learning methods and algorithms would theoretically require millions or billions of samples. There is thus an increasing need for machine learning methods that can handle moderate-size databases of high-dimensional, heterogeneous data. High-dimensional means that the number of samples is "low" with respect to the dimension of the data space. Heterogeneous means here that the same features are not measured, available, or sufficiently accurate for all samples, making hypotheses on the availability and accuracy of features an important characteristic that has to be considered in models. This talk will introduce some areas of machine learning that are useful for answering these questions. It will cover fundamental aspects of feature selection, dimensionality reduction, and missing data imputation, and introduce challenges related to combining data from different sources.
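To make the missing-data setting concrete, here is a small illustrative sketch (not from the talk) of the simplest imputation strategy, column-mean imputation, on a toy table where None marks a feature that was unavailable for a sample; the data values are invented:

```python
# Mean imputation for heterogeneous data: features missing (None) for some
# samples are filled in with that feature's mean over the observed samples.

def mean_impute(rows):
    """Replace each None in a rectangular table with its column's mean."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if r[j] is None else r[j] for j in range(n_cols)]
            for r in rows]

# Toy patient-like records: rows are samples, columns are features,
# and None marks a measurement that was not taken for that sample.
data = [
    [1.0, None, 3.0],
    [2.0, 4.0, None],
    [3.0, 6.0, 9.0],
]
print(mean_impute(data))
```

Mean imputation ignores correlations between features; the more refined methods the abstract alludes to exploit exactly such structure, which matters most when samples are few and dimensions are many.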