Online Kernel Learning
Thursday October 16, 2008
Professor José Príncipe
University of Florida, Gainesville, USA
Jose C. Principe (M’83-SM’90-F’00) is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches advanced signal processing, machine learning, and artificial neural network (ANN) modeling. He is the BellSouth Professor and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL), www.cnel.ufl.edu. His primary area of interest is the processing of time-varying signals with adaptive neural models. The CNEL Lab has been studying signal and pattern recognition principles based on information-theoretic criteria (entropy and mutual information).
Dr. Principe is an IEEE Fellow. He is past Chair of the Technical Committee on Neural Networks of the IEEE Signal Processing Society, Past President of the International Neural Network Society, and past Editor-in-Chief of the IEEE Transactions on Biomedical Engineering. He is a member of the Advisory Board of the University of Florida Brain Institute. Dr. Principe has more than 400 publications and has directed 62 Ph.D. dissertations and 65 Master’s theses. He wrote an interactive electronic book entitled “Neural and Adaptive Systems: Fundamentals Through Simulation,” published by John Wiley and Sons, and more recently co-authored a book entitled “Brain Machine Interface Engineering.”
Ph.D. Student Weifeng Liu
University of Florida, Gainesville, USA
Weifeng (Aaron) Liu received his B.S. and M.S. degrees in Electrical Engineering from Shanghai Jiao Tong University in 2003 and 2005, respectively. In 2005, he joined the Computational NeuroEngineering Laboratory at the University of Florida. In 2008, he joined the Demand Forecasting Group at Amazon.com. His research areas include signal processing, adaptive filtering, and machine learning. He has 4 publications in refereed journals and 9 conference papers.
This tutorial will summarize recent advances in nonlinear adaptive filtering. Designing adaptive filters in Reproducing Kernel Hilbert Spaces (RKHS) bridges the established procedures of adaptive filter theory with kernel methods. The end result is a family of filters that are universal approximators in the input space, that have convex performance surfaces (no local minima), and that are online, i.e., they adapt with every new sample of the input. Moreover, we will show that, contrary to common belief, some of its members do not need explicit regularization; e.g., the Kernel Least Mean Squares (KLMS) algorithm is well posed in the sense of Hadamard. They are, however, growing structures, so special techniques must be included to curtail their growth. Although the tutorial will focus on adaptive filtering, similar techniques can be applied to the kernel algorithms of machine learning. List of topics:
- Brief Introduction to RKHS
- Kernel Least Mean Squares Algorithm
- Kernel Affine Projection Algorithms
- Extended Kernel Recursive Least Squares
- Active Learning for Reduced Representation
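To make the first two topics concrete, here is a minimal KLMS sketch in Python/NumPy: each incoming sample becomes a kernel center whose coefficient is the step size times the a-priori error, so the filter grows by one unit per sample. This is an illustrative sketch, not the tutorial's implementation; the Gaussian kernel, step size, and kernel width are assumed choices.

```python
import numpy as np

def gaussian_kernel(x, y, width=1.0):
    """Gaussian (RBF) kernel between two input vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-width * np.dot(diff, diff))

class KLMS:
    """Kernel Least Mean Squares: LMS adaptation in an RKHS.

    Every new sample is stored as a center with coefficient
    eta * e_n (step size times a-priori error); no explicit
    regularization term is added.
    """
    def __init__(self, step_size=0.5, kernel_width=1.0):
        self.eta = step_size
        self.width = kernel_width
        self.centers = []   # stored inputs u_i
        self.coeffs = []    # coefficients a_i = eta * e_i

    def predict(self, u):
        """Filter output: sum of weighted kernels over all centers."""
        return sum(a * gaussian_kernel(c, u, self.width)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, u, d):
        """One online step: predict, compute error, add a center."""
        e = d - self.predict(u)        # a-priori error
        self.centers.append(np.asarray(u, dtype=float))
        self.coeffs.append(self.eta * e)
        return e

# Toy usage: learn a static nonlinearity online.
rng = np.random.default_rng(0)
f = KLMS(step_size=0.5, kernel_width=2.0)
errors = []
for _ in range(200):
    u = rng.uniform(-1, 1, size=1)
    d = np.sin(3 * u[0])               # desired (nonlinear) response
    errors.append(f.update(u, d) ** 2)
```

Note the growth the abstract warns about: after 200 samples the filter holds 200 centers, which is what sparsification and active-learning criteria (the last topic above) aim to curtail.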
Jan Larsen received the M.Sc. and Ph.D. degrees in electrical engineering from the Technical University of Denmark (DTU) in 1989 and 1994, respectively. Dr. Larsen is currently Associate Professor of Digital Signal Processing at the Department of Informatics and Mathematical Modelling, DTU.
Jan Larsen has authored and co-authored more than 100 papers and book chapters in the areas of nonlinear statistical signal processing, machine learning, neural networks, and data mining, with applications to biomedicine, monitoring systems, multimedia, and web mining. He has participated in several national and international research programs and has served as a reviewer for many international journals, conferences, publishing companies, and research funding organizations. Further, he has taken part in conference organization, including the IEEE Workshop on Machine Learning for Signal Processing (formerly Neural Networks for Signal Processing), 1999-2009. He is past chair of the IEEE Machine Learning for Signal Processing Technical Committee of the IEEE Signal Processing Society (2005-2007) and chair of the IEEE Denmark Section's Signal Processing Chapter (2002-). He is a Senior Member of the Institute of Electrical and Electronics Engineers. Other professional committee participation includes: member of Technical Committee 14, Signal Analysis for Machine Intelligence, of the International Association for Pattern Recognition (2006-); steering committee member of the Audio Signal Processing Network in Denmark (2006-); editorial board member of Signal Processing, Elsevier (2006-2007); and guest editorships for IEEE Transactions on Neural Networks, the Journal of VLSI Signal Processing Systems, and Neurocomputing.
The tutorial will discuss the definition of cognitive systems and the possibilities for extending the current systems engineering paradigm to perceive, learn, reason, and interact robustly in open-ended, changing environments. I will also place cognitive systems in a historical perspective and discuss their relation to, and potential advantages over, current artificial intelligence architectures.
Machine learning models that learn from data and previous knowledge will play an increasingly important role at all levels of cognition, as large real-world digital environments (such as the Internet) are usually too complex to be modeled within a limited set of predefined specifications. There will inevitably be a need for robust decisions and behaviors in novel situations, including the handling of conflicts and ambiguities, based on the capability and knowledge of the artificial cognitive system.
Further, there is a need for the automatic extraction and organization of meaning, purpose, and intentions in interplay with the environment (machines, artifacts, and users), beyond current systems with built-in semantic representations and ontologies, in particular in terms of interaction with users (user-in-the-loop models) through user models and user-interaction models.
Research in cognitive information processing is inherently multidisciplinary and involves the natural sciences and technical disciplines, e.g., control, automation, and robotics research, physics, and computer science, as well as the humanities, such as the social sciences, cognitive psychology, and semantics. However, machine learning for signal processing plays a key role at all levels of the cognitive processes, and we expect this to be a new emerging trend in our community in the coming years.
Current examples of the use of machine learning for signal processing in cognitive systems include personalized information systems, sensor network systems, social dynamics systems and Web 2.0, and cognitive component analysis. I will use examples from our own research and link them to other research activities.
Tutorial road map
- Definition of cognitive systems and historical perspectives
- The role of machine learning in a new cognitive systems framework illustrated by specific examples
- Mini future workshop with the aim of producing five challenging and important research themes for machine learning and signal processing