
Introduction

The information processing systems embedded in devices today are extremely unreliable when operated under conditions that fall outside their narrow design specifications. For example, the best present-day speech recognition systems will fail if a different microphone is substituted, or if the speaker has a sore throat [Rabiner]. Biological systems, by contrast, are extremely robust to environmental change. Their success lies in their ability to accommodate changes at all processing levels, from low-level sensor modalities to high-level computational algorithms and architectures. For artificial systems to become truly ubiquitous, they will need to incorporate this ability to adapt quickly and robustly to change.

In these proceedings, we present some of our work on algorithms that allow a system to learn from prior experience and adapt its behavior accordingly. We demonstrate these algorithms on a small quadruped robot that we have constructed to perform various sensorimotor tasks. In particular, we show how the robot learns to track a novel object by rapidly changing the weights of a convolutional neural network that processes color, luminance, motion, and audio signals. By weighing the reliability of the different input channels, the system discovers the visual and auditory cues most salient to that object.
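To make the adaptation idea concrete, the following sketch shows a teacher-driven delta-rule update over a small set of cue responses. The actual system is a convolutional network operating on full image and audio streams; the variable names, learning rate, and update rule below are simplifying assumptions for illustration, not the implementation used on the robot.

```python
import numpy as np

def online_cue_weighting(channel_responses, teacher_signal, lr=0.1):
    """Toy online re-weighting of tracking cues (e.g. color, luminance,
    motion, audio). Each frame, the tracker's output is a weighted sum of
    the channel responses; the teacher supplies the correct value and a
    delta-rule update shifts weight toward channels that predict it well."""
    w = None
    for x, t in zip(channel_responses, teacher_signal):
        x = np.asarray(x, dtype=float)
        if w is None:
            w = np.ones_like(x) / x.size   # start with uniform cue weights
        y = w @ x                          # combined tracking response
        w += lr * (t - y) * x              # delta rule: reward predictive cues
        w = np.clip(w, 0.0, None)          # keep cue weights nonnegative
        w /= w.sum() + 1e-9                # renormalize for comparability
    return w
```

After training, the relative magnitudes of the weights indicate which cues the system found most reliable for the object being tracked.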

The robot is trained online, in real time, using supervisory signals from a teacher. We also explore how the robot can learn robust cues for object recognition without any supervision. This form of unsupervised learning is essential for adaptation in situations where there is little user interaction. We show how a simple algorithm can automatically learn to segment high-dimensional input data into features that correspond to functionally relevant parts [Palmer]. Our learning algorithm incorporates nonnegativity constraints similar to those found in biological neural networks, which enables it to learn parts as features by modeling positive coactivation in the inputs. Such a parts-based representation is valuable because it is robust to perturbations or occlusions that affect localized regions of the input space.
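The text does not spell out the update rules, but one standard way to realize nonnegativity constraints that yield parts-based features is nonnegative matrix factorization with multiplicative updates. The sketch below assumes that formulation; it is illustrative and not necessarily the exact algorithm run on the robot.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factor a nonnegative data matrix V (d x n) into nonnegative W (d x r)
    and H (r x n) via multiplicative updates. Columns of W act as parts-like
    basis features; columns of H are their (nonnegative) activations."""
    d, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((d, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity of W and H.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical usage: V holds nonnegative input vectors (e.g. pixel
# intensities), one observation per column; the learned columns of W
# tend to cover localized, functionally meaningful parts of the input.
V = np.abs(np.random.default_rng(1).random((64, 500)))
W, H = nmf(V, r=10)
```

Because only additive, nonnegative combinations of basis features can reconstruct the input, features that never coactivate positively are not merged, which is what pushes the representation toward localized parts.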


