Knowledge Representation

Robot programming traditionally involves configuring a robot for a specific skill: mapping sensory stimuli through transfer functions to produce inputs to effectors appropriate to the task at hand. Traditional programming techniques, however, make it difficult to transfer and re-use robot skills from one task to another. This is due to the lack of a knowledge representation framework that allows domain-specific information learned in one context to be transferred easily to related tasks and contexts. We address this problem by factoring closed-loop control programs into declarative and procedural components. The declarative structure of a program can be transferred across contexts because it captures only abstract information about the objectives required to meet a behavioral goal; the procedural structure supports generalization by parameterizing the declarative objectives according to environmental context.
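The declarative/procedural factoring described above can be illustrated with a minimal sketch. The skill, objective names, and context keys below are hypothetical, chosen only to show the separation, not drawn from the original work:

```python
# Declarative component: an abstract, context-free sequence of
# objectives that defines the skill (illustrative names).
PICK_UP = ["locate", "reach", "grasp"]

# Procedural component: binds each abstract objective to concrete
# controller parameters drawn from the current environmental context.
def bind(objective, context):
    bindings = {
        "locate": {"sensor": context["camera"]},
        "reach":  {"goal": context["object_pose"]},
        "grasp":  {"force": context["grip_force"]},
    }
    return bindings[objective]

def instantiate(skill, context):
    """Transfer the same declarative skill to a new context by
    re-binding its objectives procedurally."""
    return [(obj, bind(obj, context)) for obj in skill]

# The same declarative skill, instantiated for one particular scene.
ctx = {"camera": "cam0", "object_pose": (0.3, 0.1, 0.0), "grip_force": 5.0}
plan = instantiate(PICK_UP, ctx)
```

Transferring the skill to a new scene only requires supplying a new `context`; the declarative sequence itself is untouched.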

Our computational representation is based on a framework called the control basis, which uses low-level controllers and their dynamics to learn robot-specific knowledge structures. Primitive actions in the control basis are constructed by combining a potential function with a feedback signal and a set of motor variables. Furthermore, the error dynamics that arise when a controller interacts with the environment provide a natural discrete abstraction of the underlying continuous state space. This permits a compact representation of the action space in which behavior is described in terms of the dynamics of the controlled system, a property we exploit to create abstractions of control actions. We also show how this state representation, combined with a reward function that rewards discovering "controllable interactions", can lead the robot to learn increasingly complex and hierarchical behavior.
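As a rough illustration of such a primitive, the sketch below pairs a potential function with a feedback signal and descends the potential's gradient to produce motor commands; a convergence predicate stands in for the discrete abstraction of the error dynamics. The class interface, the gradient-descent form, and the toy reaching example are all assumptions made for illustration:

```python
import numpy as np

class ControlPrimitive:
    """Sketch of a control-basis primitive: gradient descent on a
    potential phi evaluated on a feedback signal sigma, producing
    motor commands (hypothetical API)."""

    def __init__(self, phi, grad_phi, sigma, step=0.1, eps=1e-3):
        self.phi = phi            # potential function
        self.grad_phi = grad_phi  # its gradient
        self.sigma = sigma        # feedback signal: state -> error space
        self.step = step          # descent step size
        self.eps = eps            # quiescence threshold

    def update(self, state):
        """One control step: descend the potential's gradient."""
        return -self.step * self.grad_phi(self.sigma(state))

    def converged(self, state):
        """Discrete predicate abstracting the error dynamics:
        is the controller quiescent at this state?"""
        return self.phi(self.sigma(state)) < self.eps

# Toy example: a quadratic potential drives the state to a goal.
goal = np.array([1.0, 2.0])
reach = ControlPrimitive(
    phi=lambda x: 0.5 * float(np.sum((x - goal) ** 2)),
    grad_phi=lambda x: x - goal,
    sigma=lambda s: s,  # identity feedback for this toy example
    step=0.5,
)
s = np.zeros(2)
for _ in range(50):
    s = s + reach.update(s)
```

The boolean `converged` predicate is what makes the continuous error dynamics usable as a discrete state feature for higher-level learning.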


Manipulation Planning

Autonomous robots require complex behavior to cope with unstructured environments. To meet these expectations, a robot must address a suite of problems associated with long-term knowledge acquisition, representation, and execution in the presence of partial information. In this work, we address these issues by acquiring broad, domain-general skills using an intrinsically motivated reward function. We show how these skills can be represented compactly and composed hierarchically to obtain complex manipulation skills. We further present a Bayesian model that uses the learned skills to characterize objects in the world in terms of the actions they afford. We argue that this knowledge representation allows a robot both to predict the dynamics of objects in the world and to recognize them.
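One simple way to realize an affordance-based object model is a naive-Bayes classifier over action outcomes; the sketch below shows the idea with invented object classes, actions, and success probabilities (none of these numbers come from the original work):

```python
import math

# Hypothetical affordance model: each object class is characterized
# by per-action success probabilities (the actions it "affords").
classes = {
    "ball":  {"push": 0.9, "grasp": 0.6, "stack": 0.1},
    "block": {"push": 0.7, "grasp": 0.8, "stack": 0.9},
}
prior = {"ball": 0.5, "block": 0.5}

def posterior(outcomes):
    """Naive-Bayes posterior over object classes given observed
    (action, success) outcomes -- recognition from interaction."""
    logp = {c: math.log(prior[c]) for c in classes}
    for action, success in outcomes:
        for c in classes:
            p = classes[c][action]
            logp[c] += math.log(p if success else 1.0 - p)
    m = max(logp.values())
    w = {c: math.exp(v - m) for c, v in logp.items()}
    z = sum(w.values())
    return {c: w[c] / z for c in w}

def predict(outcomes, action):
    """Predict the success probability of a future action by
    marginalizing over the class posterior -- the same model
    supports both prediction and recognition."""
    post = posterior(outcomes)
    return sum(post[c] * classes[c][action] for c in classes)
```

Observing that an object was successfully stacked, for example, shifts the posterior toward "block" and correspondingly raises the predicted success probability of grasping it.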