
Learning Physical Schemas

With the foundations of the previous sections we may now describe how a robotic agent may learn physical schemas. Whereas the walking example of Section 1.2.4 has been implemented, the following example is hypothetical. Briefly, the problem is this: once an agent has learned extended behaviors, such as walking or reaching and grasping, it is possible to learn abstract representations of those behaviors called physical schemas. The problem, then, is to go from robust ground instances of behavior to abstract schematic representations of behavior.

We can view behaviors as generating multivariate time series of descriptive variables. The task is to identify features in these time series that establish roles in the ongoing pattern of interaction. For example, Figure 5 is a semantic characterization of CONTAINMENT. Any time series of descriptive variables that matches Figure 5 is potentially an instance of CONTAINMENT. This abstract schema may carry additional information not represented in Figure 5. For example, the probabilities of the agents' actions in Figure 5 are also important: in a containment relationship we expect the probability of Agent2's restoring action to be close to 1, that is, when Agent1 ``escapes,'' Agent2 re-establishes the containment with high probability. Moreover, we may choose to require that Agent1 and Agent2 constitute disjoint sets of resources. Our claim is that the core semantic characterization of a physical schema is a trajectory through the predicate space, as shown in Figure 5.
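To make the representation concrete, the following Python sketch encodes a schema as a probabilistic state-transition structure over predicate-space states. Since Figure 5 is not reproduced here, the state names, actions, and probabilities below are hypothetical illustrations, not the actual contents of the figure.

# A minimal sketch of a physical schema as a probabilistic
# state-transition diagram over predicate-space states. All names
# and numbers are illustrative assumptions, not Figure 5 itself.

from dataclasses import dataclass, field

@dataclass
class Schema:
    """States in predicate space plus probabilistic transitions
    labeled by (agent, action) pairs."""
    states: set = field(default_factory=set)
    # transitions[(state, (agent, action))] -> list of (next_state, prob)
    transitions: dict = field(default_factory=dict)

    def add(self, state, agent_action, next_state, prob):
        self.states.update({state, next_state})
        self.transitions.setdefault((state, agent_action), []).append(
            (next_state, prob))

# Hypothetical CONTAINMENT schema: Agent2 contains Agent1.
containment = Schema()
containment.add("contained", ("Agent1", "escape"), "escaped", 0.3)
containment.add("contained", ("Agent1", "stay"), "contained", 0.7)
# The defining regularity: once Agent1 escapes, Agent2 restores
# the containment with high probability.
containment.add("escaped", ("Agent2", "restore"), "contained", 0.95)
containment.add("escaped", ("Agent2", "ignore"), "escaped", 0.05)

The transition probabilities attached to the escaped state capture the expectation noted above: Agent2's restoring action carries nearly all of the probability mass.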

Several methods could be adapted to learn schemas like CONTAINMENT from time series data. The representation of CONTAINMENT as a state-transition diagram suggests methods for learning Markov models and hidden Markov models. If we view physical schemas as essentially grammatical, that is, as structured arrangements of tokens over time, then we might apply methods for learning probabilistic grammars (e.g., [35]). In any case, a learning method must be able to lift patterns out of time series of potentially huge state descriptions, where most of the state information is irrelevant.
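As an illustration of the simplest of these candidates, the following Python sketch estimates a first-order Markov model from a symbolic time series by counting transitions. The token stream is a hypothetical discretization of the descriptive variables; a real state description would be far larger and mostly irrelevant to the schema.

# A sketch of the simplest candidate method: estimate a first-order
# Markov model, P(next state | current state), by counting transitions
# in a symbolic time series. The trace below is invented.

from collections import Counter, defaultdict

def learn_markov_model(tokens):
    """Estimate P(next | current) from a sequence of state tokens."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for cur, c in counts.items()}

# Hypothetical run of a containment interaction.
trace = ["contained", "contained", "escaped", "contained",
         "contained", "escaped", "contained", "contained"]
print(learn_markov_model(trace))
# {'contained': {'contained': 0.6, 'escaped': 0.4},
#  'escaped': {'contained': 1.0}}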

The Multi-stream Dependency Detection (MSDD) algorithm efficiently finds patterns in time series of large state vectors [81]. For example, we currently use MSDD to find patterns in the behavior of computer networks [65, 80]. Although MSDD will require some modifications to learn physical schemas, it can lift regular patterns, like the one in Figure 5, out of time series. We must develop methods to postprocess the dependencies, gathering up and fitting together rules that describe pieces of a schema like the CONTAINMENT model in Figure 5. MSDD in its current form will not suffice, but even in its current form it can find predictive patterns in time series and abstract them away from background noise variables. Thus, if a physical schema is characterized by a predictive pattern that relates agent actions to predicates, as in the CONTAINMENT schema, then MSDD or something like it will find this characterization.
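The following Python sketch illustrates, in much-simplified form, the kind of dependency MSDD searches for: whether a precursor value in one stream predicts a successor value in another stream at a fixed lag, scored with the G (log-likelihood ratio) statistic used to rank dependencies. It is not the algorithm of [81], which conducts a systematic search over multitoken patterns with wildcards; the streams and values here are invented for illustration.

# A much-simplified sketch of an MSDD-style dependency test: does a
# precursor value in one stream predict a successor value in another
# stream at a fixed lag? We score one candidate rule with the
# G (log-likelihood ratio) statistic over a 2x2 contingency table.

import math

def g_score(streams, p_stream, p_val, s_stream, s_val, lag=1):
    """G statistic for: p_stream == p_val at t predicts
    s_stream == s_val at t + lag. Higher G = stronger dependency."""
    n1 = n2 = n3 = n4 = 0
    for t in range(len(streams[p_stream]) - lag):
        p = streams[p_stream][t] == p_val
        s = streams[s_stream][t + lag] == s_val
        if p and s: n1 += 1
        elif p: n2 += 1
        elif s: n3 += 1
        else: n4 += 1
    total = n1 + n2 + n3 + n4
    g = 0.0
    for obs, row, col in [(n1, n1+n2, n1+n3), (n2, n1+n2, n2+n4),
                          (n3, n3+n4, n1+n3), (n4, n3+n4, n2+n4)]:
        exp = row * col / total  # expected count under independence
        if obs > 0 and exp > 0:
            g += 2 * obs * math.log(obs / exp)
    return g

# Hypothetical streams: Agent1's action and a containment predicate.
streams = {
    "agent1": ["stay", "escape", "stay", "escape",
               "stay", "stay", "escape", "stay"],
    "contained": [1, 1, 0, 1, 0, 1, 1, 0],
}
# Does "escape" at time t predict loss of containment at t + 1?
print(g_score(streams, "agent1", "escape", "contained", 0, lag=1))

In the actual system, candidate rules like this one would be generated and evaluated over streams of predicate values and agent actions, and the surviving high-scoring rules postprocessed into schema fragments as described above.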


