Dexter is a platform for studying bimanual dexterity, designed to help us investigate the acquisition of concepts and cognitive representations through interaction with the world. We address central issues in cognitive science and artificial intelligence: the origins of conceptual systems, the role of native structure, computational complexity, and knowledge representation. The goal of this research is to advance computational accounts of sensorimotor and cognitive development in a way that leads to new theories for controlling intelligent robots and provides a basis for shared meaning between humans and machines. Research is underway on mechanisms for learning hierarchical control knowledge (categories of objects, activities, tasks, and situations) through continuous interaction with the environment.
Dexter consists of two Whole Arm Manipulators (WAMs) from Barrett Technology, two Barrett Hands, and a BiSight stereo head. The WAMs are 7-DOF manipulators with roughly anthropomorphic geometry. Each degree of freedom is actuated via braided steel tendons and a low-ratio transmission. This configuration yields excellent velocity, acceleration, and back-drivability. The last of these properties means that contacts anywhere on the arm are detectable as actuator "effort," creating the opportunity to study "whole-body" grasping.
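The idea of sensing contact through actuator "effort" can be sketched as a torque-residual test: compare the torque a dynamics model expects at each joint against the torque actually measured, and flag a contact when the residual is large. The function, threshold, and torque values below are illustrative assumptions, not Dexter's actual controller.

```python
# Hypothetical sketch of whole-arm contact detection via actuator effort.
# A back-drivable transmission makes external contact show up as a
# discrepancy between model-predicted and measured joint torques.

CONTACT_THRESHOLD = 0.5  # Nm residual per joint; illustrative value


def detect_contact(expected_torques, measured_torques,
                   threshold=CONTACT_THRESHOLD):
    """Flag joints whose torque residual suggests an unmodeled contact."""
    return [abs(measured - expected) > threshold
            for expected, measured in zip(expected_torques, measured_torques)]


# Free-space motion: residuals stay small, so no contact is flagged.
free = detect_contact([1.0, 0.2, -0.3], [1.1, 0.25, -0.35])
# → [False, False, False]

# Contact near joint 2: only that joint shows a large residual.
hit = detect_contact([1.0, 0.2, -0.3], [1.1, 1.4, -0.35])
# → [False, True, False]
```

In practice the residual would be computed against a full inverse-dynamics model and filtered over time, but the thresholding idea is the same.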
Each three-fingered Barrett Hand has 4 DOF and integrated tactile load cells (ATI) on each fingertip. Three VME cages host the computing system that controls the integrated platform.
The BiSight head has 4 mechanical DOF (head pan-tilt and independent vergence), 3 optical DOF (focus, iris, and zoom), and an integrated binaural acoustic sensor consisting of four microphones for localizing and interpreting acoustic sources. Dexter also uses a Microsoft Kinect to gather 3D point clouds.
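One standard way a microphone array localizes a sound source is via the time difference of arrival (TDOA) between a microphone pair: under a far-field model, the path difference is the mic spacing times the sine of the azimuth. The sketch below illustrates that geometry only; the microphone spacing and the function name are assumptions, not details of Dexter's acoustic sensor.

```python
# Hedged sketch: far-field azimuth estimation from one microphone pair
# using the time difference of arrival (TDOA). Constants are illustrative.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
MIC_SPACING = 0.2       # m between the pair (assumed value)


def azimuth_from_tdoa(tdoa_seconds):
    """Return source azimuth in radians from one mic pair's TDOA.

    Far-field model: path difference = spacing * sin(azimuth), so
    sin(azimuth) = c * tdoa / spacing.
    """
    s = SPEED_OF_SOUND * tdoa_seconds / MIC_SPACING
    s = max(-1.0, min(1.0, s))  # clamp numerical noise into asin's domain
    return math.asin(s)


# Zero delay: the source is broadside to the pair (azimuth 0).
broadside = azimuth_from_tdoa(0.0)
# Maximum delay (spacing / c): the source lies along the pair's axis (pi/2).
endfire = azimuth_from_tdoa(MIC_SPACING / SPEED_OF_SOUND)
```

With four microphones, azimuth estimates from multiple pairs can be intersected to resolve direction in more than one plane; a single pair alone leaves a front-back ambiguity.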