Learning Fast and Precise Pixel-to-Torque Control: A Platform for Reproducible Research of Learning on Hardware
Steffen Bleher, Steve Heim, Sebastian Trimpe
In the field, robots often need to operate in unknown and unstructured environments, where accurate sensing and state estimation (SE) become a major challenge. Cameras have been used with great success for mapping and planning in such environments, as well as for complex but quasi-static tasks such as grasping, but they are rarely integrated into the control loop for unstable systems. Learning pixel-to-torque control promises to enable robots to flexibly handle a wider variety of tasks. While reinforcement learning (RL) offers a solution in principle, learning pixel-to-torque control for unstable systems that require precise and high-bandwidth control still presents a significant practical challenge, and best practices have not yet been established. Part of the reason is that many of the most promising tools, such as deep neural networks (DNNs), are opaque: the cause of success on one system is difficult to interpret and generalize.