Robotics Colloquium Schedule

This page is the schedule of speakers. Here is the robotics colloquium event page (with abstracts).

<html xmlns="http://www.w3.org/1999/xhtml"> <head>

 <style type="text/css">

/*<![CDATA[*/

 .lineitem {border-style: none none solid none; border-color: black; border-width: 1px}
 /*]]>*/
 </style>

</head>


<body>

 Robotics Colloquium 

<p> Talks by invited and local researchers on all aspects of control theory, stochastic estimation, machine learning, and mechanical design as applied to dynamical systems in robotics. Researchers working at the intersection of these areas with biology and neuroscience will also be hosted. The colloquium is held Fridays at 2:30 pm in CSE 305.</p>

Spring 2013
04/05/13 Dieter Fox PechaKucha 20x20 for Robotics <a href="#aFox">abstract</a>
04/12/13 No talk <a href="#aTBD"></a>
04/19/13 tba <a href="#aTBD"></a>
04/26/13 <a href="http://www.cs.cmu.edu/~mason/">Matt Mason</a> <a href="#aMason"></a>
05/03/13 tba <a href="#aTBD"></a>
05/10/13 tba <a href="#aTBD"></a>
05/17/13 tba <a href="#aTBD"></a>
05/24/13 <a href="http://haptics.seas.upenn.edu/">Katherine Kuchenbecker</a> <a href="#aKuch"></a>
05/31/13 <a href="http://www.cs.berkeley.edu/~pabbeel/">Pieter Abbeel</a> <a href="#aAbbeel"></a>
06/08/13 <a href="http://mit.edu/aeroastro/people/roy.html">Nick Roy</a> <a href="#aRoy"></a>
Winter 2013
01/18/13 Robotics and State Estimation Lab Overview of RSE Lab Research
01/25/13 Joshua Smith Robotics Research in the Sensor Systems Group <a href="#a20120327">abstract</a>
02/01/13 no talk
02/08/13 Gaurav Sukhatme Persistent Autonomy at Sea <a href="#a20120329">abstract</a>
02/15/13 Jiri Najemnik Sequence Optimization in Engineering, Artificial Intelligence and Biology <a href="#a20120328">abstract</a>
02/22/13 no talk
03/01/13 Richard Newcombe Beyond Point Clouds: Adventures in Real-time Dense SLAM <a href="#a20120331">abstract</a>
03/08/13 Tom Erez Model-Based Optimization for Intelligent Robot Control <a href="#a20120301">abstract</a>
03/15/13 Byron Boots Spectral Approaches to Learning Dynamical Systems <a href="#a20120401">abstract</a>
Spring 2012
3/30/12 <a href="http://www.cc.gatech.edu/~athomaz/">Andrea Thomaz</a> Designing Learning Interactions for Robots <a href="#a20120330">abstract</a>
4/6/12 <a href="http://mplab.ucsd.edu/">Javier Movellan</a> Towards a New Science of Learning <a href="#a20120406">abstract</a>
4/13/12 <a href="http://www.cs.washington.edu/homes/todorov/">Emanuel Todorov</a> Automatic Synthesis of Complex Behaviors with Optimal Control <a href="#a20120413">abstract</a>
4/20/12 <a href="http://www-all.cs.umass.edu/~barto/">Andrew Barto</a> Autonomous Robot Acquisition of Transferable Skills <a href="#a20120420">abstract</a>
4/27/12 <a href="http://www.cs.washington.edu/homes/fox/">Dieter Fox</a> Grounding Natural Language in Robot Control Systems <a href="#a20120427">abstract</a>
5/4/12 <a href="http://charm.stanford.edu/Main/AllisonOkamura/">Allison Okamura</a> Robot-Assisted Needle Steering <a href="#a20120504">abstract</a>
5/11/12 <a href="http://www.ee.washington.edu/faculty/hannaford/">Blake Hannaford</a> Click the Scalpel -- Better Patient Outcomes by Advancing Robotics in Surgery <a href="#a20120511">abstract</a>
5/18/12 no talk
5/25/12 <a href="http://www.neuromech.northwestern.edu/">Malcolm MacIver</a> Robotic Electrolocation <a href="#a20120525">abstract</a>
6/1/12 <a href="http://www.ri.cmu.edu/person.html?person_id=689">Drew Bagnell</a> Imitation Learning, Inverse Optimal Control and Purposeful Prediction <a href="#a20120601">abstract</a>


Detailed schedule
04/05/13 Dieter Fox <a name="aFox" id="aFox">PechaKucha 20x20 for Robotics</a>

<p> PechaKucha 20x20 is a new approach to giving presentations. From the <a href="http://www.pechakucha.org/faq">FAQ</a>: "PechaKucha 20x20 is a simple presentation format where you show 20 images, each for 20 seconds. The images advance automatically and you talk along to the images." While the presentation format was originally developed for architecture presentations, it has been successfully applied to fields as diverse as art, cooking, design, and journalism. This talk will give an overview of the format and some examples, in the interest of stimulating discussion about the role of such a format in technology.</p>

04/19/13 tba <a name="aTBA" id="aTBA"></a>
04/26/13 Matt Mason <a name="aMason" id="aMason"></a>
05/03/13 tba <a name="aTBA" id="aTBA"></a>
05/10/13 tba <a name="aTBA" id="aTBA"></a>
05/17/13 tba <a name="aTBA" id="aTBA"></a>
05/24/13 Katherine Kuchenbecker <a name="aKuch" id="aKuch"></a>
05/31/13 Pieter Abbeel <a name="aAbbeel" id="aAbbeel"></a>
06/08/13 Nick Roy <a name="aRoy" id="aRoy"></a>


Past quarters
01/25/13 Joshua Smith <a name="a20120327" id="a20120327">Robotics Research in the Sensor Systems Group</a>

<p> After providing a brief overview of the Sensor Systems group, I will present our recent work in robotics. I will introduce pretouch sensing, our term for in-hand sensing that is shorter range than vision but longer range than tactile sensing. I will review Electric Field Pretouch sensing, introduce Seashell Effect Pretouch, and discuss strategies for using pretouch sensing in the context of robotic manipulation. As an active sensing modality, pretouch requires a choice of "next view." Since the robot hand is used for both sensing and actuation, pretouch-enabled grasping also requires us to consider an exploration/execution tradeoff. Finally, I will outline several new robotics projects that are underway.</p>
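<p> The "next view" choice above is easy to illustrate. The sketch below is my own toy example (not the Sensor Systems group's method): it keeps a Bernoulli occupancy belief over a handful of probe locations, probes wherever the belief entropy is highest, and switches from exploration to grasp execution once every belief is confident. The beliefs, the 90%-accurate sensor model, and the confidence threshold are all invented for illustration.</p>

<pre>
# Toy "next view" chooser for pretouch-style active sensing (illustration only).
import numpy as np

rng = np.random.default_rng(0)
truth = np.array([1, 0, 1, 1, 0])         # hidden occupancy near the fingers
belief = np.full(5, 0.5)                  # P(occupied) for each probe location

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

probes = 0
while entropy(belief).max() > 0.2 and probes < 15:
    i = int(np.argmax(entropy(belief)))   # next view: the most uncertain cell
    z = truth[i] if rng.random() < 0.9 else 1 - truth[i]  # noisy pretouch reading
    l1, l0 = (0.9, 0.1) if z == 1 else (0.1, 0.9)         # sensor likelihoods
    belief[i] = l1 * belief[i] / (l1 * belief[i] + l0 * (1 - belief[i]))  # Bayes
    probes += 1
print(f"executing grasp after {probes} probes; beliefs: {np.round(belief, 2)}")
</pre>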
02/08/13 Gaurav Sukhatme <a name="a20120329" id="a20120329">Persistent Autonomy at Sea</a>

<p> Underwater robotics is undergoing a transformation. Recent advances in AI and machine learning are enabling a new generation of underwater robots to make intelligent decisions (where to sample? how to navigate?) by reasoning about their environment (what is the shipping and water forecast?). At USC, we are engaged in a long-term effort to develop persistent, autonomous underwater explorer robots. In this talk, I will give an overview of some of our recent results, focusing on two problems in adaptive sampling: underwater change detection and biological sampling. I will also present our recent work on hazard avoidance, allowing robots to operate in regions where there is ship traffic.</p>

<p> Bio: Gaurav S. Sukhatme is a Professor of Computer Science (joint appointment in Electrical Engineering) at the University of Southern California (USC). He is currently serving as the Chairman of the Computer Science department. His recent research is in networked robots. Dr. Sukhatme has served as PI on numerous federal grants. He is a Fellow of the IEEE and a recipient of the NSF CAREER award and the Okawa Foundation research award. He is one of the founders of the RSS conference and has served as program chair of all three leading robotics conferences (ICRA, IROS, and RSS). He is the Editor-in-Chief of the Springer journal Autonomous Robots.</p>
02/15/13 Jiri Najemnik <a name="a20120328" id="a20120328">Sequence Optimization in Engineering, Artificial Intelligence and Biology</a>

<p> Part 1: Linear equivalent of dynamic programming. We show that the Bellman equation of dynamic programming can be replaced by an equally simple linear equation for the so-called optimal ranking function, which encodes the optimal sequence via its greedy maximization. This optimal ranking function represents a Gibbs distribution that minimizes the expected sequence cost given the entropy level (set by a temperature parameter). Each temperature level gives rise to a linearly computable optimal ranking function.</p>

<p> Part 2: Predictive state representation with an entropy-level constraint. Building on Part 1, we show that if one specifies the entropy level of the input's stochastic process, then its Bayesian inference for the purposes of optimal learning can be simplified greatly. We conceptualize an idealized nervous system that is an online input-output transformer of binary vectors representing the neurons' firing states, and we ask how one would adjust the input-output mapping optimally to minimize the expected cost. We will argue that predictive state representations could be employed by a nervous system.</p>

<p> Part 3: Evidence of optimal predictive control of human eyes. We present evidence of optimal-like predictive control of human eyes in visual search for a small camouflaged target. To a striking degree, human searchers behave as if maintaining a map of beliefs (represented as probabilities) about the target location, updating their beliefs with visual data obtained on each fixation using Bayes' rule, and moving their eyes online in order to maximize the expected information gain. Some of these results were published in Nature.</p>
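<p> Part 1 is easiest to see on a toy problem. The sketch below is my own illustration in the spirit of linearly-solvable optimal control (the chain, the costs, and the temperature are invented): at temperature lam, the ranking function z(s) = exp(-v(s)/lam) satisfies a linear fixed-point equation, so the nonlinear max of the Bellman equation disappears, and greedy maximization of z recovers the optimal sequence.</p>

<pre>
# Toy linear equivalent of dynamic programming (illustration only).
import numpy as np

n = 5                                    # chain of states; state 4 is the goal
c = np.array([1.0, 1.0, 1.0, 1.0, 0.0]) # per-step cost; the goal is free
P = np.zeros((n, n))                     # "passive" random-walk dynamics
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[n - 1, n - 1] = 1.0                    # the goal is absorbing

lam = 0.5                                # temperature (sets the entropy level)
G = np.diag(np.exp(-c / lam))            # costs enter multiplicatively
z = np.ones(n)
for _ in range(500):                     # linear fixed point: z = G P z
    z = G @ (P @ z)
    z[-1] = 1.0                          # boundary condition at the goal
print("ranking z:", np.round(z, 4))
print("value v:  ", np.round(-lam * np.log(z), 3))
</pre>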
03/01/13 Richard Newcombe <a name="a20120331" id="a20120331">Beyond Point Clouds: Adventures in Real-time Dense SLAM</a>

<p> One clear direction for the near future of robotics makes use of the ability to build and keep up to date geometric models of the environment. In this talk I will present an overview of my work in monocular real-time dense surface SLAM (simultaneous localisation and mapping), which aims to provide such geometric models using only a single passive colour or depth camera, without further specific hardware or infrastructure requirements. In contrast to previous SLAM systems, which utilised sparser point cloud scene representations, the systems I will present, including KinectFusion and DTAM, simultaneously estimate a camera pose together with a full dense surface estimate of the scene. Such dense surface mapping results in physically predictive models that are more useful for geometry-aware augmented reality and robotics applications. Crucially, representing the scene using surfaces enables elegant dense image tracking techniques to be used in estimating the camera pose, resulting in robustness to high-speed agile camera motion. I'll provide a real-time demonstration of these techniques, which are useful not only in robust camera tracking but also in object tracking in general. Finally, I'll outline our latest work in moving beyond surface estimation to incorporating objects into the dense SLAM pipeline.</p>
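<p> To make the dense-mapping idea concrete, here is a toy version of the fusion step at the core of KinectFusion-style systems (my illustration, not the actual pipeline): each voxel stores a truncated signed distance and a weight, and every depth frame folds in as a running weighted average, so noise cancels and the zero crossing gives the surface. The grid size, truncation band, and synthetic sphere observations are invented.</p>

<pre>
# Toy truncated-signed-distance (TSDF) fusion (illustration only).
import numpy as np

N, trunc = 64, 0.1                       # grid resolution, truncation band (m)
tsdf = np.zeros((N, N, N))
weight = np.zeros((N, N, N))

def fuse(sdf_obs, w_new=1.0):
    """Fold one frame's truncated SDF observation into the global model."""
    global tsdf, weight
    sdf_obs = np.clip(sdf_obs, -trunc, trunc)
    near = np.abs(sdf_obs) < trunc       # only update voxels near the surface
    num = weight * tsdf + w_new * sdf_obs
    den = weight + w_new                 # always >= 1, so the division is safe
    tsdf = np.where(near, num / den, tsdf)
    weight = np.where(near, den, weight)

x = np.linspace(0.0, 1.0, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
true_sdf = np.sqrt((X - .5)**2 + (Y - .5)**2 + (Z - .5)**2) - 0.3  # a sphere
for _ in range(10):                      # ten noisy synthetic depth frames
    fuse(true_sdf + np.random.normal(0.0, 0.02, true_sdf.shape))
seen = weight > 0
err = np.abs(tsdf - np.clip(true_sdf, -trunc, trunc))[seen].mean()
print(f"mean TSDF error near surface after fusion: {err:.4f} m")
</pre>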
03/08/13 Tom Erez <a name="a20120301" id="a20120301">Model-Based Optimization for Intelligent Robot Control</a>

<p> Science-fiction robots can perform any task humans do and more. In reality, however, today's articulated robots are disappointingly limited in their motor skills. Current planning and control algorithms cannot provide the robot with the capacity for intelligent motor behavior; instead, control engineers must manually specify the motions of every task. This approach results in jerky motions (popularly stereotyped as “moving like a robot”) that cannot cope with unexpected changes. I study control methods that automate the job of the controls engineer. I give the robot only a cost function that encodes the task in high-level terms: move forward, remain upright, bring an object, etc. The robot uses a model of itself and its surroundings to optimize its behavior, finding a solution that minimizes the future cost. This optimization-based approach can be applied to different problems, and in every case the robot alone decides how to solve the task. Re-optimizing in real time allows the robot to deal with unexpected deviations from the plan, generating robust and creative behavior that adapts to modeling errors and dynamic environments. In this talk, I will present the theoretic and algorithmic aspects needed to control articulated robots using model-based optimization. I will discuss how machine learning can be used to create better controllers, and share some of my work on trajectory optimization.</p>

<p> A preview of some of the work discussed in this talk can be seen here: https://dl.dropbox.com/u/57029/MedleyJan13.mp4 [a lower-quality version is also available on YouTube: http://www.youtube.com/watch?v=t4JdSklL8w0 ]</p>
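<p> The re-optimization loop described above has a simple skeleton. The sketch below is my own toy model-predictive control example (not the speaker's controller): at every timestep it optimizes a short action sequence against a high-level cost ("reach the target, stay slow, spend little effort"), executes only the first action, and re-optimizes. The double-integrator dynamics, horizon, and cost weights are invented.</p>

<pre>
# Toy receding-horizon (MPC) control loop (illustration only).
import numpy as np
from scipy.optimize import minimize

dt, H = 0.05, 20                          # control timestep, planning horizon

def step(x, u):
    """Toy double integrator: x = [position, velocity], u = acceleration."""
    return x + dt * np.array([x[1], u])

def cost(u_seq, x0):
    """High-level task cost: reach 1.0, stay slow, use little effort."""
    x, total = np.array(x0), 0.0
    for u in u_seq:
        x = step(x, u)
        total += 10 * (x[0] - 1.0) ** 2 + 0.1 * x[1] ** 2 + 0.01 * u ** 2
    return total

x, u_warm = np.array([0.0, 0.0]), np.zeros(H)
for _ in range(60):                       # the online re-optimization loop
    res = minimize(cost, u_warm, args=(x,), method="L-BFGS-B")
    x = step(x, res.x[0])                 # execute only the first action
    u_warm = np.roll(res.x, -1)           # warm-start the next solve
print("final state (pos, vel):", np.round(x, 3))
</pre>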
03/15/13 Byron Boots <a name="a20120401" id="a20120401">Spectral Approaches to Learning Dynamical Systems</a>

<p> If we hope to build an intelligent agent, we have to solve (at least!) the following problem: by watching an incoming stream of sensor data, hypothesize an external world model which explains that data. For this purpose, an appealing model representation is a dynamical system. Sometimes we can use extensive domain knowledge to write down a dynamical system; however, for many domains, specifying a model by hand can be a time-consuming process. This motivates an alternative approach: *learning* a dynamical system directly from sensor data. A popular assumption is that observations are generated from a hidden sequence of latent variables, but learning such a model directly from sensor data can be tricky. To discover the right latent state representation and model parameters, we must solve difficult temporal and structural credit assignment problems, often leading to a search space with a host of (bad) local optima. In this talk, I will present a very different approach. I will discuss how to model a dynamical system's belief space as a set of *predictions* of observable quantities. These so-called Predictive State Representations (PSRs) are very expressive and subsume popular latent variable models including Kalman filters and input-output hidden Markov models. One of the primary advantages of PSRs over latent variable formulations of dynamical systems is that model parameters can be estimated directly from moments of observed data using a recently discovered class of spectral learning algorithms. Unlike the popular EM algorithm, spectral learning algorithms are statistically consistent, computationally efficient, and easy to implement using established matrix-algebra techniques. The result is a powerful framework for learning dynamical system models directly from data.</p>
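<p> The moment-based flavor of spectral learning can be shown in a few lines. Below is my own toy illustration (not the speaker's algorithm): simulate a 2-dimensional linear dynamical system, form the covariance between stacked future and past observation windows, and read the state dimension and a predictive state basis off an SVD, with no EM and no local optima. The system, noise levels, and window length are invented.</p>

<pre>
# Toy spectral identification of a linear dynamical system (illustration only).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [-0.2, 0.9]])   # true latent dynamics (unknown)
C = rng.normal(size=(3, 2))               # true observation map (unknown)
x, ys = np.zeros(2), []
for _ in range(20000):                    # simulate a long observation stream
    x = A @ x + 0.1 * rng.normal(size=2)
    ys.append(C @ x + 0.1 * rng.normal(size=3))
Y = np.array(ys)

k, T = 4, len(Y)                          # past/future window length
past = np.array([Y[t - k:t].ravel() for t in range(k, T - k)])
future = np.array([Y[t:t + k].ravel() for t in range(k, T - k)])
Sigma = future.T @ past / len(past)       # future-past moment matrix
s = np.linalg.svd(Sigma, compute_uv=False)
print("singular values:", np.round(s[:6], 3))
# The sharp drop after the 2nd singular value exposes the 2-d latent state;
# the top singular directions give a predictive (observable) state basis,
# estimated directly from moments of the data.
</pre>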
3/30/12 <a href="http://www.cc.gatech.edu/~athomaz/">Andrea Thomaz</a> <a name="a20120330" id="a20120330">Designing Learning Interactions for Robots</a>

<p> In this talk I present recent work from the Socially Intelligent Machines Lab at Georgia Tech. One of the focuses of our lab is on Socially Guided Machine Learning, building robot systems that can learn from everyday human teachers. We look at standard Machine Learning interactions and redesign interfaces and algorithms to support the collection of learning input from naive humans. This talk starts with an initial investigation comparing self and social learning, which motivates our recent work on Active Learning for robots. Then, I will present results from a study of robot active learning, which motivates two challenges: getting interaction timing right, and asking good questions. To address the first challenge we are building computational models of reciprocal social interactions. And to address the second challenge we are developing algorithms for generating Active Learning queries in embodied learning tasks.</p>

<p> Dr. Andrea L. Thomaz is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. She directs the Socially Intelligent Machines lab, which is affiliated with the Robotics and Intelligent Machines (RIM) Center and with the Graphics Visualization and Usability (GVU) Center. She earned a B.S. in Electrical and Computer Engineering from the University of Texas at Austin in 1999, and Sc.M. and Ph.D. degrees from MIT in 2002 and 2006. Dr. Thomaz is published in the areas of Artificial Intelligence, Robotics, Human-Robot Interaction, and Human-Computer Interaction. She received an ONR Young Investigator Award in 2008, and an NSF CAREER award in 2010. Her work has been featured on the front page of the New York Times, and in 2009 she was named one of MIT Technology Review’s TR 35.</p>
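<p> The "asking good questions" challenge has a classic minimal form: uncertainty sampling. The sketch below is my own invention, far simpler than the lab's embodied setting: the learner queries its teacher on the pool example it is least certain about, rather than on random examples. The dataset, model, and query budget are invented.</p>

<pre>
# Toy active-learning query loop via uncertainty sampling (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the concept only the teacher knows

labeled = [int(np.argmax(X.sum(1))), int(np.argmin(X.sum(1)))]  # one per class
pool = [i for i in range(500) if i not in labeled]
model = LogisticRegression()
for _ in range(20):                        # 20 questions to the teacher
    model.fit(X[labeled], y[labeled])
    p = model.predict_proba(X[pool])[:, 1]
    q = pool[int(np.argmin(np.abs(p - 0.5)))]   # least confident = best query
    labeled.append(q)                      # the "teacher" supplies y[q]
    pool.remove(q)
model.fit(X[labeled], y[labeled])
print(f"accuracy after {len(labeled)} labels: {model.score(X, y):.3f}")
</pre>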
4/6/12 <a href="http://mplab.ucsd.edu/">Javier Movellan</a> <a name="a20120406" id="a20120406">Towards a New Science of Learning</a>

<p> Advances in machine learning, machine perception, neuroscience, and control theory are making possible the emergence of a new science of learning. This discipline could help us understand the role of learning in the development of human intelligence, and to create machines that can learn from experience and that can accelerate human learning and education. I will propose that key to this emerging science is the commitment to computational analysis, for which the framework of probability theory and stochastic optimal control is particularly well suited, and to the testing of theories using physical real-time robotic implementations. I will describe our efforts to help understand learning and development from a computational point of view. This includes development of machine perception primitives for social interaction, development of social robots to enrich early childhood education, computational analysis of rich databases of early social behavior, and development of sophisticated humanoid robots to understand the emergence of sensory-motor intelligence in infants.</p>
4/13/12 <a href="http://www.cs.washington.edu/homes/todorov/">Emanuel Todorov</a> <a name="a20120413" id="a20120413">Automatic Synthesis of Complex Behaviors with Optimal Control</a>

<p> In this talk I will show videos of complex motor behaviors synthesized automatically using new optimal control methods, and explain how these methods work. The behaviors include getting up from an arbitrary pose on the ground, walking, hopping, swimming, kicking, climbing, hand-stands, and cooperative actions. The synthesis methods fall in two categories. The first is online trajectory optimization or model-predictive control (MPC). The idea is to optimize the movement trajectory at every step of the estimation-control loop up to some time horizon (in our case about half a second), execute only the beginning portion of the trajectory, and repeat the optimization at the next time step (say 10 msec later). This approach has been used extensively in domains such as chemical process control where the dynamics are sufficiently slow and smooth to make online optimization possible. We have now developed a number of algorithmic improvements, allowing us to apply MPC to robotic systems. This requires a fast physics engine (for computing derivatives via finite differencing), which we have also developed. The second method is based on the realization that most movements performed on land are made for the purpose of establishing contact with the environment, and exerting contact forces. This suggests that contact events should not be treated as side-effects of multi-joint kinematics and dynamics, but rather as explicit decision variables. We have developed a method where the optimizer directly specifies the desired contact events, using continuous decision variables, and at the same time optimizes the movement trajectory in a way consistent with the specified contact events. This makes it possible to optimize movement trajectories with many contact events, without need for manual scripting, motion capture or fortuitous choice of "features".</p>
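<p> The parenthetical about "computing derivatives via finite differencing" is the pivot of the first method, so here is a toy version (my illustration, not the speakers' physics engine): perturb each state and control coordinate of a simulated step to get the local linearization A = df/dx, B = df/du that a trajectory optimizer consumes. The pendulum dynamics and step sizes are invented.</p>

<pre>
# Toy finite-difference dynamics derivatives for MPC (illustration only).
import numpy as np

def f(x, u, dt=0.01):
    """One simulation step of a toy pendulum: x = [angle, velocity]."""
    th, thd = x
    return np.array([th + dt * thd,
                     thd + dt * (-9.81 * np.sin(th) + u[0])])

def fd_jacobians(x, u, eps=1e-6):
    """A = df/dx, B = df/du via central finite differences."""
    A = np.zeros((2, 2))
    B = np.zeros((2, 1))
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        A[:, i] = (f(x + d, u) - f(x - d, u)) / (2 * eps)
    for j in range(1):
        d = np.zeros(1); d[j] = eps
        B[:, j] = (f(x, u + d) - f(x, u - d)) / (2 * eps)
    return A, B

A, B = fd_jacobians(np.array([0.3, 0.0]), np.array([0.0]))
print("A =\n", np.round(A, 4), "\nB =\n", np.round(B, 4))
# These local linearizations are what the trajectory optimizer consumes at
# every step of the receding-horizon loop.
</pre>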
4/20/12 <a href="http://www-all.cs.umass.edu/~barto/">Andrew Barto</a> <a name="a20120420" id="a20120420">Autonomous Robot Acquisition of Transferable Skills</a>

<p> A central goal of artificial intelligence is the design of agents that can learn to achieve increasingly complex behavior over time. An important type of cumulative learning is the acquisition of procedural knowledge in the form of skills, which allows an agent to abstract away from low-level motor control and to plan and learn at a higher level, progressively improving its problem-solving abilities and creating further opportunities for learning. I describe a robot system that learns to sequence innate controllers to solve a task, and then extracts components of that solution as transferable skills. The resulting skills improve the robot's ability to learn to solve a second task. This system was developed by Dr. George Konidaris, who received the Ph.D. from the University of Massachusetts Amherst in 2010 and is currently a Postdoctoral Associate in the Learning and Intelligent Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory.</p>
4/27/12 <a href="http://www.cs.washington.edu/homes/fox/">Dieter Fox</a> <a name="a20120427" id="a20120427">Grounding Natural Language in Robot Control Systems</a>

<p> Robots are becoming more and more capable at reasoning about people, objects, and activities in their environments. The ability to extract high-level semantic information from sensor data provides new opportunities for human-robot interaction. One such opportunity is to explore interacting with robots via natural language. In this talk I will present our preliminary work toward enabling robots to interpret, or ground, natural language commands in robot control systems. We build on techniques developed by the semantic natural language processing community on learning grammars that parse natural language input to logic-based semantic meaning. I will demonstrate early results in two application domains: first, learning to follow natural language directions through indoor environments; and, second, learning to ground (simple) object attributes via weakly supervised training. Joint work with Luke Zettlemoyer, Cynthia Matuszek, Nicholas Fitzgerald, and Liefeng Bo. Support provided by Intel ISTC-PC, NSF, ARL, and ONR.</p>
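<p> A deliberately tiny grounding example (my invention; the systems in the talk learn far richer grammars that parse to logic-based meaning): a fixed lexicon maps phrases to robot control primitives, and a command is grounded into an executable plan. The phrases and primitives are made up for illustration.</p>

<pre>
# Toy grounding of language commands in control primitives (illustration only).
LEXICON = {
    "go forward": [("drive", 1.0)],      # meters; hypothetical primitive
    "turn left":  [("rotate", +90)],     # degrees
    "turn right": [("rotate", -90)],
}

def ground(command):
    """Map a natural-language command to a sequence of control primitives."""
    plan = []
    for clause in command.lower().replace(",", "").split(" then "):
        clause = clause.strip()
        if clause not in LEXICON:
            raise ValueError(f"cannot ground: {clause!r}")
        plan.extend(LEXICON[clause])
    return plan

print(ground("go forward, then turn left, then go forward"))
# [('drive', 1.0), ('rotate', 90), ('drive', 1.0)]
</pre>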
5/4/12 <a href="http://charm.stanford.edu/Main/AllisonOkamura/">Allison Okamura</a> <a name="a20120504" id="a20120504">Robot-Assisted Needle Steering</a>

<p> Robot-assisted needle steering is a promising technique to improve the effectiveness of needle-based medical procedures by allowing redirection of a needle's path within tissue. Our robot employs a tip-based steering technique, in which the asymmetric tips of long, thin, flexible needles develop tip forces orthogonal to the needle shaft due to interaction with surrounding tissue. The robot steers a needle through two input degrees of freedom, insertion along and rotation about the needle shaft, in order to achieve six-degree-of-freedom positioning of the needle tip. A closed-loop system for asymmetric-tip needle steering was developed, including devices, models and simulations, path planners, controllers, and integration with medical imaging. I will present results from testing needle steering in artificial and biological tissues, and discuss ongoing work toward clinical applications. This project is a collaboration between researchers at Johns Hopkins University, UC Berkeley, and Stanford University.</p>

<p> Dr. Allison M. Okamura received the BS degree from the University of California at Berkeley in 1994, and the MS and PhD degrees from Stanford University in 1996 and 2000, respectively, all in mechanical engineering. She is currently Associate Professor in the mechanical engineering department at Stanford University. She was previously Professor and Vice Chair of mechanical engineering at Johns Hopkins University. She has been an associate editor of the IEEE Transactions on Haptics, an editor of the IEEE International Conference on Robotics and Automation Conference Editorial Board, and co-chair of the IEEE Haptics Symposium. Her awards include the 2009 IEEE Technical Committee on Haptics Early Career Award, the 2005 IEEE Robotics and Automation Society Early Academic Career Award, and the 2004 NSF CAREER Award. She is an IEEE Fellow. Her interests include haptics, teleoperation, virtual environments and simulators, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. For more information about our work, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website: http://charm.stanford.edu.</p>
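<p> The two-inputs-to-six-degrees-of-freedom claim follows from the standard bevel-tip kinematic model in the needle-steering literature; here is a planar sketch of it (my illustration, not the speakers' planner). Insertion bends the path at a roughly constant curvature set by the tip asymmetry, and spinning the shaft half a turn flips the bending direction. The curvature value is invented.</p>

<pre>
# Planar sketch of bevel-tip needle-steering kinematics (illustration only).
import numpy as np

kappa = 0.005                            # path curvature from the bevel (1/mm)
x, y, th, bevel = 0.0, 0.0, 0.0, +1.0    # tip pose (mm, rad) and bevel side

def insert(length_mm, step=1.0):
    """Advance the needle; the asymmetric tip curves the path as it cuts."""
    global x, y, th
    for _ in range(int(length_mm / step)):
        x += step * np.cos(th)
        y += step * np.sin(th)
        th += step * kappa * bevel       # heading drifts toward the bevel side

def rotate_180():
    """Spin the shaft half a turn so the bevel bends the path the other way."""
    global bevel
    bevel = -bevel

insert(60)                               # curve one way...
rotate_180()
insert(60)                               # ...then the other: an S-shaped path
print(f"tip at ({x:.1f}, {y:.1f}) mm, heading {np.degrees(th):.1f} degrees")
</pre>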
5/11/12 <a href="http://www.ee.washington.edu/faculty/hannaford/">Blake Hannaford</a> <a name="a20120511" id="a20120511">Click the Scalpel -- Better Patient Outcomes by Advancing Robotics in Surgery</a>

<p> Surgery is a demanding unstructured physical manipulation task involving highly trained humans, advanced tools, networked information systems, and uncertainty. This talk will review engineering and scientific research at the University of Washington Biorobotics Lab, aimed at better care of patients, including remote patients in extreme environments. The Raven interoperable robot surgery research system is a telemanipulation system for exploration and training in surgical robotics. We are currently near completion of seven "Raven-II" systems, which will be deployed at leading surgical robotics research centers to create an interoperable network of testbeds. Highly effective and safe surgical teleoperation systems of the future will provide high-quality haptic feedback. Research in systems theory and human perception addressing that goal will also be introduced.</p>

<p> Dr. Blake Hannaford, Ph.D., is Professor of Electrical Engineering and Adjunct Professor of Bioengineering, Mechanical Engineering, and Surgery at the University of Washington. He received the B.S. degree in Engineering and Applied Science from Yale University in 1977, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Berkeley, in 1982 and 1985 respectively. Before graduate study, he held engineering positions in digital hardware and software design, office automation, and medical image processing. At Berkeley he pursued thesis research in multiple target tracking in medical images and the control of time-optimal voluntary human movement. From 1986 to 1989 he worked on the remote control of robot manipulators in the Man-Machine Systems Group in the Automated Systems Section of the NASA Jet Propulsion Laboratory, Caltech. He supervised that group from 1988 to 1989. Since September 1989, he has been at the University of Washington in Seattle, where he has been Professor of Electrical Engineering since 1997, and served as Associate Chair for Education from 1999 to 2001. He was awarded the National Science Foundation's Presidential Young Investigator Award and the Early Career Achievement Award from the IEEE Engineering in Medicine and Biology Society, and is an IEEE Fellow. His currently active interests include haptic displays on the Internet and surgical robotics. He has consulted on robotic surgical devices with the Food and Drug Administration Panel on surgical devices.</p>
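<p> The haptic-feedback point can be illustrated with a toy position-exchange bilateral teleoperator (my invention, not the Raven's controller): master and slave devices are coupled by a spring-damper, so when the slave presses into stiff "tissue" the reaction is felt back through the coupling at the operator's hand. All gains, masses, and the tissue model are invented.</p>

<pre>
# Toy bilateral (position-exchange) teleoperation loop (illustration only).
import numpy as np

dt, M = 0.001, 0.1                  # timestep (s), device mass (kg)
K, B, b0 = 200.0, 5.0, 0.5          # coupling spring/damper, device friction
xm = vm = xs = vs = 0.0             # master/slave position (m) and velocity

for t in np.arange(0.0, 3.0, dt):
    f_env = -300.0 * max(xs - 0.01, 0.0)      # stiff "tissue" past 10 mm
    f_c = K * (xs - xm) + B * (vs - vm)       # coupling force on the master
    vm += dt * (1.0 + f_c - b0 * vm) / M      # operator pushes with 1 N
    vs += dt * (f_env - f_c - b0 * vs) / M    # slave tracks the master
    xm += dt * vm
    xs += dt * vs
print(f"master {1000*xm:.2f} mm, slave {1000*xs:.2f} mm, "
      f"force fed back to the hand {-f_c:.2f} N")
</pre>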
5/25/12 <a href="http://www.neuromech.northwestern.edu/">Malcolm MacIver</a> <a name="a20120525" id="a20120525">Robotic Electrolocation</a>

<p> Electrolocation is used by the weakly electric fish of South America and Africa to navigate and hunt in murky water where vision is ineffective. These fish generate an AC electric field that is perturbed by objects nearby that differ in impedance from the water. Electroreceptors covering the body of the fish report the amplitude and phase of the local field. The animal decodes electric field perturbations into information about its surroundings. Electrolocation is fundamentally divergent from optical vision (and other imaging methods) that create projective images of 3D space. Current electrolocation methods are also quite different from electrical impedance tomography. We will describe current electrolocation technology, and progress on development of a propulsion system inspired by electric fish to provide the precise movement capabilities that this short-range sensing approach requires.</p>

<p> Dr. Malcolm MacIver is Associate Professor at Northwestern University with joint appointments in the Mechanical Engineering and Biomedical Engineering departments. He is interested in the neural and mechanical basis of animal behavior, evolution, and the implications of the close coupling of movement with gathering information for our understanding of intelligence and consciousness. He also develops immersive art installations that have been exhibited internationally.</p>
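<p> The physics in the first paragraph can be sketched with a textbook simplification (my invention, not the lab's model): a small sphere whose impedance differs from the water perturbs the local field roughly like an induced dipole, so the voltage change at a skin receptor falls off steeply with distance. The uniform-field and perfect-conductor assumptions are simplifications.</p>

<pre>
# Toy dipole model of an electrolocation "echo" (illustration only).
import numpy as np

E0 = 1.0                                  # local field at the object (V/m)
a = 0.01                                  # object radius (m)

def perturbation(r, chi=1.0):
    """Dipole-like voltage perturbation at distance r along the field axis;
    chi = +1 for a conducting sphere, -0.5 for a perfect insulator."""
    return chi * a**3 * E0 / r**2

for r in (0.02, 0.05, 0.10, 0.20):
    print(f"r = {100*r:4.0f} cm   dV ~ {perturbation(r)*1e6:8.2f} uV")
# The steep ~1/r^2 falloff (on top of the decay of the fish's own field)
# is why electric sensing is short-range, and why the precise movement
# capabilities mentioned above matter so much.
</pre>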
6/1/12 <a href="http://www.ri.cmu.edu/person.html?person_id=689">Drew Bagnell</a> <a name="a20120601" id="a20120601">Imitation Learning, Inverse Optimal Control and Purposeful Prediction</a>

<p> Programming robots is hard. While demonstrating a desired behavior may be easy, designing a system that behaves this way is often difficult, time consuming, and ultimately expensive. Machine learning promises to enable "programming by demonstration" for developing high-performance robotic systems. Unfortunately, many approaches that utilize the classical tools of supervised learning fail to meet the needs of imitation learning. I'll discuss the problems that result from ignoring the effect of actions influencing the world, and I'll highlight simple "reduction-based" approaches that, both in theory and in practice, mitigate these problems. I'll demonstrate the resulting approach on the development of reactive controllers for cluttered UAV flight and for video game systems. Additionally, robotic systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in lieu of a supervised learning approach often leads to poor and myopic performance. While planners have demonstrated dramatic success in applications ranging from legged locomotion to outdoor unstructured navigation, such algorithms rely on fully specified cost functions that map sensor readings and environment models to a scalar cost. Such cost functions are usually manually designed and programmed. Recently, our group has developed a set of techniques that learn these functions from human demonstration by applying an Inverse Optimal Control (IOC) approach to find a cost function for which planned behavior mimics an expert's demonstration. These approaches shed new light on the intimate connections between probabilistic inference and optimal control. I'll consider case studies in activity forecasting of drivers and pedestrians as well as the imitation learning of robotic locomotion and rough-terrain navigation. These case studies highlight key challenges in applying the algorithms in practical settings.</p>

<p> J. Andrew Bagnell is an Associate Professor with the Robotics Institute, the National Robotics Engineering Center and the Machine Learning Department at Carnegie Mellon University. His research centers on the theory and practice of machine learning for decision making and robotics. Dr. Bagnell directs the Learning, AI, and Robotics Laboratory (LAIRLab) within the Robotics Institute. Dr. Bagnell serves as the director of the Robotics Institute Summer Scholars program, a summer research experience in robotics for undergraduates throughout the world. Dr. Bagnell and his group's research has won awards in both the robotics and machine learning communities, including at the International Conference on Machine Learning, Robotics Science and Systems, and the International Conference on Robotics and Automation. Dr. Bagnell's current projects focus on machine learning for dexterous manipulation, decision making under uncertainty, ground and aerial vehicle control, and robot perception. Prior to joining the faculty, Prof. Bagnell received his doctorate at Carnegie Mellon in 2004 with a National Science Foundation Graduate Fellowship and completed undergraduate studies with highest honors in electrical engineering at the University of Florida.</p>
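<p> One concrete instance of the "reduction-based" fixes mentioned above is a DAgger-style loop; the sketch below is my own illustration, not the speaker's code. Instead of training once on expert data, it repeatedly rolls out the learner's policy, asks the expert to label the states the learner itself visits, aggregates, and retrains. The 1-D lane-keeping world and expert are invented.</p>

<pre>
# Toy DAgger-style imitation learning loop (illustration only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
expert = lambda s: int(s < 0.0)          # steer right (1) iff left of center

def rollout(policy, T=300):
    """Return the lateral offsets visited while driving with `policy`."""
    s, visited = 0.0, []
    for _ in range(T):
        visited.append(s)
        s += (0.1 if policy(s) else -0.1) + rng.normal(0.0, 0.05)
    return np.array(visited)

X = rollout(expert).reshape(-1, 1)       # round 0: states the expert visits
y = np.array([expert(s) for s in X.ravel()])
model = DecisionTreeClassifier(max_depth=3)
for it in range(5):
    model.fit(X, y)
    visited = rollout(lambda s: int(model.predict([[s]])[0]))
    X = np.vstack([X, visited.reshape(-1, 1)])            # aggregate states
    y = np.concatenate([y, [expert(s) for s in visited]]) # with expert labels
    print(f"round {it}: mean |offset| under learner = "
          f"{np.abs(visited).mean():.3f}")
</pre>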

</body>

</html>