Sensor-based Task-Oriented Grasp Synthesis
One of the key challenges in task-oriented grasp synthesis is to mathematically represent a task. In our work, we represent a task as a sequence of constant screw motions. Given a grasp (a pair of antipodal contact locations), we can evaluate its feasibility for imparting the desired constant screw motion using our proposed task-dependent grasp metric. We have also developed a neural network-based approach that solves the inverse problem, i.e., given an object representation as a partial point cloud obtained from an RGB-D sensor and a task specified by a screw axis, compute a good grasping region from which the robot can grasp the object and impart the desired constant screw motion. This task representation also allows us to couple our approach for task-oriented grasp synthesis with screw geometry-based motion planners. For more details, please visit the project page.
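To make the representation concrete, the sketch below (our illustration, not the project's code) shows how a single constant screw motion can be parameterized by an axis direction, a point on the axis, a pitch, and a rotation magnitude, and converted into the SE(3) displacement it induces via the standard screw/exponential-map formula. The function name `screw_to_transform` and the example values are placeholders.

```python
# Minimal sketch: a constant screw motion and the rigid-body displacement it induces.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def screw_to_transform(s, q, h, theta):
    """Homogeneous transform of a constant screw motion.

    s: unit direction of the screw axis, q: a point on the axis,
    h: pitch (translation per radian), theta: rotation magnitude (rad).
    """
    s = np.asarray(s, dtype=float)
    q = np.asarray(q, dtype=float)
    S = skew(s)
    # Rodrigues' formula for the rotation about the axis direction.
    R = np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)
    # Translation: rotate about the axis through q, plus pitch along the axis.
    p = (np.eye(3) - R) @ q + h * theta * s
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Example: a pouring-like segment, i.e. a pure rotation (zero pitch) about an
# axis through the container rim; the result pre-multiplies the current pose.
T_screw = screw_to_transform(s=[1.0, 0.0, 0.0], q=[0.4, 0.0, 0.3], h=0.0, theta=np.pi / 3)
T_goal = T_screw @ np.eye(4)  # current end-effector pose taken as identity here
```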
More recently, we have formalized the notion of regrasping in order to satisfy the motion constraints. Using our task-dependent grasp metric and a manipulation plan, we can determine whether the object must be regrasped while executing the plan or whether a single grasp suffices.
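The sketch below illustrates the regrasping decision at a high level, under our own assumptions rather than as the published algorithm: given a manipulation plan expressed as a list of screw segments and a candidate grasp, it flags the segments where a task-dependent grasp metric (left abstract here as a callable) falls below a feasibility threshold. The names `ScrewSegment`, `Grasp`, `grasp_metric`, and the threshold value are hypothetical.

```python
# Minimal sketch of the regrasping check over a manipulation plan.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScrewSegment:
    s: tuple      # unit direction of the screw axis
    q: tuple      # a point on the axis
    h: float      # pitch
    theta: float  # rotation magnitude

@dataclass
class Grasp:
    contact_a: tuple  # antipodal contact locations on the object
    contact_b: tuple

def regrasp_points(plan: List[ScrewSegment],
                   grasp: Grasp,
                   grasp_metric: Callable[[Grasp, ScrewSegment], float],
                   threshold: float = 0.1) -> List[int]:
    """Indices of plan segments for which the current grasp is infeasible."""
    return [i for i, seg in enumerate(plan) if grasp_metric(grasp, seg) < threshold]

# An empty result means a single grasp suffices for the whole plan; otherwise
# a new grasp must be synthesized before the flagged segments.
```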

Representing complex manipulation tasks, such as scooping and pouring, as a sequence of constant screw motions in SE(3) allows us to extract the task-related constraints on the end-effector's motion from kinesthetic demonstrations and transfer them to new instances of the same tasks. We have evaluated this approach on scooping and pouring, as well as in the context of vertical containerized farming for transplanting and harvesting leafy crops.
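As a rough illustration of how a task expressed as a sequence of constant screw motions can be turned into an end-effector trajectory, the sketch below discretizes each segment and chains the resulting displacements onto a starting pose, assuming `screw_to_transform` from the earlier sketch is in scope. The segment values are illustrative placeholders, not parameters extracted from any demonstration.

```python
# Minimal sketch: replay a sequence of constant screw segments as pose waypoints.
import numpy as np

def screw_trajectory(T_start, segments, steps_per_segment=20):
    """End-effector waypoints obtained by executing each screw segment in order.

    segments: list of (s, q, h, theta) tuples describing constant screw motions.
    """
    waypoints = [T_start]
    T_seg_start = T_start
    for (s, q, h, theta) in segments:
        for k in range(1, steps_per_segment + 1):
            frac = k / steps_per_segment
            # Partial displacement along the same screw axis, applied in the world frame.
            T_partial = screw_to_transform(s, q, h, frac * theta)
            waypoints.append(T_partial @ T_seg_start)
        T_seg_start = waypoints[-1]
    return waypoints

# Example: a scooping-like motion as two segments, a translation-dominant screw
# (nonzero pitch) followed by a rotation-only screw to lift the scoop.
segments = [([0.0, 1.0, 0.0], [0.5, 0.0, 0.1], 0.5, 0.2),
            ([0.0, 1.0, 0.0], [0.5, 0.0, 0.1], 0.0, np.pi / 4)]
waypoints = screw_trajectory(np.eye(4), segments)
```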