Teaching Robots with Gestures
Automation plays an important role in increasing productivity and maintaining quality in industrial environments. Industrial robots allow throughput to scale and ensure a consistency of quality that is very difficult to achieve with manual labor.
Large automobile companies outsource the production of smaller parts to small and medium-sized (S & M) industries. These industries invest in industrial robots for the manufacturing and finishing of auto parts.
During commissioning of the robots, ABB programs and configures each robot for a particular job. For any change in the job or workpiece, the S & M scale industry has to go back to ABB to reconfigure the robot.
How could such industries adapt to frequent job changes without depending on ABB for configuring the robot?
The project explored direct-manipulation techniques for configuring the robot. Instead of a joystick or push-buttons, human gestures were used to communicate with the robot. The project had two main parts — configuring the robot and defining the paint paths. A Microsoft Kinect camera was used to track different points on the body.
The movement of the hand joints was mapped onto the axes of the robot: the angle of movement of a particular hand joint is signaled to the robot, which moves correspondingly. In this way the robot could be configured to a large extent, with fine tuning done through the TPU (Teach Pendant Unit).
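The joint-to-axis mapping can be sketched as follows. This is a minimal illustration in plain Python, not the project's actual code: the project used the Kinect's skeletal tracking and ABB's controller interface, and the function names and angle ranges here are assumptions.

```python
import math

def joint_angle(shoulder, elbow, wrist):
    """Angle in degrees at the elbow, from three tracked 3D points."""
    # Vectors from the elbow toward the shoulder and toward the wrist
    u = [s - e for s, e in zip(shoulder, elbow)]
    v = [w - e for w, e in zip(wrist, elbow)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / (nu * nv)))

def map_to_axis(angle, human_range=(0.0, 150.0), robot_range=(0.0, 90.0)):
    """Linearly rescale a human joint angle onto a robot axis range,
    clamped so the command never leaves the robot's safe envelope."""
    h0, h1 = human_range
    r0, r1 = robot_range
    t = (angle - h0) / (h1 - h0)
    t = max(0.0, min(1.0, t))  # clamp to [0, 1]
    return r0 + t * (r1 - r0)
```

For example, a fully bent elbow at one end of the human range would drive the mapped robot axis to one end of its range, with intermediate angles interpolated linearly.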
Motion tracking was used to capture the paint path taken by an expert painter. The 3D points that constitute the path are fed to the robot, which replicates the expert painter's actions.
In this way, paint-efficient and energy-efficient paths could be created. The project explored how a machine could learn through natural user interfaces like human gestures.
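The path-capture step can be sketched as a simple recording filter: the Kinect streams hand positions at around 30 fps, and only points that have moved a meaningful distance are kept, yielding a compact path to feed to the robot. This is a hedged sketch; the threshold value and function names are assumptions, not the project's actual code.

```python
def dist(a, b):
    """Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def record_path(samples, min_step=0.01):
    """Thin a raw stream of tracked 3D points into a paint path by
    keeping only points at least `min_step` (metres, assumed) away
    from the last kept point."""
    path = []
    for p in samples:
        if not path or dist(path[-1], p) >= min_step:
            path.append(p)
    return path
```

The resulting list of 3D points is what would be replayed by the robot to reproduce the painter's stroke.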
The Memory Lane Project
The Memory Lane Project is an interactive installation that invites users to engage with it and find traces of their memories in it.
The concept was developed for the golden jubilee celebration of the Indian Institute of Management, Ahmedabad, to commemorate everyone who has been a part of the institute; the alumni are therefore the primary users. The installation focuses on the 'memory' of the user. Long-term memory is largely visual in nature, so everything the installation serves to the user is visual rather than textual.
Essentially, the installation is a grid of photographs which, through its affordance, invites the user to interact with it and select one. Photographs enable moments of self-encounter that allow for constructive identity-creation. Taking leads from the selected photograph, the installation puts forward a set of related photographs, offering the user a nostalgic world to immerse in.
The Kinect camera enables intuitive gestures — swaying a hand to surf and spreading the arms to open a photograph — making the interaction feel very natural. The ludic interface lets the user play around using gestures.
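A gesture like "spread arms to open a photograph" reduces to a simple geometric test on the tracked skeleton. The sketch below is an assumed, minimal version in plain Python — the real installation would use the Kinect SDK's joint data, and the 1.8× threshold is a guess, not a measured value.

```python
def arms_spread(left_hand, right_hand, shoulder_width, factor=1.8):
    """Detect the 'open photograph' gesture: the horizontal span between
    the two hands exceeds the user's shoulder width by a factor.
    Points are (x, y, z) in the Kinect's coordinate space (assumed)."""
    span = abs(right_hand[0] - left_hand[0])
    return span > factor * shoulder_width
```

Normalising by shoulder width keeps the gesture threshold consistent for users of different sizes and at different distances from the camera.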
Equally vital is the choice of space for the installation: the tunnel connecting the architecturally distinct campuses of IIM-A is a metaphor for 'going back in time'.
T Trial
Shopping for garments is a pleasant experience, but waiting for trial rooms to try the garments can hamper it. During weekends or the festive season, it is a common sight to see customers queuing outside the trial rooms with three or four garments to try.
What if the customer has to make a quick choice between the garments?
Can we enable the customer to take a second opinion from the person accompanying him/her?
T Trial relieves the user from waiting for trial rooms while shopping for garments like T-shirts. A large display acts as a mirror: as soon as the user stands in front of it, the Kinect tracks the position of the user's shoulders in three dimensions. T Trial then maps a T-shirt onto the user, scaled to the user's body size.
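The mapping step amounts to positioning and scaling a shirt sprite from the two tracked shoulder points. The following is a sketch under stated assumptions — the 1.3 margin factor and the screen-space coordinate convention are illustrative, not taken from the project.

```python
def shirt_transform(left_shoulder, right_shoulder, sprite_width):
    """Compute centre and scale for a T-shirt sprite from the two
    shoulder points, given in screen-space (x, y) pixels (assumed).
    The shirt is drawn slightly wider than the shoulders (1.3x, assumed)."""
    lx, ly = left_shoulder
    rx, ry = right_shoulder
    shoulder_px = abs(rx - lx)
    scale = shoulder_px / sprite_width * 1.3
    cx = (lx + rx) / 2  # midpoint between the shoulders
    cy = (ly + ry) / 2
    return cx, cy, scale
```

Because the scale is derived from the live shoulder distance, the overlay grows and shrinks as the user steps closer to or farther from the display.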
Once the T-shirt is mapped onto the user, he/she can decide whether it suits him/her. The user can also use hand gestures to hover on the green arrows and load the next or previous T-shirt.
The user can say aloud "choose" to select a particular T-shirt, and T Trial captures an image of the user. The user can choose up to four T-shirts, after which all four images are displayed on screen. The user can then compare the T-shirt designs and make a decision, or get a second opinion from the friends or family accompanying him/her.
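The "choose up to four, then compare" flow is a small piece of interaction state. The sketch below assumes a voice command has already been recognised (the real system would use the Kinect's speech recognition); the function and variable names are illustrative.

```python
MAX_CHOICES = 4

def handle_command(command, current_shirt, chosen):
    """Append the current shirt snapshot on a 'choose' command, up to
    MAX_CHOICES. Returns True when the comparison screen should be
    shown (i.e. four choices have been made)."""
    if command == "choose" and len(chosen) < MAX_CHOICES:
        chosen.append(current_shirt)
    return len(chosen) == MAX_CHOICES
```

Further "choose" commands after the fourth are ignored, and the return value keeps signalling that the four-up comparison view should be displayed.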
Immersive Dome
The idea was to create an immersive viewing experience using a large dome. The dome (6 feet in diameter) covers the user's entire field of view, so the user sees nothing but the projected image or video.
A sphere was generated programmatically using Processing, and panoramic photographs were wrapped onto it. The panoramic photos look perfectly stitched on all sides, which gives a continuous perception of the photograph. Instead of viewing this sphere from outside, the viewpoint was placed at the center of the virtual sphere (the camera's coordinates were adjusted so that it sat exactly at the center).
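The wrapping step maps an equirectangular panorama onto the sphere's vertices: longitude runs linearly across the image width and latitude across its height, so the left and right edges of the photo meet seamlessly. Below is a sketch of that mapping in plain Python, standing in for the original Processing sketch.

```python
import math

def sphere_vertex(theta, phi, r=1.0):
    """3D point on a sphere of radius r for longitude theta in [0, 2*pi)
    and latitude phi in [0, pi] (0 = top pole)."""
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.cos(phi)
    z = r * math.sin(phi) * math.sin(theta)
    return x, y, z

def panorama_uv(theta, phi):
    """Equirectangular texture coordinate for that vertex: longitude maps
    linearly to u, latitude to v, so u = 0 and u = 1 sample the same
    image column and the seam disappears."""
    return theta / (2 * math.pi), phi / math.pi
```

With the camera placed at the sphere's center, rendering these textured vertices surrounds the viewer with the panorama in every direction.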
The inner wall of this virtual sphere was projection-mapped onto the concave surface of the dome to enhance the on-screen movement. Ultra-short-throw projectors minimized occlusion, giving the user an immersive experience.
A webcam placed on the dome tracks a fiducial marker held by the user (this marker could also be printed on the user's T-shirt). As the user moves the marker, the virtual sphere rotates and the panoramic photos move, so the user can view the photo through 360 degrees just by moving the marker.
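The marker-to-rotation step can be sketched as a linear mapping from the marker's position in the webcam frame to the yaw and pitch of the virtual sphere. The linear mapping, frame size, and angle ranges below are assumptions for illustration; the actual project tracked the fiducial with a vision library.

```python
def marker_to_rotation(marker_x, marker_y, frame_w, frame_h, max_yaw=180.0):
    """Map the marker's pixel position in the webcam frame to sphere
    rotation: the frame centre is zero rotation, the frame edges are
    +/- max_yaw degrees of yaw (and half that range of pitch, assumed)."""
    yaw = (marker_x / frame_w - 0.5) * 2 * max_yaw
    pitch = (marker_y / frame_h - 0.5) * max_yaw
    return yaw, pitch
```

Sliding the marker from the centre of the frame to its right edge would thus pan the panorama half-way around the sphere.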
This lets the user view any part of the photo or video through gestures (or by moving around).
A movie can thus be made interactive: the user chooses his/her own frame instead of viewing the frame the director chose to show.