Our research asks the question: In the future, when everyone can see in a computer-enhanced way, how will those enhanced sensory capabilities change communication between people? How might language itself evolve when children can augment speech by drawing their ideas directly in the air, and gestures can trigger simulations as part of speech itself?
In our Holojam project, participants wearing lightweight, untethered VR headsets, tracked via motion capture, walk within a shared alternate reality, see each other as avatars, and draw in the air. Physical and virtual objects are intermixed. This research was demonstrated at SIGGRAPH 2015: http://mrl.nyu.edu/holojam
In our Chalktalk project, people can sketch freehand drawings to create complex simulations. We are integrating Holojam and Chalktalk to prototype visually enhanced communication capabilities for future reality. We will make these techniques available to other research groups in the larger "Consortium for Future Reality."