Virtual Reality


PAAR / 2022
Paar is a VR dance film split between a physical world and a virtual world. It was filmed in the Tieranatomisches Theater (TAT) at Berlin Charité and is set to an ambient sonic landscape by the artist Arushi Jain, with movement by choreographer and creative director Carly Lave. Using 360 video together with motion capture technology, Paar takes audience members through a seven-minute immersive experience in which they stand, bear witness, and interact with the dancers’ journey. The project was made possible with support from director Carly Lave, Dr. Christian Stein, cofounder of gamelab.berlin, and GHUNGHRU founder Arushi Jain. In creating the projection, we integrated multiple technological components to drive the composition of the video across physical and virtual spaces:
  • 360 Video
  • Photogrammetry
  • Motion Capture Technology
  • Sound Spatialization
Paar was filmed using the Insta360 Pro camera together with motion capture technology to create a music video split between a physical world and a virtual world. We integrated this real-world footage with a virtual model of the TAT, specially modeled and designed for the performance. The virtual model was developed from a photorealistic 3D rendering of the TAT, created through photogrammetry using DSLR cameras and drone footage. We composited these images into a hyper-realistic model of the TAT and spliced it with our live film, making for 180° of virtual world and 180° of physical world viewable in VR. Paar premiered June 9th at the Tieranatomisches Theater Berlin.
Paar at the TAT
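The actual compositing was done in dedicated video and 3D tooling, but as a rough illustration of the 180°/180° splice, the sketch below joins the live half and the rendered half of an equirectangular frame into a single 360 image. The filenames and the use of imageio are placeholders, not the real pipeline.

    # Illustrative sketch only: combine the physical and virtual halves of an
    # equirectangular 360 frame. One half of the longitude range comes from
    # the live footage, the other from the rendered virtual model of the TAT.
    import numpy as np
    import imageio.v3 as iio

    live = iio.imread("live_equirect.png")        # hypothetical 360 live frame
    virtual = iio.imread("virtual_equirect.png")  # hypothetical rendered frame

    assert live.shape == virtual.shape
    h, w = live.shape[:2]

    # Left half: physical world. Right half: virtual world.
    composite = np.concatenate([live[:, : w // 2], virtual[:, w // 2 :]], axis=1)
    iio.imwrite("paar_composite.png", composite)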

GOLEM / 2018
An immersive dance project radicalizing the physical body in the virtual world, GOLEM was a joint-institution, international, and interdisciplinary research project that explored the dancing body through motion capture, virtual reality, digital avatars, and the physical stage to weave a narrative between man and machine. The resulting work sought to question the human body’s engagement, sensorial response, and viewership in the fields of virtual design and dance.

In December 2018, a cross-disciplinary team came together from gamelab.berlin (of Humboldt-Universität zu Berlin), the Virtual Design Lab of Kaiserslautern Hochschule, Arushi Jain of GHUNGHRU, and Carly Lave to envision new methodologies for using motion capture and virtual reality in a performance space. The feasibility and creation of the project depended entirely on this novel art-tech collaboration. The performance featured the OptiTrack motion-capture system alongside VR head-mounted displays (HMDs) worn onstage. Through live projections, the dancers interacted with a motion-captured representation of their body in the virtual environment, engineered in Unity by the Kaiserslautern team.
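The Unity integration itself isn't reproduced here, but as a rough Python sketch of the underlying idea, streamed joint rotations from a motion-capture system can drive a virtual skeleton through forward kinematics. The bone chain, lengths, and angles below are hypothetical, purely to show how one captured frame becomes an avatar pose.

    # Rough sketch (not the Unity/OptiTrack code): apply one frame of streamed
    # joint rotations to a toy 2-D bone chain via forward kinematics.
    import numpy as np

    # Hypothetical chain: (bone name, length in metres)
    BONES = [("hip", 0.5), ("spine", 0.4), ("head", 0.2)]

    def pose_positions(joint_angles):
        """Given one rotation (radians) per bone for a captured frame,
        return the world-space position of each joint."""
        positions = [np.zeros(2)]   # root joint at the origin
        heading = 0.0
        for (_, length), angle in zip(BONES, joint_angles):
            heading += angle        # child bones inherit their parent's rotation
            direction = np.array([np.sin(heading), np.cos(heading)])
            positions.append(positions[-1] + length * direction)
        return positions

    # One streamed frame of joint angles -> avatar joint positions
    print(pose_positions([0.1, -0.2, 0.05]))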


Instrument Design



Extending Interfaces for Disability
In 2019, I was severely injured while carrying my synthesizer on tour in India. Since then, I’ve had limited movement in my right hand, which has made programming, composing, and performing quite difficult. No longer able to sequence my modular synthesizer from my keyboard, I had to find more creative approaches that didn’t require the use of my hands.

This led to the creation of ektara, a tool built with Python and CREPE (a Convolutional Representation for Pitch Estimation) that lets me sing a melody into a microphone and directly sequence my synthesizer. Using the pitch detection algorithm in offline mode, the voice is converted into MIDI notes plus pitch bend data, enabling microtonality on synthesizers. With Ableton and the Expert Sleepers FH2 MIDI-to-CV converter, the generated MIDI data can sequence my modular synth via control voltage through any DC-coupled interface. The result is a new paradigm in the modular world, where a voltage-based synthesizer can now mimic a human voice in microtones, creating the theremin-like effect of a singing synthesizer. In addition to being a great tool for people with disabilities, it enables musicians to bring microtonality to electronic music.
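To make the conversion concrete, here is a minimal sketch (not the ektara source itself) of the offline voice-to-MIDI step: CREPE estimates a pitch track from a recording, and each frame is expressed as the nearest MIDI note plus a pitch bend offset so the microtones survive. The filename, confidence threshold, and ±2-semitone bend range are illustrative assumptions.

    # Minimal sketch of the voice-to-MIDI idea behind ektara.
    import crepe
    import numpy as np
    from scipy.io import wavfile

    sr, audio = wavfile.read("melody.wav")   # hypothetical recorded vocal
    time, frequency, confidence, _ = crepe.predict(audio, sr, viterbi=True)

    PITCH_BEND_RANGE = 2.0   # assumed +/- 2 semitone bend range on the synth

    events = []
    for t, f, c in zip(time, frequency, confidence):
        if c < 0.8:          # skip unvoiced / low-confidence frames
            continue
        midi_float = 69 + 12 * np.log2(f / 440.0)   # exact (microtonal) pitch
        note = int(round(midi_float))               # nearest equal-tempered note
        # remaining fraction of a semitone encoded as 14-bit pitch bend
        bend = int(8192 + (midi_float - note) / PITCH_BEND_RANGE * 8192)
        events.append((t, note, int(np.clip(bend, 0, 16383))))

The resulting note and pitch bend events can then be played back through Ableton into the MIDI-to-CV converter.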



Communal Virtual Aural Interfaces
From 2018 to 2021 I worked as an infrastructure engineer at Reddit, cycling between the Ads Engineering, Growth, and Network Infrastructure teams. Reddit is one of the top five most visited websites in the US, with a mission to bring community and belonging to the whole world. What this means in practice is that a lot of users meet and interact with countless anonymous pieces of content in a virtual space.

I’ve always found the question “What is the ambient noise of an online chatroom?” fascinating. So while I was working at Reddit, I ran a series of art experiments to better understand how audio can expand our understanding of virtual communities.

Experiment #1 
What is the ambient noise of online spaces? 
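Purely as an illustration of this question (not the original experiment code), one way to give an online space an ambient sound is to map its live activity, say events per second in a community, onto the loudness and brightness of a drone. The activity values and mapping below are hypothetical.

    # Illustrative sketch only: sonify community activity as an ambient drone.
    import numpy as np
    from scipy.io import wavfile

    SR = 44100
    activity = [2, 5, 9, 14, 8, 3, 1]   # hypothetical events/sec, one per second

    samples = []
    for rate in activity:
        t = np.arange(SR) / SR
        loudness = min(rate / 20.0, 1.0)      # busier room -> louder drone
        brightness = 110 * (1 + rate / 5.0)   # busier room -> higher partial
        tone = 0.5 * loudness * (np.sin(2 * np.pi * 110 * t) +
                                 0.5 * np.sin(2 * np.pi * brightness * t))
        samples.append(tone)

    wavfile.write("ambient_room.wav", SR, np.concatenate(samples).astype(np.float32))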

Experiment #2 
Can audio be used to navigate online spaces?




Programming 


I’ve built with a lot of different tools; below are some of the key languages and platforms I feel comfortable with.

Python, Scala, Go, Java, TypeScript, JavaScript
Spark, Kubernetes, Docker, AWS, Kafka
Redis, Elasticsearch, PostgreSQL, Vault, gRPC, Thrift