Image: Still image from Me, Me, and My Computer
There is an inextricable link between the development of tools and the progression of art. Tools are made; artists wield them in new, creative ways, giving them meaning and making way for the next evolution of tools. You, Me, and Our Computers is inspired by two web-based tools: WebRTC and PoseNet. YMOC combines the two in an exploratory journey to discover new opportunities for creative interaction online.
WebRTC (Web Real-Time Communication) is an open-source set of protocols that provides real-time communication functionality to web browsers. Among its capabilities, WebRTC enables peer-to-peer communication, which allows individual users to directly share data, audio, and video without the need for an intermediary server.
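To give a rough sense of the browser-native API (this is an illustrative sketch, not code from YMOC): a peer connection produces an offer, that offer is relayed to the other browser through a signaling channel of your choosing, and from then on the data flows directly between the two peers.

```javascript
// Minimal sketch of the browser-native WebRTC API. The offer produced here
// must be delivered to the remote peer out of band (e.g. via a websocket
// server); once the handshake completes, data flows peer to peer.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel('pose');

channel.onopen = () => channel.send('hello, peer');
channel.onmessage = (event) => console.log('received:', event.data);

async function makeOffer() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // hand pc.localDescription to the remote peer via your signaling channel
}
makeOffer();
```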
PoseNet is a machine learning model, available through TensorFlow.js, that runs real-time pose estimation in the browser. Pose estimation is a computer vision technique that recognizes the individual parts of a human form in an image. PoseNet exposes these parts as individual “keypoints” (like the right eye or the left knee) for use in interaction. Because it works from a webcam image, its functionality is available to anyone with a webcam-enabled computer.
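For a sense of what the model returns, here is a hedged sketch using the TensorFlow.js posenet package (the exact call signature varies slightly between versions); `video` is assumed to be a playing webcam element:

```javascript
// Assumes the tfjs and posenet script tags are already loaded on the page
// and `video` is a <video> element showing the webcam stream.
async function estimate(video) {
  const net = await posenet.load();
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
  // pose.keypoints holds 17 named points, each with a confidence score
  for (const kp of pose.keypoints) {
    console.log(kp.part, kp.position.x, kp.position.y, kp.score);
  }
}
```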
PoseNet returns lightweight JSON data, which can be sent and received over WebRTC with relatively low bandwidth requirements. Sending PoseNet data over a peer connection allows two peers to share their physical locations relative to their webcams, along with their body positions. The data from two peers can be combined to let the users physically interact with each other in a shared web experience. For example, two users could pass a virtual ball back and forth, high-five one another, or just dance side by side on screen.
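A minimal sketch of that exchange using the simple-peer library; here `myPose` stands in for the latest local PoseNet result, and the signaling relay is omitted:

```javascript
// One browser acts as the initiator; 'signal' payloads from each side must
// be relayed to the other through some channel of your own (omitted here).
const peer = new SimplePeer({ initiator: location.hash === '#init' });

peer.on('signal', (data) => {
  // forward `data` to the other browser via your signaling channel;
  // when its signal arrives, call peer.signal(incomingData)
});

peer.on('connect', () => {
  // ship each new pose as lightweight JSON
  peer.send(JSON.stringify({ keypoints: myPose.keypoints }));
});

peer.on('data', (data) => {
  const remotePose = JSON.parse(data);
  // combine remotePose.keypoints with the local pose for drawing
});
```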
You, Me, and Our Computers is a series of websites exploring a range of interactions made possible by combining WebRTC with PoseNet and similar computer vision algorithms and machine learning models. We’ll call these interactions “social, embodied interaction over distance.” In these web experiences, participants’ images and joint positions are combined on digital canvases, producing visual and auditory feedback. Through these interactions, YMOC seeks to encourage explorations of virtual touch and co-presence over distance.
This is a new frontier of exploration on the web, and YMOC seeks to make it more accessible. During the SloMoCo Spring Microresidency, YMOC will rapidly generate, test, and document the series of experimental web experiences in collaboration with the SloMoCo community. All code will be made available under an open-source license, and all documentation will be publicly available.
Image: Still image from Me, Me, and My Computer
In the first weeks of the residency, we had the opportunity to submit work to a drive-through show, Immediations, at ASU DesignSpace. The call requested ambient audio-video immediations to be projected at approximately 120 feet by 10 feet in an unused parking garage turned exhibition hall.
I used this opportunity to begin pulling together the technological requirements of YMOC. I started by revisiting WebRTC and PoseNet code samples I had written for a course I taught last fall at NYU ITP. Among the samples was a project that combines PoseNet data from two people into one curved shape. It was a rough sketch that lacked smoothing and video capability. For Immediations, I continued developing the sketch to create Me, Me, and My Computer.
MMMC starts with a p5.js sketch that shows one person (me!) mirrored on each half of the HTML5 canvas. Using PoseNet, the sketch tracks the keypoints of me and my mirror image and displays them as two individual curved shapes. When my mirrored image and I touch, the PoseNet data is combined to show one shape from the combined figures. I recorded a two-and-a-half-minute video of the canvas from this sketch and staggered it seven times at one-second intervals across the 6912 x 540 projection area.
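The mirroring itself reduces to reflecting the same keypoints across the canvas midline. Here is a simplified p5.js sketch of the idea, where `pose` and `ordered()` are stand-ins for the project’s actual state and outline-ordering code:

```javascript
// Draw one figure as a closed curve through its keypoints.
function drawFigure(keypoints) {
  beginShape();
  for (const kp of keypoints) {
    curveVertex(kp.position.x, kp.position.y);
  }
  endShape(CLOSE);
}

function draw() {
  background(0);
  const kps = ordered(pose.keypoints); // keypoints sorted into outline order

  drawFigure(kps); // left half: the figure as the webcam sees it

  push(); // right half: the same points reflected across the midline
  translate(width, 0);
  scale(-1, 1);
  drawFigure(kps);
  pop();

  // In MMMC, when the two figures' keypoints come within a threshold
  // distance of one another, both sets are merged into a single closed curve.
}
```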
Image: Still image from Medium post on PoseNet smoothing.
This week I started a new repo for my SloMoCo project. It is based on my repository WebRTC Simple Peer Examples, and I have added the MMMC sketch under sketches. I have started work on an implementation of peer video sharing using Simple-Peer, as my original project only allows for data sharing. In addition, I researched smoothing for PoseNet, and after not finding any great sample code for it, I created a Medium post and two p5 code samples (smooth right hand | smooth body) explaining how to do simple smoothing on PoseNet using a frame-averaging technique.
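The gist of the frame-averaging approach, as a minimal sketch rather than the post’s exact code: buffer the last N positions of a keypoint and draw their mean instead of the raw value.

```javascript
const N = 8;        // frames to average: larger N is smoother but laggier
const history = []; // recent positions of one keypoint, e.g. the right wrist

function smooth(position) {
  history.push({ x: position.x, y: position.y });
  if (history.length > N) history.shift(); // drop the oldest frame
  const avg = { x: 0, y: 0 };
  for (const p of history) {
    avg.x += p.x / history.length;
    avg.y += p.y / history.length;
  }
  return avg; // draw this instead of the raw, jittery position
}
```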
Next week I will work on transitioning the MMMC sketch into a two-person sketch. I will user-test the sketch, then record and post documentation.