since much of my online following is through Instagram, I decided to learn Spark AR to bring my brand of audioreactive geometry into the burgeoning world of augmented reality. the basic concept was little sprites that dance and "bounce" around with the user. I wanted to test the limits of the medium: how many different animations could I get away with in one filter while still retaining focus and cohesion?
the answer wound up being four. I modeled the objects in Cinema 4D to start: a fractured pyramid, an abstract "ghost," a soft blob, and a jagged, fractured blob. next, I thought of songs that would fit these shapes, make for good 5–10 second loops, and give the user a variety of moods. after some audio editing in Audacity, the tracks sampled were, in order of appearance:
next, I imported the objects and audio loops into Spark, where I animated each loop and used face and plane trackers to bring the assets into the user's world space for interaction. using the node-based patch editor, I wired it so that raising the eyebrows would begin the first animation; opening the mouth, the second; tapping on the second visual would trigger the third; and tapping anywhere on the other side of the camera would trigger the fourth.
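for the curious, here's roughly how that trigger wiring would look if expressed in Spark's JavaScript scripting API instead of the patch editor. this is a sketch, not my actual patch graph: the scene object names are hypothetical, it assumes the Touch Gestures capability is enabled, and module details vary a bit between Spark AR versions.

```javascript
// sketch of the patch-editor logic in script form; object names are hypothetical
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');
const FaceGestures = require('FaceGestures');
const TouchGestures = require('TouchGestures');
const CameraInfo = require('CameraInfo');

(async function () {
  const face = FaceTracking.face(0);

  // the four assets, assumed to exist in the scene under these names
  const [pyramid, ghost, blob, shard] = await Promise.all([
    Scene.root.findFirst('fracturedPyramid'),
    Scene.root.findFirst('ghost'),
    Scene.root.findFirst('softBlob'),
    Scene.root.findFirst('jaggedBlob'),
  ]);

  // 1: raised eyebrows reveal the first animation
  FaceGestures.hasEyebrowsRaised(face).onOn().subscribe(() => {
    pyramid.hidden = false;
  });

  // 2: an open mouth reveals the second
  FaceGestures.hasMouthOpen(face).onOn().subscribe(() => {
    ghost.hidden = false;
  });

  // 3: tapping the second visual triggers the third
  TouchGestures.onTap(ghost).subscribe(() => {
    blob.hidden = false;
  });

  // 4: a tap anywhere while the camera is flipped triggers the fourth
  TouchGestures.onTap().subscribe(() => {
    if (CameraInfo.captureDevicePosition.pinLastValue() === 'BACK') {
      shard.hidden = false;
    }
  });
})();
```

I kept everything in the patch editor in practice, but the script view makes the event flow easier to read at a glance.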
this project was difficult: while adept at imperative programming in JavaScript, I had never used reactive programming, the paradigm Spark is built on. this meant reconceptualizing the foundation of computational thinking I'd been trained in. Spark also isn't designed for the number of events I had planned, so I had to develop some creative workarounds. I had fun with this, and I'm interested in producing more augmented reality work going forward.
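to give a feel for the shift: in imperative JavaScript you'd read a value each frame and update the object yourself, whereas in Spark you bind a signal once and the runtime keeps everything in sync. a minimal sketch of that binding, again with a hypothetical object name:

```javascript
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');

(async function () {
  const blob = await Scene.root.findFirst('softBlob'); // hypothetical name

  // mouth.openness is a ScalarSignal; the arithmetic below doesn't compute
  // a value once, it builds a new signal that stays live for the session
  const openness = FaceTracking.face(0).mouth.openness;
  const scale = openness.mul(0.5).add(1); // maps ~0..1 openness to 1..1.5

  // bind once; no render loop, no polling, no manual updates
  blob.transform.scaleX = scale;
  blob.transform.scaleY = scale;
  blob.transform.scaleZ = scale;
})();
```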