The problem: Display four computer screens on one projector.
Hardware HD video mixers with VGA inputs are too expensive, and employing a minion to switch between VGA channels manually is impractical. The obvious solution for displaying multiple screens simultaneously or in sequence would be video mixing software, but video capture cards to get the video signal into a computer come at a price as well. We all (still) have ethernet though, so VNC to the rescue.
Wait a minute … why use ethernet when you could use WiFi? In our experience, running four VNC streams simultaneously over an off-the-shelf WiFi router resulted in very laggy display updates. Even worse: VNC jammed our WiFi, which led to a lot of dropped OSC packets, which in turn made BenoitLib very unreliable. This is the most important reason (next to overall stability and reliability) why we consistently use ethernet in our performances. We had too many issues with WiFi, and plugging in a cable was the easiest solution to this problem.
We did some experiments with the built-in OS X screen sharing server, but we ended up using the Vine VNC server as it turned out to be more reliable.
Ideally, we would start four VNC servers on our performing laptops and four VNC clients on the projecting computer, and arrange the windows next to each other. We did this in our first performance at ZKM in 2010: „Ctrl-N“. (Also worth noting: Matthias still used Max/MSP in that performance!)
While this would work very well with a 4K projector, and can also work with a 1080p one, many projectors you face as a laptop band are still stuck with 1024 or 1280 pixels of horizontal resolution, leaving just a little over 500 pixels to present each screen. The other problem is that the coding fonts have to be set rather large to still be legible for the audience. Most of us don’t like coding with an overly large font: lines wrap too quickly, more scrolling is involved, and we lose track of our code, as not as much of it is visible on screen.
Some VNC clients, like Chicken of the VNC, are also unable to rescale VNC views (at least not in windowed mode), which made them an infeasible choice for us.
Showing the screens successively in full size made the code more legible and we didn’t have to use enormously large fonts. We first used this approach in our 2011 ZKM performance: „Bal des Ardents“.
In this performance we used an AppleScript and the OS X screen sharing client to show a different screen every ten seconds. We hid the OS X menu bar by changing the LSUIPresentationMode in the app bundle’s Info.plist (which doesn’t seem to work anymore on OS X Yosemite).
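For reference, the relevant Info.plist entry looks roughly like this (if memory serves, the value 4 corresponds to the Carbon constant kUIModeAllSuppressed, which hides the menu bar and Dock; 3, kUIModeAllHidden, removes them entirely):

```xml
<key>LSUIPresentationMode</key>
<integer>4</integer>
```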
We kind of liked that the hostname of the VNC stream was displayed in the title bar of the VNC client. This is also when we started to use the ____cake hostnames for all Mandelbrots laptops, to have some uniformity. So far we have had cheesecake, fishcake, beefcake, fruitcake and cupcake – with boomcake, hardcake, mooncake and blowcake still in use.
While this approach worked very well, it was limited to instant switching between screens. I was thinking about spicing up the presentation and wanted to try blending in additional visuals, crossfading between screens or arranging segments of the screens freely. This could be done with standard VJ software and/or some glue software acting as a VNC client and a Syphon server. (I have actually seen software like this used in the wild, but could not find it on the web …) But of course, using ready-made software is not really my style, so I added VNC capabilities to my VJ/creative coding hybrid/zombie NeoJuice – a Lua-based live coding environment built on top of OpenFrameworks.
Using LibVNC for this was actually quite easy, especially as I didn’t need a GUI: I could specify all login information in a Lua script. A standard double-buffering technique is used to upload the pixel data coming from LibVNC to the GPU. Inside NeoJuice it can then be used like any other texture, e.g. in a fragment shader. After sorting out some threading issues it was good to go and is now working very reliably. Initially we ran the software on a separate machine, but nowadays I run it on the extended desktop of the computer I also perform on, as it doesn’t use many resources. We also thought it could be fun to project our pixelized faces (grabbed via web cams, transmitted over a separate VNC stream) behind our code. We first used this setup at „Lass uns blau machen“ and afterwards at the next_generation 4.0 Festival in our infamous „Danaë“ performance.
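The double-buffering handoff can be sketched like this – a minimal, self-contained sketch, assuming the LibVNC callback runs on its own thread while the render thread uploads the latest frame; the class and method names are hypothetical, not NeoJuice’s actual ones:

```cpp
#include <cstdint>
#include <mutex>
#include <utility>
#include <vector>

// Minimal double buffer: the VNC thread writes into the back buffer,
// the render thread swaps and uploads the front buffer to the GPU.
class PixelDoubleBuffer {
public:
    explicit PixelDoubleBuffer(size_t bytes) : front_(bytes), back_(bytes) {}

    // Called from the LibVNC framebuffer-update callback (VNC thread).
    void write(const uint8_t* data, size_t len) {
        std::lock_guard<std::mutex> lock(mutex_);
        back_.assign(data, data + len);
        dirty_ = true;
    }

    // Called once per frame from the render thread. Returns the buffer
    // to upload (e.g. via glTexSubImage2D), or nullptr if nothing changed.
    const std::vector<uint8_t>* latest() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!dirty_) return nullptr;
        std::swap(front_, back_);
        dirty_ = false;
        return &front_;
    }

private:
    std::mutex mutex_;
    std::vector<uint8_t> front_, back_;
    bool dirty_ = false;
};
```

The point of the swap is that the expensive GPU upload never happens while holding data the VNC thread might be writing to, which is the kind of threading issue mentioned above.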
Because it ended up looking kind of goofy, we dropped this idea – at the Laptops meets Musicians Festival in Venice, two months later, we tried displaying every screen at once, blurred and slightly faded in the background, successively bringing another screen to the foreground.
The „standard“ Mandelbrots visuals we project nowadays were first used at the Network Music Festival 2012 in Birmingham. They consist of a desaturated IBNIZ layer which is warped and blended with slowly and randomly moving blobs. On top of that we have the VNC layer, still just switching screens at a fixed interval.
In another iteration we started running the screen layer through an extra shader which discards every pitch-black (0, 0, 0) fragment, instead of additively blending the VNC layer on top of the IBNIZ visuals as before. As the VNC textures have to be scaled down most of the time, the artifacts of the linear filtering usually create a subtle dark outline around the code, which is too bright to be discarded. Overall this creates a higher contrast between the background visuals and the code – a welcome side effect for free!
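A minimal sketch of such a discard shader in legacy GLSL (the uniform and varying names are made up for illustration, not NeoJuice’s actual ones):

```glsl
uniform sampler2D vncTexture;
varying vec2 texCoord;

void main() {
    vec4 c = texture2D(vncTexture, texCoord);
    // Drop only pure black; the slightly brighter filtering artifacts
    // around the glyphs survive and form the dark outline on the code.
    if (c.rgb == vec3(0.0))
        discard;
    gl_FragColor = c;
}
```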
Our screen switching approach was rightfully criticized many times. While the code is legible, following the coding process gets very hard, as the screen keeps switching, leaving you with not enough time to read and understand the code. On the other hand, increasing the time interval can lead to unsatisfying situations as well, in which nothing interesting happens on the displayed screen for an extended period of time. Whenever we have two projectors at our disposal we project two screens simultaneously and increase the switching time, but this is still far from perfect. An interesting solution to this problem could be a live image director, overseeing all code screens simultaneously and switching screens when they see an interesting development, much like it is done in sports broadcasts.
A downside of using VNC (in contrast to „real“ video signals) is that fast-moving elements of the screen aren’t transmitted fast enough and appear choppy. A perfect example of this is the audio scope of SuperCollider. This can be seen in our telematic performance to Mexico, where NeoJuice is also used to composite our screens, the web cam image and other visual material.
In some performances we didn’t want to show our screens directly but tried to display our coding editors in a more stylized way. So instead of showing our screens via VNC, we use a technique I call document forwarding. I’ll explain how we do it and show an example of where we used it in the next part of this series.