We use audio in a web browser all the time, listening to songs on YouTube or reacting to a sound in a Flash game. But upcoming (and to some extent already available) techniques promise more: interactive, dynamic audio with signal processing and even access to the individual sample level. The last feature in particular could allow for more than traditional audio programming environments such as Max/MSP or SuperCollider offer: in principle, you could implement any audio application (or at least any DSP algorithm) right in the browser. Or maybe not?
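To make "access to the sample level" concrete, here is a minimal sketch: computing raw PCM samples for a 440 Hz sine tone in plain JavaScript. The commented-out lines show how such a buffer could be played back via the Web Audio API (the API calls are standard, but whether they are available depends on the browser, which is exactly the kind of question this series looks at).

```javascript
// Compute half a second of a 440 Hz sine wave, sample by sample.
const sampleRate = 44100;   // samples per second (CD quality)
const durationSec = 0.5;    // half a second of audio
const frequency = 440;      // concert pitch A

const samples = new Float32Array(sampleRate * durationSec);
for (let i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(2 * Math.PI * frequency * (i / sampleRate));
}

// In a Web Audio capable browser, the samples could then be played back:
// const ctx = new AudioContext();
// const buffer = ctx.createBuffer(1, samples.length, sampleRate);
// buffer.copyToChannel(samples, 0);
// const src = ctx.createBufferSource();
// src.buffer = buffer;
// src.connect(ctx.destination);
// src.start();
```

Having the samples as a plain `Float32Array` means any DSP algorithm (filters, distortion, granular synthesis, …) is just a loop over numbers.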
What are the present (and future) difficulties in programming audio on the web? What is already possible, and what still needs further web standards and API specifications? Does it all come down to Flash or other plugins, or can we leave Flash behind some day?
In this multi-part blog series I want to discuss the past, present and future of audio on the web, highlighting techniques already available for sound synthesis and sound processing in a common browser and showing how to make the web go “beep!”. As I’m quite new to this topic, I’m open to any additions and comments.
Here are all posts published so far: