A number of proposed features with overlapping spec and implementation requirements are popping up:
- Advanced audio APIs that allow complex mixing and effects processing (e.g. Mozilla's AudioData, Chrome's AudioNode)
- Synchronization of multiple HTML media elements (e.g. proposed HTML MediaController)
- Capture and recording of local audio and video input (e.g. proposed HTML Streams)
- Peer-to-peer streaming of audio and video streams (e.g. proposed WebRTC and HTML Streams)
At the API level, I want to integrate effects processing with capturing, recording and streaming by building an effects API for HTML Streams. There is a proposal on the Mozilla wiki, but it still needs work.
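To make the idea concrete, here is a minimal sketch of the kind of per-sample processing such an effects API would let a page supply. This is purely illustrative; the function name `applyGain` and its signature are my own invention, not part of the wiki proposal or any spec. The point is just that an "effect" is, at bottom, a function from blocks of PCM samples to blocks of PCM samples:

```javascript
// Illustrative only: a trivial gain stage over a block of PCM samples,
// the simplest possible example of the per-sample work an effects API
// for Streams would need to schedule.
function applyGain(samples, gain) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * gain;
  }
  return out;
}

// Example: halve the volume of a short test buffer.
const input = Float32Array.from([0.5, -0.5, 1.0, -1.0]);
const output = applyGain(input, 0.5);
// output is [0.25, -0.25, 0.5, -0.5]
```

The hard part, of course, is not the sample math but running such functions inside the stream graph with the synchronization and latency guarantees described below.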
At the implementation level, all these features should be built on a common foundation to ensure that current and future integration requirements can be met; trying to bridge separate implementation models while maintaining global synchronization, high throughput and low latency seems very difficult. My current project is to build that foundation. It's challenging but fun. I've made it my top priority partly because it could block work on the above features, and also because there is an urgent need for practical comparisons of a Stream-based processing API against less-integrated APIs.
I've found that to focus I really need to disconnect from the Internet. So I'm getting into the habit of working online in the first half of the day and then going offline around 2pm or 3pm (NZST) for the rest of the day. It's working. My apologies to anyone trying to find me online during that time.
My current plan is to build the foundation and just enough of the DOM API to support Worker-based audio synthesis and capture of stream graph output; that will let me write tests for the framework. After that we'll hook up actual audio and video output, make the HTML media element implementation use the framework, and flesh out the new features.
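As a rough sketch of what "Worker-based audio synthesis" means in practice: the Worker is repeatedly asked to fill blocks of samples, and successive blocks must be phase-continuous. The function below is hypothetical (the name, signature, and block size are mine, not from any proposed API); the real API shape is exactly what the planned tests are meant to pin down.

```javascript
// Hypothetical sketch: the kind of callback a Worker would run to
// synthesize audio into the stream graph. Fills one block of samples
// with a sine tone; startSample is threaded through so consecutive
// blocks stay phase-continuous.
function synthesizeSine(frequency, sampleRate, numSamples, startSample) {
  const buf = new Float32Array(numSamples);
  for (let i = 0; i < numSamples; i++) {
    buf[i] = Math.sin(2 * Math.PI * frequency * (startSample + i) / sampleRate);
  }
  return buf;
}

// The engine would request successive blocks, e.g.:
const block0 = synthesizeSine(440, 44100, 128, 0);
const block1 = synthesizeSine(440, 44100, 128, 128);
```

Capturing the stream graph's output and comparing it against directly computed buffers like these is what would let the framework be tested before any real audio hardware is hooked up.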