[flocking-users] Audio input with MediaStreamSourceNode
colin at colinclark.org
Mon Apr 7 09:52:13 EDT 2014
Thanks for testing it for me! I did some quick debugging and it looks like getUserMedia still isn’t supported by iOS 7. A quick check of caniuse confirms it:
Looks like we’ll just have to keep waiting for Apple to support it. I’m glad to hear it’s working well on Chrome, though!
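For anyone who wants to degrade gracefully rather than wait, the support check can be done in code as well as on caniuse. Here's a minimal sketch of prefix-aware getUserMedia detection (circa 2014 vendor prefixes); the helper name is mine, and it takes a navigator-like object so it can be exercised outside a browser:

```javascript
// Look up getUserMedia across the vendor-prefixed names browsers used in 2014.
// Returns the implementation function, or null when the browser (e.g. iOS 7
// Safari) doesn't support it at all.
function getUserMediaImpl(nav) {
    return nav.getUserMedia ||
        nav.webkitGetUserMedia ||
        nav.mozGetUserMedia ||
        nav.msGetUserMedia ||
        null;
}

// In a real page:
// var gum = getUserMediaImpl(navigator);
// if (!gum) {
//     console.log("Audio input isn't supported in this browser.");
// }
```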
On Apr 7, 2014, at 9:44 AM, Adam Tindale <adamrtindale at gmail.com> wrote:
> Hi Colin,
> Awesome work.
> iPad Mini iOS 7.1 Chrome NADA
> iPad Mini iOS 7.1 Safari NADA
> Nexus 4 Android 4.3 Chrome Works perfectly (even in the background)
> Thanks for the audio context work! This is really exciting. I've found that the webaudio compressor is a really nice thing to have with synthesis to make sure that the audio competes with the sound in other pages but also to not accidentally destroy someone's speakers.
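Adam's compressor point is easy to wire up with a DynamicsCompressorNode sitting between the synthesis output and the destination. This is an illustrative sketch, not Flocking code, and the parameter values are my own assumptions for a safety-limiter-style setting:

```javascript
// Route a source through a DynamicsCompressorNode before the speakers, so
// loud synthesis peaks are tamed rather than clipping (or worse). The
// threshold/ratio values below are illustrative, not Flocking defaults.
function connectThroughCompressor(context, sourceNode) {
    var compressor = context.createDynamicsCompressor();
    compressor.threshold.value = -24; // dB level above which compression kicks in
    compressor.ratio.value = 12;      // a heavy ratio acts like a limiter

    sourceNode.connect(compressor);
    compressor.connect(context.destination);
    return compressor;
}
```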
> On Sat, Apr 5, 2014 at 8:53 PM, Colin Clark <colin at colinclark.org> wrote:
> Hi all,
> Today I added support for the Web Audio API’s MediaStreamSourceNode, which provides audio input in any modern browser. This replaces the lame Flash solution we were using previously. Tests on my MacBook and Chromebook suggest it’s quite effective down to the minimum buffer size supported by the Web Audio API (256 samples). Here are a couple of simple demos:
> I haven’t yet had an opportunity to test on Android, iOS, or Firefox OS, so let me know how it looks.
> This also required some preliminary refactoring of the way nodes are managed in the web audio strategy, which should make simple use cases of inserting nodes before or after the Flocking ScriptProcessorNode easier. Here’s an updated example showing how nodes can be inserted after Flocking; no more monkey patching. Keep in mind that this API will definitely change once “islands” are implemented:
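The pattern described above, taking microphone input via a MediaStreamSourceNode and inserting a node after a ScriptProcessorNode, can be sketched with plain Web Audio calls. The helper below is my own illustration, not Flocking's API (which, as Colin notes, was expected to change once "islands" landed):

```javascript
// Wrap a microphone MediaStream in a MediaStreamSourceNode and route it
// through a ScriptProcessorNode, then through a gain node inserted *after*
// the processor and before the destination — no monkey patching needed.
function buildInputChain(context, stream) {
    var micSource = context.createMediaStreamSource(stream);

    // 256 samples is the minimum buffer size the Web Audio API supports.
    var processor = context.createScriptProcessor(256, 1, 1);

    // An extra node placed after the processor in the signal chain.
    var postGain = context.createGain();

    micSource.connect(processor);
    processor.connect(postGain);
    postGain.connect(context.destination);

    return { source: micSource, processor: processor, postGain: postGain };
}

// In a browser (vendor prefixes omitted for brevity):
// navigator.getUserMedia({ audio: true }, function (stream) {
//     buildInputChain(new AudioContext(), stream);
// }, function (err) { console.log("Audio input failed:", err); });
```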
More information about the flocking-users mailing list