[flocking-users] WebAudio Nodes
adamrtindale at gmail.com
Sat Mar 15 16:34:14 EDT 2014
Thanks for this. I'm excited to see the integration of existing web audio
nodes.
Having web audio nodes working only at audio rate makes sense.
I forked your example and played with the compressor node in Web Audio,
which is quite useful.
In this case the multiple synths are wrapped up into one Flocking node, so
latency isn't accrued, correct? It would be accrued if I registered more
Flocking nodes and put them in series, right?
Thanks for the help!
On Thu, Mar 13, 2014 at 12:08 PM, Colin Clark <colin at colinclark.org> wrote:
> Hi Adam,
> I looked into it more closely yesterday, and the issue is a bit more
> complicated than I expected. To monkey patch this from the outside, you
> also have to swap out the audio strategy's start and stop methods. Totally
> doable, but quite hacky and it's only a temporary solution. But at least
> it's possible, which wouldn't be the case if Flocking followed the
> traditional OO approach of hiding everything. Here's a working example:
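A rough sketch of this kind of patch follows (this is not Colin's linked example; the `audioStrategy` shape, the `jsNode` property, and the `start`/`stop` method names are assumptions taken from the code shown later in this thread, not Flocking's documented API):

```javascript
// Hypothetical sketch: swap out an audio strategy's start/stop methods
// from the outside so its ScriptProcessorNode routes through a gain node.
// The strategy object here is a stand-in for Flocking's real internals.
function patchStrategyWithGain(strategy, context) {
    var gainNode = context.createGain();
    gainNode.connect(context.destination);

    // Keep references to the original methods so the patch is reversible.
    var originalStart = strategy.start;
    var originalStop = strategy.stop;

    strategy.start = function () {
        originalStart.call(strategy);
        // Re-route: jsNode -> gainNode -> destination.
        strategy.jsNode.disconnect();
        strategy.jsNode.connect(gainNode);
    };

    strategy.stop = function () {
        strategy.jsNode.disconnect();
        originalStop.call(strategy);
    };

    return gainNode;
}
```

The key point, as Colin says, is that the patch has to wrap `start` and `stop` rather than just reconnecting nodes once, because the strategy re-establishes its own connections when it starts generating audio.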
> I spent some time contemplating a cleaner API for doing this, and came to
> the conclusion that the best way to do it is to go ahead and implement my
> "Web Audio Islands" idea in Flocking. It's going to take a few weeks to get
> this in place, but the plan is that you will be able to define native Web
> Audio nodes using Flocking's declarative form and wire them up as inputs to
> Flocking unit generators (and vice versa). There will be two restrictions,
> I think:
> 1. You likely won't be able to use Web Audio nodes to create non-audio
> rate signals, such as the sort of thing you'd do when using the declarative
> syntax for control-rate modulation.
> 2. You may experience compounding latency if you do a lot of interleaving
> of native nodes with Flocking ugens. This is a systemic issue with the
> WebAudio API until support for Web Workers is added to ScriptProcessorNode.
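A purely speculative sketch of what such a declarative "island" definition might look like, reusing Flocking's existing synthDef style; the `flock.webAudio.dynamicsCompressor` name is invented for illustration (the thread mentions experimenting with Web Audio's compressor node), and Flocking's eventual syntax may differ entirely:

```javascript
// Speculative: a native Web Audio node declared inline among Flocking
// unit generators, in Flocking's declarative synthDef form.
var synthDef = {
    ugen: "flock.ugen.out",
    sources: {
        // Hypothetical proxy for a native DynamicsCompressorNode "island".
        ugen: "flock.webAudio.dynamicsCompressor",
        threshold: -24,
        ratio: 12,
        source: {
            // An ordinary Flocking unit generator feeding the native node.
            ugen: "flock.ugen.sinOsc",
            freq: 440,
            mul: 0.25
        }
    }
};
```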
> The refactoring of Flocking itself should be reasonably straightforward. I
> will need to create a new "ugen evaluator" component that is responsible
> for organizing a set of unit generators and evaluating them. We've already
> got the flock.ugenNodeList grade, which defines all the logic that is
> required for managing unit generators and their evaluation order. I'll need
> to split the .gen() method out of Synth into a separate grade. At that
> point, we can directly bind a tree of Flocking unit generators and their
> ugenEvaluator (representing the "sea" around the Web Audio node islands) to
> a ScriptProcessorNode's onaudioprocess callback. We'll also need "proxy"
> unit generators that represent inputs from a native Web Audio node.
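A minimal sketch of the evaluator idea described above, with plain objects standing in for Flocking's ugens and grades (the names `UGenEvaluator` and `bindToScriptNode` are illustrative, not actual Flocking APIs):

```javascript
// Minimal sketch of a "ugen evaluator": it holds unit generators in
// evaluation order and evaluates each one per block.
function UGenEvaluator(ugens) {
    this.ugens = ugens; // assumed to already be in dependency order
}

UGenEvaluator.prototype.gen = function (numSamples) {
    for (var i = 0; i < this.ugens.length; i++) {
        this.ugens[i].gen(numSamples);
    }
    // By convention here, the last ugen's output is the block to write out.
    return this.ugens[this.ugens.length - 1].output;
};

// Binding the evaluator (the "sea" around the Web Audio islands) to a
// ScriptProcessorNode would then look roughly like this:
function bindToScriptNode(evaluator, scriptNode) {
    scriptNode.onaudioprocess = function (e) {
        var out = e.outputBuffer.getChannelData(0);
        var block = evaluator.gen(out.length);
        out.set(block);
    };
}
```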
> I'd like to get the upcoming Flocking ICMC paper draft finished first,
> along with finishing the demo of a new "graph view" in the Flocking
> Playground before I dive into this change. But I think it will probably
> help to address several weak points in Flocking's architecture along the
> way.
> I hope this helps,
> Colin
> On Mar 11, 2014, at 9:48 PM, Adam Tindale <adamrtindale at gmail.com> wrote:
> > Hi Colin,
> > Thanks for the skeleton. I'm having trouble getting it to behave
> properly. Regardless of what I do, the gain has no effect on the audio
> output.
> > My first guess was that the scriptnode was still attached to the
> destination, so I tried to attach the gainNode to the destination and then
> disconnect the scriptNode from connecting directly to the destination.
> > -----
> > var as = flock.enviro.shared.audioStrategy;
> > as.flockingJSNode = as.jsNode; // Better keep a reference around, just
> in case it could get garbage collected.
> > // Create the new gain node and set parameters on it.
> > var gainNode = as.context.createGainNode();
> > gainNode.gain.value = 0.00000001;
> > // Connect the Flocking jsNode up to it.
> > as.flockingJSNode.connect(gainNode);
> > gainNode.connect(as.context.destination);
> > as.flockingJSNode.disconnect(as.context.destination);
> > ------
> > This didn't work. It does launch without errors however.
> > Any advice is appreciated.
> > a
> > On Mon, Feb 24, 2014 at 11:26 AM, Colin Clark <colin at colinclark.org>
> wrote:
> > Hi again Adam,
> > On Feb 24, 2014, at 11:07 AM, Adam Tindale <adamrtindale at gmail.com>
> wrote:
> > > The future directions for mixing webaudio nodes with Flocking.js nodes
> sound incredible! I would love to see that integration.
> > Ok, I'll set aside some time to look into this soon. I've filed an
> issue about it here:
> > https://github.com/colinbdclark/Flocking/issues/71
> > > What I am specifically trying to do is use a pannerNode in WebAudio
> and use the 3D panning features with WebGL to have a full 3D audio/visual
> experience.
> > That sounds great. I am, as we speak, working on a panner unit generator
> for Flocking, but it unfortunately won't yet have any kind of 3D
> spatializing features. This is one of the areas where the Web Audio API
> really does excel, so your approach of mixing Flocking with native nodes
> makes a lot of sense here.
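For illustration, here is the standard equal-power pan law that a plain (non-3D) stereo panner unit generator computes per sample; Flocking's actual panner may use a different curve:

```javascript
// Standard equal-power stereo pan law. pos runs from -1 (hard left)
// to 1 (hard right); at center, both channels get 1/sqrt(2) (~ -3 dB)
// so perceived loudness stays constant across the pan range.
function equalPowerGains(pos) {
    var angle = (pos + 1) * Math.PI / 4; // map [-1, 1] -> [0, pi/2]
    return {
        left: Math.cos(angle),
        right: Math.sin(angle)
    };
}
```

A 3D panner like Web Audio's native PannerNode layers distance attenuation and HRTF or equal-power spatialization on top of this basic idea, which is why mixing Flocking with native nodes is attractive for the WebGL use case described here.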
> > > I'll submit some code to this thread as I get it working.
> > Looking forward to seeing it!
> > Colin