[flocking] Questions

Colin Clark colin at colinclark.org
Sat Jun 20 17:16:46 EDT 2015


Hi Steven,

Glad to hear that you're making progress. Apologies for the slow response, I was away on vacation with sketchy internet access. Responses inline...

> On Jun 9, 2015, at 1:22 PM, Steven Dale <lifeinchords at gmail.com> wrote:
> 	• I was indeed using _enviro.play on every sound played. This caused lots of perf trouble with memory buildup over time. I've since upgraded to your latest lib, and our code now only use one .start call, and assigns a synth def to play whenever needed. This is still not great, but better.. 

I'm glad you were able to sort out the issue!

> 	• Individual playBuffers don't work on their first use. On the second action, they do. Assuming because WAV's are not loaded, and once the app requests it for playing, they download and are available for play the 2nd time. Is this our app, do we need to somehow "invisibly" trigger or lazy load our sounds, or is something to do with Flocking?

I'm not sure I understand exactly what behaviour you're seeing. When you say they "don't work on their first use," what specifically do you see happen, and how?

How and when are you a) instantiating the synth, b) playing it back and c) stopping it? If it looks like an issue with Flocking, can you create a simplified test case that illustrates the issue?

Buffers load asynchronously in Flocking. If you use a buffer definition object as the "buffer" input to your flock.ugen.playBuffer unit generator, and you start it playing immediately, it will output silence until the buffer is available, and then it will play the buffer. Is it possible that you're somehow stopping the synth again before the buffer has finished loading?

If you need to receive a callback when a buffer has finished loading, you should load it explicitly using a flock.bufferLoader object. I've updated the documentation with more information and an example of how to use it. It's also quite helpful for pre-loading all the assets on a page if you've got long sound files, for example.

https://github.com/colinbdclark/Flocking/blob/master/docs/buffers/about-buffers.md#manually-loading-buffers
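To make that concrete, here's a rough sketch of the pre-loading idea. The file URLs and ids are placeholders for your own assets, and the exact bufferLoader options are from memory, so double-check them against the docs linked above:

```javascript
// Sketch: pre-loading buffers up front so playback can start immediately.
// "audio/open.wav", "audio/close.wav" and their ids are placeholder assets.
var bufferDefs = [
    {
        id: "open",
        url: "audio/open.wav"
    },
    {
        id: "close",
        url: "audio/close.wav"
    }
];

// Guarded so this sketch can also run outside the browser;
// in your app you'd just call flock.bufferLoader directly.
if (typeof flock !== "undefined") {
    flock.bufferLoader({
        bufferDefs: bufferDefs,
        listeners: {
            afterBuffersLoaded: function () {
                // All buffers are now registered with the environment,
                // so synths that refer to them by id won't start out silent.
            }
        }
    });
}
```

Once the buffers are registered, your playBuffer synths can refer to them by id alone rather than by a full buffer definition object.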

> 	• We're still getting lots of glitches and hiccups with playBuffer use, in some places worse than others. For example:
> 		• try hovering over an image bit + then press "space" key to open/preview it. This plays open sound. The Escape to go back/close it, which plays close sound. Both of these, minus the ending pops, are pretty clean, and animation is smooth. 
> 		• Then try drag+dropping bits, 5 or 10 times.. that seems to be struggling. I'm curious if this is because  
> 			• Flocking playback conflicts with Meteor JS, which does lots of event handling/binding magic?
> 			• We use the Greensock JS Animation lib, instantiated/used here for drag+drop, and elsewhere for other animations like preview. It's based on RequestAnimationFrame I think.. is drag+drop stuttering because sound+anim are working together + fighting for resources? If so, why would preview be ok, and drag no?
> 			• Does length of the sample have anything to do with it? Seems like the short tiny sounds work fine, but longer ones stumble

Flocking is implemented entirely using the Web Audio API's ScriptProcessorNode. Unfortunately in current implementations, this means that Flocking has to run on the main browser thread, where it is susceptible to being interrupted by rendering, scrolling, and other user events. The latest version of the spec addresses this issue, but browsers haven't yet adopted it. I'll be adding further support for Web Audio native nodes in Flocking 0.4.0, which are also less susceptible to glitching.

In the meantime, use a larger buffer size (as it sounds like you've already discovered) to reduce the likelihood of glitching. The downside, of course, is latency: larger buffer sizes mean a longer interval between the time when a sound is triggered and when it actually comes out of the user's speakers.

> 	• what does blockSize do on init? what should/could it be set at? Is this something we can use to decrease glitching?

Flocking is a block-based architecture; internally, it generates samples in "blocks" for efficiency. The default block size for Flocking is 64 samples. Increasing the block size will cause greater delays when changing input values on unit generators or using control-rate unit generators to modulate parameters. In practice, I've found that larger buffer sizes have a much more noticeable impact on performance, but you may find that also setting the block size to 128 or 256 is acceptable if you're really encountering problems.
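For reference, both settings are specified when you initialize the shared environment. This is a sketch from memory, so treat the option names as assumptions and verify them against the current documentation:

```javascript
// Sketch: trading latency for glitch-resistance at init time.
// Option names (bufferSize, blockSize) should be verified against the docs.
var audioSettings = {
    bufferSize: 4096, // larger => fewer glitches, but more latency
    blockSize: 128    // default is 64; larger delays input value changes
};

// Guarded so this sketch can run outside the browser.
if (typeof flock !== "undefined") {
    var enviro = flock.init(audioSettings);
    enviro.start();
}
```

Note that the buffer size should be a whole multiple of the block size, since the environment fills each output buffer block by block.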

> 	• If you happen to have a Wacom tablet, press s key to create a sketch bit- you can then draw with a stylus. Mouse works too but you wont get pressure data. I've noticed there's some mouse binding happening in Flocking here. I'm not sure what it's doing - do you see any conflicts here in terms of overlapping use of mouse handling, that might spill into perfor issues while person sketches? sketching is handled by the Ploma JS lib, and we bind our canvas mouse handlers here. A demo of Ploma outside of our app, maybe to compare.

Flocking, by default, does not bind any mouse handlers. There are a handful of mouse-related unit generators that are provided for convenience, but it doesn't look like your application is using them. There is no risk of collision.

> 	• You mentioned in last email that a better strategy is to start each synth, play it, then use a gate on an env to trigger. Do you have an example of how we might do this?  I think I'm missing something.. are you saying to do this we should load our app, play all of the sounds, each with the flock.synth() function, then later when people take actions, we turn up the gate? I expect we'll have 30-50 synth defs this way ... 

Some of the examples in the Playground may be useful:

http://flockingjs.org/demos/interactive/html/playground.html#playBufferTrigger
http://flockingjs.org/demos/interactive/html/playground.html#adsr_envGen

I don't necessarily recommend running 50 synths simultaneously if you don't have to, though Flocking can probably handle it. You can, however, swap buffers on the fly, changing which sound is currently being played by your synth (while the gate is closed). I don't understand your application's architecture or requirements well enough to make a specific recommendation, but it's good to know what options are available to you. In general, unit generators are very lightweight, optimized objects, whereas Synths are a little more expensive to instantiate. If you can avoid instantiating new objects while audio is playing, that's always good. :)
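To make the gating idea concrete, here's the general shape I had in mind. It's a sketch rather than something tested against your app: the file URLs and ids are placeholders, and the asr input names should be double-checked against the Playground examples above:

```javascript
// Sketch: one long-lived synth whose gate opens and closes playback,
// and whose buffer can be swapped while the gate is closed.
// Asset URLs/ids are placeholders.
var synthDef = {
    id: "player",
    ugen: "flock.ugen.playBuffer",
    buffer: {
        id: "open",
        url: "audio/open.wav"
    },
    mul: {
        id: "env",
        ugen: "flock.ugen.asr",
        attack: 0.01,
        sustain: 1.0,
        release: 0.1,
        gate: 0.0 // closed by default, so the synth is silent
    }
};

// Guarded so this sketch can run outside the browser.
if (typeof flock !== "undefined") {
    var synth = flock.synth({ synthDef: synthDef });

    // When the user takes an action, open the gate:
    synth.set("env.gate", 1.0);

    // When the action ends, close it again (triggers the release):
    synth.set("env.gate", 0.0);

    // While the gate is closed, point the synth at a different sound:
    synth.set("player.buffer", { id: "close", url: "audio/close.wav" });
}
```

The point is that the synth is created once, up front; user actions only set input values, which is much cheaper than instantiating a new synth per sound.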

I hope this helps,

Colin
