[flocking] Questions

Steven Dale lifeinchords at gmail.com
Tue Jun 9 13:22:12 EDT 2015

Hi Colin, (+ CC'ing our collabs who might be interested)

How's it going? We've since been focused on other features, and we're now
returning for another pass at sound in our app. Sorry for the long email;
this is a collection of observations and questions from the last month.

************    Some background:

   - You can play + listen on our publicly accessible sandbox
   <http://makeparallels.herokuapp.com/>. Please keep in mind that images
   are not currently deleted from the history, so only upload test image
   data. Everyone currently shares one canvas. It's not precious; feel free
   to play.

   - Most UI actions are currently tied to playBuffer ugens (samples). This
   isn't very exciting, but it's a first step to explore where and how sound
   fits in the app. The next step is to design it using dynamic synths.

   - We call each square on the canvas a "bit": image bits, text bits,
   sketch bits, etc.

**********    Observations / Questions:

   - I was indeed using _enviro.play on every sound played. This caused
   lots of performance trouble, with memory building up over time. I've
   since upgraded to your latest lib, and our code now makes only one .start
   call and assigns a synth def to play whenever needed. This is still not
   great, but it's better.

   - Individual playBuffers don't work on their first use; on the second
   action, they do. I'm assuming this is because the WAVs aren't loaded yet,
   and once the app requests one for playing, it downloads and is available
   the second time. Is this our app (do we need to somehow "invisibly"
   trigger or lazy-load our sounds), or is it something to do with Flocking?
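   If it's on our side, I wonder whether we could work around it by decoding
   samples ahead of time with plain Web Audio. This is a guess, not Flocking
   API; "ctx" would be whatever AudioContext is in use, and the path is made
   up:

```javascript
// Sketch of a preload step: fetch + decode a sample up front so the first
// trigger doesn't race the download. fetch() and decodeAudioData() are
// standard browser APIs; how to hand the decoded buffer to Flocking is the
// part I'm unsure about.
function preloadSample(ctx, url) {
    return fetch(url)
        .then(function (resp) { return resp.arrayBuffer(); })
        .then(function (bytes) { return ctx.decodeAudioData(bytes); });
}

// e.g. at app startup:
// preloadSample(audioCtx, "sounds/open.wav");
```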

   - We're still getting lots of glitches and hiccups with playBuffer use,
   in some places worse than others. For example:
      - Try hovering over an image bit and then pressing the "space" key to
      open/preview it, which plays the open sound. Then press Escape to go
      back/close it, which plays the close sound. Both of these, minus the
      ending pops, are pretty clean, and the animation is smooth.

   - Then try drag+dropping bits 5 or 10 times... that seems to be
      struggling. I'm curious if this is because:
         - Flocking playback conflicts with Meteor JS, which does lots of
         event handling/binding magic?

   - We use the Greensock JS animation lib
      <http://greensock.com/docs/#/HTML5/GSAP/Utils/Draggable/>,
      instantiated/used here for drag+drop, and elsewhere for other
      animations like preview. It's based on requestAnimationFrame, I
      think. Is drag+drop stuttering because sound and animation are
      working together and fighting for resources? If so, why would
      preview be OK, and drag not?
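   One thing we might try on our end, independent of Flocking: throttle the
   retriggering during drag so we don't hit the synth on every drag event.
   The helper below is plain JS; the names in the usage comment are
   hypothetical:

```javascript
// Wraps a trigger function so it fires at most once per minIntervalMs.
// The drag handler can then call the wrapped version on every event.
function makeThrottled(fn, minIntervalMs) {
    var last = -Infinity;
    return function () {
        var now = Date.now();
        if (now - last >= minIntervalMs) {
            last = now;
            return fn.apply(null, arguments);
        }
    };
}

// Hypothetical usage in a drag handler:
// var throttledDragSound = makeThrottled(playDragSound, 150);
// Draggable.create(el, { onDrag: throttledDragSound });
```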

   - Does the length of the sample have anything to do with it? It seems
      like the short, tiny sounds work fine, but longer ones stumble.

   - What does blockSize do on init? What should/could it be set to? Is
   this something we can use to decrease glitching?
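   For reference, here's how I imagine that knob would be set; the value is
   a placeholder, and I'm not sure it's the right thing to touch:

```javascript
// Sketch: passing audio settings to flock.init() once at startup.
// blockSize is the option named in the docs; 64 is just a guess here.
var enviro = flock.init({
    blockSize: 64
});
enviro.start();
```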

   - If you happen to have a Wacom tablet, press the s key to create a
   sketch bit; you can then draw with a stylus. A mouse works too, but you
   won't get pressure data. I've noticed there's some mouse binding
   happening in Flocking here, and I'm not sure what it's doing. Do you see
   any conflicts in terms of overlapping use of mouse handling that might
   spill into performance issues while a person sketches? Sketching is
   handled by the Ploma lib, and we bind our canvas mouse handlers here.
   A demo <https://evhan55.github.io/> of Ploma outside of our app, maybe
   to compare.

   - You mentioned in your last email that a better strategy is to start
   each synth, play it, then use a gate on an env to trigger. Do you have an
   example of how we might do this? I think I'm missing something... are you
   saying that we should load our app, play all of the sounds, each with the
   flock.synth() function, and then later, when people take actions, turn up
   the gate? I expect we'll have 30-50 synth defs this way...
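   To make sure I understand, is it something like this? A sketch only: the
   envelope-on-mult arrangement, the ids, and the file path are my guesses,
   not from your docs.

```javascript
// One environment for the whole app, created and started once at load.
var enviro = flock.init();
enviro.start();

// One long-lived synth per sound. The amplitude envelope sits on the
// source's "mult" input; its gate is what we'd open/close from the UI.
var openSound = flock.synth({
    synthDef: {
        ugen: "flock.ugen.playBuffer",
        buffer: {
            id: "open",
            url: "sounds/open.wav"   // hypothetical path
        },
        mult: {
            id: "env",
            ugen: "flock.ugen.asr",
            attack: 0.01,
            sustain: 1.0,
            release: 0.1,
            gate: 0.0
        }
    }
});

// Later, in a UI handler:
// openSound.set("env.gate", 1.0);  // open the gate: sound
// openSound.set("env.gate", 0.0);  // close it: silence
```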

   - Thanks for the book references, I've got some SC books on the way :)
   If you have any leads on how we might recreate this
   <https://www.youtube.com/watch?v=385CymvTecU> sound, please let me know.
OK... whew. I hope I didn't lose you in all of that.

Any thoughts greatly appreciated,

=: s


         Steven Dale   @lifeinchords <https://twitter.com/lifeinchords>
@makeparallels <http://www.twitter.com/makeparallels>


On Fri, May 1, 2015 at 11:55 PM, Steven Dale <lifeinchords at gmail.com> wrote:

> Colin!
> Thank you for the super detailed response. Very helpful - we have plenty
> of info to move forward with.
> - I'm quite sure the worker thing isn't a bug, as I was calling init on
> every SFX play. Gonna update our instance and have another go
> - The popping noise happens on first call, but I noticed it happens on the
> examples page too, so I think it's just a property of the Impulse sound
> demo, that it continues after the ramp? That was a stupid mistake on my
> part..
> - The info regarding events, reverb and leads for books are great - I'll
> get in contact after I dig in some more and have a chance to explore.
> Also will send over a link to our sandbox instance as soon as we get
> something in there.. Talk soon
> =: s
> On Thu, Apr 30, 2015 at 11:41 AM, Colin Clark <colin at colinclark.org>
> wrote:
>> Hi Steven,
>> Thanks for all your great questions. Just some background information
>> before I answer your specific questions below:
>> You should have one Flocking Environment per application. So only invoke
>> flock.init() once, and retain an instance of the environment somewhere
>> where you can access it if needed. Start it playing at the beginning,
>> rather than every time you want to trigger a sound.
>> You'll typically want one Synth instance per "voice" or thing that needs
>> to trigger sound. You won't typically reinstantiate a synth every time you
>> want to trigger a sound. Instead, create the synth up front, start it
>> playing, and then open and close the gate on an envelope unit generator to
>> trigger sounds in response to user actions. If you know you won't be using
>> a synth for a while, you can call pause() on it to minimize resources, and
>> then start it play()ing again later.
> Re: context: That helps a lot. Does resource drain build up over time by
> just being idle?
>  - I didn't realize it, and this sounds like the culprit: I was calling
> init() on every SFX play. The strategy you mention makes sense, kinda like
> turning down the volume knob, I guess: the radio is still on, but it keeps
> sound out.
> ======================================
>> On Apr 29, 2015, at 10:10 PM, Steven Dale <lifeinchords at gmail.com> wrote:
>> - When sound is triggered/played in our app, afterwards there's an
>> endless repeating click/pop sound. Both Chrome + Firefox. Listen here:
>> https://drive.google.com/file/d/0B4zhzWwgaF63Z3h4WDN2VldzbE0/view?usp=sharing
>> Can you provide more details or a running instance of your application? I
>> don't know exactly what you're doing, nor what your audio file sounds like.
>> More detail makes it easier to answer these kinds of questions.
>> If I had to randomly guess, it may have to do with the fact that you're
>> repeatedly calling this._enviro.play() every time you're triggering a
>> sound, but I don't know.
>> Is there a stop audio function? Do we need to run it every time we're
>> done playing something to stop audio output, then re-init it right before
>> the next trigger of sound? Or is this audio connection supposed to stay on
>> throughout the life of the person's experience in a given session?
>> Synths are intended to be relatively long-lived. They can be triggered
>> repeatedly and have their parameters changed on the fly. Again, more detail
>> would be helpful. But in general you'll probably want to keep your synth
>> instance around persistently, and then use some kind of an envelope,
>> opening and closing its gate in response to user actions.
>> - We're triggering sounds on drag and dropping divs. They spawn web
>> workers, seen in the console, one for each sound. I racked up 20-30 in
>> seconds, and they persist. Is this normal + expected? Feels like at some
>> point the browser is gonna choke.
>> That sounds like a bug. Can you:
>> * Make sure you're running the latest release of Flocking (version 0.1.1)
>> https://github.com/colinbdclark/Flocking/releases/tag/0.1.1
>> * Send me a link to a running instance of your app or instructions on how
>> to build/run so I can take a look?
>> Web Workers are spawned in Flocking for two reasons:
>> 1) To run the clocks on an asynchronous Scheduler instance (one of which
>> is created when you instantiate the Flocking Environment by calling
>> flock.init())
>> 2) If you're using the legacy pure JavaScript audio file decoders, which
>> aren't shipped by default with recent versions of Flocking (so very
>> unlikely)
>> Is it possible that you're initializing Flocking over and over again
>> somehow? Or creating a large number of Scheduler instances? If not, I'll
>> take a look and see if there's a bug I need to fix.
>> - Does the lib play mp3 files? I saw a reference to a WAV file in the
>> examples - is there a preferred file format to use to trigger samples?
>> uncompressed feels quite large for transferring back and forth over the wire
>> Yes. Flocking supports all of the codecs that can be decoded by the Web
>> Audio API. Here's MDN's compatibility table:
>> https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats#Browser_compatibility
>> In short, most browsers will support MP3 out of the box with Flocking.
>> - From here:
>> https://github.com/colinbdclark/Flocking/blob/master/docs/responding-to-user-input.md:
>> Can we bind the events to a class that returns a set of DIV's, rather than
>> a single DIV, tied with an ID? Is it a linear thing, where if we bind to 5
>> divs say, it will be 5x more drain on browser resources bc 5 voices are
>> playing at once?
>> Currently, the flock.ugen.mouse.click unit generator only supports being
>> bound to one element at a time. That's something that can be fixed, and
>> I've filed a bug about it:
>> https://github.com/colinbdclark/Flocking/issues/107
>> However, it's trivial to create your own custom handlers and bind them to
>> your synth using the set() method. Something like this should work just
>> fine:
>> var divs = $(".lotsOfElements");
>> divs.mousedown(function () {
>>     mySynth.set("myEnv.gate", 1.0);
>> });
>> divs.mouseup(function () {
>>     mySynth.set("myEnv.gate", 0.0);
>> });
>> The browser ugens are just there to provide quick solutions for testing a
>> synth. If you're building more complex UIs, you'll probably want to roll
>> your own event logic.
>> - Is there a way to apply a reverb effect onto the end of the signal
>> chain? I saw something about multi-channel audio and the delay
>> definition. Is this possible with the lib and just needs to be modeled,
>> or is it just not possible with this kind of synthesis? I'm new to doing
>> this stuff with code.
>> There's the Freeverb unit generator. It takes four inputs:
>> * source: the signal you want to apply the reverb to
>> * mix: the wet/dry mix for the reverb, between 0-1
>> * room: the room size, between 0-1
>> * damp: the reverb's HF damp, between 0-1
>> If you want to add reverb to a whole collection of different synths,
>> you'll want to write your synths' output to an interconnect bus (using
>> flock.ugen.out) and then create a dedicated "effects synth" that reads from
>> the interconnect bus and applies the reverb. I can whip you up an example
>> if you end up going that way.
>> - Any ETA on the docs for flock.synth?
>> https://github.com/colinbdclark/Flocking/blob/master/docs/synths/overview.md
>> Between the documentation in the main README and this page, is there any
>> other documentation you're specifically interested in? What information do
>> you think we're missing?
>> https://github.com/colinbdclark/Flocking/blob/master/docs/synths/creating-synths.md
>> The API for Synths, fortunately, is fairly simple. You can create them,
>> add and remove them to the environment's list of evaluated nodes in
>> specified locations, and get/set values on them. That's pretty much the
>> extent of its functionality.
>> - Any existing docs on how I might take some sound synthesis examples
>> from classic textbooks, papers, etc, and apply them to create synth
>> definitions [in Flocking]?  For example, we want to synthesize/model the
>> elastic, stretchy sound of a slingshot, think Angry birds SFX-- is this
>> possible with Flocking?
>> There are some pretty good books about audio synthesis. My favourite is
>> this one, which is unfortunately out of print:
>> http://books.google.ca/books/about/Computer_Music.html?id=eY_BQgAACAAJ
>> Curtis Roads' Computer Music Tutorial goes into detail about many
>> synthesis techniques. Also Nick Collins' Introduction to Computer Music is
>> quite good. There are also great language-specific books for
>> SuperCollider, ChucK, and CSound that provide primers on signal
>> processing; their source code could likely be ported to Flocking.
>> I hope this helps,
>> Colin