Let's open a potentially more long-form discussion about the rulership of events, and how we might undermine it.

## Event Centrism

Let me try to explain what I mean by event centrism. So far, events (= `Hap`s) from a pattern are used to call `superdough` with a number of controls, so the first cycle of a pattern yields a set of `Hap.value`s.
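As a hypothetical illustration (this specific pattern and its values are assumptions, not the original example), such a call might look like:

```
s("bd").delay(.5).dist(.6)
```

whose first event would carry a value along the lines of `{ s: 'bd', delay: 0.5, dist: 0.6 }`.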
Each of these values is fed into `superdough` to make the appropriate sound, in this case creating:

- a `BufferSourceNode` with the buffer for the given sound
- an effect send for `delay` (global effect)
- a worklet node for `dist`
In synth terms, each event spawns a new voice, so there are theoretically infinite voices:

```
n("0,1,2,3,4,5") // <-- you could go on till infinity
.scale("C:minor:pentatonic")
```
This idea is very similar to (and probably borrowed from) multichannel expansion in SuperCollider, where you can provide an Array to any function argument to make it multichannel.
This idea is very flexible and allows "full timbral polyphony", meaning you can set any voice parameter to any value without affecting other voices (except the global effects `delay` and `room`).
Again, in synth terms, it's like each event quickly builds a new synthesiser from scratch and plugs it into the shared effect sends and the audio output.
While very flexible, this design also comes with a flaw: all sound shaping properties are tied to the event. For example, each event has its own envelope.
- What if we want to have an envelope that spans across multiple events?
- What if we want to pattern the filter cutoff while an event is playing?
- What if we want to pattern any other effect while an event is playing?
## Modulation Controls
So far, a partial solution to these problems has been to introduce modulation controls, which let you shape a parameter over the course of its (event) lifetime: filter/gain envelopes, vibrato, phaser, etc. These controls are not patterns, but a more traditional way of describing change over time, mostly via ADSR envelopes or LFOs.
Recently, @daslyfe found a way to untie these types of controls from the event: instead of starting the modulation phase when the event starts, we can adjust its phase relative to the current time or cycle. I've tried to categorize these:

- **absolute phase** = the phase offset is based on seconds
- **relative phase** = the phase offset is based on cycles
- **event phase** = the phase starts with the event (default)
## Patterned Modulations
While the modulation controls provide some control over a value in time (ADSR / LFO), I wish there was a way to do the same with patterns. So instead of

```
note("c").vib("0.5:1")
```

we should/could be able to do something like this:

```
note("c").add.mod(note(sine2))
```

Of course, `add.mod` does not exist, and it doesn't work like `vib` when removing the `mod`.
Here's something that works right now:
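The example itself is missing here; a hypothetical reconstruction (the exact pattern is an assumption, based on the 32-event description that follows) might be:

```
note("c*32").add(note(sine2))
```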
Here, we see the pattern emulates the modulation, but it only works with many events (32x), as the note value for each event is locked. This type of modulation could also be seen as control rate modulation (as done with `.kr` in SuperCollider). With 128 samples per block and a 44.1kHz sample rate, the control rate would be 44100/128 ≈ 344Hz, or `segment(344)` at 1cps.
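That arithmetic, as a quick plain-JavaScript check (not Strudel code):

```javascript
// Control rate = one parameter update per audio block:
// sampleRate / blockSize updates per second.
const sampleRate = 44100; // Hz
const blockSize = 128;    // samples per render block
const controlRate = sampleRate / blockSize;
console.log(controlRate); // 344.53125 → roughly segment(344) at 1 cps
```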
If we increase the speed of the `c` above, we start to hear phasing issues, because the phase of the default triangle oscillator restarts with each event. We could potentially solve this with @daslyfe's solution by offsetting the oscillator phase accordingly, so maybe we could get control rate then. Assuming the phasing issue is solved, we might then be able to apply a slower envelope on top:
```
note("c*32").add(note(sine2)).attack.mod(.1)
```

This pseudo syntax would apply an attack that is decoupled from the events, where the phase of the envelope is absolute.
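To make "absolute phase" concrete, here is a minimal plain-JavaScript sketch (the `attackGain` function and its linear ramp are assumptions for illustration, not an existing API):

```javascript
// Hypothetical: an attack ramp anchored to absolute time in seconds.
// Its phase depends only on the clock, so it does not restart
// when a new event begins.
function attackGain(nowSeconds, attackDuration) {
  // ramp from 0 to 1 over attackDuration, then hold at 1
  return Math.min(nowSeconds / attackDuration, 1);
}
console.log(attackGain(0.05, 0.1)); // 0.5: halfway up the ramp
console.log(attackGain(0.2, 0.1));  // 1: ramp finished, envelope holds
```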
## Pattern Controlled State Machines
Another potential solution lies in what I like to call "Pattern Controlled State Machines".
The basic idea is that events are not directly turned into voices (superdough calls) that make sound.
Instead, each event could send a message to a "State Machine", which could be similar to a channel in a tracker:
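The example itself is missing here; as a hypothetical sketch (the exact syntax is made up, using the imagined `channel` and `gate` controls described below):

```
note("c a f e").channel(1).gate("1 0 1 0")
```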
In this case, `channel` tags each event with the channel it should send to.
The channel itself makes sound based on its internal state, modified by the messages it receives.
I've made up the `gate` control here as an idea of how the envelope could be controlled with a pattern.
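As a rough sketch of the idea in plain JavaScript (the `Channel` class, its default state, and the message shape are all assumptions for illustration):

```javascript
// Hypothetical: a channel that persists across events, like a tracker channel.
// Events become messages that update only the parameters they carry,
// instead of each event building a fresh voice from scratch.
class Channel {
  constructor() {
    this.state = { note: 60, cutoff: 1000, gate: 0 }; // assumed defaults
  }
  receive(message) {
    Object.assign(this.state, message); // merge a partial update
    // a real implementation would now retune/refilter the running voice
  }
}

const ch = new Channel();
ch.receive({ note: 64, gate: 1 }); // "note on"-style message
ch.receive({ cutoff: 500 });       // modulate the filter while the note holds
console.log(ch.state);             // { note: 64, cutoff: 500, gate: 1 }
```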
There are a lot of open questions:

- How to handle polyphony? (maybe like new note actions in Impulse Tracker)
- How are they defined? (maybe they could be spawned on demand, similar to regular voices)
- How would they coexist with the current design?
- ...?
Happy to hear your thoughts!
TBD: elaborate on this other potential solution: #561 (comment)
TBD: elaborate on Tidal's solution to this problem: https://tidalcycles.org/docs/reference/control_busses/