-
Hello, this is Michael Gogins. First, congratulations for Tidal Cycles and for Strudel, this is wonderful stuff! And the code looks very well written. I am a composer and music software developer. Currently I am working on a project called cloud-music that provides algorithmic compositions of mine rendered by my WebAssembly build of Csound. These pieces are public Web pages that play indefinitely once started and can offer greater or lesser degrees of interaction and control to the listener. They also feature animated visuals. I am trying to integrate Strudel into the infrastructure for my pieces. The idea is that a pre-composed Tidal patch will generate the music. A REPL for this patch could be opened by the user to fiddle around with the composition. I have gotten as far as running both Strudel and a cloud-music piece locally, and using MIDI to connect them. The MIDI connection between Csound and Strudel does not need any external dependencies such as a virtual MIDI connector.
My CsoundAC library for algorithmic compositions has facilities that I would like to put into Strudel, not necessarily in the Strudel repository, but perhaps by injecting a hook into a Strudel run loop or pattern so that I can provide my own transformations (e.g. CsoundAC provides chords in any equal temperament, scales, automatic modulations that are correct, neo-Riemannian chord transformations, various degrees of automated voice-leading, score generating facilities based on fractals, etc.).
Any suggestions for how to implement my requirements would be most welcome, as would any criticisms. I have read #64 and understand it. FYI my WASM build of Csound is here, and the more widely used canonical WASM build of Csound is here. Thanks!
-
Hey Michael! Thanks for the kind words, glad you like it :) I am generally very interested in adding alternative outputs to the strudel REPL, csound looks like a very solid option.

> I am trying to integrate Strudel into the infrastructure for my pieces. The idea is that a pre-composed Tidal patch will generate the music.

So I assume you want to use strudel as the sequencer and csound as the audio backend?

> The MIDI connection between Csound and Strudel does not need any external dependencies such as a virtual MIDI connector.

If you want a tighter integration, I would not use MIDI at all. afaik midi loopback within the browser is not possible. Also, web midi support is not ideal across browsers. The ideal solution depends on which ways of communication are available from JS to csound wasm. I have not done much with wasm, but I know there is the postMessage API as one way to communicate. I've also seen OSC being used in supercollider wasm (https://github.com/dylans/supercollider/blob/wasm-no-submodules/README_WASM.md), not sure if that requires a backend server. If a way to talk from JS to csound wasm is found, csound would need some API to trigger a sound at a specific time. If that's given, the rest should be smooth sailing.

I've just added a minimal example that uses strudel just as a scheduler, without audio: code (https://github.com/tidalcycles/strudel/blob/main/packages/core/examples/without-audio.html) + demo (https://raw.githack.com/tidalcycles/strudel/main/packages/core/examples/without-audio.html). Some hints:

- I've imported the 3 strudel packages core, mini and transpiler from skypack (https://www.skypack.dev/), which enables a buildless setup. You could even run the html file serverless in the browser and it should work. I suppose in your actual project you'd want to install the packages from npm and bundle them in some way. Just be aware that the packages are currently ESM only, meaning you need a bundler that supports it (i'd recommend vite, https://vitejs.dev/; see this example: https://github.com/tidalcycles/strudel/tree/main/packages/core/examples/vite-vanilla-repl).
- The repl function expects a getTime callback that returns an absolute time value as the clock source of truth. In the without-audio example, I used Date.now(), mainly to show that you don't need an AudioContext. In your case, you'd probably want to return the currentTime of the csound audio context.
- Instead of using a custom trigger like described in #64, you can simplify the user code by providing a defaultOutput to the repl function, where you will get a callback for each hap (= event). This is the spot where the communication with csound has to happen. The arguments are:
  - hap: event with value
  - deadline: number of seconds until the hap should be triggered (based on the getTime clock source)
  - duration: number of seconds the hap should last
- Some more info about the hap value: by default, strudel comes with these controls (https://github.com/tidalcycles/strudel/blob/main/packages/core/controls.mjs) to shape the hap's value. If you want to use more / different properties, you can use createParam to add custom names (example: https://strudel.tidalcycles.org/#Y29uc3QgeHh4ID0gY3JlYXRlUGFyYW0oJ3h4eCcpOwpjb25zdCB5eXkgPSBjcmVhdGVQYXJhbSgneXl5Jyk7Cgp4eHgoIjIzIDI0IikueXl5KCI8MjAgMjE%2BIikubG9nKCk%3D). Also see the technical manual: https://github.com/tidalcycles/strudel/wiki/Technical-Manual#3-sound-output

> My CsoundAC library for algorithmic compositions has facilities that I would like to put into Strudel, not necessarily in the Strudel repository, but perhaps by injecting a hook into a Strudel run loop or pattern so that I can provide my own transformations (e.g. CsoundAC provides chords in any equal temperament, scales, automatic modulations that are correct, neo-Riemannian chord transformations, various degrees of automated voice-leading, score generating facilities based on fractals, etc.).

Sounds pretty interesting! You can extend the strudel API by adding a method to Pattern.prototype (example: https://strudel.tidalcycles.org/#UGF0dGVybi5wcm90b3R5cGUuaW5jcmVhc2UgPSBmdW5jdGlvbihuKSB7CiAgcmV0dXJuIHRoaXMuZm1hcCh2ID0%2BIHYgKyBuKQp9Cgpub3RlKCI2MCA2MSIuaW5jcmVhc2UoMCkp).

That should be some info to get you started, feel free to ask any questions / report your progress here!
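For illustration, here is a rough sketch of such a defaultOutput wiring (untested; csoundObj stands for an already-created Csound instance from @csound/browser, and the pfield layout is only an example):

import { repl } from '@strudel.cycles/core';
import { Csound } from '@csound/browser';

const audioContext = new AudioContext();
const csoundObj = await Csound({ audioContext }); // assumed setup; options may differ

repl({
  // the csound audio context is the clock source of truth
  getTime: () => audioContext.currentTime,
  // called once per hap; this is where the communication with csound happens
  defaultOutput: (hap, deadline, duration) => {
    const { note = 60, gain = 0.5 } = hap.value; // example pfield mapping
    csoundObj.inputMessage(`i 1 ${deadline} ${duration} ${note} ${gain}`);
  },
});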
-
Thanks for your very prompt and helpful response!

Csound for WASM has a pretty complete JavaScript API.

I'm glad you said something about static serving of Strudel, but it also seems possible to serve both Csound and Strudel from the same vite origin. I will explore your suggestions and get back to you with results.

Regards,
Mike
-
I have partly done what I need using the embedded form of Strudel. I built Strudel for production, moved the […]. As for the defaultOutput approach, I would prefer to keep WebAudio, WebDirt, etc. going in Strudel along with Csound playing. This is what is happening with my MIDI approach so far. Of course I can't keep using MIDI because, just as you say, we can't do a MIDI loopback in the browser (although the WebAudio people have been talking about this for practically ten years!).
-
Yes, it works fine without MIDI, with a direct call into Csound.
The commits are here in my csound-integration branch:
https://github.com/gogins/strudel/tree/csound-integration.
Best,
Mike
On Mon, Nov 28, 2022, Felix Roos wrote:

> I just saw your answer in csound/csound#1658. Did you manage to make it work without midi? If yes, I'd be really interested in knowing how. Your strudel fork seems to have no commits pushed..
-
I also now managed to control csound from strudel: https://github.com/tidalcycles/strudel/blob/csound/packages/csound/csound.mjs. Still not sure how to get precise timing...
-
It's good you got this to work.

The main difference from what I have done is that I am bringing the Strudel REPL into pieces that I have already written, or that I will write using previous pieces as a template. In these pieces, I have already created Csound, compiled my Csound instruments, and started the performance. I also need to be able to use audio either from Strudel's synths and WebDirt, or from Csound, or from both together. WebDirt in particular has a drum machine setup that is better than what we normally do in Csound. And I need to be able to keep on doing that.

I also strongly prefer to keep all the pfields in the units that I have implemented, otherwise I will have to rewrite a bunch of my Csound instruments. It is MIDI semantics but with more precision.

The integration that I have created is designed to work both with canonical Csound and my own csound-wasm build of Csound for WebAssembly.
As for timing, this is an excellent question, and Csound internally has a pretty good answer, or at least part of the answer. Csound computes samples in blocks, for WebAudio, of 128 sample frames. However, Csound can start and stop processing audio with sample-frame accuracy. To illustrate, suppose a block starts at time 10 seconds at a rate of 48000 sample frames per second. That block will last for about 0.00267 seconds. If I schedule a note in Csound for time 10.0013, about halfway through the block, then Csound will compute the number of sample frames to wait before starting to play that note -- about 62 frames.

Because Csound and Strudel are feeding audio to the same WebAudio destination, their audio streams are already in lock step. There's no external timing offset here; it's not like synchronizing WebAudio with WebMIDI. If the time of a hap is offset from the time at the start of the current block of sample frames, then the Csound event should be scheduled with p2 (time) set to 0 plus the difference between the start of the block and the intended start time of the hap. Actually, that's what your code is doing, isn't it? Csound should simply use what you are doing -- as long as you make sure to set the Csound command-line option "--sample-accurate".

Of course, some Nodes in the WebAudio signal flow graph may have internal latencies, such as compressors or FFTs, but there's not much we can do about that now.
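In code, that arithmetic looks like this (same numbers as the example above, just spelled out):

// WebAudio renders in blocks of 128 sample frames
const sampleRate = 48000;
const blockDuration = 128 / sampleRate; // ~0.00267 seconds

// a hap due at 10.0013 s, in a block that started at 10.0 s:
const offset = 10.0013 - 10.0; // 0.0013 s into the block
const framesToWait = Math.round(offset * sampleRate); // ~62 sample frames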
I have a question: why do you use eval(schedule...) instead of inputMessage or readScore? They do the same work semantically, but I think that eval might introduce a little extra overhead, although I haven't tried to measure it.
Best,
Mike
On Mon, Nov 28, 2022, Felix Roos wrote:
> example: note("<C^7 F^7 Fm7 Db^7>".voicings()).ply(8).csound()
-
About the pfields, I will try to keep this simple and clear.
There is a low-level Csound score syntax used in the "C" functions
csoundReadScore, csoundInputMessage (which actually is just another name
for csoundReadScore), and the schedule opcodes. And there is a higher-level
Csound score syntax used in non-real-time pieces that supports evaluation
of expressions, sections of scores, looping of sections, and so on. We
should NOT DO ANYTHING WITH THIS because Tidal and Strudel are SO much
better than Csound at this job.
Tidal or Strudel should concern themselves ONLY with the low-level score
syntax. This, as you noted, is just a linear list of pfields. They can be
floating point numbers or strings.
p1 is ALWAYS the instrument number. It is one-based and can have a
fractional part which can serve as a note ID and be used for ties between
notes. It can also be a string instrument definition name.
p2 is ALWAYS the time in beats. At the low level, this is usually just
seconds. Tidal and Strudel should assume that p2 is always the score time
in seconds. For real-time performance this is ALWAYS relative to the time
at which the function is called, i.e., relative to the current score time.
p3 is ALWAYS the duration in beats. At the low level, this is usually just
seconds. If the number is negative, the instrument is scheduled for
indefinite performance, just like a MIDI note on event. Tidal and Strudel
put the duration into the hap, so Tidal and Strudel should ALWAYS put the
hap duration in seconds into p3. At least for the time being, we should not
worry about indefinite instruments, ties, note on and note off, and so
forth. Just use absolute duration.
That's the Csound standard. After that, people do different things. There
can be ANY number of additional pfields with ANY user-defined semantics.
However, I VERY STRONGLY SUGGEST that Tidal and Strudel stick as close as
possible to MIDI semantics which all computer music people more or less
understand:
p4 should ALWAYS be the pitch of the note as a MIDI key number in the
interval [0, 127] but it should be allowed to be a floating point number
for detuning, alternative scales, etc. This should ALWAYS come from the hap.
p5 should ALWAYS be the loudness of the note as a MIDI velocity number in
the interval [0, 127] but it should be allowed to be a floating point
number for fine precision. This should ALWAYS come from the hap.
All other pfields from p6 on up to any number should be optional and be
user-defined. They should come from the Strudel/Tidal patch, not from the
hap.
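For example, a low-level score event following these conventions might look like this (the values are hypothetical):

// p1 = instrument 1, p2 = start now (0 s), p3 = 0.5 s duration,
// p4 = MIDI key 60 (middle C), p5 = MIDI velocity 80
csound.inputMessage('i 1 0 0.5 60 80');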
The control values in the haps should go into Csound not as pfields, which
Csound evaluates only at the onset of a note, but through Csound control
channels, i.e. using the "C" API function csoundSetControlChannel. It
would be possible to completely automate this and to send every hap control
right on to Csound, but that would, I suspect, quickly cause too much
overhead. I think it's better just to let the user append a Csound control
channel node to a control node: <Pattern>.gain().cschan(). But this needs
more thought because not all Csound instruments need to get the same
control values. As for me, I create an implicit namespace for channels by
prepending the instrument name to the channel name, e.g. Guitar_gain.
Perhaps something like that could be done here.
Alternatively, there could simply be a Csound control channel output for
Patterns, something like <Pattern>.chan(name, value), where name and value
are defined by the user in their patch. This is what I am planning to try.
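A rough sketch of what that could look like, assuming the onTrigger hook and a Csound reference named csoundObj (untested):

Pattern.prototype.chan = function (name) {
  return this.onTrigger((time, hap, currentTime) => {
    // simplified: assumes the pattern's value is the plain number to send
    csoundObj.setControlChannel(name, hap.value);
  });
};

// hypothetical usage: a gain pattern driving a "Guitar_gain" channel
// "0.2 0.8".chan('Guitar_gain')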
Best,
Mike
On Tue, Nov 29, 2022, Felix Roos wrote:
> Just improved the implementation a bit, now it's possible to write custom csound instruments inline in the repl + combining csound with other outputs is possible.
>
> > I also strongly prefer to keep all the pfields in the units that I have implemented, otherwise I will have to rewrite a bunch of my Csound instruments. It is MIDI semantics but with more precision.
>
> Sorry for the csound noob question, but is there an alternative to using a linear list of p values? Strudel uses a list of 100+ possible controls which can be sequenced via patterns. It would be rather tedious if those needed to be lined up in a long list of p values (where most would be undefined). For example, superdirt / supercollider uses OSC messages that look like "/note/60/gain/0.5/cutoff/2000" etc. = [/key/value]*, so we only need to pass the params that are actually used. I searched the csound docs, but found nothing so far. It would also be really practical if all those params were automatically defined in the scope of the csound instrument in user land (e.g. inote, igain, icutoff).
>
> > Because Csound and Strudel are feeding audio to the same WebAudio destination, their audio streams are already in lock step.
>
> While this is true, there is still a problem with using relative time. The onTrigger callback has the following arguments:
>
> - time: the absolute audio context time when the event should start
> - hap: hap / event data
> - currentTime: the current audio context time, slightly before time
>
> This means, at the time of the callback, the relative time left before the event should start is time - currentTime. I think the "problem" is that time will still tick away between the calculation of the relative time and the actual execution from within csound. Because the time it takes for the code to execute is slightly different each time, the timing will warble around. If there was a way to tell csound to trigger at an absolute rather than a relative time, this would not be a "problem". I am putting problem in quotes here because the timing is not super off because:
>
> > I have a question, why do you use eval(schedule...) instead of inputMessage or readScore? They do the same work semantically, but I think that eval might introduce a little extra overhead, although I haven't tried to measure it.
>
> Thanks for the hint, the timing actually got way better after using inputMessage instead of eval (as the execution time is shorter).
>
> Here is a screenshot of how using csound now looks inside the repl:
>
> [image: repl csound] (https://i.imgur.com/nZEG4XT.png)
>
> I am happy with how well this went; this is already pretty usable in terms of live coding synths along with the music it is playing!
-
Another thought, and this one is for me an ABSOLUTE REQUIREMENT.
I like what you have done to embed Csound in Strudel and to define Csound
instruments in Strudel. This should be very useful.
However, I already have my Csound instantiated and running my orchestra. I
MUST be able to get the <Pattern>.csound(pfields...) output to use my own,
already running Csound. This is what I have already done and got working
with Strudel. I need this because I am adding Strudel to an already working
framework for making pieces.
So keep what you have done, but make it so that when the user passes a
Csound object to Strudel, Strudel will use that object instead of creating
its own.
Best,
Mike
-
Thanks for clarifying the conventions around the p fields. The current difference to my implementation is that pitch (p4) is currently a frequency and gain (p5) is a normalized number (0-1).

Assuming I correctly understand the difference between pfields and control values, I disagree.

1. pfields are defined locally, for each call of inputMessage
2. control fields are defined globally, meaning they will have the same value for parallel instances of an instrument

It seems helpful to see pfields as the equivalent of MIDI note events, while control fields are like MIDI cc messages. The fact that cc messages are globally set (like I think control values are too) is a big limitation in my opinion. Being able to set control values differently for each individual event is a huge win, and it's one of the reasons I like tidal patterns so much (MIDI kind of solves this with MPE).

Let me demonstrate this with an example (https://strudel.tidalcycles.org?U9H66vdlb_U4). In this pattern version of Steve Reich's classic, the pan param is set differently for each of the two sequences running in parallel. Now imagine the same two sequences, but they both use the same csound instrument. If the pan value is sent as a pfield, the panning can be processed in parallel, but if it's a control value, the last value wins -- or am I wrong here? At least that's what happens in the MIDI world when sending the same cc number with a different value.

Don't get me wrong, I am not advising against using control values at all. I am just saying that not being able to set values individually and in parallel means not being able to make full use of the power of tidal patterns.

In the ideal integration scenario I have in mind, the user should not need to manually wire up any standardized variables. This will drastically reduce boilerplate + will make csound patterns interoperable with non csound patterns (because they use the same params). This is what I have in mind:

await csound`
instr CoolSynth
  ; in here, inote, icutoff and iattack should be automatically defined
endin`;
note('c3').cutoff(800).attack(0.5).csound('CoolSynth');

This could involve injecting that variable logic into the top of each instrument definition, like (simplified):

// all 100+ controls:
const controls = ['note', 'gain', 'cutoff', 'attack' /* ..... */];
function csound(code) {
  let lines = code.split('\n');
  const ivars = controls.map((name, i) => `i${name} = p${i + 4}`);
  lines = [lines[0], ...ivars, ...lines.slice(1)];
  code = lines.join('\n');
  /* evaluate logic */
}

this would turn:

`instr CoolSynth
; user land
endin`;

into

`instr CoolSynth
inote = p4
igain = p5
icutoff = p6
iattack = p7
; ..... many more
; user land
endin`;

Of course this is not ideal, as most of the pfields are undefined, but I see no other possibility without having the ability to pass a locally defined key value map or some type of encoded string (like note/60/cutoff/800/attack/0.5).

Maybe there is a way to solve this with control channels by using a separate "throw away" channel for each individual hap, with time as an identifier:

const ms = Date.now();
controls.forEach((name) => {
  csound.setControlChannel(`strudel.${name}.${ms}`, hap.value[name]);
});
csound.inputMessage(`i ${instrument} ${time} ${duration} ${ms}`);

function csound(code) {
  let lines = code.split('\n');
  const ivars = controls.map((name) => `i${name} = chnget:i(strcat("strudel.${name}.", ims))`);
  lines = [lines[0], `ims = p4`, ...ivars, ...lines.slice(1)];
  code = lines.join('\n');
  /* evaluate logic */
}

which would turn

`instr CoolSynth
; user land
endin`;

into

`instr CoolSynth
ims = p4
inote = chnget:i(strcat("strudel.note.", ims))
igain = chnget:i(strcat("strudel.gain.", ims))
icutoff = chnget:i(strcat("strudel.cutoff.", ims))
iattack = chnget:i(strcat("strudel.attack.", ims))
; ..... many more
; user land
endin`;

I am not sure if strcat can be used like that; also not sure if it is a good idea to create so many control channels, tbh it does not feel right.

But yeah, I also think we might have slightly different goals here: I am looking for a really tight integration of csound with patterns (comparable to the webaudio / superdirt integration) using as little boilerplate code as possible, while you want to add live codability to your existing work.

If for some reason the compatibility to your requirements is not met, you can always use a custom onTrigger. Also, note that the strudel repl will never load csound if you're not asking for it (it's 3.5MB...)

PS I can manage to understand what you are saying WITHOUT ALL CAPS :)
-
PLEASE use MIDI key and MIDI velocity as p5 and p6. So far as I know you and I are the only people fooling with this now. If you make this change, I won't have to change the hundred or so instrument definitions that I use as "patches." The other people will just start using it with MIDI key and velocity, and if they need to convert it in Tidal or in Csound they can do that.

Compatibility with my requirements is very easily met. Instead of

Pattern.prototype._csound = function (instrument) {
  instrument = instrument || 'triangle';
  return this.onTrigger((time, hap, currentTime) => {
    if (!_csound) {
      logger('[csound] not loaded yet', 'warning');
      return;
    }
    // ...

use this instead:

Pattern.prototype._csound = function (instrument) {
  instrument = instrument || 'triangle';
  return this.onTrigger((time, hap, currentTime) => {
    if (!_csound) {
      if (window.__csound__) {
        _csound = window.__csound__;
        logger('[csound] using external Csound', 'warning');
      } else {
        logger('[csound] not loaded yet', 'warning');
        return;
      }
    }
    // ...
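Then the host page only has to expose its already-running instance before the pattern plays, e.g.:

// in the host page, after Csound has been created and started:
window.__csound__ = csoundObj; // the page's own running Csound instance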
Best,
Mike
-
OK then, I suggest two outputs: one that uses Steven's frequency convention, named csound, and one that is identical except that it uses my MIDI key and MIDI velocity, named csnd or csmidi or something.

Best,
Mike
On Thu, Dec 1, 2022, Felix Roos wrote:
> Using frequency and normalized gain is the standard here: https://github.com/kunstmusik/csound-live-code/blob/master/doc/intro.md#exploring-the-instruments
> It is used in this lovely collection of instruments (https://github.com/kunstmusik/csound-live-code/blob/master/livecode.orc), which I think would be a great preset for strudel.
-
I found a solution for the key value mapping:

await csound`
; returns value of given key in given "string map"
; example: keymap("cutoff", "freq/440/cutoff/2000") -> "2000"
opcode keymap, S, SS
  Skey, Smap xin
  idelimiter = strindex(Smap, strcat(Skey, "/"))
  ifrom = idelimiter + strlen(Skey) + 1
  Svalue = strsub(Smap, ifrom, strlen(Smap))
  Svalue = strsub(Svalue, 0, strindex(Svalue, "/"))
  xout Svalue
endop

instr CoolSynth
  Smap = strget(p6)
  icut = strtol(keymap("cutoff", Smap)) ; <---
  ifreq = strtol(keymap("freq", Smap)) ; <---
  asig = vco2(p5, ifreq, 0, .5)
  asig = zdf_2pole(asig, icut, 0.5)
  asig *= linsegr:a(0, .01, 1, .1, 0, p3, 0, .1, 0)
  out(asig, asig)
endin`

note("1 3 5 8".sub(1).scale('A2 minor'))
  .cutoff(sine.range(400,10000).round().slow(4))
  .csound('CoolSynth')

I am sending all defined controls as a string in p6 (example "freq/440/cutoff/1000"), of which the instrument can pull out values by key using my custom keymap opcode. This allows getting pure local values without needing an endless unreadable list of p values.
-
That's good. But does this work only at the beginning of a note (init-time
or i-time in Csound parlance), or does it also work within the span of the
note (control-rate or k-time in Csound parlance)? Have you tested this?
-
I'm thinking about the default values.

Take a look at https://gogins.github.io/cloud-music/cloud_music_no_5.html. I'd like to know if this will play for you. Click on the Play button on the top menu bar. You'll have to wait a few seconds, then an alert box will pop up, then you just click on OK. The rest you can probably figure out.

The code is here: https://github.com/gogins/cloud-music/blob/main/docs/cloud_music_no_5.html

This is alpha-level code. There are some things that will throw naive users for a loop.

Best,
Mike
On Thu, Dec 1, 2022, Felix Roos wrote:
> Still need a solution for undefined values though. There *could* be a default value for each control, but it should be more performant to skip certain effects when they are not set (which is also what happens in the webaudio implementation).
-
Of course I will credit you.
Best,
Mike
On Fri, Dec 2, 2022, Felix Roos wrote:
> That works for me. Would be nice if you gave me credit for the strudel patch, as it is festivalOfFingers3 (https://github.com/tidalcycles/strudel/blob/main/repl/src/tunes.mjs#L244) playing at a different speed with a different instrument. To clarify that in the future, I've added CC BY-NC-SA 4.0 to all examples, see #277.
>
> Also, #275 (comment) is now ready for merge, so you can test if it works for you.
-
By the way, thanks for enabling the use of an external Csound in Strudel.
Best,
Mike
-
Yes, I made a fresh clone of Strudel and your Csound stuff works fine.
Best,
Mike
-
I have rebased my fork of Strudel and issued a pull request (#283) that just adds one output function, <pattern>.csoundm(instrument_number or pattern of instrument numbers). The only difference between this function and @felixroos' existing csound output function is that csound translates notes to frequency and normalized amplitude, whereas csoundm translates notes to MIDI key number and MIDI velocity. Some people would prefer frequency; other people such as myself would prefer MIDI.

The combination of csoundm and the ability to override the Csound instance with a reference to a global Csound, already in the Strudel main branch, make Strudel completely usable for my kinds of pieces.
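A hypothetical usage example:

// notes are sent to Csound as MIDI key numbers and velocities
note("c3 e3 g3").csoundm(1) // instrument number 1, per the signature above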