
audio: continuous DMA from AudioIn, real-time audio filtering #2676

Open
jepler opened this issue Mar 3, 2020 · 22 comments

@jepler
Member

jepler commented Mar 3, 2020

We'd like to be able to do real time audio filtering with CircuitPython. Example: Read from mic, perform FIR filtering with ulab, output on speaker.
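To make the mic-to-speaker filtering case concrete, here is a minimal pure-Python FIR convolution sketch. On-device this would be done vectorized with ulab (e.g. via its convolve function) rather than a Python loop; the `fir_filter` name and the scalar implementation here are just for illustration.

```python
# Minimal FIR filter sketch (pure Python for illustration; on-device,
# ulab would do this vectorized instead of looping per sample).
def fir_filter(samples, taps):
    """Convolve `samples` with FIR `taps`, returning len(samples) outputs."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, tap in enumerate(taps):
            if n - k >= 0:  # treat samples before the start as zero
                acc += tap * samples[n - k]
        out.append(acc)
    return out

# A 3-tap moving average smears an impulse across three samples
print(fir_filter([1.0, 0.0, 0.0, 0.0], [1/3, 1/3, 1/3]))
```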

We would also like to process all data from the mic without interruption. Example: Waterfall FFT demo

One option to enable this is to allow Python classes to be samples, to allow Python code to retrieve sample data, and to make microphones into audio samples.

Either the M4 or the nRF52840 would get the initial (PDM)AudioIn DMA work, as this is different per MCU: the nRF52840 because the Clue is the current hotness, the M4 because its audio out is better for the filtering case.

A sketch of how this might look as a class outline is at https://gist.github.com/2701e024cdea2c2f60527d674a7b28bc

However, it might also turn out that this is infeasible due to GC, constraints on what we can do from background tasks etc., and instead we need to add a fixed-function pipeline of some sort. In that scenario, a set of FIR taps could be associated with e.g., an AudioMixer object instead.

I'm initially associating this with the 5.x.0 features milestone because @ladyada mentioned this as a possible use case for ulab. It may be too aggressive a timeline to have this done with the ulab release, so the milestone can be postponed as appropriate.

@TheMindVirus

The Web Audio API (documented by Mozilla) covers a large portion of what you would like to see working in CircuitPython audioio, audiomixer and synthio: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API
It is currently a Working Draft for JavaScript in modern web browsers such as Firefox and Chrome, and it is very well documented, so much so that I have written several demo apps: #5944

A CircuitPython port of this API would be able to bridge the gap between existing code in the web and microcontrollers.

While looking into porting some of the code that uses the Biquad Filter (potentially useful for a WinAmp-style equaliser),
I looked for a CircuitPython board with a headphone jack. The Sony Spresense came to hand, but it required some setup.
I then noticed that the release of CircuitPython I had flashed to the board (7.0.0) was missing audioio and audiomixer entirely, along with any obvious pin definition for the headphone jack in board.

The next port of call was the newly updated Arduino Core (for which the SiLabs CP210x serial drivers had to be updated). It was able to upload sketches in bootloader mode over the microUSB port on the mezzanine, using the vendor's firmware upload tool and some custom-written batch files, since the Arduino "burn bootloader" button appeared to be broken.

Spresense Upload Syntax: flash_writer.exe -s -c %PORT% -b %BAUD% -d -n "./loader.espk"
CircuitPython 7.0.0 Image: flash_writer.exe -s -c COM10 -b 115200 -d -n "./adafruit-circuitpython-spresense-en_GB-7.0.0.spk"

I wrote a simple starter sketch replicating a retro TV Beep that counts down before shows used to go on air.
The same sketch can be written for Raspberry Pi OS by muxing the headphone jack GPIO pins and writing directly to them,
but the Spresense makes it a lot more complicated than that by going through its own NuttX abstraction layer at runtime.

I think the Spresense board with some tweaks could be a really good DSP Accelerator for other CircuitPython boards,
especially those which don't have their own headphone jacks or other physical connectors for Audio/Video interfacing.
It could do this either by using its own SDK or by running a separate instance of CircuitPython; in both cases it would be completely isolated
from the board running your project, thereby removing many of the compiler limitations found when running everything on one microcontroller.

More details about what I found trying to get the headphone jack working are included as comments in the Arduino Sketch attached below in SpresenseTVBeep.ino. I also decided to record and denoise the output captured directly from the board
after testing it with some well preserved Sony headphones.

Arduino Sketch and Audio Samples included: SpresenseTVBeep.zip

@tannewt
Member

tannewt commented Feb 24, 2022

@TheMindVirus I'd recommend adding an issue for supporting the existing audio apis on spresense too.

@mfhepp

mfhepp commented May 12, 2022

I support this proposal; it would be really useful to be able to use e.g. the M4 CPU power to do FFT, digital filters etc. in a background process from CircuitPython without needing to go back to lower-level programming environments.

For example, people are doing Software-Defined Radio (SDR) on ATmega328Ps (see here). So being able to at least do filtering of audio frequencies (low-pass, band-pass, high-pass), amplitude modification, amplitude modulation, and impulse shaping would be great.

From an architectural perspective and without having looked at available libraries, I think it would be sufficient to

  1. be able to set up a DMA-based sampling-output cycle, basically a mapping from ADC to DAC port and setting a sampling rate, and
  2. be able to register a "transfer function" from CircuitPython that translates the input sample to the output samples.
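The proposal above can be simulated in pure Python to show the shape of the API. The names here (`AudioPipeline`, `register_transfer`) are hypothetical, not an existing CircuitPython API: the idea is that DMA fills `inbuf`, the registered transfer function maps it to an output block, and DMA drains `outbuf` to the DAC.

```python
# Sketch of the proposed ADC -> transfer function -> DAC pipeline,
# simulated in plain Python. All class/method names are invented for
# illustration; in real hardware process_block() would be driven by
# a DMA completion interrupt, not called by user code.
import array

class AudioPipeline:
    def __init__(self, sample_rate, block_size):
        self.sample_rate = sample_rate
        self.inbuf = array.array("H", [0] * block_size)   # filled by ADC DMA
        self.outbuf = array.array("H", [0] * block_size)  # drained by DAC DMA
        self.transfer = lambda buf: buf                   # default: pass-through

    def register_transfer(self, func):
        """func takes the input block and returns the output block."""
        self.transfer = func

    def process_block(self):
        self.outbuf[:] = self.transfer(self.inbuf)

pipe = AudioPipeline(sample_rate=10000, block_size=4)
pipe.register_transfer(lambda buf: array.array("H", (v // 2 for v in buf)))  # ~-6 dB
pipe.inbuf[:] = array.array("H", [100, 200, 300, 400])
pipe.process_block()
print(list(pipe.outbuf))  # → [50, 100, 150, 200]
```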

Does that sound realistic? And could we get this from "some distant future feature" to "near-term priorities"?

@timchinowsky

@jepler wondering what your latest thinking on this is? Recently I opened #9225 with an eye towards DSP sorts of things, but as I become more familiar with the codebase I am realizing that the thing to do is really to integrate input from built-in, I2S, and even SPI ADCs into the synthio/audiomixer framework.

@timchinowsky

I've prototyped on rp2040 a set of minimal, non-breaking changes which enable processing of live sampled audio. With these changes, the following code will sample audio from A0 and play it out with PWM on D12:

import analogbufio
import array
import audiocore
import audiopwmio
import board

# Small buffer shared between the looping ADC DMA and the PWM output
buffer = array.array("H", [0x0000] * 8)
adc = analogbufio.BufferedIn(board.A0, sample_rate=10000)
adc.readinto(buffer, loop=True)  # proposed option: loop DMA into buffer instead of terminating
pwm = audiopwmio.PWMAudioOut(board.D12)
# proposed option: present the looped buffer as double-buffered live data
sample = audiocore.RawSample(buffer, sample_rate=10000, single_buffer=False)
pwm.play(sample)

Changes required:

  1. Add a loop=True option to analogbufio.BufferedIn.readinto so that, instead of terminating, readinto sets up looping DMA into the buffer.
  2. Add a single_buffer=False option to audiocore.RawSample so that the RawSample presents to PWMAudioOut as double-buffered instead of single-buffered.
  3. Tighten up the timing across the board so that everyone agrees on what 10000 Hz is. For this to work without glitches the timing must be exact; see fix off-by-one in ADC sample rate divisor #9396 and fix off-by-one in RP2040 PWM frequency setting #9398. Right now I think there is also an issue with the use of the DMA pacer timer in audiodma.c (see https://forums.raspberrypi.com/viewtopic.php?t=373201); to get this to work glitch-free I replaced the pacer with a PWM DREQ.
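A small calculation shows why the divisor off-by-one matters, assuming the RP2040 ADC's 48 MHz clock, where (per the pico-sdk's adc_set_clkdiv documentation) the sample period is div + 1 clock cycles:

```python
# Off-by-one in the ADC divisor, assuming sample_rate = 48 MHz / (div + 1).
CLOCK = 48_000_000
target = 10_000  # Hz

correct_div = CLOCK // target - 1   # 4799 -> period of 4800 cycles
buggy_div = CLOCK // target         # 4800, off by one

correct_rate = CLOCK / (correct_div + 1)  # exactly 10000.0 Hz
buggy_rate = CLOCK / (buggy_div + 1)      # ~9997.9 Hz

# ~2 samples/second of drift against a peripheral running at exactly
# 10 kHz -- enough to cause periodic glitches in a looped buffer.
drift = target - buggy_rate
print(correct_rate, round(buggy_rate, 1), round(drift, 1))
```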

With this in place it should be straightforward to add features like multi-channel sampling using the ADC round-robin registers, and to implement DSP via functions which take in a buffer and return a processed buffer.

Wondering where this might fit on the development roadmap, let me know if you'd like me to tidy it up and submit a PR. analogbufio is not implemented on many ports but it does have a nice simplicity to it.

@dhalbert
Collaborator

dhalbert commented Jul 7, 2024

@timchinowsky This is very nice! Would it also work with audiobusio?

@timchinowsky

It should work anywhere that uses the audio buffer protocol: basically it tricks a RawSample into providing live data.

@jepler
Member Author

jepler commented Jul 7, 2024

Does this idea pave the way to doing any processing on the audio data before it's output, such as applying a digital filter?

@timchinowsky

@jepler yes, by composing the RawSample with DSP functions that take in a buffer and return a buffer, so for instance instead of play(sample) you'd do play(filter(sample)), etc. I'm planning to write a clipping function and a biquad filter to test this out. Thinking the DSP functions will mostly modify input data in-place rather than require more allocation. Adding, multiplying, etc. multiple signals should work too.
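The two functions mentioned (clipping and a biquad) might look like the following pure-Python sketch. The class and function names are illustrative, not an existing API; the biquad uses the usual direct-form-I recurrence with b0, b1, b2, a1, a2 coefficients (a0 normalized to 1), modifying the buffer in place as described.

```python
# Sketch of the two DSP functions mentioned: in-place clipping and a
# direct-form-I biquad. Plain Python lists stand in for sample buffers.
def clip(buf, lo, hi):
    """Clamp samples in place; no new allocation."""
    for i, v in enumerate(buf):
        buf[i] = lo if v < lo else hi if v > hi else v
    return buf

class Biquad:
    def __init__(self, b0, b1, b2, a1, a2):
        self.coeffs = (b0, b1, b2, a1, a2)
        # filter state, carried across blocks
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def process(self, buf):
        """Filter a block in place: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
                                            - a1*y[n-1] - a2*y[n-2]."""
        b0, b1, b2, a1, a2 = self.coeffs
        for i, x in enumerate(buf):
            y = b0 * x + b1 * self.x1 + b2 * self.x2 - a1 * self.y1 - a2 * self.y2
            self.x2, self.x1 = self.x1, x
            self.y2, self.y1 = self.y1, y
            buf[i] = y
        return buf

buf = [0.5, 2.0, -3.0]
print(clip(buf, -1.0, 1.0))  # → [0.5, 1.0, -1.0]
```

Composing then reads naturally as play-style chaining, e.g. `clip(filt.process(buf), -1.0, 1.0)`.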

@tannewt
Member

tannewt commented Jul 8, 2024

Wondering where this might fit on the development roadmap, let me know if you'd like me to tidy it up and submit a PR.

I'm happy to do the reviews for this whenever you are ready. Folks would love to see this I'm sure.

@timchinowsky

Right now in rp2040, audio DMA in audiopwmio always proceeds at a rate set by a DMA timer. I've found these not to be precise enough to synchronize with, e.g., the ADC pacing timer. It would be helpful to be able to specify a sample rate by reference to another peripheral, i.e. "use whatever sample rate the ADC pacer is set to." One way this could be done is to define some high sample rates as special-purpose codes, e.g. a sample rate of 0xFFFFFF00 would be interpreted as "use the ADC DREQ", etc. Thoughts?

@timchinowsky

Signal quality is never going to be good with the RP2040 ADC so I'm shifting focus to an I2S audio codec, TAC5212. I've got the ADC and DAC working and am working on implementing I2SIn a la #5456.
[Screenshot attached: 2024-08-01 17-34-10]

@ladyada
Member

ladyada commented Aug 2, 2024

That's a cute I2S codec, I ordered some samples of the TAC5242.

@timchinowsky

I've got ~5 populated breakout boards for the TAC5212 that are up for grabs, if anyone's interested in messing with it.

@tannewt
Member

tannewt commented Aug 5, 2024

It would be helpful to be able to specify a sample rate by reference to another peripheral, i.e. "use whatever sample rate the ADC pacer is set to." One way this could be done is to define some high sample rates as special-purpose codes, e.g. a sample rate of 0xFFFFFF00 would be interpreted as "use the ADC DREQ", etc. Thoughts?

Would it be enough for us to coordinate internally so that the ADC and PWM output DMAs use the same pacing timer when set to the same rate? I don't really like the special value. Instead, I'd just pass the object you want to tie to.

@timchinowsky

timchinowsky commented Aug 5, 2024

That would be better, it's just more code. Using a magic number is a ~two-line change.

@timchinowsky

Re: the TAC5212, these things are great, up to 768 kHz sampling at up to 32 bits, and you can daisy-chain the devices to get up to 8 channels over one pin. To do them justice, planning to use rp2pio with whatever modifications are needed to support continuous streaming from input to output. That module is kind of the ultimate as far as commanding low-level hardware performance with high-level CP functions. There is a lot of variety in how codecs like to be spoken to, and it will be nice to surface that in the CP code.

@tannewt
Member

tannewt commented Aug 6, 2024

To do them justice, planning to use rp2pio with whatever modifications are needed to support continuous streaming from input to output.

Note that this isn't portable. An I2SIn would be better because it could be implemented on other platforms. RP2 can use PIO internally.

That would be better, it's just more code. Using a magic number is a ~two-line change.

More code is fine. Magic numbers are too much magic imo.

@timchinowsky

Note that this isn't portable. An I2SIn would be better because it could be implemented on other platforms. RP2 can use PIO internally.

Yeah, I was thinking to do both, like right now there's both https://github.com/adafruit/Adafruit_CircuitPython_PIOASM/blob/main/examples/pioasm_i2sout.py and the same pio code in https://github.com/adafruit/circuitpython/blob/main/ports/raspberrypi/common-hal/audiobusio/I2SOut.c.

@timchinowsky

Although I think what we're really going to want is an I2SInOut that does both directions with the same clocks.

@timchinowsky

Progress with the TAC5xxx codecs using rp2pio posted here: https://github.com/timchinowsky/tac5.
Next is (1) add a background read to rp2pio to complement the background write which is already there and (2) ponder how to connect the reads to the writes with some CP signal processing, integrating with existing audiocore paradigms where possible.

@timchinowsky

Added draft background_read method: #9659. Seems to work with TAC5 codec, needs more testing.

7 participants