This project focuses on generating jazz music with deep learning, specifically a Long Short-Term Memory (LSTM) neural network. The goal is to build a system capable of improvising jazz solos in a style similar to that of the training dataset.
- Apply an LSTM to a music generation task.
- Generate original jazz music based on learned patterns.
- Utilize the Functional API of TensorFlow/Keras to design and train a complex sequential model.
The model is trained on a corpus of jazz music preprocessed into sequences of musical "values." These values represent musical notes or chords encoded as one-hot vectors. The dataset is formatted as:
- `X`: a (m, T_x, 90) array where each sequence contains 30 notes (T_x = 30) with 90 possible values.
- `Y`: the labels, shifted by one time step so the model predicts the next value.
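The shapes above can be sketched with NumPy. The corpus size `m` here is a made-up placeholder; only the (m, T_x, 90) layout and the one-step shift come from the description:

```python
import numpy as np

m, T_x, n_values = 60, 30, 90   # m is a hypothetical corpus size
rng = np.random.default_rng(0)

# X: m sequences of 30 one-hot vectors over 90 possible values
indices = rng.integers(0, n_values, size=(m, T_x))
X = np.zeros((m, T_x, n_values))
X[np.arange(m)[:, None], np.arange(T_x), indices] = 1.0

# Y: the same sequences shifted one step, so position t predicts value t+1
Y = np.roll(X, -1, axis=1)
```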
- Preprocessing:
  - Musical pieces are converted into sequences of values.
  - Data is split into inputs (`X`) and corresponding outputs (`Y`).
- Model Architecture:
  - Uses an LSTM network with a hidden state size of 64.
  - Inputs are reshaped to match the LSTM requirements.
  - A dense layer with softmax activation is used for output predictions.
- Training:
  - Trains on snippets of 30 musical values.
  - Optimizes to predict the next note in a sequence using categorical cross-entropy loss.
- Generation:
  - Implements a custom sequence generation loop.
  - At each step, the predicted note is fed back as input to generate the next note iteratively.
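The architecture and training setup above can be sketched with the Keras Functional API. This is a minimal illustration, not the project's exact code: the shared-layer pattern and the stated hyperparameters (64 hidden units, 90 output values, T_x = 30) come from the description, while the variable names, optimizer, and everything else are assumptions:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense, Reshape
from tensorflow.keras.models import Model

n_a, n_values, T_x = 64, 90, 30

# Layers are created once and shared across all 30 time steps
reshaper = Reshape((1, n_values))           # (batch, 90) -> (batch, 1, 90)
lstm_cell = LSTM(n_a, return_state=True)    # hidden state size 64
densor = Dense(n_values, activation="softmax")

X = Input(shape=(T_x, n_values))
a0 = Input(shape=(n_a,))
c0 = Input(shape=(n_a,))

a, c = a0, c0
outputs = []
for t in range(T_x):
    x = reshaper(X[:, t, :])                # one time step, reshaped for the LSTM
    a, _, c = lstm_cell(x, initial_state=[a, c])
    outputs.append(densor(a))               # softmax over the 90 values

model = Model(inputs=[X, a0, c0], outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

The loop emits one softmax prediction per time step, which matches training on 30-value snippets with categorical cross-entropy at each position.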
- TensorFlow/Keras for building and training the model.
- `music21` for handling MIDI data.
- Supporting modules like `preprocess.py` and `music_utils.py` for data preparation.
After training, the model generates jazz solos by predicting notes iteratively. The output can be converted back into MIDI format for playback.
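The iterative prediction loop can be sketched as follows. The trained network is replaced here by a random stand-in function, so this only illustrates the feed-back mechanism, not real predictions:

```python
import numpy as np

n_values = 90
rng = np.random.default_rng(42)

def predict_next(x_onehot):
    """Stand-in for the trained network's one-step softmax output (hypothetical)."""
    logits = rng.normal(size=n_values)
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.zeros(n_values)
x[0] = 1.0                          # hypothetical start value
generated = []
for _ in range(50):                 # generate 50 values
    probs = predict_next(x)
    idx = int(np.argmax(probs))     # pick the most likely next value
    generated.append(idx)
    x = np.zeros(n_values)          # feed the prediction back as the next input
    x[idx] = 1.0
```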
- Python 3.x
- Required libraries: TensorFlow, NumPy, `music21`, Matplotlib
- Clone the repository or download the project files.
- Install the dependencies: `pip install -r requirements.txt`
- Run the notebook: `jupyter notebook Improvise_a_Jazz_Solo_with_an_LSTM_Network.ipynb`
- Train the model on the provided dataset.
- Use the trained model to generate jazz solos.
- Generated MIDI files can be played to listen to the jazz solo.
- Extend the model to handle other genres of music.
- Experiment with more advanced architectures like transformers for better results.
- Incorporate real-time generation and visualization of music.