Use a deep neural network to borrow the skills of real artists and turn your two-bit doodles into masterpieces! This project is an implementation of Semantic Style Transfer (Champandard, 2016), based on the Neural Patches algorithm (Li, 2016). Read more about the motivation in this in-depth article and watch this workflow video for inspiration.
The `doodle.py` script generates a new image by using one, two, three or four images as inputs, depending on what you're trying to do: the original style and its annotation, and a target content image (optional) with its annotation (a.k.a. your doodle). The algorithm extracts annotated patches from the style image, and incrementally transfers them over to the target image based on how closely they match.
NOTE: Making a #NeuralDoodle is a skill. The parameters in the script were adjusted to work well by default and with the examples below. For new images, you may need to adjust values and modify your input data too. It takes practice, but you can reach almost photo-realistic results if you iterate! (Ask for advice here or see examples.)
IMPORTANT: This project is possible thanks to the nucl.ai Conference on Creative AI, July 18-20. Join us in Vienna!
The main script is called `doodle.py`, which you can run with Python 3.4+ (see setup below). The `--device` argument lets you specify which GPU or CPU to use. For the samples above, here are the performance results:
- GPU Rendering — Assuming you have CUDA set up and enough on-board RAM, the process should complete in 3 to 8 minutes, even with twice the iteration count.
- CPU Rendering — This will take hours and hours, up to 12h on older hardware. To match the GPU quality it'd take twice the time. Do multiple runs in parallel, as shown below!
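For example, on a multi-core machine you can launch several independent CPU renders from a bash shell and let them run side by side. This is just a sketch using the sample files from this repository; the output file names are arbitrary:

```bash
# Launch two independent CPU renders in the background, then wait for both.
python3 doodle.py --style samples/Monet.jpg --output Coastline-a.png \
    --device=cpu --iterations=40 &
python3 doodle.py --style samples/Renoir.jpg --output Landscape-a.png \
    --device=cpu --iterations=40 &
wait
```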
The default is to use `cpu`; if you have an NVIDIA card set up with CUDA already, try `gpu0`. On the CPU, you can also set the environment variable `OMP_NUM_THREADS=4`, but we've found the speed improvements to be minimal.
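As a quick sketch of both options, assuming a bash shell and the sample files from this repository:

```bash
# Render on the first CUDA-capable GPU.
python3 doodle.py --style samples/Monet.jpg --output Coastline.png --device=gpu0

# Render on the CPU with four OpenMP threads (the gains are usually small).
OMP_NUM_THREADS=4 python3 doodle.py --style samples/Monet.jpg \
    --output Coastline.png --device=cpu
```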
The algorithm is built for style transfer, but can also generate image analogies that we call a #NeuralDoodle; use the hashtag if you post your images! Example files are included in the `#/samples/` folder. Execute with these commands:
```bash
# Synthesize a coastline as if painted by Monet. This uses "*_sem.png" masks for both images.
python3 doodle.py --style samples/Monet.jpg --output samples/Coastline.png \
    --device=cpu --iterations=40

# Generate a scene around a lake in the style of a Renoir painting.
python3 doodle.py --style samples/Renoir.jpg --output samples/Landscape.png \
    --device=gpu0 --iterations=80
```
Notice the Renoir results look a little better than the Monet. Some rotational variations of the source image could improve the quality of the arch outline in particular.
If you want to transfer the style given a source style with annotations and a target content image with annotations, you can use the following commands. In all cases, the semantic map is loaded and used if it's found under the `*_sem.png` filename that matches the input file.
```bash
# Synthesize a portrait of Seth Johnson like a Gogh portrait. This uses "*_sem.png" masks for both images.
python3 doodle.py --style samples/Gogh.jpg --content samples/Seth.png \
    --output SethAsGogh.png --device=cpu --phases=4 --iterations=40

# Generate what a photo of Vincent van Gogh would look like, using Seth's portrait as reference.
python3 doodle.py --style samples/Seth.jpg --content samples/Gogh.png \
    --output GoghAsSeth.png --device=gpu0 --phases=4 --iterations=80
```
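For reference, this is the file layout the naming convention implies for the commands above; the listing below is abridged and illustrative, but the `*_sem.png` pairing is what matters:

```bash
# Each input image is paired with a matching "*_sem.png" semantic map.
ls samples/
# Gogh.jpg  Gogh_sem.png  Seth.png  Seth_sem.png  ...
```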
To perform regular style transfer without semantic annotations, simply delete or rename the files with the semantic maps, as shown below. The photo is originally by Seth Johnson, and the concept for this style transfer is by Kyle McDonald.
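Here's a minimal sketch in bash that hides the semantic maps by renaming them before running the script; the `.bak` suffix is arbitrary, anything that breaks the `*_sem.png` pattern works:

```bash
# Hide the semantic maps so doodle.py falls back to regular style transfer.
mv samples/Gogh_sem.png samples/Gogh_sem.png.bak
mv samples/Seth_sem.png samples/Seth_sem.png.bak

python3 doodle.py --style samples/Gogh.jpg --content samples/Seth.png \
    --output SethAsGogh.png --device=cpu --phases=4 --iterations=40
```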
For synthesizing bitmap textures, you only need an input style without annotations and without a target content image. In this case, you simply specify one input style image and the output file as follows:
```bash
# First synthesis uses a darker noise pattern as seed.
python3 doodle.py --style samples/Wall.jpg --output Wall.png \
    --seed=noise --seed-range=0:128 --iterations=50 --phases=3

# Second synthesis uses a lighter noise pattern as seed.
python3 doodle.py --style samples/Wall.jpg --output Wall.png \
    --seed=noise --seed-range=192:255 --iterations=50 --phases=3
```
You can also control the output resolution using the `--output-size=512x512` parameter, though how far you can push it depends on the memory you have available. By default, the size will be the same as the style image.
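For example, reusing the texture synthesis command above with an explicit resolution; this is only a sketch, so pick a size that fits your GPU or system RAM:

```bash
# Synthesize a 512x512 texture instead of matching the style image's size.
python3 doodle.py --style samples/Wall.jpg --output Wall.png \
    --output-size=512x512 --seed=noise --iterations=50 --phases=3
```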
You can configure the algorithm using the following parameters; a combined example follows the list. Type `python3 doodle.py --help` for the full list of options, or see the source code.
- `--style-weight=50.0` — Weight of style relative to content.
- `--style-layers=3_1,4_1` — The layers to match style patches.
- `--semantic-weight=1.0` — Global weight of semantics vs. features.
- `--smoothness=1.0` — Weight of image smoothing scheme.
- `--seed=noise` — Seed image path, "noise" or "content".
- `--print-every=10` — How often to log statistics to stdout.
- `--save-every=10` — How frequently to save PNG into frames.
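As a sketch of how these flags compose on the command line, using the example values from the list above spelled out explicitly:

```bash
python3 doodle.py --style samples/Monet.jpg --output samples/Coastline.png \
    --style-weight=50.0 --style-layers=3_1,4_1 --semantic-weight=1.0 \
    --smoothness=1.0 --seed=noise --print-every=10 --save-every=10
```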
This project requires Python 3.4+, and you'll also need `numpy` and `scipy` (numerical computing libraries) as well as `python3-dev` installed system-wide. If you want more detailed instructions, follow these:
- Use Prepared Docker Container or Amazon Image (beginner)
- Linux Installation of Lasagne (recommended)
- Mac OSX Installation of Lasagne (intermediate)
- Windows Installation of Lasagne (expert)
After fetching the repository, you can run the following commands from your terminal to set up a local environment:
```bash
# Create a local environment for Python 3.x to install dependencies here.
python3 -m venv pyvenv --system-site-packages

# If you're using bash, make this the active version of Python.
source pyvenv/bin/activate

# Setup the required dependencies simply using the PIP module.
python3 -m pip install --ignore-installed -r requirements.txt
```
After this, you should have `scikit-image`, `theano` and `lasagne` installed in your virtual environment. You'll also need to download this pre-trained neural network (VGG19, 80Mb) and put it in the same folder as the script to run. Once you're done, you can just delete the `#/pyvenv/` folder.
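To quickly check the environment is usable, a one-liner sketch (assuming the virtual environment is still active):

```bash
# Import the main dependencies; any error here means the setup step failed.
python3 -c "import numpy, scipy, skimage, theano, lasagne; print('OK')"
```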
You'll need a good NVIDIA card with CUDA to run this software on GPU, ideally with 2GB to 4GB of on-board RAM, or better still 8GB to 12GB for larger resolutions. The code does work on CPU by default, so use that as a fallback since you likely have more system RAM!
To reduce memory consumption, you can also install NVIDIA's `cudnn` library, version 3.0 or 4.0. This allows convolutional neural networks to run faster and saves space in GPU RAM.
FIX: Use `--device=cpu` to use main system memory.
First make sure CUDA is installed correctly and environment variables are set, then reinstall `theano`. If everything is set up correctly, the GPU should be used regularly as the gradient calculations are offloaded. You can check with NVIDIA's monitoring tool: the third column is the utilization of compute resources, and the fourth column is the use of memory. If memory is under-used, you can increase the resolution! If compute is under-allocated too, you can try running multiple scripts in parallel!
FIX: Run `nvidia-smi dmon` and check the `sm` column.
There's a Python extension compiler called Cython, and it's missing or improperly installed. Try getting it directly from the system package manager rather than PIP.
FIX: `sudo apt-get install cython3`
This happens when you're running without a GPU and the CPU libraries were not found (e.g. `libblas`). The neural network expressions cannot be evaluated by Theano, so it raises an exception.
FIX: `sudo apt-get install libblas-dev libopenblas-dev`
You need to install Lasagne and Theano directly from the versions specified in `requirements.txt`, rather than from PIP. The packaged alternatives are older and don't have the required features.
FIX: `python3 -m pip install -r requirements.txt`
It seems your terminal is misconfigured and not compatible with the way Python treats locales. You may need to change this in your `.bashrc` or other startup script. Alternatively, this command will fix it once for this shell instance:
FIX: `export LC_ALL=en_US.UTF-8`
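To make the fix persistent across shell sessions, a small sketch assuming bash and the same locale:

```bash
# Append the locale override to your startup script, then reload it.
echo 'export LC_ALL=en_US.UTF-8' >> ~/.bashrc
source ~/.bashrc
```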
It's possible there's a platform bug in the underlying libraries or compiler, which has been reported on MacOS El Capitan. It's not clear how to fix it, but you can try to disable optimizations to prevent the bug. (See Issue #8.)
FIX: Use the `--safe-mode` flag to disable optimizations.
Related algorithms have shown this is possible in real time, if you're willing to accept slightly lower quality:
- Texture Networks: Feed-forward Synthesis of Textures and Stylized Images
- Perceptual Losses for Real-Time Style Transfer and Super-Resolution
- Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
This project is not designed for real-time use; the focus is on quality. The code in this repository is, however, ideal for training real-time-capable networks.
There are many online services that provide basic style transfer with neural networks. We run @DeepForger, a Twitter & Facebook bot with a web interface, which can take your requests too. It takes time to make forgeries, so there's a queue... be patient!