Commit message:

* Update README to new documentation style
* Update README.md
* Rename requirements-ci.txt to requirements-headless.txt
Showing 7 changed files with 59 additions and 50 deletions.
# Roop

> Take a video and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.

[![Build Status](https://img.shields.io/github/actions/workflow/status/s0md3v/roop/ci.yml.svg?branch=main)](https://github.com/s0md3v/roop/actions?query=workflow:ci)
## Preview

<div style="display:flex">
<img src="https://raw.githubusercontent.com/s0md3v/roop/next/.github/target-1080p.gif?sanitize=true" width="48%" alt="Target Video" />
<img src="https://raw.githubusercontent.com/s0md3v/roop/next/.github/output-1080p.gif?sanitize=true" width="48%" alt="Output Video" />
</div>
## Installation

Be aware: the installation requires technical skills and is not for beginners. Please do not open platform or installation related issues on GitHub. We have a very helpful [Discord](https://discord.com/invite/Y9p4ZQ2sB9) community that will guide you through installing roop.

[Basic](https://roop-ai.gitbook.io/roop/installation/basic) - It is more likely to work on your computer, but will be quite slow.

[Acceleration](https://roop-ai.gitbook.io/roop/installation/acceleration) - Unleash the full potential of your CPU and GPU.
## Usage

Start the program with arguments:

```
python run.py [options]

-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH select a source image
-t TARGET_PATH, --target TARGET_PATH select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH select an output file or directory
--frame-processor FRAME_PROCESSOR [FRAME_PROCESSOR ...] frame processors (choices: face_swapper, face_enhancer, ...)
--keep-fps keep target fps
--keep-frames keep temporary frames
--skip-audio skip target audio
--many-faces process every face
--reference-face-position REFERENCE_FACE_POSITION position of the reference face
--reference-frame-number REFERENCE_FRAME_NUMBER number of the reference frame
--similar-face-distance SIMILAR_FACE_DISTANCE face distance used for recognition
--temp-frame-format {jpg,png} image format used for frame extraction
--temp-frame-quality [0-100] image quality used for frame extraction
--output-video-encoder {libx264,libx265,libvpx-vp9,h264_nvenc,hevc_nvenc} encoder used for the output video
--output-video-quality [0-100] quality used for the output video
--max-memory MAX_MEMORY maximum amount of RAM in GB
--execution-provider {cpu} [{cpu} ...] available execution providers (choices: cpu, ...)
--execution-threads EXECUTION_THREADS number of execution threads
-v, --version show program's version number and exit
```
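The `--similar-face-distance` option is a threshold on how far apart two face embeddings may be while still being treated as the same person. Here is a minimal sketch of that idea in plain Python; the toy vectors, the `face_distance` helper, and the threshold value are illustrative, not roop's actual code:

```python
import math

def face_distance(a, b):
    """Euclidean distance between two face-embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy 3-dimensional "embeddings" -- real face models use hundreds of dimensions.
reference = [0.10, 0.20, 0.30]
same_person = [0.12, 0.19, 0.31]
other_person = [0.90, -0.40, 0.10]

threshold = 0.85  # example value for --similar-face-distance

print(face_distance(reference, same_person) < threshold)   # close embedding: treated as a match
print(face_distance(reference, other_person) < threshold)  # distant embedding: not a match
```

A lower threshold makes matching stricter; a higher one makes it more permissive.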
Executing the `python run.py` command will launch this window:

![gui-demo](gui-demo.png)

Choose a face (an image with the desired face) and the target image or video (the media in which you want to replace the face), then click `Start`. Open your file explorer and navigate to the directory you selected as output; you will find a directory named `<video_title>` in which you can watch the frames being swapped in real time. Once the processing is done, the output file is created.

### Headless

Using the `-s/--source`, `-t/--target` and `-o/--output` arguments will run the program in headless mode. To learn what the additional command line arguments do, check the guide [here](https://github.com/s0md3v/roop/wiki/3.-Advanced-Options).
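As a concrete illustration, a headless run could be scripted as follows. The file names are placeholders and the flags come from the options list above:

```python
import shlex
# import subprocess  # needed only if you actually execute the command

# Placeholder file names -- substitute your own media.
command = [
    "python", "run.py",
    "-s", "face.jpg",               # one image of the desired face
    "-t", "target.mp4",             # the video whose faces get replaced
    "-o", "swapped.mp4",            # where the result is written
    "--keep-fps",                   # preserve the target's frame rate
    "--execution-provider", "cpu",  # swap in a GPU provider if you set one up
]

print(shlex.join(command))
# Uncomment to actually run roop (requires a working installation):
# subprocess.run(command, check=True)
```

Running the snippet only prints the assembled command line; uncomment the `subprocess.run` call to execute it.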
## Disclaimer

This software is meant to be a productive contribution to the rapidly growing AI-generated media industry. It will help artists with tasks such as animating a custom character or using a character as a model for clothing.

The developers of this software are aware of its possible unethical applications and are committed to taking preventative measures against them. It has a built-in check which prevents the program from working on inappropriate media, including but not limited to nudity, graphic content, and sensitive material such as war footage. We will continue to develop this project in a positive direction while adhering to law and ethics. This project may be shut down or include watermarks on the output if requested by law.

Users of this software are expected to use it responsibly while abiding by local law. If the face of a real person is being used, users are advised to get consent from the person concerned and to clearly mention that it is a deepfake when posting content online. The developers of this software will not be responsible for the actions of end-users.
## Licenses

Our software uses a lot of third-party libraries as well as pre-trained models. Users should keep in mind that these third-party components have their own licenses and terms; therefore our license does not apply to them.
## Credits

- [henryruhs](https://github.com/henryruhs): for being an irreplaceable contributor to the project
- [ffmpeg](https://ffmpeg.org): for making video related operations easy
- [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project, which provided a well-made library and models
- all developers behind the libraries used in this project

## Documentation

Read the [documentation](https://roop-ai.gitbook.io/roop) for a deep dive.