How to use faceswap from the command line on Windows 7

I made this short tutorial because the apps out there (FakeApp, etc.) are a pain in the ass to use and don’t give you much control over what you are doing. So I thought, “why not go back to the source?”. In the beginning was the command line. All these apps are just a packaging of Python + faceswap + a (more or less good) GUI.

And it saves you from viruses, hidden cryptominers and ugly watermarks.

So, if you feel you are a bit computer savvy, you can give it a try.

Requirements:

  • Have a recent Nvidia graphics adapter (CUDA compute capability 3.0+, and 3 GB of VRAM is a must)
  • Install CUDA 9.0. Do not install the graphics driver (do a custom install) if your current driver is more recent than the one bundled with CUDA. Do NOT install CUDA 9.1 unless you use TensorFlow > 1.5.0.
  • Install cuDNN 7.0.5. Merge the files from the zip into the CUDA installation (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0), matching the directory structure.
  • Install Python 3.6 64-bit (ensure it’s in the PATH)
  • Install the Visual Studio 2015 build tools (they should come with the redistributable)
  • Install CMake 64-bit
  • Download the faceswap repository: get the faceswap-master zip and unzip it into a directory
  • Go into that directory and run “pip install -r requirements-gpu-python36-cuda9.txt”: it should install the requirements and build some of them (dlib, for example). That’s the part I had the most headaches with.

Ensure everything is 64-bit (especially Python).
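A quick way to check the Python bitness (a minimal sketch, assuming `python` is the one on your PATH):

```shell
# Prints 64 on a 64-bit Python; 32 means you installed the wrong build
python -c "import struct; print(struct.calcsize('P') * 8)"
```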

Now, it should be ready to work.

In C:\Fakes\data_A put the target pictures (the ones you want a new face on). They can come from a video extracted with ffmpeg.

In C:\Fakes\data_B put the source pictures (the person whose face you want to swap onto data_A, aka “The Celeb”).
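If your target material is a video, a plain ffmpeg call can split it into numbered frames (the file name `target.mp4` is just an example):

```shell
# Extract every frame of target.mp4 as numbered PNGs into data_A
ffmpeg -i target.mp4 C:\Fakes\data_A\frame%05d.png
```

You can add `-vf fps=25` before the output path to grab a fixed number of frames per second instead of every frame.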

I think there’s a limit on the size of the pictures?

Typical commands to use (run them from the faceswap-master directory):

python faceswap.py extract -i C:\Fakes\data_A -o C:\Fakes\data_A\faces --serializer json --alignments C:\Fakes\data_A\faces\alignments.json -D cnn
python faceswap.py train -A C:\Fakes\data_A\faces -B C:\Fakes\data_B\faces -m C:\Fakes\model -p -bs 128
python faceswap.py convert -i C:\Fakes\data_A -o C:\Fakes\data_A\merged --serializer json --alignments C:\Fakes\data_A\faces\alignments.json -m C:\Fakes\model -S -D cnn

You can ignore the warnings about deprecation of some functions.

If you have memory errors, I can’t help you: get a better graphics adapter.

After extraction, check the faces and remove the ones that are not good (not really faces, glasses, hair in the way, etc.).

Train until it looks good (YMMV) and the loss is below 0.02xx. In my experience, what’s important is not the quantity of pictures but their quality and the diversity of poses. A few hundred should be enough in most cases (I did a training run with 115 pictures in data_A\faces and 200 in data_B\faces and it looked quite good, if a bit blurry on some closeups).

You can remake a video with ffmpeg afterwards. Unless you used an option to skip them, faceswap will copy the pictures it couldn’t merge as-is.
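The reassembly step can be sketched like this (the frame pattern and frame rate are assumptions; match them to whatever you extracted):

```shell
# Rebuild a 25 fps H.264 video from the merged frames
ffmpeg -framerate 25 -i C:\Fakes\data_A\merged\frame%05d.png -c:v libx264 -pix_fmt yuv420p result.mp4
```

`-pix_fmt yuv420p` keeps the output playable in most players, which otherwise choke on ffmpeg’s default PNG-derived pixel format.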

You can check the possible parameters of faceswap with “python faceswap.py <command> -h” and play with the options (blur, batch size, trainer, mask, etc.). If you get an OOM (Out Of Memory) error during training, you can try the LowMem trainer and/or allow growth.

You don’t have to have a premade model: it builds one from scratch if there’s none (you’ll see black columns at first). The one shipped with FakeApp is pre-trained on Trump and Cage (ewww).

You can always download the latest version of faceswap and try it regularly.

You don’t have to have a killer configuration to have it work. It just might take longer. The most important part is the GPU.

My configuration: Windows 7 64-bit, GTX 1060 6 GB, 8 GB RAM, i5-2500 @ 3.30 GHz.