
DeepFake v1

• I do not allow malicious video production through this source code. This is practice code only.

Version

TensorFlow 2.0, Ubuntu 18.04

Make dataset

• You have to get images of the characters from YouTube or other media.

  • If you have a video editing tool, keep only the parts where the person's face is shown.


• We will extract the characters' facial landmarks with the dlib library, as sketched below.

  • You have to save both the face images and the landmark images.
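
A minimal sketch of this step, assuming OpenCV plus dlib's standard 68-point predictor file (shape_predictor_68_face_landmarks.dat); the repository's make_landmark.py may differ, and extract_face_and_landmarks is a hypothetical helper name.

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumes the standard 68-point model from dlib.net has been downloaded.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_face_and_landmarks(frame, size=64):
    # Detect the first face in the frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    rect = faces[0]
    points = np.array([(p.x, p.y) for p in predictor(gray, rect).parts()])

    # Crop the face region (clamped to the frame) and resize it.
    t, b = max(rect.top(), 0), min(rect.bottom(), frame.shape[0])
    l, r = max(rect.left(), 0), min(rect.right(), frame.shape[1])
    face = cv2.resize(frame[t:b, l:r], (size, size))

    # Draw the landmark points on a black canvas of the same size.
    land = np.zeros_like(face)
    for x, y in points:
        sx, sy = int((x - l) * size / (r - l)), int((y - t) * size / (b - t))
        if 0 <= sx < size and 0 <= sy < size:
            cv2.circle(land, (sx, sy), 1, (255, 255, 255), -1)
    return face, land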

Model

• We will use an autoencoder.

  • If you look closely at the diagram, there is one encoder and two decoders.

  • You have to share the encoder when training the two characters; this forces the encoder to learn a compressed representation of facial features common to both (see the model sketch after this list).

  • Introducing 'warping' in the training process improves performance. 'Warping' means distorting the image; it helps the model produce better results when a new expression comes in.

  • 'Warping' is applied to the landmark image, which is the input data of the model (see the warping sketch after this list).

  • Do not apply 'warping' to the original face images, only to the landmark images.

• If the model restores both characters well, try swapping the decoders when generating images.
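
A minimal sketch of the one-encoder / two-decoder setup in TensorFlow 2; the layer sizes, names, and loss here are illustrative assumptions, not the values used in this repository.

from tensorflow.keras import layers, Model

def build_encoder():
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    z = layers.Dense(256)(layers.Flatten()(x))
    return Model(inp, z, name="shared_encoder")

def build_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()               # shared between both characters
decoder_a = build_decoder("decoder_a")  # reconstructs character A's face
decoder_b = build_decoder("decoder_b")  # reconstructs character B's face

# Input: the (warped) landmark image; target: the unwarped face image.
autoencoder_a = Model(encoder.input, decoder_a(encoder.output))
autoencoder_b = Model(encoder.input, decoder_b(encoder.output))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# Face swap: encode character A's landmarks, decode with B's decoder.
# fake_b = decoder_b.predict(encoder.predict(landmarks_a))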
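
And a minimal sketch of the warping augmentation, assumed here to be a smooth random displacement field remapped with OpenCV; the repository may implement the distortion differently.

import cv2
import numpy as np

def random_warp(image, strength=5.0, grid=5):
    # Coarse random offsets, upsampled to a smooth per-pixel flow field.
    h, w = image.shape[:2]
    dx = np.random.uniform(-strength, strength, (grid, grid)).astype(np.float32)
    dy = np.random.uniform(-strength, strength, (grid, grid)).astype(np.float32)
    dx, dy = cv2.resize(dx, (w, h)), cv2.resize(dy, (w, h))
    map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
    return cv2.remap(image, map_x + dx, map_y + dy,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)

# Apply only to the landmark input, never to the target face image:
# warped_land = random_warp(land)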

Image processing

• If you have followed the process so far, you will get images like the ones above. However, the background around the face makes the result unnatural.

  • This is the part each of us needs to adapt to the characteristics of the video.

  • I detected the skin color and replaced the background with black. I also applied blending to blur the skin-color boundary between the characters during synthesis (a sketch follows this list).

  • If this process is complicated and cumbersome, there is another way: crop the images when you create the dataset.

  • You only need to keep the face by using the highest and lowest landmark coordinates as the crop bounds, as shown above. I recommend this method, and the implementation is here. It does not keep only the facial skin, but most of the background is removed (a sketch also follows this list).
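
A minimal sketch of the skin-color masking, using a simple HSV threshold; the ranges and the function name black_out_background are assumptions, and the thresholds usually need tuning per video.

import cv2
import numpy as np

def black_out_background(face_bgr):
    # Keep skin-colored pixels, paint everything else black.
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)     # assumed skin range,
    upper = np.array([25, 180, 255], dtype=np.uint8)  # tune per video
    mask = cv2.inRange(hsv, lower, upper)
    # Blur the mask so the skin boundary blends instead of cutting hard.
    mask = cv2.GaussianBlur(mask, (7, 7), 0).astype(np.float32) / 255.0
    return (face_bgr.astype(np.float32) * mask[..., None]).astype(np.uint8)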
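
And a minimal sketch of the recommended crop, assuming points is the (68, 2) landmark array from the dlib sketch above; crop_by_landmarks is a hypothetical helper name, not the repository's implementation.

def crop_by_landmarks(frame, points, margin=0):
    # Crop the frame to the min/max extent of the landmark coordinates.
    x_min, y_min = points.min(axis=0) - margin
    x_max, y_max = points.max(axis=0) + margin
    x_min, y_min = max(int(x_min), 0), max(int(y_min), 0)
    return frame[y_min:int(y_max), x_min:int(x_max)]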

Result


• From left to right: source, converted image, and post-processed image.


• Results for the entire frame.


• I did not actually train on my own face, but the swap still works if the landmarks are similar.

Data Information

• About two to three minutes of video were used.

  • 64×64 images were used.

Quick Start

• Dataset directory layout:
DeepFake
  dataset_video
    src
      video
    dst
      video
  dataset
    src
      img
      land
    dst
      img
      land

$ git clone https://github.com/JunHyeok96/DeepFake.git
$ cd DeepFake
$ python make_landmark.py

Then follow the steps in train.ipynb. Once training is complete:

$ python make_deepfake_video.py

Image Source

https://medium.com/@jonathan_hui/how-deep-learning-fakes-videos-deepfakes-and-how-to-detect-it-c0b50fbf7cb9

About

👭 Deepfake video production using an autoencoder
