Transform Your Photos With Help from a Deep Learning App

There’s a fast photo-styling app your photos might thank you for: given a content photo and a style photo, the deep learning code can transfer the style from one photo to the other.

Photorealistic style transfer algorithms aim to stylize a target photo with the style of a reference photo, under the constraint that the stylized photo remain photorealistic. While several methods exist for this task, they tend to generate spatially inconsistent stylizations with noticeable artifacts.

In addition, these methods are computationally expensive, requiring several minutes to stylize a VGA photo. The arXiv paper presents a novel algorithm that addresses these limitations. The proposed algorithm consists of a stylization step and a smoothing step: the stylization step transfers the style of the reference photo to the content photo, while the smoothing step encourages spatially consistent stylizations.

Unlike existing algorithms that require iterative optimization, both steps in this algorithm have closed-form solutions. Experimental results show that the stylized photos generated by this algorithm are preferred twice as often by human subjects on average. Moreover, this method runs 60 times faster than current state-of-the-art approaches.
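The two-step structure above can be sketched in miniature. Note this is not the paper’s actual implementation: the real stylization step is a whitening-and-coloring transform on deep VGG features, and the real smoothing step solves a closed-form linear system. Here both are replaced by toy closed-form stand-ins on 1-D feature lists, just to show the shape of the pipeline.

```python
# Toy sketch of the two-step pipeline (NOT the paper's algorithm):
# a closed-form stylization step followed by a smoothing step.
from statistics import mean, pstdev

def stylize(content, style):
    """Closed-form stylization stand-in: match the content features'
    mean and standard deviation to those of the style features."""
    c_mu, c_sigma = mean(content), pstdev(content)
    s_mu, s_sigma = mean(style), pstdev(style)
    if c_sigma == 0:
        return [s_mu for _ in content]
    return [(x - c_mu) / c_sigma * s_sigma + s_mu for x in content]

def smooth(values):
    """Smoothing stand-in: average each value with its immediate
    neighbors to encourage spatially consistent output."""
    n = len(values)
    out = []
    for i in range(n):
        window = values[max(0, i - 1):min(n, i + 2)]
        out.append(sum(window) / len(window))
    return out

content = [0.1, 0.9, 0.2, 0.8]
style = [5.0, 5.5, 6.0, 6.5]
print(smooth(stylize(content, style)))
```

Both stand-ins are closed-form in the same spirit as the paper’s steps: each output is computed directly from its inputs, with no iterative optimization.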

Deep Learning Photo Styling

Here’s the Setup

The code was tested in the following environment.

  • OS: Ubuntu 16.04
  • CUDA: 9.1
  • Python 2 from Anaconda2
  • pytorch 0.3.0
  • Set up environment variables. You might already have them set up properly.
    • export ANACONDA=PATH-TO-YOUR-ANACONDA-LIBRARY
    • export CUDA_PATH=/usr/local/cuda
    • export PATH=${ANACONDA}/bin:${CUDA_PATH}/bin:$PATH
    • export LD_LIBRARY_PATH=${ANACONDA}/lib:${CUDA_PATH}/bin64:$LD_LIBRARY_PATH
    • export C_INCLUDE_PATH=${CUDA_PATH}/include
  • Install requirements
    • sudo apt-get install -y axel imagemagick (Only used for demo)
    • conda install pytorch=0.3.0 torchvision cuda90 -y -c pytorch
    • pip install scikit-umfpack
    • pip install -U setuptools
    • pip install cupy
    • pip install pynvrtc
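After installing, a quick sanity check can confirm the packages are importable before running the demo. This is a hedged sketch, not part of the project: "torch" is the import name for the pytorch conda package, and scikit-umfpack (imported as "scikits.umfpack") is skipped to keep the check to top-level names.

```python
# Check that the packages installed above can be imported.
import importlib.util

REQUIRED = ["torch", "torchvision", "cupy", "pynvrtc"]

def check(packages):
    """Map each package name to True if it is importable."""
    found = {}
    for name in packages:
        try:
            found[name] = importlib.util.find_spec(name) is not None
        except (ImportError, ValueError):
            found[name] = False
    return found

if __name__ == "__main__":
    for name, ok in check(REQUIRED).items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```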

Download pretrained networks

  • Download pretrained networks via the following link.
  • Unzip and store the model files under the models folder.

Example 1: Transfer the style of a style photo to a content photo.

  • Create the image and output folders and make sure nothing is inside them: mkdir images && mkdir results
  • Go to the image folder: cd images
  • Download content image 1: axel -n 1 http://freebigpictures.com/wp-content/uploads/shady-forest.jpg --output=content1.png
  • Download style image 1: axel -n 1 https://vignette.wikia.nocookie.net/strangerthings8338/images/e/e0/Wiki-background.jpeg/revision/latest?cb=20170522192233 --output=style1.png
  • These images are huge. We need to resize them first. Run
    • convert -resize 25% content1.png content1.png
    • convert -resize 50% style1.png style1.png
  • Go back to the root folder: cd ..
  • Test the photorealistic image stylization code: python demo.py
  • You should see an output image like

Deep Learning
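The Example 1 steps above can be collected into a single script. This is a hedged sketch: it assumes axel and imagemagick are installed (see the setup section), that demo.py reads images/content1.png and images/style1.png by default, and it uses explicit paths from the repository root instead of cd-ing into the images folder.

```python
# Run the Example 1 steps (download, resize, stylize) in sequence.
import subprocess

CONTENT_URL = "http://freebigpictures.com/wp-content/uploads/shady-forest.jpg"
STYLE_URL = ("https://vignette.wikia.nocookie.net/strangerthings8338/images/"
             "e/e0/Wiki-background.jpeg/revision/latest?cb=20170522192233")

def build_commands():
    """Return the shell steps of Example 1 as argument lists."""
    return [
        ["mkdir", "-p", "images", "results"],
        ["axel", "-n", "1", CONTENT_URL, "--output=images/content1.png"],
        ["axel", "-n", "1", STYLE_URL, "--output=images/style1.png"],
        ["convert", "-resize", "25%", "images/content1.png", "images/content1.png"],
        ["convert", "-resize", "50%", "images/style1.png", "images/style1.png"],
        ["python", "demo.py"],
    ]

if __name__ == "__main__":
    for cmd in build_commands():
        subprocess.run(cmd, check=True)
```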

 

Example 2: Transfer the style of a style photo to a content photo with semantic label maps.

By default, the algorithm performs global stylization. In order to give users control over the content–style correspondences for better stylization effects, it also supports spatial control through manually drawn label maps.

Prepare label maps

  • Install the tool labelme and run the following command to start it: labelme
  • Start labeling regions (drawing polygons) in the content and style image. The corresponding regions (e.g., sky-to-sky) should have the same label.
  • The labeling result is saved in a “.json” file. By running the following command, you will get label.png under path/example_json, which is the label map used in the code. label.png is a 1-channel image (it usually looks totally black) consisting of consecutive labels starting from 0.
labelme_json_to_dataset example.json -o path/example_json
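The "consecutive labels starting from 0" requirement can be checked with a small helper. This is an illustrative sketch, not part of the project: the label map is represented as a nested list of integer labels standing in for label.png’s single channel.

```python
# Validate that a label map uses exactly the labels 0..K for some K,
# i.e. consecutive labels starting from 0 with no gaps.
def labels_are_consecutive(label_map):
    """Return True if the set of labels used is exactly {0, ..., K}."""
    labels = {v for row in label_map for v in row}
    return labels == set(range(len(labels)))

good = [[0, 0, 1],
        [2, 1, 1]]
bad = [[0, 0, 3],   # label 3 appears but 1 and 2 are unused
       [0, 3, 3]]
print(labels_are_consecutive(good))  # True
print(labels_are_consecutive(bad))   # False
```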

Stylize with label maps

python demo.py \
   --content_image_path PATH-TO-YOUR-CONTENT-IMAGE \
   --content_seg_path PATH-TO-YOUR-CONTENT-LABEL \
   --style_image_path PATH-TO-YOUR-STYLE-IMAGE \
   --style_seg_path PATH-TO-YOUR-STYLE-LABEL \
   --output_image_path PATH-TO-YOUR-OUTPUT

Below is a 3-label transferring example (images and labels are from the DPST work by Luan et al.):

Deep Learning

Docker image

A docker image is provided for testing the code.

  1. Install docker-ce. Follow the instructions on the Docker page.
  2. Install nvidia-docker. Follow the instructions on the NVIDIA-DOCKER README page.
  3. Build the docker image: docker build -t your-docker-image:v1.0 .
  4. Run an interactive session: docker run -v YOUR_PATH:YOUR_PATH --runtime=nvidia -i -t your-docker-image:v1.0 /bin/bash
  5. cd YOUR_PATH
  6. ./demo.sh

This is a very cool deep learning application; you can find the project on GitHub.
