where `style.jpg` is the image that provides the style of the final generated image, and `content.jpg` is the image that provides the content. `style_factor` is a constant that controls the degree to which the generated image emphasizes style over content. By default it is set to 1E9.
This generates an image using the VGG-19 network by Karen Simonyan and Andrew Zisserman (http://www.robots.ox.ac.uk/~vgg/research/very_deep/).
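For example, a run that weights style more heavily than the default might be invoked as follows. This is only a sketch: the entry-point script (assumed here to be `main.lua`, launched with `qlua`) and the exact flag spellings are not shown in this excerpt and may differ.

```
# hypothetical invocation; raises style_factor from the default 1E9
qlua main.lua --style style.jpg --content content.jpg --style_factor 5e9
```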
Other options:
-`model`: {inception, vgg}. Convnet model to use. Inception refers to Google's [Inception architecture](http://arxiv.org/abs/1409.4842). Default is VGG.
-`num_iters`: Number of optimization steps. Default is 500.
-`size`: Long edge dimension of the generated image. Set to 0 to use the size of the content image. Default is 500.
-`display_interval`: Number of iterations between image displays. Set to 0 to suppress image display. Default is 20.
-`smoothness`: Constant that controls smoothness of generated image (total variation norm regularization strength). Default is 1E-4.
-`init`: {image, random}. Initialization mode for optimized image. `image` initializes with the content image; `random` initializes with random Gaussian noise. Default is `image`.
-`backend`: {cunn, cudnn}. Neural network CUDA backend. `cudnn` requires the [Torch bindings](https://github.com/soumith/cudnn.torch/tree/R3) for CuDNN R3.
-`optimizer`: {sgd, lbfgs}. Optimization algorithm. `lbfgs` is slower per iteration and consumes more memory, but may yield better results. Default is `lbfgs`.
### Out of memory?
The VGG network with the default L-BFGS optimizer gives the best results. However, this setting also requires a lot of GPU memory. If you run into CUDA out-of-memory errors, try running with the Inception architecture or with the SGD optimizer:
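A sketch of such a run, reusing the assumed `main.lua` entry point and flag spellings from the example above:

```
# hypothetical invocation; switches to the Inception model and the SGD optimizer
qlua main.lua --style style.jpg --content content.jpg --model inception --optimizer sgd
```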
The Eiffel Tower in the style of Van Gogh's *Starry Night*:
Picasso-fied Obama:
## Implementation Details
When using the Inception network, the outputs of the following layers are used to optimize for style: `conv1/7x7_s2`, `conv2/3x3`, `inception_3a`, `inception_3b`, `inception_4a`, `inception_4b`, `inception_4c`, `inception_4d`, `inception_4e`.
The outputs of the following layers are used to optimize for content: `inception_3a`, `inception_4a`.
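Following the paper, style is matched through the Gram matrices of these layer activations, while content is matched on the activations directly. A minimal Torch sketch of the Gram-matrix computation (illustrative only, not the repository's actual code):

```lua
require 'torch'

-- Style representation of one layer: the C x C Gram matrix of a
-- C x H x W feature map, i.e. the inner products between the
-- flattened activation maps of every pair of channels.
local function gram_matrix(features)
  local C = features:size(1)
  local F = features:contiguous():view(C, -1)  -- C x (H*W)
  return torch.mm(F, F:t())                    -- C x C
end
```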
The generated image can be optimized either with L-BFGS (the default) or with stochastic gradient descent using a momentum of 0.9 and a learning rate that is decayed exponentially by a factor of 0.75 every 100 iterations.
By default, the optimized image is initialized using the content image; the implementation also works with white noise initialization, as described in the paper.
In order to reduce high-frequency "screen door" noise in the generated image (especially when using the Inception network), total variation regularization is applied (idea from [cnn-vis](https://github.com/jcjohnson/cnn-vis) by [jcjohnson](https://github.com/jcjohnson)).
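A minimal Torch sketch of a squared-difference total variation penalty, which is the term the `smoothness` option scales (illustrative only; the repository's exact formulation and weighting may differ):

```lua
require 'torch'

-- Total variation penalty on a C x H x W image: sum of squared
-- differences between horizontally and vertically adjacent pixels,
-- which discourages high-frequency "screen door" noise.
local function tv_norm(img)
  local H, W = img:size(2), img:size(3)
  local dx = img[{{}, {}, {2, W}}] - img[{{}, {}, {1, W - 1}}]
  local dy = img[{{}, {2, H}, {}}] - img[{{}, {1, H - 1}, {}}]
  return dx:pow(2):sum() + dy:pow(2):sum()
end
```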