README.md (+5 −4)
```diff
@@ -29,12 +29,13 @@ The optimization of the generated image is performed on GPU. On a 2014 MacBook P
 Other options:
 
--`num_iters`: Number of optimization steps.
+-`num_iters`: Number of optimization steps. Default is 500.
 -`size`: Long edge dimension of the generated image. Set to 0 to use the size of the content image. Default is 500.
--`nodisplay`: Suppress image display during optimization.
--`smoothness`: Constant that controls smoothness of generated image (total variation norm regularization strength). Default is 7.5E-3.
+-`display_interval`: Number of iterations between image displays. Set to 0 to suppress image display. Default is 20.
+-`smoothness`: Constant that controls smoothness of generated image (total variation norm regularization strength). Default is 6E-3.
 -`init`: {image, random}. Initialization mode for optimized image. `image` initializes with the content image; `random` initializes with random Gaussian noise. Default is `image`.
 -`backend`: {cunn, cudnn}. Neural network CUDA backend. `cudnn` requires the [Torch bindings](https://github.com/soumith/cudnn.torch/tree/R3) for CuDNN R3.
+-`optimizer`: {sgd, lbfgs}. Optimization algorithm. `lbfgs` is slower per iteration and consumes more memory, but may yield better results. Default is `sgd`.
 
 ## Examples
```
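The `smoothness` option weights a total variation (TV) regularizer on the generated image. As a rough NumPy illustration of the idea, not the repository's actual Torch code (which may use a squared or isotropic variant), an anisotropic TV penalty can be computed as:

```python
import numpy as np

def tv_norm(img, strength=6e-3):
    # Anisotropic total variation: sum of absolute differences between
    # vertically and horizontally adjacent pixels, scaled by the
    # `smoothness` constant. Illustrative sketch only.
    dh = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbor differences
    dw = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbor differences
    return strength * (dh + dw)
```

A larger `smoothness` value penalizes high-frequency detail more strongly, pushing the optimizer toward smoother images.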
```diff
@@ -64,7 +65,7 @@ The outputs of the following layers are used to optimize for style: `conv1/7x7_s
 The outputs of the following layers are used to optimize for content: `inception_3a`, `inception_4a`.
 
-Optimization of the generated image is performed using gradient descent with momentum of 0.9. The learning rate is decayed exponentially by 0.75 every 100 iterations.
+By default, optimization of the generated image is performed using gradient descent with momentum of 0.9. The learning rate is decayed exponentially by 0.75 every 100 iterations. L-BFGS can also be used.
 
 By default, the optimized image is initialized using the content image; the implementation also works with white noise initialization, as described in the paper.
```
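The default update rule described above (momentum 0.9, learning rate multiplied by 0.75 every 100 iterations) can be sketched as follows; `base_lr` and the scalar example values are illustrative placeholders, not values taken from the repository:

```python
def learning_rate(base_lr, iteration, decay=0.75, step=100):
    # Exponential step decay: the rate is multiplied by `decay`
    # once every `step` iterations.
    return base_lr * decay ** (iteration // step)

def sgd_momentum_step(x, velocity, grad, lr, momentum=0.9):
    # Classic momentum update: accumulate a decaying running
    # average of (negative) gradients, then step along it.
    velocity = momentum * velocity - lr * grad
    return x + velocity, velocity
```

In the real implementation `x` is the generated image tensor and `grad` is the combined content/style/smoothness gradient; this sketch only shows the update arithmetic.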