1 file changed: +7, -6 lines

docs/articles_en/learn-openvino/openvino-samples

@@ -14,10 +14,7 @@ This page demonstrates how to use the Benchmark Tool to estimate deep learning i
.. note::

-   The Python version is recommended for benchmarking models that will be used
-   in Python applications, and the C++ version is recommended for benchmarking
-   models that will be used in C++ applications. Both tools have a similar
-   command interface and backend.
+   Use either the Python or the C++ version, depending on the language of your application.

Basic Usage
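
A quick sketch of invoking the two versions mentioned in the hunk above (names assumed: the ``benchmark_app`` entry point installed with the OpenVINO Python package, and the ``benchmark_app`` binary built from the C++ samples; the model path is illustrative):

.. code-block:: sh

   # Python version: the benchmark_app entry point from the OpenVINO pip package
   benchmark_app -m model.xml -d CPU

   # C++ version: the benchmark_app binary built from the OpenVINO samples
   ./benchmark_app -m model.xml -d CPU

Both versions expose the same command interface, which is why the note can reduce the recommendation to matching the language of your application.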
@@ -226,8 +223,12 @@ should be used purposefully. For more information, see the
.. note::

-   If the latency or throughput hint is set, it will automatically configure streams
-   and batch sizes for optimal performance based on the specified device.)
+   * If either the latency or throughput hint is set, it will automatically configure streams,
+     batch sizes, and the number of parallel infer requests for optimal performance, based on the specified device.
+
+   * Optionally, you can specify the number of parallel infer requests with the ``-nireq``
+     option. Setting a high value may improve throughput at the expense
+     of latency, while a low value may give the opposite result.

Number of iterations
++++++++++++++++++++
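
As a usage sketch for the hint and ``-nireq`` behavior described in the new note (the model path and the request count are illustrative):

.. code-block:: sh

   # Let the throughput hint pick streams, batch size, and the number of
   # parallel infer requests automatically for the chosen device
   benchmark_app -m model.xml -d CPU -hint throughput

   # Optionally override the number of parallel infer requests; a higher value
   # may raise throughput at the cost of latency, a lower value the opposite
   benchmark_app -m model.xml -d CPU -hint throughput -nireq 8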