docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device/performance-hint-and-thread-scheduling.rst (+14 -14)
@@ -63,19 +63,19 @@ the model precision and the ratio of P-cores and E-cores.

 Then the default settings for low-level performance properties on Windows and Linux are as follows:

-| ``ov::inference_num_threads``      | is equal to the number of P-cores or P-cores+E-cores on one numa node | is equal to the number of P-cores or P-cores+E-cores on one numa node |
-| ``ov::hint::scheduling_core_type`` | :ref:`Core Type Table of Latency Hint <core_type_latency>`            | :ref:`Core Type Table of Latency Hint <core_type_latency>`            |
+| ``ov::inference_num_threads``      | is equal to the number of P-cores or P-cores+E-cores on one socket    | is equal to the number of P-cores or P-cores+E-cores on one socket    |
+| ``ov::hint::scheduling_core_type`` | :ref:`Core Type Table of Latency Hint <core_type_latency>`            | :ref:`Core Type Table of Latency Hint <core_type_latency>`            |
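
A minimal C++ sketch of how these low-level properties can be overridden when compiling a model on CPU, assuming the OpenVINO 2.0 API; the model path and the numeric thread count are illustrative placeholders, not values from the documentation:

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // "model.xml" is a placeholder path, not taken from the article.
        auto model = core.read_model("model.xml");

        // With the latency hint the CPU plugin picks threads and streams itself;
        // the properties below override the defaults described in the table above.
        auto compiled = core.compile_model(model, "CPU",
            ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY),
            ov::inference_num_threads(8),  // illustrative value
            ov::hint::scheduling_core_type(ov::hint::SchedulingCoreType::PCORE_ONLY));
        return 0;
    }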
@@ -96,7 +96,7 @@ Then the default settings for low-level performance properties on Windows and Linux are as follows:

 Starting from 5th Gen Intel Xeon Processors, the new microarchitecture enables the sub-NUMA clustering
 feature. A sub-NUMA cluster (SNC) can create two or more localization domains (NUMA nodes)
 within a socket by BIOS configuration.
-By default, OpenVINO with the latency hint uses a single NUMA node for inference. Although such
+By default, OpenVINO with the latency hint uses a single socket for inference. Although such
 behavior allows achieving the best performance for most models, there might be corner
 cases which require manual tuning of the ``ov::num_streams`` and ``ov::hint::enable_hyper_threading`` parameters.
 Please find more details about `Sub-NUMA Clustering <https://www.intel.com/content/www/us/en/developer/articles/technical/xeon-processor-scalable-family-technical-overview.html>`__
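
A minimal C++ sketch of the manual tuning mentioned above for SNC corner cases, again assuming the OpenVINO 2.0 API; the property values and model path are illustrative, not recommendations from the article:

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // "model.xml" is a placeholder path.
        auto model = core.read_model("model.xml");

        // Corner-case tuning: force a single stream and disable hyper-threading
        // instead of relying on the latency-hint defaults.
        auto compiled = core.compile_model(model, "CPU",
            ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY),
            ov::num_streams(1),                        // illustrative value
            ov::hint::enable_hyper_threading(false));  // illustrative value
        return 0;
    }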