- Supported Inference Devices
- ============================
+ Supported Devices
+ ===============================================================================================

.. meta::
   :description: Check the list of devices used by OpenVINO to run inference
                 of deep learning models.


- The OpenVINO™ runtime enables you to use a selection of devices to run your
+ The OpenVINO™ runtime enables you to use the following devices to run your
deep learning models:
:doc:`CPU <../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device>`,
:doc:`GPU <../../openvino-workflow/running-inference/inference-devices-and-modes/gpu-device>`,
@@ -18,16 +18,20 @@ deep learning models:
Besides running inference with a specific device,
OpenVINO offers the option of running automated inference with the following inference modes:

- * :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` - automatically selects the best device
-   available for the given task. It offers many additional options and optimizations, including inference on
-   multiple devices at the same time.
- * :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` - enables splitting inference among several devices
-   automatically, for example, if one device doesn't support certain operations.
- * :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>` - executes inference on multiple devices.
-   Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.
- * :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` - automatically groups inference requests to improve
-   device utilization.
+ | :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`:
+ | automatically selects the best device available for the given task. It offers many
+   additional options and optimizations, including inference on multiple devices at the
+   same time.
+ | :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`:
+ | enables splitting inference among several devices automatically, for example, if one device
+   doesn't support certain operations.

+ | :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`:
+ | automatically groups inference requests to improve device utilization.
+
+ | :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>`:
+ | executes inference on multiple devices. Currently, this mode is considered a legacy
+   solution. Using Automatic Device Selection instead is advised.

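In practice, each of these modes is selected with a virtual device name passed when the
model is compiled. A minimal sketch of the idea in Python (the ``model.xml`` path and the
``GPU,CPU`` priority lists are illustrative assumptions, not taken from this page):

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # illustrative model path

   # AUTO: let OpenVINO pick the best available device for this model.
   compiled_auto = core.compile_model(model, "AUTO")

   # HETERO: split the graph across devices, preferring them in the listed order.
   compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")

   # BATCH: transparently group inference requests to improve device utilization.
   compiled_batch = core.compile_model(model, "BATCH:GPU")

Plain device names such as ``"CPU"`` or ``"GPU"`` compile for a single device instead.
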
Feature Support and API Coverage
@@ -36,16 +40,17 @@ Feature Support and API Coverage
======================================================================================================================================== ======= ========== ===========
Supported Feature                                                                                                                        CPU     GPU        NPU
======================================================================================================================================== ======= ========== ===========
+ :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`   Yes   Yes   Partial
:doc:`Heterogeneous execution <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`   Yes   Yes   No
- :doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>`   Yes   Yes   Partial
:doc:`Automatic batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`   No   Yes   No
:doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>`   Yes   Yes   No
- :doc:`Models caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`   Yes   Partial   Yes
+ :doc:`Model caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`   Yes   Partial   Yes
:doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>`   Yes   Partial   No
:doc:`Import/Export <../../documentation/openvino-ecosystem>`   Yes   Yes   Yes
:doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>`   Yes   Yes   No
:doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>`   Yes   Yes   Yes
:doc:`Extensibility <../../documentation/openvino-extensibility>`   Yes   Yes   No
+ :doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>`   Yes   Yes   Partial
======================================================================================================================================== ======= ========== ===========

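As a small illustration of the model caching row above, a sketch of enabling the cache
and listing the devices present on a machine (assuming the ``openvino`` Python package;
the cache directory and model path are arbitrary examples):

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   print(core.available_devices)  # e.g. ['CPU', 'GPU'] on a machine with both

   # Model caching: compiled blobs are saved to and reloaded from this directory,
   # which can shorten subsequent compile_model() calls on supporting devices.
   core.set_property({"CACHE_DIR": "./model_cache"})

   model = core.read_model("model.xml")
   compiled = core.compile_model(model, "CPU")
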
@@ -80,10 +85,10 @@ topic (step 3 "Configure input and output").

.. note::

-   With OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
+   With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
   in your solutions, revert to the 2023.3 (LTS) version.

-   With OpenVINO™ 2023.0 release, support has been cancelled for:
+   With the OpenVINO™ 2023.0 release, support has been cancelled for:
   - Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
   - Intel® Vision Accelerator Design with Intel® Movidius™
