
Commit 4394ca9

Docs merging conformance articles 24.3 (#26222)
1 parent dd0fe60 commit 4394ca9

27 files changed: +7384 -32319 lines

docs/articles_en/about-openvino/compatibility-and-support.rst
+8 -9

@@ -7,18 +7,17 @@ Compatibility and Support
    :hidden:

    compatibility-and-support/supported-devices
+   compatibility-and-support/supported-operations
    compatibility-and-support/supported-models
-   compatibility-and-support/supported-operations-inference-devices
-   compatibility-and-support/supported-operations-framework-frontend


-:doc:`Supported Devices <compatibility-and-support/supported-devices>` - compatibility information for supported hardware accelerators.
-
-:doc:`Supported Models <compatibility-and-support/supported-models>` - a list of selected models confirmed to work with given hardware.
-
-:doc:`Supported Operations <compatibility-and-support/supported-operations-inference-devices>` - a listing of framework layers supported by OpenVINO.
-
-:doc:`Supported Operations <compatibility-and-support/supported-operations-framework-frontend>` - a listing of layers supported by OpenVINO inference devices.
+| :doc:`Supported Devices <compatibility-and-support/supported-devices>`:
+|    compatibility information for supported hardware accelerators.
+
+| :doc:`Supported Operations <compatibility-and-support/supported-operations>`:
+|    a listing of operations supported by OpenVINO, based on device and frontend conformance.
+
+| :doc:`AI Models verified for OpenVINO™ <compatibility-and-support/supported-models>`:
+|    a list of selected models confirmed to work with Intel® Core Ultra™ Processors with the
+     OpenVINO™ toolkit.
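For context, "conformance" here refers to which operations a given device or frontend can actually execute. A hedged sketch of checking this for a concrete model with the OpenVINO Python API, where the model path is a placeholder:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # placeholder IR path

   # query_model maps each supported operation (by friendly name) to the
   # device that can run it; operations absent from the map are unsupported.
   supported = core.query_model(model, "CPU")
   unsupported = [op.get_friendly_name() for op in model.get_ops()
                  if op.get_friendly_name() not in supported]
   print(f"{len(supported)} supported ops, {len(unsupported)} unsupported")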

docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst
+21 -16

@@ -1,12 +1,12 @@
-Supported Inference Devices
-============================
+Supported Devices
+===============================================================================================


 .. meta::
    :description: Check the list of devices used by OpenVINO to run inference
                  of deep learning models.


-The OpenVINO™ runtime enables you to use a selection of devices to run your
+The OpenVINO™ runtime enables you to use the following devices to run your
 deep learning models:
 :doc:`CPU <../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device>`,
 :doc:`GPU <../../openvino-workflow/running-inference/inference-devices-and-modes/gpu-device>`,
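For context, selecting one of these devices happens at compile time. A minimal sketch, assuming an IR model at a placeholder path and that the target device is present on the machine:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

   # "model.xml" is a placeholder; compile_model also accepts an ov.Model.
   compiled = core.compile_model("model.xml", "CPU")  # or "GPU", "NPU"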
@@ -18,16 +18,20 @@ deep learning models:
 Beside running inference with a specific device,
 OpenVINO offers the option of running automated inference with the following inference modes:

-* :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` - automatically selects the best device
-  available for the given task. It offers many additional options and optimizations, including inference on
-  multiple devices at the same time.
-* :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` - enables splitting inference among several devices
-  automatically, for example, if one device doesn't support certain operations.
-* :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>` - executes inference on multiple devices.
-  Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.
-* :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` - automatically groups inference requests to improve
-  device utilization.
+| :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`:
+|    automatically selects the best device available for the given task. It offers many
+     additional options and optimizations, including inference on multiple devices at the
+     same time.
+| :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`:
+|    enables splitting inference among several devices automatically, for example, if one device
+     doesn't support certain operations.

+| :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`:
+|    automatically groups inference requests to improve device utilization.

+| :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>`:
+|    executes inference on multiple devices. Currently, this mode is considered a legacy
+     solution. Using Automatic Device Selection instead is advised.

Feature Support and API Coverage
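
For context, the inference modes listed in the hunk above map to device strings passed at compile time. A brief, hedged sketch of the three non-legacy modes, with a placeholder model path:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # placeholder IR path

   # Automatic Device Selection: picks the best available device.
   auto = core.compile_model(model, "AUTO")

   # Heterogeneous Inference: runs on GPU where possible, falls back to CPU
   # for operations the GPU does not support.
   hetero = core.compile_model(model, "HETERO:GPU,CPU")

   # Automatic Batching: transparently groups concurrent requests on the GPU.
   batched = core.compile_model(model, "BATCH:GPU")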
@@ -36,16 +40,17 @@ Feature Support and API Coverage

 ======================================================================================================================================== ======= ========== ===========
 Supported Feature                                                                                                                        CPU     GPU        NPU
 ======================================================================================================================================== ======= ========== ===========
+:doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`          Yes     Yes        Partial
 :doc:`Heterogeneous execution <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`                  Yes     Yes        No
-:doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>`                                              Yes     Yes        Partial
 :doc:`Automatic batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`                     No      Yes        No
 :doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>`                       Yes     Yes        No
-:doc:`Models caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`           Yes     Partial    Yes
+:doc:`Model caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`            Yes     Partial    Yes
 :doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>`                                                         Yes     Partial    No
 :doc:`Import/Export <../../documentation/openvino-ecosystem>`                                                                            Yes     Yes        Yes
 :doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>`                  Yes     Yes        No
 :doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>`                                                       Yes     Yes        Yes
 :doc:`Extensibility <../../documentation/openvino-extensibility>`                                                                        Yes     Yes        No
+:doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>`                                              Yes     Yes        Partial
 ======================================================================================================================================== ======= ========== ===========
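Of the features in the table, model caching is the one typically enabled with a single property. A minimal sketch, assuming a placeholder cache directory and model path:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   # The first compile_model call populates the cache; subsequent calls for
   # the same model and device load the cached blob instead of recompiling.
   core.set_property({"CACHE_DIR": "./ov_cache"})  # placeholder directory
   compiled = core.compile_model("model.xml", "GPU")  # placeholder model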

@@ -80,10 +85,10 @@ topic (step 3 "Configure input and output").

 .. note::

-   With OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
+   With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
    in your solutions, revert to the 2023.3 (LTS) version.

-   With OpenVINO™ 2023.0 release, support has been cancelled for:
+   With the OpenVINO™ 2023.0 release, support has been cancelled for:

    - Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
    - Intel® Vision Accelerator Design with Intel® Movidius™
