Docs merging conformance articles 24.3 #26222 (Merged)
docs/articles_en/about-openvino/compatibility-and-support.rst (17 changes: 8 additions & 9 deletions)
@@ -7,18 +7,17 @@ Compatibility and Support
:hidden:

   compatibility-and-support/supported-devices
+  compatibility-and-support/supported-operations
   compatibility-and-support/supported-models
-  compatibility-and-support/supported-operations-inference-devices
-  compatibility-and-support/supported-operations-framework-frontend


-:doc:`Supported Devices <compatibility-and-support/supported-devices>` - compatibility information for supported hardware accelerators.

-:doc:`Supported Models <compatibility-and-support/supported-models>` - a list of selected models confirmed to work with given hardware.

-:doc:`Supported Operations <compatibility-and-support/supported-operations-inference-devices>` - a listing of framework layers supported by OpenVINO.

-:doc:`Supported Operations <compatibility-and-support/supported-operations-framework-frontend>` - a listing of layers supported by OpenVINO inference devices.
+| :doc:`Supported Devices <compatibility-and-support/supported-devices>`:
+| compatibility information for supported hardware accelerators.

+| :doc:`Supported Operations <compatibility-and-support/supported-operations>`:
+| a listing of operations supported by OpenVINO, based on device and frontend conformance.

+| :doc:`AI Models verified for OpenVINO™ <compatibility-and-support/supported-models>`:
+| a list of selected models confirmed to work with Intel® Core Ultra™ Processors with the
+  OpenVINO™ toolkit.
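
A minimal sketch (not part of this diff) of how operation support can be checked at
runtime with the OpenVINO Python API; the path ``model.xml`` is a placeholder:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # placeholder model path

   # query_model() maps each operation in the model to the device that
   # can execute it; operations missing from the map are unsupported.
   supported = core.query_model(model, "CPU")
   unsupported = [op.get_friendly_name() for op in model.get_ops()
                  if op.get_friendly_name() not in supported]
   print("unsupported on CPU:", unsupported)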

@@ -1,12 +1,12 @@
-Supported Inference Devices
-============================
+Supported Devices
+===============================================================================================

.. meta::
:description: Check the list of devices used by OpenVINO to run inference
of deep learning models.


-The OpenVINO™ runtime enables you to use a selection of devices to run your
+The OpenVINO™ runtime enables you to use the following devices to run your
deep learning models:
:doc:`CPU <../../openvino-workflow/running-inference/inference-devices-and-modes/cpu-device>`,
:doc:`GPU <../../openvino-workflow/running-inference/inference-devices-and-modes/gpu-device>`,
@@ -18,16 +18,20 @@ deep learning models:
Besides running inference with a specific device,
OpenVINO offers the option of running automated inference with the following inference modes:

-* :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` - automatically selects the best device
-  available for the given task. It offers many additional options and optimizations, including inference on
-  multiple devices at the same time.
-* :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` - enables splitting inference among several devices
-  automatically, for example, if one device doesn't support certain operations.
-* :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>` - executes inference on multiple devices.
-  Currently, this mode is considered a legacy solution. Using Automatic Device Selection is advised.
-* :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` - automatically groups inference requests to improve
-  device utilization.
+| :doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`:
+| automatically selects the best device available for the given task. It offers many
+  additional options and optimizations, including inference on multiple devices at the
+  same time.

+| :doc:`Heterogeneous Inference <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>`:
+| enables splitting inference among several devices automatically, for example, if one device
+  doesn't support certain operations.

+| :doc:`Automatic Batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>`:
+| automatically groups inference requests to improve device utilization.

+| :doc:`(LEGACY) Multi-device Inference <./../../documentation/legacy-features/multi-device>`:
+| executes inference on multiple devices. Currently, this mode is considered a legacy
+  solution. Using Automatic Device Selection instead is advised.
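
A short sketch (not from the PR itself) of how these modes are requested through the
Python API, using the documented ``AUTO``, ``HETERO`` and ``BATCH`` device strings;
``model.xml`` is again a placeholder:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   print(core.available_devices)  # e.g. ['CPU', 'GPU']

   model = core.read_model("model.xml")  # placeholder model path

   # Automatic Device Selection: OpenVINO picks the best available device.
   compiled_auto = core.compile_model(model, "AUTO")

   # Heterogeneous Inference: prefer GPU, fall back to CPU for
   # operations the GPU does not support.
   compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")

   # Automatic Batching: group inference requests to raise utilization.
   compiled_batch = core.compile_model(model, "BATCH:GPU")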


Feature Support and API Coverage
@@ -36,16 +40,17 @@ Feature Support and API Coverage
======================================================================================================================================== ======= ========== ===========
Supported Feature CPU GPU NPU
======================================================================================================================================== ======= ========== ===========
:doc:`Automatic Device Selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` Yes Yes Partial
:doc:`Heterogeneous execution <../../openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution>` Yes Yes No
-:doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>`                                               Yes      Yes        Partial
:doc:`Automatic batching <../../openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching>` No Yes No
:doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>` Yes Yes No
-:doc:`Models caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`            Yes      Partial    Yes
+:doc:`Model caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>`             Yes      Partial    Yes
:doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>` Yes Partial No
:doc:`Import/Export <../../documentation/openvino-ecosystem>` Yes Yes Yes
:doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` Yes Yes No
:doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>` Yes Yes Yes
:doc:`Extensibility <../../documentation/openvino-extensibility>` Yes Yes No
+:doc:`(LEGACY) Multi-device execution <./../../documentation/legacy-features/multi-device>`                                               Yes      Yes        Partial
======================================================================================================================================== ======= ========== ===========
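
As one concrete example of a feature from this table, a model-caching sketch in Python
(the cache directory name is an arbitrary choice, and ``model.xml`` is a placeholder):

.. code-block:: python

   import openvino as ov

   core = ov.Core()

   # Model caching: compiled blobs are written to disk and reused on
   # subsequent runs, reducing model compile / startup time.
   core.set_property({"CACHE_DIR": "ov_cache"})  # arbitrary directory

   model = core.read_model("model.xml")  # placeholder model path
   compiled = core.compile_model(model, "CPU")  # later runs load from cache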


@@ -80,10 +85,10 @@ topic (step 3 "Configure input and output").

.. note::

-   With OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
+   With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it
in your solutions, revert to the 2023.3 (LTS) version.

-   With OpenVINO™ 2023.0 release, support has been cancelled for:
+   With the OpenVINO™ 2023.0 release, support has been cancelled for:
- Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
- Intel® Vision Accelerator Design with Intel® Movidius™
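
For pip-based installations, reverting to the LTS release means pinning the package,
e.g. ``pip install openvino==2023.3.0`` (exact version string assumed, not taken from this PR).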
