
Commit 9930aea (parent: 61b6da8)

[DOCS] Menu Restructuring for 2025 - pass 5 (#28748)

Moves the `Compatibility and Support` and `Tool Ecosystem` sections within the menu.

25 files changed (+25 −25 lines)

docs/articles_en/about-openvino.rst (+3 −3)

@@ -7,7 +7,7 @@ About OpenVINO
 
 about-openvino/key-features
 about-openvino/performance-benchmarks
-about-openvino/compatibility-and-support
+OpenVINO Ecosystem <about-openvino/openvino-ecosystem>
 about-openvino/contributing
 Release Notes <about-openvino/release-notes-openvino>
 
@@ -42,8 +42,8 @@ Along with the primary components of model optimization and runtime, the toolkit
 * `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__ - a tool for enhanced OpenVINO™ inference to get performance boost with minimal accuracy drop.
 * :doc:`Openvino Notebooks <get-started/learn-openvino/interactive-tutorials-python>`- Jupyter Python notebook, which demonstrate key features of the toolkit.
 * `OpenVINO Model Server <https://github.com/openvinotoolkit/model_server>`__ - a server that enables scalability via a serving microservice.
-* :doc:`OpenVINO Training Extensions <documentation/openvino-ecosystem/openvino-training-extensions>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
-* :doc:`Dataset Management Framework (Datumaro) <documentation/openvino-ecosystem/datumaro>` - a tool to build, transform, and analyze datasets.
+* :doc:`OpenVINO Training Extensions <about-openvino/openvino-ecosystem/openvino-training-extensions>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
+* :doc:`Dataset Management Framework (Datumaro) <about-openvino/openvino-ecosystem/datumaro>` - a tool to build, transform, and analyze datasets.
 
 Community
 ##############################################################

docs/articles_en/documentation/openvino-ecosystem.rst → docs/articles_en/about-openvino/openvino-ecosystem.rst (+3 −3)

@@ -68,7 +68,7 @@ without the need to convert.
 
 | **OpenVINO Training Extensions**
 | :bdg-link-dark:`Github <https://github.com/openvinotoolkit/training_extensions>`
-  :bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/documentation/openvino-ecosystem/openvino-training-extensions.html>`
+  :bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/about-openvino/openvino-ecosystem/openvino-training-extensions.html>`
 
 A convenient environment to train Deep Learning models and convert them using the OpenVINO™
 toolkit for optimized inference.
@@ -77,7 +77,7 @@ toolkit for optimized inference.
 
 | **OpenVINO Security Addon**
 | :bdg-link-dark:`Github <https://github.com/openvinotoolkit/security_addon>`
-  :bdg-link-success:`User Guide <https://docs.openvino.ai/2025/documentation/openvino-ecosystem/openvino-security-add-on.html>`
+  :bdg-link-success:`User Guide <https://docs.openvino.ai/2025/about-openvino/openvino-ecosystem/openvino-security-add-on.html>`
 
 A solution for Model Developers and Independent Software Vendors to use secure packaging and
 secure model execution.
@@ -86,7 +86,7 @@ secure model execution.
 
 | **Datumaro**
 | :bdg-link-dark:`Github <https://github.com/openvinotoolkit/datumaro>`
-  :bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/documentation/openvino-ecosystem/datumaro.html>`
+  :bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/about-openvino/openvino-ecosystem/datumaro.html>`
 
 A framework and a CLI tool for building, transforming, and analyzing datasets.
 |hr|

docs/articles_en/about-openvino/performance-benchmarks/getting-performance-numbers.rst (+1 −1)

@@ -165,7 +165,7 @@ as:
    benchmark_app -m <model> -d <device> -i <input>
 
 
-Each of the :doc:`OpenVINO supported devices <../compatibility-and-support/supported-devices>`
+Each of the :doc:`OpenVINO supported devices <../../documentation/compatibility-and-support/supported-devices>`
 offers performance settings that contain command-line equivalents in the Benchmark app.
 
 While these settings provide really low-level control for the optimal model performance on a
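The hunk above keeps the article's `benchmark_app` invocation. As a small illustration of composing that command line programmatically (a pure-Python sketch; the `-m`, `-d`, and `-i` flags come from the article's example, while the optional `-hint` flag is an assumption beyond what this diff shows):

```python
# Sketch of building a benchmark_app command line. -m/-d/-i mirror the
# article's example; -hint is an assumed optional extra, not from the diff.
def build_benchmark_cmd(model, device, input_path, hint=None):
    cmd = ["benchmark_app", "-m", model, "-d", device, "-i", input_path]
    if hint is not None:
        cmd += ["-hint", hint]  # e.g. "throughput" or "latency"
    return cmd
```

Such a list can be handed directly to `subprocess.run` without shell quoting concerns.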

docs/articles_en/about-openvino/performance-benchmarks/performance-benchmarks-faq.rst (+1 −1)

@@ -97,7 +97,7 @@ Performance Information F.A.Q.
 
 Intel partners with vendors all over the world. For a list of Hardware Manufacturers, see the
 `Intel® AI: In Production Partners & Solutions Catalog <https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/partners-solutions-catalog.html>`__.
-For more details, see the :doc:`Supported Devices <../compatibility-and-support/supported-devices>` article.
+For more details, see the :doc:`Supported Devices <../../documentation/compatibility-and-support/supported-devices>` article.
 
 
 .. dropdown:: How can I optimize my models for better performance or accuracy?

docs/articles_en/documentation.rst (+3 −3)

@@ -13,18 +13,18 @@ Documentation
 
 API Reference <api/api_reference>
 OpenVINO IR format and Operation Sets <documentation/openvino-ir-format>
-Tool Ecosystem <documentation/openvino-ecosystem>
+Compatibility and Support <documentation/compatibility-and-support>
+Legacy Features <documentation/legacy-features>
 OpenVINO Extensibility <documentation/openvino-extensibility>
 OpenVINO™ Security <documentation/openvino-security>
-Legacy Features <documentation/legacy-features>
 
 
 This section provides reference documents that guide you through the OpenVINO toolkit workflow, from preparing models, optimizing them, to deploying them in your own deep learning applications.
 
 | :doc:`API Reference doc path <api/api_reference>`
 | A collection of reference articles for OpenVINO C++, C, and Python APIs.
 
-| :doc:`OpenVINO Ecosystem <documentation/openvino-ecosystem>`
+| :doc:`OpenVINO Ecosystem <about-openvino/openvino-ecosystem>`
 | Apart from the core components, OpenVINO offers tools, plugins, and expansions revolving around it, even if not constituting necessary parts of its workflow. This section gives you an overview of what makes up the OpenVINO toolkit.
 
 | :doc:`OpenVINO Extensibility Mechanism <documentation/openvino-extensibility>`

docs/articles_en/about-openvino/compatibility-and-support/supported-devices.rst → docs/articles_en/documentation/compatibility-and-support/supported-devices.rst (+2 −2)

@@ -13,7 +13,7 @@ deep learning models:
 :doc:`NPU <../../openvino-workflow/running-inference/inference-devices-and-modes/npu-device>`.
 
 | For their usage guides, see :doc:`Devices and Modes <../../openvino-workflow/running-inference/inference-devices-and-modes>`.
-| For a detailed list of devices, see :doc:`System Requirements <../release-notes-openvino/system-requirements>`.
+| For a detailed list of devices, see :doc:`System Requirements <../../about-openvino/release-notes-openvino/system-requirements>`.
 
 
 Beside running inference with a specific device,
@@ -43,7 +43,7 @@ Feature Support and API Coverage
 :doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>` Yes Yes No
 :doc:`Model caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>` Yes Partial Yes
 :doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>` Yes Partial No
-:doc:`Import/Export <../../documentation/openvino-ecosystem>` Yes Yes Yes
+:doc:`Import/Export <../../about-openvino/openvino-ecosystem>` Yes Yes Yes
 :doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` Yes Yes No
 :doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>` Yes Yes Yes
 :doc:`Extensibility <../../documentation/openvino-extensibility>` Yes Yes No

docs/articles_en/documentation/openvino-extensibility.rst (+1 −1)

@@ -25,7 +25,7 @@ OpenVINO Extensibility Mechanism
 
 The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
 TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle. The list of supported operations is different for each of the supported frameworks.
-To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <../about-openvino/compatibility-and-support/supported-operations>`.
+To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <../documentation/compatibility-and-support/supported-operations>`.
 
 Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for custom operation may appear in two cases:
 

docs/articles_en/documentation/openvino-security.rst (+1 −1)

@@ -8,7 +8,7 @@ with encryption or other security tools.
 Actual security and privacy requirements depend on your unique deployment scenario.
 This section provides general guidance on using OpenVINO tools and libraries securely.
 The main security measure for OpenVINO is its
-:doc:`Security Add-on <openvino-ecosystem/openvino-security-add-on>`. You can find its description
+:doc:`Security Add-on <../about-openvino/openvino-ecosystem/openvino-security-add-on>`. You can find its description
 in the Ecosystem section.
 
 .. _encrypted-models:

docs/articles_en/openvino-workflow/model-preparation/convert-model-onnx.rst (+1 −1)

@@ -62,7 +62,7 @@ Supported ONNX Layers
 #####################
 
 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.
 
 Additional Resources

docs/articles_en/openvino-workflow/model-preparation/convert-model-paddle.rst (+1 −1)

@@ -152,7 +152,7 @@ Supported PaddlePaddle Layers
 #############################
 
 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.
 
 

docs/articles_en/openvino-workflow/model-preparation/convert-model-tensorflow-lite.rst (+1 −1)

@@ -42,7 +42,7 @@ Supported TensorFlow Lite Layers
 ###################################
 
 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.
 
 Supported TensorFlow Lite Models

docs/articles_en/openvino-workflow/model-preparation/convert-model-tensorflow.rst (+1 −1)

@@ -383,7 +383,7 @@ Supported TensorFlow and TensorFlow 2 Keras Layers
 ##################################################
 
 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.
 
 

docs/articles_en/openvino-workflow/model-preparation/setting-input-shapes.rst (+1 −1)

@@ -22,7 +22,7 @@ To learn more about dynamic shapes in runtime, refer to the
 you can visit `Hugging Face <https://huggingface.co/models>`__.
 
 The OpenVINO Runtime API may present certain limitations in inferring models with undefined
-dimensions on some hardware. See the :doc:`Feature support matrix <../../about-openvino/compatibility-and-support/supported-devices>`
+dimensions on some hardware. See the :doc:`Feature support matrix <../../documentation/compatibility-and-support/supported-devices>`
 for reference. In this case, the ``input`` parameter and the
 :doc:`reshape method <../running-inference/changing-input-shape>` can help to resolve undefined
 dimensions.

docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst (+1 −1)

@@ -190,7 +190,7 @@ For the same reason, it is not recommended to leave dimensions as undefined, wit
 
 When specifying bounds, the lower bound is not as important as the upper one. The upper bound allows inference devices to allocate memory for intermediate tensors more precisely. It also allows using a fewer number of tuned kernels for different sizes.
 More precisely, benefits of specifying the lower or upper bound is device dependent.
-Depending on the plugin, specifying the upper bounds can be required. For information about dynamic shapes support on different devices, refer to the :doc:`feature support table <../../about-openvino/compatibility-and-support/supported-devices>`.
+Depending on the plugin, specifying the upper bounds can be required. For information about dynamic shapes support on different devices, refer to the :doc:`feature support table <../../documentation/compatibility-and-support/supported-devices>`.
 
 If the lower and upper bounds for a dimension are known, it is recommended to specify them, even if a plugin can execute a model without the bounds.
 
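The passage edited in this hunk explains why an upper bound lets a device pre-allocate intermediate tensors precisely. A rough pure-Python illustration of that point (not the OpenVINO API; the shape encoding with `(lower, upper)` tuples is invented for the example):

```python
# Illustrative only: a dimension is a fixed int or a (lower, upper) tuple,
# with upper=None standing for an unbounded dynamic dimension.
def worst_case_elements(shape):
    """Largest element count a device would have to pre-allocate for."""
    total = 1
    for dim in shape:
        lower, upper = (dim, dim) if isinstance(dim, int) else dim
        if upper is None:
            return None  # unbounded dimension: buffer size is unknowable
        total *= upper
    return total
```

With a bounded shape like `[1, (1, 8), 224, 224]` the worst case is computable; leave the batch dimension unbounded and it is not, which is exactly why some plugins require upper bounds.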

docs/articles_en/openvino-workflow/running-inference/optimize-inference/high-level-performance-hints.rst (+2 −2)

@@ -8,7 +8,7 @@ High-level Performance Hints
 an inference device.
 
 
-Even though all :doc:`supported devices <../../../about-openvino/compatibility-and-support/supported-devices>` in OpenVINO™ offer low-level performance settings, utilizing them is not recommended outside of very few cases.
+Even though all :doc:`supported devices <../../../documentation/compatibility-and-support/supported-devices>` in OpenVINO™ offer low-level performance settings, utilizing them is not recommended outside of very few cases.
 The preferred way to configure performance in OpenVINO Runtime is using performance hints. This is a future-proof solution fully compatible with the :doc:`automatic device selection inference mode <../inference-devices-and-modes/auto-device-selection>` and designed with *portability* in mind.
 
 The hints also set the direction of the configuration in the right order. Instead of mapping the application needs to the low-level performance settings, and keeping an associated application logic to configure each possible device separately, the hints express a target scenario with a single config key and let the *device* configure itself in response.
@@ -32,7 +32,7 @@ Performance Hints: How It Works
 
 Internally, every device "translates" the value of the hint to the actual performance settings.
 For example, the ``ov::hint::PerformanceMode::THROUGHPUT`` selects the number of CPU or GPU streams.
-Additionally, the optimal batch size is selected for the GPU and the :doc:`automatic batching <../inference-devices-and-modes/automatic-batching>` is applied whenever possible. To check whether the device supports it, refer to the :doc:`Supported devices <../../../about-openvino/compatibility-and-support/supported-devices>` article.
+Additionally, the optimal batch size is selected for the GPU and the :doc:`automatic batching <../inference-devices-and-modes/automatic-batching>` is applied whenever possible. To check whether the device supports it, refer to the :doc:`Supported devices <../../../documentation/compatibility-and-support/supported-devices>` article.
 
 The resulting (device-specific) settings can be queried back from the instance of the ``ov:Compiled_Model``.
 Be aware that the ``benchmark_app`` outputs the actual settings for the ``THROUGHPUT`` hint. See the example of the output below:
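The second hunk edits the passage on how each device "translates" a hint into low-level settings such as a stream count. A toy sketch of that translation step (the mapping and numbers are invented for illustration; real OpenVINO plugins use device-specific heuristics):

```python
# Toy illustration of hint "translation". The heuristics here are
# invented; actual plugins derive stream counts from hardware queries.
def streams_for_hint(hint, num_cores):
    if hint == "LATENCY":
        return 1                       # one stream: fastest single result
    if hint == "THROUGHPUT":
        return max(1, num_cores // 2)  # several streams keep cores busy
    raise ValueError(f"unknown performance hint: {hint!r}")
```

The point of the hint API is that the application states only `LATENCY` or `THROUGHPUT`; the per-device mapping stays inside the plugin.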

docs/notebooks/auto-device-with-output.rst (+1 −1)

@@ -6,7 +6,7 @@ device <https://docs.openvino.ai/2025/openvino-workflow/running-inference/infere
 (or AUTO in short) selects the most suitable device for inference by
 considering the model precision, power efficiency and processing
 capability of the available `compute
-devices <https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html>`__.
+devices <https://docs.openvino.ai/2025/documentation/compatibility-and-support/supported-devices.html>`__.
 The model precision (such as ``FP32``, ``FP16``, ``INT8``, etc.) is the
 first consideration to filter out the devices that cannot run the
 network efficiently.
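The notebook text in this hunk describes AUTO filtering out devices that cannot run the model's precision before picking by capability. A pure-Python sketch of that filtering idea (the capability table and priority order are assumptions for illustration; the real AUTO plugin queries devices at runtime):

```python
# Assumed capability table and priority order, for illustration only.
DEVICE_PRECISIONS = {
    "NPU": {"FP16", "INT8"},
    "GPU": {"FP32", "FP16", "INT8"},
    "CPU": {"FP32", "FP16", "INT8"},
}
DEVICE_PRIORITY = ["NPU", "GPU", "CPU"]

def select_device(model_precision):
    """Pick the highest-priority device that supports the model precision."""
    for device in DEVICE_PRIORITY:
        if model_precision in DEVICE_PRECISIONS[device]:
            return device
    return "CPU"  # CPU as the universal fallback
```

Under these assumed tables, an `FP32` model skips the NPU and lands on the GPU, matching the "filter by precision first" behavior the notebook describes.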

docs/sphinx_setup/index.rst (+1 −1)

@@ -196,5 +196,5 @@ Key Features
 LEARN OPENVINO <learn-openvino>
 HOW TO USE - MAIN WORKFLOW <openvino-workflow>
 HOW TO USE - GENERATIVE AI WORKFLOW <openvino-workflow-generative>
-DOCUMENTATION <documentation>
+REFERENCE DOCUMENTATION <documentation>
 ABOUT OPENVINO <about-openvino>
