README.md (+1 -1)
@@ -35,7 +35,7 @@ Check out the [OpenVINO Cheat Sheet](https://docs.openvino.ai/2025/_static/downl
 pip install -U openvino
 ```

-Check [system requirements](https://docs.openvino.ai/2025/about-openvino/system-requirements.html) and [supported devices](https://docs.openvino.ai/2025/documentation/compatibility-and-support/supported-devices.html) for detailed information.
+Check [system requirements](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html) and [supported devices](https://docs.openvino.ai/2025/documentation/compatibility-and-support/supported-devices.html) for detailed information.
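
To go with the install command and the requirements links above, a quick sanity check can confirm the package imports and show which of the supported devices are actually visible on the machine. This is a minimal sketch assuming the standard `openvino` Python API (`Core`, `available_devices`, `get_property`); the device list it prints is what the supported-devices page documents in detail.

```python
# Minimal post-install check: list the devices OpenVINO Runtime can see locally.
from openvino import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU']
for device in core.available_devices:
    # FULL_DEVICE_NAME is a standard read-only property exposed by device plugins.
    print(f"{device}: {core.get_property(device, 'FULL_DEVICE_NAME')}")
```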
cmake/features.cmake (+1 -1)
@@ -52,7 +52,7 @@ ov_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at run
 ov_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
 ov_dependent_option (ENABLE_SNIPPETS_DEBUG_CAPS "enable Snippets debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)

-ov_dependent_option (ENABLE_SNIPPETS_LIBXSMM_TPP "allow Snippets to use LIBXSMM Tensor Processing Primitives" OFF "ENABLE_INTEL_CPU AND X86_64" OFF)
+ov_dependent_option (ENABLE_SNIPPETS_LIBXSMM_TPP "allow Snippets to use LIBXSMM Tensor Processing Primitives" OFF "ENABLE_INTEL_CPU AND (X86_64 OR AARCH64)" OFF)

 ov_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library though INTEL_VTUNE_DIR variable." OFF)
docs/articles_en/about-openvino/openvino-ecosystem/openvino-project/openvino-security-add-on.rst (+1 -1)
@@ -17,7 +17,7 @@ In this release, one person performs the role of both the Model Developer and th
 Overview
 ########

-The OpenVINO™ Security Add-on works with the :doc:`OpenVINO™ Model Server <../../../openvino-workflow/model-server/ovms_what_is_openvino_model_server>` on Intel® architecture. Together, the OpenVINO™ Security Add-on and the OpenVINO™ Model Server provide a way for Model Developers and Independent Software Vendors to use secure packaging and secure model execution to enable access control to the OpenVINO™ models, and for model Users to run inference within assigned limits.
+The OpenVINO™ Security Add-on works with the :doc:`OpenVINO™ Model Server <../../../../model-server/ovms_what_is_openvino_model_server>` on Intel® architecture. Together, the OpenVINO™ Security Add-on and the OpenVINO™ Model Server provide a way for Model Developers and Independent Software Vendors to use secure packaging and secure model execution to enable access control to the OpenVINO™ models, and for model Users to run inference within assigned limits.

 The OpenVINO™ Security Add-on consists of three components that run in Kernel-based Virtual Machines (KVMs). These components provide a way to run security-sensitive operations in an isolated environment. A brief description of the three components are as follows. Click each triangled line for more information about each.
(docs file, name not captured on this page)

-Deployment on a Local System <openvino-workflow/deployment-locally>
-Deployment on a Model Server <openvino-workflow/model-server/ovms_what_is_openvino_model_server>
+Deployment on a Local System <openvino-workflow/deployment-locally>
 openvino-workflow/torch-compile

@@ -86,11 +85,11 @@ OpenVINO uses the following functions for reading, converting, and saving models
 and the quickest way of running a deep learning model.

 |:doc:`Deployment Option 1. Using OpenVINO Runtime <openvino-workflow/deployment-locally>`
-|Deploy a model locally, reading the file directly from your application and utilizing about-openvino/additional-resources available to the system.
+|Deploy a model locally, reading the file directly from your application and utilizing resources available to the system.
 |Deployment on a local system uses the steps described in the section on running inference.

-|:doc:`Deployment Option 2. Using Model Server <openvino-workflow/model-server/ovms_what_is_openvino_model_server>`
-|Deploy a model remotely, connecting your application to an inference server and utilizing external about-openvino/additional-resources, with no impact on the app's performance.
+|:doc:`Deployment Option 2. Using Model Server <../model-server/ovms_what_is_openvino_model_server>`
+|Deploy a model remotely, connecting your application to an inference server and utilizing external resources, with no impact on the app's performance.
 |Deployment on OpenVINO Model Server is quick and does not require any additional steps described in the section on running inference.

 |:doc:`Deployment Option 3. Using torch.compile for PyTorch 2.0 <openvino-workflow/torch-compile>`
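
To make Deployment Option 1 above concrete, here is a minimal sketch of local deployment with OpenVINO Runtime: the application reads the model file directly and runs inference on a device available to the system. The file name `model.xml`, the `CPU` device choice, and the input shape are illustrative placeholders; the calls themselves (`Core.read_model`, `Core.compile_model`, `create_infer_request`) are the standard Python API.

```python
# Deployment Option 1 (sketch): read a model file directly in the application
# and run inference locally with OpenVINO Runtime.
import numpy as np
from openvino import Core

core = Core()
model = core.read_model("model.xml")           # placeholder IR file; ONNX/TF formats load the same way
compiled = core.compile_model(model, "CPU")    # any device reported by core.available_devices

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input shape
request = compiled.create_infer_request()
request.infer([dummy])                         # positional inputs, one per model input
print(request.get_output_tensor(0).data.shape)
```

For Deployment Option 3, the integration point is `torch.compile(model, backend="openvino")`, which offloads supported subgraphs to OpenVINO while the rest of the model keeps running through the regular PyTorch path.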