
Commit 996f52b

msmykx-intel, kblaszczak-intel, and sgolebiewski-intel authored

[DOCS] New Key Features in docs - master (openvinotoolkit#26308)

New Key Features article for the documentation.

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

1 parent 796b75e commit 996f52b

File tree

4 files changed: +94 -20 lines changed

docs/articles_en/about-openvino.rst (+8 -18)

@@ -1,11 +1,11 @@
 About OpenVINO
 ==============

-
 .. toctree::
    :maxdepth: 1
    :hidden:

+   about-openvino/key-features
    about-openvino/performance-benchmarks
    about-openvino/compatibility-and-support
    about-openvino/contributing
@@ -15,29 +15,19 @@ OpenVINO is a toolkit for simple and efficient deployment of various deep learni
 In this section you will find information on the product itself, as well as the software
 and hardware solutions it supports.

-OpenVINO (Open Visual Inference and Neural network Optimization) is an open-source software toolkit designed to optimize, accelerate, and deploy deep learning models for user applications. OpenVINO was developed by Intel to work efficiently on a wide range of Intel hardware platforms, including CPUs (x86 and Arm), GPUs, and NPUs.
+OpenVINO (Open Visual Inference and Neural network Optimization) is an open-source software
+toolkit designed to optimize, accelerate, and deploy deep learning models for user applications.
+OpenVINO is actively developed by Intel® to work efficiently on a wide range of Intel® hardware platforms,
+including CPUs (x86 and Arm), GPUs, and NPUs.


 Features
 ##############################################################

-One of the main purposes of OpenVINO is to streamline the deployment of deep learning models in user applications. It optimizes and accelerates model inference, which is crucial for such domains as Generative AI, Large Language models, and use cases like object detection, classification, segmentation, and many others.
-
-* :doc:`Model Optimization <openvino-workflow/model-optimization>`
-
-  OpenVINO provides multiple optimization methods for both the training and post-training stages, including weight compression for Large Language models and Intel Optimum integration with Hugging Face.
-
-* :doc:`Model Conversion and Framework Compatibility <openvino-workflow/model-preparation>`
-
-  Supported models can be loaded directly or converted to the OpenVINO format to achieve better performance. Supported frameworks include ONNX, PyTorch, TensorFlow, TensorFlow Lite, Keras, and PaddlePaddle.
-
-* :doc:`Model Inference <openvino-workflow/running-inference>`
-
-  OpenVINO accelerates deep learning models on various hardware platforms, ensuring real-time, efficient inference.
-
-* `Deployment on a server <https://github.com/openvinotoolkit/model_server>`__
+One of the main purposes of the OpenVINO toolkit is to streamline integration and deployment of
+deep learning models. Yet its feature set is much wider, offering various optimization and implementation options.

-  A model can be deployed either locally using OpenVINO Runtime or on a model server. Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions. The model server enables quick model inference using external resources.
+To learn about the main properties of OpenVINO, see the :doc:`Key Features <about-openvino/key-features>`.

 Architecture
 ##############################################################
docs/articles_en/about-openvino/key-features.rst (new file, +69)

@@ -0,0 +1,69 @@
+Key Features
+==============
+
+Easy Integration
+#########################
+
+| :doc:`Support for multiple frameworks <../openvino-workflow/model-preparation/convert-model-to-ir>`
+| Use deep learning models from PyTorch, TensorFlow, TensorFlow Lite, PaddlePaddle, and ONNX
+  directly or convert them to the optimized OpenVINO IR format for improved performance.
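For reference, a minimal sketch of the conversion flow described above, using the `openvino` Python package and assuming a hypothetical local `model.onnx` file:

```python
import openvino as ov

# Load a model from a supported framework (here: an ONNX file) and
# convert it into an in-memory OpenVINO model.
model = ov.convert_model("model.onnx")

# Save it as OpenVINO IR (.xml + .bin) for faster loading later.
ov.save_model(model, "model.xml")
```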
+
+| :doc:`Close integration with PyTorch <../openvino-workflow/torch-compile>`
+| For PyTorch-based applications, specify OpenVINO as a backend using
+  :doc:`torch.compile <../openvino-workflow/torch-compile>` to improve model inference. Apply
+  OpenVINO optimizations to your PyTorch models directly with a single line of code.
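The "single line of code" above is the `backend` argument of `torch.compile`; a minimal sketch, assuming torchvision is installed and the `openvino` package provides the backend:

```python
import torch
import torchvision.models as models

model = models.resnet18(weights="DEFAULT").eval()

# Selecting OpenVINO as the torch.compile backend is the one-line change.
compiled_model = torch.compile(model, backend="openvino")

# The first call triggers compilation; later calls run through OpenVINO.
with torch.no_grad():
    output = compiled_model(torch.randn(1, 3, 224, 224))
```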
+
+| :doc:`GenAI Out Of The Box <../learn-openvino/llm_inference_guide/genai-guide>`
+| With the GenAI flavor of OpenVINO, you can run generative AI with just a couple of lines of code.
+  Check out the GenAI guide for instructions on how to do it.
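As a sketch of that "couple of lines" workflow, assuming the openvino-genai package and a hypothetical `./model_dir` containing a model already exported to OpenVINO format:

```python
import openvino_genai

# Load an LLM from an OpenVINO model directory and run it on the CPU.
pipe = openvino_genai.LLMPipeline("./model_dir", "CPU")
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```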
+
+| `Python / C++ / C / NodeJS APIs <https://docs.openvino.ai/2024/api/api_reference.html>`__
+| OpenVINO offers the C++ API as a complete set of available methods. For less resource-critical
+  solutions, the Python API provides almost full coverage, while the C and NodeJS ones are limited
+  to the methods most basic for their typical environments. The NodeJS API is still in its
+  early and active development.
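For illustration, the basic inference flow expressed through the Python API; a sketch assuming a static-shape `model.xml` produced by the conversion step above:

```python
import numpy as np
import openvino as ov

core = ov.Core()

# Read and compile the model for a specific device in one call.
compiled_model = core.compile_model("model.xml", "CPU")

# Run inference on random data shaped like the model's first input
# (assumes the input shape is static).
data = np.random.rand(*compiled_model.input(0).shape).astype(np.float32)
result = compiled_model(data)
```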
+
+| :doc:`Open source and easy to extend <../about-openvino/contributing>`
+| If you need a particular feature or inference accelerator to be supported, you are free to file
+  a feature request or develop new components specific to your projects yourself. As open source,
+  OpenVINO may be used and modified freely. See the extensibility guide for more information on
+  how to adapt it to your needs.
+
+Deployment
+#########################
+
+| :doc:`Local or remote <../openvino-workflow>`
+| Integrate the OpenVINO runtime directly with your application to run inference locally, or use
+  `OpenVINO Model Server <https://github.com/openvinotoolkit/model_server>`__ to shift the inference
+  workload to a remote system, a separate server, or a Kubernetes environment. For serving,
+  OpenVINO is also integrated with `vLLM <https://docs.vllm.ai/en/stable/getting_started/openvino-installation.html>`__
+  and `Triton <https://github.com/triton-inference-server/openvino_backend>`__ services.
+
+| :doc:`Scalable and portable <release-notes-openvino/system-requirements>`
+| Write an application once, deploy it anywhere, always making the most out of your hardware setup.
+  The automatic device selection mode gives you the ultimate deployment flexibility on all major
+  operating systems. Check out the system requirements.
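The automatic device selection mentioned above is exposed as the "AUTO" virtual device; a minimal sketch:

```python
import openvino as ov

core = ov.Core()

# "AUTO" picks the best available device (GPU, NPU, or CPU) at runtime,
# so the same application code runs unchanged on different machines.
compiled_model = core.compile_model("model.xml", "AUTO")
```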
+
+| **Light-weight**
+| Designed with minimal external dependencies, OpenVINO does not bloat your application
+  and simplifies installation and dependency management. Custom compilation for your specific
+  model(s) may further reduce the final binary size.
+
+Performance
+#########################
+
+| :doc:`Model Optimization <../openvino-workflow/model-optimization>`
+| Optimize your deep learning models with NNCF, using various training-time and post-training
+  compression methods, such as pruning, sparsity, quantization, and weight compression. Make
+  your models take less space, run faster, and use fewer resources.
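As an example of the post-training path, a minimal weight-compression sketch with NNCF, assuming the `nncf` package and the hypothetical `model.xml` from earlier:

```python
import nncf
import openvino as ov

model = ov.Core().read_model("model.xml")

# Compress weights to 8-bit integers: a post-training method that shrinks
# the model and speeds up memory-bound inference such as LLMs.
compressed_model = nncf.compress_weights(model)
ov.save_model(compressed_model, "model_int8.xml")
```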
+
+| :doc:`Top performance <../about-openvino/performance-benchmarks>`
+| OpenVINO is optimized to work with Intel hardware, delivering confirmed high performance for
+  hundreds of models. Explore OpenVINO Performance Benchmarks to discover the optimal hardware
+  configurations and plan your AI deployment based on verified data.
+
+| :doc:`Enhanced App Start-Up Time <../openvino-workflow/running-inference/optimize-inference>`
+| If you need your application to launch immediately, OpenVINO will reduce first-inference latency,
+  running inference on CPU until a more suited device is ready to take over. Once a model
+  is compiled for inference, it is also cached, improving the start-up time even more.
+
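The caching mentioned above can be enabled with a single property; a sketch, assuming a writable `./cache` directory:

```python
import openvino as ov

core = ov.Core()

# Cache compiled models on disk: the first compile_model() call fills the
# cache, and later application launches load from it much faster.
core.set_property({"CACHE_DIR": "./cache"})
compiled_model = core.compile_model("model.xml", "CPU")
```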

docs/sphinx_setup/_static/css/homepage_style.css (+10 -2)

@@ -163,13 +163,21 @@ h1 {
 }
 }

+p.sd-card-text > .sd-btn-outline-primary {
+  position: absolute;
+  bottom: 20px;
+}
+
 .sd-btn-outline-primary {
   color: #0054AE !important;
   text-decoration: none !important;
   background-color: #FFF !important;
   border-radius: 0;
-  position: absolute;
-  bottom: 20px;
+}
+
+p.sd-text-right:has(a.key-feat-btn) {
+  position: relative;
+  top: -50px;
 }

 .homepage-begin-tile {

docs/sphinx_setup/index.rst (+7)

@@ -148,6 +148,13 @@ Places to Begin
 Key Features
 ++++++++++++++++++++++++++++

+.. button-link:: about-openvino/key-features.html
+   :color: primary
+   :outline:
+   :align: right
+   :class: key-feat-btn
+
+   See all features

 .. grid:: 2 2 2 2
    :class-container: homepage_begin_container
