Commit 1ef751e

Update OVMS tab in Generative Workflow
1 parent 9b54cd7 commit 1ef751e

File tree

1 file changed (+9, -5)


docs/articles_en/openvino-workflow-generative.rst

+9, -5
@@ -62,12 +62,16 @@ options:
 | - Available in both Python and C++.
 | - Allows client applications in any programming language that supports REST or gRPC.
 
-Deploy deep learning models remotely, using
 :doc:`OpenVINO™ Model Server <model-server/ovms_what_is_openvino_model_server>`
-- a high-performance solution, developed in C++ for scalability and optimized for
-Intel architectures. The deployment is straightforward, as you simply connect
-your application via gRPC or REST endpoints to a server, where OpenVINO's logic for
-inference tasks is applied.
+provides a set of REST API endpoints dedicated to generative use cases. The endpoints
+simplify writing AI applications, ensure scalability, and provide state-of-the-art
+performance optimizations. They include OpenAI API for:
+`text generation <https://openvino-doc.iotg.sclab.intel.com/seba-test-8/model-server/ovms_docs_rest_api_chat.html>`__,
+`embeddings <https://openvino-doc.iotg.sclab.intel.com/seba-test-8/model-server/ovms_docs_rest_api_embeddings.html>`__,
+and `reranking <https://openvino-doc.iotg.sclab.intel.com/seba-test-8/model-server/ovms_docs_rest_api_rerank.html>`__.
+The model server supports deployments as containers or binary applications on Linux and Windows with CPU or GPU acceleration.
+See the :doc:`demos <model-server/ovms_docs_demos>`.
+
 
 
 The advantages of using OpenVINO for generative model deployment:
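The updated text points client applications at OpenAI-compatible REST endpoints. As a minimal sketch of what a request to such an endpoint looks like (the base URL, port, and model name are assumptions for illustration, not part of this commit; actual values depend on the deployment), a chat-completion payload could be built like this:

```python
import json

# Assumed deployment details: OpenVINO Model Server exposes its
# OpenAI-compatible API under the /v3 path on a locally started instance.
base_url = "http://localhost:8000/v3"

# OpenAI-style chat completion request body; the model name below is a
# placeholder for whichever model the server was started with.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
        {"role": "user", "content": "What is OpenVINO?"}
    ],
    "max_tokens": 128,
}

# The request would be sent with any HTTP client as:
#   POST {base_url}/chat/completions  with the JSON body above.
print(json.dumps(payload, indent=2))
```

The same shape applies to the embeddings and reranking endpoints described in the diff, with the body fields swapped for their respective request schemas.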

0 commit comments