Executorch initial support #28425
base: master
Conversation
I have several questions about this feature:
- Will it support ExecuTorch binaries compiled for the OV ExecuTorch backend?
- What other ExecuTorch binaries will be supported: default, xnnpack?
- We won't support ExecuTorch binary inference using the native OpenVINO workflow, right?
- Do you have a PR/branch with the ExecuTorch OV backend implementation?
Best regards,
Roman
@@ -83,3 +83,10 @@ def _is_testing(options) -> Optional[Any]:
        return True
    return False

+def _executorch(options) -> Optional[Any]:
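For context, a minimal sketch of what an options-based `_executorch` helper might look like if it mirrors the `_is_testing(options)` pattern above; the dict-style lookup and the "executorch" key name are assumptions, and the review below argues for a kwarg instead of an option.

```python
from typing import Any, Optional


def _executorch(options) -> Optional[Any]:
    # Hypothetical lookup: options is assumed to be a dict-like object and
    # "executorch" the key name; neither is confirmed by the diff above.
    if options is not None and "executorch" in options:
        return options["executorch"]
    return False
```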
If executorch is added to options, it will also be visible to torch.compile users and may break the flow if the option is set to True, since ExecuTorchPythonDecoder will be used. Instead, can we add it as a kwarg to openvino_compile and set it to False by default? It can then be set to True in the ExecuTorch OpenVINO backend, so the flag is not visible to the user. Please let me know your thoughts.
I see. I was trying not to increase the number of parameters for openvino_compile at first, but it makes sense not to expose this feature to users. I think it is a better idea to pass it as a kwarg; I will update accordingly.
Please let me know if additional questions or clarifications are needed.
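A rough sketch of the kwarg approach being agreed on here, assuming the decoder choice is the only thing that differs between the two flows; the branch below is illustrative, not the actual patch (the real signature change appears in the openvino_compile diff further down).

```python
from torch.fx import GraphModule
from openvino import Core
from openvino.frontend.pytorch.fx_decoder import TorchFXPythonDecoder, ExecuTorchPythonDecoder


def openvino_compile(gm: GraphModule, *args, model_hash_str: str = None, options=None, executorch=False):
    core = Core()
    # Regular torch.compile users never see the flag; only the ExecuTorch
    # OpenVINO backend is expected to call this with executorch=True.
    decoder = ExecuTorchPythonDecoder(gm) if executorch else TorchFXPythonDecoder(gm)
    ...  # conversion and compilation continue as before
```

The ExecuTorch backend would then call `openvino_compile(gm, *example_args, options=options, executorch=True)`, while the default torch.compile path keeps its current behavior.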
self._inputs.append(i)
self._input_signature.append(value.name)
if hasattr(value, "meta") and ('tensor_meta' in value.meta.keys()) and value.meta['tensor_meta']:
    found_shapes.append(value.meta['tensor_meta'].shape)
Can you please add a comment explaining what type of input requires this special branch? The else branch needs a comment as well.
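For illustration, one way the requested comments could read, assuming the branch distinguishes nodes that carry FX shape-propagation metadata from those that do not; the else path shown here is a reconstruction, not the PR's exact code.

```python
self._inputs.append(i)
self._input_signature.append(value.name)
# Nodes that went through FX shape propagation carry a 'tensor_meta' entry
# (a TensorMetadata namedtuple with shape/dtype); use its shape for this input.
if hasattr(value, "meta") and 'tensor_meta' in value.meta and value.meta['tensor_meta']:
    found_shapes.append(value.meta['tensor_meta'].shape)
else:
    # No propagated metadata (e.g. a non-tensor input): shape is unknown here.
    found_shapes.append(None)
```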
self._input_signature = []
self._example_input = None

if issubclass(type(pt_module), torch.fx.graph_module.GraphModule):
Suggested change:
-if issubclass(type(pt_module), torch.fx.graph_module.GraphModule):
+if isinstance(pt_module, torch.fx.graph_module.GraphModule):
Can we do it?
Please create a separate internal function in this class to handle the torch.fx.graph_module.GraphModule case, and another to handle torch.fx.Node, because the constructor is now quite long.
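A rough sketch of the suggested split; the helper names `_init_from_graph_module` and `_init_from_node` are hypothetical.

```python
import torch.fx


class TorchFXPythonDecoder:
    def __init__(self, pt_module):
        self._inputs = []
        self._input_signature = []
        self._example_input = None
        if isinstance(pt_module, torch.fx.graph_module.GraphModule):
            self._init_from_graph_module(pt_module)
        elif isinstance(pt_module, torch.fx.Node):
            self._init_from_node(pt_module)

    def _init_from_graph_module(self, gm):
        # Walk placeholder/output nodes, populate self._inputs,
        # self._input_signature, collected shapes, etc. (body elided).
        ...

    def _init_from_node(self, node):
        # Per-node decoder setup (body elided).
        ...
```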
for idx, shape in enumerate(found_shapes):
    if shape is not None:
        new_shape = []
        for dim in shape:
            if (dynamic_shapes or type(dim).__name__ == "SymInt"):
                new_shape.append(-1)
            else:
                new_shape.append(dim)
        found_shapes[idx] = torch.Size(new_shape)
I believe we should have a helper function for this. Can you please check whether one exists, or create it? Also, why can't you normalize found_shapes[idx] right away when you are adding the elements above? Now you have to iterate through it again.
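One possible way to address both points: normalize each shape once, at the time it is appended, via a small helper. The helper name and exact placement are suggestions only.

```python
import torch


def _normalize_shape(shape, dynamic_shapes: bool) -> torch.Size:
    # Dynamic dimensions (or any symbolic SymInt) are collapsed to -1.
    return torch.Size(
        [-1 if (dynamic_shapes or type(dim).__name__ == "SymInt") else dim
         for dim in shape]
    )


# ...then, where the shape is first collected:
# found_shapes.append(_normalize_shape(value.meta['tensor_meta'].shape, dynamic_shapes))
```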
@@ -789,6 +789,7 @@ const std::unordered_map<std::string, CreatorFunction> get_supported_ops_fx() {
    {"aten.argmin.default", op::translate_argmin},
    {"aten.as_strided.default", op::translate_as_strided},
    {"aten.as_strided_.default", op::translate_as_strided},
+   {"aten.as_strided_copy.default", op::translate_as_strided},
Can you please add layer tests for the newly added translators?
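For reference, a minimal standalone parity check for the newly mapped `aten.as_strided_copy.default`; this is not the project's layer-test harness (which the real test should use), just a sketch relying on the documented `backend="openvino"` torch.compile entry point.

```python
import torch


class AsStridedCopyModel(torch.nn.Module):
    def forward(self, x):
        # Calls the exact aten op registered in the translator table above.
        return torch.ops.aten.as_strided_copy.default(x, [2, 2], [1, 2])


model = AsStridedCopyModel()
x = torch.arange(4, dtype=torch.float32)
reference = model(x)
compiled = torch.compile(model, backend="openvino")
result = compiled(x)
assert torch.allclose(reference, result)
```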
-from openvino.frontend.pytorch.fx_decoder import TorchFXPythonDecoder
-from openvino import Core, Type, PartialShape, serialize
+from openvino.frontend.pytorch.fx_decoder import TorchFXPythonDecoder, ExecuTorchPythonDecoder
+from openvino.runtime import Core, Type, PartialShape, serialize
Suggested change:
-from openvino.runtime import Core, Type, PartialShape, serialize
+from openvino import Core, Type, PartialShape, serialize
openvino.runtime is deprecated this release.
@@ -78,7 +78,7 @@ def openvino_compile_cached_model(cached_model_path, options, *example_inputs):

    return compiled_model

-def openvino_compile(gm: GraphModule, *args, model_hash_str: str = None, options=None):
+def openvino_compile(gm: GraphModule, *args, model_hash_str: str = None, options=None, executorch=False):
    core = Core()

    device = _get_device(options)
For ExecuTorch, will you cache ov::Model or ov::CompiledModel?
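For illustration, the two caching granularities the question refers to, expressed with the Python API; this is not the PR's actual caching code.

```python
from openvino import Core, serialize

core = Core()

# Caching the ov.Model: persist the converted graph as device-agnostic IR.
# serialize(ov_model, "cached_model.xml")
# ov_model = core.read_model("cached_model.xml")

# Caching the ov.CompiledModel: let the runtime store the device-specific blob.
# compiled = core.compile_model(ov_model, device, {"CACHE_DIR": "./ov_cache"})
```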
@@ -13,8 +13,8 @@
from torch.fx import GraphModule
Don't we need a separate file for the ExecuTorch implementation, just to avoid mixing it with torch.compile?
Details:
Tickets: