[Bug]: Error occurred during export, unable to generate onnx model #2577

Open
bommbomm opened this issue Feb 21, 2025 · 1 comment

Describe the bug

The model.ckpt produced by training PatchCore works with the PyTorch inference code, and the results can be seen in the attached image. However, the ONNX model produced by engine.export cannot be used for inference with openvino_inference.py; it raises an error.

1. Export code:

from anomalib.models import Patchcore
from anomalib.models import Padim
from anomalib.models import EfficientAd
from anomalib.engine import Engine

model = Patchcore()
engine = Engine(task="classification")
onnx_model = engine.export(
    model=model,
    export_type="onnx",
    export_root=None,
    input_size=[256, 256],
    transform=None,
    compression_type=None,
    datamodule=None,
    metric=None,
    ov_args=None,
    ckpt_path="results/Patchcore/TGV-20250221-01/latest/weights/lightning/model.ckpt",
)
print(onnx_model)
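
Not part of the original script: a minimal sanity check, assuming the export was run from the anomalib-main repository root, to confirm that the relative path printed by engine.export ("results\weights\onnx\model.onnx" in the export log below) resolves to an existing file from the directory where inference is later launched:

from pathlib import Path

# Sketch only: resolve the relative path printed by engine.export() and check that
# the exported ONNX file is visible from the current working directory.
onnx_path = Path("results/weights/onnx/model.onnx").resolve()
print(f"{onnx_path} exists: {onnx_path.is_file()}")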

2. The following error then occurs when running inference with openvino_inference.py; the ONNX file cannot be opened (full traceback in the Logs section below):

RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:
Check 'util::directory_exists(path) || util::file_exists(path)' failed at src/frontends/common/src/frontend.cpp:117:
FrontEnd API failed with GeneralFailure:
onnx: Could not open the file: "results\weights\onnx\model.onnx"

3. Relevant command-line arguments of openvino_inference.py:
parser.add_argument("--weights", type=Path, default="./results/weights/onnx/model.onnx", required=False, help="Path to model weights")
parser.add_argument("--metadata", type=Path, default="./results/weights/onnx/metadata.json", required=False, help="Path to a JSON file containing the metadata.")
parser.add_argument("--input", type=Path, default="./dataset/TGV-inference/defect", required=False, help="Path to an image to infer.")
parser.add_argument("--output", type=Path,default="./dataset/TGV-inference/defect_out", required=False, help="Path to save the output image.")

Dataset

Other (please specify in the text field below)

Model

PatchCore

Steps to reproduce the behavior

1. Train the model.
2. Run inference with the inference code (this works).
3. Export the model with the export script.
4. Run inference with openvino_inference.py (this fails). A hedged end-to-end sketch of these steps follows below.
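
For context, a minimal end-to-end sketch of these steps, assuming anomalib's Folder datamodule; the dataset name, directory layout, and folder names are placeholders, not taken from this report:

from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Patchcore

# Placeholder dataset: normal images in "good", defective images in "defect".
datamodule = Folder(
    name="tgv",
    root="dataset/TGV",
    normal_dir="good",
    abnormal_dir="defect",
)

model = Patchcore()
engine = Engine(task="classification")

engine.fit(model=model, datamodule=datamodule)                  # 1. train
engine.test(model=model, datamodule=datamodule)                 # 2. inference / evaluation
exported_path = engine.export(model=model, export_type="onnx")  # 3. export to ONNX
print(exported_path)                                            # 4. pass this path to openvino_inference.py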

OS information

OS information:

  • OS: [e.g. Ubuntu 20.04]
  • Python version: [e.g. 3.10.0]
  • Anomalib version: [e.g. 0.3.6]
  • PyTorch version: [e.g. 1.9.0]
  • CUDA/cuDNN version: [e.g. 11.1]
  • GPU models and configuration: [e.g. 2x GeForce RTX 3090]
  • Any other relevant information: [e.g. I'm using a custom dataset]

Expected behavior

Export should produce an ONNX model that openvino_inference.py can load and run.

Screenshots

[Screenshot attached in the original issue showing the inference results.]

Pip/GitHub

GitHub

What version/branch did you use?

No response

Configuration YAML

model:
  class_path: anomalib.models.Patchcore
  init_args:
    backbone: wide_resnet50_2
    layers:
      - layer2
      - layer3
    pre_trained: true
    coreset_sampling_ratio: 0.1
    num_neighbors: 9
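
For reference, the same init_args expressed through the Python API; this is a sketch assuming the YAML fields map one-to-one onto Patchcore keyword arguments, as the class_path above suggests:

from anomalib.models import Patchcore

# Mirrors the class_path/init_args of the YAML configuration above.
model = Patchcore(
    backbone="wide_resnet50_2",
    layers=["layer2", "layer3"],
    pre_trained=True,
    coreset_sampling_ratio=0.1,
    num_neighbors=9,
)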

Logs

D:\software\Anaconda\envs\anomalibGpu\python.exe E:\WHGWD\anomalib-main\tools\inference\openvino_inference.py 
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
Traceback (most recent call last):
  File "E:\WHGWD\anomalib-main\tools\inference\openvino_inference.py", line 101, in <module>
    infer(args)
  File "E:\WHGWD\anomalib-main\tools\inference\openvino_inference.py", line 77, in infer
    inferencer = OpenVINOInferencer(path=args.weights, metadata=args.metadata, device=args.device)
  File "D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\deploy\inferencers\openvino_inferencer.py", line 108, in __init__
    self.input_blob, self.output_blob, self.model = self.load_model(path)
  File "D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\deploy\inferencers\openvino_inferencer.py", line 137, in load_model
    model = core.read_model(path)
  File "D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\openvino\runtime\ie_api.py", line 502, in read_model
    return Model(super().read_model(model))
RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:
Check 'util::directory_exists(path) || util::file_exists(path)' failed at src/frontends/common/src/frontend.cpp:117:
FrontEnd API failed with GeneralFailure:
onnx: Could not open the file: "results\weights\onnx\model.onnx"

Code of Conduct

  • I agree to follow this project's Code of Conduct
bommbomm (Author) commented:

This is the export log for the ONNX model:

D:\software\Anaconda\envs\anomalibGpu\python.exe E:\WHGWD\anomalib-main\export.py
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
Trainer already configured with model summary callbacks: [<class 'lightning.pytorch.callbacks.rich_model_summary.RichModelSummary'>]. Skipping setting a default ModelSummary callback.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:57: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if image.numel() == 0:
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:61: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if crop_height > image_height or crop_width > image_width:
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:41: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
crop_top = torch.tensor((image_height - crop_height) / 2.0).round().int().item()
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:41: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
crop_top = torch.tensor((image_height - crop_height) / 2.0).round().int().item()
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:41: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
crop_top = torch.tensor((image_height - crop_height) / 2.0).round().int().item()
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:42: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
crop_left = torch.tensor((image_width - crop_width) / 2.0).round().int().item()
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:42: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
crop_left = torch.tensor((image_width - crop_width) / 2.0).round().int().item()
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\data\transforms\center_crop.py:42: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
crop_left = torch.tensor((image_width - crop_width) / 2.0).round().int().item()
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\models\image\patchcore\torch_model.py:226: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
n_neighbors=min(self.num_neighbors, memory_bank_effective_size),
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\torch\onnx\symbolic_opset9.py:5385: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
warnings.warn(
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\torch\onnx\_internal\jit_utils.py:308: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\jit\passes\onnx\constant_fold.cpp:180.)
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\torch\onnx\utils.py:663: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\jit\passes\onnx\constant_fold.cpp:180.)
_C._jit_pass_onnx_graph_shape_type_inference(
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\torch\onnx\utils.py:1186: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\jit\passes\onnx\constant_fold.cpp:180.)
_C._jit_pass_onnx_graph_shape_type_inference(
results\weights\onnx\model.onnx
