[Bug]: Error occurred during export, unable to generate onnx model #2577
This is the command used to export the ONNX model: D:\software\Anaconda\envs\anomalibGpu\python.exe E:\WHGWD\anomalib-main\export.py
Describe the bug
The model.ckpt generated after training with Patchcore can be used for inference with the inference code, and the results are visible in the output images. However, the ONNX model exported by engine.export cannot be used for inference with openvino_inference.py; it fails with an error.
from anomalib.engine import Engine
from anomalib.models import Patchcore

# Load the trained checkpoint and export it to ONNX.
model = Patchcore()
engine = Engine(task="classification")
onnx_model = engine.export(
    model=model,
    export_type="onnx",
    export_root=None,
    input_size=[256, 256],
    transform=None,
    compression_type=None,
    datamodule=None,
    metric=None,
    ov_args=None,
    ckpt_path="results/Patchcore/TGV-20250221-01/latest/weights/lightning/model.ckpt",
)
print(onnx_model)
------------export
D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning:
`torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
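The error below looks like a path mismatch rather than a failed export: with export_root=None, engine.export presumably writes the ONNX file under its own default results directory, which is not necessarily ./results/weights/onnx/. A minimal sanity check (a sketch, assuming engine.export returns the export path, as the print(onnx_model) call above implies) is:

# Sketch: confirm where the exported model actually lives before running inference.
from pathlib import Path

onnx_path = Path(onnx_model)  # path returned by engine.export() above
print("exported to:", onnx_path.resolve())
print("exists:", onnx_path.exists())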
Running openvino_inference.py on the exported model then fails with:

Traceback (most recent call last):
  File "E:\WHGWD\anomalib-main\tools\inference\openvino_inference.py", line 101, in <module>
    infer(args)
  File "E:\WHGWD\anomalib-main\tools\inference\openvino_inference.py", line 77, in infer
    inferencer = OpenVINOInferencer(path=args.weights, metadata=args.metadata, device=args.device)
  File "D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\deploy\inferencers\openvino_inferencer.py", line 108, in __init__
    self.input_blob, self.output_blob, self.model = self.load_model(path)
  File "D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\anomalib\deploy\inferencers\openvino_inferencer.py", line 137, in load_model
    model = core.read_model(path)
  File "D:\software\Anaconda\envs\anomalibGpu\lib\site-packages\openvino\runtime\ie_api.py", line 502, in read_model
    return Model(super().read_model(model))
RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:
Check 'util::directory_exists(path) || util::file_exists(path)' failed at src/frontends/common/src/frontend.cpp:117:
FrontEnd API failed with GeneralFailure:
onnx: Could not open the file: "results\weights\onnx\model.onnx"
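The RuntimeError is a file-not-found: OpenVINO cannot open "results\weights\onnx\model.onnx" because nothing exists at that relative path. One way to locate where the model was actually written (a sketch, assuming the default results/ tree produced by training above):

# Sketch: search the results tree for any exported ONNX model.
from pathlib import Path

for candidate in Path("results").rglob("model.onnx"):
    print(candidate.resolve())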
Some parameters of openvino_inference.py:
parser.add_argument("--weights", type=Path, default="./results/weights/onnx/model.onnx", required=False, help="Path to model weights")
parser.add_argument("--metadata", type=Path, default="./results/weights/onnx/metadata.json", required=False, help="Path to a JSON file containing the metadata.")
parser.add_argument("--input", type=Path, default="./dataset/TGV-inference/defect", required=False, help="Path to an image to infer.")
parser.add_argument("--output", type=Path,default="./dataset/TGV-inference/defect_out", required=False, help="Path to save the output image.")
Dataset
Other (please specify in the text field below)
Model
PatchCore
Steps to reproduce the behavior
1. Train the model.
2. Run inference with the inference code.
3. Export the model with export.
4. Run inference with openvino_inference.py.
OS information
Expected behavior
The ONNX model should export normally and be usable for inference with openvino_inference.py.
Screenshots
Pip/GitHub
GitHub
What version/branch did you use?
No response
Configuration YAML
Logs
Code of Conduct