
The API launches and runs on an Ascend NPU 310P, but memory usage is double that on NVIDIA #7522

Open
monument-and-sea-all-the-gift opened this issue Mar 28, 2025 · 3 comments
Labels

  • bug: Something isn't working
  • npu: This problem is related to NPU devices
  • pending: This problem is yet to be addressed

Comments

@monument-and-sea-all-the-gift

Reminder

  • I have read the above rules and searched the existing issues.

System Info

  • llamafactory version: 0.9.1.dev0
  • Platform: Linux-4.19.90-89.11.v2401.ky10.x86_64-x86_64-with-glibc2.28
  • Python version: 3.10.16
  • PyTorch version: 2.1.0+cpu (NPU)
  • Transformers version: 4.46.0
  • Datasets version: 2.21.0
  • Accelerate version: 1.0.1
  • PEFT version: 0.12.0
  • TRL version: 0.9.6
  • NPU type: Ascend310P3
  • CANN version: 8.0.RC2.alpha001
  • DeepSpeed version: 0.13.2

Reproduction

Put your message here.

Others

No response

monument-and-sea-all-the-gift added the bug and pending labels on Mar 28, 2025
github-actions bot added the npu label on Mar 28, 2025
@hiyouga
Owner

hiyouga commented Mar 28, 2025

try --infer_dtype float16?

@monument-and-sea-all-the-gift
Author

infer_dtype float16

Should I write it into infer.yaml?
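
For context, a minimal sketch of how this setting can be written into the yaml file (the model path and template below are placeholders, not taken from this issue; `infer_dtype` and `infer_backend` are LLaMA-Factory inference options):

```yaml
# Minimal infer.yaml sketch -- the values below are placeholders, adjust to your setup.
model_name_or_path: /path/to/your/model   # placeholder path
template: default                         # placeholder; use your model's chat template
infer_backend: huggingface                # or vllm
infer_dtype: float16                      # the setting suggested above
```

The config can then be launched with, e.g., `llamafactory-cli api infer.yaml`.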

@monument-and-sea-all-the-gift
Author

try --infer_dtype float16?

I already tried it, but the problem is still the same.
