Hi, currently in the examples, only `linear` gives a naive example of offload; the other projects, such as `opt`, `bloom`, and `gpt`, have no offload option.
I am wondering how to apply offload to large-model inference. Are there any examples?
Hi @YJHMITWEB. This is technically feasible, but it would cause a sharp drop in inference speed. Its practical value is therefore limited, and we do not currently treat it as a high priority.
You are welcome to submit a proposal or PR to help build this out. Thanks.
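For reference, here is a minimal sketch of what naive layer-wise CPU offload for inference can look like in plain PyTorch, presumably similar in spirit to the `linear` example. All names here (`OffloadedSequential`, the toy MLP) are illustrative and not part of this repository's API. It also makes the speed penalty concrete: every layer's weights cross the host-to-GPU bus on every forward pass.

```python
# A minimal sketch of naive layer-wise CPU offload for inference in PyTorch.
# Weights live on the CPU; each layer is copied to the GPU just before its
# forward pass and evicted afterwards, trading inference speed for GPU memory.
import torch
import torch.nn as nn


class OffloadedSequential(nn.Module):
    """Runs a stack of layers, streaming each one to the GPU on demand."""

    def __init__(self, layers: nn.ModuleList, device: str = "cuda"):
        super().__init__()
        self.layers = layers.cpu()  # parameters stay resident on the CPU
        self.device = device

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.to(self.device)
        for layer in self.layers:
            layer.to(self.device)  # upload this layer's weights only
            x = layer(x)
            layer.to("cpu")        # evict to free GPU memory for the next layer
        return x


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    blocks = nn.ModuleList(nn.Linear(1024, 1024) for _ in range(8))
    model = OffloadedSequential(blocks, device)
    out = model(torch.randn(4, 1024))
    print(out.shape)  # torch.Size([4, 1024])
```

In practice the per-layer transfers dominate the runtime, which is why the maintainers note above that the practical significance is limited; overlapping the upload of layer `i+1` with the compute of layer `i` (e.g. via a separate CUDA stream) would recover some of the loss but adds considerable complexity.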