I have trained a VITS model, but when I apply LoRA to the attention layers, fine-tuning does not work properly. Could you please tell me which layers you applied LoRA to when fine-tuning VITS, and what values you used for the rank and alpha?
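For reference, this is the standard LoRA formulation I am assuming; the rank `r=8` and `alpha=16` below are placeholder values, not values I expect you used. One thing I am unsure about: if I read the VITS code correctly, its attention projections (`conv_q`, `conv_k`, `conv_v`) are 1x1 `Conv1d` layers rather than `nn.Linear`, so a wrapper like this would need a `Conv1d` variant to attach at all.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Standard LoRA: keep the pretrained weight W frozen and learn a
    low-rank update, y = W x + (alpha / r) * B A x. B is initialized to
    zero so training starts from the pretrained behavior."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained projection
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_A.weight, std=0.02)
        nn.init.zeros_(self.lora_B.weight)  # update starts at exactly zero
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))
```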
Thanks for your reply. Can I ask one more thing?
While checking your repo, I noticed that you set conv_post, the activations, and speaker_adaptor to be trainable.
As far as I know, LoRA works by attaching low-rank linear layers that adapt the existing weights, but your repo seems to also fine-tune part of the model directly.
Is this some other adaptation of LoRA?
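To make sure I understand, here is a rough sketch of what your training configuration appears to do (freeze everything, keep the LoRA adapters trainable, then unfreeze a few named modules). The name matching is my guess at how the parameters are named, not code from your repo:

```python
import torch.nn as nn

def configure_trainable(model: nn.Module) -> None:
    """Freeze all weights, then re-enable gradients for the LoRA
    adapters and the modules the repo appears to train directly."""
    for p in model.parameters():
        p.requires_grad_(False)

    # Assumption: injected LoRA parameters carry "lora_" in their names;
    # the other substrings mirror the modules mentioned above.
    trainable_substrings = ("lora_", "conv_post", "activation", "speaker_adaptor")
    for name, p in model.named_parameters():
        if any(key in name for key in trainable_substrings):
            p.requires_grad_(True)
```

If that reading is right, this would be the common hybrid of LoRA adapters plus a few fully unfrozen task-specific modules, rather than pure LoRA.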