Exploiting the low-rank structure of weight updates during fine-tuning yields an orders-of-magnitude reduction in trainable parameters.
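To make the idea concrete, here is a minimal PyTorch sketch of the low-rank trick: the pretrained weight stays frozen, and only two small factor matrices are trained. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative, following the notation of the LoRA paper, not a reference to any particular library's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # pretrained weight stays frozen
        # Low-rank factors: A is (r x in), B is (out x r).
        # B starts at zero, so the update is a no-op at initialization.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# A 4096x4096 layer has ~16.8M frozen weights; with r=8 the trainable
# factors hold only 2 * 8 * 4096 = 65,536 parameters (~0.4% of the layer).
layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")
```

Since rank `r` is much smaller than the layer dimensions, the trainable parameter count scales with `r * (in + out)` instead of `in * out`, which is where the orders-of-magnitude savings come from.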
Originally appeared here:
LoRA: Revolutionizing Large Language Model Adaptation without Fine-Tuning