
Symbiotic Tuning: A Simple Approach for Enhancing Task Performance of Side-Tuning

Research output: Contribution to journal › Article › peer-review

Abstract

Reducing computational and memory overhead when fine-tuning large language models remains a significant challenge in natural language processing. While parameter-efficient fine-tuning (PEFT) methods, such as LoRA, have gained attention for reducing the number of trainable parameters while maintaining task performance, they achieve little memory saving, since memory usage is still dominated by the model weights and the activations stored for backpropagation. In contrast, Ladder Side-Tuning (LST) addresses the memory problem by freezing the backbone language model (BLM) and training only lightweight side networks; however, this often causes a performance decline relative to PEFT methods. To overcome these limitations, we propose Symbiotic Tuning (SymTune), a novel method that extracts intermediate outputs, namely the hidden states and attention weights, from the BLM and integrates symbiotic modules to enhance feature processing. SymTune strikes a better trade-off between performance and memory efficiency, offering two key advantages: 1) robust performance across a wide range of natural language tasks, and 2) reduced memory consumption through an improved side-tuning architecture. Experimental results demonstrate that SymTune provides a scalable and memory-efficient solution for fine-tuning auto-encoder and auto-regressive language models.
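
To make the mechanism concrete, the following is a minimal PyTorch sketch of the idea the abstract describes, not the paper's implementation: a Hugging Face style backbone is frozen and run without gradient tracking, its per-layer hidden states and attention weights are fed to small trainable side blocks, and only the side parameters receive gradients. The names (SymTuneSketch, SymbioticModule), the attention-weighted fusion scheme, and all dimensions are assumptions for illustration.

import torch
import torch.nn as nn


class SymbioticModule(nn.Module):
    # Hypothetical side block: reuses the backbone's attention weights to mix
    # token features, then applies a small trainable MLP. The paper's actual
    # fusion design may differ; this is an illustrative assumption.
    def __init__(self, hidden_dim: int, side_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, side_dim)  # project backbone features down
        self.mlp = nn.Sequential(
            nn.Linear(side_dim, side_dim),
            nn.GELU(),
            nn.Linear(side_dim, side_dim),
        )

    def forward(self, hidden, attn, side_state):
        # hidden: (B, T, H) frozen hidden states; attn: (B, heads, T, T)
        mixed = attn.mean(dim=1) @ hidden            # attention-weighted token mixing
        side_in = self.down(hidden + mixed)
        return side_state + self.mlp(side_in)        # residual update of side stream


class SymTuneSketch(nn.Module):
    # Frozen backbone plus trainable side network; only side parameters train.
    def __init__(self, backbone, hidden_dim=768, side_dim=64, num_labels=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)                  # freeze the BLM
        n_layers = backbone.config.num_hidden_layers
        self.side_blocks = nn.ModuleList(
            SymbioticModule(hidden_dim, side_dim) for _ in range(n_layers)
        )
        self.head = nn.Linear(side_dim, num_labels)

    def forward(self, input_ids, attention_mask=None):
        with torch.no_grad():                        # backbone stores no activations
            out = self.backbone(
                input_ids,
                attention_mask=attention_mask,
                output_hidden_states=True,
                output_attentions=True,
            )
        side = input_ids.new_zeros(
            input_ids.size(0), input_ids.size(1), self.head.in_features,
            dtype=torch.float,
        )
        # hidden_states[0] is the embedding output; pair layer i with attentions[i]
        for i, block in enumerate(self.side_blocks):
            side = block(out.hidden_states[i + 1], out.attentions[i], side)
        return self.head(side[:, 0])                 # classify from the first token

Under these assumptions, an optimizer would see only the side blocks and head, e.g. torch.optim.AdamW(p for p in model.parameters() if p.requires_grad), which is what yields the memory saving relative to PEFT methods that still backpropagate through the backbone.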

Original language: English
Pages (from-to): 198471-198481
Number of pages: 11
Journal: IEEE Access
Volume: 13
DOIs
Publication status: Published - 2025

All Science Journal Classification (ASJC) codes

  • General Computer Science
  • General Materials Science
  • General Engineering
