AdaMix introduces a mixture-of-adapters approach to parameter-efficient fine-tuning that consistently beats state-of-the-art baselines across major NLP benchmarks. Tested on GLUE, E2E, WebNLG, and DART, AdaMix not only matches but often outperforms full model fine-tuning with BERT, RoBERTa, and GPT-2. Its advantage extends to few-shot learning, where AdaMix narrows the performance gap with full prompt-based fine-tuning, delivering strong results with fewer labeled examples.

Smarter Fine-Tuning for NLU and NLG Tasks


Abstract and 1. Introduction

  2. Background

    2.1 Mixture-of-Experts

    2.2 Adapters

  3. Mixture-of-Adaptations

    3.1 Routing Policy

    3.2 Consistency regularization

    3.3 Adaptation module merging and 3.4 Adaptation module sharing

    3.5 Connection to Bayesian Neural Networks and Model Ensembling

  4. Experiments

    4.1 Experimental Setup

    4.2 Key Results

    4.3 Ablation Study

  5. Related Work

  6. Conclusions

  7. Limitations

  8. Acknowledgment and References

Appendix

A. Few-shot NLU Datasets B. Ablation Study C. Detailed Results on NLU Tasks D. Hyper-parameter

4 Experiments

4.1 Experimental Setup

Dataset. We perform experiments on a wide range of tasks including eight natural language understanding (NLU) tasks in the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and three natural language generation (NLG) tasks, namely, E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017) and DART (Nan et al., 2020). For the NLU and NLG tasks, we follow the same setup as (Houlsby et al., 2019) and (Li and Liang, 2021; Hu et al., 2021), respectively.
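
For readers who want a picture of the data setup, the following is a minimal sketch using the Hugging Face `datasets` library. The hub identifiers ("glue"/"mrpc", "e2e_nlg", "web_nlg", "dart") are assumptions for illustration and are not taken from the paper, which uses the official splits from the cited works.

```python
# Illustrative sketch (not from the paper) of loading the evaluation benchmarks.
# Dataset hub identifiers below are assumed; the paper follows the splits of
# Houlsby et al. (2019) for GLUE and Li & Liang (2021) / Hu et al. (2021) for NLG.
from datasets import load_dataset

glue_mrpc = load_dataset("glue", "mrpc")                          # one of the eight GLUE NLU tasks
e2e = load_dataset("e2e_nlg")                                     # E2E NLG benchmark
web_nlg = load_dataset("web_nlg", "webnlg_challenge_2017")        # WebNLG (Gardent et al., 2017)
dart = load_dataset("dart")                                       # DART (Nan et al., 2020)

print({split: len(rows) for split, rows in glue_mrpc.items()})    # train/validation/test sizes
```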

Baselines. We compare AdaMix to full model fine-tuning and several state-of-the-art parameter-efficient fine-tuning (PEFT) methods, namely, Pfeiffer Adapter (Pfeiffer et al., 2021), Houlsby Adapter (Houlsby et al., 2019), BitFit (Zaken et al., 2021), Prefix-tuning (Li and Liang, 2021), UNIPELT (Mao et al., 2021) and LoRA (Hu et al., 2021). We use BERT-base (Devlin et al., 2019) and RoBERTa-large (Liu et al., 2019) as encoders for NLU tasks (results in Table 1 and Table 2), and GPT-2 (Brown et al., 2020) for NLG tasks (results in Table 3).

AdaMix implementation details. We implement AdaMix in PyTorch and use Tesla V100 GPUs for experiments, with detailed hyper-parameter configurations presented in Section D in the Appendix. AdaMix with adapters uses adapter dimensions of 16 and 48 with the BERT-base and RoBERTa-large encoders, respectively, following the setup of (Hu et al., 2021; Mao et al., 2021) for fair comparison. AdaMix with LoRA uses rank r = 4 following the setup of (Hu et al., 2021) to keep the same number of adaptation parameters during inference. The number of adaptation modules in AdaMix is set to 4 for all the tasks and encoders unless otherwise specified. The impact of the adapter dimension and the number of adaptation modules for NLU tasks is investigated in Tables 9 and 10. For most of the experiments and ablation analysis, we report results from AdaMix with adapters for NLU tasks. To demonstrate the generalizability of our framework, we report results from AdaMix with LoRA (Hu et al., 2021) as the underlying PEFT mechanism for NLG tasks.
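
To make the configuration above concrete, here is a minimal sketch, not the authors' implementation: four parallel bottleneck adapters of dimension 16 attached to a 768-dimensional BERT-base hidden state, with one module chosen at random during training (a simplified stand-in for the stochastic routing of Section 3.1) and a single module used at inference in place of the weight-averaged merging of Section 3.3. All names (`MixtureOfAdapters`, `num_experts`, `bottleneck_dim`) are illustrative.

```python
# Minimal sketch of a mixture-of-adapters layer; assumptions as stated above.
import torch
import torch.nn as nn


class MixtureOfAdapters(nn.Module):
    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 16, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, bottleneck_dim),  # down-projection
                nn.GELU(),
                nn.Linear(bottleneck_dim, hidden_dim),  # up-projection
            )
            for _ in range(num_experts)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Simplified stochastic routing: pick one adaptation module at random.
            idx = torch.randint(len(self.experts), (1,)).item()
            expert = self.experts[idx]
        else:
            # The paper merges modules by averaging their weights at inference;
            # using a single expert here is a simplification of that step.
            expert = self.experts[0]
        return hidden_states + expert(hidden_states)  # residual adapter connection


# Example: adapt a batch of token representations from a frozen encoder.
x = torch.randn(8, 128, 768)   # (batch, sequence, hidden)
layer = MixtureOfAdapters()
print(layer(x).shape)          # torch.Size([8, 128, 768])
```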


4.2 Key Results

4.2.1 NLU Tasks

Tables 1 and 2 show the performance comparison among PEFT models with RoBERTa-large and BERT-base encoders, respectively. Fully fine-tuned RoBERTa-large and BERT-base provide the ceiling performance. We observe that AdaMix with a mixture of adapters significantly outperforms other state-of-the-art baselines on most tasks with different encoders. AdaMix with adapters is the only PEFT method that outperforms full model fine-tuning on all the tasks and on the average score.

Table 1: Results for NLU tasks on the GLUE development set with the RoBERTa-large encoder. The best result on each task is in bold and “-” denotes a missing measure. AdaMix with a mixture of adapters outperforms all competing methods as well as the fully fine-tuned large model with only 0.23% tunable parameters. † denotes results reported from (Hu et al., 2021). Mcc refers to the Matthews correlation coefficient, and Pearson refers to the Pearson correlation. #Param. denotes the number of tunable adaptation parameters used during inference.


4.2.2 NLG Tasks

AdaMix leverages a mixture of adaptations to improve over the underlying PEFT method, as demonstrated in Table 3 for E2E NLG: AdaMix with LoRA and AdaMix with adapters outperform LoRA (Hu et al., 2021) and adapters (Houlsby et al., 2019), respectively. We report results on DART and WebNLG in Tables 4 and 5 in the Appendix.

4.2.3 Few-shot NLU

In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments on six GLUE tasks, following the same setup (e.g., shots, train and test splits) and evaluation as in (Wang et al., 2021). The detailed experimental configuration is presented in Section A of the Appendix. AdaMix uses a mixture-of-adapters with prompt-based fine-tuning (Gao et al., 2021).

Table 6 shows the performance comparison among different PEFT methods with |K| = 30 labeled examples and RoBERTa-large as the frozen encoder. We observe a significant performance gap between most PEFT methods and full model prompt-based fine-tuning, i.e., with all model parameters being updated. AdaMix with adapters outperforms full model tuning for few-shot NLU, similar to the fully supervised setting. Note that AdaMix and LiST (Wang et al., 2021) use a similar adapter design with prompt-based fine-tuning.
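
The sketch below illustrates, under stated assumptions, how a |K| = 30 few-shot training split can be drawn with a fixed seed (SST-2 shown here). It only approximates the recipe; the exact shots and splits in the paper follow the released configuration of Wang et al. (2021), as described in Section A of the Appendix.

```python
# Hedged sketch of a |K| = 30 few-shot split; seed and sampling are illustrative.
from datasets import load_dataset

K = 30
SEED = 42

sst2 = load_dataset("glue", "sst2")
few_shot_train = sst2["train"].shuffle(seed=SEED).select(range(K))  # |K| = 30 labeled examples
dev = sst2["validation"]                                            # evaluation split left untouched

print(len(few_shot_train), len(dev))  # 30, 872
```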


:::info Authors:

(1) Yaqing Wang, Purdue University (wang5075@purdue.edu);

(2) Sahaj Agarwal, Microsoft (sahagar@microsoft.com);

(3) Subhabrata Mukherjee, Microsoft Research (submukhe@microsoft.com);

(4) Xiaodong Liu, Microsoft Research (xiaodl@microsoft.com);

(5) Jing Gao, Purdue University (jinggao@purdue.edu);

(6) Ahmed Hassan Awadallah, Microsoft Research (hassanam@microsoft.com);

(7) Jianfeng Gao, Microsoft Research (jfgao@microsoft.com).

:::


:::info This paper is available on arXiv under CC BY 4.0 DEED license.

:::

