Parameter-efficient fine-tuning on large protein language models improves signal peptide prediction



Figure 4.

Architectures of the ESM-2 model and of PEFT-SP with different PEFT modules. Light green modules are tunable during training, whereas gray modules are frozen. (A) The ESM-2 backbone model takes amino acid sequences as input for signal peptide (SP) and cleavage site (CS) prediction. (B) PEFT-SP with adapter tuning inserts adapters with a bottleneck architecture. (C) PEFT-SP with prompt tuning appends trainable soft embeddings to the token embeddings. (D) PEFT-SP with LoRA adds trainable low-rank decomposition matrices to the self-attention layers.
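To make the three PEFT variants in panels B-D concrete, below is a minimal PyTorch sketch of what each module could look like. This is illustrative only, not the authors' implementation: the class names, default dimensions, and hyperparameters (bottleneck_dim, num_virtual_tokens, r, alpha) are assumptions, and the modules are shown standalone rather than wired into ESM-2.

```python
# Minimal sketches of the three PEFT modules depicted in Figure 4.
# All names and defaults are illustrative assumptions, not PEFT-SP's code.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter (panel B): down-project, nonlinearity, up-project,
    added back to the frozen layer's output via a residual connection."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


class SoftPrompt(nn.Module):
    """Prompt tuning (panel C): trainable soft embeddings concatenated with
    the token embeddings; only the prompt parameters receive gradients."""

    def __init__(self, num_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim)
        batch = token_embeds.size(0)
        return torch.cat([self.prompt.expand(batch, -1, -1), token_embeds], dim=1)


class LoRALinear(nn.Module):
    """LoRA (panel D): a frozen linear projection plus a trainable low-rank
    update B @ A, scaled by alpha / r, as in Hu et al. (2021)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained projection
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.02)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```

In a setup like the figure's, a LoRALinear-style wrapper would typically be applied to the attention projections inside each ESM-2 layer while the backbone weights stay frozen, matching the gray/green split shown in the panels.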

Genome Res. 34: 1445-1454