Parameter-efficient fine-tuning on large protein language models improves signal peptide prediction



Figure 3.

Sequence logos for Sec/SPII signal peptides generated from (A) known Sec/SPII sequences (letter heights represent information content) and (B) Sec/SPII sequence patterns predicted by the LoRA-tuned ESM2-3B model (letter heights represent attention weights). (C) Agreement between the gold-standard sequence logo and the predicted sequence pattern, measured by Spearman's rank correlation between information content and attention weights; each dot represents one position in the sequence.
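The per-position comparison in panel (C) can be sketched as follows. This is a minimal illustration, not the authors' code: the `info_content` and `attn_weight` values are made-up placeholders for the per-position information content and attention weights, and Spearman's rank correlation is implemented directly in pure Python.

```python
def rank(values):
    """Assign 1-based ranks; tied values receive their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-position values (one entry per sequence position,
# i.e. one dot in panel C); not taken from the article's data.
info_content = [0.1, 0.8, 1.5, 2.0, 0.6]   # from the known-sequence logo
attn_weight = [0.05, 0.30, 0.55, 0.70, 0.20]  # from LoRA-tuned ESM2-3B
rho = spearman(info_content, attn_weight)  # → 1.0 for these monotone toy data
```

In practice `scipy.stats.spearmanr` computes the same statistic (plus a p-value); the hand-rolled version above just makes the rank-then-correlate logic explicit.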


Genome Res. 34: 1445-1454
