Searching journal content for articles similar to Zeng et al. 34 (9): 1445.

Displaying results 1-10 of 5955
  1. ...for predicting GO annotations at a fine granularity. TALE (Cao and Shen 2021), on the other hand, uses a pre-trained language model to increase the amount of useful information for GO annotations. Similarly, DeepFRI (Gligorijevic et al. 2021) combines representations derived from pretrained sequence...
    ACCEPTED MANUSCRIPT
  2. ...) increasing the marker density across Chr 28 and (2) focusing on the QTL within Chr 28. The prediction results of the four models (GBLUP, BayesR, LightGBM, and CatBoost) exhibited varying degrees of improvement. The second strategy achieved the optimal solution in all results. For TW, the accuracies...
  3. ...for genomic data. Integrated into diverse architectures such as convolutional neural networks (CNNs), long short-term memory (LSTM), dilated CNNs, and transformers, ConvNeXt V2 blocks consistently improve performance, leading to similar prediction accuracy across these different model types. This reveals...
  4. ...train or fine-tune a HyenaDNA model to take the processed sequences and perform next token prediction beginning with the prompt tokens (Fig. 1C). This formulation allows us to use any prior knowledge on sequences in the model explicitly...
  5. ...RNA degradation prediction. As we show, both the pretrained and the fine-tuned version of the models can learn new biology and improve on current state-of-the-art methods for mRNA vaccine design. To assess generalization of our CodonBERT model, we collected a novel hemagglutinin flu vaccine data set. Different m...
  6. ...prediction task according to their ablation tests. PLMs are foundation models trained with large-scale amino-acid sequences to model protein data as a language (Ofer et al. 2021; Zhang et al. 2024), and the success of applying PLMs to this task suggests the utility of PLMs to build expression relationships...
  7. ...would improve the biological relevance of ERC scores and, therefore, predictive power. To test this, we used logistic regression modeling to test how well ERC can be used to predict which genes belong to a given functional annotation (Methods). Logistic regression modeling allows us to test...
  8. ...of the RBHs between amino acid embeddings. To improve this network, we use MNCM to remove false-positive RBHs (Supplemental Fig. 2). However, it is likely that the RBH network could be improved in multiple ways, including through fine-tuning embeddings, use of other protein language models, and choice...
  9. ...proportion of categories per bin. (C) Positive predictive value (PPV) per bin. Fine-tuning on synthetic regulatory genomic data sets improves predictive performance. We then developed a fine-tuning strategy to improve model performance through incorporation of synthetic regulatory genomic data sets. We added...
  10. ...relationships that were curated by Liska et al. (2022) based on experimental evidence such as ChIP-seq. We observed a modest improvement in the precision of predicting direct gene-gene relationships at higher values of recall with sequential data aggregation. Although we do see improvement, the identification...