Searching journal content for articles similar to Geleta et al.

Displaying results 1-10 of 41
  1. ...), whether using GNN to encode spot embeddings, whether using a variational autoencoder, whether modeling the decoder output with a distribution other than the Normal distribution. The results of these settings are shown in Supplemental Figure 3A–C. We found that making the current model more complicated does...
  2. ...–based methods have been developed for this purpose. For example, scVI (Lopez et al. 2018) removes batch effects by conditioning on batch information in a variational autoencoder, which learns a nonlinear embedding of cells; SAVERCAT (Huang et al. 2020) uses a conditional variational autoencoder to remove batch...
    OPEN ACCESS ARTICLE
  3. ...sets, confirming the importance of this step in our workflow. Batch integration with LLOKI-CAE. Cell embeddings from LLOKI-FP effectively capture biological variation within each ST data set but still exhibit large batch effects between technologies. To address this, we develop a conditional autoencoder...
  4. ...developed to enable cross-modality translation between scRNA-seq and scDNAm data. For example, scCross leverages variational autoencoders (VAEs), generative adversarial networks, and the mutual nearest neighbors technique for modality alignment (Yang et al. 2024), whereas MAPLE utilizes an ensemble...
  5. .... 2012), making it sufficient to capture relevant regulatory variation. The 2048-bp input window also provides adequate genomic context for the models to learn both local and distal sequence features. Figure 1. Overview of the benchmarking setup. (A...
  6. ...) of disease-relevant cellular states. Among them, scIDIST (Wehbe et al. 2025) integrates autoencoder-based dimensionality reduction with weak supervision, producing probabilistic disease labels. The labels are then used to train a neural network that assigns continuous disease progression scores to individual...
  7. ...-dimensional RNA structures on local sequencing efficiency using an innovative unsupervised variational autoencoder-Gaussian mixture model (VAE-GMM). The VAE-GMM effectively captures intricate high-dimensional k-mer structural similarities by learning compact latent representations, which reduces dimensionality...
  8. ..., it is available under a Creative Commons License (Attribution-NonCommercial 4.0 International), as described at http://creativecommons.org/licenses/by-nc/4.0/. References: The 1000 Genomes Project Consortium. 2015. A global reference for human genetic variation. Nature 526: 68–74. doi:10.1038/nature15393. Adey A...
  9. ...the dimensionality reduction from a pretrained autoencoder. Dimensional reduction by variational autoencoder: variational_autoencoder.py. As an alternative approach to dimensionality reduction, we provide another Python script, variational_autoencoder.py. This script implements a variational autoencoder that employs...
  10. ...of -wide variants. Nat Biotechnol doi:10.1038/s41587-024-02511-w. Brandes N, Linial N, Linial M. 2020. PWAS: proteome-wide association study—linking genes and phenotypes by functional variation in proteins. Genome Biol 21: 173. doi:10.1186/s13059-020-02089-x. Brandes N, Goldman G, Wang CH, Ye CJ, Ntranos V...
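Several of the results above (e.g., scVI, SAVERCAT, LLOKI-CAE) share the same batch-conditioning idea: the batch label is one-hot encoded and fed to both the encoder and decoder so the latent embedding can stay batch-free. The sketch below is a minimal, illustrative forward pass of that idea only; it is not the code of any of the listed methods, and all function names, weights, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(labels, n_batches):
    """Encode integer batch labels as one-hot vectors."""
    out = np.zeros((len(labels), n_batches))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def conditional_encode(x, batch_oh, w_enc):
    """Single linear encoder layer; the batch one-hot is concatenated to the
    expression input so the network can explain away batch effects (hypothetical layer)."""
    return np.tanh(np.concatenate([x, batch_oh], axis=1) @ w_enc)

def conditional_decode(z, batch_oh, w_dec):
    """The decoder also receives the batch label, so z itself need not encode batch."""
    return np.concatenate([z, batch_oh], axis=1) @ w_dec

n_cells, n_genes, n_batches, latent = 6, 10, 2, 3
x = rng.normal(size=(n_cells, n_genes))          # toy expression matrix
b = one_hot(np.array([0, 0, 0, 1, 1, 1]), n_batches)

w_enc = rng.normal(size=(n_genes + n_batches, latent))
w_dec = rng.normal(size=(latent + n_batches, n_genes))

z = conditional_encode(x, b, w_enc)      # batch-conditioned latent embedding, (6, 3)
x_hat = conditional_decode(z, b, w_dec)  # reconstruction, (6, 10)
print(z.shape, x_hat.shape)
```

A trained conditional (variational) autoencoder would add a reconstruction loss and, for the VAE variants, a KL term; this sketch only shows how the batch covariate enters the network.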