
Using simulated single-cell data sets to compare pre-mRNA-based and mRNA-based methods. (A) Schematic of the pipeline used to generate simulated data sets, infer GRNs, evaluate the inferred GRNs, and perform factor-dependency analysis. Simulated data sets were generated with the dyngen package, using different network backbones and kinetic parameters for the dynamic simulations. The output count matrices were then used for network inference with GENIE3. The accuracy of each inferred network was calculated and used for factor-dependency analysis. See Methods for details. (B) Boxplots comparing the performance of pre-mRNA-based and mRNA-based methods across four network backbones. Performance was quantified as the accuracy of the inferred network, measured by the area under the precision-recall curve (AUPR). The AUPR of a random predictor is shown for comparison. N = 20. (C) Factor-dependency analysis for pre-mRNA-based and mRNA-based methods. Simulations were performed across parameter ranges for three gene-level factors (transcription rate, mRNA half-life, and protein half-life). The effect of each factor on inference accuracy for the different network backbones was evaluated using the AUPR ratio, calculated as the AUPR under the largest parameter value divided by the AUPR under the lowest parameter value (see also Supplemental Fig. S4A–C). Black dashed lines indicate an AUPR ratio of 1 (i.e., no effect of the parameter value on inference accuracy).
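The evaluation step summarized in panel A can be illustrated with a minimal sketch: an inferred network, given as importance scores for regulator-target pairs, is scored against the ground-truth network by AUPR. The matrices below (true_net, scores) and the gene count are hypothetical placeholders standing in for the dyngen ground truth and the GENIE3 output, not the actual data used in the figure.

```python
# Minimal sketch (assumptions: placeholder random matrices; real inputs would be
# the dyngen ground-truth network and GENIE3 importance scores).
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_genes = 50

# Hypothetical ground-truth adjacency matrix (1 = regulatory edge).
true_net = (rng.random((n_genes, n_genes)) < 0.05).astype(int)
np.fill_diagonal(true_net, 0)

# Hypothetical regulator-target importance scores from network inference.
scores = rng.random((n_genes, n_genes))
np.fill_diagonal(scores, 0)

# Flatten the off-diagonal entries and compute AUPR
# (average precision as an approximation of the area under the PR curve).
mask = ~np.eye(n_genes, dtype=bool)
aupr = average_precision_score(true_net[mask], scores[mask])

# The expected AUPR of a random predictor equals the fraction of true edges.
random_aupr = true_net[mask].mean()
print(f"AUPR = {aupr:.3f} (random baseline ~ {random_aupr:.3f})")
```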

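The AUPR ratio reported in panel C reduces to a single division; the sketch below makes the convention explicit. The function name and the example values are illustrative only.

```python
# Sketch of the factor-dependency summary (panel C), assuming the AUPR values at the
# lowest and largest settings of a kinetic parameter have already been computed.
def aupr_ratio(aupr_at_largest: float, aupr_at_lowest: float) -> float:
    """Ratio > 1: accuracy improves at larger parameter values; < 1: it degrades; 1: no effect."""
    return aupr_at_largest / aupr_at_lowest

# Illustrative numbers: accuracy halves at the largest parameter value.
print(aupr_ratio(0.20, 0.40))  # -> 0.5
```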