From Comparative and Functional Genomics to Practical Decisions in the Clinic: A View from the Trenches
During the depths of the plague that reduced the population of western Europe by over 40% in the 14th century,
“…medieval physicians attributed the onset of the disease to God's punishment for sin and to bad astrological conjunction
involving the feared planet Saturn. The king of France appointed a commission of University of Paris professors to account
for the Black Death. The professors soberly blamed the medieval catastrophe on the astrological place of Saturn in the house
of Jupiter.”
(Norman Cantor, In the Wake of the Plague: The Black Death and the World it Made)
To many participants, both Science and Medicine are about to enter a different age. The release of the first comprehensive sequences of the human genome, and the advent of microarray, proteomic, informatic, and other technologies, whether spawned directly by the science of the Genome Project or already employed to help explain the workings of the genome, have combined to form this impression. But how will all of this new information and these new techniques be implemented? Will people merely enter a phone-booth apparatus, or be carried there as babies, or perhaps even arrive as a tube of fetal cells culled from maternal blood, to have their entire medical future printed out from the results of lab(s) on a chip? All would (I hope) agree that this represents scary and simplistic thinking, but some of the popular images, and even professed expert opinions, are not very far from this perception. Further questions arise: What will happen to the small percentage of patients for whom “inevitable” doom, or horrible reactions to the accepted medications, is predicted? Pharmaceutical companies have obvious incentives to develop pharmacogenetics to ease the early phases of drug discovery and development, but what will be the incentive to develop different medications for the smaller populations who cannot tolerate, or will not respond to, drugs that are helpful for the majority? Can patients and the public be taught to understand the differing degrees of probability that a particular diagnosis or outcome will occur, based on a complicated Single Nucleotide Polymorphism (SNP) or gene-expression pattern read by a computer? Will our medico-legal and insurance-industry standards rise to the point at which such information is handled in ways that guarantee benefit, or at least prevent harm, to the patients involved?
Inevitably, some of the problems will arise from fairly common misconceptions of Science and Medicine, both as ideals and as they are commonly practiced. Although there are scientists, as well as physicians, who believe that Science and Medicine are the same pursuit, this school of opinion ignores the Art that is employed in the practice of both. Science, to be truly believable, must be reproducible; but is there a scientist who hasn't, at least once, had trouble reproducing their own results or those of another, only to discover the critical importance of the art of exactly how the experiment is designed and executed? Similarly, the practice of medicine comes down in large part to the interpretation of physical and laboratory findings, followed by honest communication with the patient concerning what those findings mean in light of the clinical history. This forms the heart of informed consent and is the prerequisite for any type of therapeutic intervention.
The prism of time often reveals how religious or societal bias has influenced the conclusions of physicians and scientists, but rarely are we aware of these biases, either in our own work or in that of our contemporary colleagues. A good example is the 18th–19th century pursuit of phrenology as a medical science. Phrenology was the practice of reading character traits and mental capabilities from a supposedly scientific analysis of the shape of, and protuberances on, an individual's skull. Many of these techniques were used to justify blatantly racist attitudes concerning the mental inferiority of despised races and subgroups. Figure 1 illustrates the L.N. Fowler phrenological model that was distributed to practitioners throughout Europe and America. To its credit, the British Association for the Advancement of Science never admitted phrenology as an accepted science, but many other scientific societies did, and the practice itself was widespread. The lay public at large viewed it as a credible science (van Wyhe 2001). Although phrenology was discredited as a science by the 1850s, phrenological practitioners remained common until the beginning of the 20th century, and phrenological concepts (as well as concepts from mesmerism, a sister pseudoscience of the same period) continue to influence a variety of accepted modern sciences (van Wyhe 2001). Reaching back further in time, it is well known that the ancient Greeks made no distinction between science, socio-political mindset, and religion. All were regarded as Truth, and thus there was no difference between the perceived physiology of a man, a horse, or a centaur (Fig. 2). It is a common assumption that although our predecessors had these biases, today's science is free of similar influences. I would argue that humanity has not changed so much! We are just as susceptible to bias in the design, pursuit, and interpretation of our science and medicine as were many of our predecessors. Only our belief systems and some of the tools at hand are different.
Phrenological bust by L.N. Fowler. The American followers of Fowler reintroduced phrenology in Britain and Europe in the 1860s, after it had earlier declined. This latter-day phrenology was more entrepreneurial, and it is this model that is commonly found in today's antique shops. (van Wyhe 2001; reproduced by permission of the publisher.)
Illustration of a Greek centaur (Fisher 1996, reproduced by permission of the publisher).
The new sciences of functional and comparative genomics strive to “make sense” of the accumulating genomic sequence through informatic comparisons of what homologous genes do in various organisms, and through direct experimentation. The design and use of comparative genomic databases to allow interpretation of, and experimentation on, human disease in other organisms will certainly be useful. At this point it is important to let the informatic data guide us toward direct experimentation. The degrees of similarity that lead us to suspect that a particular disease or biological process can reasonably be modeled in organisms with large phylogenetic differences need to be critically examined. Without critical thinking and direct experimental support, we run the risk of creating interesting, but mythical, informatic pictures that will only cause confusion. Although informatic comparisons and animal models are helpful, ultimately the full understanding of human physiology requires the study of humans.
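To make the notion of “degrees of similarity” concrete, the toy Python sketch below computes percent identity between two pre-aligned sequences. The sequences, the percent_identity helper, and the 80% threshold are illustrative assumptions, not anything drawn from this article; real comparative-genomics pipelines rely on alignment tools such as BLAST and on curated ortholog databases rather than this minimal global comparison.

```python
# A minimal sketch of the kind of similarity measure that underlies
# comparative-genomics inferences. Sequences and threshold are
# hypothetical, for illustration only.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences
    (gaps marked with '-'); columns where both are gaps are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matched = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" and b == "-":
            continue  # shared gap: not informative
        compared += 1
        if a == b and a != "-":
            matched += 1
    return 100.0 * matched / compared if compared else 0.0

# Hypothetical aligned fragments of a human gene and a model-organism homolog.
human = "ATGGCC-TTGAC"
model = "ATGGCCATTGAC"

identity = percent_identity(human, model)
print(f"identity: {identity:.1f}%")

# The article's caution, in code form: a high score alone does not mean
# the disease process can actually be modeled in the other organism.
if identity > 80.0:
    print("candidate homolog -- still requires direct experimental validation")
```

The final check mirrors the argument above: a similarity score can nominate a candidate model organism, but only direct experimentation can validate it.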
Some of the most impressive advances in modern medicine have come from well-designed and well-executed clinical trials. An example from my own field is shown in Figure 3. During the past 40 years, pediatric acute lymphoblastic leukemia (ALL) has moved from a uniformly fatal illness to one with a cure rate of >80%. This was achieved through careful, and somewhat empiric, combinations of various medications that were known to be individually active, although not singly curative, in this disease. Science is involved in the design and execution of clinical trials. The science of clinical trials includes detailed instructions concerning the approved source(s), dosage, and timing of the medications; statistical modeling of the randomization between treatments; projected accrual numbers needed for specific P values to be attainable several years after a trial closes; and “stopping” rules for recognizing and investigating untoward effects. Art is also involved in the successful execution of such an endeavor. Anyone who has ever experienced giving a foul-tasting medicine to a 4-yr-old (the age at which the incidence of ALL peaks) will understand that an artful approach is required. Now imagine orchestrating the administration of multiple foul-tasting and intravenous medicines, with serious known and unknown side effects, at nearly a hundred different clinical sites, over 3–5 yr, to the thousands of children represented by each of the curves in Figure 3, and obtaining interpretable results! This endeavor represents art, science, medicine, and the utmost dedication of both the professionals and the parents of these children over a long period of time.
Improvement in survival rate of children with Acute Lymphoblastic Leukemia (ALL). Curves represent survival outcomes for patients treated on successive Children's Cancer Group clinical trials conducted over the 1968–1997 trial period (A. Bleyer and H. Sather, pers. comm.).
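As a concrete illustration of the accrual projections mentioned above, the following Python sketch applies the standard normal-approximation formula for comparing two proportions. The per_arm_accrual helper and the survival rates, alpha, and power shown are hypothetical assumptions chosen for illustration, not figures from the Children's Cancer Group trials.

```python
# A minimal sketch of one piece of the statistical design described in
# the text: projecting per-arm accrual for a randomized two-arm trial.
import math
from statistics import NormalDist

def per_arm_accrual(p_control: float, p_experimental: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients needed per arm to detect a difference between two
    event-free-survival proportions with a two-sided z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    variance = (p_control * (1 - p_control)
                + p_experimental * (1 - p_experimental))
    effect = abs(p_experimental - p_control)
    n = (z_alpha + z_beta) ** 2 * variance / effect ** 2
    return math.ceil(n)

# Hypothetical design: current therapy yields 70% event-free survival,
# and the trial is powered to detect an improvement to 80%.
n = per_arm_accrual(0.70, 0.80)
print(f"~{n} children per arm, ~{2 * n} total, before years of follow-up")
```

Even this simplified calculation, which yields roughly 300 patients per arm, shows why each curve in Figure 3 represents thousands of children accrued over years; real trial designs use time-to-event (survival) methods and interim stopping rules rather than a single comparison of proportions.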
The strengths and pitfalls of interpreting the results of modern science and medicine fall on the shoulders of all. Scientists share the task of designing and studying relevant models, and of avoiding over-interpretation of their results. Physicians must become much more computer literate, because almost all of the new technologies are computer- and informatics-driven. Both must resist certain tendencies. Physicians need to adhere to evidence-based principles, avoiding the diagnosis that all too often hinges on “I know it when I see it.” Multiple compendiums of accepted, objective criteria for making individual medical diagnoses exist, but they need to be utilized in daily practice beyond the confines of residency training and tertiary care hospitals. Scientists, in the biotech as well as the academic communities, need to avoid the tendency, often driven by the high price of some of the newer techniques, to run under-controlled experiments or experiments with fewer replications than would have been accepted with standard techniques. Computers, and software algorithms that few beyond the person who wrote the code (a person who may have little or no background in biology) truly understand, add another source of difficulty in data interpretation. The “eye-candy” provided by the more expensive versions of genomic interpretation software can be as intriguing and intricate as any medieval tapestry, but the scientist must constantly question and test whether the picture provided is an accurate depiction of the true biology.

Finally, practitioners of both science and medicine have a responsibility to educate the public. Without this education, the public begins to regard science and medicine as “magic.” This leads to multiple difficulties, including expectations that rise far faster than the science can support; when disappointment ultimately results, the resentment can create problems that impede the entire process. Government-issued certificates of confidentiality (Office for Human Research Protections 2000) for specific clinical trials, as well as legislation giving pharmaceutical companies incentives to test and seek approval of medications in children (Food and Drug Administration Modernization Act 1997), represent the first efforts to directly address the thorny issues of confidentiality and drug development for minority or less remunerative populations. An educated public that views itself as a participant in the process will allow for the longer-term changes needed to answer the difficult questions that arise along the way. If this is all done in a fair and equitable manner, then hopefully none of us will ever solemnly proclaim, to an audience of our peers, the public at large, or an individual patient in an exam room, that the best scientific tools of our day have led to the modern equivalent of: “the reason for your condition is the . . . place of Saturn in the House of Jupiter.”
Footnotes
E-MAIL JMARGOLIN@txccc.org; FAX (832) 825-4038.
Article and publication are at www.genome.org/cgi/doi/10.1101/gr.192201.