
Paradigms and Hidden Research Strategies


Abstract

Many scientists and engineers hold a specific world view, a paradigm within which they approach all problems and which helps them to solve their puzzles. From time to time these paradigms change and lead research in new directions. Science and technology thus seem to advance in revolutionary leaps separated by long eras of small improvements. Both exert a huge influence on growth.

as we understand the body’s repair process at the genetic level… we will be able to advance the goal of maintaining our bodies in normal function, perhaps perpetually.

W. Haseltine, CEO, Human Genome Sciences, 2001



The Economist: The race to computerise biology

Bioinformatics: In life-sciences establishments around the world, the laboratory rat is giving way to the computer mouse—as computing joins forces with biology to create a bioinformatics market that is expected to be worth nearly $40 billion within three years

© The Economist Newspaper Limited, London (Dec 12th 2002)

For centuries, biology has been an empirical field that featured mostly specimens and Petri dishes. Over the past five years, however, computers have changed the discipline—as they have harnessed the data on genetics for the pursuit of cures for disease. Wet lab processes that took weeks to complete are giving way to digital research done in silico. Notebooks with jotted comments, measurements and drawings have yielded to terabyte storehouses of genetic and chemical data. And empirical estimates are being replaced by mathematical exactness.

Welcome to the world of bioinformatics—a branch of computing concerned with the acquisition, storage and analysis of biological data. Once an obscure part of computer science, bioinformatics has become a linchpin of biotechnology’s progress. In the struggle for speed and agility, bioinformatics offers unparalleled efficiency through mathematical modelling. In the quest for new drugs, it promises new ways to look at biology through data mining. And it is the only practical way of making sense of the ensuing deluge of data.

The changes wrought by computers in biology resemble those in the aircraft and car industries a decade or so ago, after the arrival of powerful software for CAD (computer-aided design) and CFD (computational fluid dynamics). In both industries, engineers embraced the new computational modelling tools as a way of designing products faster, more cheaply and more accurately. In a similar way, biotech firms are now looking to computer modelling, data mining and high-throughput screening to help them discover drugs more efficiently.

In the process, biology—and, more specifically, biopharmacy—has become one of the biggest consumers of computing power, demanding petaflops (thousands of trillions of floating-point operations per second) of supercomputing power, and terabytes (trillions of bytes) of storage. Bioinformatics is actually a spectrum of technologies, covering such things as computer architecture (eg, workstations, servers, supercomputers and the like), storage and data-management systems, knowledge management and collaboration tools, and the life-science equipment needed to handle biological samples. In 2001, sales of such systems amounted to more than $12 billion worldwide, says International Data Corporation, a research firm in Framingham, Massachusetts. By 2006, the bioinformatics market is expected to be worth $38 billion.

The opportunity has not been lost on information technology (IT) companies hurt by the dotcom bust and telecoms meltdown. Starting in 2000, IBM was the first to launch a dedicated life-sciences division. Since then, a host of other IT firms have jumped aboard the bioinformatics bandwagon. Along with IBM, Sun Microsystems has staked a claim on the computing and management part of the business. Firms such as EMC and Hewlett-Packard have focused on data storage. Agilent, SAP and Siebel provide so-called decision support. Even build-to-order PC makers such as Dell have entered the fray with clusters of cheap machines.

A swarm of small start-up firms has also been drawn in, mostly to supply data, software or services to analyse the new wealth of genetic information. Companies such as Accelrys in San Diego, California, Spotfire in Somerville, Massachusetts, and Xmine in Brisbane, California, are selling software and systems to mine and find hidden relationships buried in data banks. Others such as Open Text of Waterloo, Ontario, and Ipedo in Redwood City, California, have built software that improves communication and knowledge management among different areas of pharmaceutical research. Gene Logic of Gaithersburg, Maryland, has created a business to collect samples and screen their genetic code for proprietary research libraries. And Physiome Sciences of Princeton, New Jersey, is providing computer-based modelling systems that offer an insight into drug targets and disease mechanisms.

Bioinformatics is not for the faint of heart, however. Over the past year, the fortunes of a number of biotechnology firms have faltered, as venture-capital funds have sought alternative investments. Venerable names of biotechnology, including Celera Genomics of Rockville, Maryland, LION Bioscience of Heidelberg, Germany, and others, have found themselves scrambling to change the way they do business. Yet, for all the turbulence in the industry, the bioinformatics juggernaut remains on track, fuelled by new forces changing the pharmaceutical industry.

Gene genie

In retrospect, the marriage of genetics and computers was pre-ordained. After all, biotechnology is based on the genetic building-blocks of life—in short, on nature’s huge encyclopedia of information. And hidden in the vast sequences of A (adenine), G (guanine), C (cytosine) and T (thymine) that spell out the genetic messages—ie, genes—are functions that take an input and yield an output, much as computer programs do. Yet the computerisation of genetics on such a grand scale would not have occurred without the confluence of three things: the invention of DNA microarrays and high-throughput screening; the sequencing of the human genome; and a dramatic increase in computing power.


More commonly known as “gene chips”, microarrays are to the genetic revolution of today what microprocessors were to the computer revolution a quarter of a century ago. They turn the once arduous task of screening genetic information into an automatic routine that exploits the tendency for the molecule that carries the template for making the protein, messenger-ribonucleic acid (m-RNA), to bind to the DNA that produces it. Gene chips contain thousands of probes, each imbued with a different nucleic acid from known (and unknown) genes to bind with m-RNA. The resulting bonds fluoresce under different colours of laser light, showing which genes are present. Microarrays measure the incidence of genes (leading to the gene “sequence”) and their abundance (the “expression”).
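To make the mechanism concrete, the minimal Python sketch below mimics what the paragraph describes: a probe "detects" a transcript through sequence complementarity, and a fluorescence reading stands in for abundance. Every probe name, sequence and threshold in it is invented for illustration.

    # Toy illustration of microarray detection: a probe binds transcripts that are
    # complementary to it, and fluorescence intensity stands in for abundance.
    # All names, sequences and thresholds here are hypothetical.

    COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

    def reverse_complement(dna: str) -> str:
        """Return the reverse complement of a DNA sequence."""
        return "".join(COMPLEMENT[base] for base in reversed(dna))

    def hybridises(probe: str, transcript: str) -> bool:
        """An mRNA transcript (with U read back as T) binds a probe if it
        contains the probe's reverse complement."""
        return reverse_complement(probe) in transcript.replace("U", "T")

    # Hypothetical probes, one hypothetical mRNA fragment, and a mock scanner readout.
    probes = {"geneA_probe": "ATGGCC", "geneB_probe": "TTTAAA"}
    sample_mrna = "GGCCAUGGCAUUAA"
    intensity = {"geneA_probe": 1250.0, "geneB_probe": 35.0}
    PRESENT_THRESHOLD = 200.0  # arbitrary cut-off for this toy example

    for name, sequence in probes.items():
        bound = hybridises(sequence, sample_mrna)
        expressed = intensity[name] > PRESENT_THRESHOLD
        print(f"{name}: binds sample={bound}, called present={expressed}, "
              f"signal={intensity[name]}")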

In just a few years, gene chips have gone from experimental novelties to tools of the trade. A single GeneChip from Affymetrix, the leading maker of microarrays based in Santa Clara, California, now has more than 500,000 interrogation points. (For his invention of the gene chip, Affymetrix’s Stephen Fodor won one of The Economist’s Innovation Awards for 2002.) With each successive generation, the number of probes on a gene chip has multiplied as fast as transistors have multiplied on silicon chips. And with each new generation have come added capabilities.

The sequencing of the human genome in late 2000 gave biotechnology the biggest boost in its 30-year history. But although the genome sequence has allowed more intelligent questions to be asked, it has also made biologists painfully aware of how many remain to be answered. The genome project has made biologists appreciate the importance of “single nucleotide polymorphisms” (SNPs)—minor variations in DNA that define differences among people, predispose a person to disease, and influence a patient’s response to a drug. And, with the genetic make-up of humans broadly known, it is now possible (at least in theory) to build microarrays that can target individual SNP variations, as well as making deeper comparisons across the genome—all in the hope of finding the obscure roots of many diseases.

The sequencing has also paved the way for the new and more complex field of proteomics, which aims to understand how long chains of protein molecules fold themselves up into three-dimensional structures. Tracing the few thousandths of a second during which the folding takes place is the biggest technical challenge the computer industry has ever faced—and the ultimate goal of the largest and most powerful computer ever imagined, IBM’s petaflop Blue Gene. The prize may be knowledge of how to fashion molecular keys capable of picking the lock of disease-causing proteins.

The third ingredient—the dramatic rise in computing power—stems from the way that the latest Pentium and PowerPC microprocessors pack the punch of a supercomputer of little more than a decade ago. Thanks to Moore’s law (which predicted, with remarkable consistency over the past three decades, that the processing power of microchips will double every 18 months), engineers and scientists now have access to unprecedented computing power on the cheap. With that has come the advent of “grid computing”, in which swarms of lowly PCs, idling between tasks, band together to form a number-crunching mesh equivalent to a powerful supercomputer but at a fraction of the price. Meanwhile, the cost of storing data has continued to fall, and managing it has become easier thanks to high-speed networking and smarter forms of storage.

Banking on failure

Despite such advances, it is the changing fortunes of the drug industry that are pushing biology and computing together. According to the Boston Consulting Group, the average drug now costs $880m to develop and takes almost 15 years to reach the market. With the pipelines of new drugs under development running dry, and patents of many blockbuster drugs expiring, the best hope that drug firms have is to improve the way they discover and develop new products.

Paradoxically, the biggest gains are to be made from failures. Three-quarters of the cost of developing a successful drug goes to paying for all the failed hypotheses and blind alleys pursued along the way. If drug makers can kill an unpromising approach sooner, they can significantly improve their returns. Simple mathematics shows that reducing the number of failures by 5% cuts the cost of discovery by nearly a fifth. By enabling researchers to find out sooner that their hoped-for compound is not working out, bioinformatics can steer them towards more promising candidates. Boston Consulting believes bioinformatics can cut $150m from the cost of developing a new drug and a year off the time taken to bring it to market.
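The back-of-the-envelope arithmetic behind those figures can be reproduced directly from the numbers quoted in the article; a minimal sketch:

    # Back-of-the-envelope arithmetic using only the figures quoted above.
    avg_cost = 880e6       # average cost of developing a drug (Boston Consulting Group)
    failure_share = 0.75   # roughly three-quarters of that pays for failed approaches
    bcg_saving = 150e6     # Boston Consulting's estimated saving from bioinformatics

    spent_on_failures = avg_cost * failure_share
    saving_fraction = bcg_saving / avg_cost

    print(f"Spent on failed approaches per approved drug: ${spent_on_failures / 1e6:.0f}m")
    print(f"Estimated bioinformatics saving: {saving_fraction:.0%} of the total cost")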

That has made drug companies sit up. Throughout the 1990s, they tended to use bioinformatics to create and cull genetic data. More recently, they have started using it to make sense of it all. Researchers now find themselves swamped with data. Each time it does an experimental run, the average microarray spits out some 50 megabytes of data—all of which has to be stored, managed and made available to researchers. Today, firms such as Millennium Pharmaceuticals of Cambridge, Massachusetts, screen hundreds of thousands of compounds each week, producing terabytes of data annually.
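At the quoted 50 megabytes per run, the arithmetic behind that terabyte figure is straightforward; a throwaway sketch:

    # Rough arithmetic on the data volumes quoted above (decimal units).
    mb_per_run = 50                              # ~50 MB from one microarray run
    runs_per_terabyte = 1_000_000 / mb_per_run   # 1 TB = 1,000,000 MB
    print(f"About {runs_per_terabyte:,.0f} runs fill one terabyte")   # ~20,000 runs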

The data themselves pose a number of tricky problems. For one thing, most bioinformatics files are “flat”, meaning they are largely text-based and intended for browsing by eye. Meanwhile, sets of data from different bioinformatics sources are often in different formats, making it harder to integrate and mine them than in other industries, such as engineering or finance, where formal standards for exchanging data exist.
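For a sense of what "flat" means in practice, much sequence data still travels as plain text in formats such as FASTA, which is easy to browse by eye but must be parsed before it can be integrated with anything else. A minimal sketch in Python (the sequences are made up):

    # Minimal FASTA parser: turns a flat, human-readable text file into
    # structured records that can be joined with other data sources.
    # The example sequences are invented.

    def parse_fasta(text: str) -> dict[str, str]:
        """Map each '>' header line to the concatenated sequence that follows it."""
        records: dict[str, str] = {}
        current = None
        for line in text.splitlines():
            line = line.strip()
            if not line:
                continue
            if line.startswith(">"):
                current = line[1:].split()[0]   # first token is the identifier
                records[current] = ""
            elif current is not None:
                records[current] += line
        return records

    flat_file = """
    >seq1 hypothetical transcript
    ATGGCCATTA
    CCGGTTAACC
    >seq2 hypothetical transcript
    TTGACCAGGA
    """

    for identifier, sequence in parse_fasta(flat_file).items():
        print(identifier, len(sequence), "bases")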

More troubling still, a growing proportion of the data is proving inaccurate or even false. A drug firm culls genomic and chemical data from countless sources, both inside and outside the company. It may have significant control over the data produced in its own laboratories, but little over data garnered from university research and other sources. Like any other piece of experimental equipment, the microarrays themselves have varying degrees of accuracy built into them. “What people are finding is that the tools are getting better but the data itself is no good,” says Peter Loupos of Aventis, a French drug firm based in Strasbourg.

To help solve this problem, drug firms, computer makers and research organisations have organised a standards body called the Interoperable Informatics Infrastructure Consortium. Their Life Science Identifier, released in mid-2002, defines a simple convention for identifying and accessing biological data stored in multiple formats. Meanwhile, the Distributed Annotation System, a standard for describing genome annotation across sources, is gaining popularity. This is making it easier to compare different groups’ genome data.
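The Life Science Identifier itself is only a naming convention: a URN of the general form urn:lsid:authority:namespace:object, optionally followed by a revision. The sketch below splits an identifier into those parts, assuming that layout; the example identifier is hypothetical.

    # Minimal sketch of splitting a Life Science Identifier (LSID) into its parts,
    # assuming the general layout urn:lsid:authority:namespace:object[:revision].
    # The example identifier below is hypothetical.

    def parse_lsid(lsid: str) -> dict[str, str]:
        parts = lsid.split(":")
        if len(parts) < 5 or parts[0].lower() != "urn" or parts[1].lower() != "lsid":
            raise ValueError(f"not an LSID: {lsid!r}")
        fields = {"authority": parts[2], "namespace": parts[3], "object": parts[4]}
        if len(parts) > 5:
            fields["revision"] = parts[5]
        return fields

    print(parse_lsid("urn:lsid:example.org:genes:BRCA1:2"))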

Tools for the job

Such standards will be a big help. One of the most effective tools for probing information for answers is one of the most mundane: data integration. Hence the effort by such firms as IBM, Hewlett-Packard and Accelrys to develop ways of pulling data together from different microarrays and computing platforms, and getting them all to talk fluently to one another. A further impetus for data integration, at least in America, is the Patent and Trademark Office’s requirement for filings to be made electronically from 2003 onwards. The Food and Drug Administration is also expected to move to electronic filing for approval of new drugs.

It is in data mining, however, where bioinformatics hopes for its biggest pay-off. First applied in banking, data mining uses a variety of algorithms to sift through storehouses of data in search of “noisy” patterns and relationships among the different silos of information. The promise for bioinformatics is that public genome data, mixed with proprietary sequence data, clinical data from previous drug efforts and other stores of information, could unearth clues about possible candidates for future drugs.
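A deliberately simple flavour of such mining is to look for genes whose expression rises and falls together across experiments, using plain correlation; the sketch below does exactly that on made-up expression values.

    # Toy data-mining step: flag pairs of genes whose expression profiles are
    # strongly correlated across five experiments. The values are invented.
    from itertools import combinations
    from statistics import correlation   # Pearson's r, available from Python 3.10

    expression = {
        "geneA": [1.0, 2.1, 3.2, 4.0, 5.1],
        "geneB": [0.9, 2.0, 3.1, 4.2, 5.0],   # moves with geneA
        "geneC": [5.0, 4.1, 3.0, 2.2, 1.1],   # moves against geneA
    }

    for g1, g2 in combinations(expression, 2):
        r = correlation(expression[g1], expression[g2])
        if abs(r) > 0.9:
            kind = "co-expressed" if r > 0 else "anti-correlated"
            print(f"{g1} and {g2} look {kind} (r = {r:.2f})")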

Unlike banking, bioinformatics offers big challenges for data mining because of the greater complexity of the information and processes. This is where modelling and visualisation techniques should come in, to simulate the operations of various biological functions and to predict the effect of stimuli on a cell or organ. Computer modelling allows researchers to test hunches fast, and offers a starting-point for further research using other methods such as X-ray crystallography or spectroscopy. It also means that negative responses come sooner, reducing the time wasted on unworkable target drugs.
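In miniature, such a model might look like the toy simulation below: the fraction of a target receptor bound by a drug, stepped forward in time with first-order kinetics. It is a sketch only; the rate constants and doses are invented and correspond to no real compound.

    # Toy kinetic model: fraction of a receptor bound by a drug over time,
    #   d(bound)/dt = k_on * drug * (1 - bound) - k_off * bound,
    # integrated with a simple Euler step. All constants are invented.

    def simulate_binding(drug_conc: float, k_on: float, k_off: float,
                         dt: float = 0.01, steps: int = 1000) -> float:
        bound = 0.0   # fraction of receptors occupied at t = 0
        for _ in range(steps):
            d_bound = k_on * drug_conc * (1.0 - bound) - k_off * bound
            bound += d_bound * dt
        return bound

    # Hypothetical in-silico "experiment": compare two doses of the same compound.
    low = simulate_binding(drug_conc=0.5, k_on=1.0, k_off=0.2)
    high = simulate_binding(drug_conc=5.0, k_on=1.0, k_off=0.2)
    print(f"Occupancy after 10 time units: low dose {low:.2f}, high dose {high:.2f}")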

Computational models have already yielded several new compounds. BioNumerik of San Antonio, Texas, has modelled the way certain drugs function within the human body. It has also simulated a specific region of a cell to observe the interaction between proteins and DNA. Thanks to its two Cray supercomputers running simulations that combine quantum physics, chemistry and biological models, BioNumerik has been able to get three compounds into clinical trials. Frederick Hausheer, BioNumerik’s founder and chief executive, expects his firm’s modelling technology to cut the time for discovering new drugs by a third to a half.

In a similar vein, Aventis has used several models of cells and disease mechanisms to discover a number of new compounds. And Physiome Sciences now sells a product that integrates various clinical and genomic data to generate computer-based models of organs, cells and other structures.


For all their power, these computer modelling techniques should be taken with at least a grain or two of salt. Although they allow researchers to tinker with various compounds, they will never replace clinical trials or other traditional methods of drug research. Even monumental bioinformatics efforts, such as the Physiome Project, will only help researchers refine their ideas before getting their hands wet. “If people haven’t done this kind of work before, they won’t understand how difficult it really is,” says Dr Hausheer.

Indeed, a big risk of computer modelling and other information tools is to rely too much on them, says Martin Gerstel, chairman of Compugen, a maker of analytical and interpretation tools based in Jamesburg, New Jersey. Many researchers confuse the data generated by bioinformatics with information. The danger with all the computing power being brought to bear is that it is becoming seductively easy for biologists to rely on the number-crunching potential of computers and to ignore the scientific grind of hypothesis and proof. As the technology of bioinformatics outpaces the science of biology, the science risks becoming a “black box”, the inner workings of which few will be able to comprehend.

To avoid this, biologists need an ever broader set of skills. For instance, the most pervasive impact of information technology on biology has been through wholesale quantification. Suddenly, biologists are being forced to become mathematicians in order to describe the biological processes and models involved. That implies a demand for wholly new sets of skills and educational backgrounds.

Such changes are not unlike those that affected physics and chemistry during the 1940s, when new computational paradigms created the foundations for nuclear energy and the age of plastics. Much as computing made physics more predictable and prolific, many believe that its new alliance with mathematics will make biology more powerful and definitive. But the marriage will experience some turbulent times before achieving the full flowering of its promise.


Copyright information

© 2014 Springer International Publishing Switzerland


Cite this chapter

Boutellier, R., Heinzen, M. (2014). Paradigms and Hidden Research Strategies. In: Growth Through Innovation. Management for Professionals. Springer, Cham. https://doi.org/10.1007/978-3-319-04016-5_2
