“We can talk endlessly about moral progress, about social progress, about poetic progress, about progress made in happiness; nevertheless, there is a type of progress that defies any discussion, and that is scientific progress, as soon as we judge it within the hierarchy of knowledge, from a specifically intellectual point of view.”

Gaston Bachelard

The Philosophy of No - 1940

In the last few decades of the 20th century, biology changed from a science that was relatively poor in data into one overflowing with information. This sudden avalanche was triggered mainly by technological advances that were put to profitable use in the structural and functional study of genomes and proteomes. The need to organize these data so as to draw an explanatory coherence from them has become an imperative that requires high-performance computing facilities. After the decoding of dozens of genomes of prokaryotes and eukaryotes, particularly that of Man, the first post-genomic era is now turning to the question of how the knowledge that has been acquired should be used. Man’s health is at the heart of these preoccupations. Under pressure from the media and from a strong political current that channels financial support towards applications of so-called “high-visibility” fundamental research, society waits impatiently for quick, usable results, particularly in medicine. This can lead to a few paradoxical situations. Subjected to a mandatory short-term finality, the experimental process comes into conflict with academic research, which is carried out over the long term and is on the lookout for the contradictions that have often been, and often remain, at the origin of breathtaking findings. The interest of the pharmaceutical and agronomic industries in the economic development of discoveries made about living beings, with the necessity of making investments cost-effective and standing out from the competition, is contributing to a new image of biological research. The links that have been forged with the economic, social and political domains tend, nowadays, to have a preponderant effect on the application of the experimental method to living beings. One of the important parameters that has helped to modify the landscape of traditional biological research is the introduction of new exploratory methods such as biocomputing, or bioinformatics, and high-throughput screening, which involves the simultaneous processing of hundreds or even thousands of samples. This approach contrasts with traditional biology, in which the research strategy is based upon the observation of effects obtained as a function of experimental parameters that are modified one by one. Another aspect of modern times is that, with the irresistible trend in genetic manipulation towards a focus on human beings, certain areas of fundamental research are finding themselves locked into philosophical dilemmas that are matters for ethical and sociocultural consideration, and the subjects of fierce debate.

1 The Accession of Biotechnology: Towards a New Paradigm for the Experimental Method

Instead of setting out to discover unknown mechanisms by analyzing effects that depend on specific causes, with some uncertainty as to the possible success of the enterprise being undertaken, which is the foundation stone of the Bernardian paradigm of the experimental method, many current research projects give themselves achievable and programmable objectives that depend upon the means available to them: sequencing genomes with a view to comparing them; recognizing sequence similarities in proteins coded for by genes belonging to different species, with the aim of putting together phylogenetic trees; synthesizing interesting proteins in transgenic animals and plants; analyzing the three-dimensional structure of proteins in order to find sites that are likely to bind medicinal substances; and synthesizing molecular species able to recognize pathogenic targets. The facilities that are called into play include instruments that are often sophisticated, whose performance, in terms of miniaturization, computerization and robotization, is far beyond that of the apparatus in use a few decades ago. These facilities, applied to research into living beings, have entered the framework of a methodology that has been given the label biotechnology. Proceeding hand-in-hand with applications that have become more and more meaningful in the domains of medicine, pharmacology, agronomy and animal husbandry, the biotechnological process has come to the fore as a new paradigm for the experimental method as applied to living beings. In addition to new discoveries, the driving forces behind biotechnologies are related to economic imperatives as well as to the interest and support they receive from the political powers-that-be. The academic spirit that presides over fundamental science gives way to the entrepreneurial spirit that implements a rational programming of facilities and an efficient organization of scientific collaborations. As an example, the sequencing of the human genome, which includes three billion nucleotide base pairs, required the coordination of several dozen scientific teams around the world and the matching of several tens of thousands of results.

Research on DNA provides a typical illustration of the way in which research has become divided, over the last few decades, between an approach and an interest that had previously been purely academic and the increasing role of technology. The latter can be justified by the results it brings to the life of society at large, but these same results also raise questions about how well-founded some of them are, particularly in the health domain. The experimental method, which had been confined to the laboratory, is now a matter for public debate.

1.1 The Genome Explored

Before it won acclaim, DNA, which was isolated under the name of nuclein by Johann Friedrich Miescher (1844 ‑ 1895), at the end of the 19th century, had to undergo a series of structural evaluation tests that were spread out over the first five decades of the 20th century. An overall conclusion then came to the fore. DNA is a polydeoxyribonucleotide that carries four cyclic bases, adenine, thymine, cytosine and guanine. Each base is involved in the structure of a mononucleotide where it is itself associated with a sugar, deoxyribose, which is associated with a phosphate residue. DNA was compared to a ladder, the rungs of which (mononucleotides) were linked by ester bonds between an acid group of a phosphate residue of a nucleotide and the free hydroxyl group of the deoxyribose of the following nucleotide.

1.1.1 From molecular biology to genetic engineering

“The very birth of molecular biology illustrates the impossibility of organizing research in a new domain, of scheduling it […]. This biology was born of individual decisions taken by a small number of scientists between the end of the thirties and the beginning of the fifties. Nobody pushed them in this direction. No administrator, no foundation, no Minister for Research committed them to this path. It was the curiosity of each, a new way of considering old problems, that led a few men and women to solve the problems of heredity.”

François Jacob

Of Flies, Mice and Men - 1997

The middle of the 20th century saw an accumulation of experimental evidence showing that DNA carries genetic information and, because of this, that it controls the transmission of hereditary characteristics: the proof provided in 1944 by Oswald Avery (1877‑1955), Colin MacLeod (1909‑1972) and Maclyn McCarty (1911‑2005) of the transforming power of DNA in Pneumococcus; the demonstration by Alfred Hershey (1908‑1997) and Martha Chase (1927‑2003), in 1952, of the role played by bacteriophage DNA as an infectious agent for bacteria; and the revelation by Erwin Chargaff at the beginning of the 1950s of the equivalence of the molar concentrations of adenine (A) and thymine (T), on the one hand, and of cytosine (C) and guanine (G), on the other hand, in DNAs arising from a multitude of sources, animal, plant and microbial, thus suggesting a complementary pairing of adenine with thymine and of cytosine with guanine. Based on the pairing of A/T and C/G bases, the model of the double helix structure of DNA, formulated in 1953 by James Watson and Francis Crick, made it possible to understand the identical synthesis of double strands of DNA by replication during cell division (Figure IV.1) and, as a consequence, the conservation of hereditary characteristics in descendants. Afterwards, it was found that the information contained in the DNA base sequence determines the amino acid sequence in proteins. Then the roles played by messenger RNA and transfer RNAs were elucidated, the former acting as a carrier of information between DNA and the proteins being synthesized, and the latter acting as double-headed adaptors, able to recognize nucleotide triplets (codons) in messenger RNA and to specifically bind amino acids in order to position them on the ribosomes, the final result being the synthesis of a protein chain. In 1966, the genetic code was deciphered. The veil of mystery that had covered the mechanism of protein synthesis was lifted, and the decisive role played by nucleic acids in this synthesis was shown.

Figure IV.1
figure 1

The central dogma of molecular biology

A - Double helix structure of DNA and simplified representation of its self-replication. Each strand of the parent molecule of DNA acts as a matrix for the synthesis of a daughter molecule of complementary DNA, in conformity with the rules of pairing: adenine (A) with thymine (T) and guanine (G) with cytosine (C). The double strands that appear are identical to each other and identical to the parent DNA molecule.

B - Transcription of DNA into messenger RNA and its translation into an amino acid chain. Diagram of gene expression in a eukaryotic cell. One of the strands of DNA (the coding strand) has coding sequences (exons) and non-coding sequences (introns). The gene is said to be split. The transcription of the exons, accompanied by their splicing, leads to the formation of a messenger RNA, the codons (nucleotide triplets) of which are translated into amino acids that are linked to each other by covalent bonds in order to form a protein chain. In prokaryotic cells (bacteria), the genes do not contain introns and are not split.

A: adenine; C: cytosine; G: guanine; T: thymine; U: uracil. Met: methionine; His: histidine; Tyr: tyrosine; Gly: glycine; Phe: phenylalanine.

Later on, there were a few adjustments. Although, in bacteria, proteins are coded for by a continuous sequence of nucleotide triplets in DNA, in the 1970s the surprising discovery was made that in eukaryotic organisms, genes are discontinuous, made up of coding DNA sequences (exons) interrupted by non-coding sequences (introns). From the end of the 1950s, François Jacob and Jacques Monod had postulated the existence of a dual determinism for protein synthesis and shown that, next to structural genes expressed as proteins, there are regulatory genes able to control the expression of the structural genes. The importance of the differential regulation of gene expression in cell differentiation in higher organisms was quickly recognized. From this point on, it was possible to explain why a particular species of protein is more specifically expressed in a given tissue and another species of protein is more particularly expressed in another tissue, each type of tissue finding its specificity in its molecular components. This fantastic framework of knowledge, which was built up over a couple of decades, has been used as the foundation stone for the so-called central dogma of molecular biology, which explains the transcription of DNA sequences into messenger RNAs and the translation of messenger RNAs into proteins and which, with only a few variants, is the same throughout the living world (Figure IV.1) [1].

Interlinked with the epic rise of molecular biology, there was a succession of technical innovations that led to the synthesis of DNA by chemical or enzymatic means, and to its being cleaved at specific locations, with the pieces that were obtained being joined together again [2]. In 1962, in Geneva, Werner Arber (b. 1929) [3] and Daisy Dussoix highlighted the restriction phenomenon, which involves the degradation of bacteriophage DNA by a recipient bacterium. They discovered that an extract of E. coli has a restriction activity, and that this activity is of an enzymatic nature, caused by a nuclease that breaks the phosphodiester bonds in DNA. In 1970, the Americans Hamilton Smith (b. 1931) and Kent Wilcox [4] purified the first restriction enzyme from a strain of Hæmophilus influenzae. In 1971, Daniel Nathans (1928‑1999) and Kathleen Danna (b. 1945) [5] at Johns Hopkins University in Baltimore (USA) drew up the first restriction map, based on the circular DNA of the monkey virus SV40, using a restriction enzyme that was named HindIII and following the sequential appearance of shorter and shorter fragments resulting from the partial digestion of the DNA. In the following years, dozens of restriction enzymes were isolated, all of them endowed with a surprising specificity with respect to specific base sequences in DNA (Figure IV.2). These enzymes were to become indispensable tools in genetic recombination experiments.

Figure IV.2
figure 2

Mode of action of restriction enzymes

Certain restriction enzymes cleave DNA, leaving free cohesive (or sticky) ends (this is the case for the enzyme EcoRI), while others cleave DNA leaving blunt ends (this is the case for the enzyme HaeIII).

A: adenine; C: cytosine; G: guanine; T: thymine.
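As a complement to Figure IV.2, the short sketch below illustrates the principle in code form. It is a simplified, hypothetical model, not laboratory software: each enzyme is reduced to a recognition sequence and a cut position on one strand, so EcoRI (GAATTC, staggered cut) yields fragments with overhanging ends while HaeIII (GGCC, central cut) yields blunt-ended fragments.

```python
# Simplified sketch of restriction digestion on a single DNA strand.
# EcoRI recognizes GAATTC and cuts after the first G (a staggered cut that leaves
# sticky ends on duplex DNA); HaeIII recognizes GGCC and cuts in the middle (blunt ends).

ENZYMES = {
    "EcoRI": ("GAATTC", 1),   # (recognition site, cut offset within the site)
    "HaeIII": ("GGCC", 2),
}

def digest(dna, enzyme):
    """Return the fragments produced by cutting the strand at every recognition site."""
    site, offset = ENZYMES[enzyme]
    fragments, start, pos = [], 0, dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + offset])
        start = pos + offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

print(digest("TTGAATTCGGCCAAGAATTCTT", "EcoRI"))   # ['TTG', 'AATTCGGCCAAG', 'AATTCTT']
print(digest("TTGAATTCGGCCAAGAATTCTT", "HaeIII"))  # ['TTGAATTCGG', 'CCAAGAATTCTT']
```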

The transformation of RNA back into DNA was observed by Howard Temin (1934‑1994) and S. Mizutani [6], in experiments on the Rous sarcoma virus, an RNA virus that, when it proliferates in host cells, is able to synthesize a DNA that is complementary to its RNA. The enzyme responsible, reverse transcriptase, was purified by both H. Temin and David Baltimore (b. 1938) [7]. Starting with a given messenger RNA, it then became possible to work back to the DNA, i.e., to the gene, by a simple enzymatic reverse transcription operation. DNA that has been synthesized in this way is called complementary DNA (cDNA). In eukaryotic organisms, reverse transcription has proved all the more useful as a technique in that all cDNA is coding, unlike the situation in vivo, in which genes are divided up into portions that are coding (exons) and portions that are non-coding (introns).
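In silico, the same operation amounts to writing down the strand complementary to the messenger RNA, read in the opposite direction, with uracil replaced by thymine. The fragment below is a minimal, illustrative sketch of this reverse transcription step, using an invented nine-nucleotide mRNA.

```python
# Minimal sketch of reverse transcription in silico: the cDNA strand is complementary
# to the mRNA (pairing A-T, U-A, G-C, C-G) and antiparallel, so it is written here
# in the 5' to 3' direction by reading the mRNA backwards.

PAIRING = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(mrna):
    """Return the cDNA strand complementary to the given mRNA, written 5' to 3'."""
    return "".join(PAIRING[base] for base in reversed(mrna))

print(reverse_transcribe("AUGCAUUAU"))  # 'ATAATGCAT'
```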

The ability to cleave DNA and to join together the fragments obtained in a deliberately chosen order, or, in other words, to manufacture previously unseen DNA sequences by making new combinations, led to the dawning of recombinant DNA technology and caused scientists to come to the sudden realization that the Pandora’s box that contains the secrets of life had been opened, that uncontrollable catastrophes might arise from this, and that there was a potential danger of causing tumorigenic viruses to reproduce in commensal bacteria such as the enterobacterium Escherichia coli. In 1975, around a hundred molecular biologists gathered together at the Asilomar Conference Center [8] near Monterey in California, in order to discuss the dangers of the new DNA technology. They proposed strict regulation to govern genetic manipulation. Time and experience have shown that the risks being run were very low.

In 1977, DNA sequencing methods were published. One of them made use of chemical techniques [9], the other of enzymatic techniques [10]. Applications were not slow in appearing. From 1977, the team led by Frederick Sanger (b. 1918) in Cambridge (UK) determined the first sequence of a genome, that of bacteriophage PhiX174, which is 5,375 nucleotides long. This was the beginning of an audacious adventure, the apparently senseless challenge being met with unbelievable rapidity thanks to the innovative methods of bioengineering, resulting, during the first years of the 21st century, in the sequencing of the human genome. Analysis of the human DNA sequence involved the participation of two rival groups, one of them academic, coordinated by Francis Collins (b. 1950) and bringing together dozens of laboratories around the world, and the other a private Californian company directed by Craig Venter (b. 1946).

At the beginning of the 1980s, when everyone was persuaded that RNAs could be placed into three well-defined categories, messenger RNAs, transfer RNAs and ribosomal RNAs, it was learned with great surprise that there are RNAs that have catalytic properties, as shown by Thomas Cech (b. 1947) [11]. These RNAs, called ribozymes, have, like enzymatic proteins, structured catalytic sites that are able to catalyze RNA or DNA cleavage or ligation reactions. Recently, engineering techniques have been used to obtain artificial ribozymes that have been found to be able to catalyze reactions as varied as oxidations or the synthesis of peptides and nucleotides, thus opening up wide-ranging possibilities of applications in molecular therapeutics and, in addition, reinforcing the famous theory of the “RNA World” at the beginning of the appearance of life on Earth [12].

Another discovery of the 1980s was the role of methylation of DNA bases, cytosine and adenine, and its deregulation in a certain number of pathologies: fragile X syndrome, scapulohumeral dystrophy, and certain forms of cancer [13].

In the past decade or so, basic proteins known as histones, which are associated with the nuclear DNA of eukaryotes in the form of a complex called chromatin and which had previously been assigned a structural role, have now acquired the status of functional partners. Thanks to specific modifications of certain amino acids (acetylation, methylation, phosphorylation), histones control the state of condensation of the chromatin and the efficacy of transcription of the DNA contained in the chromatin, to such an extent that we now speak of the “histone code”. The development of our understanding of histones is a good illustration of the complexification of a concept, the DNA code, into an entity that comes closer to living reality, the DNA code in partnership with the histone code. There has also been the discovery of interfering microRNAs, small polymers made up of around twenty nucleotide units, the role of which is to control protein synthesis (Chapter IV‑1.2.2). Methylation of DNA, structural modifications of the chromatin histones, and blocking of transcriptional activity by interfering microRNAs are a few of the major areas of research in a scientific domain that is in full expansion, epigenetics, which could be said to have “pipped the science of genetics at the post,” and which explains the plasticity of the functions of living beings.

1.1.2 DNA becomes a molecular tool

With the arrival of restriction enzymes and reverse transcription, the foundation stones of genetic engineering had been laid and were ready to be used, all the more easily in that synthetic chemistry was now able to manufacture DNA chains several hundreds of nucleotides long, and progress in robotics and computing techniques made it possible for chemists to avoid tedious routine tasks by using completely programmable machines. The hope that it would be possible to experiment on living beings by means of the manipulation of DNA became a reality when the American researchers Paul Berg (b. 1926) [14], Stanley Norman Cohen (b. 1937) [15] and Herbert Boyer (b. 1936) [15] succeeded in incorporating foreign DNA into a bacterial cell and making it express itself as a specific protein. To make a foreign DNA penetrate into a bacterium, it is often inserted into a bacterial plasmid, i.e., a circular molecule of extrachromosomal DNA that acts as a vector (Figure IV.3). The plasmid‑foreign DNA chimera is replicated inside the bacterium at the same time as the bacterium divides. Viruses are also used as cloning vectors. Other techniques for making DNA penetrate into cells are now available: bombarding cells with tungsten microbeads covered with DNA, electroporation of cells submitted to rapid, high-voltage electrical pulses in the presence of DNA, and direct injection of DNA into mammalian cells by micromanipulation (Figure IV.4).

Figure IV.3
figure 3

Genetic recombination technique

Genetic recombination is used in order to cause bacteria to manufacture a foreign protein of animal or plant origin. This involves the insertion of the fragment of animal or plant DNA that codes for this foreign protein into a plasmid. The plasmid, a small ring of bacterial DNA, acts as a vector for the foreign DNA. In order to carry out the insertion, the plasmid DNA is cleaved by an appropriate restriction enzyme. The foreign DNA is obtained by reverse transcription from a useful messenger RNA. Its duplication is catalyzed by a DNA polymerase. The S1 nuclease makes it possible to open the hairpin loop that covalently links the two strands of the newly synthesized DNA. In the following step, a terminal transferase is used to add four nucleotides for which the base is a cytosine (C) to each of the two DNA strands. The same lengthening operation is carried out on the bacterial plasmid, but, in this case, the addition involves a sequence of four nucleotides for which the base is a guanine (G) (complementary to the cytosine C). The bacterial plasmid is hybridized in vitro with the foreign animal or plant DNA and then introduced into the bacterium which, using its own machinery, perfects the junction between the integrated DNA and the plasmid DNA.

Figure IV.4
figure 4

Injection of DNA into individual cells by micromanipulation under the microscope

(reproduced from J.E. Darnell, N. Lodish and D. Baltimore - Molecular Cell Biology, Scientific American Books, W.H. Freeman and Company, New York, USA, 1986, p. 207, with permission of A. Gräßmann, Ph.D. thesis, 1968, Freie University, Berlin)

A very small volume of DNA (2 × 10⁻⁹ ml) is injected under the microscope into eukaryotic cells (HeLa cells in inset) using a micropipette with a very fine end that pierces the cell membrane. The swelling of the cells at the moment of injection can be seen (inset).

In the first work carried out on the expression of foreign genes, the use of plasmids as vectors was preferred, particularly that of plasmid pBR322, because of its considerable replication capacity. In 1978, a first success was obtained by Herbert Boyer and his co-workers, with the expression in the bacterium E. coli of the gene for somatostatin, a peptide hormone of fourteen amino acids that negatively regulates the secretion of growth hormone. Because of its small size, the somatostatin gene was synthesized by chemical means. The expression of somatostatin in E. coli was verified using immunological and physiological criteria, thus demonstrating the validity of the procedure that was used.

The following year, human insulin was produced in E. coli. Fairly soon, yeast was substituted for this bacterium because, as a eukaryotic organism, it has enzyme systems that are able to carry out chemical finishing operations on neosynthesized proteins that bacteria are unable to perform, for example the formation of disulfide bridges in insulin.

Supported by these successes, genetic engineering started to come to the fore as an application-oriented discipline. Levels of performance that would never have been imagined half a century before were achieved, such as the production of growth hormone, interferons, blood coagulation factors and vaccines. In the final decades of the 20th century, phenotype transformations using genetic modifications that had previously been carried out in bacteria and yeasts were successfully attempted in animals and plants. It was observed that a mutated DNA integrated into a plasmid and introduced into a fertilized mouse egg (by micromanipulation) modifies the mouse’s genetic inheritance, which affects first the embryo and then the adult mouse with phenotype modifications. Such mice, which are said to be transgenic because of the stable integration of a foreign DNA into their genome, are now widely used as animal models in studies that aim to understand the mechanisms involved in high-incidence human pathologies such as cancer, diabetes, and rheumatoid conditions. In 1982, two American researchers [16], Ralph Brinster (b. 1932) and Richard Palmiter (b. 1942), carried out a spectacular transgenesis experiment in mice. Using microinjection, they introduced the growth hormone gene (obtained from the rat) into oocytes of mice from a “little” germ line. Once the transgenic mice had reached adulthood, they were giants. At present, the transgenesis technique is being applied both to the animal kingdom and to the plant kingdom. In the 1980s, Kary Mullis (b. 1944) [17] perfected an ingenious technique, the Polymerase Chain Reaction (PCR), which makes it possible to produce several tens of thousands of copies of a fragment of DNA. Using this technique, it is possible to detect traces of a fragment of DNA of a given sequence down to an attomolar concentration, i.e., one billion billion (10⁻¹⁸) times smaller than a molar concentration.
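The arithmetic behind PCR is simple to sketch: each cycle can at best double the number of copies of the target, so n cycles multiply the starting material by up to 2^n. The snippet below is only an illustration; the efficiency parameter (real reactions fall short of perfect doubling) and the numbers fed to it are invented.

```python
# Illustrative sketch of PCR amplification arithmetic: with perfect doubling,
# n cycles turn one molecule into 2**n copies; an efficiency below 1.0 models
# the fact that real cycles amplify by somewhat less than a factor of two.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Expected number of copies after a given number of cycles."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 15))        # 32768.0: tens of thousands of copies after 15 cycles
print(pcr_copies(1, 30, 0.9))   # ~2.3e8: hundreds of millions of copies at 90% efficiency
```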

By the end of the 20th century, genetic engineering had become well-established and widespread, thanks to a mastery of techniques for the manipulation of DNA: the accurate cleavage of a gene into fragments using commercially available restriction enzymes, the covalent assembly of two fragments of DNA by ligases, the automated chemical synthesis of fragments of DNA of more than one hundred nucleotides, the possibility of manufacturing a complementary DNA (cDNA) from a messenger RNA by using a reverse transcriptase, and automated DNA sequencing. Given this particularly well-equipped toolbox, the molecular biologist is now able to manipulate DNA, that is to say, the chemical material that contains the information that is central to the functioning of living structures (microorganisms, plant and animal organisms), and thus to modify, at will, the genotype of these structures, which the selective pressure of evolution had previously favored.

Genomics has produced enormous quantities of data that are stored in databanks. Automated procedures have been invented to make the information contained in these data intelligible, and these procedures form the basis of a new discipline, biocomputing or bioinformatics, which develops programs, or algorithm-based strategies, able to solve specific problems, one of which is the annotation of genomes, i.e., the identification of coding and non-coding sequences. While the annotation of prokaryotic genomes is relatively easy, because of the absence of introns, that of eukaryotic genomes is considerably more difficult because of the alternating exons and introns and the small proportion of coding exons (fewer than 2% in the case of the human genome). This explains why, at the time of writing, several hundred genomes of prokaryotes (around 300 at the beginning of 2006) have been sequenced, as opposed to only a few dozen genomes of eukaryotes. Annotation was carried out manually at first, but it has become automated and it is now possible to analyze thousands of items of genomic data.
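One elementary step of annotation can be sketched as follows for a prokaryotic sequence, where the absence of introns means a gene can appear as an uninterrupted open reading frame. The code below is a deliberately naive illustration: it scans a single reading frame of a single strand for stretches running from a start codon (ATG) to a stop codon, whereas real annotation pipelines examine all six reading frames and combine statistical gene models with homology evidence.

```python
# Naive sketch of one annotation step for a prokaryotic (intron-free) sequence:
# finding open reading frames (ORFs) that run from an ATG start codon to a stop
# codon in a single reading frame of one strand.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def orfs_in_frame(dna, min_codons=3):
    """Return (start, end) positions of ATG...stop stretches in reading frame 0."""
    found, start = [], None
    for i in range(0, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        if start is None and codon == "ATG":
            start = i
        elif start is not None and codon in STOP_CODONS:
            if (i - start) // 3 >= min_codons:   # keep only ORFs above a minimum length
                found.append((start, i + 3))
            start = None
    return found

print(orfs_in_frame("ATGAAACCCGGGTAAATGTTTTAA"))  # [(0, 15)]: one ORF long enough to keep
```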

The comparison of nucleotide base sequences in DNAs, and of amino acid sequences in proteins of different origins, involves biocomputing. The identification of similar or identical regions, which provide information about functional similarities and phylogenetic proximity, involves the use of alignment methods. One of these methods, which is in current use, is called BLAST (Basic Local Alignment Search Tool). Comparison of protein sequences has been particularly instructive in the science of evolution. It has highlighted evolutionary processes in the phylogenesis of proteins and linked these processes to precise functions. It has been possible to deduce that, over time, different families of proteins with similar functions appeared independently and evolved along different routes. This is the case for membrane proteins whose polypeptide chain crosses the thickness of the membrane six or twelve times; thus, the mitochondrial membrane proteins that transport metabolites are formed by triplication of an element with two transmembrane segments, while proteins located in other membranes of the cell are derived from duplication of an element with three transmembrane segments. From this academic context arose paleogenetics, a new discipline that compares DNA sequences extracted from fossils and amplified by PCR with the DNA sequences of current species. In addition to being of immense interest to fundamental biology, genetic bioengineering has led to innumerable industrial applications making use of genetically modified microorganisms that are able to synthesize molecules with a high added value and that can also be used in xenobiotic depollution operations.
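At its simplest, the question asked by such comparisons can be illustrated with a few lines of code. The sketch below computes only the percentage of identical positions between two already-aligned protein sequences (the two ten-residue peptides are invented); tools such as BLAST go much further, handling insertions, deletions and substitution scores and searching entire databanks, but the underlying position-by-position comparison is the same.

```python
# Minimal sketch of sequence comparison: the percentage of identical residues
# between two protein sequences that have already been aligned to the same length.
# Alignment tools such as BLAST add gapped alignments, substitution matrices and
# fast database searching on top of this elementary idea.

def percent_identity(seq_a, seq_b):
    """Percentage of identical positions between two pre-aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two short, invented peptides in one-letter amino acid code
print(percent_identity("MHYGFKLLQE", "MHYAFKLMQE"))  # 80.0
```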

So much data has already been deposited, equivalent to the sequencing of more than one hundred billion nucleotides, that it is inevitable that errors have crept in, some of which might prove prejudicial for future use (comparison of sequences, screening of drugs…). Nevertheless, the ever-increasing number of genome sequencing projects for animal, plant and microbial species reflects the interest in understanding the genetic information present in different types of cells, in order to be able to exploit their potential.

1.1.3 DNA chips and protein chips - From genomics to proteomics

DNA chips appeared in the last decade of the 20th century, and came to the fore as part of a new technical revolution, the “high throughput” revolution (Figure IV.5). An article written by the group headed by Ronald Davis and Patrick Brown (b. 1954) at Stanford University [18] gives a precise description of the hybridization technique used in DNA chips. Around a hundred short DNA strands, corresponding to portions of genes of the plant Arabidopsis thaliana, commonly known as mouse-ear cress, a small plant of the Brassicaceae (formerly Cruciferae) family, are synthesized. A robot is used to deposit microquantities of these DNAs in solution, in a dot pattern, on a small glass slide coated with poly-L‑lysine, thus constituting a “chip” on which the covalently fixed DNAs act as probes for specific molecules. A later step involves both the use of reverse transcription to produce complementary DNAs (cDNAs) from messenger RNAs arising from the expression of genes in the same plant, and the labeling of these cDNAs with fluorescent ligands for use in screening. Once these fluorescent cDNAs have been denatured, i.e., separated into single strands, they are brought into contact with the DNA chip. The unhybridized molecules are removed by washing.

Figure IV.5
figure 5

Technology of DNA chips

A - The term DNA chip corresponds to a small, chemically-treated glass (or sometimes silicon) plate on which a robot has deposited DNA strands of known sequence in a pre-determined order.

B - The DNA chip may be used for different types of diagnostic procedures. In the differential diagnosis experiment represented here, messenger RNAs are prepared from two cell samples that have been treated in parallel, the control sample (normal cell) and the experimental sample (pathological cell). These messenger RNAs are reverse transcribed into complementary DNAs (cDNAs) by means of a reverse transcriptase. Each of the two types of cDNA, corresponding to the two types of messenger RNA, is labeled by a chemical reaction with a specific fluorescent ligand (Cy3, which emits at 568 nm, and Cy5, which emits at 667 nm). They are then hybridized with the DNA strands of the chip. After hybridization and washing, the fluorescence emitted at 568 nm and 667 nm under laser irradiation is analyzed using an appropriate detection system. The differential expression of the genes in the control cell sample and the experimental sample (shown by a color difference) can thus be analyzed.

Hybridization between the fluorescent DNAs, called targets, and the complementary nucleotide probes fixed to the DNA chip is detected by means of an automated fluorescence detection system. At the beginning of the 1990s, Stephen Fodor [19] and his group, who were working at the Affymax Research Institute (Palo Alto, California), developed an ingenious microphotolithography procedure that led to the synthesis of a network of a thousand peptides on chemically pre-treated glass microscope slides. The resolution of the network was shown by epifluorescence microscopy after fixation of specific antibodies labeled with fluorescent probes. Soon after this, microphotolithography was used for the manufacture of DNA molecule networks on solid supports. From then on, two competing techniques for the preparation of DNA chips became well-established: either the depositing on a solid support of cDNA obtained by gene amplification (the technique used by Davis and Brown), or the synthesis in situ of oligonucleotides carried out directly on a solid support (the technique used by the American company Affymetrix, which arose from Affymax).

One considerable advantage of DNA chip technology is that it provides information on the level of transcription of thousands of genes into messenger RNAs (mRNAs), in a simultaneous manner and in a relatively short lapse of time. Experiments that previously required weeks, months or even years to be completed can now be carried out in a matter of hours. We therefore have a sort of instantaneous, precise, freeze-frame picture of the state of a cell at a given moment, with a great number of parameters explored in a semi-quantitative manner. DNA chips are a typical example of the application of high throughput technology to the study of living beings.

The panoply of mRNAs produced by the transcription of DNA is called the transcriptome. The method by which transcriptomes are obtained is called transcriptomics. It should be added here that there is not necessarily a correlation between the abundance of an mRNA, evaluated on a DNA chip, and the functionality of the corresponding protein, which depends on multiple factors that particularly involve post-translational modifications: phosphorylation, glycosylation, hydroxylation, and so on.
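The way the two fluorescence channels of such an experiment are usually summarized can be sketched in a few lines. In the hypothetical example below, each spot (gene) carries a Cy3 intensity for the control sample and a Cy5 intensity for the experimental sample, as in Figure IV.5, and the comparison is expressed as a log2 ratio per gene; the gene names and intensity values are invented.

```python
# Illustrative sketch of the first analysis step for a two-color DNA chip: for each
# spot (gene), the fluorescence of the experimental sample (Cy5) is compared with that
# of the control sample (Cy3) as a log2 ratio. Gene names and intensities are invented.

import math

spots = {
    "geneA": {"cy3": 500.0, "cy5": 2000.0},   # more transcribed in the experimental sample
    "geneB": {"cy3": 1200.0, "cy5": 1150.0},  # essentially unchanged
    "geneC": {"cy3": 3000.0, "cy5": 400.0},   # less transcribed in the experimental sample
}

for gene, channels in spots.items():
    log_ratio = math.log2(channels["cy5"] / channels["cy3"])
    print(f"{gene}: log2(Cy5/Cy3) = {log_ratio:+.2f}")
# geneA: +2.00 (four-fold induction), geneB: -0.06, geneC: -2.91
```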

At the end of the 1990s, DNA chips were being used extensively in the research programs of many biology laboratories. They are used in a variety of domains: human pathology, to differentiate between forms of cancer linked to multiform mutations; microbiology, to identify pathogenic germs; comparative genomics, to look at model eukaryotic organisms that have a certain number of genes in common; or even population genetics, to detect polymorphisms linked to a change in a single base in a DNA sequence.

As a complement to the DNA chip technique, the FISH (Fluorescent In Situ Hybridization) method holds a key position in the study of cytogenetics. This method is based on hybridization between fluorescent nucleic acid probes of known sequence and complementary motifs located in the DNA of the chromosomes. It allows the detection of chromosomal modifications involving a gain or loss of genetic material, such as those that are found in certain tumors. It is used in prenatal diagnostics to diagnose such modifications.

Protein chips, which are used to characterize the reactivity of proteins with respect to specific ligands, are another example of the application of high throughput technology. Dozens of proteins of different types (antibodies, for example), as well as derivatives of nucleic acids or even molecules capable of acting as ligands of proteins that might arise from combinatorial chemistry (Chapter IV‑3.3), are arranged in a network on small glass plates that are chemically treated to act as hooks to entrap specific proteins present in a tissue extract or serum (Figure IV.6A). This procedure, which is essentially analytical in nature, is complemented by a functional study in which proteins that have been isolated in their native form, i.e., that are capable of expressing the same functions that they possess in vivo, are deposited on a glass microplate. This type of biochip makes it possible to analyze the reactivity of the proteins that are fixed to it with respect to a multitude of targets (proteins, nucleic acids or pharmaceutical substances) (Figure IV.6B).

Figure IV.6
figure 6

Use of biochips in proteomics

A - Biochip for the identification of proteins. Different types of ligands, antibodies, antigens, DNA or RNA, small molecules with a high affinity and specificity, are deposited on a reactive surface. These biochips can be used to determine the level of expression of proteins and the type of proteins expressed in cell extracts. They can be used for clinical diagnostics.

B - Biochip for the functional study of proteins. Native proteins or peptides are arranged in micronetworks on a reactive medium. Biochips produced in this way are used to analyze the activities of proteins and their affinities as a function of post-translational modifications. They are useful for identifying drug targets.

Analytical proteomics, which was still in its infancy in the last decades of the 20th century (Chapter III‑6.2.3) has become a vigorous discipline. The association of liquid nanochromatography and of mass spectrometry allows the identification of peptides obtained by the trypsin hydrolysis of samples of proteins of around one picomole in size. Applied to fundamental and pathological cell biology, the aim of analytical proteomics is not only to decipher the list of proteins on the scale of the cell, but also to highlight variations in the abundance of synthesized proteins as a function of environmental conditions. It also aims to determine the post-translational modifications that are undergone by the proteins inside the cells, which, for a large part, control the specificity of their operation. In parallel with the study of proteomics, the study of peptidomics, or the study of all of the peptides (peptidome) present in animal and plant cells and in the fluids that bathe these cells, has developed. For example, several hundred different peptides have been found in the cerebrospinal fluid.

Structural proteomics, which deals with the three-dimensional structure of proteins, has also become a domain in which activity is increasingly dominated by high-throughput techniques. This is partly due to the fact that the pharmaceutical industry has given considerable, sustained attention to the understanding of the structure of proteins that could play the role of therapeutic targets. This is the case for the protein kinases that catalyze, via ATP, the phosphorylation of endocellular proteins, for proteases involved in hydrolysis reactions, or even for cell surface receptors that are able to combine with ligands such as hormones. The use of automated crystallization systems has become common in structural biology. From the classical technique of the hanging drop, in plates comprising 24 wells, we have moved on to plates with 96, 384 and, recently, 1536 wells. Obviously, this increase in dimensions requires the use of an automated system that includes a robot in charge of transferring microaliquots of the protein solution into the wells and adding media that differ in their pH, ionic strength and molecular composition. The crystallization process is followed by an automated microscopic examination coupled with video photography.

Making use of the recent development of genomics and of proteomics, and of a detailed inventory of the structures and functions of the different protein species of living beings, contemporary biology is now able to sketch out a scheme of molecular systematics, including a classification into phyla, families and classes that echoes the zoological and botanical systematics of the 17th and 18th centuries. However, modern systematics does not tell us how protein macromolecules interact within dynamic networks.

There is still an enormous amount of work to be done in order to achieve an understanding of the meaning of the dialogue between macromolecules in a normal or pathological cell context. This work will require a detailed analysis of metabolic pathways and of how they are controlled, and their evaluation in kinetic and thermodynamic terms. It will be accompanied by modeling (Chapter IV‑4). There is no doubt that it will be successful. Making use of subtle differences in the qualitative and quantitative expression of genes, it will become possible to understand the molecular principles that modulate differences in morphology and in function between neighboring animal, plant or microbial species. The science of evolution should benefit from this. In medicine, the forecasting of predispositions for certain diseases should be made easier (Chapter IV‑3.2), opening up the perspective of prevention strategies. Using recombinant DNA technology, metabolic engineering applied to microorganisms and plants should make it possible to improve the production of molecules that are of economic interest or can be used for drugs.

1.1.4 From genomics to metagenomics

The diversity of bacteria is amazing, much greater than might be supposed from the number of bacterial species identified by culturing on appropriate media. In fact, the bacterial species that can be cultivated represent only 1% of the total number of bacterial species existing on the surface of the Earth. There are two major reasons for this:

  • We do not know the appropriate conditions for culturing these bacteria;

  • A certain number of environmental bacteria live in symbiosis, acting as commensal organisms that benefit from the products secreted by other organisms.

Nevertheless, the study of bacterial genomes without any clonal culture has been carried out, and constitutes a branch of genomics known as metagenomics. Instead of looking at an isolated, well-identified bacterial species in order to analyze the sequence of its DNA, as has been done traditionally, researchers look at a heterogeneous bacterial sample from which the DNA is extracted, amplified and then sequenced by high throughput methods. Computer processing of the data provides information about individual germs. Craig Venter, who had already gained renown with the sequencing of the human genome, recently applied “metagenomic” procedures to the study of the sequence of the “metagenome” of the bacterial species of the Sargasso Sea [20]. He came up with nucleotide sequences corresponding to approximately 1 million kilobases of non-redundant nucleotides, attributable to more than two thousand different genomes.
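The computer-processing step alluded to above can be caricatured in a few lines of code. The sketch below assigns each sequencing read from a mixed environmental sample to the reference genome with which it shares the most short subsequences (k-mers); the two "genomes" and three reads are toy data, and real metagenomic pipelines work with millions of reads and far more sophisticated assembly and binning methods.

```python
# Toy sketch of one metagenomic processing step: assigning reads from a mixed
# environmental DNA sample to the reference genome they resemble most, judged here
# by the number of shared 4-mers. Reference sequences and reads are invented.

def kmers(seq, k=4):
    """Set of all subsequences of length k found in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

references = {
    "genome_1": "ATGCGTACGTTAGCATGCGT",
    "genome_2": "TTTACCGGAACCTTTACCGG",
}
reads = ["CGTACGTTAG", "ACCGGAACCT", "GCATGCGTAC"]

reference_kmers = {name: kmers(seq) for name, seq in references.items()}
for read in reads:
    scores = {name: len(kmers(read) & kset) for name, kset in reference_kmers.items()}
    best = max(scores, key=scores.get)
    print(f"{read} -> {best} ({scores[best]} shared 4-mers)")
```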

The challenge to be met by metagenomics is to connect a function to its phylogenetic source and to extend this information to specific species within a bacterial community. The functional analysis of metagenomic libraries has already led to the identification of new antibiotics and of new proteins equipped with various enzymatic activities. For example, analysis of the metagenome of symbiotic bacteria hosted by a marine sponge, Theonella swinhoei, has shown that the genes it contains are at the origin of the synthesis of antitumoral substances of the polyketide group [21]. In human biology, a metagenomic approach has been applied to the study of the population of bacteriophages present in the intestinal flora. Approximately 1,200 genotypes have been identified, a number that greatly exceeds the 400 bacterial species of this flora [22]. This result leads us to think that the luxuriant community of bacteriophages which cohabits with that of the intestinal bacteria may influence the diversity of the latter by selective bacterial lysis and also by promoting the exchange of genes between bacteria.

A rapid overview of the history of the exploration of genomic DNA over the last fifty years shows the rapidity with which a traditional experimental paradigm can shift thanks to modern computing and robotics procedures. In less than twenty years, we have moved from the manual sequencing of DNA that was developed at the end of the 1970s to automated high throughput sequencing. At the turn of the 21st century, the sequencing of communities of genomes (metagenomics) has been substituted for the sequencing of individual genomes. DNA and protein chips have become objects of everyday use in fundamental and applied biology. Transgenesis is widely practiced. DNA, a molecule that remained mysterious for a long time after it was discovered, delivered some of its secrets during the second half of the 20th century.

1.2 The Manipulated Genome

The purpose of the first experiments on DNA was to understand how DNA, the holder of the genetic code, transmitted its message. After having questioned DNA, researchers moved on to manipulating it. The current aim is to use oligonucleotides to build nanoscale constructions with original and, if possible, useful properties. In addition, the possibility that has recently become available of interfering with the expression of the genome in living cells, through the intervention of small RNA molecules, allows the programmed manipulation of the genome. Another challenge, the extension of the coding power of the genetic code, now appears to be achievable.

1.2.1 DNA used as a construction material

The production of nanomachines made of DNA is no longer just a dream. Such constructions have recently been built by the American Nadrian Seeman (b. 1945) [23] using fragments of DNA with cohesive or “sticky” ends, which result from the fact that each of the strands of the double helix overhangs, in one direction or the other, the strand with which it is paired, thus leaving a few bases free (Figure IV.7). If two strands of DNA with sticky ends are brought into contact, and the bases of these ends are complementary, a branched structure will appear spontaneously. Using this principle as a basis, cube-shaped nanometric constructions that make it possible to encage molecules of interest have been built. The opening of the cage by appropriate devices liberates the encaged molecules, which can then act as substrates in specific reactions.

Figure IV.7
figure 7

Building a cubic construction from DNA double helices with sticky ends

(reprinted from N.C. Seeman © (2003) “DNA in a material world”, Nature, vol. 421, pp. 427‑431, by permission from N.C. Seeman and Macmillan Publishers Ltd)

The cutting of a double strand of DNA using restriction enzymes able to create fragments with cohesive ends (A) has been used to “build” an artificial construction (B), which, in this case, is a cube (C), but which could be an object of a different geometrical type.

A DNA nanomachine that is capable of movement is becoming a reality. One DNA nanomachine, which is admittedly still rudimentary, has been put together based on the structural difference that exists between B‑DNA, the classical double helix that twists to the right, and Z‑DNA, a double helix that twists to the left. A propensity to adopt the Z‑form is triggered when there is an alternating sequence of cytosine (C) and guanine (G) (CG sequence) in the DNA. The experiment illustrated in Figure IV.8 makes use of a duplex formed of B‑form double helices.

Figure IV.8
figure 8

Construction of a DNA nanomachine

(reprinted from N.C. Seeman (2003) “DNA in a material world”, Nature, vol. 421, pp. 427‑431; C. Mao, W. Sun, Z. Shen and N.C. Seeman (1999) “A nanomechanical device based on the B-Z transition of DNA”, Nature, vol. 397, pp. 144‑146, by permission from N.C. Seeman and Macmillan Publishers Ltd)

The DNA nanomachine constructed by N.C. Seeman comprised a duplex of double strands of DNA. One of the double strands, of the classical B‑form of DNA (right-hand twist), has been cleaved in such a way as to fix fluorescent molecular probes onto the cleavage zones. Facing this cleavage zone, a short nucleotide sequence, in which the cytosine (C) guanine (G) motif is repeated, can be found in the other DNA double strand, which is also of B‑form. The addition of cobaltihexammine induces the transition of the CG segment from a right-hand twist (B‑DNA) to a left-hand twist (Z‑DNA), which leads to a rotation of this segment and to a rotation of the assembly, which can be detected using FRET (Fluorescence Resonance Energy Transfer) spectroscopy.

One of the double helices has a short CG segment. Facing the CG segment, the other double helix is interrupted, and its ends, where the interruption is, carry fluorescent molecular probes. The simple fact of adding a cationic substance such as cobaltihexammine, which neutralizes the negative charges of the phosphate groups, triggers a conformational transition, with the CG segment taking the Z‑form, causing a rotational movement of the assembly that is detected by the movement of the probes.

There is no doubt that the use of DNA strands in order to build nanomolecular constructions that are capable of programmed movement marks the beginning of an adventure that we may imagine will be rich in outlets for domains such as computer technology, nanomechanics and even the life sciences. In addition, the discovery that DNA conducts electrical current gives rise to dreams of a revolutionary technology in which DNA may be used in the design of electrical circuits, in competition with classical electronics [24].

1.2.2 RNA interference: a new frontier in the manipulation of the expression of the genome

Interfering RNAs are non-coding RNAs of around twenty nucleotides that control gene expression at the post-transcriptional level. As with many discoveries, that of RNA interference was the result of serendipity. It began during the 1980s with observations made by two American research groups, that of Victor Ambros [25], now at Dartmouth Medical School, Hanover, and that of Gary Ruvkun (b. 1951) [26] at Boston’s Massachusetts General Hospital, showing that a gene named lin‑4, which is involved in the post-embryonic development of the nematode C. elegans, did not code for a protein but for a small RNA that played an antisense role.

This odd discovery was supported and made more explicit a few years later by the research groups of Andrew Fire (b. 1959) at the Carnegie Institution in Baltimore and Craig C. Mello (b. 1960) at the University of Massachusetts in Worcester [27]. In order to block the production of certain proteins in the nematode C. elegans, the researchers used synthetic antisense RNAs. The control involved the use of sense RNAs according to a classical protocol. Unexpectedly, protein synthesis was blocked in both cases, suggesting that a contaminant was present in the sense and antisense RNA preparations. This contaminant was identified as a double-stranded RNA (dsRNA), that is, an RNA that is folded back on itself in a “hairpin” loop because of the pairing of complementary bases (adenine with uracil and guanine with cytosine). In order to verify the mechanism by which the translation of messenger RNAs into proteins is silenced, the nematode C. elegans was injected with a synthetic dsRNA, part of the sequence of which was complementary to that of the gene unc‑22, known to code for a protein involved in muscular contraction. Within a few hours, the worm was making disordered movements, suggesting that the dsRNA interferes with the production of proteins involved in the process of muscular contraction.

The mechanism of action of dsRNA was quickly unraveled: the dsRNA gives rise to two single-stranded RNAs after cleavage by a specific enzymatic mechanism. One of the single-stranded RNAs (siRNA – small interfering RNA) pairs, thanks to base complementarity, with a short sequence of the messenger RNA transcribed from the gene unc‑22. The result is a blockage of the translation of the messenger RNA into a protein, followed by the destruction of the messenger RNA. This phenomenon was named RNA interference (Figure IV.9). These results shed light on the observations made ten years earlier at the University of Arizona by Richard A. Jorgensen (b. 1951) on purple petunia plants treated with an excess of copies of the genes involved in the synthesis of the purple pigment. Completely unexpectedly, instead of becoming brighter in color, the treated petunias lost their color and became white. The treatment triggered a phenomenon in which the gene responsible for the pigmentation was silenced, no doubt through the action of interfering RNAs.

Figure IV.9
figure 9

Silencing of messenger RNA (mRNA) by interfering RNAs from double-stranded RNA that is either synthetic (dsRNA) or natural (miRNA)

The DICER cleavage enzyme, which has a ribonuclease activity, cuts the double-stranded RNA into two strands. In the presence of the RISC (RNA-Induced Silencing Complex) protein complex, one of the RNA strands finds a complementary nucleotide sequence in a messenger RNA (mRNA) and associates itself with this RNA, making it unable to be translated into protein.

We now know that eukaryotic cells from animals and plants produce and host interfering RNAs that are said to be “natural” [28]. Natural interfering RNAs, of around twenty nucleotides, are called microRNAs (miRNAs). Although a few details differ between the modes of formation and action of natural interfering RNAs and those of synthetic interfering RNAs, in particular the fact that messenger RNAs are not destroyed by miRNAs but blocked in their translation, the effect of negative regulation on the production of specific proteins comes to the same thing (Figure IV.9). There is a far from negligible number of genes that code for miRNAs. Already, several hundred miRNAs have been identified in the genomes of plants and animals. The amount of interest that they arouse, and the feverishness of the research being carried out on them, are in keeping with the major mechanisms that they control: embryogenesis, hematopoiesis, neuronal differentiation, etc.
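The recognition step at the heart of this silencing, the pairing of a small guide RNA with a complementary stretch of a messenger RNA, can be sketched as follows. The sequences are invented and far shorter than real transcripts; the guide is taken to bind its target antiparallel, so the code looks for the reverse complement of the guide within the mRNA.

```python
# Minimal sketch of the recognition step of RNA interference: locating the region of a
# messenger RNA that is complementary (and antiparallel) to a small guide RNA
# (siRNA or miRNA), which the RISC complex would then silence. Sequences are invented.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def target_site(mrna, guide):
    """Index in the mRNA where the guide pairs perfectly, or -1 if there is no match."""
    target = "".join(PAIR[base] for base in reversed(guide))  # reverse complement of the guide
    return mrna.find(target)

mrna = "AUGGCUAACCGGUUAGCCAUGUAA"
guide = "AUGGCUAACCGGUUAGC"            # hypothetical 17-nucleotide guide strand
print(target_site(mrna, guide))        # 3: the guide pairs with mRNA positions 3-19
```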

Given an understanding of the genome sequence in Man, the rat and the mouse, trials have begun that aim to achieve an understanding of how the expression of mammalian genes of known sequence might be manipulated by the interplay of interfering RNAs (Chapter IV‑3.1). The treatment of viral infections such as AIDS or hepatitis B, which are worrying public health problems, could benefit from this new technology. It appears that interfering RNAs have much more to give us in the near future than they have taught us up until now.

1.2.3 The experimental transgression of the genetic code

The deciphering of the genetic code in the middle of the 1960s was the end of a first step in elucidating the mechanism by which a sequence of nucleotides in DNA is translated into a sequence of amino acids in a protein (Chapter IV‑1.1). During the years that followed, the subtleties of the transcription of DNA into messenger RNA and of the translation of messenger RNA into protein via transfer RNAs were explored in hundreds of laboratories around the world. Particular attention was paid to the understanding of how a given amino acid is activated and bound to a transfer RNA (tRNA) after being picked up by an aminoacyl-tRNA synthetase. Nevertheless, the idea remained of a code in which triplets of purine and pyrimidine bases of messenger RNAs are translated into natural amino acids.

Recently, methods have been developed that give more flexibility to the action of the aminoacyl-tRNA synthetases, or, in other terms, relax their specificity [29]. Synthetases that have been manipulated in this way are able to recognize non-natural amino acids and to incorporate them into proteins by working together with the ribosomal machinery. In this way, at the time of writing, around thirty non-natural amino acids, obtained by insertion of different types of chemical residue (photoactivable, fluorescent or radioactive residues capable of acting as probes for structural and functional analyses) (Figure IV.10), have been incorporated into protein structures. With such an innovation, an unexpected field of exploration has opened up to research in domains as far apart as pharmacology and the science of evolution, giving rise to burning questions: could such non-natural proteins have therapeutic properties? Could they give a selective advantage to the organisms that host them? With the addition of non-natural amino acids to the genetic code and the demonstration that proteins containing such amino acids can function in living cells, in sum, with the transgression of the potentialities of the natural genetic code, the experimental method appears to challenge the order of living beings.
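One common way of doing this, reassigning a little-used codon (frequently the UAG "amber" stop codon) to a non-natural amino acid recognized by an engineered synthetase/tRNA pair, can be sketched by extending the toy translation table used earlier. The assignment of UAG to p-azido-L-phenylalanine (abbreviated here as AzF) is given purely as an illustration.

```python
# Hedged sketch of genetic code expansion: an engineered aminoacyl-tRNA synthetase /
# tRNA pair reassigns a codon (commonly the UAG "amber" stop codon) to a non-natural
# amino acid, here abbreviated AzF for p-azido-L-phenylalanine. Toy-sized tables,
# purely for illustration.

NATURAL_CODE = {"AUG": "Met", "UUU": "Phe", "UAU": "Tyr", "UAA": "Stop", "UAG": "Stop"}
EXPANDED_CODE = dict(NATURAL_CODE, UAG="AzF")   # UAG now encodes the non-natural residue

def translate(mrna, code):
    """Translate an mRNA codon by codon, stopping at the first stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = code.get(mrna[i:i + 3], "???")
        if residue == "Stop":
            break
        peptide.append(residue)
    return peptide

mrna = "AUGUUUUAGUAUUAA"
print(translate(mrna, NATURAL_CODE))   # ['Met', 'Phe']: translation stops at UAG
print(translate(mrna, EXPANDED_CODE))  # ['Met', 'Phe', 'AzF', 'Tyr']: UAG is read through
```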

Figure IV.10

Expansion of the genetic code

(adapted from T. Ashton Cropp and Peter G. Schultz (2004) “An expanding genetic code”, Trends Genet., vol. 20, pp. 625‑630, with permission from Elsevier)

Examples of amino acids added to the genetic code: A: p-azido-L-phenylalanine; B: benzoyl-L-phenylalanine; C: p-iodophenylalanine; D: p-aminophenylalanine; E: homoglutamine.

The triumph of genetic engineering via the study of DNA is not unique to biology. Many other sectors are undergoing changes in their type of experimental approach, dictated by the technosciences and making use of computer sciences, robotics and high-throughput screening. However, given the many questions that its operation continues to raise, and its central position at the heart of scientific ethics, the study of DNA remains a typical example of the way in which the experimental life sciences and the techniques that underlie them are evolving nowadays.

2 Towards a Mastery of the Functions of Living Beings for Utilitarian Purposes

“I perfectly agree that when physiology is sufficiently advanced, the physiologist will be able to make new animals or plants, just as the chemist produces substances that have potential, but do not exist in the natural order of things.”

Claude Bernard

Principles of Experimental Medicine - 1877

More than a century after Claude Bernard predicted a genetically manipulated world, it has come to pass. The molecular biologist, having original, high-performance methods for “tinkering” with DNA, has moved on to the application and use of his technical expertise for utilitarian ends. During the 1970s, with transgenesis, research on bacteria (Chapter IV‑1.1.2) opened up a new biological domain, that of genetically modified organisms (GMOs). Results led to predictions that it would be possible to transfer a fragment of DNA corresponding to a gene of a certain species into the genome of another species and have this foreign gene express itself as a protein in the host cell. In 1983, the successful transgenesis of a gene for resistance to an antibiotic, kanamycin, into tobacco plants signaled the beginning of the technology of the first plant-type genetically modified organisms, also called GMPs (genetically modified plants). In 1996, the birth of Dolly the ewe opened the era of reproductive cloning in mammals, i.e., the identical reproduction of an already-existing organism. An additional step was taken with the first tests on the differentiation of embryonic stem cells towards different types of lines that are characteristic of well-defined tissues, such as nerve tissue, cardiac tissue or the hepatic parenchyma, thus opening up promising perspectives in regenerative medicine. The frontiers of the experimental method continue to be pushed back to the limit of what is feasible and sometimes into the realm of fiction, as in immunology, for example, with the idea of xenotransplantation, using “humanized” animal organs.

2.1 Manipulations of Plant DNA: The Challenge of Genetically Modified Plants

Given the universality of the genetic code, any gene that is introduced into the genome of a plant, whether that gene is of animal, plant or microbial origin, is able to replicate itself and be expressed as a specific protein. Thus plant GMOs, or genetically modified plants (GMPs), are able to express specific foreign proteins from another plant, a bacterial microorganism or an animal organism. In the 1990s, a short time after fundamental research had revealed the feasibility of plant transgenesis, the first transgenic plant, the Flavr Savr tomato, was marketed in the USA. Since then, numerous other plant GMOs have been cultivated on a large scale and become available on the world market, including corn, soya, rice, cotton and the poplar. One of the desired aims is to produce modified plants that are able to resist destruction by the herbicides that are commonly used to eliminate weeds, while another is to prevent predation by harmful insects. In the first case, transgenesis involves the insertion of a herbicide-resistance gene, and in the second case, the inserted gene codes for an insecticidal toxin. Recently, plant GMOs that produce proteins with a therapeutic effect have appeared, ranging from antibiotic peptides to antibodies or proteins as unexpected as human hemoglobin. Current projects aim to create plants that are resistant to adverse conditions such as the dryness of arid climate zones.

The preferred procedure for producing a plant GMO is to use a bacterium, Agrobacterium tumefaciens, a microorganism that is able to insert fragments of its own DNA into plant cells (Figure IV.11). The useful gene that we wish to transfer into the plant may be, for example, a gene for resistance to a herbicide such as glyphosate (marketed under the name Roundup), phosphinothricin (Basta) or glufosinate (Liberty).

Figure IV.11

Creation of a plant GMO by genetic engineering

A - Use of the Ti (Tumor-inducing) plasmid as a vector for genetic engineering in plants. The Ti plasmid resides naturally in the bacterium Agrobacterium tumefaciens, which infects certain plants. During infection, the part of the Ti plasmid called T‑DNA is integrated into the genome of the infected plant. The T‑DNA fragment can be modified by the insertion of a selective marker, such as the gene for resistance to kanamycin, which allows modified clones to be isolated selectively in the presence of kanamycin, and of a gene of interest (a gene for resistance to a herbicide or a gene expressing an insecticidal toxin). When A. tumefaciens, carrying the plasmid with the modified T‑DNA, infects a plant cell, it transfers the T‑DNA to it in linear form after its excision from the plasmid (step 4). After this, the T‑DNA is integrated into the chromosome of the plant cell.

B - Enlarged view of step 4: transfer of DNA from the Agrobacterium to the plant cell.

In the case of the fight against insect predators, the useful gene is carried by a fragment of DNA contained in the genome of the bacterium Bacillus thuringiensis. This gene, called Bt, expresses a toxin responsible for the insecticidal capability of B. thuringiensis. A current application is the protection of Bt corn against the corn borer, a devastating insect whose caterpillars are particularly destructive. Another, more direct, gene transfer method, known as biolistics, involves bombarding plant cells with tungsten microbeads coated with the modified DNA.

With the implementation of large-surface-area experimental fields and the first marketing of GM soya, in 1996, the question of whether or not the advantages achieved with respect to crop yields are counter-balanced by risks for the environment and for consumers came to the fore. Food risks could arise from the toxicity or allergenic power of artificially synthesized proteins. At the time of writing, this question remains unanswered, due to the lack of epidemiological studies carried out rationally over several years.

When the first GMOs were created, the transfer of the gene of interest was carried out by means of the co-transfer of an antibiotic resistance gene. The transformed cells were selected according to the criterion of their resistance to this antibiotic, which involved a risk of dissemination of the resistance gene. This selection technique has been abandoned. In practice, it is difficult to evaluate the theoretical ecological risk of wild plants being invaded by genes that have been inserted artificially into GMOs. As a precaution, zones used for the experimentation of plant GMOs are now surrounded by refuge zones, i.e., fields in which the same species of plants, in non-GMO form, are cultivated. There has been a much fiercer, and completely legitimate, debate concerning the presence of the Terminator gene in seed from the first GMOs marketed by the Monsanto company in the USA. The Terminator gene blocked germination of the seed from the cultivated plant, so it was necessary for the farmer to buy more seed from the company each season, thus creating a state of dependency. This technique is no longer in use, but the fact remains that most transgenic seed is patented, and farmers who use such seed are therefore dependent on the companies that possess this genetic know-how.

The cultivation of plant GMOs has spread around the world, covering more than a billion hectares of our planet, more than half of which are in the United States of America. This type of cultivation is used on a large scale for soya and in a less extensive way for corn, rape seed and cotton, but there are many other applications of plant transgenesis. Among the countries that are actively involved we may mention Argentina, Brazil, Canada and China, and more recently India, Paraguay and South Africa. While the policies of these countries are based on the fact that GMO products do not differ fundamentally from non-GMO products with respect to checks carried out a posteriori, and that there is thus no reason to prohibit them, European policy has taken refuge behind a precautionary principle, and it remains basically restrictive. Although the moratorium on the cultivation and marketing of plant GMOs that was put in place in 1999 was lifted in 2004, the mandatory labeling of any consumable product containing more than 0.9% GMO remains dissuasive. The United States of America has refused to use such labeling.

The worries that are aroused by plant transgenesis, which are often exacerbated by the diktats of ecology groups, must be analyzed in a reasoned manner. Common sense and lucid thought dictate that the debate should be situated within a scientific perspective in which the main role is played by the experimental method in long-term applications. Simple reflection leads us to think that, with time, parasites and self-propagating plants will develop a resistance to the most drastic treatments, as was the case for bacteria confronted with antibiotics. The prospect of uncontrolled resistances being acquired, which gives rise to so much passionate debate, concerns, in fact, only the first stage of a technology with promising applications. The mastery of plant transgenesis that was acquired through the first experiments should, in fact, allow the emergence of plant GMOs that are assigned to the production of molecules with therapeutic effects (drugs, vaccines, human proteins, vitamins…). In this domain, there have already been creations that include golden rice, which carries β‑carotene, the precursor of vitamin A, banana plants that express a vaccine against hepatitis B and tobacco that produces human lactotransferrin and hemoglobin. If we just look at the production of golden rice as a palliative for vitamin A deficiency, it should be remembered that, in certain countries of our planet, this deficiency affects people’s sight and is a frequent cause of blindness, that it generates problems with development and the immune response to infections, that it affects more than a hundred million children around the world and that it is responsible for the death of three million of them each year. If these plants are considered to be a material of choice for the production of proteins with a therapeutic effect, this is partly due to the yield of such crops over large surface areas, and also partly due to the low risk of transmission of viral pathogens to Man, because of the species barrier, a risk that is less negligible when production in animals is involved. Genetically modified plants are also potential factories for the manufacture of chemical products with an industrial impact, for example lubricants, perfumes and aromas. Given the unpredictable outlets that plant GMOs may have in human medicine and the different domains of the economy, plant GMO technology should be considered in a manner that is free of any pressure or passion, and, as far as the political authorities are concerned, it should be subject to appropriate measures to frame and protect certain strategic experiments.

When looking at the worries being expressed by European society, it should be remembered that the genetic inheritance of plants has never ceased changing, not only in the most natural of manners, over millions of years, particularly through the mobility of transposable elements located in the genome, but also artificially, at the hands of farmers from ancient times onwards, with their methods of hybridization and selection. The nervousness of the European authorities, which betrays an ignorance of basic scientific ideas and hides behind the pretext of a precautionary principle, and sometimes behind political compromises exemplified by fluctuating and contradictory positions, runs the risk, in the short term, of causing their countries to lag disadvantageously behind the United States of America, which holds the majority of plant biotechnology patents.

2.2 Manipulations of Human DNA and Hopes for Gene Therapy

The principle of gene therapy is simple: the introduction of an appropriate gene into the cells of a patient who carries a mutation can correct the phenotypical consequences of this mutation, or, in other terms, cure the disease affecting the patient, or at least slow down its evolution. The technical difficulty involved in gene therapy is that of finding an appropriate vehicle or vector for the transfer of the gene and of addressing it to an appropriate location in the genome of the host cell. The most commonly used vectors in human gene therapy are viral. A certain number of criteria are necessary for a transfer to be efficacious, including a high concentration of viral particles carrying the gene to be transferred (more than a billion viral particles per milliliter) and a good capability on the part of the foreign gene to be integrated into the host’s genome. The patient’s immune response remains a major worry in the use of viral vectors: at the cell level it often leads to a proliferation of cytotoxic lymphocytes and, especially at the humoral level, to the synthesis of antibodies directed against the viral proteins. In order to minimize this immune response, the genetic material of the viral vectors is modified.

For ethical reasons, gene therapy is currently only applied to somatic cells, germinal gene therapy being rejected. Somatic gene therapy has been tried experimentally in the treatment of hereditary illnesses linked to hematopoiesis. One of the technical reasons for this choice is the easy access to the progenitor cells of the bone marrow, with a view to their transfection. It was with this in mind that mouse gene therapy models were developed a few years ago. The sickle cell mouse is one of these models. Human drepanocytosis (sickle cell anemia) is a serious disease that is caused by a mutation in the β protein chain of normal human hemoglobin A. The molecules of sickle cell hemoglobin S tend to aggregate and form fibers that obstruct the blood capillaries of the microcirculation. Somatic gene therapy has been applied to these sickle cell mice. It involves an autograft of bone marrow hematopoietic cells transfected with a retrovirus hosting the gene coding for the β sub-unit of normal hemoglobin. Encouraging results have shown the validity of this approach.

In 2000, a gene therapy protocol that had been applied with success to Man was described by the group of Alain Fischer (b. 1949) and Marina Cavazzana-Calvo at the Necker hospital in Paris [30] (Science, vol. 288, pp. 669‑672). The purpose of this therapy was to bring about a long-term remission in the case of an immune disease known as SCID‑X1 (Severe Combined ImmunoDeficiency linked to a mutation on the X chromosome). Because of their susceptibility to microbial and viral infections, babies who are affected can only survive in sterile rooms; they are known as bubble babies. In this illness, the hematopoietic progenitor cells of the bone marrow are unable to differentiate into T and NK (Natural Killer) lymphocytes because of a mutation that affects a cytokine receptor. Previous experiments carried out on model mice had shown that SCID can be corrected by in vivo transfer of the cytokine receptor gene into hematopoietic progenitors. The transfer of the gene of interest, carried by a retroviral vector, was first carried out in March 1999, in two babies, one of them eleven months and the other two months old. Progenitor cells from their own bone marrow, cultured and modified genetically, were injected into them. These were therefore autografts, without any risk of immune rejection. A remission of symptoms over a period of nearly a year, shown by the almost normal behavior of the babies’ immune cells, encouraged the application of the same therapy to other babies. In total, ten babies were given this therapy. The enthusiasm that greeted the successes that were recorded was nevertheless tempered by the fact that in the spring of 2002, and again in the following year, a child who had undergone the gene therapy developed a leukemia characterized by an anarchical proliferation of lymphocytes, necessitating chemotherapy. These two occurrences were explained by the random character of the insertion of the gene of interest into the patients’ genomes: insertion into a site close to a proto-oncogene had led to the activation of this proto-oncogene and to the proliferation of the lymphocytes. While the trial carried out at the Necker hospital gave rise to great hopes, it nevertheless showed that there is still a long way to go before we achieve a targeted transfection of genes such that no undesirable consequences follow. Here we have a typical example of the limits of an experimental method that is based on in-depth technological know-how, but also on a still imperfect understanding of the complex arcana of the mechanisms that regulate the positioning and interaction of genes in the chromosomes of eukaryotic cells. This example highlights a harrowing ethical dilemma: should we refrain from treating a patient whose illness is likely to be fatal, or should we attempt a therapy that may save the patient, without having any formal assurance of its success?

An experimental medicine that has the power to modify the human organism via its genetic material is now able to take over from the experimental method that, up until now, operated on animals and plants. We can easily understand, given the progress that has already been accomplished and that which is to come in the domain of gene therapy, that the temptation will be great, in the future, to consider manipulations of the human germ cell genome as being licit, insofar as such manipulations make it possible to eradicate a handicapping defect in our descendents. At present, the idea of any attack on the germinal genetic inheritance has been rejected unconditionally on the basis of ethical considerations. Nevertheless, the history of science shows that prohibitions that were once considered to be untouchable end up being contravened. This was the case for abortion. In a text entitled Why genetic engineering should continue its battle [31], James Watson writes of his confusion when faced with a choice that is likely to become more and more insistent over the years: “Dare we be entrusted with improving on the results of several million years of Darwinian natural selection? Or do the human germ cells represent, on the contrary, Rubicons that geneticists will never dare to cross?”

2.3 Stem Cells and Cloning

The mastery of stem cell differentiation and the mastery of cloning are two essential weapons in the biotechnological arsenal, the use of which for utilitarian ends, particularly in human medicine, gives rise to hope and disquiet, agreement and disapproval.

2.3.1 The hope of stem cells

At the beginning of the 1960s, experiments carried out by the Canadian biologists Ernest McCulloch (b. 1926) and James Till (b. 1931) drew attention to the particular properties of certain cells in the bone marrow, the stem cells, which would subsequently be found in other tissues [32]. The experimental protocol is simple. Bone marrow cells from a mouse are injected into another mouse that has previously been irradiated in order to destroy its own stem cells. The injected cells go to the spleen, where they divide and form colonies that take the form of nodules of different sizes. The researchers realized that the cells of these nodules differ in their potential for renewal, which is more or less rapid. They reinjected the nodule cells into mice from a second batch. The reinjected cells showed themselves capable of multiplying and of generating several types of blood cell line.

These observations suggest the presence in the nodules of progenitor cells that have a strong potential for self-renewal and differentiation. In the following years, these observations were confirmed and explained by the two characteristic criteria of stem cells: self-renewal and differentiation into multiple cell lines with specific characteristics. From this point on, it was possible to understand the enigma of the amputated hydra in the experiments carried out by Trembley, two centuries beforehand (Chapter II‑3.4.2). We now understand why, like the hydra, organisms such as the flatworm, the salamander, the starfish and the zebrafish are able to recreate an amputated or damaged part of their bodies. The hydra mobilizes stem cells that it has preserved since its birth. In the case of the salamander, regeneration involves the reprogramming of cells that have already been differentiated.

Like all stem cells, embryonic stem cells (or ES cells) are able to self-renew and to differentiate into the various known adult cell lines, giving rise to cell types such as neurons, cardiac cells that are able to contract, or hepatocytes (Figure IV.12). This potential has led to the hope that ES cells could be used in regenerative medicine.

Figure IV.12

Diagram illustrating how to obtain differentiated cells from stem cells

In a fertilized egg that has developed to the blastocyst stage, it is possible to distinguish a cell mass (inner cell mass, ICM) which protrudes inside the blastocyst. The ICM cells are removed and placed on a mat of irradiated (and thus unable to divide) fibroblasts that provide them with a support and nutrients (steps 1 and 2) so that they can proliferate. The stem cells arising from the ICM cells, placed in a medium that has been specifically conditioned to provide cytokines and other biomolecules, are able to differentiate into various cell types (step 3).

At what stage of embryo development is it possible to remove ES cells for experimental purposes? After fertilization by a sperm cell, the ovum undergoes a series of divisions, giving rise to cells called blastomeres. Each isolated blastomere remains capable of producing an entire organism (fetus and placenta) by division and differentiation. At this stage, the blastomeres are totipotent. Five days after fertilization, the embryo, now called the blastocyst, has the form of a hollow sphere. An external layer of cells, the trophoectoderm, surrounds a cavity, the blastocele, inside which a small mass of cells, the inner cell mass, protrudes. From the beginning of the implantation of the blastocyst in the uterus, the trophoectoderm evolves to form the placenta. The cells of the inner cell mass take part in the process of differentiation that generates all of the tissues of the future adult organism. These are called embryonic stem cells (ES cells). ES cells are said to be pluripotent. Isolated, they have lost their ability to give rise to a complete individual, but they have kept the possibility of differentiating, according to their environment, into any of the two hundred cell types that make up animal tissues. During their divisions, ES cells evolve from a stage of pluripotency to a stage of unipotency, passing through a stage of multipotency beyond the hundred-cell stage. A state of multipotency characterizes cells that give rise to a restricted number of cell lines in the tissues in which they nest. This is the case for the hematopoietic stem cells of the bone marrow that form the red blood cells and the white blood cells. The term unipotent refers to the progenitors, which give rise to a single type of cell, for example the hepatocyte of the liver or the cardiomyocyte of the heart.

When ES cells are cultivated for 4 to 7 days in a conventional nutritive medium, they multiply and aggregate. If the culture medium is supplemented with certain biomolecules such as insulin, retinoic acid, transferrin or fibronectin, the differentiation of the ES cells is oriented towards cells of different types, such as neurons, glial cells or muscle cells. There are many publications about experiments concerning the grafting of differentiated stem cells in the mouse or the rat. For example, neuron precursors derived from the spinal cord or the brain are grafted into rats whose spinal cords have been injured. Five weeks after the grafts are carried out, the transplanted cells have filled the area of the injury and differentiated into oligodendrocytes, astrocytes and neurons. What is more, after about twelve weeks, locomotive function has been partially restored [33]. Other experiments involving the grafting of differentiated stem cells have been carried out on rats in which the dopaminergic neurons of the “substantia nigra” of the brain, which secrete the neurotransmitter dopamine, have been selectively destroyed by injection of 6‑hydroxydopamine. The problems found in the rat as a result of this neuronal degeneration mimic those found in Man in patients suffering from Parkinson’s disease. Dopaminergic neurons obtained by the differentiation of mouse ES cells are grafted into the striatum of each of these rats, a region of the brain whose neurons communicate with those of the substantia nigra and play a fundamental role in the control of movement. This results in a significant improvement in the motor deficit, coupled with the establishment of functional synapses between the injected neurons and those of the host [34, 35]. A recent publication bringing together the results of two French research teams, that of Michel Pucéat (b. 1961) in Montpellier and that of Philippe Menasché (b. 1950) in Paris, provides interesting information about how mouse embryonic stem cells, grafted into sheep cardiac tissue in which an infarct has been artificially induced, are able to colonize the infarct zone and regenerate cardiac contraction in a functional manner. Moving from the mouse to the sheep constitutes a considerable species leap, and the absence of any immune rejection leads us to say that embryonic stem cells have an “immune privilege” [36].

The use of ES cells in regenerative medicine necessarily requires that their differentiation be rigorously channeled into well-defined pathways, in order to produce homogeneous cell lines with a view to implanting them in damaged tissues. In fact, contamination with non-differentiated ES cells is likely to cause tumors (teratomas) over the long term. The mastery of ES cell culture and differentiation, as well as of cloning, in such a way as to overcome problems of histocompatibility, is still in its infancy.

For a long time, the mouse was the preferred animal model for experimental studies on the differentiation of ES cells. In 1981, the first ES cells from mouse blastocysts were isolated and successfully cultured by two groups of researchers in Great Britain and the USA. It was only in 1998 that human embryonic stem cells (hES cells) were isolated for the first time [37] and maintained in culture, on a nutritive layer of fibroblasts from irradiated mice. This delay with respect to the culture of animal ES cells can be explained by the fact that the molecular machinery that activates replication and cell differentiation programs is not completely identical in Man and the mouse [38]. For example, a cytokine called LIF (Leukemia Inhibitory Factor), which is indispensable for the renewal of ES cells in an undifferentiated state in the mouse, has no effect on human ES cells. There are several other differences concerning the control of proliferation and differentiation in human and murine ES cells by growth factors. Briefly, the conclusions obtained from experiments carried out on murine embryonic stem cells cannot be transposed automatically to human embryonic stem cells. This simple observation should encourage politicians to think constructively about the use of human embryonic stem cells for fundamental research studies and applications in therapeutics.

While there is a highly promising future for the use of ES cells, this future is littered with obstacles, and rigorous checks and balances need to be put in place. Nevertheless, research on such cells is indispensable if we wish to move on to a regenerative medicine that aims to be a new frontier in the art of healing. After specific differentiation, hES cells could provide unlimited quantities of the tissues needed to replace the damaged tissues responsible for handicapping illnesses (dopaminergic neurons in Parkinson’s disease, cardiomyocytes in myocardial infarction, pancreatic islet of Langerhans cells in diabetes, fibroblasts in skin grafts, chondrocytes in rheumatoid arthritis). In addition, metabolic analysis of hES cells carrying defective genes whose phenotypical expression is known in human pathology should improve our understanding of the perturbed mechanisms, and could lead to pharmacological advances. As well as the technical difficulties involved, which have not yet been adequately overcome, the handling of hES cells is subject to much ethical debate in many countries, with those who object to it holding to their prejudices, which are linked to religious or cultural traditions. This is the case in France, where, nevertheless, a few timid dispensations had begun to appear at the time of writing. In contrast, in Great Britain, the law authorizes the isolation of hES cells for therapeutic purposes, using embryos of less than one hundred cells, produced by in vitro fertilization and surplus to requirements. The British response to the burning question of whether an isolated hES cell may be considered as a potential human embryo is clearly “no”, for, in order to be able to develop in utero, such hES cells would need the placental progenitor cells.

An alternative to the use of ES cells is to make use of adult stem cells. However, the proliferation capacity of adult stem cells is considerably lower than that of their embryonic homologues. The hematopoietic stem cell is the paradigm of the adult stem cell: it can differentiate into all of the known types of blood cell. In the last decade of the 20th century, several publications concerning the plasticity of the adult stem cell awakened a hope that these cells could transform the treatment of degenerative illnesses. Certain of these publications stated that adult bone marrow stem cells, implanted into different types of tissues, differentiate into hepatocytes, cardiomyocytes or neurons, depending on the specific environment. Careful re-examination of the techniques used revealed that, in certain cases, interpretation of the results as showing cell transdifferentiation was erroneous, and that the fusion of the bone marrow stem cells with cells from other tissues was a more plausible explanation.

In any case, while not ignoring the use of adult stem cells, experimentation on hES cells remains a judicious choice, given our current state of understanding. In France, the decree issued on the 7th of February 2006, implementing the 2004 law that revised the restrictive bioethical standards of 1994, opens up the possibility of using human embryonic stem cells for scientific purposes, with certain ethical reserves being maintained.

2.3.2 The specter of cloning

One of the obstacles to the stabilization over time of a stem cell graft in a receiver involves the phenomenon of rejection for reasons of histocompatibility. Considered to be foreign by the receiver (host), grafted stem cells coming from a donor are rejected. This obstacle could be overcome by using the technique of cloning. Based on experiments on several animal species, it is now accepted that the transfer of the nucleus of an adult somatic cell from a host into an enucleated oocyte makes it possible to obtain, from this oocyte, which is once again nucleated and which is the equivalent of a zygote able to divide, ES cells whose genome is identical to that of the host. Because of this, the ES cells are immunologically compatible with the tissues of the host. In Man, such cells could be directed by differentiation towards stable cell lines creating well-defined tissues and organs (liver, muscle…) that could be used in regenerative medicine. This is the principle of therapeutic cloning. In March 2004, the Korean veterinary researcher Woo Suk Hwang (b. 1953) and his co-workers, who were recognized experts in animal cloning, announced in the American review Science [39] that they had succeeded for the first time in obtaining around thirty human blastocysts by cloning, i.e., by the transfer of nuclei of somatic cells into enucleated ova. This first experiment involved autologous cloning (enucleated ova and somatic cell nuclei taken from the same woman). Hwang and his team used 176 ova, and the yield from the experiment was close to that obtained at that time for the cloning of mammals. Using the inner cell mass of one of the blastocysts, they isolated a line of embryonic stem cells able to maintain a normal karyotype after several dozen divisions. This publication, which appeared in a highly prestigious scientific review, triggered an enthusiasm in the media that was in keeping with the spectacular nature of the team’s exploit, tempered here and there by a few comments that were mainly linked to questions of medical ethics. In 2005, there were numerous other articles by the same team on the same subject, reinforcing the first results with a heterologous cloning technique (enucleated ova and somatic cell nuclei taken from different people), thus giving rise to great hopes that the era of regenerative medicine was near. At the beginning of 2006, Professor Hwang’s retraction of all of this work, and his public confession of a spectacular fraud, were all the more dramatic, offering certain media an occasion for a disproportionate level of fury against therapeutic cloning. However, despite such rear-guard actions, it is obvious that one day these technical difficulties will be overcome. Human cloning in order to obtain stem cells for therapeutic purposes cannot be kept out of the future. Once this aim has been achieved, it will be spoken of as the outcome of a long story.

The adventure of animal reproductive cloning began in 1960. In Developmental Biology [40], two American researchers, Robert Briggs (1911 ‑ 1983) and Thomas King (1921 ‑ 2000), described experiments involving the transfer of cell nuclei of frog embryos (Rana pipiens), at the blastula and gastrula stages, into enucleated eggs of the same species. A high percentage of the clones obtained in this way were able to reach the tadpole stage when the transferred nuclei came from the early blastula stage, but only mediocre success was achieved when the nuclei came from the later gastrula stage. These experiments emphasized both the totipotency of the embryo somatic cells and the equivalence of the somatic cell nucleus and the nucleus of the fertilized egg in cell division and differentiation. Briggs and King’s publication did not arouse any particular interest. It is true that the 1960s were dominated by the saga of molecular biology, which would reach its culmination in the deciphering of the genetic code.

From the 1980s onward, the first attempts to clone mammals (rat, mouse, pig) began. Moving from the amphibian egg, which is a millimeter wide, to a mammalian egg one hundred times smaller presented a technical difficulty that would be overcome by a technique of cell-to-cell electrofusion. Cloned embryos were thus obtained by nuclear transfer and then implanted into the uterus of a surrogate female. However, in all cases, the nucleus came from embryo cells. In February 1997, the announcement made by Ian Wilmut (b. 1944), Keith H. Campbell (b. 1954) and their collaborators at Edinburgh’s Roslin Institute [41] of the birth of the cloned lamb Dolly had an immediate effect in the media. In fact, this was not only the cloning of a higher mammal, but, above all, cloning by the insertion of an adult somatic cell, in this case a mammary tissue cell, into an enucleated oocyte. This went far further than the experiments carried out by Briggs and King, which essentially involved the transfer of embryo cell nuclei into enucleated frog eggs. The trick that gave Wilmut and Campbell their success was to bring the cells providing the nuclei to a quiescent state corresponding to the interphase stage of the cell cycle, by impoverishing their culture medium, before electrofusion with enucleated oocytes. Although we should be aware that 434 attempts were made before a positive result was achieved, this does not make it any less astonishing that the nucleus of a cell in its adult state, i.e., completely differentiated, was able to behave as if it were totipotent. Despite being committed to a program of differentiation that is considered to be more-or-less irreversible, and which gives it a specific identity, the nucleus of an adult cell can be reprogrammed and become totipotent. Since Dolly, many other mammals have been cloned from the nuclei of adult cells: mice, cows, goats, pigs, rabbits, cats, dogs, rats and horses. As far as the ethical discussion about cloning is concerned (Chapter IV.5), it is essential to note that the demarcation line between reproductive cloning and therapeutic cloning is situated where decisions are made concerning the destiny of the cloned blastocyst (Figure IV.13).

Figure IV.13

Therapeutic cloning versus reproductive cloning

The transfer of the nucleus of a somatic cell (liver, epidermis, muscle) containing 2n chromosomes into an enucleated oocyte gives rise to an egg (2n chromosomes) that is able to divide and to produce a blastocyst. The cells of the blastocyst inner cell mass (ICM) can be used as stem cells that can differentiate into different types of cell line (therapeutic cloning). On the other hand, if the whole blastocyst is implanted into a uterus, it will produce an embryo which, after birth, will grow into an adult animal (reproductive cloning). Reproductive cloning and therapeutic cloning therefore differ in that the former uses the whole blastocyst, whereas the latter uses only certain cells, those of the inner cell mass (ICM) of the blastocyst.

Its implantation into a uterus determines reproductive cloning, while the use of the cells of the inner cell mass, with the aim of making them differentiate towards different cell lines, constitutes the basis of therapeutic cloning.

The structural and functional identity of the cells of a given tissue in an adult organism involves a basic mechanism: while each cell has the same set of genes, only some of the genes are expressed as proteins, and the genes that are expressed differ according to the tissue involved. The key to the mechanism responsible lies in epigenetic chemical modifications of the cell’s DNA, for example methylations, which repress the expression of certain genes without altering the expression of others. These modifications of the DNA, which control cellular specificity (muscle, liver, brain…), are not readily reversible, but, in certain circumstances, they can become so. This is what happens from time to time when the nucleus of an adult cell is inserted into an enucleated oocyte. We may thus assume that in the molecular arsenal of the oocyte cytoplasm there are substances that can cancel the epigenetic modifications of the DNA present in the nucleus of an adult somatic cell and recreate a state of pluripotency in this nucleus, or, in other words, provoke the reprogramming of the somatic cell nucleus. In the long term, it is to be hoped that biochemical technology will be able to find and purify the molecules responsible for the nuclear reprogramming of somatic cells.

The use of human oocytes for the purpose of therapeutic cloning is still subject to severe criticism. Certain groups wish it to be prohibited, because of a fear of a drift towards reproductive cloning. To obviate this risk, the idea has been to make use not of human oocytes but of those of animals, transferring the nuclei of human somatic cells into them. Even supposing that the technical difficulties involved could be overcome, the cells that would result, a sort of Man-animal chimera, would also be the subject of an ethical debate, even if the purpose of this type of cloning were to be solely therapeutic.

2.3.3 The expedient of parthenogenesis in cloning

Some Japanese researchers [42] have succeeded in creating mice by a parthenogenetic process that involves adding the nucleus of a haploid oocyte (1n chromosomes) to another haploid oocyte, the result being the equivalent of a fertilized egg (2n chromosomes). This exploit was achieved by the invalidation of one of the genes (H19) involved in the control of the parental imprint. It is known that sexual reproduction in mammals involves a phenomenon called the parental imprint, which, by means of the methylation of DNA, and perhaps also of histones, allows the expressing or silencing of certain genes in the male and female gametes. A single copy of a given gene, originating either from the oocyte or from the sperm cell, is therefore expressed, while the other is inactive. In the Japanese experiment, if the mouse H19 gene had not been invalidated, the result would have been an anarchical expression of the genes controlled by the parental imprint, with overexpression of some and an absence of expression of others. These disturbances would have been incompatible with the viability of the embryo. However limited its application might be, the manipulation of the germinal genome poses the problem of the mechanism by which the parental imprint intervenes in the viability of the egg, a parameter that, at the time of writing, is still not completely understood, but is being actively explored.

2.4 The “Humanization” of Animal Cells for Purposes of Xenotransplantation

In Boston, Massachusetts, in 1954, a kidney was transplanted from a healthy boy into his twin brother, who was suffering from a fatal renal anomaly. The success of this graft ushered in the era of transplantations of organs such as the heart, liver and kidney in Man. In order to try to prevent the rejection of grafts, caused by an immune incompatibility between the receiver and the graft from the donor, different immunosuppressive treatments were tried, one by one, involving corticosteroids or cytostatic agents such as 6‑mercaptopurine. In the 1980s, a decisive step forward was made with the fortuitous discovery of the powerful immunosuppressive effect of cyclosporin A, a cyclic polypeptide isolated from the mold Tolypocladium inflatum.

Each year, human organ transplants make it possible to save many lives. However, for some time now, organ transplantation has been suffering from a shortage of donors. One alternative to the homograft is the grafting of animal organs, or the xenograft, and, at the dawn of the 21st century, this type of graft has entered an active, promising phase, with the creation of pigs that have been partially “humanized” and are thus, as a consequence, immunocompatible. For reasons of genetic and physiological similarity, the first choice for such grafts was to use apes or monkeys. However, this idea was quickly abandoned, for several reasons: a non-negligible risk of viral infection due to the phylogenetic kinship of the human and simian species; a slow growth rate; a low reproduction rate; and, finally, laws that protect primates. These disadvantages are not found, or are at least minimized, in the pig: the risk of a viral infection passing from the pig to Man should be low because of the species barrier (although it nevertheless ought to be evaluated), the pig growth rate is relatively rapid, pig litters are large and pig organs are of a size close to those of Man.

Hyperacute rejection of grafts is the critical obstacle that must be overcome before it is possible even to envisage the feasibility of xenotransplantation. Hyperacute rejection is caused by the presence in Man of natural antibodies (xenoantibodies) that accumulate throughout a lifetime and are directed against antigenic motifs carried by the products of the digestion of food or dust that is breathed in.

Xenoantibodies are mobilized when a xenograft occurs, and when they combine with the xenoantigens brought in by the graft, this activates immune proteins such as the complement proteins. The catastrophic effect of this xenoantibody/xenoantigen combination is a vascular thrombosis followed by necrosis and rejection of the graft. The pig xenoantigen that is considered to be mainly responsible for the phenomenon of rejection in Man is a sugar molecule, galactose α‑1,3‑galactose, located on the plasma membrane of endothelial cells. Synthesis of this molecule requires the enzyme α‑1,3‑galactosyltransferase, which is present in most mammals, but absent in Man and the primates. This enzyme disappeared in Man around twenty million years ago, following a double mutation of its gene. In 2002, cloning by nuclear transfer, associated with the invalidation of the gene coding for galactosyltransferase, made it possible to create pigs without galactose α‑1,3‑galactose [43]. This achievement shows that the xenotransplantation objective, although it can only be envisaged over the long term, is not based on false hopes.

Plant GMOs, gene therapy, embryonic stem cells, therapeutic cloning and xenotransplantation are a few of the many examples that show how far experimentation on living beings has progressed in just a few decades, from inquiries into the operating mechanism of an organ or a cell, in the interests of pure understanding, to a programmed process, planned with an objective in mind, whose chances of success are analyzed and assessed in terms of impact and cost-effectiveness.

During the Renaissance, ecclesiastical authorities, worried by the libertarian forces that were assailing them, applied the brakes to audacious questioning of dogma, such as the circumterrestrial revolution of the Sun that had, since Ancient times, placed Man at the center of the Universe. Nowadays, civil authorities, conscious of the potential but also of the possible misuses of genetic manipulation, insist on having the right to oversee such procedures. In truth, since the 19th century, governments have taken an interest in research on living beings and encouraged it, as long as its applications have allowed improvements in human health. This has been the case for vaccination against infectious diseases and for the prevention of microbial infections by means of aseptic or antiseptic methods. With the breakthroughs made in genetic manipulation at the end of the 20th century, it was more than just the results of experiments on living beings that attracted the attention of the political authorities; it was, above all, the manner in which the experimental method, with all its hazards, made use of living material, sometimes of human origin, in order to unlock mysteries. Conscious of the social impact of emerging discoveries that are subject to considerable media coverage and are sensationalized in both the written and audiovisual media, the State, with the help of researchers and philosophers, has laid down a code of bioethics, applied through strict or even restrictive legislation. It remains to be seen whether the rules of this code will continue to be an inviolable absolute or will be modified according to the evolution of the moral codes and cultures of nations. University teaching and the education of society must now take into account not only the content of successive discoveries, but also the fallout of these discoveries, insofar as they concern Man, and even the ethical justification of the methods that have allowed these discoveries. In “remaking” living beings according to imposed norms, and in scheduling, in a certain fashion, the manufacture of life according to new codes, certain questions move from the “how” to the “why”, i.e., from the scientific domain that is accessible to human thought to the metaphysical sphere, with its problems of the limit of what is surmountable and tolerable in terms of ethics.

3 The Progress of Medicine Face to Face with the Experimental Method

In his Birth of Predictive Medicine, Jacques Ruffié (1921 ‑ 2004) reminds us that medicine has evolved through three stages over the course of time: curative medicine, which has been practiced since Ancient times and is still being practiced; preventive medicine, which is more recent, and is designed to prevent people from falling ill, either by vaccinating them, in the case of infectious diseases, or by recommending an appropriate diet and medication in the case of metabolic disorders such as diabetes or arterial hypertension that have been detected by means of systematic examination; and, finally, predictive medicine, a branch of medicine that is still in its early phases, and which is based on modern technology and is able to predict situations of risk because of anomalies detected in the genetic inheritance or because of exposure to environments that are reputed to be dangerous (for example, carcinogenic smoke, asbestos).

About one and a half centuries ago, the publication of the Introduction to the Study of Experimental Medicine (1865) provided proof, based on scientific arguments, that the time had come to transfer the experience that had been acquired through the experimental method practiced on animal models to the ill person. After Claude Bernard, experimental medicine, attentive to the progress made by ideas and techniques in the physical and chemical sciences, and making use of its own advances in the understanding of the living cell, both normal and pathological, was to live through a development that was without precedent in the history of Humanity. To understand the causes of epidemics, nutritional deficiencies, metabolic deviations of hereditary origin and degenerative illnesses, and then to translate these causes in cellular and molecular terms, this was the process undertaken by medicine once it began to use the experimental method. In fact, for several decades, from the beginning of the 19th century, medicine had already undergone some major revisions of outdated practices and had inaugurated a new era in diagnosis. For example, the differential diagnosis of pulmonary ailments became possible because of the invention of the stethoscope by René Laennec (1781 ‑ 1826) and the practice of auscultation and percussion by Joseph Skoda (1805 ‑ 1881), the uncontested master of the Vienna school. In France, Pierre Louis (1787 ‑ 1872) used statistical methods to evaluate the efficacy of different treatments. Armand Trousseau (1801 ‑ 1867), a pupil of Pierre Bretonneau (1787 ‑ 1862), wrote a famous treatise on the medical clinic of the Hôtel-Dieu in Paris. In Great Britain, chronic nephritis, with its identifying symptoms, was described by Richard Bright (1789 ‑ 1858), paralysis agitans by James Parkinson (1755 ‑ 1824) and Addison’s disease, which affects the adrenal glands, by Thomas Addison (1793 ‑ 1860). During the 19th century, many other famous names signaled the arrival of a medicine that was resolutely anatomo-clinical in nature, in line with Bernardian doctrine.

3.1 From Empirical Medicine to Experimental Medicine

“Experimental medicine is thus a medicine that claims to understand the laws of the organism in sickness and in health, in such a way that it not only predicts phenomena, but also in such a way that it can regulate and modify them, within certain limits.”

Claude Bernard

Introduction to the Study of Experimental Medicine - 1865

In the Introduction to the Study of Experimental Medicine, Claude Bernard stigmatizes the relics of an empirical medicine that was still being practiced in his day and was forgetful of rationalism. The terms that he uses are without leniency: “I have often heard doctors who, when asked the reason for a diagnosis, reply that they don’t know how they recognize such a case, but it is obvious, or who, when asked why they administer certain remedies, reply that they don’t really know how to put it exactly, and that anyway they are not required to give a reason, because it is their medical tact and their intuition that guides them. It is easy to understand that doctors who reason in this way are denying science. What is more, it is impossible to be too forceful in rising up against such ideas, which are bad not only because they stifle any scientific seed in the young, but also, above all, because they favor laziness, ignorance and charlatanism.” In order to evaluate the meaning of these words, it should be remembered that in Claude Bernard’s time, the medical profession was far from considering the microscope to be a useful instrument for the study of cell structures, and that the cause-and-effect relationship between bacterial germs and infections had still to be demonstrated.

With the development of increasingly effective instruments for exploration, and of methods for microanalyses concerning a wide range of blood and humoral constants, throughout the 20th century, medicine, which was once empirical, has become scientific. Claude Bernard’s dream, experimental medicine, is now operative. This medicine is no longer content simply to determine the cause of an illness and to locate the affected organ, which was the major objective of clinical medicine; it seeks to detect the mechanisms of pathological processes by means of histological and physicochemical explorations. This medicine is no longer willing to passively monitor the evolution of an infectious disease. After having identified the responsible germ, it tries to target this germ with the chemical weapon that is able to destroy it selectively. This medicine is no longer content simply to find remedies; it aims to understand their mode of action. It sets itself the goal of meeting challenges such as finding the genetic cause of degenerative illnesses or of cancers and developing appropriate therapies. It is supported by statistical data. When a new drug is introduced, the results are now evaluated by the double-blind method: neither the patients (treated and non-treated) nor the investigators know who has received the drug and who has received a placebo. In the surgical domain, audacious techniques have also led to considerable progress, particularly in neurosurgery and in cardiovascular surgery. Thanks to robotics and to computer technology, remote surgery or telesurgery has become practicable, although until not so long ago it was only to be found in fiction.
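To make the logic of the double-blind evaluation mentioned above concrete, here is a minimal illustrative sketch in Python (not drawn from the text: the sample size, the coded arm labels and the effect values are invented for the example). Patients are allocated at random to two coded arms, outcomes are recorded against the codes alone, and the key linking the codes to “drug” and “placebo” is only consulted once the comparison has been computed.

    # Illustrative simulation of a double-blind trial (hypothetical data).
    # The coded labels "A" and "B" stand for the two arms; the key linking
    # them to drug/placebo is kept aside and only consulted at the end.
    import random
    import statistics

    random.seed(0)  # reproducible example

    key = {"A": "drug", "B": "placebo"}  # held by a third party during the trial

    # Random allocation of 40 hypothetical patients to the two coded arms.
    patients = list(range(40))
    random.shuffle(patients)
    allocation = {p: ("A" if i % 2 == 0 else "B") for i, p in enumerate(patients)}

    def measure_outcome(arm_code):
        # The simulated biological response depends on the real treatment,
        # even though neither patient nor investigator knows which it is.
        true_effect = 1.0 if key[arm_code] == "drug" else 0.0  # invented effect size
        return true_effect + random.gauss(0.0, 1.0)            # measurement noise

    outcomes = {"A": [], "B": []}
    for code in allocation.values():
        outcomes[code].append(measure_outcome(code))

    # Blinded comparison: only the coded labels are used here.
    means = {code: statistics.mean(values) for code, values in outcomes.items()}
    print("Mean outcome per coded arm:", means)

    # Unblinding happens last, once the comparison is fixed.
    print("Key revealed:", key)

The point the sketch illustrates is organizational rather than computational: the comparison of the two arms is carried out entirely on coded labels, so that neither expectation nor preference can influence the result before unblinding.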

Faced with emerging problems in public health, the task undertaken by experimental medicine is immense. In the middle of the 20th century, the spectacular recovery from high-incidence infectious diseases such as pneumococcal pneumonia, meningococcal meningitis or acute forms of tuberculosis, which was brought about by antibiotics, gave rise to the idea that medicine had won a battle against the microbial world and that, from then on, it would be able to control the evolution of infectious diseases and to offer rational treatments. The gradual appearance of microbial resistance to antibiotics has brought an end to this euphoric era. Penicillin, for example, which was put on the market at the end of the 1940s, was active against practically all strains of Staphylococcus aureus. Sixty years later, more than 90% of the strains of this same microbe are resistant to penicillin. The incidence of nosocomial infections, which are contracted in health care facilities, never ceases to rise. At present, around 10% of hospitalizations are complicated by the patient developing a nosocomial infection. Equally worrying are the re-emergence of diseases that were once considered to be under control, such as tuberculosis or poliomyelitis in Africa, and the emergence of new diseases such as AIDS, whose HIV virus (Human Immunodeficiency Virus), identified at the beginning of the 1980s, has generated a pandemic that has spread throughout the planet. Infectious diseases are currently responsible for more than a quarter of human deaths. The Koch bacillus responsible for tuberculosis and the pneumococcus kill three to four million people a year around the world. In 2004, HIV killed more than three million people, and more than forty million people are infected. One person is infected every 30 seconds.

In viral diseases, the role of vectors (insects, various animals) as well as the notions of contagiousness and aggressivity have been emphasized. We have only to remember the dreadful contagiousness and aggressivity of the Spanish flu virus (Influenzavirus A, subtype H1N1) which, in 1918 ‑ 1919, killed more human beings around the world than the First World War that preceded it. In contrast, the SARS (Severe Acute Respiratory Syndrome) epidemic of 2003, the vector of which was doubtless the civet, a small carnivore raised in China and prized for its meat, was rapidly contained because of its low contagiousness and also because of the isolation measures that were taken. Human behavior is not without its effect on the emergence of viral diseases. The growth in intercontinental travel and human migration, as well as intensive deforestation in Africa and South America, which bring virus vectors into contact with Man, are factors concerned in the emergence of viral diseases that risk being explosive and devastating. In this context, the history of the Ebola virus and of the Marburg virus, which cause violent hemorrhaging, is edifying. In 1967, in the German town of Marburg, an epidemic of unknown origin broke out, the illness manifesting itself with brutal suddenness by vomiting, diarrhea, a high fever and an increased tendency to bleed. This pathology, which was contained rapidly by means of drastic isolation measures, was found to be of viral origin. The pathogen concerned was a filovirus (filiform virus). A brief enquiry showed that the origin of the epidemic was contact between technicians of a pharmaceutical company and monkeys that had been imported from Uganda and that were carrying the virus. In 1976, two other epidemics, characterized by severe and often fatal hemorrhagic fevers, were reported in the Sudan and the Republic of the Congo. Here again, the illness was caused by a filovirus, the Ebola virus. At the time of writing, only public health organizations, including the NIH (National Institutes of Health) in the USA, have attempted to set up vaccination and therapeutic strategies. Research on these dangerous viruses requires high-security installations that are particularly costly, so that private companies are reluctant to invest in work that is only targeted on poor regions and that concerns epidemics that have so far been contained successfully, although one day the Ebola and the Marburg viruses could quite well escape their African niches.

Experimental medicine must also take up the colossal challenge of the five thousand hereditary diseases that are currently listed, the most handicapping of which are myopathies and neuropathies. Given the means that are available to the contemporary clinician for assigning each of these diseases to a genetic defect, one can only be amazed by the mass of information about them that has accumulated over a century, since the first diagnosis of a hereditary disease, alcaptonuria, which was made in 1902 by Archibald Garrod (1857 ‑ 1936), a doctor at London’s St Bartholomew’s hospital. Alcaptonuria is a non-serious genetic flaw that can be detected easily by a blackening of the urine. It is the result of a blockage caused by the mutation of an enzyme involved in the catabolism of an amino acid, tyrosine, this blockage leading to the accumulation of homogentisic acid, the polymerization of which gives rise to a brownish color. The patient examined by Garrod was a young boy. Investigation of the family history revealed that transmission of the flaw was correlated with marriages between first cousins and followed Mendel’s laws for recessive traits. Garrod demonstrated other hereditary-type anomalies: cystinuria, porphyria and pentosuria. In 1909, these observations were published in a work that became a classic: Inborn Errors of Metabolism.

In 1956, the specific molecular defect of a metabolic anomaly linked to a mutation was identified for the first time by the German-born British biochemist Vernon Ingram. This was the hemoglobin defect responsible for drepanocytosis or sickle cell anemia: a glutamic acid in the β chain is replaced by a valine. The consequence of this simple change is a modification of the structure of the hemoglobin, leading to a sickle-shaped deformation of the red blood cells, the increased fragility of these cells and also a tendency towards cell lysis. This discovery made use of the electrophoresis and chromatography techniques that had just been introduced in biochemistry (Chapter III‑6.2.2): such a discovery would not have been possible without these techniques. Because of the progress made in molecular biology, the nosological framework of hereditary diseases has been greatly enriched over the last twenty years. For example, at present, more than one hundred hereditary-type myopathies have been identified by accurately locating molecular lesions in the genomic DNA and characterizing the structural and functional modifications of the mutated proteins.
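At the level of the gene, this substitution corresponds to the change of a single base in the sixth codon of the β-globin chain, GAG (glutamic acid) becoming GTG (valine). The toy snippet below, which uses a deliberately minimal codon table, simply illustrates that translation step; it is an illustration added here, not part of Ingram’s own analysis.

```python
# Minimal illustration of the sickle cell mutation at the codon level.
# Only the two codons needed for the example are included in the table.
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}

normal_codon, mutant_codon = "GAG", "GTG"
print(f"beta chain codon 6: {CODON_TABLE[normal_codon]} -> {CODON_TABLE[mutant_codon]}")
# Output: beta chain codon 6: Glu -> Val
```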

Certain health problems present real challenges for experimental research. This is the case for the spongiform encephalopathy caused by a prion (proteinaceous infectious particle), which has all the more impact on the imagination because its etiology remains a mystery. It is also the case for the degeneration of the central nervous system correlated with aging, Alzheimer’s disease being a striking example, although, as far as the familial forms of this illness are concerned, i.e., those of the hereditary type, it has been possible to link the invasion of the brain by a so-called amyloid peptide, which accumulates in plaques, with the absence, due to a mutation, of an enzyme, a peptidase, that normally degrades this peptide.

Contemporary scientific medicine sometimes acquires a revolutionary aspect. Here again, as with other disciplines involved in the study of living beings, it arises from discoveries resulting from the principle of serendipity (Chapter III‑2.2.3). This was the case when, in January 1987, a team in Grenoble, France 44, led by the neurosurgeon Alim-Louis Benabid (b. 1942) and the neurologist Pierre Pollack (b. 1950), discovered by accident that in patients affected by Parkinson’s disease a beneficial effect was achieved by deep, high-frequency electrical stimulation of the brain. The three major symptoms of Parkinson’s disease are muscular rigidity, a tremor when at rest and a slowing down of the execution of movements. In the 1960s, the Swedish team of Arvid Carlsson (b. 1923), who won the Nobel prize for Physiology and Medicine, demonstrated a relationship between the Parkinson syndrome and a deficit in the secretion of a neurotransmitter, dopamine. A group of neurons that is limited to half a million (of the 100 billion contained in the brain) produces this neurotransmitter in a small structure located in the midbrain, called the substantia nigra. The neurons of the substantia nigra have elongations that interact with different nerve formations (called nuclei), including the subthalamic nucleus. In 1990, Bergman et al. 45 published an article that describes a curious relationship between a provoked lesion of the subthalamic nucleus and the disappearance of the signs of Parkinson’s disease in a monkey that had been made Parkinsonian by chemical treatment. This publication led the team in Grenoble to target their electrical stimulation on the subthalamic nucleus. This was completely successful. This electrical stimulation procedure, which is now well-codified, involves using stereotactic neurosurgical techniques, controlled by Magnetic Resonance Imaging, to implant an electrode into the subthalamic nucleus. The electrode is connected to a generator that is implanted under the patient’s clavicle. The generator sends brief electrical impulses at frequencies from 100 to 200 Hz. Under the effect of this stimulation, the characteristic symptoms of the illness, particularly the resting tremor and bradykinesia, regress in a spectacular manner. The mechanism by which this stimulation acts is not yet understood. No doubt this has to do with complex phenomena involving the inhibition of certain neuron relays near to the substantia nigra, which remain to be deciphered. Here we have a typical case of a progression from an experimental fact, discovered by accident, towards the analysis of its cause. From the point of view of the experimental method, it is interesting to draw a parallel between this serendipitous discovery of the beneficial role of electrical stimulation of the midbrain in Parkinson’s disease and Cartesian-style programmed research that aims to graft into the brain of Parkinson’s disease sufferers embryonic stem cells differentiated into dopaminergic neurons 46.

Civil society and its armed wing, the political authorities, have understood that experimental science has the tools, the method and the thought processes necessary to develop strategies for prevention and healing. Every year a few new antibiotics are isolated and tested, and new vaccines are developed. This is the case for DNA vaccines obtained by inserting a bacterial or viral immunogenic DNA sequence into a bacterial plasmid. The modified plasmid, amplified by bacterial culture and then injected into an individual, becomes incorporated into the immune cells of this person and induces the synthesis of immune proteins. Finally, it seems relatively certain that hopes concerning gene therapy for hereditary diseases will be fulfilled before too long (Chapter IV‑2.3.1).

One of the traits that is characteristic of the period we live in, and which arises partly from the economic stakes involved, is the shortening of the time that elapses between a discovery being made and the application of that discovery. For example, interfering RNAs, which were discovered in the 1990s (Chapter IV‑1.2.2), are already the subject of therapeutic investigation. More than a hundred biopharmaceutical companies around the world are using them with a view to producing drugs from them 47. In mice, a certain number of synthetic interfering RNAs have proved their efficacy in silencing genes which, following mutation, have acquired carcinogenic potential. However, the use of interfering RNAs as therapeutic agents requires them to be stabilized, because they are fragile molecules. The group headed by Achim Aigner (b. 1965) 48, at the School of Medicine in Marburg, Germany, managed to stabilize a synthetic interfering RNA by complexing it with polyethyleneimine, and this interfering RNA is able to block the expression of a receptor involved in cancerization (the c‑erbB2/neu (HER‑2) receptor). Used in mice, such a drug appears promising.

Despite the undeniable progress that has been made, experimental medicine is still some way from finding solutions to some of the enigmas that it meets along the way, and which underline the complexity of living beings. Some time ago, it was thought that, after having invalidated a gene coding for a protein that is indispensable to a function, we would discover the secret of a cause-and-effect relationship. Experimental practice has shown that, generally, this is far from being the case. Another example of the complex relationships that exist in living beings is the interference of the mental and the organic. One experiment that suggests this interference was carried out on mice that had acquired, by transgenesis, a form of pathology similar to Huntington’s chorea. Mice from the same line were separated into two batches, one acting as a control, and the other being subjected to daily mental stimulation, including memorization tests. Unexpectedly, the appearance of symptoms was noticeably slowed down in the mice that had been subjected to mental gymnastics 49, as if the brain, by intentionally mobilizing its neuron activity, was able to secrete substances able to alleviate its own defects. In short, by means of possible retroactive mechanisms that are called upon by the mind, the brain appears to act as both actor and spectator.

3.2 Contemporary Advances In Biotechnology: The Example Of Medical Imaging

At the turn of the 21st century, experimental medicine was being nourished by techniques inherited from experimental physics, chemistry, and even mathematics and computer technology, in the same way as the other sciences of living beings. The progress made in medical imaging techniques has been particularly impressive since the time, at the end of the 19th century, when the X-rays discovered by Wilhelm Röntgen made it possible to view the structure of the human skeleton. The saga of X-radiation continued through the 20th century (Chapter III‑2.6.1). For the last few decades, new imaging techniques have come to the fore. They have spread rapidly, and been refined.

Ultrasound imaging, which is based on the principle of the reflection of ultrasound waves off different kinds of surfaces, has become an everyday technique for viewing blood flow in blood vessels and the heart. However, it is mainly in the study of the brain that medical imaging has benefited from technical advances in the domains of physics and computer technology, and it has been innovative in assigning cognitive activities to well-identified anatomical structures. This functional neuroanatomy makes it possible, in a non-invasive manner, to monitor and locate the operation of neuron networks with great temporal and spatial precision, during various cognitive tasks such as reading and the written or oral expression of thought.

The middle of the 20th century saw the gradual development of two methods for exploring zones of cerebral activity, electroencephalography and magnetoencephalography. At present, these techniques are being taken over by MRI (Magnetic Resonance Imaging) (Figure IV.14). The principle of MRI is based on the detection of hydrogen nuclei and their differentiation according to their environment. Functional MRI leads to the location of the areas of the brain that are active during calculation exercises, the perception of sounds, language and objects, and memorization, with a resolution of just a few millimeters. Its power of exploration is such that it has been possible to analyze the brain response, in sleeping or awake babies who are only three months old, to auditory stimuli from language that either makes sense or does not make sense 50. The response, located in the left hemisphere and the prefrontal cortex, leads to the conclusion that, from the first months of life, there are zones of the brain that are potentially active before the first attempts at language appear.

Figure IV.14

Application of NMRI (Nuclear Magnetic Resonance Imaging) to the study of the neuron activity in a normal human brain

(reproduced from D. Maintz et al. (2002) “Phosphorus-31 NMR spectroscopy of normal adult human brain and brain tumors”, NMR in Biomedicine, vol. 15, pp. 18‑27, John Wiley & Sons Ltd, with permission)

A - The two images of the magnetic resonance of protons (H) correspond to two virtual cross-sections of the brain along the axial plane, in T1 mode (interaction of the protons with the environment).

B - The profile corresponds to the NMR spectrum of phosphorus 31P (framed region). The pH is determined as a function of the position of the peak for inorganic (mineral) phosphate, Pi. PME: phosphomonoesters; PDE: phosphodiesters; PCr: phosphocreatine; ATP: adenosine triphosphate with the specific resonances of the phosphate groups in the α, β and γ positions.

Both in France (CEA-Saclay and the Frederic Joliot Hospital at Orsay) and abroad, recent MRI performance has encouraged projects concerning the manufacture of instruments able to produce magnetic fields of around ten teslas, which allows an unequaled definition in the identification of areas of the brain assigned to specific cognitive functions and in the highly accurate determination of the location of pathological lesions.

A technique that is complementary to MRI is Positron Emission Tomography (PET). This generally uses water labeled with oxygen 15 (15O), a radioactive isotope of natural oxygen that has a very short half-life (123 s), produced immediately before use in a cyclotron by bombardment of a 14N target with protons. The radiolabeled water is injected into the blood flow of the patient. It is found in greater concentration in the zones that are the most irrigated by blood capillaries. The positrons that it emits collide with the surrounding electrons and give rise to photons that can be detected by the appropriate apparatus. Affected by a stimulus (whether this stimulus results from talking, writing or listening), the blood irrigation of the zones of the brain that have been specifically excited increases noticeably. The location of the positron emission provides information about the location of these zones. Within a few dozen minutes, it is possible to locate a highly vascularized cerebral tumor. PET can use molecules other than water, such as organic molecules labeled with positron-emitting atoms, fluorine 18 (18F, half-life 110 min) and carbon 11 (11C, half-life 20 min). Around twenty years ago, in Canada, an analogue of L‑dopa, the precursor of dopamine in the brain, 18F‑6‑L‑fluorodopa, was synthesized, and was found to be an excellent probe for determining the capture capability of the endings of the dopaminergic neurons in the striatum. In patients suffering from Parkinson’s disease, this capture capability is noticeably reduced. At present, PET involving 18F‑6‑L‑fluorodopa is being used to evaluate the survival of dopaminergic cells grafted into the striata of Parkinson’s disease sufferers 51, 52.
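To see why such isotopes have to be produced just before use, one can take the 123 s figure quoted above as the half-life of 15O and apply the standard decay law; the following is a simple worked example, not a value taken from the text:

```latex
N(t) = N_0 \, 2^{-t/T_{1/2}}, \qquad
\frac{N(600\ \text{s})}{N_0} = 2^{-600/123} \approx 0.03
```

In other words, after ten minutes only about 3% of the initial activity remains, which is why the radiolabeled water must be prepared in a cyclotron immediately before the examination.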

Nowadays, brain imaging techniques can be used to explore the electromagnetic anomalies of neurological or neuropsychiatric illnesses such as Huntington’s chorea, the different forms of Alzheimer’s disease or even autism, the genetic origin of which is in the process of being deciphered. A bridge has now been built between the molecular defects identified by genetics and the electromagnetic anomalies that result, analyzed by functional cerebral imaging. It was not so long ago that Descartes considered that human thought was unconnected to a material support (Chapter II‑3.4.3). We are not far from the era when Broca located the language area in a specific zone of the brain after the autopsy of an aphasic patient (Chapter III‑3.1), thus opening the door to another scientific domain, neuropsychology, which had previously only been the subject of speculation.

The consequences, from the societal point of view, were far from being insignificant. Thus autism, which was once suspected of being caused by errors in the mother’s behavior with respect to her child, has been shown to be a disturbance in the development of the fetal nervous system, in the temporooccipital region.

While the neurosciences occupy a preponderant position in the medicine of the beginning of the 21st century, because of the development of techniques that aim to analyze even the functions of thought, emerging methodologies of another order, such as gene therapy (Chapter IV‑2.2), are in the process of completely modifying our ways of treating and curing a range of previously incurable human diseases, from incapacitating immune disorders to cardiovascular diseases and cancer.

3.3 From Experimental Medicine To Predictive Medicine

“It is in the domain of thought about the future that Man is singled out. We are beings who have an imagination. Not content to live in the present, to profit from past experience, we remain haunted by a future that we are conscious of constantly entering. This obsession with the future has been a powerful driving force in cultural evolution. We seek to predict in order to avoid the worst and to better prepare for our tomorrows.”

Jacques Ruffié

Birth of Predictive Medicine - 1993

By predicting potential dangers in subjects who are in good health, predictive medicine aims to provide the means of avoiding these dangers. These dangers can be intrinsic in nature, being written, for example, into a certain genome DNA sequence, or they can be extrinsic in nature, linked to an unsuspectedly deleterious environment. In each generation, mutations occur, certain of which can lead to so-called genetic diseases; between 3 and 4% of newborns are affected. Besides these spontaneous mutations, there are also mutations arising from the genetic inheritance of the parents. The purpose of genetic counseling is to warn parents when the existence of a potentially serious genetic flaw is suspected.

The identification of genes that give a predisposition for cancer (proto-oncogenes) is a convincing illustration of the power of predictive medicine. This involves genes that control the synthesis of growth factors, the activity of which is essential to embryogenesis and to the repair of damaged tissue. While they are normally subject to strict control by anti-oncogenes, proto-oncogenes are able to become active in an anarchical manner, under different influences, and to transform themselves into cancer-generating oncogenes. Recently, mutations have been found in two genes, BRCA1 and BRCA2, these mutations giving a predisposition for cancers of the breast and of the ovary. Thanks to genetic exploration, it will soon be possible to predict whether a cancer of the breast will have a rapid progression leading to uncontrollable metastases or a slow progression. Depending on the case, patients will be subjected to heavy chemotherapy or to a less aggressive treatment. In this context, targeted therapy with monoclonal antibodies is a source of great hope. While genetic inheritance has a role in cancer, the environment plays a not-insignificant role as well. This is the case, for example, for lung cancer in those who smoke tobacco, cancer of the œsophagus in those who drink alcohol, and job-related cancers in those working in factories producing colorants or materials derived from asbestos or tars.

Cardiovascular diseases are the primary cause of death in the more developed countries, involving either an infarction or a stroke. Many risk factors for these diseases are known, such as metabolic deviations affecting cholesterol or the blood serum proteins involved in the transport of lipids. These metabolic anomalies result in a syndrome known as atherosclerosis, which is characterized anatomically by the deposit of fats in the form of plaques in the arteries. While genetic factors are at the origin of these metabolic problems, the latter are clearly amplified by an inappropriate diet. The role of predictive medicine is to recognize the genes that are responsible, to warn individuals of the risks they are running and to advise them about the types of lifestyle and diet that do not increase these risks.

Being able to predict, predictive medicine should be able to prevent by means of targeted drugs. Within this context, it gives rise to reflection upon the polymorphism linked to variation in a single nucleotide in the DNA of the genome of an individual. Known as SNP (Single Nucleotide Polymorphism), this polymorphism has proved to be a very useful auxiliary in molecular medicine. Hundreds of thousands of SNPs are present in the human genome and several tens of thousands in genes coding for proteins. Where they are located differs according to ethnic background. Among these SNPs, some appear to be linked to certain pathologies, such as certain forms of cancer or degenerative illnesses such as Alzheimer’s disease. In addition, in a small number of patients, the location of certain SNPs has been connected with previously-inexplicable drug incompatibilities. In line with these observations, pharmacogenomics, a branch of pharmacology that deals directly with genome sequence data, is trying to evaluate the impact of “SNP variants” on the efficacy and toxicity of drugs and to understand the genetic bases that explain the differences that are observed in the responses of different individuals to the same medication 53. Rather than using a standard drug that is not very efficacious or causes adverse side effects, it might be possible, depending on the genetic profile of the patient, to use a drug that is more appropriate to his or her genetic map. It is doubtless not just a fantasy to imagine that, in 20 or 30 years’ time, a patient visiting the doctor will be offered a genetic map established from cells taken from the buccal mucosa. Finding SNP variants that are known to be responsible for drug incompatibilities in such a map will make a targeted prescription possible. It will also allow the detection of genes for susceptibility to an illness, at the same time uncovering targets for new drugs. Pharmacogenomics, which is also known as the new pharmacogenetics, contrasts with the old pharmacogenetics, in which, having found an adverse clinical response to a certain therapy, an attempt was made to identify the protein target of the incriminated drug, then to go back to the gene coding for this protein, and to look for the mutation responsible for the aberrant response to the drug.
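A hypothetical sketch of the kind of targeted prescription described above is given below. The SNP identifiers, genotypes and drug names are all invented; a real pharmacogenomic system would of course rest on curated clinical databases rather than on a hard-coded table.

```python
# Hypothetical pharmacogenomic check: compare a patient's genotyped SNP
# variants with a table of known variant-drug incompatibilities.
# All identifiers and drug names below are invented for illustration.
INCOMPATIBILITIES = {
    ("rs0000001", "TT"): ["drug_X"],
    ("rs0000002", "AG"): ["drug_Y", "drug_Z"],
}

def drugs_to_avoid(genotype):
    """Return the set of drugs flagged as incompatible with this genetic map."""
    flagged = set()
    for (snp, alleles), drugs in INCOMPATIBILITIES.items():
        if genotype.get(snp) == alleles:
            flagged.update(drugs)
    return flagged

patient = {"rs0000001": "TT", "rs0000002": "AA"}
print(drugs_to_avoid(patient))   # {'drug_X'}
```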

The existence of customized predictive medicine, which would read the destinies of individuals in their genes, would not be without consequences for the life of the citizen. By registering each citizen with a genetic map, matched with a named identity card, predictive medicine might begin to take on the aspect of a Janus: its beneficent face warning subjects of potential risks of metabolic problems and guiding them towards the actions to be taken to lower these risks, but its evil face delivering each individual’s intimate details to the indiscreet inquisitiveness of investigators operating towards their own ends (insurance companies, employers…). No less worrying would be the sly but predictable transformation of the individuality of the repaired, or even doped, human being within a system of imposed, docilely-accepted assistance.

3.4 The Drug Library Of The Future

In the 19th and 20th centuries, the methodology for biological experimentation underwent a revolution caused by the progress made in the domain of chemistry, both in analytical chemistry, with the deciphering of increasingly complex molecular structures, and in synthetic chemistry, with the large-scale production of tens of thousands of new molecules. The effects of these molecules, which might eventually be used as drugs, were tested directly on animals. It was thus that in 1910 the German chemist Paul Ehrlich discovered Salvarsan, a derivative of arsenic, which was active against a type of treponeme, the agent of syphilis. This was the result of a systematic analysis of the effect of synthetic products, aromatic derivatives of arsenic acid, on syphilis in rabbits. Salvarsan was the 606th derivative that was tested, and this is why it was called 606 for a long time before it was given the name Salvarsan. Sometimes, lucky chance reveals surprising and unexpected properties in synthetic molecules. This was the case for chlorpromazine, which was initially used as an antihistamine. It was luck that led to its antipsychotic activity being discovered in 1950. A new era opened up in psychiatry with the arrival of synthetic neuroleptics like chlorpromazine.

A new chemical science known as combinatorial chemistry, which dates from the 1990s, has aroused an increasing amount of interest in pharmacology. This involves making two or more species of organic molecules that carry reactive functional residues react in solution or in the solid phase, in such a way as to synthesize, by means of all possible combinations, a number of final and intermediary products running into the hundreds or even the thousands, which make up a chemical library or drug library. We can directly test all of the products formed on a sample of eukaryotic cells, in order to verify their effects (for example the inhibition of an anarchical proliferation of cancerous cells), or on microorganisms in order to evaluate an antibiotic capability. We can also proceed straight away with the fractionation of the reaction products and the testing of each of the fractions. If the response is positive, fractionation is continued until the molecular species responsible for the desired effect is obtained. Other evaluation parameters for this molecule, such as its absorption, its toxicity and its metabolic fate (distribution in the organs, chemical modifications and excretion), are then explored, first in cells, and then in animals (rats, mice), thus constituting pre-clinical tests. These screening operations, which are said to be high-throughput, require automation and robotization aided by powerful computer technology. Each year, pharmaceutical companies screen tens of thousands of different molecules on hundreds of targets.
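The combinatorial logic itself is easy to picture. The sketch below, in which the fragment names and the assay score are pure placeholders, enumerates every pairwise combination of two families of building blocks and keeps the combinations whose (simulated) read-out exceeds a threshold; an actual high-throughput campaign would replace the scoring function with a robotized assay.

```python
# Toy enumeration and screening of a combinatorial library.
# Fragment names and the scoring function are placeholders.
from itertools import product

acids  = ["A1", "A2", "A3"]          # hypothetical acid fragments
amines = ["B1", "B2", "B3", "B4"]    # hypothetical amine fragments

def assay_score(compound):
    """Stand-in for a high-throughput assay read-out between 0 and 1."""
    return (hash(compound) % 100) / 100.0

library = [f"{a}-{b}" for a, b in product(acids, amines)]   # 3 x 4 = 12 products
hits = [cpd for cpd in library if assay_score(cpd) > 0.8]
print(len(library), "compounds screened,", len(hits), "hits")
```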

Complementary to combinatorial chemistry, in silico chemistry works by molecular modeling and uses computer programs for the rational design of new drugs that are able to fix onto specific protein targets. The purpose of this is to follow, virtually, the modifications in the reactivity of a given drug molecule as a function of the modifications imposed on its structure, for example, the addition of residues that differ according to their electrophilic or hydrophilic properties, or according to the length of their side-chain. Provided there is a chemical library and we know the three-dimensional structure of a macromolecule, for example an enzyme, as well as the nature of the residues that define its active site, we can hope to select and chemically modify a substance that is able to recognize the active site of this enzyme and to make an almost perfect ligand out of it, able to efficiently block the operation of the target enzyme. This method, which is based on computer-aided chemistry, is called “Structure-based drug design”, and has had some notable successes. It has made it possible to develop an inhibitor capable of blocking a protease involved in the replication of the AIDS virus.
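By way of illustration only, the following toy ranking captures the spirit of this approach: each candidate molecule is reduced to a set of (invented) pharmacophore features and scored by its complementarity with the features of a target active site. Real structure-based design relies on three-dimensional docking and energy calculations, not on this kind of set matching.

```python
# Toy "structure-based" ranking: score candidates by how many features of a
# hypothetical active site they match. Purely illustrative.
ACTIVE_SITE = {"h_bond_donor", "hydrophobic_pocket", "positive_charge"}

LIBRARY = {
    "candidate_1": {"h_bond_donor", "hydrophobic_pocket"},
    "candidate_2": {"hydrophobic_pocket", "negative_charge"},
    "candidate_3": {"h_bond_donor", "hydrophobic_pocket", "positive_charge"},
}

def complementarity(features):
    """Fraction of active-site features matched by the candidate."""
    return len(features & ACTIVE_SITE) / len(ACTIVE_SITE)

ranked = sorted(LIBRARY, key=lambda name: complementarity(LIBRARY[name]), reverse=True)
print(ranked)   # candidate_3 ranks first
```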

However, both in combinatorial chemistry and in molecular modeling, the successes that have been achieved remain modest in number compared to the means that have been deployed to achieve them. In terms of statistics, out of ten thousand molecules that are recognized as being efficacious for a given target in vitro, around one hundred are chosen for preclinical trials on animals, around ten go on to clinical trials in Man and only one will come out as a drug. The financial and economic impact is far from negligible. It has even become a preoccupation in a system where merciless competition is the rule.

In addition to synthetic chemistry, preparative chemistry, which is based on the isolation of natural molecules, is now the subject of renewed interest, due to the introduction of high-throughput techniques. High-throughput screening, which is an essential tool in combinatorial chemistry, is also carried out to ensure the systematic detection and isolation, from marine animals, microscopic fungi, prokaryotic organisms and various plants, of natural substances having interesting pharmacological activities, such as antibiotic or anti-cancer activities. For example, among the substances that have been isolated recently are cribrostatin, a specific cytostatic of melanoma cells, from a marine sponge; mannopeptimycin, a bacterial antibiotic, from the actinobacterium Streptomyces hygroscopicus; and, from an exotic plant of the genus Daphniphyllum, a whole set of alkaloids with a cytostatic activity with respect to human tumor cells. The molecular diversity of the living world is such that the reserves of natural products having pharmacological activities are far from being exhausted. So far, only a small percentage of the microbial species populating the Earth have been listed. The depths of the oceans harbor many unknown species. Thousands of insect species remain to be discovered in the canopies of tropical rainforests. Exploration of the plant kingdom is far from being complete. The listing of natural molecules having a therapeutic activity has only just begun. The hunt promises to be a fruitful one, all the more so because the high-throughput screening methods that can now be used greatly increase the efficiency of the search.

High-throughput screening, applied to natural molecules, has overturned the methodological procedures that were in use until recently, which progressed through logical steps, using relatively simple artisanal analytical methods, from an observation, often resulting from serendipity, to the isolation of the active substance. Thus, in the 19th century, using the inherited traditional knowledge that a decoction of Cinchona officinalis bark calms malaria attacks, Pierre Joseph Pelletier (1788 ‑ 1842) and Joseph Bienaimé Caventou (1795 ‑ 1877) decided to isolate the active substance of this bark. From the raw extract, they purified an alkaloid, quinine, which proved to be the anti-malarial substance they were looking for. More recently, the starting point of Florey and Chain’s isolation of penicillin from the microscopic fungus Penicillium notatum was the fortuitous observation made by Fleming that this Penicillium secretes an antibiotic factor (Chapter III‑2.2.5).

There are many examples in which serendipity has been the principal factor involved in the discovery of a drug, and this will no doubt continue to be the case. The appearance of a lucky chance, after all, is not incompatible with high-throughput practices. Also, it is not impossible that, in the future, the discovery of new natural substances will be combined with the use of combinatorial chemistry, with the aim of manufacturing, from these substances, derivatives having a much greater power of action and specificity 54.

To sum up, the experimental method has caused contemporary medicine to take a giant leap forward, with the discovery of increasingly high-performance functional exploration techniques, the development of therapies using molecules that are already present in Nature or are manufactured by synthesis and the more and more advanced understanding of molecular mechanisms that takes into account the functions of the cell and makes it possible to predict, if not to prevent, pathological malfunctions.

4 Towards a Global Understanding of the Functions of Living Beings

The basic idea of the pioneers of molecular biology was that the function of a macromolecule depended on its structure. Thus, Perutz’s elucidation of the tetrameric three-dimensional structure of hemoglobin, and of its modifications depending on the degree of oxygenation, shed a considerable amount of light on the cooperative mechanism of the transition from the deoxyhemoglobin state to the oxyhemoglobin state (Chapter III‑6.2.1). In the same way, an understanding of the structure of many enzymes, receptors and transporters of metabolites has shed light on their mechanisms. In parallel with the exploration of the structures and functions of proteins, that of genomes has made remarkable progress. The subtle entanglements of genomics and proteomics that have become accessible to the experimental method are the order of the day. One major challenge for post-genomics is to understand how proteins, expressed by genes, interact with one another to generate the functions that characterize cellular specificity. Even more ambitious are attempts to understand the operation of organs or even of living organisms, based on mechanisms that are implemented at the molecular level. These attempts lead straight to an integrated biology, that is, a biology that aims to understand the overall functioning of living beings. Taking as its purpose access to emerging functions resulting from interactions between macromolecules, integrated biology first tries to invent methods that make it possible to detect these interactions. Strengthened by the information obtained, it then tries to integrate this information, with a mathematized language, into modules that attempt to simulate living beings.

4.1 Experimental Demonstration of Protein Interactions

From the simplistic procedures of the middle of the 20th century, which were justifiable within the reductionist context of that period, and which involved considering each species of protein as an autonomous functional entity, we have moved on to the idea that the different species of protein that inhabit a cell have a dialogue with one another, and that they may move from one endocellular organelle to another, depending on post-translational modifications (for example, phosphorylations) that change their conformation and, at the same time, their reactivity and their behavior. Thus, an enzyme protein is not only defined by its catalytic performance with respect to a given substrate, but also by its place in a metabolic network where it interacts in a dynamic and transitory manner with a multitude of other protein species (Figure IV.15A).

Figure IV.15

Evolution of ideas concerning the transfer of information between endocellular proteins

A - Case of enzyme systems. The diagram on the left refers to the classical idea of enzyme transformation of a substrate S into product P, catalyzed by enzyme protein A. The diagram on the right shows that besides its catalytic function, protein A interacts with other proteins in the cell.

B - Case of the transduction of a signal that is external to the cell (a hormone, for example). The diagram on the left refers to the classical idea of signaling from a receptor R according to a linear cascade of protein-protein interactions inside the cell, leading to an effector Z. The diagram on the right shows that the signal is spread through proteins organized into interactive networks.

The concept of cell signaling has also evolved. Instead of considering that a cell membrane receptor, activated by fixation of an extracellular ligand (a hormone, for example), addresses the information received to an endocellular effector protein via a linear cascade of individual proteins, it has come to be postulated that communication between an activated receptor and its effector is mediated by proteins organized into interactive networks (Figure IV.15B). This machinery provides more flexibility in the addressing of messages to effectors.

Another subject to be considered is the density of macromolecules of all types, such as proteins, nucleic acids, lipids and polysaccharides, contained in a microorganism or a eukaryotic cell, which reaches values of 300 to 500 g/liter, denoting a semi-solid state or a considerable degree of compacting. However, for technical reasons, kinetic studies carried out in vitro on isolated enzymes have been performed with solutions that are 1,000 to 100,000 times more dilute. Conscious of this difference in scale between information obtained from in vitro studies and the in vivo reality, the biology of today is trying to re-evaluate molecular dynamics within the context of the cell. This is why we are seeing the birth of an integrated (or integrative) biology of functions, which, using modeling procedures, aims to achieve an understanding of the spatiotemporal dynamics of the interactive components inside cells. This holistic conception of biological systems (“systems biology”) has been made possible by progress in technological expertise in domains as varied as biochemistry, molecular biology, physical optics, electronics, nanomechanics, physical and mathematical modeling and computer technology. It is a necessary complement to the classical experimental method based on Bernardian determinism which, in order to connect an effect with a cause, explores living beings in a manner that is often monoparametric and is inevitably reductionist. This signals a change in paradigm in the experimental approach to living beings.

A particularly effective investigative method used to explore the dialogue between proteins is the double-hybrid method described in 1989 by Stanley Fields (b. 1955) and Ok‑Kyu Song 55 (Figure IV.16).

Figure IV.16

Principle of highlighting protein-protein interactions using the double hybrid system

By genetic construction, two proteins, P1 and P2, whose interaction is to be tested, are expressed in the form of fusion proteins in yeast. Protein P1 is fused with the DNA-binding domain (GAL4-BD) of GAL4, a protein that regulates the transcription of the β-galactosidase gene. Protein P2 is fused with the activation domain of GAL4 (GAL4-AD). Insofar as P1 interacts with P2 (B), the GAL4 transcription regulation activity is re-established, which is verified by the transcription of the reporter gene. If, on the contrary, there is no interaction bringing the two domains of GAL4 together (A), the reporter gene is not transcribed.

The principle of this method is based on the modular nature of numerous transcription factors in eukaryotes. These factors contain both a DNA-binding domain that includes a specific DNA-binding site and a transcription activation domain that starts up the machinery for transcribing DNA into messenger RNA. These two domains can be dissociated and then re-associated in a functional manner, by forming hybrids with interacting proteins. A first protein, P1, is fused with the DNA-binding domain of a transcription factor by genetic manipulation, and a second protein, P2, is fused with the activation domain of the same transcription factor. If P1 is able to interact with P2, the transcription factor is reconstituted and the reporter gene upon which it depends can be expressed.

The trapping technique, which is complementary to the double-hybrid system, was developed to make it possible to identify a set of interactive proteins within a cell. A protein that is included in this set (protein of interest) is fused by genetic engineering techniques to a short polyhistidine chain (called a tag). Using this tag, the protein of interest is fixed to a solid medium containing nickel ions, a material that is reactive with respect to the polyhistidine chain. In the presence of a soluble cell extract, the protein of interest binds the cognate proteins of this extract, making it possible to retrieve a complex whose components, corresponding to interactive proteins, can be resolved after denaturing gel electrophoresis and then characterized (Figure IV.17).

Figure IV.17

Principle of isolating proteins that are interactive with respect to a protein of interest immobilized on a solid medium

The protein of interest P is expressed in the form of a protein fused to a protein “tag” T that is able to bind to a solid support with a specific affinity. The assembly is brought into contact with a cell extract. Certain proteins of this extract, A, B and C, which are able to interact with the protein P, become fixed to the latter. In a second step, the tag T is freed from its attachment to protein P by means of a specific cleavage enzyme. The PABC complex that is recovered from the solid medium in soluble form is subjected to polyacrylamide gel electrophoresis, in order to separate and identify its components.

The techniques described above are backed up by cell imaging techniques that make use of confocal microscopy, which is more directly in keeping with living reality. The optical performance of confocal microscopes has improved lately, with the arrival of two-photon and multiphoton lasers that illuminate precise points of the cell. As we have seen previously (Chapter III‑6.2.1), it is possible to create, inside a cell, a protein chimera made up of a protein of interest fused with a fluorescent protein, in this case GFP (Green Fluorescent Protein). There are currently several variants of GFP that are able to emit fluorescent light at different wavelengths. This has allowed the development of a technique known as FRET (Fluorescence Resonance Energy Transfer), which explores the interaction between two fluorescent proteins. In practice, two GFP variants that have neighboring emission spectra are fused, by genetic engineering inside the cell, to two proteins of interest, P1 and P2, that are suspected of being interactive. If this is the case, the fluorochromes that they carry are sufficiently close that the result is a modification in the intensity of the emission fluorescence of the donor fluorochrome (decrease) and of the receiver fluorochrome (increase), which is readily detectable.
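The distance dependence that makes FRET usable as a “molecular ruler” is a standard photophysical relationship rather than anything specific to GFP: the transfer efficiency E falls off with the sixth power of the donor-acceptor distance r, scaled by the Förster radius R0 of the chosen pair (typically a few nanometers):

```latex
E \;=\; \frac{1}{1 + \left( r / R_{0} \right)^{6}}
```

An appreciable transfer, and hence the change in donor and acceptor emission described above, is therefore only observed when the two fused proteins bring their fluorochromes within a few nanometers of each other.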

All of these studies, taken together, have given rise to the idea that endocellular proteins are organized into networks, that these networks are interactive and that their location in defined compartments of the cell is dependent on epigenetic events such as phosphorylations. Two attributes can be found in integrated systems: firstly, the presence of modules, i.e., interactive motifs, which, like the pieces of a jigsaw puzzle, fit together to produce a complex, coherent structure, and, secondly, the emergence of functional properties due to the newly created interactions.

Given an analytical description of the basic building blocks that are used to construct living systems, and an understanding of their modes of association in defined circumstances, it is normal to try to reconstruct, in their entirety, mechanisms that show the functioning of these systems. This idea was first applied to the yeast Saccharomyces cerevisiae for different reasons, such as cell homogeneity (in principle, and, in any case, statistically speaking, all yeast cells have the same genome and the same proteome), an in-depth understanding of the genome and the proteome and the presence of a vast directory of well-characterized mutants.

The use of techniques for the detection of interactions between proteins has revealed the existence of a potential dialogue of unexpected richness between a multitude of proteins (Figure IV.18), in the yeast cell. It is necessary to reflect upon this evidence, which leads to the postulate that, for a given protein, there are mechanisms that restrict and select the many partners able to react with it at a precise moment. Faced with a situation in which chance has the upper hand, leading to uncontrollable anarchy, it is necessary to have regulation, which is underpinned by Darwinian logic.

Figure IV.18

Network of protein interactions in yeast

(reprinted from D. Eisenberg, E.M. Marcotte, I. Xenarios and T.O. Yeates (2000) “Protein function in the post-genomic era”, Nature, vol. 405, pp. 823‑826, by permission from Macmillan Publishers Ltd, D. Eisenberg and Nature)

Example of an interaction network involving the yeast Sup45 prion protein. The line of dashes refers to experimental data concerning the interaction of Sup45 with another protein, Sup35. The lines in bold refer to interactions taken from experimental data, while the fine lines refer to predictions, particularly phylogenetic ones.

This logic arises from a choice of the most efficient reaction path, which is first of all dictated by the rate constants involved in the association and dissociation of molecular partners, without, however, neglecting any stochastic events that may arise. Chemical modifications of proteins, such as phosphorylation, glycosylation and acylation, participate in this regulation. The result at the cell level is a coherent channeling of the information that is carried from a molecular signal. Thus, fixation of a hormone onto a receptor induces a series of modifications to the intracellular proteins that channel information towards an effector terminal, for example an enzyme responsible for the production of a metabolite with a strategic function.
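For a reversible association between two protein partners, the balance between these association and dissociation rate constants is conventionally summarized by the dissociation constant; this textbook relationship, not specific to any particular system discussed here, reads:

```latex
\mathrm{P_1} + \mathrm{P_2}
\;\underset{k_{\mathrm{off}}}{\overset{k_{\mathrm{on}}}{\rightleftharpoons}}\;
\mathrm{P_1P_2},
\qquad
K_d = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}}
    = \frac{[\mathrm{P_1}]\,[\mathrm{P_2}]}{[\mathrm{P_1P_2}]}
```

Partners with a low Kd, or present at a high local concentration, will tend to win the competition for a shared protein, which is one simple way of seeing how kinetics can channel the flow of information.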

4.2 Mathematical Modeling Of The Complexity Of Living Beings

How can the sum of the scattered experimental data that we have concerning the catalytic capabilities of a multitude of enzymes of cellular origin be integrated into the operation of a cell? How can we envisage the gene-enzyme relationship according to current evidence concerning the complexity of the genetic message? Biocomputing, or bioinformatics, a science that emerged towards the end of the 20th century, proposes to try to answer these questions.

At the turn of the 21st century, with the development of increasingly powerful computer microprocessors that are able to carry out complex operations with amazing rapidity, the hope arose that it would be possible to simulate processes as varied as the regulation of the cell cycle, molecular flow in metabolic pathways and the reception of molecular signals, for example from hormones, by living cells, as well as the transmission of the messages that result. The dream of an in silico virtual biology has become achievable.

The first mathematical theory of simple enzyme reaction kinetics was put forward approximately a century ago, by Victor Henri (1872 ‑ 1940). Born in Marseilles to Russian parents, Victor Henri studied in Saint Petersburg and then spent time at the universities of Göttingen and Leipzig before becoming established in Paris. Having an eclectic mind, studying both psychology and physicochemistry, he had the wonderful intuition that enzyme catalysis arises from a specific mechanism, different from that implemented in an ordinary chemical reaction. The study carried out by Henri concerned the cleavage of sucrose (table sugar) into fructose and glucose by the action of an enzyme called invertase (sucrase). The term invertase was used because during the reaction there was a change in the rotatory power of the sucrose solution, shown by a polarimeter. Analysis of the reaction suggested that an enzyme-substrate complex is formed, which breaks down to regenerate the enzyme and liberate the product of the reaction. This analysis gave rise to an equation for the rate of the enzyme reaction as a function of substrate concentration. Henri published the results of his experiments both in his thesis, which he presented to the Paris Faculty of Sciences in 1902, and in two articles that appeared in the Reports of the Academy of Sciences 56, 57. In 1913, in Biochemische Zeitschrift (vol. 49, pp. 333‑369), Leonor Michaelis and Maud Menten (1879 ‑ 1960) confirmed the results of Victor Henri and formulated an equation that became a classic, describing the rate of formation of a product from a substrate in enzyme catalysis. Since the period of these first studies, the concepts involved in enzyme kinetics have evolved considerably. The first metabolic pathway to be deciphered was that of the degradation of glucose (glycolysis), either into ethanol in yeast, or into lactic acid in muscle tissue.
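The equation in question, now found in every biochemistry textbook, expresses the rate v of product formation as a function of the substrate concentration [S], the maximum rate Vmax and the Michaelis constant Km, on the basis of the enzyme-substrate complex postulated by Henri:

```latex
\mathrm{E} + \mathrm{S} \;\rightleftharpoons\; \mathrm{ES} \;\rightarrow\; \mathrm{E} + \mathrm{P},
\qquad
v = \frac{V_{\max}\,[\mathrm{S}]}{K_{m} + [\mathrm{S}]}
```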

After this, researchers became aware that the activity of enzymes could be modulated as a function of covalent modifications of amino acid residues of their protein chain (phosphorylation, dephosphorylation, acylation…). Metabolic flow analysis led to the idea of the limiting reaction. In the 1970s, the idea that there is a single limiting reaction in a chain or a cycle of reactions gave way to the idea that metabolic control is distributed over several reactions, and that each reaction has its own, more-or-less intense control strength. Another complexity factor came to light with the discovery of allostery 58. Allosteric enzymes have the particularity that molecules, often the terminal products of a chain of reactions, can bind reversibly to a site that is different from the active site (the allosteric site): the consequence of this is a conformational modification of the structure of the enzyme that has repercussions on the geometry of the active site and modifies its reactivity with respect to the substrate.

In the 1980s, faced with the complexity of the tangle of listed metabolic and signaling networks, attempts were made to use mathematical modeling to follow the traffic of molecules inside a cell in relatively simple metabolic pathways such as glycolysis. In the modeling procedure, the concentrations of the different molecular species are considered to be variables whose variations over time depend on their rate of production and their rate of disappearance, which leads to a set of coupled differential equations. With this procedure, we entered the domain of virtual biology.
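As a minimal sketch of what such a system of coupled differential equations looks like in practice, the short program below integrates a hypothetical two-step pathway S → I → P in which each step obeys Michaelis-Menten kinetics; all parameter values, and the choice of Python with SciPy, are illustrative assumptions, not elements of the historical models mentioned in the text.

```python
# Coupled differential equations for a hypothetical two-step pathway S -> I -> P,
# each step following Michaelis-Menten kinetics. Parameter values are invented.
from scipy.integrate import solve_ivp

VMAX1, KM1 = 1.0, 0.5   # enzyme 1 (S -> I): Vmax in mM/min, Km in mM
VMAX2, KM2 = 0.6, 0.3   # enzyme 2 (I -> P)

def pathway(t, y):
    s, i, p = y
    v1 = VMAX1 * s / (KM1 + s)    # rate of the first reaction
    v2 = VMAX2 * i / (KM2 + i)    # rate of the second reaction
    return [-v1, v1 - v2, v2]     # dS/dt, dI/dt, dP/dt

solution = solve_ivp(pathway, (0.0, 30.0), [5.0, 0.0, 0.0])  # 30 min, S0 = 5 mM
print(solution.y[:, -1])   # concentrations of S, I and P at the final time point
```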

One of the earliest examples that illustrates the scope of this virtual biology was the modeling of a bacterial infection by a bacteriophage, which was carried out at the University of California Berkeley campus in the USA 59. The simulation encompassed the different phases of the infection of the enterobacterium E. coli by the bacteriophage T7. This infection involves the translocation of the bacteriophage DNA into the bacterium, the replication of this DNA in the body of the bacterium and the diversion of the bacterium’s own protein synthesis machinery to the production of bacteriophages (Chapter III‑6.1). In order to model the infection of E. coli by phage T7, the genome of the phage was divided into 73 numbered segments, each segment representing one part of the genome. The modeling took into account the translocation of the viral DNA into the bacterium, the transcription of the viral DNA into messenger RNAs (mRNA) and the translation of the messenger RNAs into viral proteins. It used measurement values taken from experiments carried out in vivo, such as those concerning the kinetic constants of various enzyme reactions involved in virus proliferation. It was observed that, depending on whether the RNA polymerase is of bacterial or viral origin, the rate of transcription is 40 or 200 base pairs per second. By carrying out virtual mutations involving the in silico permutation of the order of the genes in the bacteriophage genome, for example the position of the gene coding for the RNA polymerase, researchers observed a slowing down of the replication of the bacteriophage that is practically equivalent to that measured in vivo. They came to the conclusion that the arrangement of genes in a natural virus is optimal for its proliferation and thus for its survival, in accordance with the concept of Darwinian selection.
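The effect of gene order on such a simulation can be caricatured in a few lines. In the toy sketch below, segments of a genome are transcribed one after another, so that the time before a given gene is available is simply the cumulative upstream length divided by the polymerase speed; the segment names and lengths are invented, while the speeds of 40 and 200 bases per second echo the two figures quoted above. This is only a caricature of the Berkeley model, not a reimplementation of it.

```python
# Toy illustration of why gene order matters under sequential transcription.
# Segment names and lengths are invented; speeds echo the 40 / 200 bases per
# second quoted in the text for the bacterial and viral RNA polymerases.
SEGMENTS = [("gene_A", 800), ("rna_polymerase_gene", 2600), ("gene_B", 1200)]

def completion_time(target, order, speed):
    """Seconds elapsed before `target` has been fully transcribed."""
    elapsed = 0.0
    for name, length in order:
        elapsed += length / speed
        if name == target:
            return elapsed
    raise ValueError(f"unknown segment: {target}")

print(completion_time("rna_polymerase_gene", SEGMENTS, 40.0))    # 85.0 s
reordered = [SEGMENTS[0], SEGMENTS[2], SEGMENTS[1]]               # permuted order
print(completion_time("rna_polymerase_gene", reordered, 40.0))   # 115.0 s
```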

Thanks to the creation of increasingly powerful software, the aim of virtual biology is to simulate signaling and metabolic pathways. In the longer term, the aim is to understand the molecular and cellular processes that direct embryo development, or to test the effects of drugs with known targets on the metabolic behavior of the cell. Metabolic engineering (which is already well developed) makes use of two types of models, stoichiometric models and kinetic models. Stoichiometric models describe metabolic networks in the stationary state, based on analytical data. Kinetic models combine stoichiometric information with information concerning the catalytic capabilities of the enzymes in a metabolic network.
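A minimal sketch of a stoichiometric (steady-state) model is given below for a toy network with two metabolites and three reactions; the stoichiometry, the flux bounds and the choice of SciPy’s linear programming routine are all assumptions made for the illustration. At steady state the stoichiometric matrix S and the flux vector v must satisfy S v = 0, and one flux (here the final, “biomass-producing” one) is maximized under that constraint, in the manner of flux balance analysis.

```python
# Toy stoichiometric (steady-state) model solved as a linear program,
# in the spirit of flux balance analysis. Network and bounds are invented.
import numpy as np
from scipy.optimize import linprog

# Rows: metabolites A and B; columns: reactions
#   r1: uptake -> A,   r2: A -> B,   r3: B -> biomass (export)
S = np.array([[1, -1,  0],
              [0,  1, -1]])

c = np.array([0, 0, -1])                   # linprog minimizes, so maximize r3 via -r3
bounds = [(0, 10), (0, None), (0, None)]   # uptake flux capped at 10 arbitrary units

result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(result.x)   # optimal flux distribution: [10, 10, 10]
```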

In Canada, the Cyber-cell project plans to model the overall functioning of the machinery which, in the bacterium, includes its metabolism and its proliferation. The aim of the AfCS (Alliance for Cellular Signaling), which was launched in the USA, is to understand how signaling occurs in cells such as the B lymphocyte, the macrophage or the cardiac cell in response to different types of stress. The techniques that are used range from the identification of all signaling network proteins to the evaluation of the flow of circulating information and to the integration of the data acquired into theoretical models. The European nerve synapse project makes use of similar procedures, with its long-term hope of linking the functioning of nerve cells with the cognitive and behavioral functions of living beings. This is a sizable challenge. In fact, there is far from being a real consensus concerning the principle of a demarcation between, on the one hand, cognitive functions such as language or memory, which are located in precise zones of the brain, and which could be reduced to physicochemical processes, and, on the other hand, forms of reflective thought that are expressed through the creative imagination or by judgements concerning ethics or esthetics, the notion of personal responsibility, or even pictorial, architectural or musical beauty. Should we see the human soul as the programmer of a superb computer that never ceases to develop from the embryonic state onwards, like John Eccles (1903 ‑ 1997) and others, or should we admit, like Jean-Pierre Changeux (b. 1936), Stanislas Dehaene (b. 1965), Daniel Dennett (b. 1942) and others, that thought is not transcendent, and that it is intrinsically dependent on the brain, which is considered to be a neurochemical system, and thus look for the secret of the individuation of the human being in brain information storage mechanisms with retrocontrol loops associated with subtle neuron architectures, or, in short, refer to a sort of Turing machine? Whatever the case, in this domain, as in others, simple animal models are used in order to identify elementary processes that are able to explain easily-tested functions such as memory in anatomical and physiological terms. This is the case for the sea slug or sea hare (Chapter III‑6.1) which, despite its rudimentary cognitive capabilities, provides information that can be used to reconstruct higher cognitive functions, present in the brains of mammals. It is clear that the cognitive sciences have reached a stage in which they are emerging from their infancy (Chapter IV‑3.2). Now, ingenious computing methods and a basis for reflection that has spread beyond the confines of psychology and philosophy are available to them. They have set themselves the goal of producing an artificial intelligence, using ultrarapid computers as well as software that is able to model the operation of neural networks and to come close to the performance of human intelligence in terms of the power of its reactivity and its memorization. At present, many other biological systems are being subjected to multiparametric exploration, with the aim of producing models. This is the case, for example, with the program of the differentiation of certain white blood cells, the neutrophils (Chapter III‑2.2.4), from precursors located in the bone marrow, a differentiation that leads to the emergence of functions such as phagocytosis that are implemented in the fight against microbial infections 60.

In a domain closer to mechanical science, hydrodynamics, the digital simulation of the cardiovascular system has already made it possible to represent, in the form of equations and with a good approximation, the physical phenomena associated with the propagation of a wave in deformable arteries during a cardiac contraction [61].
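
One common way of putting such phenomena into equations, given here only as an illustration of the general form these models take and not as the particular formulation used in the work cited, is the one-dimensional reduced model of pulse-wave propagation in a compliant vessel:

```latex
% One-dimensional model of pulse-wave propagation in a deformable artery.
% A(x,t): cross-sectional area, Q(x,t): flow rate, p: pressure,
% \rho: blood density, K_r: friction coefficient, \beta, A_0: wall parameters.
\begin{align}
  \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0
  && \text{(conservation of mass)}\\
  \frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + \frac{A}{\rho}\,\frac{\partial p}{\partial x}
  &= -\,K_r\,\frac{Q}{A}
  && \text{(conservation of momentum)}\\
  p &= p_{\mathrm{ext}} + \beta\left(\sqrt{A}-\sqrt{A_0}\right)
  && \text{(elastic tube law)}
\end{align}
```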

In short, from a monoparametric approach that often began by being essentially and necessarily reductionist, the experimental method, applied to living beings, has become a “globalized”, or synthetic, multiparametric approach, the aim of which is to understand the dynamics of molecular interactions in defined biological systems. Making use of the data obtained, the hope is to use mathematical processing to simulate the overall functioning of a cell, organ or organism. This new paradigm of the experimental method (“systems biology” [62]) is not limited to a simple accumulation of observations concerning a given biological system and their abstraction in mathematical form. The originality of this approach is that it formulates predictions of changes in the behavior of a system as a function of the manipulation of parameters such as substrate concentration, the presence of inhibitors, and so on. The mathematical processing of experimental data, with a view to learning about the functioning of complex systems by modeling, is supported by the technosciences, particularly biocomputing. It draws not only on the enormous sum of knowledge accumulated about the structures and functions of living beings in the post-genomic era, but also on the notion that the life of a cell depends on multiple networks of molecular interactions and thousands of enzyme reactions located in its different organelles, as well as on the information that the cell receives from its environment. A first type of modeling is based on observations made or experiments carried out on an easy-to-study model system, from which laws are drawn up. The aim of this so-called “bottom-up” (or synthetic) procedure is therefore to put living beings into equations, that is, to represent them as virtual systems whose behavior, accessible by means of calculation, can be predicted as a function of modified parameters. In addition to the possibilities that are opened up in terms of a deeper understanding of physiological mechanisms, such virtual systems could be used for the design of new drugs or for the manufacture of economically valuable biomolecules.
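
As an illustration of this predictive use, the toy kinetic model sketched above can be perturbed by a single parameter, here a hypothetical competitive inhibitor acting on the second enzyme (the inhibition constant, like all the other numbers, is invented), and asked what the new steady state should be:

```python
# Sketch: predicting how a competitive inhibitor of the second enzyme
# shifts the steady state of the toy pathway above (invented numbers).
from scipy.integrate import solve_ivp

Vmax = {"v1": 1.0, "v2": 1.5, "v3": 1.2}
Km   = {"v1": 0.5, "v2": 0.3, "v3": 0.7}
X_ext, Ki = 2.0, 0.2        # external substrate, inhibition constant

def odes(t, y, I):
    A, B = y
    v1 = Vmax["v1"] * X_ext / (Km["v1"] + X_ext)
    # Competitive inhibition raises the apparent Km of enzyme 2.
    v2 = Vmax["v2"] * A / (Km["v2"] * (1 + I / Ki) + A)
    v3 = Vmax["v3"] * B / (Km["v3"] + B)
    return [v1 - v2, v2 - v3]

for I in (0.0, 0.5, 1.0):                  # inhibitor concentrations
    sol = solve_ivp(odes, (0, 500), [0.0, 0.0], args=(I,))
    A, B = sol.y[:, -1]
    print(f"[I] = {I:.1f} -> predicted steady state A = {A:.2f}, B = {B:.2f}")
```

The toy model predicts that, as long as the inhibited enzyme retains spare capacity, the flux and the downstream pool are maintained while the upstream intermediate accumulates; this is exactly the kind of testable, quantitative prediction referred to above.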

Figure IV.19

The post-genomic era: from the genome to the phenome

The diagram illustrates the different levels of complexity in the pathway that goes from all the genes together (genome) to all of the expressed characteristics (phenome) in the living being, passing via coding RNAs (transcriptome) and non-coding RNAs (non-coding RNAome), all the proteins (proteome), all addressing systems in the cell compartments (localisome) and all of the metabolic pathways (metabolome).

At a scientific meeting held in Sheffield, England, in January 2005 [63], on the theme Systems biology: will it work?, an argumentative discussion of the advantages as well as the disadvantages of an integrated, mathematized biology was useful in that it included a reminder that most of the parameters used in “systems biology” come from studies carried out in vitro on purified enzymes, and that it is not sufficient to know the value of the Michaelian parameters (Vmax and Km) in order to reach biological reality. In fact, in vivo, many enzymes show variations in activity that are hard to control, due to allosteric-type regulation or to interenzyme contacts; several enzymes of a metabolic pathway may interact to form a metabolon. By compacting enzymes that catalyze contiguous reactions of a metabolic pathway, a metabolon considerably increases the catalytic efficiency of that pathway. Another element of uncertainty arises from the protein density of the cell medium, and also from the fact that covalent modifications of enzymes can introduce a change in endocellular location (nucleus, organelles of the cytoplasm…). Nevertheless, an approximate approach could limit itself to dealing with biological systems in modular terms, i.e., to considering them as being made up of a number of black boxes, each black box, with an input and an output, containing a series of reactions that are processed mathematically as a whole. There is still a long way to go if we place ourselves on the cellular scale, but the end of the pathway seems even further away if we envisage the organism as a whole, taking into account the remote interactions between organs involving the interplay of chemical mediators.
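
As a reminder, the two Michaelian parameters mentioned here are those of the classical rate law measured in vitro on a purified enzyme:

```latex
% Michaelis-Menten rate law for an isolated enzyme studied in vitro.
% v: reaction rate, [S]: substrate concentration,
% V_{max}: maximal rate, K_m: Michaelis constant.
v \;=\; \frac{V_{\max}\,[S]}{K_m + [S]}
```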

The brain plays a critical role in the dialogue between different organs, and in the regulation of the energy equilibrium in higher animals. This equilibrium can be disturbed by fasting, by intense, prolonged muscular activity, or by an overabundant diet. The corrective response comes from a deep region of the brain, the hypothalamus, via the secretion of different types of peptides, some of which stimulate the appetite and others of which suppress it [64].

While taking into account the multitude of parameters that affect the complexity of living beings at the individual level, the theoretical approach to the study of cell function by modeling has the advantage of producing predictions and of providing information about the validity of conclusions and theories based on experiments that are old and accepted in the absence of contradictory elements. This was the case for the theory stating that the state of activation of a gene is determined only by the presence in its environment of transcription factors. Recent studies of the level of gene transcription in isolated cells have shown that there are probabilistic-type factors which mean that a given gene in a given cell can be activated at any moment. A review published in 2005 [65] sums up this subject. In this review, the authors use modeling to analyze the behavior of cells in the process of differentiation during embryogenesis. Their Darwinian model, which associates contingency and selectivity, competes advantageously with the determinist (or instructive) model, based on an all-or-nothing logic, that had been implicitly accepted up until then. The Darwinian model takes into account the occurrence of stochastic events at the level of gene expression, events that are partially linked to structural modifications of the chromatin that depend on covalent modifications of an epigenetic nature (phosphorylation, methylation…). By basing itself on the existence of fluctuations that arise by chance, associated with a self-regulation of gene expression, the model shows that during embryogenesis a cell has a choice either to differentiate into another cell type or to remain in its initial state. Differentiated cells stabilize their own phenotype and, in their surroundings, stimulate the proliferation of cells of other phenotypes. A harmonious equilibrium between these two processes is the necessary condition for the setting up of the steps that lead to the arrangement of different cell types during organogenesis, steps that take place in an apparently inescapable order in the absence of disturbances. A break in this equilibrium leads to an anarchical cell proliferation. Generally speaking, from the point of view of experimental science, the lesson that can be drawn from current modeling experiments is that the Bernardian determinism that has prevailed as the essential foundation stone of the methodology applied to the study of living beings may find itself qualified by the taking into account of stochastic phenomena. This is the case when the number of reacting molecules is low and the probability of stochastic events is non-negligible. The modeling of such systems requires recourse to a complex mathematical formalism. It remains true that determinist models for the simulation of the dynamics of living beings, represented by classical differential equations, are more-or-less valid when the number of reacting molecules involved is high and the reactions can be assumed to take place in a homogeneous medium.
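
To make the contrast concrete, here is a minimal Gillespie-type stochastic simulation of a generic two-state gene; the rate constants are invented and this toy model is not the model of the review cited. When molecule numbers are low, such event-by-event simulation replaces the deterministic differential equations.

```python
# Minimal Gillespie-type stochastic simulation of gene expression
# (generic toy model with invented rates, for illustration only).
# State: gene OFF/ON; protein copy number P.
import random

k_on, k_off = 0.05, 0.1    # gene activation / deactivation rates
k_syn, k_deg = 2.0, 0.02   # protein synthesis (gene ON) and degradation

def gillespie(t_end=1000.0, seed=1):
    rng = random.Random(seed)
    t, gene_on, P = 0.0, False, 0
    trajectory = [(t, P)]
    while t < t_end:
        # Propensities of the four possible events.
        a = [k_on if not gene_on else 0.0,   # gene switches ON
             k_off if gene_on else 0.0,      # gene switches OFF
             k_syn if gene_on else 0.0,      # protein synthesis
             k_deg * P]                      # protein degradation
        a_tot = sum(a)
        if a_tot == 0:
            break
        t += rng.expovariate(a_tot)          # time to next event
        r, cum, event = rng.random() * a_tot, 0.0, 0
        for i, ai in enumerate(a):           # choose which event fires
            cum += ai
            if r < cum:
                event = i
                break
        if event == 0:
            gene_on = True
        elif event == 1:
            gene_on = False
        elif event == 2:
            P += 1
        else:
            P -= 1
        trajectory.append((t, P))
    return trajectory

traj = gillespie()
print("Final protein copy number:", traj[-1][1])
# Different seeds give different trajectories: the same gene, in the same
# conditions, may be ON or OFF at a given moment, which is the stochastic
# point made in the text.
```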

Should “systems biology” be regarded as a resurgence of a physiology that had been somewhat neglected over the last few decades, reinvigorated by a salutary cross-fertilization between biologists and model-makers? In any case, this is the intention of the “physiome” project [66], which has recently been launched on an international scale. It is doubtless also due to this state of mind that a trend which had gone out of fashion, involving the simulation of the performance of living beings by very elaborate concrete models, robots, is being reborn.

4.3 Biorobots and Hybrid Robots

An immense distance has been covered in just over two centuries, since the time when Vaucanson presented to a marveling public automata in the form of human figures, moved by ingenious springs and cogs, which gave the illusion that their movements were controlled by an intelligence (Chapter II‑6.4).

In the last decades of the 20th century, considerable progress was made in the understanding of the operation of the nervous system and in the development of technologies in which miniaturized electronics have come to the aid of already high-performance micromechanics. The brain being considered as an information-processing machine, the aim is to understand the logic of this machine by means of simulations on computers and, based on the results obtained, to construct robots whose electrical circuits take their inspiration from the operation of animal neurons. These robots are called biorobots or animats. Insects have been chosen as a reference for the construction of such creatures because of the relative simplicity of their nervous systems: several hundred thousand neurons, in comparison with the billions of neurons present in mammals (100 billion in Man). The fly’s system of vision has been favored as a subject of study because of the possibility it offers of recording the electrical response of neurons that can be identified one by one. In the middle of the 1980s, in France, this inspired the pioneering work in biorobotics carried out by Nicolas Franceschini (b. 1942) and his team [67, 68] (Figure IV.20). Their objective was to study how an animal can avoid obstacles by means of its ocular perception and its movement-detecting neurons, the operation of which the team had just analyzed using microelectrodes and a microscope-telescope specially built for the purpose. The fly’s compound eye has 3 000 elementary units or ommatidia, each carrying eight light-receptor neurons. The electrical signals emitted by these neurons in response to captured light (at most a few dozen millivolts) are sent to subjacent neurons organized into three levels corresponding to the optic ganglia called the “lamina”, “medulla” and “lobula”. The lobula is a strategic decoding center which, because of the small number of neurons it contains (sixty), has been the subject of in-depth electrophysiological investigation. Each of the sixty neurons of the lobula operates as a signal integrator. The neurons of the lobula send their messages to motor neurons involved in the contraction of the small muscles that control the guidance and stabilization of the fly’s flight.

Figure IV.20

Neuromimetic robot based on the operation of the fly’s eye

(construction and photographs by N. Franceschini et al., Biorobotics Laboratory, Institute of Movement Science [UMR 6233, CNRS & University of the Mediterranean, Marseille], reproduced with permission)

A - Head of the blowfly, Calliphora, seen from the front, showing the two compound eyes with their multifaceted array. Each eye houses 40,000 photoreceptors that drive various image processors based on a few hundred thousand neurons.

B - “Elementary Motion Detector” (EMD) neuron and its evolution over fifteen years: on the left, first generation (1989), using Surface Mounted Device (SMD) technology, compared to a one franc coin from that period; on the right, the 2003 version of the highly-miniaturized hybrid (analog + digital) EMD circuit (mass 0.8 grams), compared with a one euro coin.

C - Autonomous vehicle (12 kg) able to move around in a field of obstacles that it does not know about in advance. Its vision is based on a genuine compound eye, whose circuits are inspired by those of the fly. It includes a network of 114 “motion detecting neurons”, transcribed electronically according to the principle analyzed in the fly’s eye by means of microelectrodes and a specially-constructed microscope-telescope. This network is arranged around a ring that is about thirty centimeters in diameter. The recently-constructed roboflies, Oscar and Octave, only weigh around one hundred grams.

D - Routing of the electronic components (resistors, capacitors, diodes and amplifiers, operating in their thousands) soldered onto the six-layer printed circuit-board that provides the connection between the sensors and the steering motor on board the autonomous mobile robot shown in (C).

Based on an exhaustive study of the neuron wiring of the fly’s eye, Franceschini and his colleagues were able to reconstruct a faceted artificial eye that retranscribes the light signals it receives optoelectronically. This artificial eye, the electronic components of which correspond to around one hundred movement detectors of the fly, was incorporated into the head of a robot. The recorded light signals were transmitted to the moving components of the robot.
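
The principle of an elementary motion detector can be illustrated by a textbook correlation-type scheme of the Hassenstein-Reichardt kind, sketched below in Python; this generic sketch only shows the idea of comparing a delayed copy of one photoreceptor signal with the direct signal of its neighbor, and is not a description of the particular circuit analyzed and rebuilt by Franceschini's team.

```python
# Generic correlation-type elementary motion detector (EMD), for
# illustration only; not the specific circuit of the robot described here.
# Two neighbouring photoreceptors see the same moving pattern with a delay;
# low-pass filtering one signal and multiplying it with the neighbour's
# signal, then subtracting the mirror-image product, yields a
# direction-selective response.
import math

def photoreceptor(signal_fn, t, offset):
    """Light intensity seen at time t by a receptor shifted by 'offset'."""
    return signal_fn(t - offset)

def emd_response(signal_fn, dt=0.001, t_end=2.0, tau=0.05, delay=0.02):
    lp1 = lp2 = 0.0                   # low-pass filter states
    alpha = dt / (tau + dt)
    response = 0.0
    t = 0.0
    while t < t_end:
        s1 = photoreceptor(signal_fn, t, 0.0)
        s2 = photoreceptor(signal_fn, t, delay)   # neighbouring receptor
        lp1 += alpha * (s1 - lp1)                 # delayed copy of s1
        lp2 += alpha * (s2 - lp2)                 # delayed copy of s2
        # Opponent subtraction of the two mirror-image correlators.
        response += (lp1 * s2 - lp2 * s1) * dt
        t += dt
    return response

# A moving grating seen by two receptors separated along the motion axis.
stimulus = lambda t: 1.0 + math.sin(2 * math.pi * 5 * t)
print("preferred direction :", emd_response(stimulus, delay=+0.02))
print("opposite direction  :", emd_response(stimulus, delay=-0.02))
# The sign of the integrated output reverses with the direction of motion,
# which is the property that makes such a detector usable for steering.
```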

Figure IV.20 illustrates the neuromimetic biorobofly constructed according to this principle. Completely autonomous thanks to its on-board power supply, this robot was able to move around at high speed (50 cm/s) in a cluttered area, avoiding the obstacles. This first “terrestrial” robofly, which was completed in 1991, was followed by several much lighter brothers and sisters: Fania, Oscar and Octave are aerial roboflies [69].

Constructed in 1999, Oscar is a captive robot that weighs around one hundred grams. Equipped with an eye that reproduces the retinal microscanning discovered by Franceschini in the fly, Oscar is able to rotate around a vertical axis by means of its two diametrically opposed propellers, and can thus orient its gaze towards an object. If this object moves, Oscar follows it with its eye, up to an angular speed comparable to the tracking speed of the human eye.

Produced in 2003, Octave is another aerial robofly, able not only to take off and distinguish a relief, but also to land automatically and to react appropriately to a headwind in a turbulent atmosphere. On board, it has an electronic visuomotor self-regulation system, the operation of which is based on the signal-processing operations that, in the insect, carry out the automatic pilot functions [70]. The age of biorobotics, in which robots take their inspiration from animals, has only just begun [71]. If specimens are still so rare, this is because behaviors for which we have a good understanding of the underlying neuronal bases are also rare. At the time of writing, a robot rat named Psikharpax, with artificial muscles and a vision system that enables it to perceive objects in three-dimensional space, is being developed at the University of Paris VI.

Almost in the realm of science fiction, we find hybrid robots, obtained by hybridization of the living and the non-living. This is the case for the hybrid robot produced by Japanese researchers, based on the silkworm moth. The nervous control of this insect is distributed throughout its body: if its head is cut off, it continues to fly, which gave rise to the idea of replacing the head with an electronic transistor system [72]. Using a remote measurement device, it was possible to explore certain behavioral aspects of the insect. Although the construction of hybrid robots may raise ethical objections, such technology is capable of giving rise to spectacular applications in the domain of prostheses. The neurological prostheses of the future will nevertheless require that contact be made between living neurons and the electronic chips that are able to improve the inadequate processing of the physiological signal. Such a contact was produced recently in a German laboratory directed by Peter Fromherz (b. 1942) [73]. A small network of snail neurons, chosen because of their large size, was cultured on the surface of a silicon chip. A signal emitted at one location on the chip was able to be transmitted to another location via the synaptic connection between two neurons [73] (Figure IV.21).

Figure IV.21

Production of an electrical interaction between snail brain neurons and a transistor

(reproduced from G. Zeck and P. Fromherz (2001) “Noninvasive neuroelectronic interfacing with synaptically connected snail neurons immobilized on a semiconductor chip”, Proceedings of the National Academy of Sciences, USA, vol. 98, pp. 10457‑10462, National Academy of Sciences, USA, with the permission of P. Fromherz and of PNAS; photograph by Max Planck Institute of Biochemistry/Peter Fromherz)

A - Principle of the assembly of a neuroelectronic system with transistor/neuron coupling. Upon stimulation by the transistor, neuron a of the snail brain generates an electrical current that spreads to neuron b, which in turn interacts with the transistor.

B - Experimental production. (1) and (2): electron micrographs showing the immobilization of a neuron cell body held inside a fence of six polyimide posts (white scale bar: 20 µm). (3): electron micrograph showing the dendritic outgrowths of the neurons beyond the polyimide barriers after a couple of days of growth (white scale bar: 100 µm).

On the molecular scale, mitochondrial ATPase, or ATP synthase, with a size of around ten nanometers (Chapter III‑6.2.1), was recently used for the manufacture of a biorobot that made its mark in the media as the smallest known rotating molecular motor. This membrane enzyme catalyzes the reversible reaction ATP + H2O ⇌ ADP + Pi. The enzyme therefore has a double function, hydrolysis and synthesis, and for this reason it is called ATPase or ATP synthase depending on the physiological context in which it is involved.

In mitochondria that oxidize metabolites, the enzyme operates as an ATP synthase: it catalyzes the synthesis of ATP coupled to oxidation reactions. In the absence of respiration or of oxidizable substrates, the enzyme operates as an ATPase: it catalyzes the hydrolysis of ATP. For ease of language, the enzyme will be designated here by the term ATPase. It should be remembered that mitochondrial ATPase includes two sectors: a hydrophobic sector, Fo, characterized as a proton channel located inside the mitochondrial membrane, and a hydrophilic sector, F1, carrying the catalytic subunits, which are arranged as if on a turret (see Figure III.18C). Fo contains the two master parts of the ATPase motor, i.e., a rotor comprising an assembly of around ten so-called “c” subunits and a stator corresponding to the “a” subunit. The “c” subunit assembly is attached to the “γ” subunit of the catalytic sector F1, which thus rotates with it.

In 1961, the British biochemist Peter Mitchell (1920 ‑ 1992) showed that oxidative phosphorylation in the mitochondria is associated with a transmembrane transfer of protons. The mechanism involved is said to be chemiosmotic. The most important experiment involved an almost serendipitous observation, carried out with a simple pH meter. When a current of oxygen was passed through a suspension of mitochondria in an unbuffered saline medium, in the absence of ADP and phosphate, an instantaneous acidification of the extramitochondrial medium occurred, as shown by the pH meter electrode immersed in this medium. It was concluded that the sudden switch from anaerobiosis to aerobiosis, i.e., the start-up of respiration, is correlated with an ejection of protons from the mitochondrial matrix to the extramitochondrial medium. This fact was subsequently linked with several others, the whole leading to the formulation of the chemiosmotic theory. Briefly, mitochondrial respiration generates a vectorial movement of protons from the interior to the exterior of the mitochondrion. Because of this, a proton concentration difference is established across the mitochondrial membrane. The electrochemical potential that is created in this way is used by the mitochondrial ATPase to synthesize ATP from ADP and mineral phosphate. This process involves two correlated events:

  • ▶ return movement of protons towards the inside of the mitochondrion across the Fo sector of the ATPase;

  • ▶ rotation of the assembly of c subunits and the γ subunit that is interdependent with it.

We have therefore moved from electrical to mechanical energy. During its rotational movement, the γ subunit establishes contacts with the three catalytic subunits of the F1 sector, in succession. One after the other, each of the three catalytic subunits in contact with the γ subunit undergoes a change in the conformation of its active site, which is at the origin of the synthesis of ATP. In the absence of mitochondrial respiration, the reverse process occurs. The ATP is hydrolyzed into ADP and mineral phosphate, and the energy released at each of the three catalytic subunits is used to rotate the γ subunit in the reverse direction to that which accompanies the synthesis of ATP.
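
In quantitative terms, the driving force in question is usually expressed as the proton-motive force; the standard relation, given here as general background with typical textbook values rather than figures from the experiments discussed, is:

```latex
% Proton-motive force across the inner mitochondrial membrane.
% \Delta\psi: electrical potential difference, \Delta\mathrm{pH}: pH difference,
% R: gas constant, T: temperature, F: Faraday constant.
\Delta p \;=\; \Delta\psi \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
\;\approx\; \Delta\psi \;-\; 59\ \mathrm{mV}\times\Delta\mathrm{pH}
\quad\text{(at 25 $^{\circ}$C)},
\qquad
\Delta G_{\mathrm{H}^{+}} \;=\; -\,F\,\Delta p \ \text{ per mole of protons translocated.}
```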

The existence of a rotational movement of mitochondrial ATPase, which had been suggested on the basis of biochemical arguments [74] and of structural data [75], was authenticated by Masasuke Yoshida and his co-workers in Japan in 1997, thanks to an imaging technique [76]. In a first step, the molecular system was simplified by being limited to the catalytic F1 sector of the enzyme. A methodological trick was employed: genetic engineering was used to modify the α and β subunits of this sector by attaching polyhistidine chains to them. Because of the strong affinity between polyhistidine and nickel ions, the F1 sector α and β subunits were immobilized on a support covered with nickel ions (carried by an organic molecule). An actin filament labeled with a fluorescent ligand was attached to the end of the F1 sector γ subunit. This assembly made it possible, under a fluorescence microscope, to visualize a rotational movement of the actin arm carried by the γ subunit upon the addition of ATP and its hydrolysis into ADP and mineral phosphate. A similar rotational movement of the γ subunit carrying a metal microbar was observed by an American research group [77].

Remarkably, in 2004, after having fixed a magnetic microbead onto the γ subunit, the Japanese researchers [78] demonstrated the synthesis of ATP from ADP and mineral phosphate by rotating the γ subunit by means of the rotation of the magnetic bead, induced by magnets. Thus, the experimental coupling of a mechanical force and a chemical synthesis was demonstrated. In 2005, the Japanese research team [79] succeeded in photographing under the microscope the rotational movement of the enzyme powered by ATP, this time looking at the entire ATPase complex, F1Fo. After having attached a gold microbead onto the “c” subunits of the Fo sector, to act as a probe, the researchers were able to confirm that the rotational movement of these subunits depended on the hydrolysis of ATP into ADP and phosphate (Figure IV.22).

The whole of the mitochondrial ATPase (ATP synthase) does, in fact, function as a rotational molecular motor powered by a proton flow, rather like an industrial rotational motor powered by a fossil fuel or by electricity. The analogy is a striking one: the γ subunit of the enzyme corresponds to the motor driveshaft and the “c” subunits correspond to the motor itself. Because of its association with non-living structures, for example metal bars or gold beads, which are carried along in the rotational movement of the enzyme, it is possible to speak of molecular biorobots. This domain, in which nanomachines use macromolecules from the living world, has only just opened up, but its future is full of promise.
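
A back-of-the-envelope calculation suggests why this rotary motor is often described as working close to thermodynamic reversibility: the mechanical work done per 120° step, at the torques reported in the single-molecule literature, is of the same order as the free energy of hydrolysis of one ATP molecule under cellular conditions. The numerical values below are typical literature figures quoted only to illustrate the reasoning; they are not data from the experiments just described.

```latex
% Order-of-magnitude energetics of one 120-degree step of the F1 motor
% (torque of about 40 pN.nm and Delta G_ATP of about 50 kJ/mol are
% typical literature values, quoted here as assumptions).
W_{120^{\circ}} \;=\; \tau\,\Delta\theta
\;\approx\; 40\ \mathrm{pN\,nm}\times\frac{2\pi}{3}
\;\approx\; 8\times10^{-20}\ \mathrm{J},
\qquad
\Delta G_{\mathrm{ATP}} \;\approx\; \frac{50\ \mathrm{kJ\,mol^{-1}}}{N_{A}}
\;\approx\; 8\times10^{-20}\ \mathrm{J}.
```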

Figure IV.22

Mitochondrial ATPase, rotational nanomachine

(reproduced from H. Ueno, T. Suzuki, K. Kinosita and M. Yoshida (2005) “ATP-driven stepwise rotation of Fo-F1 ATP synthase”, Proceedings of the National Academy of Sciences, USA, vol. 102, pp. 1333‑1338, National Academy of Sciences, USA, with the permission of M. Yoshida and of PNAS)

Demonstration of a rotational movement of F1Fo mitochondrial ATPase (ATP synthase), induced by ATP. ATPase or ATP synthase (an enzyme with reversible catalysis that hydrolyzes or synthesizes ATP) has two sectors (see Figure III.18). The membrane sector, Fo, comprises an assembly of a dozen so-called c (rotor) subunits and an a (stator) subunit. The other, extra-membrane sector, F1, is catalytic. It comprises three catalytic β subunits and three non-catalytic α subunits arranged in a ring, in alternating order. At the center of the ring is the γ subunit, which is attached to the c subunits of the Fo sector. Subunits δ, ε and b stabilize the whole of the molecular complex. In the experiment illustrated in this figure, the α and β subunits of the F1 sector of the enzyme have been genetically modified to include polyhistidine chains (His-tag, artificial ligand). Through the interaction of these chains with nickel ions (linked to an organic molecule) covering a solid support, the α and β subunits are immobilized. In addition, a gold microbead is fixed onto the ring of Fo sector c subunits by means of a chemical device (streptavidin molecule, artificial ligand). Following the addition of ATP, rotation of the microbead attached to the ring of Fo sector c subunits is observed by dark-field microscopy. This rotation is dependent on (and at the same time an indicator of) the rotation of the c subunits, itself driven by the rotation of the γ subunit in contact with the catalytic β subunits. Note the ejection of protons. When the enzyme functions as an ATP synthase, the proton movement takes place in the opposite direction.

The story of the scientific progress made with respect to the mechanisms of oxidative phosphorylation via the functioning of mitochondrial ATPase, from Mitchell’s experiment with the pH meter to the manufacture of Yoshida’s biorobots, is an exemplary one. It is typical of the way in which a mode of thought evolves over time, from a primary discovery resulting from serendipity or from an experiment “to see what happens”, leading to the proposal of a mechanism, to a carefully programmed project which, through its inventive technical skill, demonstrates the validity of the proposed mechanism and, in addition, its future utilitarian value.

Nowadays, certain biotechnologists dream of being able to “synthesize life” [80], in the form of cells that are able to imitate the performance of living cells. The concept of the “Lab-in-a-cell” is coming to the fore [81, 82]. For this, however, it would be necessary to design an artificial cell that is an authentic replica of a living cell, benefiting from all the attributes of a living cell, which is not achievable at the moment. Thus, the current aim of nanobiotechnology is limited to planning the construction of artificial cells that are relatively simple both in composition and in function, for example a microvesicle bounded by a lipid membrane, containing a system of protein synthesis expressed from a short sequence of DNA, as well as a system of ATP synthesis able to supply the energy necessary for this protein synthesis.

5 The Design and Meaning of Words in the Experimental Process

“Progress in biology is possibly mainly tributary to the drawing up of concepts or principles […]. In the process of elaborating concepts, which marks scientific progress in biology, there is sometimes a crucial step, when we realise that a more-or-less technical term that we had previously considered to cover a given concept, in fact covers a mixture of two (or more) concepts.”

Ernst Mayr

Translated from a French Translation entitled

History of Biology. Diversity, evolution and heredity - 1989

On the margins of modeling, and in contrast to the mathematized systems that make up theoretical biology, particularly in silico biology, concepts are mental representations, often image-filled and idealized ones, of fundamental mechanisms deduced from experimental results. From the imaginary domain of the probable, they extrapolate constructions of the mind that are in phase with the facts and the experimental data, within a reflective projection that gives them their meaning and makes certain predictions possible.

There are premonitory concepts. This was the case for the concept of the reflex arc, which associates movement with sensation. This concept was already present in the ideas of Descartes (Chapter II‑3.4), but it took more than a century before the theory of the existence of a reflex arc was supported by Bell and Magendie’s demonstration of the existence of relay centers for sensory and motor nerves in the spinal cord (Chapter III‑1). There have been premonitory concepts that, although they were demolished at the time they were first proposed, were shown to be completely accurate a few decades later. In the middle of the 19th century, the German pathologist Jacob Henle (1809 ‑ 1885) needed a healthy dose of imagination and audacity in order to oppose, on the basis of experimental evidence, the theory of the “miasma” (Chapter III‑4). We may ask ourselves whether or not history is currently repeating itself in the case of the spongiform encephalopathies that affect humans and animals, for which, according to the thesis of Stanley Prusiner (b. 1942) [83], the prion, an infectious protein, is responsible.

Other, evocative concepts hold the keys that open doors to domains that are unknown but potentially rich in information. Thus the double-helix DNA structure proposed by Crick and Watson, based on the complementarity of the adenine-thymine and cytosine-guanine bases (Chapter IV‑1.1.1), led to the concept of DNA replication, with the reconstruction of a double strand identical to the original double strand. The concept of DNA replication spurred Matthew Meselson (b. 1930) and Franklin Stahl (b. 1929) to develop an experimental protocol based on the labeling of the DNA nucleotide bases of the enterobacterium E. coli with a heavy isotope of nitrogen, 15N, and on the differentiation of single-stranded DNA molecules in the process of synthesis by measurement of their density, as analyzed by centrifugation in cesium chloride gradients. In the same vein, Jacob and Monod’s discovery of regulatory genes (Chapter IV‑1.1.1) gave rise to the concept of the operon, which, in the bacterium, defines a genetic unit comprising structural genes and regulatory genes. The concept of the regulation of gene expression, extended to higher eukaryotes, makes it possible to explain the phenomenon of differentiation into cells with specific activities (muscle cells, nerve cells, epithelial cells…) by the silencing of certain genes and the activation of others. Within the framework of bioenergetics, the chemiosmotic theory put forward by Mitchell in order to explain the coupling of mitochondrial respiration with ATP synthesis (Chapter IV‑4.3) gave rise to the concepts of transmembrane transport of metabolites and of vectorial metabolism.

There are also generalizing concepts that carry a unifying virtue within them. One such is the concept of compartmentation. The cell is no longer considered to be a bag of enzymes, as used to be the case. It is now considered to be a compartmented structure in which each type of compartment corresponds to a type of organelle delimited by a membrane and characterized by specific functions. Thus, because of the genetic material present in it, the nucleus of the cell holds the information necessary for the manufacture of proteins. The mitochondria, the cell’s power plants, are in charge of oxidizing the products of cell catabolism and using the resulting energy for the synthesis of ATP from ADP and mineral phosphate. The lysosomes are the garbage collectors of the cell. Among the functions carried out by peroxisomes is the partial breakdown of very-long-chain fatty acids. The endoplasmic reticulum and the Golgi apparatus are involved in the maturation and secretion of proteins. The ribosomes represent the machinery upon which messenger RNAs are displayed in order to be decoded into proteins. A sign of the extreme sophistication of this setup is that the membranes of the endocellular compartments are not sealed partition walls. They contain proteins that act as selective transporters of metabolites or as highly specific ion channels, allowing the exchange of messages throughout the cell. Thus each organelle, informed of the condition of the others, is able to adjust its own activity to ensure the greatest harmony of the whole. This conditioned compartmentation at the cell level may be compared to the socialization of human communities. While endocellular organelles are compartments delimited by membranes, there are also non-membrane-bound compartments in the cell, such as protein complexes in which two, three or even more proteins are closely linked. Often, these are enzymes that catalyze contiguous reactions of a metabolic pathway. Their compaction into a complex known as a metabolon increases the efficiency of the flow of metabolites by facilitating its channeling.

Concepts evolve, often adjusting their representations according to accumulated knowledge. A good example of this is the evolution of the concept of the gene since its formulation at the beginning of the 20th century. The term “genetics” was created in 1906 by the English naturalist William Bateson (1861 ‑ 1926). The term “gene” was introduced three years later by the Dane Wilhelm Johannsen. This term designated a principle which, in the chromosomes of the fertilized egg, and in an intentionally vague manner, was supposed to have an influence on the phenotype of the progeny. During the same period, the term “locus” emerged from the experiments carried out by the American Thomas Hunt Morgan on the drosophila, a locus being defined as a region of a chromosome which, when altered by a mutation, leads to a modification of the phenotype of the living organism. Based on cross-breeding experiments carried out on hundreds of drosophila mutants, Morgan and his co-workers drew up the first genetic maps. By chance, the salivary glands of the drosophila have a particular characteristic: the nuclei of their cells contain giant chromosomes, called polytene chromosomes, which result from the association of a hundred replicate copies of chromosomes and which, after staining, are visible under the optical microscope. On these chromosomes, it is possible to distinguish colored bands separated by clear bands. It was observed that specific mutations had specific effects on the arrangement and number of these bands. The material contained in the bands was therefore the site of mutations. In the middle of the 1930s, the listing of more than 3 500 bands made it possible to construct a cytological map that was already highly detailed. The concept of the gene as the material basis of inheritance took root. The sporadic mutagenic effect of X-radiation in the drosophila, shown by the geneticist and biophysicist Hermann Muller (1890 ‑ 1967), led the Austrian physicist Erwin Schrödinger to ask what sporadic event, at the level of a target of a few dozen atoms, could determine a mutation. He postulated that the target is located in the chromatin of the chromosomes, organized as an aperiodic crystal. The chemical nature of this target was identified as DNA, following bacterial transformation experiments (Avery, MacLeod and McCarty, 1944) and experiments concerning bacterial infection by the bacteriophage (Hershey and Chase, 1952) (Chapter IV‑1.1.1). This is how the idea that gene = DNA was born.

The simple and reassuring idea of “one gene, one enzyme”, deduced from the mutation experiments carried out by Beadle and Tatum on the mold Neurospora crassa (Chapter III‑6.1), had only a limited lifetime. A first stumbling block appeared when it was shown that the activity of a gene, and in consequence its contribution to the phenotype, depends on nucleic acid elements outside the gene. The definition of the term “gene” was then extended to include promoter and regulatory sequences. In the case of the lac operon of Escherichia coli, these sequences are located just upstream of the site where transcription begins. In eukaryotes, however, a regulatory sequence may be distant from the gene that is to be transcribed, and sometimes it may be involved in the regulation of several genes (Chapter IV‑1.1.1). In the 1970s, the idea of the existence of the mosaic gene in eukaryotes appeared. A gene was now thought of as an assembly of several exons which, in the chromosome, are originally separated by introns. The alternative splicing of these pieces of genes gives rise to numerous possibilities for reassembly, i.e., many messages coding for many different proteins. Thus, however useful the concept of the gene has been in its ability to generate discussion and to provoke experimentation concerning the molecular machinery responsible for the transmission of hereditary characteristics, the term itself has not ceased to be the subject of readjustments since the time it was first formulated.

Certain concepts are matched with metaphors. While some metaphorical concepts, particularly those that make use of images designed to grab the imagination and to be easy to understand, tend to take liberties with the realities of living beings, they can also shed light on unsuspected mechanisms in sectors that have been neglected. Metaphorical concepts are not a recent fashion. It should be remembered that in his Passions of the Soul (1649), Descartes, when asked “how limbs can be moved by objects of the senses and by the mind without the help of the soul,” responds that this takes place “in the same way as the movement of a watch is produced only by the force of its spring and the arrangement of its cogs.” Later on, with Lavoisier’s clear vision of the vital role of oxygen, and his comparison of respiration with combustion, the concept of the chemistry of life, combined with that of bioenergetics, came to the fore and was at the heart of studies of metabolism. Chemical reactions that liberate and absorb heat were substituted for the cogs of Cartesian mechanics. The second half of the 20th century saw the birth and development of the concept of the program, a concept with computer-technology connotations, which was destined to explain the phenomena of inheritance. This concept began to take shape from the moment it became certain that, in its nucleotide sequence, DNA contains the information necessary for the construction of the protein material of cells. For a certain period of time, the passion for molecular genetics eclipsed the interest that had previously been given to metabolic chemistry. The power of a metaphorical concept may be measured by its effect in pushing scientific research in particular directions, with the results this has on society. Thus, during the 17th and 18th centuries, the study of human pathology was impregnated with a strong iatromechanical current. Physiological chemistry and its corollary, pathological chemistry, which emerged as disciplines in their own right in the 19th century and achieved full expansion in the 20th century, are our inheritance from Lavoisier and the concept of the chemistry of life. At the turn of the 21st century, taking the concept of the program as a basis, the fantastic advances made in the deciphering of genomes, the comparison of their structures and the listing of their anomalies have opened up new horizons in domains as varied as the developmental and evolutionary sciences and the pathology of hereditary diseases.

Discussion about concepts necessarily leads to a brief discussion of scientific semantics, as shown by the few examples given in the previous pages. As we have just seen, the word gene that was put forward by Johannsen around one century ago did not have the same meaning at that time as it has now, a meaning that still remains fluid. The GMO, an acronym meaning Genetically Modified Organism, which has been the subject of vehement diatribes over the last few years, becomes much less of an object of passion if it is considered within the context of evolution. After all, for the last two to three billion years, living organisms have been genetically modified constantly by spontaneous mutations, which is why the human beings that we are today are able to discuss them!

The term cloning is another example of a semantic misunderstanding that leads to inaccurate interpretation and arouses the passions. The primary meaning of the term cloning is the multiplication and identical reproduction of a living cell. The simplest and most unambiguous example is that of bacterial cloning, in which a bacterial cell, by its multiplication in a nutritive medium, produces millions of cells that are identical to the original cell. The term animal reproductive cloning does not carry exactly the same semantic weight. It should be remembered that, in eukaryotes, the preliminary act of the cloning procedure involves the injection of the nucleus (with 2n chromosomes) of a somatic cell into an enucleated oocyte (Chapter IV‑2.3.2; see also Chapter IV‑6.1). The somatic cell nucleus, by providing its genetic equipment, gives the being that will develop in the uterus a phenotype that is practically identical to that of the somatic cell donor, but nevertheless not completely identical, as the cytoplasm of the enucleated ovum, with its mitochondria, provides a small but non-negligible fraction of genes, the mitochondrial genes. As for therapeutic cloning (in the absence of uterine implantation), this is used for the manufacture of differentiated cells that may be grafted into the individual who donated the original somatic cell, with no immune-related rejection occurring. This is non-reproductive cloning. The passionate argument that has arisen because the term cloning is bandied about in an ill-considered fashion illustrates the confusion that can result from a lack of precision in the use of certain terms with a high level of media impact.

6 The Experimental Method, Understanding of Living Beings and Society

“All the major problems of the relations between society and science lie in the same area. When the scientist is told that he must be more responsible for his effects on society, it is the applications of science that are referred to […]. No government has the right to decide on the truth of scientific principles, nor to prescribe in any way the character of the questions investigated.”

Richard Feynman

The meaning of it all - 1963

The progress of science is linked to that of civilization. It is in keeping with the state of mind, the beliefs, the lifestyle and the thought patterns of societies. In Ancient Greece, where manual work was considered to be servile, science remained essentially theoretical, confined to logic and dialectics, and strongly attached to questions of philosophy. The birth of experimental science in the 16th and 17th centuries went hand-in-hand with the rehabilitation of manual work.

The technical side dominates in modern biology, which seeks to solve problems concerning the “how” rather than to address philosophical problems concerning the “why”. As Ian Hacking (b. 1936) says in Representing and Intervening (1983), nowadays engineering, and not theorizing, is the greatest proof of scientific realism, which leads to the minimization of philosophical thought. In A Skeptical Biochemist (1992), the Polish-born American biochemist Joseph Fruton (1912 ‑ 2007) emphasizes the contrast between the 19th century and the first half of the 20th century, when eminent scientists were still interested in the ideas of the professional philosophers and historians of the sciences concerning the progress of experimental research, and the end of the 20th century, when philosophy and the experimental sciences affected to ignore one another. This is doubtless partly because the history of biology has become the history of biotechnologies, to such an extent that, according to some, the objects being explored are so familiar that they are now part of the life of society. In Pandora’s Hope (1999), Bruno Latour (b. 1947) considers that the current confrontation between subject and object, in which the researcher-subject explores the structure and function of the object, is being transformed into a human-nonhuman dialogue in which the nonhuman object becomes “socialized”. Taking yeast as an example, Latour writes that it has been “working for millennia in the brewing industry, but now it works in a network of thirty laboratories where its genome is mapped, humanized and socialized like a code, a book, or a program of action that is compatible with our ways of coding, counting and reading […]. Non-humans have become automatons, admittedly without rights, but much more complex than material entities.” Latour visualizes the human-nonhuman associations in the form of collectives that are organized into strata implementing the technical, the political, the social, the ethical, the ecological… The technosciences correspond to one of these strata, the sociotechnical stratum, which is directly linked to the stratum of political ecology. In the same spirit, the Belgian philosopher Gilbert Hottois (b. 1946), in his Philosophies of the Sciences, Philosophies of Techniques (2004), remarks that “laboratories produce things that go off to live their lives in society and in Nature.” Thus, bacteria, yeasts or genetically modified plants are able to produce drugs such as insulin, growth hormone and vaccines for human medicine. These drugs become part of, and indispensable to, life in society. They are evaluated according to their market value by the companies that patent, manufacture and sell them, and according to the comfort they bring to the patients to whom they are administered. The financial management that results from their consumption becomes a worry for those responsible for public health, while their manufacture by specialized companies generates industrial activity and economic growth, which may be measured according to how fashionable they are and how well they sell.

For a long time, society, while benefiting from scientific progress, remained indifferent to the experimental method, that is to say, to the way in which knowledge progresses. In the last decades of the 20th century, society became aware, via information concerning the occasionally demonized exploits of genetic engineering, that science can “take liberties” with the human being. Populations were well informed about the effects that genetic engineering could have on the mortality rates of pathologies such as cancer and diabetes, or on degenerative illnesses of the nervous system, and about the closeness of possible solutions. However, they were also warned about the risks to which science was exposing humankind. Remembering certain tragic episodes concerning HIV-contaminated blood transfusions, growth hormone and mad cow disease, and certain Cassandra-like predictions, such as a catastrophic epidemic of spongiform encephalopathy that has happily yet to appear, society shows reservations when the media inform its members of new feats of modern technology. Political authorities, for their part, afraid of potential problems, tend to follow the precautionary principle, which in fact hides a fear of risk. However, evaluating risk involves not being afraid of it, but understanding it in a lucid and courageous fashion. Informed by the media, which often resort to sensationalism, citizens increasingly call into question whether certain practices involving the biosciences, such as cloning, certain mercantile transactions, such as the patenting of gene sequences, or even experimentation on live animals, are well-founded.

6.1 Human Cloning Censured by Codes of Bioethics

“The problem of experimentation on Man is no longer a simple problem of technique. It is a problem of value. From the moment that biology concerns Man no longer simply as a problem, but as instrumental to the search for solutions concerning him, the question arises of deciding whether the price of knowledge is such that the subject of the knowledge is able to consent to become the object of his or her own knowledge. We have no difficulty here in recognizing the still open debate concerning Man as a means or an end; an object or a person. This is to say that Human Biology does not contain in and of itself the answer to questions concerning its nature and its significance.”

Georges Canguilhem

Knowledge of Life - 1965

Written at a time when people were far from imagining how molecular biology was going to expand, the prophetic words of Georges Canguilhem (1904 ‑ 1995) have maintained their philosophical validity. Manipulation of the human embryo, whether this involves its creation by cloning or the modification of its genetic inheritance, obviously leads to the need to consider the societal, religious and political points that arise from the domain of bioethics and are a reflection of the period in which we are living. Until recently, advances made in biology left moralists indifferent. This ceased to be the case when scientific experimentation began to look at the human embryo with a view to utilitarian ends in the health domain. The specter of cloning was brandished without any clear distinction being made between reproductive cloning and therapeutic cloning. Biology became demonized. However, as the biologist Pierre Chambon (b. 1931) said in an interview in the French journal Biofutur: “In absolute terms, biology is unable to tell us whether the cloning of a human being is moral or immoral, it simply tells us whether it is biologically possible.”

The birth of Dolly the sheep in 1997 (Chapter IV‑2.3.2) triggered a virulent debate because, now that the cloning of an animal had become possible, that of a human being became envisageable. The media sensationalized this debate all the more in that it was exacerbated by the controversy concerning GMOs (Chapter IV‑2.1). The Dolly affair became a problem of society. Up until then, the biosciences had been content to try to understand the mechanisms that explain the functions of living beings, but now, with the advent of GMOs and cloning, it became obvious that a forbidden barrier had been crossed and that Man had the power not only to transform himself but also to invent himself. Faced with this desacralization of Nature, the need arose for philosophical reflection. This was given the name of bioethics, taken from the title of the book Bioethics: Bridge to the Future, written by the American biologist Van Rensselaer Potter (1911 ‑ 2001) in 1971.

The term bioethics covers philosophical considerations that range from the biosphere to the human person. Bioethics tries to give a wider meaning to the moral codes which, in human societies, depend on ancestral traditions. It aims to prescribe that which is desirable, according to the Kantian maxim of the categorical imperative. In his What is Bioethics? (2004), the Belgian philosopher Gilbert Hottois reminds us that bioethics stands above traditional morals, the latter being a set of norms that are most often spontaneously respected as good habits, without any critical reflection being involved, whereas bioethics arises out of critical thought, analysis, discussion and the evaluation of established mores. Over the last few years, the problems targeted by bioethics have moved towards today’s burning issues. Human cloning is an example.

While assertions of the transcendence of Man in Nature may lead to human reproductive cloning being considered a crime, strictly scientific considerations lead to an emphasis on the lack of responsibility shown by a few zealots, given the hazards involved in animal cloning: the need to use a large number of oocytes in order to achieve success, the very low viability of the cloned embryos, and the development of serious functional anomalies in the clones that survive. Even supposing that scientific progress will one day overcome these difficulties, human reproductive cloning will come up against an insurmountable obstacle, the cloned subject’s fear of finding that he or she is identical to the relative from whom his or her genetic inheritance comes. After all, the notion of the manipulation of the human ovule with the aim of serial reproduction has often haunted science fiction. In Brave New World (1932), Aldous Huxley (1894 ‑ 1963) gives an apocalyptic vision of the budding of human eggs that produce hundreds of identical twins, conditioned into classes and subclasses while being raised in jars, depending on the quality of the nutritive substances they are given. In The Artificial Uterus (2005), Henri Atlan (b. 1931) predicts that the raising of human fetuses in jars could well become an alternative to uterine gestation in the distant future. Let it be understood that human reproductive cloning, which is no longer part of the domain of science fiction since it has become feasible, must be considered reprehensible because it goes beyond the limits of reason and is a denial of human transcendence. Man as subject cannot be considered as an object.

The problem of therapeutic cloning is quite different, although it also gives rise to reticence and prohibition, because the demarcation between therapeutic and reproductive cloning depends mainly on whether or not a cloned embryo is implanted in a uterus. While, at the time of writing, therapeutic cloning has been prohibited in France, Germany and other countries, it is tolerated in Great Britain. In the USA, the prohibition only applies to publicly-financed research, while each state has its own legislation, which is relatively flexible.

The objective of therapeutic cloning is to provide patients with tissues that are derived from their own cells, and are therefore immunocompatible and able to be grafted without any risk of rejection (Chapter IV‑2.3.2). It is based on the removal of somatic cells from the subject who is to receive the graft and the transfer of the nuclei of these cells into enucleated oocytes. The stem cells that are obtained after the first divisions are stimulated using appropriate growth factors. Depending on the factor used, the stem cells differentiate to form a type of tissue (hepatic, muscular, nerve…) that can be used as a graft. Such a procedure may be envisaged for patients who have suffered a serious, invalidating trauma, for example a section of the spinal cord. A graft of immunocompatible nerve cells might make it possible to re-establish nerve continuity. A similar type of therapy has been considered for Parkinson’s disease, the cause of which is a degeneration of certain cells of the encephalon (Chapter IV‑2.3.1). Given the hopes raised by the possibility of such therapies, and the fact that, after all, therapeutic cloning is the equivalent of an autograft, even if the ways in which the graft is obtained are somewhat tortuous, the demonization and rejection of such practices should be reconsidered, calmly and coolly.

Another option for therapeutic cloning is the correction of mutations identified in the mitochondrial genome of a woman wishing to have children. It is, in fact, the mother’s ovum that provides the fertilized egg with its complement of mitochondria that are indispensable for its viability. The manipulation involves inserting the nucleus of a fertilized ovum from the mother, obtained by artificial insemination, into an enucleated oocyte taken from a woman who is not suffering from the mitochondrial defect. The cytoplasm of the enucleated oocyte provides the stock of functional mitochondria that are indispensable to normal cell function in the future embryo.

In a domain of bioethics in which rational objectivity comes up against deliberately technophobic religious and cultural considerations, it is useful to remember certain legal and legislative paradoxes. Thus, in France, after having been considered a criminal act subject to severe repression by the law up until 1975, abortion before the end of the third month of pregnancy became not only authorized but also protected by law. It is interesting to note that in the 13th century, Thomas Aquinas, a Doctor of the Church, had acknowledged that a fetus only becomes “animated” by the implantation of the soul by Holy Will in the third month after fertilization. Another subject to be considered is pre-implantation genetic diagnosis (PGD), in which human embryos that have been fertilized in vitro are sorted in order to find those that are without defects, a practice that verges on a deviation in the direction of eugenics. Nevertheless, PGD is the basis of a practice that is either already legalized or in the process of being so in several European countries: the creation of so-called designer babies. A typical example is that of a designer baby arising from an embryo whose immune profile matches that of an older sibling who is suffering from leukemia. In this case, there is good reason to hope that a graft of immunocompatible blood cells from the designer baby into the sibling who is suffering from leukemia will save the latter from death.

Out of the disharmony of opinions arising from cultural tradition, religious conviction or simply scientific pragmatism, the American philosopher H. Tristram Engelhardt (b. 1941), in The Foundations of Bioethics (1996), proposes a lay bioethics that is based upon the principle of permission. Lay bioethics advocates tolerance while admitting that this tolerance in no way prevents anyone from taking up a personal position; it means that each human being has a moral sensitivity as well as the ability to reason and to choose within defined limits of non-harmfulness and of justice. The individual is free to modify his or her destiny, or to manipulate his or her nature by genetic interventions because, adds Engelhardt, “there is no lay moral foundation to prohibit such an intervention.”

6.2 The Patentability Of Living Beings

When a researcher, or the research organization to which the researcher belongs, files a patent application for an invention with a patent office, it is necessary to demonstrate the novel and utilitarian nature of this invention. If the patent is granted, this gives the person or body that filed it the exclusive right to exploit the invention over a pre-determined period of time, generally 20 years, which is a means of protection, or, if desired, to allow others to exploit the invention by issuing a license to do so. In the domain of living beings, there has sometimes been confusion between invention and discovery. In 1991, Craig Venter, known for his participation in the sequencing of the human genome, filed, in the name of the NIH (National Institutes of Health) at Bethesda (USA), a patent application covering the sequences of 2 700 fragments of complementary DNA (cDNA) called ESTs (Expressed Sequence Tags), obtained by reverse transcription from human brain messenger RNAs. The application specified that ESTs could be used as probes to characterize genes that are potentially involved in neurological ailments. The resulting outcry led the NIH to withdraw its patent application.

In fact, the patenting of living beings has a long history, going back to the patents filed by Louis Pasteur, in France and then, in 1873, in the USA, for the use in brewing of a yeast culture that was free from pathogenic bacteria. From this historical perspective, the case of Ananda Chakrabarty (b. 1938) set a legal precedent. In 1972, Chakrabarty filed an application with the US patent office for a patent relating to a Pseudomonas-type bacterium which, by genetic modification, had acquired the ability to digest crude oil. His application was refused. After an appeal and many legal battles, the United States Supreme Court overturned the refusal in 1980, the basis of the judgement being that any modified microorganism is a product of human ingenuity and has a specific name, characteristics and use.

Thus, from 1980 onwards, the arrival of an era of patents derived from genetic engineering was indicative of how this discipline was growing. In December of that year, Stanley Cohen and Herbert Boyer, acting on behalf of Stanford University, patented a nucleic acid chimera comprising a recombinant DNA carried by a vector. In 1982, a patent concerning the growth hormone gene was awarded to the University of San Francisco. In 1984, the University of California at Berkeley obtained a patent for the human insulin gene. In 1985, the American company Pioneer Hi-Bred succeeded in patenting a variety of corn in which genetic modification had led to an increased synthesis of tryptophan, an amino acid that is indispensable in animal feed. In 1988, the Genentech company acquired a patent for the gene coding for human gamma interferon. This was followed in Japan by a patent for the gene coding for beta interferon. In the same year, Harvard University patented the OncoMouse, a transgenic mouse whose susceptibility to cancer is greatly increased. After this, several species of transgenic animals were patented for utilitarian purposes, such as the production of human alpha-1‑antitrypsin from the milk of transgenic goats, used for the treatment of cystic fibrosis. The frenetic patenting of living beings has even reached the domain of natural products from the plant world of tropical regions, whose immensely varied essences are full of pharmacological potential and could yield drugs of considerable commercial value. Here we return to the problem of the patenting of genetically modified, cultivatable plants (GMPs) (Chapter IV‑2.1). Thus, the experimental method, the principle of which is the acquisition of pure knowledge, finds itself led astray in its applications. Whatever the motives given, particularly for manipulations that give rise to the manufacture of marketable products, the patenting of genomes for mercantile ends shows the regrettable, but unfortunately inevitable, direction in which the very spirit of a science, molecular biology, which half a century ago aspired to be at the heart of an understanding of living beings, has drifted.

6.3 Animal Experimentation versus The Fight For Animal Rights

The suffering of animals that are experimented upon raises a moral problem. The end of the 19th century saw large-scale demonstrations against vivisection and repeated demands for its abolition. Today, the call for the abolition of animal experimentation has found renewed vigor, without any really coherent basis. This desire to stop experimentation on animals ignores the imperatives of contemporary medicine, which must meet the challenge of pathologies whose increasing incidence is worrying, such as cardiovascular diseases, diabetes, cancer, and the degenerative illnesses that are linked with aging or are of genetic origin. It is true that animal experimentation inevitably raises questions. Are the stakes involved in a particular experiment, in terms of the acquisition of new knowledge, worth the suffering of the animal used in that experiment? Is it not necessary to ensure that the experimental protocol is well documented, that it is not redundant, and that it has been preceded, where possible, by studies carried out on cells in culture? It is easy to see the size of the methodological chasm that separates contemporary physiology from that of the time of Claude Bernard, when cell culture techniques were not yet available, when the main instrument was the scalpel, and when the researcher, using his or her imagination and creativity, had to develop specific protocols able to validate or refute a working hypothesis. Each period in history operates in its own way, according to its moral laws and its technical capabilities. The bloody operations carried out by Magendie and by Claude Bernard in the 19th century, which were tolerated at that time despite criticisms from antivivisectionists, would not be permitted today. Nevertheless, it is true that the physiologists of the 19th century, by means of the results of their experiments, wove a tapestry of new knowledge on which contemporary biologists would build, and without which the level of understanding of modern science would be much lower than it is.

Animal experimentation remains indispensable in many areas of physiological investigation, in genomics, in toxicology and in pharmacology. It is a precondition for clinical trials of any new drug, being used to test the drug’s efficacy, its metabolism and any toxicity. However, not all data arising from animal experimentation can be extrapolated to Man. The margin of uncertainty can be reduced by means of comparative trials on several animal species. Because of their phylogenetic proximity to Man, primates may seem to be the solution for experimentation prior to the application of a drug in Man. This was the case for the development of a vaccine against hepatitis B. It has been proposed that the grafting of stem cells in Man should be preceded by experimentation in apes, in order to ensure the absence of tumorization over the long term. However, the researcher is confronted with a dilemma: should he or she ensure the safety of Man with respect to possible deleterious effects, or respond to ethical demands that recognize the very great genomic similarities between Man and the chimpanzee?

A consideration of cloning, patenting and animal experimentation illustrates the excesses to which the experimental method is exposed in domains where political authorities consider themselves able to legislate. Administrative decisions, often made in the absence of any dialogue with scientific authorities, can have serious consequences. Thus, under the pretext of strict obedience to principles of bioethics that are a matter of tradition and, while certainly respectable, nevertheless arguable, and under the pretext of an irrational, unconsidered fear of the risk involved in certain experimental practices, unaccompanied by any intelligent evaluation of this risk, research, which until recently took place in a motivating atmosphere of liberty, may over the long term find itself weighed down with a highly prejudicial handicap and a limitless sense of discouragement.

7 The Place Of The Scientific Researcher In The Changing Role Of Biotechnology

In the 17th and 18th centuries, experimental research, which was still in an emergent phase, was mainly artisanal, and in the hands of rare scholars. It took form during the 19th century in the West, particularly actively in Germany, and became fully operational in the 20th century, under the aegis of governmental authorities, with the creation of Institutes, the programmed recruitment of researchers and the allocation of renewable budgets. Modern science, based on the principles of the experimental method, came to the fore much later in the East than in the West. The globalization of knowledge means that at present experimental science, in all domains, including that of the life sciences, has spread throughout the world, with even those countries that had fallen relatively far behind in these domains because of their isolation catching up rapidly. Nevertheless, it is true that the progress of the experimental sciences in the USA and in the United Kingdom has been distinguished by the pragmatic management of these countries’ science policies, based on the excellence and the high degree of autonomy of their Universities and Research Institutes with respect to recruitment and the choice of subjects of study. The efficacy of this policy in the life sciences may be judged by the number of researchers who have won Nobel prizes since the Second World War (at the time of writing, more than 80 in the USA and twenty or so in Great Britain, as opposed to only 4 in France).

In France, research on living beings is carried out in the laboratories of Universities, in Institutes connected with Higher Education and in laboratories run by large organizations such as the National Scientific Research Center (CNRS), the National Institute of Health and Medical Research (INSERM), the National Institute of Agronomic Research (INRA), the Atomic Energy Commission (CEA), the National Institute of Research in Computer Processing and Automation (INRIA), the National Center for Space Studies (CNES) and the French Institute of Research on the Seas and Oceans (IFREMER). Equivalent bodies exist in countries other than France, some of them being institutes dedicated solely to research, and some being university laboratories that combine research and teaching. At the beginning of the 20th century, the function of researcher was most often associated with that of a professor occupying a chair at a University, surrounded by a few assistants, the professor directing the research work in his area of specialization. Now, within the space of a few decades, the status of the researcher has been greatly modified. Today we talk of research careers classified according to level of expertise and technicality. Management, that is, the supervision of career paths and the control of financing, is carried out by an administration that is itself highly hierarchical. The scientific process has undergone a metamorphosis, shown by changes in the behavior of researchers, not only within the institutions in which they work but also in their relationships with the media, the political sphere and society. The teaching of the life sciences needs to take this into account.

7.1 Fundamental Research Faced with the Metamorphosis in the Experimental Method

“Long ago, there was a time when scientists recounted the exact circumstances of their discoveries, without shame, even when their recital showed up the fragility of their forecasts or an indecent collaboration on the part of every bit of luck. Such times are past, and the researchers of today often like to make us believe that they only find what they are looking for. The thousands of pages of Pasteur’s lab books provide an opportune reminder to us (and to program-makers or impatient users) that it is just as difficult to ask a question as to answer it, that a scientific discovery often occurs after a long, winding path, and that rather than following the fashion, it is preferable to follow one’s ideas, particularly if they are good ones, and are in advance of the fashion.”

Jean Jacques

Molecular Dissymmetry, in “Pasteur, Workbooks of a Scholar” - 1995

Current technological progress, the accumulation of scientific knowledge, the institutionalization of public research and many other factors are disrupting a ritual of the experimental process that had survived until the middle of the 20th century, and even beyond. The experimental life sciences of the 21st century will necessarily see their objectives and procedures remodeled.

7.1.1 A new strategy in the organization of research

Faced as it is with increasingly tough international competition, the scientific community is also subject to restrictions in terms of operation and forward planning. An organization into small teams of a few researchers gathered around a boss, working in friendly interaction, is increasingly giving way to large groupings that sometimes seem like consortiums. Focused on research subjects that are deemed to be “cost-effective”, these superstructures are encouraged, or even imposed, in the sadly illusory hope that they will lead to greater efficacy. The person in charge of such a large group is taken up with everyday management tasks and with maintaining good relations with the administrative bodies on which his or her organization’s survival depends. He or she may become distanced from experimentation and forget the intellectual motivations that in the past caused his or her competence to be recognized. It should be emphasized that the secret of future successes lies in situations where young researchers are in direct contact with their bosses, and where friendly interaction with a recognized master teaches the apprentice researcher how to learn, how to think and how to experiment in a critical fashion.

Preoccupied by the rapid expansion of the scientific population, accompanied by the creation of laboratories whose operation necessarily requires financing, often on a large scale, political authorities, giving way to the requirements of media-fed public opinion, are interfering more and more, via administrative relays, in the control of the objectives of experimental research. Short-term objectives, considered to be “visible”, are favored. A priori, the viability of a project is judged according to the scientific context of the period and its impact on society, insofar as the project addresses health problems with a high degree of media coverage (cancer, degenerative illnesses, viral infections…), and often in agreement with a consensus that avoids going against the orthodoxy of the moment. This leads to a rigid management of projects that are financed and controlled according to objectives fixed in advance, objectives that are all the more easily accepted by state authorities when they are somewhat fantastic in character. However, fundamental research proceeds from a playful activity, and for this reason its efficacy depends on the passion of the researcher for the problem that he or she is studying. In contrast to what is believed by the narrow-minded, the effectiveness of a researcher in terms of discoveries depends upon the liberty that is given to this researcher, assuming, of course, that this liberty is underpinned by criteria of confidence such as the researcher’s scientific past, his or her motivation, and judgements made concerning the researcher by impartial peers. It should not be forgotten that the determination of the three-dimensional structure of hemoglobin by Max Perutz (Chapter III‑6.2.1) took around twenty years of solitary, uninterrupted and untiring labor. The theoretical and technical tricks that led to this success helped to open up the domain of the structures of giant macromolecules, several dozen kilodaltons in size, which no-one had dared study before.

Anyone who uses the experimental method realizes that while fundamental research must be organized, it cannot be scheduled. Such a person knows that the pathways to discovery are convoluted, and that an inexplicable observation that appears unexpectedly during an experiment can sometimes, if the researcher is sufficiently perspicacious, be the beginning of an adventure that leads to a discovery. It was to just such a convoluted path that the Belgian biologist Christian de Duve (b. 1917) alluded in his speech when he received the Nobel prize for Physiology or Medicine in 1974. After working at the University of Saint Louis in the USA, de Duve, who had taken up a post at the University of Louvain, Belgium, decided to look at a research theme that had received a great deal of media coverage: diabetes and insulin. It was while working on subcellular fractions obtained from ground rat liver, and analyzing some of their enzyme activities, that he was surprised to find, in one of them, enriched in mitochondria, a phosphatase activity that, paradoxically, increased with time, while the enzyme activities specific to the mitochondria declined. This activity belonged to organelles that were contaminating the mitochondria. Dropping all research on diabetes, de Duve set out to identify and characterize these unknown organelles. He discovered that they were involved in the breakdown (lysis) of molecules that are undesired by the cell and, for this reason, he called them lysosomes. The discovery of lysosomes helped to open a new chapter in cell biology and to attribute a molecular cause to diseases with serious prognoses whose etiology had remained a mystery up until then. These diseases were given the label lysosomal diseases. They result from the absence of a lysosomal enzyme that is responsible for the breakdown of a given metabolite. The accumulation of this non-degraded metabolite in the lysosomes leads to cell malfunction, which causes the lysosomal disease. As de Duve said jokingly, if he had carefully followed the experimental process laid down in his diabetes research project, and if he had not given way to the temptation of “playing hooky”, he would never have mounted the podium in Stockholm. In the same way, Henri-Géry Hers (1923 ‑ 2008), a cell pathologist at the internationally renowned Louvain school, remarked in an article published in the journal Médecine/Sciences: “I believe we would obtain maximum value for the money devoted to research if we were willing to distribute it to those who have been shown to be productive, according to their needs, and without asking them for a program.” Hers concluded, in a tone that was deliberately playful, but thought-provoking, “such a simple system would lead to unemployment for a large number of administrators, which is why I suspect that it will never be adopted.”

Research has its own set of ethics, driven by anticonformity and the creative imagination, capable of shaking up firmly anchored ways of thinking and established hierarchies, and of leaving researchers the freedom to express themselves and to experiment off the beaten paths. As Eccles says in Evolution of the Brain and Creation of the Conscience, it is important to distinguish between intelligence and imagination. Intelligence is measured according to the rapidity and depth of understanding and clearness of expression. It may be measured and even given a numerical value. The same is not true for the imagination, a more subtle, unmeasurable phenomenon that cannot be learned. The imagination is one of the levers that is able to lift the boulder that hides scientific truth. The imagination is the ultimate weapon of research, the one that shakes up the knowledge acquired by the intelligence. Nevertheless, the imagination must be tempered by a good critical sense that is able to perceive potential sources of artifacts, both in sophisticated instruments that act as so many black boxes from which ready-made information emerges and in genetic or chemical methods of cell exploration whose specificity must be carefully checked.

The benefits that can sometimes be gained from exploratory research far removed from dogma rooted in sterilizing tradition, the way in which knowledge progresses, most often by moving away from orthodoxy, and the way discoveries appear unexpectedly on the fringes of carefully assembled projects: all of these points are matters for reflection for those in power in the worlds of politics, economics and industry.

7.1.2 A new way of circulating knowledge

Publication is an essential tool for communicating scientific knowledge, and is the criterion by which committees in charge of evaluating the creativity of a researcher make their judgement. In order to have meaning, a publication must provide information that is sufficiently innovative with respect to parallel work carried out in other laboratories. Here again, media coverage has quietly infiltrated the scene. Its role is all the more perverse in that the rating of a publication is estimated according to its impact index, or, roughly speaking, the renown of the scientific journal in which it is published. Curiously, it has happened that articles that would later be considered to be of primary importance have been rejected by highly prestigious journals, simply because the facts mentioned in the article and the conclusions drawn did not coincide with the orthodox opinions of the period and the traditionalist spirit of the journal’s editorial committee. This was the case for an article which the biochemist Hans Krebs (1900 ‑ 1980) submitted to the British journal Nature in 1937. In this article Krebs described a series of experiments showing that an endocellular metabolite, pyruvate, a product of glycolysis, is completely degraded during a cycle of enzyme reactions. This degradation cycle would later be recognized as the central pivot of intermediary metabolism. Called upon to judge revolutionary scientific findings, and unable to perceive their importance, Nature’s editorial committee rejected the article. Krebs then sent his article to a journal with a relatively restricted audience, Enzymologia. It was accepted and published within two months. The importance of the concept put forward in the article ensured that its author gained international recognition, leading to his winning the Nobel prize for Physiology or Medicine in 1953.
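For concreteness, the impact index mentioned above is usually taken to be the journal impact factor; in its most common two-year form (an assumption here, since the text does not specify which variant is meant), it is computed as

\[
\mathrm{IF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where \(C_{Y}(X)\) is the number of citations received in year \(Y\) by items the journal published in year \(X\), and \(N_{X}\) is the number of citable items the journal published in year \(X\). The formula makes plain why the index measures the recent visibility of a journal rather than the intrinsic quality of any single article it contains.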

For the researcher, publication is a way of making his or her work known. It is also the way in which the researcher learns about the work of others. While the rhythm at which publications in the life sciences appeared increased only slightly in the first half of the 20th century, the second half of that century saw a great acceleration in this rhythm, leading to a proliferation of journals and books that is difficult to manage. It has been estimated that in the last thirty years the volume of publications in the biological domain has increased five-fold; in the preceding twenty years it had already doubled.

This accumulation of publications makes it harder for the researcher to judge the quality of the huge mass of published articles, even in the highly targeted domains that lie within his or her area of expertise. The researcher, therefore, will deliberately choose a particular article according to the prestige of the journal in which it is published, which is not an inviolable criterion of quality. In addition, any judgement concerning the pertinence of a scientific article requires a dissection of the subtleties of the methodology, the soundness of the experimental protocol and the validity of the results, by means of a careful examination of tables and graphs, and, finally, of the logic of the discussion. This demanding yet absolutely necessary requirement limits the number of articles that are likely to be screened. However, this is not the worst fault of publication today; there is another problem that is much more worrying. Many documentation centers have reacted to this inflation in the scientific press by equipping themselves with computing facilities that are able to find, in data banks, articles selected on the basis of a keyword index, and to display them on screens. While acknowledging that this constitutes an inescapable change in the transmission of scientific know-how, it should be recognized that in browsing through the pages of a high-quality scientific journal, it is possible to come across an article containing an innovative idea or a useful technique, an advantage that is less available when using the on-line system of scientific publication that is most prevalent nowadays.
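As an illustration of the kind of keyword-based retrieval described above, here is a minimal sketch in Python; the records, field names and query terms are invented for illustration only, and real documentation systems rely on far more elaborate indexing and ranking.

```python
# Toy keyword-based retrieval over a small set of bibliographic records.
# All records and keywords are hypothetical examples.
from typing import Dict, List

articles: List[Dict[str, str]] = [
    {"title": "Lysosomal enzymes in rat liver fractions",
     "keywords": "lysosome, phosphatase, cell fractionation"},
    {"title": "A cycle of reactions degrading pyruvate",
     "keywords": "pyruvate, metabolism, enzyme cycle"},
]

def search(records: List[Dict[str, str]], query: str) -> List[str]:
    """Return the titles of records whose keyword field contains the query term."""
    term = query.lower()
    return [r["title"] for r in records if term in r["keywords"].lower()]

if __name__ == "__main__":
    print(search(articles, "lysosome"))
    # -> ['Lysosomal enzymes in rat liver fractions']
```

The sketch also makes the drawback discussed above visible: only articles whose keyword field happens to contain the chosen term are ever shown, whereas browsing a journal exposes the reader to articles he or she would never have thought to query.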

Mention should also be made of the requirement to publish frequently and within short time frames, for reasons of competitiveness, when aspiring to jobs or promotions, or even simply to recognition; this requirement is another factor that is prejudicial to fundamental research. It is the cause of worrying excesses, such as hastily published, non-reproducible experiments, or even the falsification of experimental results, occasionally in a context of considerable media coverage. Although such practices, which are the exception rather than the rule, are rapidly detected and condemned in a scientific culture where information circulates freely, the publicity that they generate, which reaches society at large via the media, leads to an overall discrediting of experimental research.

At present, one of the most noticeable trends in scientific publication is that of collectivism. While, in the 19th century, scientific articles were usually published in the name of a single author, occasionally two authors, and very rarely more than two, nowadays publications are often co-authored by several people, and when the work involves the analysis of structures or the sequencing of genomes, several dozen researchers may be co-authors. From being the work of individuals, research has become collective. In domains whose complexity requires a wide range of techniques, which may extend from physics to genetics, the hybridization of specific areas of expertise is certainly indispensable, and this requires the collaboration on a particular project of researchers who are sometimes physically remote from one another. The downside for the researcher, particularly a young one, is that this requires him or her to give up some individuality and creativity. Both collectivism and inflation in scientific publication are facts that are an integral part of contemporary science, facts which reflect an irreversible trend that it would be difficult to counteract.

Over the last few years, scientific publication has been subject to a form of restraint, in that certain “sensitive” data in the domain of molecular biology might be used for the manufacture of biological weapons in a form of terrorism known as bioterrorism. Thus, the means of synthesizing viruses de novo (influenza virus, poliomyelitis virus) and the possibility of modifying their tropism by “directed molecular evolution” (a change from a sexual tropism to a respiratory tropism for the AIDS virus) have been the subject of publications in prestigious journals. Given sufficient means, terrorists with the necessary expertise could well make use of such data in order to carry out malicious actions with catastrophic consequences.

7.1.3 A new horizon for cross-disciplinarity

In order to please a public that is eager for progress and the sensational, politicians favor, by means of targeted financing, the types of organization that appeal to their sensibilities, such as technological platforms. While recognizing that such platforms are now an integral part of the landscape of research on living beings, and that they must therefore be taken into account, and while acknowledging that projects which implement the latest technologies in different domains need to be federated, it is nonetheless vital not to underestimate the potential creativity of small groups of researchers, a point that was made by one of the greatest of contemporary biologists, Arthur Kornberg (1918 ‑ 2007), winner of the Nobel prize for Physiology or Medicine, in a speech given in 1997: “As I view the steady growth of collective science and big science, the greatest danger I see is a dampening of individual creativity and reversion to the old politics – the inevitable local politics that infects every group and institution.”

However, conscious of the metamorphosis that is occurring in the experimental method, and faced with a particularly inventive and all-conquering technology, fundamental research in the life sciences must come to terms with it. A century ago, fundamental research and technological research interacted all the more directly because they were both in their infancy. This is no longer the case. The management of the ever-increasing amount of knowledge in the life sciences, and the degree of sophistication achieved by bioengineering techniques and instruments, are widening a gap that makes dialogue increasingly laborious. Yet dialogue appears to be a guarantee of future progress. The solution can only come from an increase in cross-disciplinarity, which should begin with university teaching and the establishment of a recruitment policy that encourages the cohabitation of talents from different educational backgrounds in the same laboratory. Fortified by such hybrid expertise, while maintaining its share of originality and liberty in the choice of problems to be studied, fundamental research on living beings can only be enriched by a marriage of reason with biotechnology. Convinced of the necessity for such a marriage, Stanley Fields, the inventor of the double hybrid method (Chapter IV‑4.1), in an article entitled “The interplay of Biology and Technology” (Proceedings of the National Academy of Sciences, USA, 2001, vol. 98, pp. 10051‑10054), concludes: “It is at the interfaces of biology and other sciences that many of the future discoveries will be made, at the interfaces of biology and engineering that these discoveries will come to be exploited, and at the interfaces of biology and ethics and law that their consequences for society will be decided.”

The desired dialogue between biology and technology also implies the breaking down of the barriers that too often isolate fundamental research from so-called applied research, and the facilitation of consistent interaction between discoveries made in academic institutions and their application for utilitarian ends in private companies. This is where the twin demons of money and power raise their heads. Already, at the turn of the 1980s, A. Bartlett Giamatti (1938 ‑ 1989), who was then president of Yale University in the USA, commenting on American university policies, spoke of a “ballet of antagonisms” between, on the one hand, commercial companies interested in the rapid cost-effectiveness of any new therapeutic advance and, on the other hand, non-profit university laboratories. Recently, James J. Duderstadt (b. 1942), President Emeritus of the University of Michigan, argued that the University is a “counter-hierarchical” organism. In fact, its members are free to carry out the research that pleases them and to think in the ways that they wish to think, at least within an academic norm that considers itself free from the constraints dictated by private interest groups. Until recently, such behavior was considered a sort of ethic arising out of the University’s conscience and dignity. The crumbling away of this ethic in the final decades of the 20th century coincided with the rise of biotechnologies and the large-scale filing, by researchers in the public sector, of patents relating to molecular genetics techniques that could be applied to the manipulation of living beings. The intrusion of the American private sector into public research laboratories, in the form of collaborations involving the transfer of “sensitive” information from the public to the private sphere, has become such a worrying problem that drastic control measures have had to be taken. Within this context, the American federal government, in February 2005, issued a certain number of prohibitions targeting the National Institutes of Health (NIH) at Bethesda, particularly with respect to the remuneration of researchers for services rendered to industry. These positions call for reflection concerning the place currently given to fundamental research in Universities. Without arguing against the efficacy of major research institutes, it is nevertheless necessary to remember the part played by the University in this domain. The University is not only the place where knowledge, both in its current state of advancement and in its historical development, is transmitted; it is also the place where knowledge must be created by fundamental research.

For the last few decades, under pressure from state policies, and also as a consequence of improvements in social status, the world of the university has opened up to a wider public, leading to an influx of students that is sometimes so enormous that the task of teaching them has become overwhelming. Because of this, the share of their time that university researchers can, in practice, devote to research has shrunk. This situation is highly prejudicial to the mission to innovate, which should be a priority. It is, in fact, during their university studies that the thought patterns of young students are forged, by contact with teachers who not only instruct them, but also educate them by inspiring in them a motivation and an enthusiasm that gives rise to hope. How could this be so if the teaching faculty did not itself participate in scientific creation?

7.2 The Experimental Method Taught And Discussed

“What can teaching, ex cathedra, do to guide the researcher? Nothing, obviously. The researcher is trained in the laboratory. And the first stroke of genius on the part of a future researcher is to find a good boss. Such a find will open up the royal road to success. The road will be opened – but the researcher must travel along it. A researcher may be taught many things. He or she can become familiar with techniques and with equipment. She or he can be assigned a problem to resolve. However, what is essential for the researcher is to know how to understand relationships between phenomena that seem unrelated, and to be able to progress from the particular to the general. A boss may develop such qualities in a gifted young researcher, but intuition is a gift; it cannot be taught.”

André Lwoff

Games and Combats - 1981

While the Bernardian-style experimental method, based on a working hypothesis aroused by an observation, followed by the implementation of an experimental protocol, is still extant in the life sciences, and while “serendipity” is still at the origin of great discoveries, “big science”, underpinned by sophisticated biocomputing or bioinformatics procedures, is intruding more and more, with genomics and proteomics not far behind. The methods and instruments developed by the biotechnosciences have led to profound modifications in the ways that the structures and functions of living beings are investigated. For example, by varying multiple parameters at the same time on DNA chips or protein chips, the experimenter is able to ask questions that lead to grouped all-or-nothing answers (Chapter IV‑1.1.3). In combinatorial chemistry, screening makes it possible to detect a molecule that is active against a given pathology from among a multitude of molecules (Chapter IV‑3.4). The mathematical simulation of metabolic networks or of signaling chains is already well under way (Chapter IV‑4.2); a minimal sketch of such a simulation is given after this paragraph. Given this new technological outlook and the hope that it can provide rapid solutions to health problems subject to considerable media coverage, the teaching of biology in universities must not be limited to a description of current advances, no matter how brilliant and promising they may be. This teaching should return to its origins, be a reminder of history, and should not hesitate to use examples to illustrate how a major discovery can arise from a long period of wandering in the wilderness. In practical terms, while being conscious of the extraordinary complexity of living nature, and carefully avoiding the dangers of simplification, it is important to remember that the reductionist method was a necessary path to an understanding of the integrated, modelized biology that is emerging nowadays. At present, certain people call reductionism naive, but this is only the case insofar as we have faith in recent advances in integrated biology. With this in mind, it should be noted that the deciphering of the protein synthesis mechanism in prokaryotic microorganisms (Chapter IV‑1.1.1) was, along with the discovery of the genetic code, a jumping-off point for an inventory of similar, but noticeably more sophisticated, mechanisms in eukaryotic organisms. The reductionist “one gene, one enzyme” dogma, formulated on the basis of Beadle and Tatum’s experiments on the mold Neurospora crassa (Chapter III‑6.1), was a necessary prerequisite to a considerably more elaborate understanding of the relationship between the genotype and the phenotype. The way in which the nucleic acid and protein units of the tobacco mosaic virus spontaneously organize themselves (Chapter III‑7.3) acted as a basis for thought concerning the self-organization of macromolecular complexes in the cell. These few examples underline the fact that it is difficult to comprehend the scientific research process if we only refer to experiments carried out in the present, and if we do not have a clear idea not only of the way in which hypotheses, even false ones, were once formulated, but also of the way in which experimental work, which may have led to failures, was once carried out; in brief, if we do not look back at the past. Let us add that it is occasionally good for us to show some humility when we take the trouble to examine the past.
Thus, the processes involved in the phagocytosis of bacteria by innate immune cells (neutrophils, macrophages), which are today studied in the greatest detail with particularly refined technical facilities, had already been perceived more than a century ago by Metchnikoff, and even analyzed, admittedly with the clumsy means at his disposal, but with such accuracy that none of the conclusions formulated at that time have yet been disproved (Chapters III‑2.2.4 and III‑6.2.5).
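To make the mathematical simulation of signaling chains mentioned above more concrete, here is a minimal sketch in Python of a two-step activation cascade treated as a pair of ordinary differential equations; the species, rate constants and initial conditions are invented for illustration and do not come from the text.

```python
# Toy simulation of a two-step signaling cascade: a signal S activates A,
# and active A in turn activates B. All values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0, 0.5      # activation rate constants (arbitrary units)
d1, d2 = 0.3, 0.3      # deactivation rate constants
S = 1.0                # constant upstream signal

def cascade(t, y):
    a, b = y           # fractions of A and B in the active state
    da = k1 * S * (1 - a) - d1 * a
    db = k2 * a * (1 - b) - d2 * b
    return [da, db]

sol = solve_ivp(cascade, t_span=(0.0, 20.0), y0=[0.0, 0.0],
                t_eval=np.linspace(0.0, 20.0, 201))

# Steady-state levels reached by the two components of the cascade.
print(f"A_final = {sol.y[0, -1]:.3f}, B_final = {sol.y[1, -1]:.3f}")
```

Even this caricature illustrates the multiparametric character of the approach: changing k1, d1 or S reshapes the entire response of the chain, rather than a single measurement obtained by varying one parameter at a time.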

The experimental method applied to the Life Sciences, the history of its birth and of its development, the way in which it is regarded by political and societal authorities, and, finally, the dependencies that are developing at present between the technosciences, human medicine and the different branches of the economic sector, all of these aspects should be covered by university teaching that includes not only the pure sciences, but also the human, political and economic sciences, as well as philosophy.

The student should not be saturated with book-learning; he or she should be taught to reason, to imagine and to criticize, not to accumulate knowledge in an indigestible catalogue but to ask questions about the way in which certain, carefully chosen, items of knowledge have been acquired, and not to accept science in its current state without knowing what it was like in the past. He or she should understand the pathways of thought that led to the refutation of dogmas that had been established and taught as truths, and should favor experimentation, with its risks and questions, rather than well-smoothed, abstract theoretical presentations without rough edges. These should be the principles of a teaching designed to open up young minds to creativity.

In Anglo-Saxon countries, the worlds of industry and research that welcome the graduate manage to communicate with one another; in France, these worlds ignore one another, or at least remain reserved, a situation which is prejudicial from the economic point of view. If we look at the pharmaceutical industry in particular, we see that only half a century ago the pharmacopeia was limited to plant extracts or active agents isolated from these plants, with antibiotics quietly beginning to make their appearance. In the last decades of the 20th century, a great technological leap forward was made, with completely new methods in bioengineering, combinatorial chemistry, and the identification of therapeutic targets in macromolecules, and this created a hiatus that severely handicapped countries that were unprepared for it. France, whose training in fundamental biological research is out of phase with that of the Anglo-Saxon countries, fell behind, and continues to lag behind, a situation that is prejudicial for its economy. The remedy for this does not lie in incantatory speeches. It requires a voluntarist policy for the management of experimental research. Generally speaking, it should no longer be tolerated that the major engineering schools in France, which recruit the scientific intellectual elite through competitive exams that select for intelligence rather than imagination, are unable to require of their students an end-of-course thesis that would authenticate their engineering degree. In contrast to other countries, in France only a small percentage of engineers have received doctoral training or presented a thesis before entering their careers. The French dual system of major engineering schools and universities, which made sense for the economy of a century ago, has become completely obsolete and deserves a courageous revision.

8 Conclusion: Looking At the Present in the Light of the Past

“There is a question, much older than modern science, which has never ceased haunting certain men of science: that of the conclusions that the existence of science and the contents of scientific theories can lead to concerning the relationships that humankind has with the natural world. Such conclusions cannot be imposed by science as is, but they are an integral part of the metamorphosis of this science.”

I. Prigogine and I. Stengers

The New Alliance. Metamorphosis of Science - 1986 (2nd edition)

In the 1950s and 1960s, the hybridization of the techniques of genetics, biochemistry and biophysics gave birth to molecular biology. With the resolution of the double helix structure of DNA, the demonstration of its replication, the elucidation of the mode of expression of its nucleotide sequence as a sequence of amino acids in proteins and, finally, the deciphering of the genetic code, biology underwent a revolution of an amplitude similar to that which, at the end of the 19th century, saw the blossoming of the seeds of cell biology.

The last decades of the 20th century represented the utilitarian era of molecular biology. The introduction of genetic engineering into biological experimentation dates to the beginning of the 1970s. It was at this time that techniques were developed that made it possible to transfer a fragment of genomic DNA from one species into the genome of another species. Genetic engineering now occupies a predominant position in the Life Sciences, supported by increasingly effective biocomputing or bioinformatics techniques. It is easy to understand that expertise and a high degree of knowledge arising from fundamental research are necessary in order to master, or even invent, the genetic engineering techniques that are indispensable if we are to produce biomolecules with a therapeutic impact, such as those currently used in the pharmaceutical domain: insulin, growth hormone, blood coagulation factors, vaccines, etc. The engineering sciences that make up the greater part of contemporary biotechnology have now come to the fore in many domains of the Life Sciences. It is thus that a modernistic and original way of investigating Nature has come into being. A multiparametric model, in which biocomputing or bioinformatics and high-throughput screening reign, is added to, or even substituted for, the Bernardian model of the experimental method, based on observation, an a priori hypothesis, and experimentation to verify this hypothesis by varying a single parameter at a time. The aim of this globalized approach is to integrate into a coherent whole the multiple reactions that take place almost simultaneously in different locations of a cell, to rationalize the interpretation of the dialogue that operates between the different endocellular organelles, and finally to discover how the exchanges of information between cells in an organ, and between organs in multicellular organisms, are set up. We are therefore witness to the emergence of an integrated biology that has been labeled “systems biology”. Its long-term objective is to model the functioning of living beings and to theorize about it. Its development is encouraged by the prospect of consequences that could revolutionize certain sectors of the human economy and of public health. Today, concrete, mechanical models, in the form of biorobots and hybrid robots, and, very recently, molecular motors, are added to abstract models based on the logic of mathematics and algorithms, ushering in the era of nanobiomachines. Becoming more utilitarian, the life sciences are imperceptibly detaching themselves from traditional philosophical concepts that try to explain the modes of reasoning of the researcher, or even to impose a framework for thought that is likely to orient his or her way of doing research.

Looking at genetic inheritance, contemporary experimentation has shown that at all levels of the tree of Nature, including Man, this inheritance can be modified. Aware of his or her ability to influence the functioning and the destiny of living beings, the researcher is confronted with the dilemma of a desire for knowledge versus a questioning of the use to which discoveries may be put. There has never been such a real divorce between the world of phenomena that are understood by the experimenter and the world of noumena whose intelligibility is foreign to our senses. There has never been such a wide gap between the biotechnosciences, whose possibilities are coming to be seen as limitless, and a reflective analysis of thought, which wanders between freedom of action and prohibition.

As society becomes aware of the potential applications of discoveries made concerning living beings, problems of bioethics, particularly those involving reproduction, have become problems of public interest. Cloning and the production of stem cells are subjects that give rise to diatribes and passions. In the near future, genotyping, a result of progress in pharmacogenetics, could usher in a new form of customized medicine. Elsewhere, the cognitive sciences, which are bringing together philosophy and psychology in the domains of computer technology and artificial intelligence, and which are tackling the processes of thought, the creative imagination and memory, will no doubt be at the heart of the questioning concerning research on living beings with which the experimental method will be confronted in the 21st century.

When faced with the way in which biotechnologies have erupted into the life of society, the mind travels back to the allegorical illustration that embellishes Francis Bacon’s Novum Organum (see Figure II.19), showing vessels loaded with precious cargoes returning to port from unknown lands, having sailed past the Pillars of Hercules. At present, the challenge has been partially met, but a great deal remains to be done. Innumerable cargoes have already reached port, but what will be the destiny of this precious merchandise? After all, the seeds of the idea of technoscience were already in place in the 17th century, in the philosophy of Francis Bacon and Robert Boyle (Chapter II‑6). Bacon recommended that the governments of the time promote experimental science by the creation of laboratories equipped with high-performance instruments and libraries, by the organization of researchers into teams and by appropriate financing. The utilitarian ends of scientific research were underlined. Boyle imagined a situation in which laboratories were open to society and researchers were able to accept criticism. Faced with innovations that upset tradition, protestations arose. The pneumatic machine, or vacuum pump, was the subject of the famous diatribe between Boyle and the philosopher Hobbes (Chapter II‑6.2). Hobbes criticized the validity of Boyle’s conclusions, drawn from experiments that he qualified as doubtful. Going further, he came to see in the discoveries of experimental science a possible threat to the power of governments and to the hierarchical layout of society. Such overcautious opposition to the pursuit of knowledge is in no way anecdotal; it is still a reality, as shown by the uprooting of genetically modified plants and the veto that has been placed in certain areas on stem cell research. This type of opposition is also shown when pressures, or even vetoes, are applied that take into account the opportunism of the moment more than an in-depth understanding of science and of its history, and that forget that freedom of the mind is a guarantee of its creativity, because, just as in the world of Arts and Letters, the world of Scientific Research is situated outside those norms that can be modulated by state decrees. The creativity of the researcher cannot be manufactured on demand. Where it exists, it still needs to be detected and encouraged.