
1 A World of Differences

1.1 Different but not Incompatible Sciences

Biomimetics requires the fusion of biology (currently a largely descriptive science) and engineering (almost entirely analytical) [1]. This can be achieved at the level of design. Using elements of TRIZ [2], one of the authors (JV) has shown that biology and engineering can be brought together in an ontology [3]. Ontologies are in common usage for the digital storage and integration of design information in medicine, architecture, engineering, materials science, and a number of other areas [4]. With access to enough information, an ontology can become a research tool at the heart of an AI system [5]. The intention is that the ontology of biomimetics should drive agents capable of forming structures, controlling systems and developing new materials, based on the types of interaction and change found in biological systems. Biomimetics (= biomimicry = bionics = bio-inspired design = …) relies on transforming and transferring information that we glean from biology into information that can be used in design, development and manufacture (whatever is appropriate) of something man-made [6].

Physics and chemistry provide us with tools with which we measure (and therefore classify and make predictions about) phenomena in general, and with such clear models we can unravel many of the complexities of our surroundings and separate effect from cause. But physics (and maths) fail when faced with life and living things. There must therefore be parts of biology that are currently unknown and unrecognised, even though we can observe the effects of those parts, since we inevitably view most of biology through the prism of physics. Our temptation will then be to ascribe those effects to candidate causes with which we are familiar, even if those causes are inappropriate or out of context. The hallmark of a successful model is its ability to quantify and predict. There is currently no way that biological events such as speciation and morphological change can be predicted. At its root, current understanding of biology is unquantifiable in terms of constitutive models. There is no basic theory of biology; biological phenomena cannot be predicted from first principles; we cannot make an egg. Biology is at the alchemical stage, where experiments can accumulate information but the basic general principles remain obscure. Biology awaits its Newton.

1.2 Short Summary of Current Research Limits

There have been several efforts to enlarge the capacities of TRIZ to identify solutions from biology [6,7,8,9,10,11,12]. Whatever the method is called (Eco-TRIZ, Bio-TRIZ, etc.) it always relies on associating functions originating in engineering with the functions of a specific natural system. In many cases the achievements of nature are superior to the capabilities of engineering, especially under conditions of global accounting. Most current biomimetic concepts have previously been explored at a basic level (such as the hydrophobic surfaces of plants); a few present basic problems to be solved (such as absorption of water from a dry atmosphere) if they are to be reproduced artificially. There are biological phenomena that will need new discoveries in physics before they can be transferred biomimetically [13]. Our conclusion is that there is limited utility in these simple methods.

2 Describing and Aligning Domains Using an Ontology

2.1 A Description of Biomimetics Bridged by TRIZ “Contradictions”

We are thus left with the central problem in biomimetics: how to bridge the gap between technology (quantifiable) and biology (descriptive)? At present it seems that the bridge can be only partial [13,14,15]; currently the most useful answer must, at least initially, recognise the limitations imposed by physics [16]. Thus we can define a target in general terms. As a criterion of success, if only partial, we need to express one kind of structure (e.g. biological theory) [17] in the context of another (e.g. engineering or design theory), and vice versa. If we could do this in such a way that the resulting pair of operations were mutually inverse, then we would have grounds for saying that the two theories were equally general; if we could do it in one direction but not the other, we could rank one theory as more general than the other; and if we could do it in neither direction, then the generality of the two formalisms could not be compared at all. Viewed like this, biomimetics falls into the second category. Many, though not all, of the mechanisms of biology can be described (though not necessarily explained) by physics, engineering and chemistry. The inverse is far less easy, if possible at all, despite the efforts of 2,000+ years of biological inspiration in technology. So biology and technology are of different generality. Nonetheless, the effort should be concentrated on selecting one or more design or descriptive systems in common use in engineering and seeing how far such a system can describe biology as well.

TRIZ is a good candidate for this exercise because it is a systematic approach to complexity and is used by engineers and designers for solving problems in a creative manner. If this system can describe biology, even at a utilitarian level, then we have a possible model for biomimetics. We need to find or define a level at which it is possible to transfer parameters and observations. The place of TRIZ in biomimetics is argued in the following section.

2.2 How TRIZ Decomposes a Problem

Ideally, TRIZ helps in dissecting a problem, removing confusion, distilling it down to its essentials, and suggesting solutions derived from the study of a wide range of successful patents. The completeness of the description and abstraction of the problem that this produces makes it possible to compare the problem with a much wider range of solutions than is available with most other problem-solving systems. This range of solutions must necessarily include the living world. Thus it is necessary to expose biology to the same general system of reduction and classification that has been used with patents. This has not yet been done, although a number of studies have used different modules of TRIZ as a mirror for biology [e.g. 12]. For this study we have chosen to develop an ontology based on the well-known Contradiction Matrix.

The Contradiction Matrix is often regarded more as a toy than a tool, but this view probably stems from a general misunderstanding of its structure and therefore of its possibilities and use. Whether Altshuller was aware of the internal structure of the Contradiction Matrix that we illustrate here is probably not known. However, it is a fair guess that the Matrix was strongly influenced by the concepts of Hegelian philosophy, widely taught in continental Europe during the last century and beyond. But we must dig deeper, into the Dialectic and its origins, in order to derive a more balanced and critical understanding.

2.3 The TRIZ Matrix is Dialectic: Not a New Thought!

The concept of the Dialectic has its origins with the Greek philosopher Heraclitus of Ephesus, who held that everything is in constant change as a result of inner strife and opposition (unfortunately, Heraclitus never set his ideas out formally). The concept was popularised by Plato’s Socratic dialogues, in which, in order to establish a truth, two or more people with opposing views engage in dispassionate discussion until a resolution is reached. This unity of opposites is now known as the Dialectic. In practical terms this equates to making a statement of some sort, questioning the statement with counter-arguments, then working towards some sort of agreed truth.

Over the years, many forms of dialectic have arisen; the one most familiar to Europeans is the Hegelian Dialectic, although Hegel said that he got the idea from Kant. Kant named the two opposites thesis and antithesis, and the resolution he called the synthesis. The synthesis can then become the thesis in a new dialectic. This, however, is a formalism. As Karl Popper gleefully pointed out, advance is more likely to be made with the rather messy ‘trial-and-error’, much closer to Socratic dialogue.

But there is a basic problem with the Hegelian Dialectic. Although in its ideal form merit is recognised in both thesis and antithesis, and preserved in the synthesis (an example is the argument between wave and corpuscular theories of light, in which the synthesis has to accommodate both models), there is a great tendency for muddle arising from the loose way in which dialecticians speak of contradictions. Criticism, which forms the basis of the antithesis, invariably points out a contradiction. But this can lead to the impression that thesis and antithesis are essentially contradictory, such that any synthesis will have to challenge the law of the exclusion of contradictions of traditional logic. This law asserts that two contradictory statements can never be true together, such that any dialectic synthesis derived from such an argument must be rejected as false on purely logical grounds. But Hegelian dialecticians claimed that this law of traditional logic has been subverted by the Dialectic and must be discarded. This action totally destroys the logical argument and renders admissible and valid any statement whatsoever. We need to step back from this brink.

In fact, as Popper recommends, it would be best not even to use the term ‘dialectic’. He would use the clearer terminology of the method of trial-and-error. However, it seems to us that the concept of the dialectic needs to be understood since it appears to have played an important role in the development of TRIZ [18]. In Russia both the Hegelian Dialectic and TRIZ are taught at kindergarten level and upwards and may even be important in everyday thinking. But in Europe, Cartesian rationalism has very much been confined to the Continent; in the UK, a nation of pragmatists, the main thrust has been empiricism. Without the burden of Hegel’s tradition, it is easier to be antithetical about the Dialectic!

2.4 Altshuller’s Proposition: Place Opposites into a Matrix

It is obvious with a little study that the Contradiction Matrix is assembled using the rules of dialectic discourse [19]. A collection of carefully selected parameters describes the conditions and characteristics of materials and systems. The instructions for use say that two of these parameters must be chosen: one that describes the end goal (i.e. the ideal solution) and one that describes the characteristic that is frustrating or contradicting the achievement of this goal. Respectively, these are Hegel’s Thesis and Antithesis. These two parameters, both drawn from the same list of 39, are arranged along the two orthogonal sides of a matrix; at each crossing point in the body of the matrix, suggestions for a synthesis of that pair of parameters, drawn from patents in which this particular problem was solved, are entered. Since the two parameters are essentially different in character (basically positive or negative) the Matrix appears well populated, with no intended symmetry about the diagonal. However, simple analysis of the Matrix shows that there is a very significant symmetry about the diagonal (Fig. 1). 27% of the suggested syntheses are totally symmetrical (no disparity across the diagonal, the suggested Inventive Principles not differing) and a further 25% differ in only one suggested Inventive Principle out of a common maximum of four. So the Matrix is heavily populated with Socratic rather than Hegelian syntheses; half of it displays the characteristics of a balanced trade-off rather than a Hegelian argument.

Taking this into account, the suggested Inventive Principles appear to fall into a number of categories, as yet unanalysed. Some of them, where the dialectic is in the form of a problem to be solved, will prescribe real solutions to the problem; some, also aiding the generation of a practical synthesis, will suggest novelty that can introduce a new dimension to the problem without necessarily providing a solution; a third category, more closely associated with the ~50% of trade-offs, will suggest changes that manipulate the trade-off such that its inherent variability can be harnessed in the provision of an adaptive response to changing internal or external conditions. Since living organisms are open and adaptive systems, it seems reasonable to suggest that this form of synthesis of the trade-off will be more common in biology, because it allows and underpins adaptation in general.

Fig. 1. The standard TRIZ Contradiction Matrix. Identical squares coloured yellow (Color figure online)
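To make the diagonal comparison described above concrete, the short Python sketch below counts, for a toy matrix, how many cell pairs are identical across the diagonal and how many differ in a single suggested Inventive Principle. The matrix entries are invented placeholders rather than the real 39 × 39 Contradiction Matrix; only the counting logic is intended to illustrate the analysis behind the 27% and 25% figures.

```python
# Toy stand-in for the 39 x 39 Contradiction Matrix: each cell holds the set of
# suggested Inventive Principles (by number) for an (improving, worsening)
# parameter pair. The entries below are invented purely to exercise the count.
matrix = {
    (1, 2): {8, 15, 29, 34},  (2, 1): {8, 15, 29, 34},   # fully symmetrical pair
    (1, 3): {4, 15, 17},      (3, 1): {15, 17, 24},      # differ in one principle
    (2, 4): {28, 35, 40},     (4, 2): {1, 18, 36},       # asymmetrical pair
}

def symmetry_report(matrix):
    """Classify every pair of cells mirrored about the diagonal."""
    identical = one_off = total = 0
    done = set()
    for (i, j), principles in matrix.items():
        if i == j or (j, i) not in matrix or (j, i) in done:
            continue
        done.add((i, j))
        total += 1
        disparity = len(principles ^ matrix[(j, i)])  # symmetric difference
        if disparity == 0:
            identical += 1                 # no disparity across the diagonal
        elif disparity <= 2:
            one_off += 1                   # at most one principle differs per side
    return identical, one_off, total

identical, one_off, total = symmetry_report(matrix)
print(f"{identical}/{total} mirrored pairs identical, {one_off}/{total} differ in one principle")
```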

Usefully, trial-and-error provides the basic variability for natural selection in biology, genetic and phenotypic variations being exposed to the selection pressures of the environment, physical and biotic. But the trial-and-error of natural selection, whose product is evolution, differs from the Socratic and Hegelian versions of the dialectic in at least one major respect. Socrates could argue only about what was known, and Hegel’s formalism was even more limiting. Natural selection works on variants of organisms that have some novelty about them, and the selection pressures are similarly lacking in control, although they may be circumscribed by context (environment, heredity, etc.). Scientific research is much the same: you may have an inkling of what the answer has to be, but the journey you take to get there is unlikely to be direct. It is also very likely that your imagined end-point will turn out to have been illusory, and that the new reality is more interesting and convincing than was initially conceived. In science, therefore, and especially in biology (biologists love surprises), the Hegelian Dialectic is not an appropriate model for research, because at least half of the argument cannot be predicted: there is no coherent model in biology that will support such prediction.

The reconciliation of the Contradiction Matrix with biological systems is therefore likely to lie mainly in the 50% of the Matrix that is symmetrical and Socratic. This may or may not turn out to be true, but it suggests that the most productive area for the discovery and analysis of valid comparisons will be trade-offs, whose symmetry and adaptiveness are well understood both in biology and in engineering.

It is important that biological trade-offs are assessed independently of any consideration of physics or engineering. The literature of biomimetics has many examples of a biomimetic transfer where the assessment of the biology has been made from the point of view of engineering or physics [7, 14, 20]. This is largely because the biological information is being interpreted by an engineer—more a reflection of the low number of biologists involved in biomimetics. This has the unwanted result of a strong bias towards engineering, leading to the deduction in one case [21] that some 95% of biological ‘innovations’ for inclusion in the Contradiction Matrix are not novel to engineering. This is not necessarily a result of the way that biology works, but a result of sampling bias of individual cases which have been interpreted by an engineer rather than taking the biologist’s independent assessment of the trade-off. Unfortunately, although many biological studies successfully identify the trade-off under investigation, only some 40% of these identify the factors involved. It is these factors, equivalent to the Inventive Principles of TRIZ, that are the agents of change and control, and that therefore supply the iconoclastic impulse.

2.5 Using Established Tools of Computer Science to Build a Framework

In order for these insights and ideas to be brought to fruition, they have to be arranged in a logical and dynamic framework. The ubiquitous database is incapable of a dynamic response, but the terms in a database, arranged hierarchically and with their relationships mapped simply, can be organised into a Simple Knowledge Organisation System (SKOS) [22] that can be developed within the editing environment of Protégé, an open-source editor more commonly used for developing ontologies [23]. Since the relationships in a SKOS are relatively broad and unruly, it is possible to generate a network of terms in a fairly short time; this network can be displayed in Protégé and easily explored interactively. But it is not easy to use such a network for analysis and prediction; it can possibly stimulate creativity, but it cannot establish facts or laws. For this we need an ontology.
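As a minimal sketch of what such a loosely related term network might look like, the Python fragment below uses the rdflib library to declare a few concepts and connect them with SKOS broader/related links. The term names and the namespace URI are invented for illustration; they are not drawn from the actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace for a biomimetics vocabulary (illustrative only).
BIOM = Namespace("http://example.org/biomimetics#")

g = Graph()
g.bind("skos", SKOS)
g.bind("biom", BIOM)

# Declare a handful of terms with the loose SKOS relations described above.
for name, label in [("TradeOff", "trade-off"),
                    ("Productivity", "productivity"),
                    ("Fecundity", "fecundity")]:
    g.add((BIOM[name], RDF.type, SKOS.Concept))
    g.add((BIOM[name], SKOS.prefLabel, Literal(label, lang="en")))

g.add((BIOM.Fecundity, SKOS.broader, BIOM.Productivity))  # fecundity sits under productivity
g.add((BIOM.Productivity, SKOS.related, BIOM.TradeOff))   # productivity often enters trade-offs

print(g.serialize(format="turtle"))
```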

In its simplest form, an ontology is a standardised vocabulary. Connectivity of the items in that vocabulary is more easily obtained since computers can be programmed to deal with the logical web of reasoning that the ontologist creates.

The ontology is written in OWL2, the main language of the semantic Web, using the editor Protégé, available from Stanford University. It follows the organisation of the Basic Formal Ontology (BFO), thus ensuring that it can be integrated with other ontologies following the same, widely adopted, format. The BFO has as its primary classification ‘continuants’ (things which persist through time) and ‘occurrents’ (events which occur in time and space) [24]. The main continuants are objects, which exist in the absence of any other characteristics and are therefore independent continuants. However, they have descriptors of one sort or another, such as size, colour, mechanical properties and inbuilt tendencies. These descriptors would not exist without the objects they describe, and so they are dependent continuants. In this ontology, the objects are animals and plants and the things of which they are composed. The 39 Engineering Parameters are descriptors of the objects, and so they are dependent continuants. They have been modified from the TRIZ originals to give them relevance to biology. Thus, Parameter number 31, usually entitled ‘harmful side effects’, now includes autoimmunity as a possible side effect of the immune system, an essential component of the organism’s defence system (Fig. 2). Parameter number 39, ‘productivity’, includes growth, fecundity and rate of foraging (Fig. 3).

Fig. 2. Structure of class and subclasses of Parameter 31, Harmful Side-effect

Fig. 3. Structure of class and subclasses of Parameter 39, Productivity

The Inventive Principles are the means of change or adaptation, and so they are events which occur in time – that is, they are occurrents. These principles have been adapted and reformatted to accommodate principles of biological control and change. Thus, principle 26, ‘copying’, includes reproduction, camouflage and substitution (as when a male spider gives the female a faux prey item during courtship, or a cuckoo lays its egg in an alien nest) (Fig. 4); principle 22, ‘convert harm to benefit’, includes altruism and sacrificial bonds in the matrix of ceramic composites such as bone (Fig. 5).

Fig. 4. Structure of class and subclasses of Inventive Principle 26, Copying

Fig. 5. Structure of class and subclasses of Inventive Principle 22, Harm to Benefit
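One way in which this BFO-style organisation could be encoded is sketched below in Python using the owlready2 library: objects as independent continuants, the modified Parameters as dependent continuants, and the Inventive Principles as occurrents. The class names and the ontology IRI are illustrative only; they do not reproduce the structure of the actual ontology built in Protégé.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Illustrative IRI; the real ontology is edited in Protégé against BFO.
onto = get_ontology("http://example.org/biomimetics-triz.owl")

with onto:
    # BFO-style top split: continuants persist through time, occurrents happen in time.
    class Continuant(Thing): pass
    class Occurrent(Thing): pass

    # Independent continuants: the biological objects themselves.
    class BiologicalObject(Continuant): pass

    # Dependent continuants: descriptors such as the modified Engineering Parameters.
    class EngineeringParameter(Continuant): pass
    class HarmfulSideEffect(EngineeringParameter): pass   # Parameter 31
    class Productivity(EngineeringParameter): pass        # Parameter 39

    # Occurrents: the Inventive Principles, the means of change or adaptation.
    class InventivePrinciple(Occurrent): pass
    class Copying(InventivePrinciple): pass                # Principle 26
    class HarmToBenefit(InventivePrinciple): pass          # Principle 22

    # A descriptor cannot exist without the object it describes.
    class describes(ObjectProperty):
        domain = [EngineeringParameter]
        range = [BiologicalObject]

onto.save(file="biomimetics-triz.owl")
```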

2.6 Ontology Origins: Selected Research Papers

The ontology in its present condition derives its information from some 400 cases, each one taken from a biological research paper which defines and solves (at least partially) a trade-off. The cases cover all aspects of biology, from genetics and the molecular structure of the materials of the cell through to ecosystems and behaviour. The cases therefore provide the information necessary for the ontology to work as an instrument of transformation, converting the solution of biological problems into a form that can be used in a technical (e.g. engineering) context. The papers have been analysed by one of the authors (JV), but we are developing software to implement Natural Language Processing (NLP) that will be able to identify the trade-off (not always difficult, since it is commonly explicitly identified as a trade-off in a research paper); identifying the means by which the trade-off is resolved is (probably) a more difficult problem. Currently we are investigating a rule-based system. The research paper under consideration (an equivalent of an engineering patent) will commonly define the trade-off as “a trade-off between (A) and (B)”. We can then track A and B, and their synonyms, through the paper and discover what factors are reported to interact with them and whether the interaction is positive or negative, etc. In the ontology, A and B are each identified with at least one of the modified Parameters (cf. Figs. 2 and 3) and the solution is interpreted using the modified Inventive Principles (cf. Figs. 4 and 5), such that it can be expressed in a standard TRIZ Contradiction Matrix. Comparison between the biological resolutions of a trade-off and the TRIZ version as expressed in the standard Matrix shows very little correlation between the two.
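A minimal sketch of the kind of rule we are investigating is shown below: a regular expression, written in Python, that pulls candidate A and B terms out of sentences of the canonical form “a trade-off between A and B”. The pattern and the sample text are illustrative only; a production system will need synonym tracking, coordination handling and co-reference resolution on top of this.

```python
import re

# Naive pattern for the canonical phrasing "a trade-off between A and B".
TRADEOFF_PATTERN = re.compile(
    r"trade-?off between\s+(?P<A>[\w\s-]+?)\s+and\s+(?P<B>[\w\s-]+?)"
    r"(?=[\.,;]|\s+(?:in|of|for)\b|$)",
    re.IGNORECASE,
)

def extract_tradeoffs(text):
    """Return (A, B) term pairs for every canonical trade-off statement found."""
    return [(m.group("A").strip(), m.group("B").strip())
            for m in TRADEOFF_PATTERN.finditer(text)]

sample = ("We identify a trade-off between growth rate and structural strength "
          "in the developing shell; the classic trade-off between fecundity and "
          "longevity, familiar from life-history theory, is also discussed.")

print(extract_tradeoffs(sample))
# [('growth rate', 'structural strength'), ('fecundity', 'longevity')]
```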

The Inventive Principles also serve as a list of functions that can be assigned to biological objects. Thus it is possible to identify biological structures that can serve as instances of various functions (Fig. 6). The tube foot of an echinoderm is both a deployable structure (principle: dynamism) and a hydraulic dynamic effector (principle: pneumatics and hydraulics). Another deployable structure, the proboscis of lepidopterous insects, uses surface-tension forces to move a liquid (a surface-tension effect within the principle: counterweight). Thus we have more phenomenological access to functionality in biology, directly available to biomimetics for the interpretation of trade-offs.

Fig. 6. Characterisation of echinoderm tube feet and butterfly proboscis
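The fragment below sketches, in Python, one simple way such functional assignments could be stored and queried; the entries merely restate the two examples above, and the data layout is illustrative rather than the ontology’s actual representation.

```python
from collections import defaultdict

# Functional characterisations restated from the text (cf. Fig. 6).
structure_principles = {
    "echinoderm tube foot": ["dynamism", "pneumatics and hydraulics"],
    "lepidopteran proboscis": ["dynamism", "counterweight (surface tension)"],
}

def structures_for_principle(fragment):
    """Reverse look-up: which structures instantiate a principle whose name contains `fragment`?"""
    index = defaultdict(list)
    for structure, principles in structure_principles.items():
        for principle in principles:
            index[principle].append(structure)
    return [s for principle, structures in index.items()
            if fragment.lower() in principle.lower()
            for s in structures]

print(structures_for_principle("dynamism"))
# ['echinoderm tube foot', 'lepidopteran proboscis']
```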

3 Next Steps

The ontology is built, but only as a proof of concept, in that it needs more data from solved biological trade-offs and (probably) additions to the list of biological continuants. The computerised analysis of relevant research papers will speed up the collection of data. Currently we are investigating NLP, but it is possible that machine learning could prove useful, although, since each biological trade-off is necessarily considered as an individual item, this requirement might not fit with the more statistical approach taken by machine learning.

Next we need to design interfaces that allow the ontology to be interrogated both by humans and by autonomous agents. The former is not too difficult; Protégé already has several graphical interfaces that can display inter-relationships and the results of searches. It is the way autonomous agents will be able to interact that is exciting, and some way off. The scenario is as follows: an agent has a problem of some sort which it defines as a trade-off. It goes to the ontology, informing it of the trade-off plus any contextual information. The ontology searches for all examples of that trade-off, then matches all or some of the contextual information, and identifies cases that match these conditions. It then informs the agent of the Inventive Principles that resolved that particular trade-off, together with information about the biological components that were active during resolution. It is then up to the agent to select from this information what changes are relevant for it to resolve or manipulate the trade-off. Further research may orient us towards a complete AI system for the resolution of technical problems with a biomimetic solution, if such exists.
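A hedged sketch of that interaction is given below: a minimal Python interface in which an agent submits a trade-off (a pair of Parameter numbers) plus contextual tags and receives the Inventive Principles and biological components recorded for matching cases. The case records and field names are invented for illustration; the real interface will sit on top of the OWL2 ontology rather than an in-memory list.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One solved biological trade-off, as the ontology might expose it."""
    parameters: tuple   # the two Parameters in tension, e.g. (31, 39)
    context: set        # contextual tags, e.g. {"immune", "vertebrate"}
    principles: list    # modified Inventive Principles that resolved the trade-off
    components: list    # biological components active in the resolution

# Invented example records standing in for the ~400 analysed cases.
CASES = [
    Case((31, 39), {"immune", "vertebrate"}, ["convert harm to benefit"], ["immune system"]),
    Case((14, 39), {"structural", "marine"}, ["copying", "dynamism"], ["tube foot"]),
]

def query(trade_off, context=frozenset()):
    """Return (principles, components) for cases matching the trade-off and
    sharing at least one element of the supplied context."""
    hits = [c for c in CASES
            if set(trade_off) == set(c.parameters)
            and (not context or context & c.context)]
    return [(c.principles, c.components) for c in hits]

# An agent asking how biology resolves the productivity vs. harmful-side-effect trade-off:
print(query((39, 31), context={"immune"}))
# [(['convert harm to benefit'], ['immune system'])]
```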