
The Basic Mathematics You Need to Know to Understand Economics

Most of the time economists do not “do science.” Rather, they tell stories dressed up in mathematics. Neoclassical economists mostly tell stories of the magic of market self-regulation. Keynesian economists tell stories about how the correct amounts of spending, taxing, and money creation can balance an otherwise unstable economy and lead to economic growth. If you want to understand the economist’s story, you should learn the requisite mathematics. Perhaps more importantly, if you want economists to listen to your story, you need to learn to present it in a language they understand and respect. If you cannot express yourself mathematically, most economists will not even bother to listen to your story, no matter how compelling or well supported by evidence. Even if it is presented with mathematical elegance, mainstream economists may still reject your story if it conflicts too badly with theirs. But at least speaking the language of mathematics will give you a fighting chance of being listened to. Far too many economists arrogantly dismiss the analyses of other social scientists whose valuable insights are expressed primarily in words or oral histories.

Generally, scientists and economists agree that their analyses should be rigorous, meaning that they are thoroughly researched and done well according to the standards of those who usually undertake similar analyses. There are, however, at least two very different types of rigor important here: scientific rigor and mathematical rigor, and there is often confusion between them. Scientific rigor refers to whether the formulation of a problem, such as in an equation, is consistent with the known laws and processes of nature; whether the problem is well understood, including which factors influence which other factors; and the degree to which the actual phenomena are accurately represented by the equations used. Mathematical rigor usually means whether the equations are solved correctly, and less frequently whether they are well formulated. It often also means that the problems are solved “elegantly” by analytic (pencil and paper) means. While many problems require both scientific and mathematical rigor, we find too often that too much attention has been paid to mathematical rigor and not enough to scientific rigor. Examples of this have been given for ecology in Hall [1] and for economics in Chap. 5.

Economists are very committed to models, in fact often more committed to the model than to acquiring a broad-based knowledge of how the economy works. Nobel Prize-winning economist Paul Krugman lamented this tendency in the profession when he said:

The economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth. …the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.

After the financial collapse of 2008, former Federal Reserve Chair Alan Greenspan expressed shock and dismay that the model in which he believed so strongly (market self-regulation) let him down. During the initial stages of the authors’ collaboration Hall expressed to Klitgaard that “you can’t get a PhD in economics without knowing how the economy works.” Klitgaard responded that unfortunately you can. Much of graduate economics training now consists of graduate mathematics. Students are often awarded doctorates for developing elegant models that have little or nothing to do with how the actual economy works.

What keeps the story of mainstream economics from being scientific, despite the mathematics? We contend that in order to be considered scientific a discipline must not only follow the scientific method but also be consistent with the rest of known science. Here is our problem with most mainstream economics. Most natural sciences begin their studies with observations of nature. They then codify these observations into hypotheses, which they test statistically after gathering evidence. A scientific theory is valid only if others can reproduce the results using the same methods. Unfortunately economics suffers from two problems. To begin with, as we showed in Chaps. 4 and 5, the basic pre-analytical vision of the circular flow is inconsistent with the second law of thermodynamics and the law of conservation of matter. The circular flow model treats the economic system as isolated, without inputs and outputs, yet entropy must always increase, and the model leaves no room for material or thermal waste. If it did leave such room, the value of the output could never equal the sum of factor prices. Secondly, beliefs about human behavior have not changed significantly since the eighteenth century. Students are told that humans are rational, self-interested, and acquisitive. Rarely do economists gather evidence on actual human behavior, or subject the belief to statistical testing. Rather, these ideas are accepted without reservation as “maintained hypotheses.” Interestingly enough, those economists who do approach human behavior as a science often win Nobel Prizes in Economics, for finding that most humans do not behave as the models say they do. We provided a brief review of behavioral economics in Chap. 5. Unfortunately, little of this cutting-edge science has filtered down to the introductory-level textbooks or shaped the consciousness of economics teachers.

Despite these drawbacks, much of economics is still a matter of constructing testable hypotheses and subjecting them to the rigors of statistics. Will a tax cut increase job growth? Will deficit spending lead to inflation? Will technological change lead to lower-cost production? While these can be legitimate scientific approaches, they may not be resolvable mathematically.

In order to make sense of these and countless other questions, economists often construct models. We define a model as a formalization of our assumptions about a system, where to formalize means to make mathematical. A system consists of inputs and outputs, boundaries, and feedbacks. Robert Costanza and other practitioners of system dynamics say regularly: “All models are wrong, but some models are useful.” What is it that makes a model useful? Two of the most important benefits of modeling are simplicity and the separation of cause from effect. What does simplicity mean? A simple model is one in which there are few causes, and if there is more than one, the causes do not interact strongly with one another. Finally, simple models are linear: the relationships between cause and effect can be drawn as straight lines. Please be advised that simple does not mean easy, nor does it mean immediately apparent by casual inspection. Even simple mathematical models can be quite difficult. The model for exponential population growth meets all the criteria for simplicity, yet it takes some sophisticated mathematics (logarithms and integral calculus) to solve it.

Fortunately there is a tailor-made mathematical device for separating cause from effect: the function. Functions are relations of dependency; changes in the effect depend upon changes in the cause. The effect is also known as the dependent variable, and is often symbolized by the letter y. The cause is called the independent variable and is often denoted by the letter x. So a typical function might read y = f(x). This can be translated into English as “y is a function of x” or “changes in x cause changes in y.” Sometimes functions have more than one cause, so a multivariate function could be written as Z = f(x, y). For example, Keynesian economics posits that investment depends both on the cost of borrowing money and on the expected profits that the investment might bring. In other words: I = f(i, π_exp). The greater the number of independent variables, the less simple the model. Some models have a large number of independent variables, and these causes often interact strongly with one another; one example is the effect of weather and climate upon global financial markets. These models are known as complex, and are very difficult to solve. Indeed most cannot be solved without the aid of powerful computers. One problem with complex models is that they contain not only self-extinguishing negative feedbacks but also self-perpetuating positive feedbacks. Systems dominated by positive feedbacks tend to exhibit sensitive dependence on initial conditions: tiny differences at the beginning can turn into wild and unpredictable oscillations down the line. So most of the time economists attempt to construct simple systems that are often “solved” using analytic techniques.

Simple Models and Linear Functions

The simplest type of function is one in which there is one independent variable and the relation is linear. Linear models are often composed of constants, which do not change as the cause changes, and variables, which do. An example from economics is the demand curve. Here the willingness and ability of consumers to purchase various quantities of goods and services is hypothesized to be a function of price: as prices go up, the quantity demanded goes down. This could be specified as a function by saying:

Q_d = 100 − 2P

Translated into English this means that consumers would buy 100 units of a good if it were given away for free, but every dollar increase in price would lead consumers to buy two fewer units. Keynesian macroeconomics is built upon the idea of the Consumption function.
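The demand curve specified above is easy to explore numerically. The following is our own illustrative sketch (the function name is ours), encoding Q_d = 100 − 2P:

```python
def quantity_demanded(price):
    """Linear demand curve Q_d = 100 - 2P from the example above."""
    return 100 - 2 * price

# Consumers take 100 units when the good is free...
print(quantity_demanded(0))   # 100
# ...and every $1 increase in price costs two units of demand.
print(quantity_demanded(10))  # 80
```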

Fig. 12.1 Linear functions: y = a constant (in this case 5), and y = a constant plus a linear function of x

C  =  a  +  bY

In this function the letter a represents autonomous consumption, or consumption when income equals zero. The letter b represents the Marginal Propensity to Consume, or the fraction of additional income that is spent, and Y stands for income. To begin with, the amount of consumption depends upon the amount of income, so the most general function would be:

C = f(Y). If the function were further specified as

C = 50 + 0.9Y

this would mean that consumers would spend $50 even if they had no income, which they would accomplish by drawing down savings or borrowing, and that they would spend 90% of any additional income. So if consumers went from no income to an income of $1,000, they would spend $950: the $50 of autonomous consumption plus 90% of the $1,000.
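As a quick numerical check of the consumption function, here is a minimal sketch (the parameter names are ours):

```python
def consumption(income, autonomous=50, mpc=0.9):
    """Keynesian consumption function C = a + bY, with a = 50 and b = 0.9."""
    return autonomous + mpc * income

print(consumption(0))     # 50.0: spending even with no income at all
print(consumption(1000))  # 950.0: $50 autonomous plus 90% of $1,000
```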

Fig. 12.2 (a) Growth curves, with y = x² and y = x³. (b) Growth curve combining a squared and a cubed term

These are very simple models, but even the simplest model can take you a long way toward understanding a difficult phenomenon. One way to understand the relation between cause and effect is to examine the rate of change in the effect with respect to the change in the cause. One can do this by calculating and interpreting the slope and the intercept of a function. The intercept is a constant value, the value of the function when the independent variable equals zero. As you probably recall, the slope can be found by calculating the rise over the run. Another way of expressing this is to say that a slope is the change in y divided by the change in x, or simply Δy/Δx, where the Greek letter delta (Δ) represents change. The simplicity of the linear function is found in the fact that the slope is constant throughout the range of the function: the rate of change of the effect with respect to the change in the cause is the same at high levels of x as at low levels of x. To use the consumption function example, the linear slope (or marginal propensity to consume) says that the rich and the poor spend the same fraction of their additional income. Often these assumptions are not useful, because the rate of change varies as the independent variable changes. These more complicated phenomena require more difficult functions and mathematical techniques.

Economists have a special use for the slope: it is also called the margin. Students in introductory economics are told to associate margins with words like extra, additional, and one more. But margins are also mathematical concepts; a margin is “the change in the effect with respect to the change in the cause.” So if consumption depends upon income, then the Marginal Propensity to Consume is the change in consumption over the change in income. If a firm’s output depends upon the number of workers it hires, given a fixed amount of equipment, then ΔQ/ΔL is known as the Marginal Product of Labor. If a firm’s total cost depends upon how much it produces, then Marginal Cost = ΔC/ΔQ. Margins can always be expressed as slopes. In economics every analytical concept can be expressed by the slope of a line, and every slope has an economic meaning! A simple linear function of one variable is seen in (Fig. 12.1)
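Any of these margins can be computed as a discrete slope Δy/Δx. A small sketch, with cost figures that are made up purely for illustration:

```python
def slope(y1, y0, x1, x0):
    """Margin = change in effect over change in cause, i.e. dy/dx."""
    return (y1 - y0) / (x1 - x0)

# Hypothetical total costs: $18 to make 1 unit, $24 to make 2 units.
marginal_cost = slope(24, 18, 2, 1)
print(marginal_cost)  # 6.0: the cost of one more unit

# The marginal propensity to consume is the same kind of slope:
print(slope(950, 50, 1000, 0))  # 0.9
```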

Many economic phenomena have multiple causes. If you bought a sandwich for lunch you might have considered several variables: the price of the sandwich, the price of alternatives like soup, whether you had enough money, and whether you liked what was on the menu. It is difficult to separate cause from effect when there are multiple causes, yet the demand curve we specified above was a simple linear function of one variable. One method of simplifying is to pretend that things we know are variable are constant. These are known as ceteris paribus assumptions, where ceteris paribus is a Latin phrase meaning “all other things remain constant.” You might have spent many of your early days in introductory economics memorizing the lists of ceteris paribus assumptions for supply and demand. This method allows one to look for results by changing one variable at a time. Perhaps the most common mistake in introductory economics is to confuse changes in quantity demanded (caused by a change in price) with changes in demand (caused by a change in one of the assumed constants).

Non-Linear Functions

While linear models are certainly the simplest, many phenomena in nature and in the economy are simply not linear. In high school you probably learned the quadratic equation

ax² + bx + c = 0

The graph of a quadratic equation is a parabola, as seen in (Fig. 12.3). Parabolas are especially useful for communications, as the shape concentrates incoming parallel rays at a single focal point. Everything from satellite television dishes to radio astronomy uses the parabolic shape to advantage.

Fig. 12.3 Parabola generated from y = ax² + bx + c

Fig. 12.4 Power curve, where y = x^a + b, with a = 2 and b = 0

Many curves depicting the behavior of economic firms are cubic, that is, raised to the third power, as shown in (Fig. 12.2b). Cubic equations in economics express mathematically the presence of diminishing marginal returns, a concept first enunciated by David Ricardo. Since the rate of change of effect with respect to cause itself changes, the slope will be different for different levels of x. Isaac Newton and Gottfried Leibniz essentially simultaneously invented a special branch of mathematics, called calculus, to model changing rates of change. We will come back to calculus later.

Newton, in particular, needed the mathematics to understand Kepler’s laws of planetary motion, and invented calculus to integrate the motion of the planets: to show that the areas swept out by a planet during equal time intervals are the same even at very different parts of its elliptical orbit.

While there is a great deal that you can learn about calculus in many mathematics classes, what you need to know about calculus for this book on economics is found in the next two paragraphs. How can that be, you say, when there are semester-long courses in calculus for economics, and in college there is Calculus I, Calculus II, Calculus III, and more? Well, that is true, and we do not want to discourage you from taking two or four semesters of calculus if you have not already. But we have found again and again that even if our students have had two semesters of calculus they do not know, or at least do not remember, what calculus essentially means, even if they were able to solve many homework questions when they took the course. We know this from giving our upper-division students who have taken calculus a simple test: draw the curve that integrates a curve we draw on the blackboard, and then draw its first differential. We also ask them to write down the relation between the speedometer and the odometer in a car in terms of calculus. The students average about 25 percent on this test, the same as at an Ivy League university where one of us previously taught. Most of our science-based college seniors cannot answer these basic questions about calculus, although they have recently passed the course. Some, of course, can do that and far, far more, but they are not the average. The students have been studying to the test, but in doing so did not learn the most fundamental aspects of calculus. So if you are in that category, here it is, in three minutes: how to think about what is most important in calculus.

Think of the speedometer and the odometer (the little mileage counter usually within the speedometer) in an automobile. In terms of calculus the odometer integrates the speedometer (Fig. 12.9), and the speedometer is the first differential of the odometer (Fig. 12.10); they are inverse operations of each other. So if you drive for one hour at 40 miles an hour and one hour at 60 miles an hour, after two hours the integral of your traveling will be 100 miles, that is, you will have traveled 100 miles. Likewise, if you work for one hour at 10 dollars an hour and 3 hours at 12 dollars an hour, at the end of 4 hours your integrated pay will be 46 dollars. The differential half of the relation is that if you have traveled 100 miles in two hours, then by taking the first differential (and assuming a constant speed) your rate will have been 50 miles an hour. If you vary your speed, then finding the first differential, that is, the rate of change of your position at any instant, is a bit harder. That, in a nutshell, is all that calculus is about, although the essence is that calculus does these calculations for “infinitely small” periods of time. This is not so hard to grasp, for a good odometer is integrating the speedometer at each second (or less) of time, and the speedometer is showing you the instantaneous rate of change. Of course the math and the problems can get infinitely more complicated, but this is the most important thing you need to know about calculus for understanding the essence of biophysical economics.
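The odometer-speedometer relation can be sketched as a pair of discrete time-step operations, a finite approximation of true calculus with Δt = 1 hour (our own illustration):

```python
def odometer(speeds, dt=1.0):
    """Integrate: accumulate speed * time step into total-distance readings."""
    total, readings = 0.0, []
    for s in speeds:
        total += s * dt
        readings.append(total)
    return readings

def speedometer(readings, dt=1.0):
    """Differentiate: rate of change between successive odometer readings."""
    prev, rates = 0.0, []
    for r in readings:
        rates.append((r - prev) / dt)
        prev = r
    return rates

miles = odometer([40, 60])    # one hour at 40 mph, one hour at 60 mph
print(miles)                  # [40.0, 100.0] -> 100 miles after two hours
print(speedometer(miles))     # [40.0, 60.0] -> recovers the speeds
print(odometer([10, 12, 12, 12])[-1])  # 46.0 dollars of integrated pay
```

The two functions undo each other, which is the heart of the inverse relation between integration and differentiation.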

Thus if your money compounds in the bank, how much will your 100 dollars be worth in 5 years at ten percent interest? What will be the integrated cost of sea level rise caused by global warming over 100 years? We encourage you to learn much more about calculus, as the concept is really neat and useful. In practice the above examples can be solved easily on a computer using “finite difference” or time-step arithmetic, but the answers should still be thought of as integrating something over time, and that is what calculus is about. And remember that calculus was invented by Isaac Newton to solve a very practical question: how to understand and predict the motion of the planets around the sun. Calculus is important because it helps us focus not only on the present state of a system, but on how it is changing, and on what the ultimate results of that change will be.
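The bank-deposit question above can be answered by exactly this kind of finite-difference, time-step arithmetic; a minimal sketch:

```python
def compound(principal, rate, years):
    """Step year by year, adding each year's growth to the base."""
    value = principal
    for _ in range(years):
        value += value * rate  # this year's interest joins the principal
    return value

# $100 at ten percent interest for 5 years:
print(round(compound(100, 0.10, 5), 2))  # 161.05
```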

Exponential Functions

Some of the world’s most important functions are exponential functions, and according to physicist Albert Bartlett, one of the primary failures of the American educational system is that it does not teach an appreciation of exponential growth. Exponential growth occurs when each period’s growth is added to the base, so a constant percentage is applied to an ever-expanding base and the absolute increase grows with time. Exponential or geometric growth means that the new value (N_new) incorporates the previously determined quantity, so that the quantity, and hence each period’s increment, increases over time. This is the common situation of bank deposits growing, in theory, exponentially through compound interest. In that case, even though the equation remains linear in form, the solution, N_new, will grow at an increasing rate over time as the amount added to the quantity becomes greater and greater:

\( N_{new} = N_{t-1} + k \cdot N_{t-1} \)

In this case N_new, the new quantity of something, for example, the number of people, is a variable that (usually) increases over time. k is a growth coefficient as before. t refers to time, and as before goes from one to two to three as the equation is solved for one, two, or three (or more) years. N_{t-1} means the population for the previous time step, which after the first solution is no longer the original value. When this particular equation is solved over many years, the results will look as in (Fig. 12.12): a curve increasing at an increasing rate. We can solve such equations either analytically or, more commonly, numerically, that is, with a computer. To do this we write an algorithm (a sequence of mathematical steps) and solve it numerically. A simple computer code to solve these equations is given as Table 12.1. Today, and for several decades now, most complex mathematical equations are solved using computer models, which we introduce below.
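A minimal version of such an algorithm (our own sketch, standing in for the code of Table 12.1) steps the growth equation forward one year at a time:

```python
def grow(n0, k, steps):
    """Each period's growth k*N is added to the base: N_new = N + k*N."""
    n = n0
    history = [n]
    for _ in range(steps):
        n = n + k * n
        history.append(n)
    return history

# 1,000 people growing 2 percent per year for 3 years:
print(grow(1000, 0.02, 3))
```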

Exponential growth catches those who expect linear growth unaware, unprepared, and too often incapable of making the necessary changes to fix the problem. The Limits to Growth study recounts the ancient Persian legend of the Foolish Rajah. A bored potentate summoned his court magician to invent an amusement, and the result was the game of chess. The Rajah told the magician, who was also mathematically quite astute, that he could have anything he wanted. The magician asked for one grain of rice on the first square of the board, with the quantity doubled on the next square, and the next, until the last square. How much rice was on the 64th square? This can be easily calculated using powers of 2.

Square    Grains    Exponent
1         1         2^0
2         2         2^1
3         4         2^2
4         8         2^3
5         16        2^4
…         …         …
64        ?         2^63

Just how much rice is 2^63 grains? A quick conversion to base 10 yields approximately 9.2 × 10^18 grains on the last square alone, and about 1.8 × 10^19 grains on the board as a whole, far more than the entire world rice harvest in 2010. Notice also that every doubling includes all the increases that came before, plus one: 4 = 2 + 1 + 1; 8 = 4 + 2 + 1 + 1. From this property mathematicians have derived a convenient approximation for doubling time known as the “Rule of 70”: the doubling time of any quantity growing exponentially can be found by dividing the number 70 by the percentage growth rate (r%).
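Python’s arbitrary-precision integers make the chessboard arithmetic easy to verify:

```python
# Grains on the 64th square alone:
last_square = 2 ** 63
print(last_square)  # 9223372036854775808, about 9.2 * 10^18

# Grains on the whole board. Because each doubling equals everything
# that came before plus one, the total is 2^64 - 1:
whole_board = sum(2 ** n for n in range(64))
print(whole_board == 2 ** 64 - 1)          # True
print(whole_board == 2 * last_square - 1)  # True
```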

DT = 70 / r%
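In code, the rule is one line (a sketch; the function name is ours):

```python
def doubling_time(r_percent):
    """Rule of 70: years for an exponentially growing quantity to double."""
    return 70 / r_percent

print(doubling_time(2))    # 35.0 years at 2 percent growth
print(doubling_time(3.5))  # 20.0 years at 3.5 percent growth
```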

While a 3 percent annual increase in carbon dioxide emissions may not sound like much, it adds up quickly: at that rate the amount of extra carbon we put into the atmosphere doubles roughly every 23 years (70/3). And like our example of the rice, each doubling contains everything that came before, plus one. So if the business-as-usual scenario continues, we will have put more carbon dioxide into the atmosphere in the first decades of the 21st century than all of humanity has done in all previous time. Exponential growth gives you a far different perspective. As it turns out, the growth rate of carbon emissions has decreased from about 3.5 to about 2.5 percent per year, and the primary driver has been the world recession. So what has the misery bought us? An extra eight years (70/2.5 versus 70/3.5) to figure out how to move away from fossil fuels. If we do not, then climatologists predict that carbon dioxide will grow from the present level of about 390 parts per million of dry atmosphere to about 1200 ppm. Never in human history, and perhaps in the history of the Earth, has this occurred, and never has the increase in carbon concentration happened so rapidly. Exponential growth is powerful indeed!

The first political economist to make use of the concept of exponential or geometric growth was Thomas Robert Malthus. Malthus observed that while food production grew only arithmetically (what we call linearly), population, driven by the passion between the sexes, grew exponentially if left unchecked. While Malthus enunciated several “preventative checks” that would reduce the birth rate (moral restraint was his favorite), he thought, in the end, that the “positive checks” of famine, plague, and war would be more effective once the food supply was divided into the smallest portions that would support life. Mathematically this would occur at the time when the exponentially growing population curve intersected the linearly growing food production curve.

In fact, since Malthus’ time both the human population and food production have increased exponentially, with (arguably) food production even increasing somewhat more than the human population. The increase in food production is normally attributed to technology, which means plant breeding and better management, but especially an increased use of fertilizers, tractors, and so on. Essentially all of these inputs are based on an increasing use of petroleum, of course. Thus what Malthus’ equations lacked was a factor for the invention and enormous expansion of petroleum-based agriculture. Of course, if petroleum supplies become seriously constrained and good substitutes are not found, then maybe in the long run Malthus’ equations were right all along. While we believe that the constraints on food production Malthus envisioned may be a fact of life in the 21st century, we do not endorse Malthus’ recommendation of increasing the death rate among the poor.

Exponential growth is very important in economics for at least two reasons. The first is that the potential, and generally realized, exponential growth of the human population (and hence, in an approximate way, of economic activity) compounds sharply over time. The second is the exponential growth of money when invested. This concept excites many people who want to make a lot of money, for the potential is huge. A sobering reality check, however, can be found in the Bible. If we were to invest Judas’ 30 pieces of silver (worth perhaps $500 today if they were the size of silver dollars) 2000 years ago at only 2 percent, they would now be worth:

X = 500 · e^(0.02 × 2000)

The answer to this simple equation is roughly 1.2 × 10^20 dollars, far more than all the money on Earth now, which the World Bank estimates as 41 trillion dollars. A sobering conclusion is that, on average, investments on this Earth have yielded far less than two percent, which is less than the rate of inflation. That of course does not mean that you cannot do very well in the stock market, as long as the economy grows, anyway! But over the Earth’s history investments have probably failed at least as often as not.
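You can check the continuous-compounding calculation yourself; our quick sketch:

```python
import math

# Judas' 30 pieces of silver, taken as $500, at 2 percent
# continuously compounded over 2000 years:
value = 500 * math.exp(0.02 * 2000)
print(f"{value:.3g} dollars")  # on the order of 10^20 dollars
```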

Another important set of functions are logarithmic functions. Logarithms are the inverses of exponential functions: if 10² = 100, then log₁₀(100) = 2. Scientists regularly use two forms of logarithms. Common logs are those to the base 10, as in the line above. There is also the natural logarithm, calculated to the base e, where e is an irrational number approximately equal to 2.718282 (its digits go on forever without a repeating pattern). The number e possesses some very interesting mathematical properties (such as the fact that the rate of change of e^x is e^x itself) that make it very useful. For example, the natural logarithm (ln) of 2 is about 0.693; multiplied by 100, this is where the 70 in the “Rule of 70” comes from. Logarithms are useful when one wants to compare absolute changes to relative changes, or wants to tame very big numbers by looking at percentage changes. Many natural processes can be modeled logarithmically. The saturation curve depicted in (Fig. 12.5) is a logarithmic curve. The pH scale, which determines whether a compound is an acid or a base, is logarithmic, as is the Richter scale, which measures the intensity of the energy released by an earthquake.
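The standard-library math module illustrates these relationships directly:

```python
import math

# Logarithms invert exponentials: 10**2 == 100, so log10(100) == 2.
print(math.log10(100))        # 2.0

# The natural log of 2 is the source of the Rule of 70 (0.693 * 100 ~ 70):
print(round(math.log(2), 3))  # 0.693

# e itself, to the precision quoted in the text:
print(round(math.e, 6))       # 2.718282
```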

Fig. 12.5 Saturation curve (Michaelis-Menten), where y = x_max · x/(K_s + x); in this case x_max = 900 and K_s = 500

Fig. 12.6 Solutions to Malthus’ linear versus exponential equations, using approximate values for England in Malthus’ time (1800)

Statistics

Perhaps the mathematical tool used most commonly in economics is statistics. Statistics is useful in many ways, but most importantly:

1. To help understand the degree of uncertainty associated with a number, and

2. To assess the degree to which different things are, or are not, related; that is, whether y is indeed a function of x and in what way.

Considering #2 above, we might want to know: is economic growth related to investments? To the number of workers? To the quantity of energy used? To technical innovations? To the exploitation of resources? Which resources? Obviously the answer is not simple, and with economic relations it is very difficult to find. When a chemist is trying to understand a solution of chemicals in the lab, he or she can usually run an experiment with and without a particular material added to the mix and get a pretty robust answer about what does or does not contribute to a particular end product. In economics it is normally much more difficult to undertake such experiments, because you are dealing with a system outside laboratory control in which many things may be happening simultaneously. Nevertheless, unraveling cause and effect is not impossible, and is increasingly being done for some issues (see Chap. 12). So with experiments often difficult or impossible, economists often analyze existing economies over time, or compare many different economies, for example, between countries. To do this the most useful tool is generally some form of statistics.

Correlation

Probably the most basic statistical tool is correlation. Correlation examines whether when variable a gets larger does variable b also? Has economic growth depended upon increased energy use in the United States? In this case we might consider the economic growth the dependent variable and the energy use the independent variable, independent meaning that it changes without influence of the dependent variable. Plotting the data for 1900–1984 (Fig. 12.7) we would answer, “Yes, it appears that it does.” The relatively high R2, the most commonly used measure of goodness of fit. The high value of this coefficient of determination implies that the two are closely related, or at least tend strongly to co-occur. But if we think about it a little bit more we find at least two problems with what we have done. First of all we cannot say logically whether economic growth depends upon energy use or energy use depends upon economic growth. It is a chicken or egg question with no clear answer. What we can say is that the economic activity and the energy use are correlated, or co-related: when one is high the other tends to be high and the converse. So that is a power (and a weakness) of statistical correlation. It does not tell you something that is not true, but it does not really help you as much as you would like either for determining which is the independent variable and which the dependent, or even if you are asking an appropriate question. Another problem is that if we look at the relation for 1984–2005 there appears to be considerable economic growth with relatively little increase in energy use (Fig. 1.1). This shows you another characteristic of statistics: what happened in the past may or may not continue into the future. 
(Or, as we believe, it may show that we have not fully specified the problem: there are some indications that the inflation-corrected GDP has been exaggerated relative to the past (see “shadowstatistics.org”), and, of course, the United States has outsourced a lot of its heavy industry since 1984.)
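
The arithmetic behind a correlation coefficient is simple enough to sketch directly. The following is a minimal illustration using made-up index numbers for energy use and GDP (not the actual data of Fig. 12.7); note that swapping the two series gives exactly the same r, which is precisely why correlation alone cannot tell you which variable is the independent one.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical index numbers: energy use and GDP rising together over time.
energy = [10, 12, 15, 19, 24, 30]
gdp    = [20, 23, 31, 38, 47, 60]

r = pearson_r(energy, gdp)
r_squared = r ** 2  # the coefficient of determination, R-squared
print(round(r, 3), round(r_squared, 3))
```

Because the formula is symmetric in the two series, pearson_r(energy, gdp) equals pearson_r(gdp, energy); the statistics cannot assign the roles of cause and effect.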

Fig. 12.7 Linear correlation

A further problem dogging statistical analysis is covariance: two parameters may increase or decrease together while in fact having little or no direct relation to each other. The correlation would suggest that they are responding to one another, but in fact both may be responding to a third factor. For example, both temperature and photosynthesis of plants in a field tend to increase during the first half of the day, and one might conclude that one causes the other. But in fact each is responding independently to the increase in sunlight.
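
This shared-cause effect is easy to reproduce with a small simulation. In the hypothetical sketch below, temperature and photosynthesis are each generated from sunlight plus independent random noise, with no influence on each other whatsoever, and yet the two correlate strongly:

```python
import random

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)

# Sunlight rising through the morning (arbitrary units).
sunlight = list(range(1, 25))
# Temperature and photosynthesis each respond to sunlight plus independent noise;
# neither responds to the other.
temperature    = [0.5 * s + random.gauss(0, 0.5) for s in sunlight]
photosynthesis = [2.0 * s + random.gauss(0, 2.0) for s in sunlight]

print(round(corr(temperature, photosynthesis), 2))  # high, yet neither causes the other
```

The high correlation here reflects only the common driver, sunlight; the statistic alone cannot distinguish this from a genuine causal link.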

The issue is further complicated by problems involving many parameters. Ideally we would like to study one independent variable and one dependent variable, and if we are lucky we would find a straightforward relation similar to what we see in Fig. 12.7. But what if some other factor were influencing the dependent variable? For example, we know that plants also need adequate water and nutrients. So if we want to understand or model how plants grow we need to untangle the possible effects of each of these factors. If we are measuring the growth of a natural plant, or one in an agricultural field, we would need to collect considerable meteorological and soil data to unravel these effects, and we would then need to use multifactorial statistics to attempt to estimate the influence of each factor.
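
Multifactorial statistics can be sketched in its simplest form: an ordinary least-squares fit with two predictors, solved via the normal equations. The plant-growth numbers below are invented so that the true coefficients are known in advance, and the fit should recover them exactly:

```python
def fit_two_predictors(y, x1, x2):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the normal equations."""
    n = len(y)
    # Columns of the design matrix: intercept, x1, x2.
    cols = [[1.0] * n, list(x1), list(x2)]
    # Build the 3x3 system (X^T X) b = X^T y.
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(3)] for i in range(3)]
    rhs = [sum(c * v for c, v in zip(cols[i], y)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        rhs[k], rhs[p] = rhs[p], rhs[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            rhs[r] -= f * rhs[k]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        coeffs[k] = (rhs[k] - sum(A[k][c] * coeffs[c] for c in range(k + 1, 3))) / A[k][k]
    return coeffs  # [b0, b1, b2]

# Invented data generated exactly from growth = 1 + 2*water + 0.5*nutrient.
water    = [1, 2, 3, 4, 5, 6]
nutrient = [2, 1, 4, 3, 6, 5]
growth   = [1 + 2 * w + 0.5 * u for w, u in zip(water, nutrient)]

b0, b1, b2 = fit_two_predictors(growth, water, nutrient)
print(round(b0, 3), round(b1, 3), round(b2, 3))
```

With real field data the fit would of course not be exact, and the residual scatter is what the statistics of the next section try to characterize.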

Econometrics

Econometrics is defined broadly as statistics applied to economics, but it is increasingly associated with analyzing how variables change over time and with testing for causality. Most of these analyses attempt to account for statistical biases that arise when working with time-series variables. In addition, several mathematical properties must hold for the statistics to work properly: error terms need to be distributed normally, the error term in one time period is not supposed to be correlated with the error term in the next, and independent variables are supposed to be truly independent, not interacting with one another. If any of these conditions fails to hold, the statistician cannot make confident predictions or inferences because the measure of dispersion, called the variance, will be too great. In much applied work, however, these conditions do not hold, so a great deal of the econometrician’s time is spent dealing with heteroskedasticity, serial correlation, and multicollinearity. Today econometrics is a large academic field with its own textbooks, journals, and so on. Its techniques can be very good ways of understanding what is really happening in real economies, as long as the proper factors are entered into the equations. For example, we have been very impressed with Robert Kaufmann’s econometric work examining the degree to which the United States is or is not becoming less dependent upon fuels [5] and also where greenhouse gases are going [6].
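
One of those diagnostics can be sketched concretely: the Durbin–Watson statistic, a standard check for serial correlation in a regression’s residuals. The residual series below are simulated, not real economic data, but they show the contrast between independent errors and errors with strong memory:

```python
import random

def durbin_watson(residuals):
    """Durbin-Watson statistic on a residual series: values near 2 suggest
    little serial correlation; values near 0, strong positive correlation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

random.seed(1)
# Independent (white-noise) residuals.
white = [random.gauss(0, 1) for _ in range(200)]
# Strongly autocorrelated residuals: each error carries 90% of the last one.
ar = [random.gauss(0, 1)]
for _ in range(199):
    ar.append(0.9 * ar[-1] + random.gauss(0, 1))

print(round(durbin_watson(white), 2))  # close to 2
print(round(durbin_watson(ar), 2))     # well below 2
```

When a fitted model’s residuals look like the second series rather than the first, the standard errors of the regression are unreliable, which is why econometricians check for this before trusting their inferences.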

Limits of Calculus

In fact most real problems cannot be solved through the use of complex analytic mathematics such as calculus [1,2,3,4]. The reason is that economics is about, or should be about, many processes occurring simultaneously, and analytic mathematics such as calculus can usually solve no more than one or two equations simultaneously (think back to your high school algebra, when you were taught to solve one, then two, but not three, equations simultaneously). The problem becomes more difficult when the equations are nonlinear (that is, the basic factors are represented not by a straight line but by a curved line) or when partial differential equations are required. In fact most real economies involve many nonlinear things occurring and interacting simultaneously. If the price of one major commodity (say oil) changes, it is likely to influence many other aspects of the economy, not just one or two. So a lot of fancy-looking mathematics has to simplify these complex real problems into simpler “analytically tractable” forms so that elegant solutions can be found through analytic means. The results may look impressive (and indeed often are), but we have to ask very carefully whether the mathematical solution represents the real-world situation or rather some simplified, and hence “analytically tractable,” formulation. The answer is sometimes yes, sometimes no. The good news is that there are very powerful quantitative tools, in computer models and even spreadsheets, that allow people of good intuition but relatively modest mathematical skills to undertake extremely quantitative analyses of economies. But there are no spreadsheets that can test whether your concepts accurately represent the phenomena analyzed.
The process of examining whether your model is correct, or at least adequate, is called validation. This is the critical step that seems to us to be lacking from most contemporary economic analysis. Sensitivity analysis, in turn, examines the degree to which uncertainty in model formulation or parameterization allows one to trust your results or reach certain conclusions. Economic models could be brought much more in line with the procedures used by natural scientists by increasing the use of validation and sensitivity analysis, both discussed further below.

An additional problem is that economic models tend to focus almost entirely on factors intrinsic to the system being modeled, such as interest rates, money supply, and so forth, and very little on what we as modelers call forcing functions: factors outside the model that influence the behavior of the system. For example, no economic models predicted the impacts of the oil price increases (and supply disruptions) of the 1970s or 2008, because the possibility of such “external forcing” was not built into the models. Similar problems occurred in ecology when populations were modeled as if only their own dynamics determined their abundance, rather than external factors such as climate. Thus it is important to build models that attempt to reflect the entire workings of the economic system, that is, to use a real systems approach.

A great deal of economics is dominated by calculus, a sophisticated approach to quantitative analysis concerned with dynamics, or changes over time: differential calculus with the rate at which things change, and integral calculus with the cumulative effect of changes over time. Calculus was invented apparently independently by Isaac Newton in England and Gottfried Leibniz in Germany. Newton, in particular, needed new mathematics to understand Kepler’s laws of planetary motion, and invented calculus to integrate the motion of the planets, showing that the arcs swept out by a planet during equal time intervals, even at very different parts of its elliptical orbit, enclose equal areas.

You can make a career out of calculus, but what you need to know about it for this book on economics is found in the next two paragraphs. We have found again and again that even when our students have had two semesters of calculus they do not know, or understand, what calculus essentially means. We know this from giving our upper-division students who have taken calculus a simple test: to draw the curve integrating a curve we draw on the blackboard, and then its first differential. We also ask them to write down the relation between the speedometer and the odometer in a car in terms of calculus. The students average about 25% on this test, the same as at an Ivy League university where one of us previously taught. Even most of our science-based college seniors cannot answer these basic questions about calculus, although they have recently passed the course. The students do the homework and study for the exams, but in doing so do not learn the most fundamental aspects of calculus. So if you are in that rather large category, here it is, in 3 min: how to think about what is most important in calculus.

Think of the speedometer and the odometer (the little mileage counter usually within the speedometer) in an automobile. In terms of calculus the odometer integrates the speedometer (Fig. 12.8), and the speedometer is the first differential of the odometer (Fig. 12.9). They are inverse functions of each other. So if you drive for 1 h at 40 miles an hour and 1 h at 60 miles an hour, after 2 h the integral of your traveling will be 100 miles; that is, you will have traveled 100 miles. Likewise, if you work for 1 h at 10 dollars an hour and 3 h at 12 dollars an hour, at the end of 4 h your integrated pay will be 46 dollars. The differential half of the relation is that if you have traveled 100 miles in 2 h, then by taking the first differential (assuming a constant speed) your rate will have been 50 miles an hour. If you vary your speed, the first derivative, that is, the rate of change of your position, is a bit harder to derive, but the idea is the same. That, in a nutshell, is all that calculus is about, although the essence is that calculus does these calculations for “infinitely small” periods of time. This is not so hard to grasp, for a good odometer is integrating the speedometer at each second of time, and the speedometer is showing you the instantaneous rate of that integration. Of course the math and the problems can get infinitely more complicated, but these are the two relations that you need to know about calculus for understanding the essence of biophysical economics.
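
The speedometer–odometer relation can be written out as a few lines of arithmetic: integrating by accumulating, and differentiating by taking successive differences. The sketch below uses the trip from the text, 1 h at 40 mph followed by 1 h at 60 mph, with a time step of one hour:

```python
# Speeds (mph) recorded for each 1 h interval of a trip.
speeds = [40, 60]
dt = 1.0  # time step in hours

# Integration: the odometer accumulates speed * time at each step.
odometer = []
total = 0.0
for v in speeds:
    total += v * dt
    odometer.append(total)

# Differentiation: recover speed from successive odometer readings.
recovered = [odometer[0] / dt] + [
    (odometer[i] - odometer[i - 1]) / dt for i in range(1, len(odometer))
]

print(odometer)   # [40.0, 100.0]  -- miles traveled so far
print(recovered)  # [40.0, 60.0]   -- the original speeds
```

Real calculus does exactly this, but with the time step dt shrunk toward zero; the finite-difference version is what a computer, or an odometer, actually performs.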

Fig. 12.8 Calculus: taking the first derivative

Fig. 12.9 Calculus: integrating a function

Thus, if you integrate your compound interest in the bank, how much will your 100 dollars be worth in 5 years at 10% interest? What will be the integrated cost of global-warming-caused sea level rise over 100 years? How do you integrate the amount of oil remaining in an oil field if you remove a certain amount each year? We encourage you to learn much more about calculus, as the concept is really useful. In practice the above examples can be solved easily on a computer using finite-difference, or time-step, arithmetic. But the answers should still be thought of as integrating something over time, and that is what calculus is about. And remember that in Newton’s case calculus was invented to solve a very practical question: how to understand and predict the motion of the planets around the sun. Calculus is important because it helps us focus not only on the present state of a system but on how it is changing, and on what the ultimate results of that change will be.
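
The first and third of these questions can be answered with exactly the finite-difference arithmetic just described; each pass through the loop is one time step (the 5% annual extraction rate in the oil example is an invented figure for illustration):

```python
# Compound interest by time steps: $100 at 10% per year for 5 years.
balance = 100.0
rate = 0.10
for year in range(5):
    balance += balance * rate  # each step adds one year's interest
print(round(balance, 2))       # 161.05

# Oil remaining if a field starts with 1000 units and 5% of what
# remains is extracted each year for 10 years.
oil = 1000.0
for year in range(10):
    oil -= 0.05 * oil          # each step removes one year's extraction
print(round(oil, 1))           # 598.7
```

Both loops are integrations: they accumulate (or deplete) a stock one step at a time, which is the discrete version of what the integral sign does continuously.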

What Is the Proper Use of Mathematics in Economics and Natural Science?

Part of what defines science to many people (including scientists themselves) is the use of mathematics, and mathematical models, to define and resolve problems. The power of mathematics (in its broad sense) is to make quantitative predictions from known (or hypothesized) relations of a system, which are usually called a model. The process of examining whether your model is correct, or at least adequate, is called validation. Sensitivity analysis is the examination of the degree to which uncertainty in model formulation (how it is structured) or parameterization (what numerical coefficients are assigned) allows one to trust your results or reach certain conclusions. It is through validation and sensitivity analysis that models generate their (sometimes) tremendous power in resolving and even predicting truth, such as is possible and accessible to the human mind.
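
A sensitivity analysis can be as simple as re-running a model while nudging one uncertain parameter. The sketch below uses an invented five-year compound-growth “model” and varies the assumed growth rate by plus or minus 10% around a base case, reporting how much the output moves:

```python
def grow(principal, rate, years):
    """A tiny 'model': compound growth of principal at a fixed annual rate."""
    for _ in range(years):
        principal *= 1 + rate
    return principal

base_rate = 0.10
outputs = []
for rate in (0.9 * base_rate, base_rate, 1.1 * base_rate):
    outputs.append(grow(100.0, rate, 5))

low, mid, high = sorted(outputs)
print(round(low, 2), round(mid, 2), round(high, 2))  # about 153.86 161.05 168.51
```

Here a 10% uncertainty in the rate moves the five-year result by roughly 5%, a quantitative statement of how much the conclusion depends on that one assumption; a full sensitivity analysis simply repeats this for every uncertain parameter.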

The reader by now has seen our distrust of many mathematical models. Even so, we are strong advocates of modeling as a tool. What then is the proper role of mathematics in the scientific process if its models are so frequently incorrect? First it is necessary to distinguish mathematical from quantitative. Quantitative means simply using numbers in an important way in your analysis (e.g., 3 salmon versus 7). This does not require any particular mathematical skill, although getting accurate numbers may require enormous skills of a different kind. Mathematical means using the complex tools of quantitative analysis to manipulate those numbers or to study relations among them; it includes algebra, geometry, calculus, and so on. It is our belief that it is much better to learn good quantitative methods, including a clear understanding of the relation between the real world and the equations you are attempting to use, than to become a mathematical whiz at solving problems poorly connected to reality.

Analytic Versus Numeric

As we have said, there are two principal means of manipulating numbers: analytic and numeric. In our opinion (but hardly everyone’s) there are very few real problems in economics that can be represented adequately by analytic equations, and much of the economics done by complex analytic analysis yields mathematical but not economic results. The use of analytic mathematics does, however, have one major benefit: through the manipulation of equations you can transform a cause-and-effect relation stated in a form that obscures the patterns you need to see into one that reveals them. In other words, analytic approaches can sometimes help you visualize clearly a concept you are trying to understand. In practice there are severe restrictions on the class of mathematical problems that can be solved analytically, often requiring a series of sometimes unrealistic assumptions to put the problem into a mathematically tractable form. In addition, the mathematical training required to undertake such analytic procedures precludes their use by many.

The second, numeric, technique gives approximate answers to an enormously broader set of possible equations, often arranged in complex algorithms (or numerical recipes) solved stepwise in a computer. In theory either method can be used to solve many particular quantitative problems, and sometimes this is done. Fortunately, if one learns computer programming, or even becomes really good with a spreadsheet, one can solve systems of equations that the best earlier mathematicians could not. More commonly, equations are solved through the use of various spreadsheets, apps, and special programs, with the mathematics, usually numeric, hidden from the user.
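
A minimal example of the numeric approach is stepwise (Euler) integration of exponential growth, dP/dt = rP. This problem also has an analytic solution, P(t) = P0·e^(rt), so the approximate answer can be checked against the exact one:

```python
import math

P0, r, T = 100.0, 0.05, 10.0  # initial value, growth rate, time horizon
steps = 10_000
dt = T / steps

# Euler method: advance the stock by dP = r * P * dt at each small time step.
P = P0
for _ in range(steps):
    P += r * P * dt

analytic = P0 * math.exp(r * T)  # the exact, analytic answer
print(round(P, 2), round(analytic, 2))  # both print 164.87
```

For this equation the analytic route happens to exist; the point of the numeric route is that the same loop still works when the model grows to dozens of interacting nonlinear equations for which no analytic solution does.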

The use of analytic mathematics was especially important in the development of physics in the early part of the past century, and the creation of the atom bomb was tangible evidence to many of the power of pure mathematics combined with practical application. Even so, the complex fluid-dynamics equations required to build the bomb could not possibly be solved by analytic means, and many of the nation’s mathematicians spent the summer of 1944 in Los Alamos, New Mexico, solving those equations numerically with hand-cranked calculators, something that a single good undergraduate computing student could now do in an afternoon! [7] The success of mathematics in physics, both analytic and numeric, led many practitioners in other disciplines, including ecology and economics, to attempt to emulate (or at least give the appearance of) the mathematical rigor and sophistication of the physicists. This in turn led the ecologist Mary Willson to decry many of their efforts, which she said were undertaken out of what she called “physics envy” (Freudian pun intended) [8]. Nevertheless, even Einstein preferred to solve his problems without mathematics when that was possible. Other sciences in which mathematical models have been especially important include astronomy, some aspects of chemistry, and some aspects of biology, including demography and, in some cases, epidemiology.

A final problem with models is frequent confusion between mathematical and scientific proof. Mathematics can generate real proofs relatively easily because you are working in a defined universe (set by the assumptions and equations used) to which the proofs apply. If you define a straight line as the shortest distance between two points, then you can solve many problems requiring straight lines. But the world handed to us by nature is neither so straight nor so clearly defined, and we must constantly struggle to represent it with our equations. Hence a mathematical proof becomes a scientific proof only in the relatively rare circumstances when the equations do indeed capture the essence of the problem. We all have been seeking to follow in Newton’s footsteps, but Newton may have skimmed the cream from what nature has to offer. Meanwhile many ideas in economics are more mathematical than real. At the extreme, Krugman [9] has said that a main reason for the financial meltdown of 2008 was that Wall Street turned its analyses over from people with financial acumen to people with extremely strong mathematical skills.

Despite all the problems of modeling, we do not understand how one can use the scientific method (i.e., generate and test hypotheses) on complex issues without formal modeling, for it is modeling that allows one to apply the scientific method to complex real systems of nature, and of humans and nature. But it is critical that the right kind of models be used, and the way to do that is quite simple: try to represent the real system you are dealing with, rather than some abstraction that happens to be analytically tractable or elegant.

Questions

  1. What is the difference between mathematical and quantitative analysis?

  2. Under what circumstances is scientific rigor the same as mathematical rigor?

  3. Under what circumstances is analytical mathematics most useful?

  4. What is the difference between constants and variables?

  5. What does “is a function of” mean?

  6. What does linear mean? Can you give an example of something that is linear?

  7. Give three examples of nonlinear functions or relations.

  8. What is an algorithm?

  9. How is a correlation different from a function?

  10. Define econometrics.

  11. Define calculus in terms of something familiar in your everyday life.

  12. How does “finite difference” relate to calculus?

  13. What does validation mean? Sensitivity analysis?

  14. Distinguish between analytical and numeric approaches to solving mathematical equations.

  15. Analytical techniques are best suited to what kind of scientific problems?

  16. If the equations of economics are often complex, why are they frequently described using analytical approaches?