
A Swift Overview of Eating and Drinking Since Antiquity

  • Paul Erdkamp
  • Wouter Ryckbosch
  • Peter Scholliers

Abstract

This chapter offers a very broad survey of the transformation of the diet in the past 2500 years. Such an ambitious venture tends to highlight spectacular changes, such as the so-called Columbian exchange from the late fifteenth century onwards. These changes undoubtedly altered the diet radically, but many other, smaller and less striking developments also played their part in the long run. This survey focuses on the history of eating and drinking, primarily but not exclusively in the West, and not on the history of agriculture, commerce, retailing, or cooking. It emphasizes the quantity and diversity of food, its consumption, food policies, and health implications. Inevitably, all big and small changes in the food chain are reflected in the history of eating and drinking.

Keywords

Dietary transformations · World history · Long-term food history · Nutritional transition

Introduction

Writing a survey of about 2500 years of eating and drinking throughout the world cannot but be unsatisfactory and incomplete. Authors or editors who have ventured to do this used hundreds of pages or several volumes, unless they confined themselves to one specific foodstuff. All of these studies emphasize the crucial role of food in great transformations such as urbanization or migration, as well as in everyday processes such as identity construction or social differentiation. In asserting a central role for eating and drinking in social relations, economics, policy, language, medicine, gender, fears and dreams, or any other domain, this survey aims to do no less.

Rather than proposing a chronological ordering, a thematic structure is chosen, with topics that have recently attracted great interest in historical studies. This yielded eight sections. The daily diet is connected to malnutrition and health, which links up with dietary insights and advice; the latter connects to food beliefs, policies, and rituals; all of this is situated within the exchanges of ideas, people, and products. Geographically, this survey concentrates at first on Ancient Greece and Rome, as the world of classical antiquity shows that developments did not take a linear course, while at the same time it often offered the origin and inspiration of later ideas. Our survey takes a more global view from the late Middle Ages onwards. Because the emphasis is on eating and drinking, this essay only marginally addresses the history of agriculture, retailing, or cooking. Some aspects are, inevitably, also largely ignored, such as the time spent on eating or its material culture.

Daily Diets

For a long time, the basic foodstuffs consisted of plant foods that provided the necessary carbohydrates. Grain continued to be the staple source of life across Europe, rice performed a similar role in East Asia, and maize allowed the Inca, Maya, and Aztec civilizations to prosper. The importance of complementary foodstuffs – dairy products, fish, meat, and legumes – varied according to place and time, but the tyranny of the starchy crops was most prominent when necessity forced people to opt for the cheapest sources of energy (Braudel 1992). Caloric intake was near the subsistence limit for most people living before the industrial era, averaging approximately 2,400 kcal per adult per day (Pomeranz 2005). Nevertheless, huge chronological, geographic, and social variations existed.

The diet in Classical Antiquity, it is often observed, was dominated by the so-called Mediterranean triad consisting of cereals, olive oil, and wine. Although not entirely untrue, this view needs nuance and clarification (Garnsey 1999; Wilkins and Hill 2006; Wilkins and Nadeau 2015). First, there was no common diet in Antiquity, as it varied according to geographical, rural/urban, and socioeconomic background. Moreover, the general term “cereals” encompasses a wide variety of crops, including various species of wheat and barley, each with different properties and eaten in different forms. On the Greek mainland and islands, species of barley remained important, as they were better suited to the dry climate, while the Romans preferred wheat. There is a general trend over time from hulled wheats (such as emmer wheat) to naked wheats, in particular hard wheat, which were easier to process and transport, and therefore better suited to provisioning the cities (Heinrich and Hansen 2018a, b).

Literary sources show a clear cultural preference for consumption in the form of leavened bread, made out of hard wheat, although other forms of (un)leavened bread were eaten too. Cereals were also eaten in the form of porridge. Various kinds of cereals offered the largest part of calories, mostly so in cities, as the people in the countryside had access to a wider variety of foodstuffs. Yet, consumption of pulses, such as lentils, chickpeas, and faba beans, should not be underestimated. Cereals and pulses were often eaten in combination. Olive oil was a staple in Mediterranean lands, while lard or butter took its role in Central and Western Europe. Wine mixed with water was common, but the poor generally drank sour wine (posca) mixed with water, or the last dregs of pressed grapes. In the countryside, the milk of cows or goats was drunk, but cheese was a more common way of consuming milk (Broekaert 2018).

Rich Greeks and Romans basically ate the same foodstuffs, but in more luxurious forms – white bread rather than porridge – and in different proportions, with more meat, sea-fish, richer wine, and fruits. Poor people, in particular in the countryside, often had to rely on food that well-to-do Romans saw as fodder, such as barley and other “inferior” cereals, while some country-dwellers had to make do with acorns and chestnuts during part of the year.

Not only the number of calories matters, but also the variety of caloric sources. Pulses and legumes were a vital source of proteins, which was all the more important as the consumption of animal proteins was probably limited for most people, not only in Antiquity, but also in later centuries. In the Greek world, many people probably consumed meat of cattle, pigs, or sheep mainly in a sacrificial context (Ekroth 2007). In Roman times, access to meat seems to have increased (MacKinnon 2004, 2018; Chandezon 2015). Fish was consumed fresh, salted, or otherwise processed for long-term storage (such as the famous fish-sauce garum), but consumption of seafood was limited away from the sea-coast (Mylona 2015; Marzano 2018).

The extent to which Europeans in medieval and early modern times were more carnivorous than Asians, Africans, or Americans has been an important topic of debate among historians (Braudel 1992). When European travelers, diplomats, or merchants in the seventeenth and eighteenth centuries wrote about the diet of the Chinese, Turks, or Japanese, they often remarked how rarely meat was eaten, although later estimates have placed the aggregate protein consumption in the seventeenth century in China on par with that in England (Pomeranz 2005). Perhaps differences were clearer in earlier times, since it is commonly held that in the two centuries following the Black Death (ca. 1348), meat consumption in Europe was more common than it had been before or would be again until modern times (Abel 1937; Teuteberg 1986). The idea was that the decline in population numbers and the redistribution of resources and economic bargaining power following the demographic crisis of the Black Death allowed for a richer diet for the remaining population. More recent research has called into question this straightforward relation between meat consumption and demography, and in particular the rise of meat consumption in the post-Plague period (Thoen and Soens 2010). However, it is clear that after 1550, when the population began to exceed pre-Plague levels again, regularly eating meat became a prerogative of the rich, or at least confined to a more limited number of days per week or per year. Declining numbers of butchers, decreasing revenues from excises on slaughtered animals, and the dwindling share of meat in hospital budgets all suggest that the carnivorous consumption of Europeans had finally reached the lowest point of its long decline since the Middle Ages.

Everywhere across the world before the eighteenth century, most of the food and drink was locally produced. Cities procured their food provisions as much as possible from their immediate surroundings (a distance of some 25 km), and the local prevalence of meat, fish, and specific vegetables largely determined the daily diet of consumers. Transport costs and the hazards of long-distance trade loomed too large to allow most – but not all – bulk consumables to be transported. However, exceptions existed, and these became more prevalent over time. Some densely populated places did import their basic necessities from much further away. Imperial Rome depended on grain from Egypt and modern Tunisia, while Constantinople, capital of the Byzantine Empire, received grain from the lands surrounding the Black Sea and the Aegean. Florence relied on Sicilian grain from at least the thirteenth century, regions such as Picardy exported large quantities of grain to Paris and Flanders from the late Middle Ages onwards, and in early modern times the urban population of Holland relied on the regular shipping of grain from the Baltic Sea (Braudel 1992).

However significant such reliance on trade was, these quantities and regions were exceptional, as the vast majority of food and drink was consumed in the immediate vicinity of where it was produced. This reliance on local produce does not imply that monotony was the rule. Even for those who could not rely on the import of foodstuffs from farther away, there was often a great variety of food types available in medieval and early modern times. Forests, rivers, and wild places abounded with wild birds, fish, mushrooms, herbs, berries, roots, nuts, fruits, and edible plants that either disappeared over time or were in later times no longer deemed sufficiently edible (Albala 2003).

By the late eighteenth century, the so-called industrial revolution had begun to affect the daily life of tens of thousands of people across the world in a negative way: many men and women lost their jobs, fell back on charity, moved to overcrowded towns, and became fully market-dependent. Only a few people gained from the process of industrialization, thanks to high returns or well-paid jobs in new trades. As a consequence, social and spatial inequality grew, with direct effects on the daily diet. It took several decades before the majority of the people, aided by rising purchasing power, benefited from improvements in agriculture, transport, manufacturing, and distribution, and these improvements were marked by big geographical disparities.

Data from the Food and Agriculture Organization, starting in 1961, make it possible to assess the nutritional transition in the world since 1800, when supplemented with estimates of caloric intake per head for the nineteenth and first half of the twentieth centuries (FAOSTAT 2018; Allen et al. 2005). Around 1800, parts of Asia, Europe, and North America had a similar supply of about 2,300 kcal per head per day, but different protein intakes due to the higher consumption of meat and dairy products in Europe and North America. Other parts of the world had much less food available. The average caloric supply in Western Europe rose slowly until the late 1850s, albeit with huge fluctuations and geographical variations, and then grew almost linearly up to the 1910s (3,200 kcal), which, for the first time in history, assured food security for most Western Europeans. Calorie supply then declined until the 1960s (3,050 kcal), to increase again since then (3,350 kcal).

In general, most parts of the globe lagged behind this West-European pattern to various degrees. Changes in human height (in which food plays a crucial role, alongside disease, hygiene, and preadult labor) allow us to assess nutritional standards in the past. The average stature of men in northern Europe gradually declined from the late Middle Ages until the early nineteenth century, when it started to grow irregularly (Baten and Blum 2014; Steckel 2005). In Latin and North America, height stagnated in the nineteenth century and grew afterwards, especially in Canada and the United States; in Asia it dropped until 1880 and again around 1910, only increasing after 1950; height fluctuated heavily in Sub-Saharan Africa in the nineteenth century and stagnated in the twentieth century; finally, height oscillated in North Africa and the Middle East until the 1910s, increasing from then onwards. After 1960, the world’s average caloric supply grew to 2,880 kcal per head per day (in 1961: 2,200). This caloric intake fulfills energy needs and has brought about a fast-growing group of overweight people in many countries. Yet health concerns gradually influence the food consumption of wealthier people in well-off countries, who lower their intake of, for example, sugar and alcohol.

As mentioned above, up to the 1900s most people ate and drank what was locally produced. Only small, elite groups regularly consumed food and drinks that came from remote shores, for example, cocoa, wine, or spices. Yet radical changes occurred with the coming of new foodstuffs that gradually came to be grown locally (see below, the Columbian exchange). For instance, the diffusion of potatoes in Europe or of cassava in Africa and South-East Asia transformed the staple food in the first half of the nineteenth century. However, as international trade intensified after 1850 (driven, among other things, by expanding colonialism), the daily diet was transformed once and for all. Worldwide, a pattern developed in which the rising purchasing power of the masses determined the pace of change. At first, more of the staple food is consumed; then, little by little, more expensive food (meat, fats, sugar, dairy products) drives back the importance of staple foods; and finally, a diverse diet emerges with a high share of animal foodstuffs (Grigg 1999). This global process involves the expanding consumption of manufactured food, the lengthening of the food chain, the fading away of seasonal foods, the appearance of “food niches” (aimed at particular groups such as youngsters, sportsmen, or young mothers), and the idea of food security and of individual choice regarding eating and drinking (Scholliers 2007). This transformation can be summarized by the concept of nutritional transition, or the move from almost constant undernourishment to enough and even too much food (Popkin 2011).

From Malnutrition to Obesity

The cereal-dominated diet of the premodern world has often been regarded as poor in nutrients, but this is a misconception that insufficiently takes into account that other elements of the diet (though often contributing little in calories) offered various nutrients (Heinrich and Hansen 2018a). While two decades ago it was generally assumed that dietary deficiencies were widespread among the population of ancient Greece and Rome (Garnsey 1999), recent studies are less pessimistic about health and living standards (Waldron 2006; Killgrove 2018). Medical literature indicates the presence of diseases, like scurvy, that are the result of an unbalanced diet, but does not indicate how many individuals were affected. Osteological analysis of skeletons offers insight into the health of individuals. Porotic hyperostosis, for example, a condition that leads to porous bone tissue in the cranial vault, has been interpreted as an indicator of widespread iron deficiency, but was in most cases probably caused by red blood cell shortage, parasites, or lead poisoning. Recent studies find relatively few indications that individuals had suffered from chronic dietary deficiencies. Yet, on average, people in Roman times were shorter than in earlier and later periods. This is largely the result of a lower intake of animal proteins, as the skeletons were predominantly taken from urban graveyards. Most osteological studies do not point to a clear difference between men and women, although, in view of male dominance within the households, it is reasonable to assume that under stressed circumstances women and children were more vulnerable than adult men.

Chronic under- and malnutrition were common experiences in the medieval and early modern world as well – but again this should not be generalized. The spread of diseases related to overly monotonous diets also points to the role of malnutrition – especially in the centuries leading up to the industrial revolution. Scurvy, caused by a lack of vitamin C, is well known for its prevalence among seafarers. More common was pellagra, also caused by vitamin deficiency, which was widespread in eighteenth-century Italy as a result of an overreliance on maize (the daily diet of the poor). In Asia it was beriberi that resulted from poor diets lacking variety and vitamin content. Health problems caused by deficient nutrition endured in the nineteenth and twentieth centuries, but diminished gradually and unevenly as diets improved in specific parts of the world. The nutritional transition brought different health issues.

Overweight people have always existed, but they constituted a small minority in most parts of the world during most of history. In Europe and the United States, corpulence was often a sign of well-being, but very obese men and women were a curiosity at fairs (Oddy et al. 2009). Only after the transition from insufficient to rich diets were fat people perceived as a problem. In the United States, a “creeping” obesity crisis appeared in the 1910s, when health concerns about overweight mixed with notions of the ideal body, which led to innumerable dieting schemes; it burst into the open after the Second World War.

With regard to both under- and overconsumption of food, debate exists about responsibility: does it lie with the individual or with “society”? For a long time, the former received the most attention, but more recently the latter has gained more support, with reference to the global escalation of convenience foods, the fast expansion of eating out, and the role of advertising.

Famine

The harvest of staple foods was vulnerable to adverse weather, but also to man-made disruption, such as plundering soldiers. Growing barley or other so-called inferior cereals, which had short growing cycles and were less vulnerable to drought, alleviated but did not solve the threat of harvest failure. During Antiquity, general mentions of the fear of shortage and famine are numerous, but it is difficult to establish how frequent and serious food crises were. Literary sources on the ancient world pay most attention to political centers like Athens, Rome, or Constantinople, but precisely their political status ensured them a stronger entitlement to food. High food prices probably caused increased mortality among the urban poor and certainly caused increased inequality. If literary or epigraphic sources mention food shortages, it generally is to emphasize the measures taken by rulers, benefactors or – in the Later Roman Empire – the Christian church, but the gravity of the crisis is hidden from our view. Widespread drought or back-to-back harvest failures depleted reserves, leading to famines characterized by increased mortality. The few famines that are narrated in detail show that, just as in later times, they caused high mortality due to epidemics rather than starvation. Reliable figures are rare in sources on the ancient world, but occasional claims of hundreds of thousands of victims (e.g., 800,000 in Numidia in 125/124 BCE according to Orosius 5.11.1) are not to be rejected out of hand (Erdkamp 2018).

In the medieval and early modern world too, famines were a recurrent phenomenon, given the persistently low surplus produced in premodern agriculture. Almost everywhere famines were frequent: at least once every generation in premodern times a widespread famine struck most places in Europe, leading to anxiety and insecurity (Ó Gráda 2010). Unlike modern famines, medieval and early modern food shortages were usually caused by back-to-back harvest failures, often the result of too little or too much rain. They could be influenced by natural disasters such as long-term climatic shifts or volcanic eruptions (for instance, the Famine of One Rabbit of 1454).

However, the risk of famine was not equally spread across premodern societies, and man-made factors could severely exacerbate the destructive forces of nature. Warfare increased the chance that bad harvests would result in famine, as when the notoriously wet springs of 1315 and 1316, combined with warfare, caused a three-year-long famine across much of Northern and Central Europe. The unevenness across time and space with which famines struck the preindustrial world also suggests that risks could be mitigated even before modern times. Harvest shocks could be better absorbed where yields were higher, where transport networks were developed more fully, and where agricultural surplus in good years allowed coping mechanisms to be better developed. A wide range of precautionary measures was practiced throughout premodern Europe. In times of scarcity, marriages were postponed, with a reduction in births as a result. Public granaries could provide the storage capacity to overcome temporary adversities, and trade or exchange arrangements could similarly help to overcome setbacks. If dearth struck nevertheless, poorer consumers tried to safeguard their caloric needs by “trading down” to inferior substitutes for wheat, such as oats or barley.

As a telling sign of the early improvement in the English economy, the size and duration of food crises in England declined gradually over time, to such an extent that the country was spared major peacetime famines from the 1620s onwards. In Japan famines occurred less regularly in the seventeenth and eighteenth centuries than before. In Germany famines also became rarer in the eighteenth century, but in most parts of Eastern Europe the threat of famine lingered on until at least the end of the eighteenth century (Ó Gráda 2010).

In the nineteenth century, the general improvement of the food supply and growing buying power ended periods of starvation, except for specific episodes linked to war. Yet plant disease, weather conditions, ruinous policy, and speculation also caused starvation. From the 1950s onwards, some of these factors were increasingly brought under control in more and more parts of the world, although “problem areas” remained and new risk areas emerged (Messer 2013). Estimates of the number of deaths due to famine reveal huge fluctuations between 1870 and 1920 (3.1–16 million deaths per decade), a disastrous increase between 1920 and 1970 (9.8–16.6 million), and an impressive fall since 1980 (0.9–1.3 million) (De Waal 2018). Mass starvation occurred in India under colonial rule in the 1870s, in Ukraine in 1932–1934, in large parts of the world as a consequence of the Second World War, and in China during the Great Leap Forward of 1958–1962. Since the 1960s, the Horn of Africa has been permanently imperiled by hunger, which international aid campaigns have only partly remedied.

Globalization and Intercultural Exchange

Throughout Antiquity crops and animals were spread from East to West, including such seemingly ordinary foodstuffs as peaches, wine, and chicken (Garnsey 1999). The consumption of wine, which was imported into Gaul by Greek traders, was adopted by Celtic leaders as a means to emphasize status. The increasing communication and trade across long distances under Roman rule stimulated the process of the diffusion of new crops and animals even more. Roman soldiers and civilians in central and northern Europe held on to their culturally accustomed foodstuffs, and in the early empire olive oil, wine, and fish sauce were transported from the Mediterranean to the northern provinces. Wine cultivation spread northwards too, first in southern Gaul, later also further north. Olives only grow in a limited range along the Mediterranean coasts and hence olive oil disappeared in central and northern Europe when the ethnic composition and dietary preferences of the Roman army changed.

Among the plants spread northwards were several Mediterranean herbs like coriander, poppy seed, dill, and mustard. Commerce along trade routes towards the East, reaching India and even beyond, ensured a steady supply of black pepper and other spices (Sidebotham 2011). The price of pepper was not excessive, and it was within reach of large segments of society. With the decline of the Roman Empire in the West, many imported foodstuffs disappeared, though some plants became permanent features of central and northern European garden plots (Cool 2006).

From the late Middle Ages onwards, the search for profitable spices spurred European overseas expansion, and the desire for more reliable and abundant supplies of exotic cash crops lay at the heart of the slave-based plantation system that was set up across the world (Curtin 2002). A desire for exotic foods and drinks was not the only factor in these developments: so, too, were intra-European warfare and imperial competition, the search for precious metals, and nonfood consumer goods. Nevertheless, food did play an important role. Exotic condiments had been known in Europe since Antiquity, and with the growth of international commerce in the later Middle Ages, they regained their place in the European imagination, if not perhaps in most kitchens. Spices such as pepper, cinnamon, cloves, or nutmeg travelled from East Asia to Venice, and then further across Europe (Freedman 2008). Since quantities were low and prices high, the incentive to find new routes was huge. By the end of the fifteenth century, Portuguese and Spanish seafaring had resulted in the establishment of a direct sea route around the African continent to the Indian Ocean, and in the opening of the American continent to European exploration, conquest, and commerce.

The spectacular strengthening of global commercial ties as a result of the new sea routes pioneered by Vasco da Gama and Christopher Columbus brought about major shifts in eating and drinking practices across the world. The variety of edible crops expanded everywhere. Europeans, for instance, discovered sweet potatoes, maize, chili peppers, and tobacco in the Americas. Over the following centuries they would also become familiar with tomatoes, green beans, turkeys, cacao, and squash. Meanwhile, wheat, horses, and livestock travelled in the other direction. This so-called Columbian exchange did not leave the world outside the Atlantic untouched. Maize, cassava, and groundnuts were introduced from America to Africa, while tomatoes, sweet potatoes, and chili peppers influenced Asian eating practices (Albala 2003). Other things being equal, these exchanges themselves reduced global vulnerability to famine (Ó Gráda 2010).

From the sixteenth century onwards, the grip of European trading companies on the global production sites of exotic comestibles was gradually strengthened. In the most extreme cases, this resulted in colonial exploitation, with slave-based plantations for the production of sugar, nutmeg, coffee, or tobacco as sad examples. In other cases, such as those of tea or pepper, no unfree labor was involved until the nineteenth century, yet the quantities produced, shipped, and consumed globally still expanded to unprecedented levels. Already before modern times this meant that global consumer goods could come within reach of average – and even poor – households in Europe (O’Rourke and Williamson 2009). However, it is a telling irony of the early modern age that even though globalization in theory brought more potential variety in eating and drinking habits, in practice it was for many a period of increasing monotony. Sugar and pepper became more widely consumed than ever before, but at the same time a range of other spices almost entirely disappeared from European kitchens after the sixteenth century. Potatoes and maize could offer more – and cheaper – calories per hectare, as a result of which some European regions gradually became overly reliant on a monoculture of those crops, leading, for example, to the Irish Great Famine of the 1840s.

The Columbian exchange not only affected the types of food and drink consumed worldwide but also had cultural effects. From at least the sixteenth century onwards, eating and drinking rituals, their meanings, and their connotations travelled the world. They influenced the performances of eating and drinking themselves, as in the case of smoking or the drinking of hot beverages, which were unknown in Europe before the introduction of tobacco, coffee, and tea in the seventeenth century, but also the meanings of food and drinks: their perceived medical effects, or the degree to which they could function as social markers. This globalization of foods and drinks thus brought with it a complex history of cultural and scientific exchange, in which the perceptions, uses, and qualities of foodstuffs became appropriated differently in new contexts. Chocolate, for instance, was first introduced in Spain from Meso-America as a spicy rather than as a sweet drink, yet between the seventeenth and early nineteenth centuries it was gradually transformed into a sweet beverage with aristocratic connotations (Norton 2008).

International food trade grew swiftly from 1815 to 1914, then weakened up to 1940, but skyrocketed after 1950 (Federico and Tena-Junguito 2016). The colonial system enabled this growth until the 1960s, and free trade did so thereafter. Foods such as cocoa, live cattle, alcoholic beverages, and, increasingly, grain represented the bulk of the overall trade until the First World War. The share of fresh food diminished as manufactured foodstuffs, such as canned food, biscuits, dried pasta, or soft drinks, grew in the 1920s. New modes of transport and preservation, particularly refrigeration – also in private homes, first in the West, then globally – revitalized international trade in fresh food such as herbs and dairy products from the 1960s onwards, which contributed to the idea of ever-expanding choice, although it also furthered global homogenization.

The availability of food from remote shores changed the diet radically. The mass import of American wheat into Europe from the late 1870s onwards, for example, caused bread prices to fall, which was the precondition for revolutionizing the overall spending pattern. The share of bread in the total budget of an average West-European household plunged from about 40% in the first half of the nineteenth century to 15% in the 1920s and to 3% today, gradually initiating the so-called consumer society.

The exchange of goods came along with the exchange of money, people, and ideas. Migrants took with them not only their language and religion but, more tenaciously, also their foodways (Gabaccia 2017). The flow of migrants rose impressively throughout the nineteenth and twentieth centuries, with millions of people moving from Europe to the Americas, from Africa and Asia to the Americas and Europe, and within each continent. They opened their own shops and eating places, often confronted with prejudice and xenophobia, although, inevitably, mixed diets emerged to various degrees, through fusion (i.e., the combination of existing foodways) or creolization (i.e., the creation of new forms). This led to paradoxes, as testified by the labeling of chicken tikka masala as the English national dish in 2011. Culinary writers and performers (chefs, travelers, bloggers, …) contributed to the wide diffusion of both indigenous (“authentic” or “traditional”) and creolized dishes in an ever-growing flow of culinary books, exhibitions, articles, and radio and TV shows.

Governing Eating and Drinking

All over the world and throughout time, governments interfered with eating and drinking, and in some periods most people actually depended entirely on help to survive. Starvation, however, was not the only reason to develop specific policies. Political and economic structures and strategies were aimed at securing the food supply of the urban populace in Ancient Greece and the Roman Empire. Most towns and cities relied largely on their hinterland, and taxes and rents ensured that a large percentage of the harvest reached them. When local harvests failed, imports were required. However, high transportation costs over land and the limitations of communication, commercial networks, and buying power restricted access to outside resources for most people. Apart from the most powerful ones, towns and cities did not control outside resources and were therefore limited to measures intended to attract traders to supply the urban market. Guided by the notion of a “just price,” close supervision and regulation of the city’s food market aimed at keeping prices low and avoiding speculative behavior. However, authorities did not have the means to prevent high prices when the supply failed. Members of the urban elites frequently stepped in by selling food at prices that were below the current market level (Erdkamp 2005). Only with the rise of Christianity in the Later Roman Empire did measures emerge specifically aimed at the poor and destitute (Garnsey 1999).

Food riots did not occur in classical Athens, as the democratic institutions gave the citizens sufficient means to put pressure on authorities. Urban food riots did occur in Roman times, though, and not only in Rome. Riots were seen as the expected consequence of price rises, which indicates that they were relatively common (Erdkamp 2002). The political status of Rome and, later, Constantinople ensured a stable supply. The well-known distribution in Rome of free grain (later, bread) to adult male citizens was instigated in 123 BCE as a measure to stabilize the food market. From Augustus onwards, roughly one third of the population of Rome ate grain or bread provided through this scheme, while the remainder also largely went through state-controlled supply channels known as the Annona (Erdkamp 2005).

In classical Athens, public largesse was a civic duty of the rich; in imperial Rome it was a monopoly of the emperor. Outside Rome the local elites demonstrated and legitimized their social and political position by benefactions that included public banquets, which reflected the social hierarchy of the communities: the higher the status of the recipient in the community, the better the treatment in terms of food and wine that he could expect during the banquet. Sumptuary laws issued by Roman statesmen and emperors ostensibly aimed at limiting excessive spending on luxuries, but the frequency with which they were issued shows that they had little impact on reality, beyond emphasizing the virtues of the lawgiver.

Medieval and early modern governments were most clearly concerned with regulating access to foods and drinks, rather than with eating and drinking itself. Market regulation attempted to guarantee the fairness of the prices for basic foodstuffs (the aforementioned notion of a “just price” lingered on until today), as well as the quality and safety of foods that spoiled quickly or whose quality could not easily be gauged. Eating and drinking itself was much more rarely the subject of government intervention, unlike the abundant sumptuary legislation that attempted to regulate the wearing of clothes by limiting specific clothing types to specific social groups or occasions. In some cases, such legislation was also imposed on eating and drinking habits, for instance by imposing limits on lavish spending on wedding and funeral banquets. It is unclear whether such regulation had much effect (Hunt 1996).

Today, food supply still worries authorities because a lack of food often causes social outbursts (e.g., Egypt’s bread riots in 2017). In the nineteenth century, securing sufficient food went hand in hand with concerns about food quality. The former intensified throughout the world because of the growing dependency on the market and the need to feed wage workers at low cost, but the latter – food safety – was largely new (Bruegel 2012; Joseph and Nestle 2012).

To secure sufficient and cheap food when domestic output fell short, national authorities lowered taxes, subsidized producers and retailers, established control systems with maximum prices, put up storehouses, and organized food distributions. In many countries food crises vanished after 1950, and only tariff policy remained. Yet many people still hold authorities responsible for taking care of the food supply. In other countries, however, diverse food policies continued to be applied according to food availability. Local initiatives influenced eating in a very direct way: public and private charity distributed staple foods on a daily basis, as was the case in Europe in the 1840s, the 1850s, and during both world wars. This is still the case in many countries around the world today, including rich Western countries.

Other interference with food related to alcohol consumption (Phillips 2014). For a long time, drinking too much alcohol was seen as a lack of self-control. Although it was tolerated, public drunkenness was rebuked and penalized, unless it was totally prohibited for religious reasons. In Western countries in the nineteenth century, however, drinking was medicalized and conceived as a social problem, labeled alcoholism. It was central to the authorities’ so-called social question, which included immorality, delinquency, disease, prostitution, socialism, and other calamities. Consumption of strong drinks would lead to unemployment, poverty, and misery and, inevitably, ruin the body and the family. Antialcohol campaigns emerged in the 1810s and intensified in the 1840s and, again, the 1880s, which brought about successful temperance movements all over the world. In turn, since 1900 this led to the temporary or partial prohibition of producing, selling, or consuming alcohol in many countries across the world, such as Australia (1910–1928), some states in India (after independence), Norway (1916–1927), Russia (1914–1923), the USA (1920–1933), and Yemen (1962–1990).

Wine and spirits were often adulterated. Food fraud was not new; it came in many forms and related to many foodstuffs. However, the nineteenth century, again, led to new challenges (Atkins 2013). These were due to big transformations (industrialization, urbanization, individualization, …), as well as to the lengthening of the food chain, which included a growing number of actors who saw profit opportunities. Water was added to milk, and chalk and field beans to flour, which was illegal but did not threaten health; copper sulfate, coloring agents, alum, and all sorts of other substances were added to flour, which was also illegal but, moreover, could harm health. Until the 1860s, authorities focused on honest food trade, but thereafter health concerns also became part of regulations. Detecting food adulteration was done by chemists, physicians, and charlatans, but increasingly by recognized chemical laboratories serving the authorities, merchants, and consumers. These could easily discover falsifications by 1900 but had difficulties with the emerging sophisticated production processes (chemical flavor improvers, emulsifiers, coloring agents, sweeteners, …), of which the general public became aware in the 1970s.

National and, later, international institutions fixed quality norms so as to guarantee generally accepted standards and, hence, trust in food. Influential was the 1905 French legislation on appellations of origin pertaining to wine, mustard, cheese, cider, and other foods, primarily intended for economic reasons, but with an effect on quality and health. In 1963, the World Health Organization, together with the Food and Agriculture Organization, started establishing international food standards (the Codex Alimentarius). Recurring food scandals, however, led to genuine food panics, as for instance in Scotland in 1964 (a corned-beef typhoid outbreak) or Japan in 2008 (poisoned dumplings), which supports Ulrich Beck’s notion of the “risk society” (Ferrières 2006).

Optimal Diet

Philosophy and medicine in classical antiquity are intricately linked, and both can be seen as characterized by a system of contrasts. First, there was the contrast in Greek and Roman perception between the cooked and the uncooked, the cultivated and the wild, the civilized and the barbaric, which reflected the dichotomy between “us” and “the other.” Barbarians ate uncooked food and the products of the wild, unlike civilized people who ate the produce of cultivated fields and domesticated animals. Whether Scythians, Huns, or Homer’s Cyclops, they were characterized in Greek writing as not belonging to the realm of civilization by their food and drink. Another contrast was that between excess and moderation, the first also a mark of the uncivilized. In Plato’s view of the soul and body, intellect is located in the head, emotions in the heart, while the lower belly is linked to the lesser needs of humankind. Giving in to the needs of the body is a sign of moral weakness, to which not only barbarians, but also slaves and women were thought easily to succumb. Restraint in the face of luxury and pleasure is a common ideal in Graeco-Roman philosophy, most explicitly in that of the Stoics (Wilkins and Hill 2006).

Medical thinking was based on the principle of the four humors (or temperaments), propagated by the writings of the second-century physician Aelius Galenus (or Galen). According to this theory, the balance of bodily fluids in the body determined one’s health. The four humors were either hot or cold, dry or moist. Food and drink were not only characterized by the same contrasts, but they also contributed to the balance of the body. Hence, food played a large role in medicine, as ailments were thought to be caused by an imbalance that could be cured by a particular diet. Women, being moist and cold, differed from men and therefore required a different diet (Wilkins and Hill 2006).

These lines of thinking largely prevailed in the early modern period. As is fitting for a world in which medicine enjoyed only a few successes, and in which the little effect it had was more obviously noticeable in preventing rather than in remedying illness, knowledge about the dos and don’ts of eating and drinking was considered crucial for good health. More so than in modern medical or dietetic sciences, mental and physical well-being were thought to be very directly linked to what one ate and drank. Medical thinking on the subject in the medieval and early modern era still relied mostly on the synthesis made by Galen, whose ideas were re-introduced into the European Middle Ages by Arabic translators. This theory provided the framework from which arose a range of theories that would dominate discussion about the optimal diet up to the 1910s.

Since each individual had his or her own humoral composition, and all organic matter was composed of elements with specific humoral properties, there existed no universal optimal diet. Rather, the specific humoral properties of each individual at a given moment determined what the optimal diet was. Age, gender, weather, illness, occupation, and activities all played a role in determining the suitable diet. The main object of discussion was then how to reliably determine which humoral properties a given foodstuff possessed. For this one could rely on the knowledge inherited from antiquity, on appearance and taste, or on similarity in provenance and type to other foods whose properties were known.

This is not to say that medical thinking on diets did not change between the high Middle Ages and the end of the eighteenth century: sixteenth-century humanists such as Andreas Vesalius turned their attention to the original Greek texts of Galen, instead of relying on older translations from Arabic, and as a result of the studies of humanist scholars, the texts of Hippocrates were re-discovered as one of the original sources that had influenced Galen. However, by and large such changes in medical knowledge required only minor adjustments and corrections to dietetic knowledge and did not fundamentally alter the way of thinking about the relation between food and health. Deeper shifts in medical thinking about the optimal diet emerged only in the seventeenth and eighteenth centuries, when different schools of medical thinking emerged, such as the iatrochemical school of Paracelsus and the Leiden school of Hermann Boerhaave. However, the effect of these new modes of thinking on dietetics was in most cases superficial and slow to spread. Until the early nineteenth century most prevailing ideas about diets derived from older, humoral ways of thinking about the human body, even if they were no longer explicitly motivated in those terms (Albala 2003).

For the majority of the world population before the nineteenth century the choice of food and drink was overwhelmingly dictated by local availability. Nevertheless, for better-off households there was room for fashion and taste. In sixteenth-century Europe, Italy was the culinary fashion-maker, the place from which shifting tastes gradually spread. The most influential exponent of Italy’s role was Bartolomeo Scappi, whose Opera of 1570 provided an exceptionally thorough illustrated guide to Italian gastronomy in his time. In the early seventeenth century, Spain became the new gastronomic center, before the role moved to France. Although the limited availability of imported foods and drinks compared to today might make the impact of fashion less obvious, there are clear examples of changing tastes over time in the early modern period. The separation of savory from sweet courses (desserts) in the different courses of a meal is an invention of seventeenth-century French gastronomy that became so self-evident that many modern observers find it hard to perceive that it is not a universal human preference, but a taste developed only very recently in the history of humankind.

The importance and success of culinary fashions in early modern Europe was greatly helped by the invention of the printing press in the fifteenth century. Although recipe books in manuscript form were frequently copied and enjoyed some popularity already in the late medieval period, the sheer number of recipe books printed from the sixteenth century onwards indicates a change in scale that is unlikely to have occurred without the printing press.

Throughout the nineteenth and twentieth centuries, nutritional knowledge was drastically renewed, which directly affected the way people conceived of good eating (Carpenter 2003). Around 1800, centuries-old concepts about the diet still prevailed, although new insights had gradually emerged in the eighteenth century, linked to the so-called chemical revolution in Western Europe. Despite this, very old concepts related to the four bodily humors left traces up to the 1960s. Three big innovations may be detected in the nutritional sciences of the nineteenth and twentieth centuries: the application of the concept of the calorie (1880s), the discovery of vitamins (1920s), and the full awareness of the dangers of overeating (1950s). Of course, dietitians dealt with food-related illnesses (beriberi, scurvy, pellagra, etc.), but it was particularly these three innovations that inspired food recommendations, to which the general public mostly reacted ambiguously: some recommendations were totally ignored, while others were eagerly applied.

Prior to the “calorie era,” food advice and eating rules aimed at well-balanced diets. Such advice consisted of a staple food to be complemented by protein-rich foodstuffs, some variation in the menu, and suggestions regarding a peaceful eating atmosphere. These, and other eating rules, appeared in household education for young girls in several parts of the world after 1870, aimed at cooking well at low cost and creating a joyful home for husband and children. After 1900, the general public started to learn about the notion that all food and drinks consist of carbohydrates, proteins, and fats that provide energy, which is quantifiable in calories. Bodily requirements were established too (3,000 kcal per day for an adult man). Hence, it became easy to compute exactly the necessary daily intake of food. Moreover, the kilocalorie “equalized” all foodstuffs, and therefore energy-rich food, such as peas and sugar, was highly promoted, while even alcohol was seen as an energy provider. It took several decades to realize that sugar was not harmless, while the antialcohol lobby immediately reacted against the preeminence of the calorie (Scrinis 2013).
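To illustrate the kind of arithmetic this made possible, a minimal worked example may help (it uses the rounded energy equivalents popularized around 1900, approximately 4 kcal per gram of carbohydrate or protein and 9 kcal per gram of fat; the quantities chosen are purely hypothetical):

$$
E = 4\,g_{\text{carbohydrate}} + 4\,g_{\text{protein}} + 9\,g_{\text{fat}} = 4(450) + 4(100) + 9(90) = 3{,}010\ \text{kcal},
$$

a figure close to the 3,000 kcal per day then recommended for an adult man.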

The coining of the vitamin in the 1910s, previously the “unknown substances essential to life,” had immediate impact on food advice and manufacturers, to which testify the many advertisements in popular newspapers throughout the world in the 1920s and 1930s. Moreover, the insight that heating fresh food may partly eliminate the effect of vitamins led to the reappreciation of raw foodstuffs. Whole meal bread contains more vitamins than white bread. This led dieticians to promote the former type, which the (European) consumer rejected until the 1990s. After 1950, food supplements with all sorts of vitamins became increasingly popular.

Finally, the insight into the dangers of overeating – which may lead to heart disease, diabetes, or some cancers – coincided with the expanding availability of food in most parts of the world after 1980. So far, the avalanche of food advice and a never-ending stream of dieting gurus have not been able to end the recent “obesity crisis.”

Restrictions

Unlike the Jewish religion, which imposed strict dietary laws about what and with whom one could eat, Greek and Roman beliefs were not linked to strong ideas concerning impure foods. This is not to say that there were no cultural boundaries, for example, concerning which animals could be eaten. Eating wild animals, apart from wild boars, deer, or hares, was frowned upon, and domesticated animals, like dogs or horses, were not normally eaten either. Cannibalism, which is sometimes mentioned in the context of severe famines, was obviously a strong taboo. Vegetarianism was rare in Antiquity, although it existed among certain groups. Pythagoras believed in the reincarnation of human souls, be it in humans or animals. Hence, he and his followers regarded the eating of animals as a form of cannibalism. The Neoplatonist Porphyry also rejected the eating of meat. In general, he advocated an ascetic table, but realized that only philosophers could pursue this.

Paradoxically, food and eating were and were not important to most mainstream communities of Christianity as it spread into Graeco-Roman society. Commensality, the act of eating together, was at the heart of Christian rituals. At the same time, these same communities chose not to adhere to Jewish dietary laws, thereby allowing Christians to remain part of wider, predominantly pagan communities. In its late first-century formulation, the Gospel says that purity was a matter of belief, not of food (Mark 7, 14–19; cf. Luke 11, 37–42). However, against the background of philosophical approaches to the relationship between body and soul, Christian writers developed a Christian discourse on food, which was mainly aimed at reaching higher forms of Christian religiosity by negating the demands of the temporary, worldly domain. Christian asceticism, which became strongly developed in the fourth century, aimed at fighting the body by depriving it of its sustenance and by condemning the pleasure that could be derived from the act of eating. Asceticism became a predominant feature of monastic life in Late Antiquity. However, outside monasteries there remained a tension between the emphasis on asceticism in Christian thinking and reality, as banquets and self-indulgence remained important elements of Christian ways of life among the elites (MacGowan 1999; Smith 2003).

Medieval and early-modern medical thinking about the optimal diet provided a few guidelines in making dietary choices, but before the nineteenth century it imposed little in terms of general rules or restrictions. If medicine offered few universal recommendations for restraint, religion did. In late medieval Christian Europe, there were an estimated 150 fasting days spread over the year, during which all healthy individuals were expected to abstain from consuming animal products – not only the flesh, but also derivatives such as butter or milk. Some of those fasting days were strict, others – such as the weekly fasting on Friday – were “minor” and allowed for some flexibility. After the Reformation, thinking about restraint in eating and drinking diverged across Europe. Lutherans did not adhere as strictly and as often to fasting as Catholics did, while Calvinists tended to favor a more austere living style in general, but not necessarily according to the rhythm of feasting and fasting imposed by the Catholic calendar (Albala 2003).

Regardless of the specific religious confession one belonged to, it was widely held throughout medieval and early modern Europe that the ability to refrain from eating and drinking could be a powerful marker of spiritual achievement. Mystic women, ascetic movements among the clerical orders, and heretics demonstrated their exceptionality by imposing specific restrictions in eating and drinking (Bynum 1988). This phenomenon lingered on, or re-emerged, in the eighteenth and nineteenth centuries, when the temperance movement allied religious ideas with restraint in eating and (especially) drinking habits.

Religion was not alone in imposing restrictions on eating and drinking. Starting from the Renaissance, a growing concern for more restrained and civil table manners gradually spread across Europe. Communal dishes made way for individual plates, tablecloths and napkins presumably improved table hygiene, and as the fork was gradually introduced from Italy to the rest of Europe, eating with bare hands became less accepted. Historians and sociologists have debated whether this concern for civility spread across early modern Europe as a result of the influence of courts, urbanization, rising living standards, changing social inequalities, or shifts in moral thought (Sarti 2004). Yet, between medieval and modern times there was much less change in what was eaten than in how it was eaten.

In the nineteenth and twentieth centuries too, food restrictions came in many forms. Religious prescriptions continued to form the basis of food and drink regulations and avoidances, thus constituting clear and strong identifying boundaries. Yet such regulations were followed to different degrees across space and time. For example, alcohol consumption in countries with a Muslim majority fluctuated under the influence of the intensity of faith, legal prohibition, social pressure, international exchanges, or identification with peers (such as youngsters). Still, centuries-old food restrictions remain important as part of traditions that have recently attracted new interest in some parts of the world. In Europe, however, the impact of the Church diminished after the Second World War, which is clearly shown in declining fasting observance and the vanishing of fish days on Wednesdays and Fridays.

New food restrictions surfaced. The deliberate refusal of meat had existed since classical Antiquity, meatless days were part of religious rules, and some doctors advised moderating meat consumption for health reasons. However, in Western countries, by the end of the nineteenth century, vegetarianism was institutionalized when various associations with a slowly growing number of (middle-class) members appeared. A sign of its success was, for example, the opening of a vegetarian restaurant at the 1910 World Exhibition in Brussels (Belgium). Motives for refusing to eat meat included animal welfare, economic concerns, health, philosophical objections, or the expression of solidarity and empathy with other people, which all persuaded a growing number of people throughout the world in the late twentieth century. Nonetheless, vegetarianism (and its variants such as veganism or flexitarianism) remains a marginal phenomenon worldwide (Ankeny 2017).

Deliberately restricting eating and drinking also occurred within the framework of slimming. Dieting was not new, but with increased attention to the (Western) beauty ideal of the body, more and more men and women paid attention to their food intake. Long-established food avoidances may also vanish. The recent case of eating insects (entomophagy) in Western countries exemplifies this. After being dismissed for centuries as inedible food that aroused disgust, insects as human food are now praised for their high protein supply, sustainable production, and low cost.

Transgressions of food avoidances and taboos occur for various reasons and are temporarily accepted. For example, unrestrained eating may be encouraged on particular occasions such as Christmas Eve, when even children may be allowed to drink some alcohol.

Commensality and Celebrations

The Greeks and Romans preferred to eat three meals a day, with the evening meal as the main one. This structure has persisted in many parts of the world until today. In Antiquity, breakfast and lunch were usually light, consisting of some bread, possibly dipped in olive oil, cheese or eggs, and meat for the prosperous. The famous poem Moretum depicts a farmer getting up in the morning and eating freshly baked bread with a mix of cheese, vegetables, herbs, and salt before he sets out to work. The evening meal could be eaten either in the domestic sphere or in a more public setting. In Greece, women may not have joined their husbands during the evening meal. Homer depicts aristocratic women as being present at banquets, busy with textile work while the men enjoyed their meal. In classical Athens, aristocratic men reclined during dinner at home, while their wives sat beside them. Roman custom, however, was for men and women to have their meals together. During more formal meals, children sat at a separate table (Dunbabin 2003; Wecowski 2014).

In cities, many kinds of cold and warm food and drink were available in inns and taverns and from street vendors, which may be related to the generally limited living space of the common people and the requirements of work. For the same reason, for more festive meals many common people relied on celebratory meetings of the collegia (associations of different kinds combining religious and professional functions), which were often sponsored by rich patrons. In Roman times, women were present at such meetings, as were slaves.

Festive meals of all classes were meant to express one’s social standing. While the aristocracy in Homer sits during banquets, from the archaic period until the end of Antiquity the well-to-do reclined during dinner while being waited on by servants. Even Jesus is depicted in the Gospels as reclining during the Last Supper. The symposium in classical Greece was solely a meeting of men of the upper classes, and social equality was limited to the guests present. The etiquette and conversation during the symposium ruled out the participation of the uneducated. The only women present were servants and flute players. The ideals of the Roman banquet centered on simplicity, friendship, and social equality, in other words, the pleasant gathering of like-minded people who enjoyed a good meal. Reality was different, though, as social hierarchy was expressed in the arrangement of guests and even in the quality of the food and wine served. Writers like Martial and Juvenal complained about the haughtiness of their patrons at dinners, while conspicuous spending could lead to excess, as famously parodied in the Banquet of Trimalchio scene in Petronius’ Satyricon (Wilkins and Hill 2006).

Social hierarchy was not limited to the upper classes, as is revealed by the regulations of the collegia dinners. Patrons celebrated their own birthdays and those of family members by paying for these festive meals, but a distinction was also made among the collegia members between the well-to-do “middle class,” who contributed wine and food, and the common members. Seating arrangements expressed this hierarchy, while rules guarded against rowdy behavior.

In many respects, the role of eating and drinking in celebrations during the medieval and early modern period formed a logical counterpoint to the importance of fasting in the Christian calendar. The indulgence in food (meat) and drink during Carnival stood in contrast to the period of fasting that followed it. Some religious celebrations were linked to specific foods or drinks, such as the symbolic importance of eating lamb and eggs at Easter. More so than religious meaning, however, feasting was infused with social significance. Communal eating and drinking cemented social ties, both horizontal and vertical. The community and cohesion of a family was tellingly symbolized by the sharing of “bed and table,” while confraternities, guilds, civic militias, and voluntary associations rarely spared expense in organizing their communal meals. Drinking also played an important role in social life, in particular the consumption of alcohol. The consumption of intoxicating drinks probably increased during the early modern period and forms an intriguing contrast to the growing importance contemporaries placed on civil table manners during this same period.

In the modern period and up to today, food and drink continue to be used to mark rough and subtle differences between countries, regions, towns and countryside, men and women, and, especially, rich and poor. Celebrating was one of the most evident occasions for clearly marking these differences, but in the course of the nineteenth century new occasions appeared at both the individual and the collective level.

The process of individualization comprised the celebration of career moves, birthdays, or anniversaries, all of which led to special dining. This was mainly limited to well-off people: poor people celebrated collective events (e.g., the end of the harvest) by having large quantities of their habitual fare. In the twentieth century, however, a birthday or school success was increasingly celebrated among the middle and working classes too. Moreover, people visited friends and relatives to share a meal far more than ever before. In general, dinners of the rich were to some extent imitated by poorer people, and as the purchasing power of the masses increased, more or less luxurious food and drink were consumed, which became particularly apparent in the 1980s. The case of (sparkling) wine illustrates this well. This imitation led the rich to search for new forms of distinction, eagerly turning to haute cuisine and its continuous innovations.

Collective eating and drinking have long existed, but the nineteenth century again brought new features: throughout the world, big banquets were organized to celebrate the nation, the monarch, an institution (e.g., parliament, a board of trade, or a workers’ union), an international exhibition, the visit of a diplomat, or an aristocratic marriage. On a more modest level, collective meals were organized by literary societies, savings associations, sports clubs, and other cultural groups. The aim was to create and strengthen solidarity and identity.

A new possibility for drawing clear lines of distinction was offered by the modern Parisian-style restaurant. Fancy eating out in public places had existed in earlier centuries, for example, in China in the seventh century, but the nineteenth-century restaurant was more influential in that it appeared throughout Europe and its colonies, North and South America, Australia, and parts of Asia. Worldwide, eating out was common for travelers, who could visit locales of very different kinds but where, in general, the choice of food was limited. The Parisian restaurant appeared in the 1780s as a public place catering to richer patrons, with specific characteristics: individual tables, menu cards, prices, stylish décor, and waiters and, above all, the possibility of choosing from a wide selection of food and wine. The bourgeois clientele, men and women, visited these places to see and be seen, meet people, and enjoy food. Culinary journalists and writers of travel guides commented upon this new cultural locus of the rich, thus establishing, destroying, and diffusing cooks’ reputations (Shore 2007).

The middle and lower classes ate out for reasons linked to work: they purchased soup, bread, cold meats, and the like, often sold by street vendors. In big cities in the last quarter of the nineteenth century, new forms of popular eating out appeared, such as the snack bar or the automat, precursors of today’s (transnational) fast-food outlets. By 1900, more and more people dined for pleasure in restaurants of varying status (brasseries, bistros, cafés, inns, …), which offered local specialties and, increasingly, foreign cuisines. The latter’s success is connected to movements of migrants and tourists, particularly after 1950.

Conclusion

Most people, for most of the time and in most places, ate very monotonously, had barely enough, and frequently risked starvation. Only a small group enjoyed food security and diversity, using it as a sign of status. Despite the many innovations of the Columbian exchange, the diet of the masses started to change definitively only around 1800, with the growth of agricultural output, international trade, and transportation facilities. The disparate evolution of purchasing power, however, led to very uneven nutritional changes throughout the world.

Perhaps the most telling transformation in the world’s food history is the changing significance of food. For centuries, food was a bare necessity for most people, and although it obviously still has this function, more and more people see and use food in a different way: as a means of individual and group expression, an element of pleasure, and a matter of choice. This is the move from the “taste of necessity” to the “taste of freedom” or even “of luxury.” A clear chronology is lacking, although worldwide the 1980s seem to have played a decisive role because of striking changes in world trade, demography, purchasing power, politics, consumption, and, perhaps decisively, the meaning of food.

References

  1. Abel, W. (1937). Wandlungen des Fleischverbrauchs und Fleischversorgung in Deutschland seit dem ausgehenden Mittelalter. Berichte über Landwirtschaft. Zeitschrift für Agrarpolitik und Landwirtschaft, 12(3), 411–452.
  2. Albala, K. (2003). Food in early modern Europe. Santa Barbara: Greenwood Publishing.
  3. Allen, R., Bengtsson, T., & Dribe, M. (Eds.). (2005). Living standards in the past. Oxford: Oxford University Press.
  4. Ankeny, R. (2017). Food and ethical consumption. In J. Pilcher (Ed.), The Oxford handbook of food history (pp. 461–480). Oxford: Oxford University Press.
  5. Atkins, P. (2013). Social history of the science of food analysis and the control of adulteration. In A. Murcott, W. Belasco, & P. Jackson (Eds.), The handbook of food research (pp. 97–108). London: Bloomsbury.
  6. Baten, J., & Blum, M. (2014). Human heights since 1820. In J. L. Van Zanden (Ed.), How was life? Global well-being since 1820 (pp. 117–137). Paris: OECD Publishing.
  7. Braudel, F. (1992). Civilization and capitalism, 15th–18th century, Vol. I: The structure of everyday life. Berkeley: University of California Press.
  8. Broekaert, W. (2018). Wine and other beverages. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 140–149). London: Routledge.
  9. Bruegel, M. (2012). Food and politics: Policing the street, regulating the market. In M. Bruegel (Ed.), A cultural history of food in the age of empire (pp. 87–105). London: Bloomsbury.
  10. Bynum, C. W. (1988). Holy feast and holy fast. The religious significance of food to medieval women. Oakland: University of California Press.
  11. Carpenter, K. (2003). A short history of nutritional science. The Journal of Nutrition, 133, 638–645, 975–984, 3023–3032, 3321–3342.
  12. Chandezon, C. (2015). Animals, meat, and alimentary by-products: Patterns of production. In J. Wilkins & R. Nadeau (Eds.), A companion to food in the ancient world (pp. 135–146). Oxford: Wiley-Blackwell.
  13. Cool, H. (2006). Eating and drinking in Roman Britain. Cambridge: Cambridge University Press.
  14. Curtin, P. D. (2002). The world & the west. The European challenge and the overseas response in the age of empire. Cambridge: Cambridge University Press.
  15. De Waal, A. (2018). Mass starvation. The history and the future of famine. Cambridge: Polity Press.
  16. Dunbabin, K. (2003). The Roman banquet. Images of conviviality. Cambridge: Cambridge University Press.
  17. Ekroth, G. (2007). Meat in ancient Greece: Sacrificial, sacred or secular. Food and History, 5(1), 249–272.
  18. Erdkamp, P. (2002). A starving mob has no respect. Urban markets and food riots in the Roman world, 100 BC–400 AD. In L. de Blois & J. Rich (Eds.), The transformations of economic life under the Roman empire (pp. 93–115). Amsterdam: J. C. Gieben.
  19. Erdkamp, P. (2005). The grain market in the Roman world. Cambridge: Cambridge University Press.
  20. Erdkamp, P. (2018). Famine and hunger in the Roman world. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 296–307). London: Routledge.
  21. FAOSTAT. (2018). Food and Agriculture Organization. Rome: FAO. http://www.fao.org/faostat/en/#data.
  22. Federico, G., & Tena-Junguito, A. (2016). A new series of world trade, 1800–1938. European Historical Economics Society, EHES working paper no. 93. London: EHES.
  23. Ferrières, M. (2006). Sacred cow, mad cow. A history of food fears. New York: Columbia University Press.
  24. Freedman, P. (2008). Out of the east. Spices and the medieval imagination. London: Yale University Press.
  25. Gabaccia, D. (2017). Food, mobility, and world history. In J. Pilcher (Ed.), The Oxford handbook of food history (pp. 305–323). Oxford: Oxford University Press.
  26. Garnsey, P. (1999). Food and society in classical antiquity. Cambridge: Cambridge University Press.
  27. Grigg, D. (1999). The changing geography of world food consumption in the second half of the twentieth century. The Geographical Journal, 165(1), 1–11.
  28. Heinrich, F. B. J., & Hansen, A. M. (2018a). Cereals and bread. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 101–115). London: Routledge.
  29. Heinrich, F. B. J., & Hansen, A. M. (2018b). Pulses. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 116–128). London: Routledge.
  30. Hunt, A. (1996). Governance of the consuming passions: A history of sumptuary law. Basingstoke: Macmillan.
  31. Joseph, M., & Nestle, M. (2012). Food and politics in the modern age, 1920–2012. In A. Bentley (Ed.), A cultural history of food in the modern age (pp. 87–110). London/New York: Bloomsbury.
  32. Killgrove, K. (2018). Using skeletal remains as a proxy for Roman lifestyles: The potential and problems with osteological reconstructions of health, diet, and stature in Imperial Rome. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 245–258). London: Routledge.
  33. MacGowan, A. (1999). Ascetic Eucharist: Food and drink in early Christian ritual meals. Oxford: Clarendon Press.
  34. MacKinnon, M. (2004). Production and consumption of animals in Roman Italy: Integrating the zooarchaeological and textual evidence. Portsmouth: Journal of Roman Archaeology Supplementary Series.
  35. MacKinnon, M. (2018). Meat and other animal products. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 150–162). London: Routledge.
  36. Marzano, A. (2018). Fish and seafood. In P. Erdkamp & C. Holleran (Eds.), A handbook to diet and nutrition in the Roman world (pp. 163–173). London: Routledge.
  37. Messer, E. (2013). Hunger and famine worldwide. In A. Murcott et al. (Eds.), The handbook of food research (pp. 384–397). London: Bloomsbury.
  38. Mylona, D. (2015). Fish. In J. Wilkins & R. Nadeau (Eds.), A companion to food in the ancient world (pp. 147–159). Oxford: Wiley-Blackwell.
  39. Norton, M. (2008). Sacred gifts, profane pleasures: A history of tobacco and chocolate in the Atlantic world. Ithaca: Cornell University Press.
  40. Ó Gráda, C. (2010). Famine: A short history. Princeton: Princeton University Press.
  41. O’Rourke, K., & Williamson, J. (2009). Did Vasco da Gama matter for European markets? The Economic History Review, 62(3), 655–684.
  42. Oddy, D., et al. (Eds.). (2009). The rise of obesity in Europe. Farnham: Ashgate.
  43. Phillips, R. (2014). Alcohol: A history. Chapel Hill: University of North Carolina Press.
  44. Pomeranz, K. (2005). Standards of living in eighteenth-century China: Regional differences, temporal trends, and incomplete evidence. In R. Allen, T. Bengtsson, & M. Dribe (Eds.), Living standards in the past (pp. 23–55). Oxford: Oxford University Press.
  45. Popkin, B. (2011). Contemporary nutritional transition. Proceedings of the Nutrition Society, 70(1), 82–91.
  46. Sarti, R. (2004). Europe at home: Family and material culture, 1500–1800. New Haven: Yale University Press.
  47. Scholliers, P. (2007). Novelty and tradition. The new landscape of gastronomy. In P. Freedman (Ed.), Food, the history of taste (pp. 332–357). London: Thames & Hudson.
  48. Scrinis, G. (2013). Nutritionism. The science and policy of dietary advice. New York: Columbia University Press.
  49. Shore, E. (2007). Dining out. The development of the restaurant. In P. Freedman (Ed.), Food, the history of taste (pp. 301–331). London: Thames & Hudson.
  50. Sidebotham, S. E. (2011). Berenike and the ancient maritime spice route. Berkeley/Los Angeles/London: University of California Press.
  51. Smith, D. E. (2003). From symposium to Eucharist: The banquet in the early Christian world. Minneapolis: Fortress Press.
  52. Steckel, R. H. (2005). Health and nutrition in the pre-industrial era: Insights from a millennium of average heights in northern Europe. In R. Allen, T. Bengtsson, & M. Dribe (Eds.), Living standards in the past (pp. 227–254). Oxford: Oxford University Press.
  53. Teuteberg, H. J. (1986). Periods and turning-points in the history of European diet: A preliminary outline of problems and methods. In A. Fenton & E. Kisbán (Eds.), Food in change. Eating habits from the middle ages to the present day (pp. 17–18). Edinburgh: J. Donald Publishers.
  54. Thoen, E., & Soens, T. (2010). Vegetarians or carnivores: Standards of living and diet in late medieval Flanders. In Le interazioni fra economia e ambiente biologico nell’Europa preindustriale, secc. XIII–XVIII (pp. 1000–1033). Florence: Firenze University Press.
  55. Waldron, T. (2006). Nutrition and the skeleton. In C. M. Woolgar, D. Serjeantson, & T. Waldron (Eds.), Food in medieval England. Diet and nutrition (pp. 254–266). Oxford: Oxford University Press.
  56. Wecowski, M. (2014). The rise of the Greek aristocratic banquet. Oxford: Oxford University Press.
  57. Wilkins, J., & Hill, S. (2006). Food in the ancient world. Oxford: Blackwell.
  58. Wilkins, J., & Nadeau, R. (Eds.). (2015). A companion to food in the ancient world. Oxford: Wiley-Blackwell.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Paul Erdkamp, Department of History, Vrije Universiteit Brussel, Brussels, Belgium
  • Wouter Ryckbosch, Department of History, Vrije Universiteit Brussel, Brussels, Belgium
  • Peter Scholliers (email author), Department of History, Vrije Universiteit Brussel, Brussels, Belgium
