
In this chapter, we examine the various ways public health policymakers and practitioners imagined the public in post-war Britain. There were three key ways of thinking about the public. Firstly, the public could be seen as a whole, as a mass or as the entire population. Part of what made public health ‘public’ throughout this period was a continued interest in the collective as well as the individual or group. Secondly, the public could be broken up into distinct groups. Many of these fractured along familiar lines: class, gender and ethnicity, for instance, all figured in the way public health thought of and dealt with the public. Finally, the public was also conceived as a collection of individuals. The growing emphasis on individual behaviour in both causing and responding to public health problems helped to consolidate a focus on individuals and the risks they posed or encountered. Yet, these neat categories were rarely so well-defined in practice. Collective, group and individual ways of imagining the public often overlapped and sometimes conflicted with one another. Moreover, public health practitioners’ conceptions of the public interacted with pre-existing assumptions and values. This resulted in a fractured, but dynamic, sense of the public. Public health actors’ conception of the public, we argue, was multifaceted, sometimes contradictory, and open to change over time.

The chapter is divided into three sections. In the first, we examine how public health actors saw the public as an entirety. The epidemiological survey was an important tool for creating a sense of the population as a whole, as well as a collection of groups and individuals. Collective ways of viewing the public can also be seen in initiatives such as mass vaccination, where individuals were expected to undergo a procedure to benefit themselves and others. This mass public intersected with the new focus on individuals and their lifestyles, something observed in the ‘invention’ of exercise as a behaviour that was good for everybody. In Section Two, we get to grips with some of the ways the public was thought of as a collection of specific (albeit sometimes overlapping) groups. We examine how particular groups were made, used and applied by public health actors. Our focus is on class, gender and ethnicity. Other groups were, of course, important, but these categories attracted the most interest and also linked back to older ways of viewing and responding to the public. In Section Three, we reflect on how the public was imagined as a collection of individuals. Particularly important here is the emphasis on personal risk and individual behaviour, but this view of the public was not wholly atomised. Individuals were still often thought of as being part of particular groups, and some kinds of behaviour could be seen as a universal as well as an individual attribute. Throughout the chapter, we look at the imaginings of different types of public health actor, including government officials, public health researchers, health educators and medical practitioners. We conclude by reflecting on the three ways of viewing the public, and how these changed and stayed the same over time.

1 Public = Population

Despite the general move towards focusing on individual behaviour as a leading cause of ill-health over the course of the last half of the twentieth century, public health practitioners and policymakers continued to think of the public as a whole. The primary way in which this conceptualisation operated was in relation to the public as the population. ‘Population’ is a multifaceted concept, but a particular understanding of population was produced by epidemiological surveys in the post-war period. We also examine a key population-level public health intervention—mass vaccination—and how this helped create a sense of a public that was more than a collection of groups and individuals. The notion of the public as population could incorporate a focus on individual behaviours that were thought to be universally beneficial, and we look at this in relation to exercise and heart disease.

1.1 Epidemiological Surveys

The technological innovations of social surveys, medical statistics and epidemiology were integral to the development of twentieth-century public health and its scientific credibility (Porter 1996). Public health’s expansion and interpretation of statistics played a vital role in determining how population health was viewed by policymakers and what actions should be taken to improve it (Szreter 2002a). But it also encouraged a new, more comprehensive conception of the public in public health. Through epidemiological surveys the whole population had the potential to become an object of and participant in research, and the public was reconfigured as a ‘whole’ made up of many ‘parts’, categories, and individuals (Crook 2016, 295).

As Alain Desrosières explains, in the late nineteenth century ‘statistical summing elicited a more general viewpoint than that of the doctors who, seeing only patients … had a different perspective of public health issues’ (Desrosières 2010, 170). Rather than examining the individual, this ‘general viewpoint’ focused ‘attention and debate’ on the economic and social environment ‘as an explanatory factor in mortality’. While Desrosières suggests that such statistical work was concerned with ‘the improvement of health and sanitary conditions in an urban environment’, Seth Koven describes similar social surveys as a method of knowing ‘the unknown slums’ in cities expanding under industrial capitalism (Desrosières 2010, 169). Middle-class philanthropists and social reformers utilised the survey ‘to know, to contain, to control, and to speak about the poor’, often using terms of moral judgment (Koven 1991, 370). In the twentieth century, social scientists picked up the mantle conducting social surveys which focused on a public of unemployed or working people ‘whose lives were impoverished and marginalised’ over those who were ‘prosperous and secure’ (Lawrence 2013, 274–75).

The interest in those deemed to be ‘impoverished and marginalised’ was widely shared by researchers, social workers, the clergy, the police, doctors, and within public health. Public health practice largely focussed on women and children, sending sanitary inspectors and health visitors into communities to monitor and educate throughout the nineteenth and early twentieth centuries (Berridge 2007, 188; Davies 1988). In these accounts of social surveys, research was concerned with classifying populations, aiming to ‘elicit, pathologize, and sometimes exoticize the morally deviant’, separating them from the respectable and legitimate (Savage 2010, 7). Yet, as Desrosières asserts, in the early twentieth century, new sampling methods opened up a wider public to researchers. Rather than necessitating exhaustive surveys into poverty-stricken areas, representative sampling allowed parts to ‘replace the whole’, leading to a new conception of the ‘whole’ (Desrosières 2010, 226).
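The logic of representative sampling described here can be illustrated with a brief sketch. The population size, prevalence and sample size below are invented for illustration only and are not drawn from any survey discussed in this chapter; the point is simply how a randomly drawn ‘part’ can stand in for the ‘whole’.

```python
import random

# Illustrative sketch only: a random sample lets a "part" stand in for the
# "whole" when estimating the prevalence of an illness.

random.seed(42)

# Hypothetical population of 100,000 people, roughly 12% of whom report an illness.
population = [random.random() < 0.12 for _ in range(100_000)]

# An exhaustive survey would count everyone ...
true_prevalence = sum(population) / len(population)

# ... while a random sample of 2,000 respondents estimates much the same figure.
sample = random.sample(population, 2_000)
estimated_prevalence = sum(sample) / len(sample)

print(f"Whole population: {true_prevalence:.3f}")
print(f"Sample estimate:  {estimated_prevalence:.3f}")
```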

By the 1940s, social medicine was emphasising the dynamic relationship between health and social factors, aiming to explore how social and economic change affected health. Furthermore, practitioners refused to view health and sickness as absolute states and instead used statistical methods to examine ‘norms and ranges of variation with respect to individual differences’, bringing this new ‘whole’ public under the purview of public health (Murphy and Smith 1997, 3). As a discipline, social medicine focused on building statistical links between ‘life hazards, poor environments and poor health’ and conceived of medicine as a social science which examined the social relations of health and sought to rectify inequalities (Oakley and Barker 2004, 5–6). Debates around social medicine intersected with those around the planning of the NHS, drawing the suspicion of clinicians and doctors ‘for questioning their focus on the individual patient at the expense of the wider public good’ and for looking beyond their professional expertise to the field of medical statistics (Oakley and Barker 2004; Porter 2002). Social medicine as a political project identified whole population health as a social problem and looked to social science and epidemiological surveys to inform health policy.

The expanded focus of social medicine brought new members of the public to the attention of public health. Epidemiological surveys asked members of this new public about their material conditions and social status as well as their health, and certain sections of the public found themselves the subjects of social investigation for the first time. These people may have been familiar with survey methods intellectually but not with how it felt to be subjected to them. Publics made up of the middle classes and men, rather than the usual survey subjects of women and the marginalised poor, were placed under the lens of the Survey, and these newer publics did not always behave as the surveyed should. Consisting of people with greater social, economic and political capital, these publics could more easily speak back to public health. Positioning themselves as the ‘subject[s] of rights’ as well as of research, they called into question top-down narratives of expertise and the authority of state representatives (Crook 2016, 295). Although epidemiological studies viewed the public through the lens of population health, in practice they were still dealing with individual members of the public.

1.2 Mass Vaccination and Herd Immunity

Additional tensions between the notion of ‘population’ and the individuals that were its constituents can be observed in one of the key population-level public health interventions of the post-war era: mass vaccination. During the nineteenth century, vaccination against smallpox prompted considerable public opposition. Rooted in religious, scientific and class-based hostility, anti-vaccination campaigns highlighted conflict between the rights of individuals and the collective good (Durbach 2005). Although vaccination was, for most of the post-war period, less contentious than it had been in the nineteenth century, mass vaccination programmes still required negotiation between individuals and the wider public of which they were part (Blume 2017). Immunisation programmes exemplified collective risk for collective reward. The actual risk of complications arising from vaccination was slight. But this was not the only price paid by the public. Children were expected to endure the discomfort of the procedure. Parents of young children bore the inconvenience of presenting their children for vaccination and the difficulty of seeing their child in pain and dealing with any rashes or irritability that might follow. Large bureaucratic systems for procuring, distributing and administering vaccines to those who needed them took significant resources to fund and staff. And the long-standing relationship between pharmaceutical companies, research institutions and public funding bodies meant that the British public sector invested, and continues to invest, to protect individuals from communicable diseases (Blume 2017; Heller 2008).

The benefits of mass vaccination were, similarly, collective. The concept of herd immunity held that the more people immunised, the lower the possibility of other people becoming infected by a particular disease. This worked both as a form of epidemic prevention and as a benefit to those individuals who, for whatever reason, were not immunised. The most obvious benefits were those of cost reduction. The modest expense on vaccination systems was more than repaid by lower hospital admissions and reduced incapacity for work, disability and death. Such arguments became explicit from the 1970s, when the welfare state sought to deal with financial crises by investing in preventative care. But there were also cultural reasons for such an investment. As vaccination became a proven and widespread tool for preventing disease, publics themselves demanded protection in the form of vaccination. Thus, in the 1950s, when the American Jonas Salk announced the successful field trials of his vaccine against polio, the British government rushed through a vaccination programme to meet the demand from British citizens (Lindner and Blume 2006; Millward 2017). As Jacob Heller has argued, vaccination became a symbol of a modern, functioning Western nation—something that advanced countries had and backward states did not (Heller 2008). Moreover, as citizens came to expect that their governments would manage risks to their personal safety, they demanded not only that the government provide vaccination services, but also that fellow citizens adhere to national guidelines and vaccinate their children for the good of the collective. Population-level measures like vaccination, therefore, did more than protect the masses: they also helped to underline the continued importance of the public as a whole to collective health, and to the actors in charge of it.
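The arithmetic behind herd immunity can be sketched briefly. A standard epidemiological approximation, not taken from the sources discussed here, puts the share of the population that needs to be immune at 1 − 1/R0, where R0 is the number of people each case would infect in a fully susceptible population; the R0 values below are rough textbook figures used purely for illustration.

```python
# Illustrative sketch: the standard herd immunity threshold, 1 - 1/R0.
# The R0 values are rough textbook approximations, not figures from this chapter.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to prevent sustained spread."""
    return 1 - 1 / r0

examples = [
    ("polio", 6.0),       # assumed R0, for illustration only
    ("diphtheria", 7.0),  # assumed R0, for illustration only
    ("measles", 15.0),    # assumed R0, for illustration only
]

for disease, r0 in examples:
    threshold = herd_immunity_threshold(r0)
    print(f"{disease}: R0 ≈ {r0:.0f}, roughly {threshold:.0%} of the population needs immunity")
```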

1.3 Universalising Individual Behaviour: Exercise

Such universal imaginings of the public were also interwoven with understandings of the impact of individual behaviour on health. One such example is the way in which exercise came to be understood as a behaviour that would benefit everyone, no matter their individual status or what group or groups they belonged to. In September 2009, shortly before the death of the British epidemiologist Professor Jeremiah ‘Jerry’ Morris, the Financial Times ran a weekend magazine feature on him headlined ‘The man who invented exercise’ (Kuper 2009). Two months later, his obituarists echoed this appraisal. Berridge, writing in The Guardian, asserted that Morris ‘was the first researcher to demonstrate the connection between exercise and health’ (Berridge 2009). This reputation was largely predicated on the research he conducted with the Social Medicine Research Unit (SMRU), and in particular the London Transport Workers study. This study established a link between physical activity and coronary heart disease (CHD) by noting the lower rates of morbidity and mortality among bus conductors compared to their more sedentary bus driver colleagues (Morris et al. 1953a, b). Through this research, and later studies of leisure-time activity among British civil servants, the SMRU developed a concept of exercise as a self-consciously modern response to a modern epidemic apparently born out of shifts in the post-industrial labour market. In 1961, at a symposium at Yale, Morris claimed that ‘[r]eduction of physical activity is surely one of the characteristic social changes of the present century, and automation promises to finish the job’ (Morris 1961).

Morris’s concerns were borne out empirically. Economic historian Andrew Newell characterised the structural changes in the British labour market over the twentieth century as being driven by the ‘engine’ of ‘technological process, which completely transformed the occupations, industries, hours of work, and, most of all, the standard of living of British workers’ (Newell 2007, 35). The results of this were clear. While in 1951 48.5% of workers were employed in manual jobs, this had declined to 38.5% by 1977. Over the same period, employment in managerial, professional and technical occupations (sedentary desk work) had increased from 8.2 to 26.7% (Newell 2007, 39).

The SMRU established a large longitudinal cohort study of mostly sedentary, desk-bound civil servants to investigate the potential links between leisure-time physical activity and CHD, finding that ‘vigorous exercise’ had a protective effect (Morris et al. 1973). This, rather than the earlier London Transport Workers Study, was the point at which exercise, as the twenty-first-century headline writers of the Financial Times might have understood it, was invented. Exercise was defined, not as a quotidian by-product of an active and physically strenuous job, but as a set of leisure-time activities that had to be consciously performed to a certain level of vigour to compensate for one’s sedentary day job. As Morris explained: ‘Vigorous exercise is very different from just a general increase in physical activity, and a clear message is needed as to which forms of exercise are most beneficial’ (Morris et al. 1973, 222).

The wider cultural influence of the SMRU research is necessarily diffuse and difficult to trace alongside other developments in post-industrial consumer societies, but as Dorothy Porter has argued, ‘[c]ommercialized physical culture expanded slowly after the Second World War up to the late 1970s and then made an exponential leap’ (Porter 2011, 77). Exercise, at least in the SMRU and Morris’s conception, was reinvented as a response to a modern epidemic (CHD), a result of modern sedentary lifestyles brought about by a modern macroeconomic shift (an increase in desk-bound work). The interwar construction of exercise as the action of responsible citizens in pursuit of the national health had been reconstituted.

Physical activity in post-war Britain was still very much part of the practice of citizenship, but now as an individualised, scientifically rational, modern way of life (Grant 2016). Alongside eating healthily and not smoking, exercise was a central tenet of public health’s new focus on lifestyle. This was illustrated by policy documents such as the UK government’s 1976 discussion paper Prevention and Health: Everybody’s Business, which framed individuals’ preventive health practices as a quid pro quo for the continuation of a health service free at the point of use as the NHS struggled financially (Department of Health and Social Security 1976). Further examples of exercise as literally ‘active’ citizenship were evident in the 1980s health promotion campaign ‘Look After Your Heart’ (memorably fronted by junior health minister Edwina Currie on an exercise bike), and the series of guidelines on physical activity published by the Chief Medical Officer, most recently in 2010 (Bull 2010). An individual behaviour could thus be a universal requirement for the entire population.

2 Public = Groups

While ‘the public’ could be seen as being synonymous with the citizenry, or the whole population, this mass public was often sub-divided into smaller groups. The composition of these groups was determined by various factors. Socio-economic status (class) had long been a way of grouping supposedly similar people together, as had gender and ethnicity. In the post-war period, the make-up and nature of such groups were both consolidated and complicated by the rise of identity politics. The growth of new social movements helped to create new ways of identifying people, and new ways for people to identify themselves. Moreover, identities could be multiple: a gay man could also belong to an ethnic minority group and to the working class. In this section, we do not aim to deal with every kind of identity group or with multiple identities. Rather, we focus on class, gender and ethnicity as three key groupings deployed by public health policymakers and practitioners. We highlight similarities and differences across these groups, and with past ways of imagining the public.

2.1 Class

The notion of ‘class’ is mutable and disputed, and as a result the use of class in the context of public health in post-war Britain was always heterogeneous. Furthermore, as Mike Savage points out, class and class identities underwent considerable change over the course of the second half of the twentieth century (Savage 2008, 2010). Class was (and remains) a slippery, but profoundly important, category for public health actors and the ways they thought about and responded to the public. Writing in the 1970s, the Marxist theorist Raymond Williams discussed class extensively in Keywords, his ‘vocabulary of culture and society’, tracing the brief history of the term and what it meant in the present day. Since Williams was a prominent public intellectual who was influential in contemporary discussions about cultural phenomena, his definition provides a useful lens with which to view class in public health, both contextually and theoretically. For Williams, three ‘variable meanings of class’ were used ‘in a whole range of contemporary discussion and controversy … usually without clear distinction’. Class could mean a ‘group’ (a socio-economic category), a ‘rank’ (indicating relative social position) or, lastly, a ‘formation’, describing organisation along social, political or cultural boundaries (Williams 1973, 66). All three meanings were mobilised at different points, and by various actors in British public health, throughout the second half of the twentieth century. In this section, we examine the ways in which class as both ‘group’ and ‘rank’ was instrumentalised by public health practitioners and policymakers. In Section Three, we consider how ‘formation’ mapped on to understandings of individual behaviour not only in relation to class, but to other identity categories too.

Of Williams’s three meanings of class, ‘group’, or the ‘(objective) social or economic category’ of class, can be best observed in the design and delivery of public health surveys (Williams 1973, 66). Generally, public health surveys focused on this understanding of class by categorising the public into different class ‘groups’: asking questions about income and occupation and examining the material status of respondents. In 1911, the Registrar General introduced a ‘five-part social class stratification’ which was soon adopted by other social investigators and went on to ‘dominate demographic work during the twentieth century’ (Webster 2002, 83; Renwick 2016, 14). Simon Szreter has termed this the ‘professional model of social classes’ (Szreter 2002b). Chris Renwick notes that the ‘professional model’ marked a ‘significant departure from earlier approaches to social structure because it identified status with work – that is, occupation – rather than worth’.

The Government Social Survey (GSS) department’s Survey of Sickness, which ran from 1943 until 1952, followed this five-part model. The instructions given to interviewers working on the Survey described the ‘economic classification’ of subjects as follows: ‘this is a broad grouping on an occupational basis, designed to show whether people who are on a level economically have similar characteristics in other respects, and whether different levels have different problems and needs in relation to a particular inquiry … The coding should be made on the basis of the occupation of the chief wage-earner, assisted, where possible, by knowing his/her wage rate’ (Survey of Sickness Instructions to Interviewers 1945, 17). Although the Survey did ask about income, this was secondary to occupation: ‘The occupation is of great importance and every effort should be made to obtain it precisely’ (Survey of Sickness Instructions to Interviewers). Wage-rates and occupations were stratified together: occupations such as ‘pensioners’ and ‘women in a variety of unskilled jobs’ made up the bottom rung of ‘Up to £3. 0. 0. a week’, while, at the other end of the scale, male ‘Managers’, ‘Management’, ‘Managerial staff’ and ‘Heads of Department’ working in various trades filled the ‘Over £10. 0. 0. a week’ highest earning category (Survey of Sickness Instructions to Interviewers 1945, 17–20).
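The occupation-first coding logic described in the interviewers’ instructions can be paraphrased in a short sketch. The group labels, wage bands and example occupations below are simplified and hypothetical, not a reproduction of the GSS codebook; they illustrate only the principle of coding by the chief wage earner’s occupation, assisted by the wage rate.

```python
from typing import Optional

# Hypothetical paraphrase of an occupation-first economic classification; the
# labels, wage bands and example occupations are illustrative, not the actual
# Survey of Sickness codebook.
ECONOMIC_GROUPS = [
    # (label, weekly wage ceiling in pounds, example occupations)
    ("Up to £3 a week", 3.0, {"pensioner", "charwoman"}),
    ("£3 to £5 a week", 5.0, {"labourer", "shop assistant"}),
    ("£5 to £7 a week", 7.0, {"machinist", "clerk"}),
    ("£7 to £10 a week", 10.0, {"skilled tradesman", "foreman"}),
    ("Over £10 a week", float("inf"), {"manager", "head of department"}),
]

def classify_household(chief_wage_earner_occupation: str,
                       weekly_wage: Optional[float] = None) -> str:
    """Code a household by the chief wage earner's occupation, assisted by the wage rate."""
    occupation = chief_wage_earner_occupation.lower()
    # The occupation is coded first, against the example lists for each group.
    for label, _, occupations in ECONOMIC_GROUPS:
        if occupation in occupations:
            return label
    # If the occupation is not recognised, fall back on the reported wage rate.
    if weekly_wage is not None:
        for label, ceiling, _ in ECONOMIC_GROUPS:
            if weekly_wage <= ceiling:
                return label
    return "Unclassified"

print(classify_household("Manager"))        # Over £10 a week
print(classify_household("Riveter", 4.10))  # £3 to £5 a week
```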

Although relatively straightforward, these classifications were not without controversy. In 1945, redrafted instructions to interviewers noted that the ‘difficulty most widely experienced … is that of asking the Income Group of the Chief Wage Earner’ (Survey of Sickness Instructions to Interviewers). Even those who understood the necessity of putting health in a social context sometimes expressed annoyance with having to reveal their income in person and on the doorstep (Complaints Received from Members of the Public Interviewed by S.S. Investigators). A common grievance, it was discussed in detail by survey staff. The GSS issued each fieldworker with a card printed with income categories so that the survey subject could ‘indicate … his income’ non-verbally (Survey of Sickness Instructions to Interviewers 1945). Such difficulties were not unique to the 1940s and 1950s. In 2010, a report from the Health Survey for England found that there was some reluctance to answer newly included questions on contraception and sexual health, but ‘item non-response was no higher than to other sensitive questions such as that on household income’ (Robinson et al. 2011, 5).

Depending on the purpose of the study, health surveys sometimes tried other ways of categorising class. In the late 1950s, influenced by notions of class explored in Young and Wilmott’s Family and Kinship in East London, Stephen Taylor and Sidney Chave asked respondents to their study of mental health in Harlow to self-define their social class alongside the by now-standard occupation and income questions (Young and Wilmott 2013). It did not go well. Half-way through the fieldwork, Chave wrote in his diary, ‘I have decided to drop the Social Class Question from the interview. Several times the interviewers have said they feel awkward about putting this question: it has sometimes aroused the emotions of informants who say they don’t believe in classes… one reluctant woman said to [an interviewer] – “I know you are going to ask me something silly about what social class I belong to”’ (Facing the Winter). For the most part, public health surveys throughout the post-war period continued to use a combination of occupation and income to stratify class, although the specified wage-rates increased and the occupations given as examples changed.

Assigning individuals to particular socio-economic categories was important for public health researchers as it enabled them to track distinctions between and within groups. As historian Charles Webster noted, a British tradition of observing inequalities between social groups was nothing new: it went back at least as far as Edwin Chadwick and Friedrich Engels (Webster 2002). Nevertheless, ‘although inequalities in health have represented a continuing and serious social problem, active investigation tends to have been a periodic phenomenon, stimulated by perceptions of social crisis’ (Webster 2002, 82). Historians such as Dorothy Porter have argued that in the immediate post-war years the ‘pre-war political mission of social medicine to tackle health inequalities’ was lost, with a ‘shift of focus from social structure to social behaviour in the sociological analysis of disease and health’ (D. Porter 2002, 70). In the late 1970s, however—one of Webster’s ‘crisis’ periods—there was a renewed focus on class and social inequality as a key determinant of health. This culminated in the August 1980 publication of the Black Report, a document widely viewed by both historians and public health campaigners as a pivotal moment in bringing the neologism of ‘health inequalities’ to public attention (Berridge 2002). The Black Report proved to be the catalyst for a decade’s worth of activity on health inequalities by a loose network of epidemiologists, sociologists and campaigners. This focus on class as a key analytical prism through which public health viewed its public was maintained into the 1990s with Donald Acheson’s report on health inequalities in 1998, and into the following decade with the Marmot Review, published in 2010 (Acheson 1998; Marmot 2010).

It is ironic, then, that one of the most influential analyses of health inequalities, the two Whitehall studies of London civil servants, started in 1968 with little or no thought given to investigating such issues. A relatively conventional risk factor study in the mould of the famous American-based Framingham project, the first Whitehall study recorded the grades of the male participants apparently merely as a ‘matter of good housekeeping’ (Marmot 2002; Oppenheimer 2005). According to the directors of the second Whitehall study, this followed the epidemiological conventions of the day:

“social class” was not an object of study but a control variable: a potential confounder that you got rid of in order to arrive at the “correct” conclusion about the association between risk factor and disease. (Marmot and Brunner 2005)

As the study progressed, however, startling inequalities became apparent: ‘[m]en in the lowest grade (messengers) had 3.6 times the [coronary heart disease] mortality of men in the highest employment grade (administrators)’, a trend that was observed proportionately across all grades (Marmot and Brunner 2005). What contributed most significantly to this disparity? Was it their socio-economic status (i.e. ‘group’); the behaviours that made up their ‘formation’ as a grade or social class; or was it their ‘rank’, their place in the hierarchy?

Ultimately, the Whitehall researchers decided that the latter explanation was most compelling. In a 1981 paper, they excluded absolute poverty as having anything to do with heart disease: ‘[e]xperience in Third World countries shows that where poverty is prevalent, coronary heart disease is rare’. The researchers also discounted many of the lifestyle risk factors, concluding that ‘a man’s employment status was a stronger predictor of his risk of dying from coronary heart disease than any of the more familiar risk factors’ (Rose and Marmot 1981, 17). Only a third of the disparities in deaths from heart disease between grades could be explained by known risk factors such as cholesterol, obesity, smoking or sedentary lifestyles (Marmot and Elliott 2005, 6). For the Whitehall researchers, ‘rank’ was the most powerful way of viewing the effects of class on health. They strongly argued that their findings were neither particular to civil servants nor to Britain. Indeed, they even supplemented their findings with those of neurobiologist Robert Sapolsky in his studies of baboons and their social orders. Whitehall II director Michael Marmot argued that inequalities in health had been found across Western nations, even those that thought they were relatively egalitarian, such as Sweden. As he noted, ‘Whitehall, far from representing an atypical postimperial backwater, [was] typical of the developed world’ (Marmot 2006, 1304). Class was thus profoundly important to the way public health actors imagined the public, but also to the health problems that members of the public were thought to encounter.

2.2 Gender

Socio-economic status was not the only method by which public health practitioners and policymakers categorised the public. From at least the nineteenth century onwards, there were gender divisions in imaginings of the public. Women were often the target of key public health initiatives such as the improvement of child health and hygiene in the home. In the post-war era, women, especially in their role as wives and mothers, continued to be a key focus for public health work. For instance, gendered assumptions about family life were integral to the way population health surveys were organised in the post-war period. The positioning of women as wives was fundamental to the structure of surveys and influenced the ways women responded to them and the information that they gathered. This can be seen in the GSS’s Survey of Sickness, which ran from 1943 until 1952. This was a study to measure the incidence of illness and injury in the whole population of England and Wales. Throughout its run, government fieldworkers interviewed a representative sample of around 300,000 people about their health (Taylor 1958). In its instructions to interviewers written in 1945, the GSS defined ‘any woman who is mainly responsible for the domestic duties of the household’ as a ‘housewife’ whether she worked outside the home or not (Survey of Sickness Instructions to Interviewers 1945). Although ‘housewife’ was described as a form of ‘other occupation’, the survey was also clear that ‘housewife’ was a relationship. When categorising every individual they interviewed, the interviewers were instructed: ‘Please be careful to give relationship to the housewife and not to any other person’ (Survey of Sickness Instructions to Interviewers 1945, 16). The questionnaire schedule was designed on the assumption that there would be a ‘housewife’ in the household and that she played a crucial role in household life. Questions about the house—the number of habitable rooms for example—were to be answered by the ‘housewife’ if she was present and the ‘chief wage earner’ only if she was not. It was suggested that ‘in the few households’ without a ‘housewife’, the ‘relationship of different members to one another’ should be made ‘clear in a note’ and attached (Survey of Sickness Instructions to Interviewers 1945). The organisation of the survey thus privileged the role of women in the home and conferred responsibility for knowledge about health and the home onto women, even when they were not the subjects of surveys themselves.

One result of this was that women, particularly wives, were often used as proxies in the absence of their husbands. The instructions to interviewers working on the Survey of Sickness asserted that ‘in general a man is not a good proxy for a woman’, but specifically mentioned that women (wives, daughters and mothers) could be used as proxies for men (Survey of Sickness Instructions to Interviewers 1945, 6). Many social researchers expected women to be knowledgeable about ‘stomachs, homes and emotions’ and to be willing to report on them (Anatomy of Don’t Knows 1947). As Caitriona Beaumont and Amy Whipple have shown, the gendered assumption of household knowledge was widespread during the 1950s and early 1960s, and was often utilised by middle-class women’s organisations to enact active forms of citizenship (Whipple 2010, 334). Groups such as the Mothers’ Union, Women’s Institute and Townswomen’s Guilds responded enthusiastically to government requests for their views in order to place the voices of housewives ‘right at the heart’ of post-war reconstruction (Beaumont 2015, 146). In trusting women to act as proxies for members of their households, the GSS recognised this form of expertise. But this recognition was not always shared by the women’s husbands, some of whom wrote to complain. One irate man demanded in 1947: ‘What authority have you to question my wife … regarding my personal health?’ (Complaints Received from Members of the Public Interviewed by S.S. Investigators 1947). Claire Langhamer suggests that men and women experienced different meanings of home in the 1950s, and developed different understandings of domestic privacy (Langhamer 2005, 344). When considered alongside Kate Fisher’s work, which indicates that communication about sensitive issues between spouses was not always frequent or detailed, this puts the use of proxies into perspective (Fisher 2008). The health surveys’ privileging of women’s roles within the home led them to trust women’s knowledge of their husbands’ health more than the men in question did. Although the emphasis on women as ‘housewives’ has faded over the decades as labour practices have altered and more women work outside the home, population-wide health surveying has continued to centre on households, focussing on the family unit.

Viewed as the primary caregivers to children, mothers were often the point of contact for health authorities seeking to monitor and intervene in the health of children. For infants and young children, this could be facilitated by home visits and mothers bringing their children to infant welfare clinics. For older children, the School Medical Service could deal more directly with pupils, albeit with the express written permission of the parents—most often assumed to be the mother (Daly 1983; Davis 2012). One place where mothers figured significantly was in attempts to increase childhood vaccination rates. When there were difficulties in vaccination campaigns, mothers were often positioned as the cause of problems and the targets for solutions. At the population level, mothers could be accused of apathy. When the diphtheria immunisation rate among infants dropped significantly in 1949/1950, health authorities blamed mothers for not fearing the disease. Advertising campaigns in particularly poorly performing districts stressed the need for immunisation and focused on the relationship between mother and baby. In specific instances, accusations could be even more direct and moralising. A series of outbreaks of diphtheria in Coseley, Staffordshire, led to scathing attacks on mothers’ lack of care from local and national authorities (Ministry of Health 1954, 3, 34).

Despite such ‘apathy’, immunisation was increasingly seen as a common part of ‘good’ childrearing in the modern welfare state. The narrative around vaccinating children played on dominant notions of parenthood—which, in turn, often focused on motherhood. In the 1950s, literature and advertising made frequent references to ‘parents’ in gender-neutral terms, and fathers were not absent from discussions about vaccination, as shown through responses to government surveys on why children had or had not been immunised (Box 1945; Gray and Cartwright 1951). However, the dominance of the mother figure when discussing public health measures for children cannot be ignored. In the anti-poliomyelitis campaigns of the 1950s, for example, mothers’ attitudes were the main target of Ministry of Health advertising (Anti-Poliomyelitis Vaccination Publicity to Raise Acceptance Rate 1959). Later, mothers would take on a dual role as both guardians to young children in need of vaccination and, as the programme expanded to young adults and expectant mothers, recipients of vaccination themselves. Even in the twenty-first century, the mother remains central to vaccination policy and practice. Regular surveys of parental attitudes to childhood immunisation question mothers rather than fathers, since women remain the most influential in decision making on a child’s vaccination status (Yarwood et al. 2005).

The role of women in vaccination was not, however, confined to decisions about their own child’s immunisation status. Women also actively took part in spreading public health messages, such as through the leafletting campaigns instigated by the Women’s Institute for smallpox vaccination (Correspondence between Ministry of Health and Women’s Voluntary Service and Women’s Institute). Indeed, mothers’ activism became increasingly important for government policy as the century progressed. While women’s groups could help in education efforts, they could also resist and complicate vaccination programmes. Both the Association of Parents of Vaccine Damaged Children (1974) and Justice Awareness and Basic Support (1995) were founded by mothers who claimed their children had been damaged by vaccines (Fox 2006; JABS 2001). As vaccination crises grew around the contested safety of the whooping cough vaccine in the 1970s, and the measles-mumps-rubella vaccine in the 1990s, such groups were often able to use the media to promote their messages and demand compensation and policy changes from the governments of the day. This would suggest that mothers were not just the targets of public health initiatives but also active participants. Gendered views of the public mattered not only to public health officials, but to the public itself.

2.3 Ethnicity

Over the course of the second half of the twentieth century, ethnicity came to figure more centrally in the ways public health practitioners imagined the public. Race-based understandings of health and disease had long been present in Britain, but ethnicity as a distinct, identifiable category that could be linked to particular public health patterns and problems attracted little attention until the 1960s. This is evident in how ethnicity figured (or did not) in public health surveys. In the immediate post-war period, public health surveys ignored ethnicity. There were no questions regarding ethnicity in prominent public health surveys such as the GSS’s Survey of Sickness (1943–1952) or the National Survey of Health and Development’s 1946 Birth Cohort Study. Surveys collected information about income and occupation to explore class, and recorded sex and age, but did not categorise the public in terms of ethnicity or ‘race’.

The reasons for this are complex. First, the absence of an ethnicity question on many health surveys suggests that public health in the 1940s and 1950s often imagined the British public as white. In 1951, the black and minority ethnic (BME) population of Britain was estimated at around 74,500 (Waters 1997, 209). Although this rose throughout the 1950s to 336,000 by the end of 1959, and reached close to half a million people by the time the Commonwealth Immigrants Act came into effect in 1962, Chris Waters has suggested that ‘Britishness and whiteness became increasingly synonymous’ in the 1950s, partly through the ‘exclusion of a racial other’ (Waters 1997, 212). Another possible reason for the absence of questions regarding ethnicity in the 1940s and 1950s was the impact of the Second World War and the Holocaust on scientific thinking about ‘race’ (Barkan 1993; Stepan 1982). The Pearsonian statistics used in public health had developed alongside theories of eugenics and ‘racial’ science, or scientific racism (Renwick 2016; Schaffer 2008; Higgs 2000; Magnello 2002). Although some have downplayed the influence of eugenicist thought on medical statisticians, there was a clear link between eugenic ideas and population statistics, including those of population health (Magnello 2002). During the Second World War, scientists in Britain and internationally began to distance themselves from Nazism and the ideas of ‘racial’ science. After the war, scientists and social scientists from all over the world rejected biological understandings of race, as evidenced by the 1950 and 1952 UNESCO statements on ‘race’ (Schaffer 2007). These two statements argued that ‘for all practical social purposes “race” is not so much a biological phenomenon as a social myth … given similar degrees of cultural opportunity to realize their potentialities, the average achievement of the members of each ethnic group is about the same’ (Schaffer 2007, 260). With this in mind, questions regarding ethnicity, nationality or even ‘race’ may have been purposefully left out of population health surveys.

Such issues, however, could not be ignored for long. Waters has argued that the growing numbers of people of colour settling in Britain in the 1950s led to ‘the emergence of a new “science,” that of “race relations,” pioneered by anthropologists and sociologists’ who set out to ‘study migrant communities and the response to them in Britain’ and the differences in opportunity afforded to members of different ethnic groups (Waters 1997). At the same time, discussions around ‘race’ and discrimination became part of the agenda of mainstream politics. In the wake of the Notting Hill riots of 1958, the Labour Party ‘issued a statement on “Racial Discrimination”, committing a future Labour government to anti-discrimination legislation’ (Schaffer 2014, 253). While public health authorities concerned themselves with the ‘classic “port health” diseases’ of tuberculosis and smallpox in relation to migration, public health surveys were slower to turn their focus onto migrant communities, never mind British-born people of colour (Bivins 2015, 14–15). In 1960, public health researcher Sidney Chave expressed surprise that his random sample of households in Harlow had selected a German family and an Indian family. Chave described them as an ‘unusual batch of families’, but this remark was the extent of his interest despite the Harlow mental health study’s focus on migration to new towns (The End of Fieldwork 1960). Chave’s survey did not ask questions about ethnicity. Although question 11 of the survey had asked ‘Where do you come from?’, which was suggestive of ‘birthplace’ questions in later surveys, the phrasing of question 12, ‘How long had you lived in that district?’, indicated that the expected answer was another town or county rather than country or continent (Taylor and Chave 1964, 209–10).

In the 1960s, when public health surveys began to address ethnicity directly, they did so in a way that embodied the tensions of the race relations project. As Waters notes, ‘race relations experts consistently narrated the migrant other as a “stranger” to assumed norms of what it meant to be British’ (Waters 1997, 209). Public health surveys dealt with ethnicity through the lens of migration. In 1958, the National Child Development Study Birth Cohort did not ask its participant mothers for their ethnicity. However, in 1965, the year of the first Race Relations Act, when the Study conducted their first follow-up interviews, ‘immigrants born in the reference week’ were added into the sample (Power and Elliott 2006, 34). Ethnicity continued to be conflated with place of birth into the 1970s, but there were other, more biological understandings of ethnicity at work too. In 1971, the Office of Population Censuses and Surveys (OPCS) Social Survey Division began their General Household Survey (GHS), the health section of which picked up where the GSS Survey of Sickness had left off (Moss 1991, 159). The GHS asked questions about the ethnicity of respondents, including it as a category of analysis alongside class and sex. Respondents were asked for their parents’ country of birth and interviewers were asked to ‘code as “coloured” all those people who are not, in their estimation, “white”’ (Office of Population Censuses and Surveys: Social Survey Division 1973, 75). The interviewers’ instructions clarified that this meant ‘Negros, brown skinned people such as Indians and Pakistanis, and yellow skinned people such as Chinese and Japanese’ (Office of Population Censuses and Surveys: Social Survey Division 1973, A63). The GHS report indicated that the ‘colour classification that results is not claimed to be either scientific or objective; however, it is expected to be reasonably consistent, meaningful and reproducible’ (Office of Population Censuses and Surveys: Social Survey Division 1973, 75). The birthplace question operated on the understanding that the majority of BME people living in Britain were either first or second generation ‘immigrants’, with the ‘colour’ question acting as a way to discern ‘white people … included among those whose parents had been born in the Indian subcontinent’ and ‘people whose parents were born in East Africa [but] were in fact of Asian descent’ (Sillitoe and White 1992, 142). In 1975, in recognition of the growing British-born BME population, the OPCS began to devise and trial questions on ethnicity which would provide more reliable information than parents’ birthplace, but these were ultimately rejected by both the public and the Government before the 1981 Census (Sillitoe and White 1992, 144, 147).

The inclusion and exclusion of ethnicity from public health surveys in post-war Britain shows the malleable nature of the public and how conceptions of the public changed to fit the politics of the time. Like class and gender, dominant ideas about ethnicity were at work when public health practitioners imagined the public. These conceptions, however, took on distinct forms as they interacted with formulations that were specific to public health, such as the growing importance assigned to individual behaviour as both cause and cure for many public health problems.

3 Public = Individual Behaviours

As well as being seen as specific groups, the public could be broken down into even smaller units: that of individuals. Although this might seem somewhat contradictory, as ‘the public’ is often seen as referring to the masses, or at least large groups, in the context of public health there was a special impetus towards imagining the public as a set of individuals.

From the 1950s onwards, epidemiologists and others began to establish links between individual behaviour and diseases such as certain types of cancer and heart disease. This meant that public health practitioners and policymakers had to take greater interest in individuals and their actions. In this section, we discuss some of the ways in which this focus on individuals was manifested within post-war public health, especially in connection with ideas about risk and the communication of this through health education. We contend that ways of thinking about individuals often aligned with ways of thinking about groups. Class, gender and ethnicity once more came to the fore. At the same time, there was also a countervailing tendency to think about some of the more universal aspects of individual behaviour, as something that the entire public, no matter which group they belonged to, needed to take into account.

3.1 Risk

One of the most important means by which individual behaviour came to be seen as a causal factor in disease aetiology was through the development of the notion of ‘risk’. The linking of chronic disease to lifestyle relied upon a statistical understanding of certain behaviours or characteristics and the likelihood that these would lead to ill-health. Beginning in America in the late 1950s and early 1960s, but spreading rapidly around the developed world, ‘risk factors’ for specific diseases were identified, such as high cholesterol and blood pressure for CHD (Rothstein 2003; Oppenheimer 2006; Timmermann 2012). Although risk factors were thought to be universal in their biological or behavioural nature, these were not evenly distributed across the population. Certain groups were thought to be more at risk than others due to their behaviour and characteristics.
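What ‘risk’ meant in statistical terms can be sketched briefly. The figures below are invented for illustration and are not taken from the studies discussed in this chapter; a relative risk is simply the incidence of disease among those with a given characteristic divided by the incidence among those without it.

```python
# Illustrative sketch with invented numbers: a risk factor expresses how much
# more likely disease is among the 'exposed' than the 'unexposed'.

def relative_risk(cases_exposed: int, n_exposed: int,
                  cases_unexposed: int, n_unexposed: int) -> float:
    """Incidence among the exposed divided by incidence among the unexposed."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical follow-up of 1,000 people with high blood pressure and 1,000 without.
rr = relative_risk(cases_exposed=90, n_exposed=1000,
                   cases_unexposed=30, n_unexposed=1000)
print(f"Relative risk of CHD for the hypothetical exposed group: {rr:.1f}")  # 3.0
```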

This uneven distribution of risk can be seen in relation to understandings of CHD. For much of the twentieth century, heart disease was conceived, by default, as a male disease. When CHD emerged as an apparent epidemic in Western nations in the immediate post-war period, its most visible, and most numerous, victims were middle-aged men (Ehrenreich 1984, 70–73). Large cohort studies in Britain that attempted to tease out the causes of the epidemic were conducted exclusively with male participants. The first Whitehall study, which started in 1968, only included male civil servants (Marmot and Brunner 2005, 251). Ten years later, the British Regional Heart Study recruited 7735 middle-aged men, and no women (Walker, Whincup, and Shaper 2004). An editorial in The Lancet in 1991 puzzled over these omissions, arguing that ‘in most developed countries CHD is unquestionably the biggest killer in women as well as in men and a cause of considerable morbidity’ and that ‘there is no sex difference in the mechanism of CHD, so the classic risk factors should apply’ (Anon. 1991).

This omission of women from research agendas was informed by cultural understandings of heart attacks as being the blight of the male breadwinner, intrinsically connected with work and stress. Studies of heart disease in the immediate post-war years were keen to investigate the links between occupation and heart disease, while the persistent popularity of the Type A hypothesis—that alpha males with highly ambitious personalities were more likely to be stressed and consequently suffer heart attacks—meant that even as the workforce became increasingly feminised, heart disease continued to be highly masculinised (Aronowitz 1988). Historian Jane Hand has illustrated such conceptions in popular discourse by her close reading of Flora margarine advertising, tracing a history of ‘the visual representation of the at-risk male, widening out to all males and finally re-incorporating the female purchaser and consumer’ (Hand 2017, 479, 493). Hand suggests that by the mid-1980s ‘women themselves were increasingly being identified as at-risk from CHD’, but it was not until 1999 that the British Women’s Heart and Health Study was established as a ‘sister’ study of the British Regional Heart Study (Hand 2017, 493). The persistence of the male coronary ‘candidate’ in the public and medical imagination was underlined in 2018 by news stories reporting that women suffered worse clinical outcomes following heart attacks, with the study author commenting that there was a ‘misconception amongst the general public and healthcare professionals about what heart attack patients are like … [t]ypically, when we think of a heart attack patient, we see a middle-aged man who is overweight, has diabetes and smokes’ (Anon. 2018).

Distinctions in risk profiling were not just linked to gender. In his discussion of post-war chronic disease research in former British colonies, Martin Moore has posited that the new methodologies of risk-factor epidemiology, and its ‘harnessing [of] the power of difference’, had important implications for domestic public health. Moore persuasively argues that British biomedical researchers compared and contrasted the ethnic, cultural and environmental differences of such populations in order to gain insight into the aetiology and risk factors for conditions such as hypertension, diabetes and heart disease. These research subjects of Commonwealth countries ‘provided an “other” for the British population’ in the post-war years (Moore 2016). But it was also new migrant communities in Britain that began to capture researchers’ interests. One area where this can be observed is in the response to smallpox outbreaks from the late 1940s to the early 1960s. After its eradication from Britain in the 1930s, smallpox became intimately associated with a foreign threat (Arnold 1993, 116–58). All British cases in the post-war period could be traced to specific instances of importation, usually a named individual arriving by air or by sea from South Asia. In the 1940s and 1950s, India and later Pakistan were often associated with smallpox, but this characterisation applied to the lands rather than the people per se. Indeed, when an Australian couple died of smallpox on the SS Mooltan in 1950, causing some secondary infections among the British population, the authorities blamed the husband’s behaviour. He had visited a Bengal bazaar against the advice of the health authorities and, as was common for his compatriots at the time, he was unvaccinated because smallpox was so rare in Australasia (Morgan 1950). Similarly, a 1949 outbreak in Glasgow was attributed to a ‘Lascar’ seaman who recovered in a Scottish hospital. His race was less important than the ports he had passed through, and when he had recovered he was given a fond farewell by the locals (Anon. 1950a, b). The British population appeared to show no ill-will in response to such isolated incidents.

This situation changed as the practices of Indian and Pakistani migrants became more politicised in the 1960s. During the height of debates on the Commonwealth Immigrants Bill (which sought to limit ‘coloured’ immigration) in 1961/1962, a series of small outbreaks of smallpox linked to air travellers led to widespread calls for more restrictions on who could enter the country and more rigorous medical checks at ports (Bivins 2008). This was not grounded in epidemiological research. Port controls had performed well for decades and smallpox was becoming less of a threat to Britain due to significant progress in the global eradication programme conducted by the World Health Organization (Dick 1962; Dixon 1962). These were politically rooted demands aimed at controlling the behaviour of foreigners and citizens travelling to contaminated lands. For while the British public appeared to demand vaccination of immigrants and emigrants—British subjects required vaccination before visiting many countries, both to protect the host country and to avoid the traveller bringing back a communicable disease—infant vaccination rates against smallpox in the domestic population remained well below 35% in most districts (Immunisation and Vaccination Statistics 1964). Certain diseases, then, could be seen as solely a problem for outsiders and for adequate border control rather than problems that required changes in behaviour or inconvenience for the ‘native’ population. A ‘risky’ group posed a threat, but this could be controlled.

Although recent immigrants were initially discussed in medical and political circles in terms of infectious disease, by the 1970s and 1980s, according to Moore, ‘doctors could no longer ignore the presence of black and Asian populations in British chronic disease clinics, and clinicians organised prevalence surveys and research programmes with the aim of determining resource implications for the NHS’ (Bivins 2015; Moore 2016, 403). While it was acknowledged that there might be genetic and biological determinants (or ‘susceptibility’) to racialised conditions such as ‘Asian rickets’ or type 2 diabetes, the means of addressing these issues were placed firmly in the cultural and social sphere, most particularly with regard to migrant communities’ dietary habits. Bivins cites the discussions over fortifying chapatti flour with vitamin D as a means to address rickets in children of South Asian descent (Bivins 2015, 252–55). Similarly, higher rates of heart disease among Bangladeshi, Pakistani and Indian men led the Health Education Authority (HEA) to publish a manual on prevention in 1994 recommending that this group’s fat intake be reduced to 30% of total energy intake, in contrast to the 33% recommended for the rest of the nation (McKeigue and Sevak 1994). At the time of writing, the British Heart Foundation continues to produce cookbooks specifically for south Asian and African-Caribbean audiences (British Heart Foundation 2012, 2013). Over time, then, attention had shifted from seeing ethnic minorities as posing a risk to the white population (through smallpox) to focusing on the health risks thought to be encountered by ethnic minority individuals.

3.2 Targeting: Health Education

The different patterns of disease among groups and individuals have been used to justify targeting health education initiatives at those most at risk. But such efforts also reflected and reinforced other assumptions about these various imagined publics and their behaviours. This can be observed in the ways class, gender and ethnicity figured in health education campaigns throughout the post-war period. The notion of class on display in health education can be aligned with Williams’s description of class as ‘formation’, as it tended to focus on the social, political and cultural boundaries between classes, and especially on the tastes and lifestyles of different class categories. The habits of the working classes had long been of interest to health educators, especially around issues such as hygiene and cleanliness (Crook 2016). In the post-war period, the focus on what Jerry Morris described as ‘ways of living’, or ‘mass habits and social customs’, intensified as these became more strongly linked to disease and ill-health (Morris 1955). Although these habits could be found throughout the population, the efforts of health educators tended to focus on the behaviours of lower socio-economic groups. This was, as the Cohen Report on health education remarked in 1964, because of the supposed ‘difficulty of reaching people in lower social classes who may often be the most in need of education in health matters but the least ready to accept it’ (Central Health Services Council and Scottish Health Services Council 1964, 36). Such perceptions continued across the decades. In a document describing their ‘Look After Your Heart’ campaign from 1987, the HEA asserted that their efforts were ‘aimed at everyone in England. However, the prevalence of CHD is greatest among certain groups (mainly socio-economic groups C2, D and E), who have also proved the hardest to reach effectively with health education messages’ (Health Education Authority 1987, 10).

As the HEA document hinted, the strategy of focusing health education on the poorer groups in society was, to some degree, justified by the pattern of disease and its relationship to socio-economic status. People in lower social classes experienced worse health, but the extent to which this was related to behaviour, rather than to the environment or social structure, was much more debatable. This can be seen in the case of smoking. During the 1950s and early 1960s, rates of smoking were broadly even across the social classes. Yet, during the 1970s, smoking began to decline among the higher socio-economic groups (Berridge 2007, 206). It could be suggested that this was because the more affluent groups in society took on board anti-smoking messages and acted accordingly (Britten 2007). On the other hand, more nuanced research pointed to the value of smoking in the lives of poorer people and to their weaker incentives to give up smoking for the sake of long-term health (Lawlor et al. 2003; Graham 1987). Health education, then, may not have ‘succeeded’ or ‘failed’ in relation to class as formation: it simply missed its target.

Further evidence for this can be found in how class figured in specific health education campaigns. Many of the early anti-smoking messages appeared to have been aimed at all socio-economic groups, but by the mid-1960s there was a more concerted attempt to reach working-class young people (Berridge and Loughlin 2005). Moreover, some of these campaigns made explicit use of a wider set of social and cultural objects and activities to encourage young people to stop smoking. In the Ministry of Health’s ‘More money, more fun if you don’t smoke’ poster from 1966, a young man is surrounded by high-value consumer goods which would have been items of desire for many working-class youths. Drawing on such aspirational imagery could, however, backfire. A poster produced for the Health Education Council’s 1977–1979 alcohol education campaign in the North East of England made use of a picture of a manicured female hand reaching for a bottle of vodka. Yet this image did not resonate with the intended working-class audience, and the entire campaign was found to be too geared towards a ‘middle-class view of life’ (Budd et al. 1983). This suggests that the long-running tension between largely middle-class public health practitioners and policymakers, and more working-class audiences, persisted. Moreover, it also indicates that class continued to matter in a variety of ways in post-war public health, even as more traditional class-based identities broke down and other methods of identifying and categorising people came to the fore.

Indeed, health education campaigns were not just targeted at changing the behaviours of the working classes, but at other groups thought to be most in need, including women and ethnic minorities. To some extent, these efforts could be related to the idea that individuals in such groups were at a higher risk of developing particular conditions, but other tropes were at work too. Gendered views of women’s role as mothers, for instance, underpinned anti-smoking campaigns during the 1970s. According to Berridge, smoking was regarded largely as a male habit until the late 1960s. From this time onwards, female smoking appeared to be on the increase, especially among younger women. Other research suggested that female smokers had smaller babies and that smoking might contribute towards foetal and neonatal deaths. Such evidence appeared to justify an anti-smoking campaign targeted specifically at pregnant women, which was launched in 1973. Yet, as Berridge argues, the roots of the campaign went beyond concerns about foetal health (Berridge 2007, 187–93). As discussed above, mothers had long been the target of public health campaigns. There were also specific reasons why pregnant women, and the health of the foetus, prompted particular concern at this time. The late 1960s and early 1970s saw the introduction of legal abortion and the growing availability of the contraceptive pill, potentially giving women more control over their reproductive health than ever before. Interest in pregnant smokers was not, therefore, just about reducing the health risks of a specific group, but can be related to deeper, longer-running issues surrounding reproduction and who had the power to control it.

Similarly, broader ideas about race and ethnicity also help explain some of the health education campaigns targeted at ethnic minorities. The persistence of rickets among the South Asian population in Britain was linked by public health policymakers to diet, culture and skin colour. Although there were debates about how to address the issue, including fortification of staple foods or exposing affected groups to more sunlight, health education was settled on as the way forward. The ‘Stop Rickets’ campaign of the 1970s and 1980s was, as Bivins points out, saturated with assumptions about the Asian community and the superiority of the ‘British’ way of life. Asians were encouraged to eat a diet more like that of the white community by posters that made use of ‘oriental’ motifs and depictions of ‘Asian’ people. Rickets was framed as a specifically ‘Asian’ problem, with the tacit (and sometimes more overt) message that ‘traditional’ culture and diet were to blame (Bivins 2015, 278–91). Yet, as contemporaries noted, rickets was not the most pressing public health issue facing British Asians. In their analysis of health education materials designed for ethnic minorities during the 1980s, Bhopal and Donaldson found that these focused on pregnancy and infant care as well as the prevention of rickets. But, they suggested, this did not necessarily reflect the health education needs of ethnic minorities, which included information about the biggest killer (heart disease) and how to access health services. Bhopal and Donaldson argued that ‘Health education services should focus not only on those health problems where the ethnic minority group has an excess of a problem as compared to the host population, but also where there is no difference, or indeed, where there is a deficit’ (Bhopal and Donaldson 1988, 139).

4 Conclusion

Despite the growing interest in individuals and their behaviour over the course of the post-war period, group and mass ways of imagining the public persisted. Indeed, it was sometimes hard to separate these different publics. Individual behaviour was often tied to membership of a particular group. The distinctions between groups certainly mattered, but there were numerous public health initiatives, like vaccination, where the whole public was the target. Numerical and statistical ways of imagining the public could simultaneously emphasise individual risk, encompass never-before-surveyed groups (like middle-class men) and conceptualise the public at the population level. Health education efforts were often targeted at individuals who were members of specific groups, but were sometimes designed to reach everybody. This indicates that the rise of identity politics, coupled with the linking of chronic disease to individual behaviour, did not mark the end of a mass public. Indeed, as we explore in Chapter 4, the public itself came to matter to public health policy and practice in new and unexpected ways through its ability to ‘speak back’ to public health.