Abstract
Altmetrics are an emerging form of bibliometric measurement that capture the online dimension of scholarly exchange. Against the backdrop of a higher education landscape increasingly focused on quantifying research productivity and impact, and of literature emphasising the need to address gender bias in the discipline, we consider whether and how altmetrics (re)produce gendered dynamics in political science. Using a novel dataset on the Altmetric Attention Scores (AAS) of political science research, we investigate two questions: Do AAS vary by gender? And how do AAS relate to gendered social media dynamics? We find that AAS reproduce gendered dynamics found in disciplinary publication and citation practices. For example, exclusively male-authored journal articles score on average 27% higher than articles authored exclusively by female scholars. However, men are also more likely to write articles with an AAS of zero. These patterns are shaped by the presence of high-scoring male “superstars” whose research attracts much online attention. Complementing existing scholarship, we show that the AAS closely overlaps with virality dynamics on Twitter. We suggest that these gendered dynamics may be hidden behind the seemingly neutral, technical character of altmetrics, which is worrisome where they are used to evaluate scholarship.
Introduction
In this article, we introduce the study of altmetrics to political science and place it in conversation with scholarship on gender bias in the discipline. Using an original dataset, we analyse the extent to which altmetrics, as an emerging indicator of research impact, reflect the gendered organisation of academia, in a context where literature raises concerns regarding the institutional and structural factors that limit women’s representation and advancement (Lundine et al. 2018, p. 1755).
Altmetrics (“alternative metrics”) are indicators of research impact that aim to acknowledge the increasingly digital diffusion of research activities via social media. They allow scholars to “see ripples generated by their research that might otherwise go unnoticed” (Kwok 2013, p. 492). For example, the Altmetric Attention Score (hereafter: AAS) commonly features on most publishers’ webpages. The AAS tracks the real-time online attention an individual research item receives, visualised as a colourful wheel containing a dynamic numeric score. Despite having become a ubiquitous part of the digital academic experience (and potentially emerging as a tool of academic governance), altmetrics are rarely discussed outside specialised literature (for a review, see González-Valiente et al. 2016). This literature has argued that, if simply taken for granted, altmetrics may contribute to the reification of sociopolitical inequities ranging from the naturalisation of gender bias to the legitimation of discrimination, for example, in hiring, promotion, or grant awards.
It strikes us as particularly appropriate to begin interrogating the politics of altmetrics where they concern political science. In this exploratory piece, we therefore investigate what altmetrics do to and for the discipline of political science when it comes to gendered dynamics. Here, we build on political science scholarship that has investigated how gendered hierarchies emerge and are reproduced in the discipline. This literature’s primary concerns have focused on structural barriers to the professional presence and representation of women (Tolleson-Rinehart and Carroll 2006; Atchison 2018; Deschouwer 2020; Pflaeger Young et al. 2021) as well as bias in publication and citation practices (Breuning et al. 2005; Evans and Moulder 2011; Maliniak et al. 2013; Teele and Thelen 2017; Dion et al. 2018; Ghica 2021; Stockemer 2022) and pedagogy and teaching (Colgan 2017; Hardt et al. 2019; Phull et al. 2019). We extend this literature to digital academia through a focus on altmetrics.
To assess the extent to which altmetrics reproduce disciplinary gendered dynamics (as previously documented in political science for publication and citation practices), we ask: do altmetrics vary by gender? Further, how do altmetrics relate to gendered social media dynamics?
To answer these questions, we introduce a novel dataset that combines information on author gender and AAS for all articles published in 65 top peer-reviewed political science journals between 2013 and 2019. We find that, overall, the AAS reflects the same gendered patterns found in broader publication and citation practices (Dion et al. 2018). Exclusively male-authored journal articles score on average 27% higher than those authored exclusively by women scholars, meaning that the latter receive less attention through online channels. However, by disaggregating these results and controlling for outliers, we nuance these findings and show that these patterns are shaped by the overwhelming presence of high-scoring male disciplinary “superstars” whose research attracts online attention and viral sharing. If we exclude such outliers, both female-authored and mixed-author research garner higher AAS than male-authored research. Male authors also dominate the category of research that receives little to no online attention (articles with an AAS of zero), which speaks to the role of networked sharing in determining impact. We find that gendered dynamics pervade the online scholarly ecosystem, especially where sharing and dissemination are concerned. Specifically, we show that the AAS closely overlaps with the networked sharing and virality effects of Twitter, complementing earlier scholarship on the strong relationship between altmetrics and social media (Thelwall et al. 2013; Gumpenberger et al. 2016). These insights are useful for understanding what the AAS actually measures, and how this measurement is influenced by pre-existing gendered dynamics, both online and offline. Based on these findings, we suggest that the AAS may in turn contribute to the gendering of knowledge production and the reproduction of patterns of gendered social organisation in the discipline. Bibliometric indicators are not neutral tools, but capable of influencing and producing norms, behaviours, and practices (Schroeder 2021, p. 376). A more nuanced understanding of altmetrics as non-neutral indicators that increasingly govern research evaluation helps avoid naturalising structural inequalities in the discipline.
Altmetrics as an indicator of research impact
Altmetrics are meta-analytical tools used for monitoring scientific research with the aim of measuring research impact and influence. A direct response to the rise of the digital and social web, altmetrics are motivated by a desire to capture the reach, relevance, and impact of academic research in the digital ecosystem (Priem et al. 2010). The growing prevalence of altmetrics has started to attract considerable scholarly interest (Thelwall 2013; Bar-Ilan and van der Weijden 2015; Konkiel et al. 2016; Thelwall and Nevill 2018). Unlike traditional citation metrics codified at the journal or author level, altmetrics have no fixed or canonical definition but adapt to follow the online life of research (Lin 2020, p. 214). Altmetrics can be thought of as a composite of diverse criteria of engagement with scholarship, including interactions (e.g. clicks, views, and downloads), capture (e.g. bookmarks, saves, and favourites), mentions (e.g. posts, comments, reviews, and attributions), and social media reactions (e.g. likes, shares, and tweets), in addition to citations and rankings (Roemer and Borchardt 2015).
Concurrently, altmetrics reflect our digital social behaviour (Lin 2020, p. 215). As a result, they have garnered attention as a means of understanding how gender bias operates in the digital academic sphere (Sud and Thelwall 2014; Bar-Ilan and van der Weijden 2015; Fortin et al. 2021). Academic knowledge production, exchange, and dissemination take place through ever-diversifying digital channels via ubiquitous online platforms like Twitter, YouTube, Reddit, blogs, and collaborative wikis. This development is generally considered a net benefit for academia because it works to democratise scholarship and its evaluation (Daraio 2021). Digital sharing increases the likelihood that research is cited and circulated, which stands to equalise knowledge dissemination. These practices have transformed the environment within which disciplinary debates emerge and circulate (Esarey and Wood 2018; Greenhow et al. 2019). Social networking sites, academic and popular blogs, and podcasts open channels of access and communication between academics and public audiences. They create possibilities for evaluating research, enabling real-time, crowdsourced peer review (Greenhow et al. 2019, p. 992), encouraging transparency, or eliciting policy advice. Importantly, these evolving knowledge platforms work in parallel with traditional “offline” forms of academic exchange, e.g. networking at conferences.
Research indicators, e.g. citation counts, journal impact factors, and institutional rankings, are not socially neutral tools, however. They structure the discipline and shape the academic profession in different, unequal ways (Nygaard and Bellanova 2017; Thelwall and Nevill 2018; Ringel 2021; Crane and Glozer 2022). The higher education landscape has seen a proliferation of instruments intended to measure research productivity and quality, ultimately shaping funding, career advancement, and educational policies through calculative rationalities. Formal and informal practices of academic hiring, tenure, promotion, and evaluation are often tied to publication status, citation statistics, and research popularity (Giles and Garand 2007; Alter et al. 2020), while publisher and university rankings are determinants of funding and social capital (Hix 2004). The use of performance metrics in research management highlights how measurement can both reproduce and generate disciplinary inequalities (Lenine and Mörschbächer 2020). Such metrics then operate as “engines of anxiety” that promote particular notions of excellence and accountability (Espeland and Sauder 2016). Altmetrics are emerging as tools of academic governance. While not (yet) formally anchored in excellence or evaluation frameworks, institutions can use altmetrics to assess impact (e.g. in the UK’s Research Excellence Framework), which influences funding allocation (Kwok 2013, p. 493; Konkiel et al. 2016). Private research funders and charities are increasingly paying attention to altmetrics (Dinsmore et al. 2014), though current uses remain limited to informal channels (e.g. as shorthand for exemplary scholarship).
The use of altmetrics has been met with considerable criticism along three main axes. Firstly, it remains unclear what altmetrics are meant to measure, not least because their proprietary algorithms are often black-boxed (Lin 2020, p. 214). Secondly, it remains unclear whether altmetrics actually depart from rather than duplicate traditional measurements (Roemer and Borchardt 2015; Haustein et al. 2016; Fortin et al. 2021). Thirdly, scholars anticipate and critically interrogate the risks of institutionalising altmetrics as tools of academic governance (Kaufman-Osborn 2017; Nygaard and Bellanova 2017, p. 25; Lenine and Mörschbächer 2020; Crane and Glozer 2022, p. 807). Building on this literature, we analyse the most prevalent altmetrics indicator—the AAS—to empirically investigate the extent to which gender bias is reproduced in the field of political science.
Methodology
To investigate gendered dynamics in political science altmetrics, we develop an original dataset containing data from Altmetric.com (via the Altmetric Explorer, access to which was granted in May 2020). Altmetric.com, founded in 2011, is a private, for-profit company and the foremost aggregator of altmetrics in the natural and social sciences. For each item in an expanding global database of over 35 million research outputs (journal articles, whitepapers, reports, datasets, etc.; Altmetric.com 2011), Altmetric.com produces a numerical Altmetric Attention Score (AAS). The AAS is visualised as a distinct, colourful “donut” or summary badge on publisher and journal webpages. While Altmetric.com cautions against using AAS as a proxy for research excellence (Konkiel 2016), high-scoring articles are often featured on journal webpages, raising their visibility.
The AAS is an algorithmically-derived measure of the attention that a research item garners as it is shared across online communication channels. The algorithm remains proprietary and non-replicable (Altmetric.com 2020). The AAS approximates the magnitude and types of mentions or references for an item, assigning unique colours to each source captured (e.g. blue for Twitter mentions, red for news outlets, etc.). It considers volume (number of times mentioned), sources (where mentions derive from), and mentioning authors. Sources of attention include reference tools (e.g. Web of Science, Scopus, and Google Scholar), news media, blogging platforms and wikis (e.g. Wikipedia), policy documents, and the public pages of social media platforms (e.g. Facebook and Twitter).
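While the algorithm itself is proprietary, its general logic of weighting mentions by source can be illustrated. The following Python sketch is a simplification for illustration only: the weights loosely follow indicative defaults Altmetric.com has published, but the `mentions` structure and the function are hypothetical constructs of our own, not the actual scoring code.

```python
# Illustrative sketch of a source-weighted attention score. The real AAS
# algorithm is proprietary and also adjusts for who mentions an item and
# how; the weights below loosely follow indicative defaults published by
# Altmetric.com, but this is a simplification, not the actual formula.

ILLUSTRATIVE_WEIGHTS = {
    "news": 8,         # mainstream news outlets
    "blog": 5,         # blogging platforms
    "wikipedia": 3,    # Wikipedia references
    "policy": 3,       # policy documents
    "twitter": 1,      # public tweets
    "facebook": 0.25,  # public Facebook pages only
    "reddit": 0.25,
}

def naive_attention_score(mentions: dict) -> float:
    """Sum mention counts per source, weighted by source type."""
    return sum(ILLUSTRATIVE_WEIGHTS.get(source, 0) * count
               for source, count in mentions.items())

# Example: 40 tweets, 2 news stories, and 1 blog post -> 40 + 16 + 5 = 61
print(naive_attention_score({"twitter": 40, "news": 2, "blog": 1}))
```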
The AAS amalgamates online reactions to research that may be qualitatively different. For instance, it does not distinguish between positive and negative attention, challenging the assumption that high AAS reflect better-quality research. A telling example is Bruce Gilley’s controversial and later retracted Third World Quarterly article, published in 2017. The article, which makes a case for colonialism, garnered a score of 1653 in our dataset, among the highest recorded in the field of political science, derived primarily from negative attention via Twitter. Altmetrics therefore reflect the virality and rapid circulation of attention surrounding a research output. While the visual simplicity of the AAS gives a semblance of precision, neutrality, and utility (Nygaard and Bellanova 2017, p. 33), this can result in an oversimplification of information and context (Gumpenberger et al. 2016, p. 980).
Operationalising gender and the AAS in political science
Our unit of analysis is the research item (i.e. publication). Our full database contains close to thirty million research items published since 2011, identified via digital object identifiers (DOIs). Relying on DOIs, the database includes primarily journal articles, in addition to some books, chapters, reports, and other sources. Each research item has an associated AAS, as well as information such as title, journal or source, publication date and venue, author names and affiliations at the time of writing, funding bodies, subject field, and collected mentions across social media platforms and dimensions.
Within this dataset, we narrow our focus to items belonging to the subject field “Political Science” (as opposed to other disciplines).Footnote 1 Of these, we focus solely on journal articles to preserve comparative consistency (we exclude books, chapters, and other outputs). We then focus on items published between 2013 and 2019 for the following reasons: first, pre-2013 data is less reliable and less comparable overall due to changes in social media measurement; second, post-2019 data may be skewed by recency bias. AAS accumulate over time, meaning that very recent scores may not be directly comparable to older ones. Furthermore, the Covid-19 pandemic may have incentivised different patterns of knowledge dissemination. This precludes us from generalising about the dynamics that have driven online research dissemination since 2020, including those that interrelate with existing inequalities, e.g. unequal burden sharing between female and male colleagues.
In addition to the publication time frame, we further ensure the comparability of data by excluding non-peer-reviewed publications. We limit our dataset to items published in one of 65 top peer-reviewed political science journals, selected based on SCImago journal ranking scores in 2020 (accessible via https://www.scimagojr.com/). This excludes items published through preprint servers such as the Social Science Research Network or ResearchGate. This is not to suggest that such publications lack quality or significance in online knowledge ecosystems (in some cases they have very high AAS).
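As a concrete illustration of these filtering steps, a minimal sketch in Python/pandas follows. The file and column names (`subject`, `type`, `year`, `journal`) are hypothetical stand-ins; the actual Altmetric Explorer export schema differs.

```python
import pandas as pd

# Hypothetical file and column names standing in for the Altmetric
# Explorer export and the SCImago-based journal list.
df = pd.read_csv("altmetric_export.csv")
top_journals = set(pd.read_csv("scimago_top_polisci_journals.csv")["journal"])

sample = df[
    (df["subject"] == "Political Science")   # subject-field filter
    & (df["type"] == "article")              # journal articles only
    & (df["year"].between(2013, 2019))       # publication window
    & (df["journal"].isin(top_journals))     # top peer-reviewed journals
]
```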
We supplement each included research item with information on author gender. First, we coded the data using genderize.io, an open-access API that predicts gender based on first names (Wais 2018). Second, we manually validated the genderize.io results, correcting where necessary.Footnote 2 We preserve the sequence in which authors appear, coding for gender by designating authors as “male” or “female” based on full names as well as (where available) institutional/online profiles at the time of publication. These parameters produced a final dataset consisting of 6,856 coded research items.
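Gender prediction of this kind can be reproduced against the public genderize.io REST endpoint. The sketch below shows one such call in Python; the function name is our own, and the reference cited above (Wais 2018) documents the GenderizeR interface to the same API. Low-confidence responses should be routed to manual validation, as described above.

```python
import requests

def predict_gender(first_name: str) -> dict:
    """Query the public genderize.io API for a single first name.

    Returns JSON such as {"name": "ana", "gender": "female",
    "probability": 0.98, "count": 514228}. Predictions with a low
    probability (or a null gender) should be flagged for manual
    validation.
    """
    resp = requests.get("https://api.genderize.io", params={"name": first_name})
    resp.raise_for_status()  # note: the free tier is rate-limited
    return resp.json()

# Example: predict_gender("alexandra") -> {"gender": "female", ...}
```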
Importantly, we use the term “gender” rather than “sex” in this analysis to denote categorisations of male/female authorship, allowing for an understanding of the relationship between gender roles and norms in academia. We recognise that disaggregating by male/female can be problematic because it denies the inclusion of non-binary identities and can result in assigning gender/sex erroneously (Brooke 2021, p. 2096). While this approach enables us to offer a viable “first cut” into the dataset, it overlooks important dimensions and intersections of variation, including more nuanced understandings of gender in academia, and binary measurement of this kind may itself perpetuate social inequality (Westbrook and Saperstein 2015).
Results
Do AAS vary by gender?
In the following, we provide descriptive statistics for the dataset to understand how altmetrics vary by gender. Figure 1 depicts AAS by author gender, disaggregated into female-authored, male-authored, and mixed-gender-authored publications. In absolute terms, there are roughly three times as many exclusively male-authored (62.7%) as exclusively female-authored (20%) research items, with 17.2% authored by mixed-gender teams (Fig. 2).
Mean AAS is highest for mixed-gender-authored items (Fig. 2). Exclusively female-authored research generates, on average, the lowest AAS (19.23), compared to exclusively male-authored (24.49) and mixed-gender-authored research (30.54). Publications authored exclusively by men thus have, on average, a 27% higher AAS than those authored exclusively by women. Yet median AAS reveal that while mixed-gender-authored items generate the most attention, female-authored publications surpass male-authored ones. This indicates that the higher male average is driven by a small number of high-scoring outliers. If we exclude such outliers, female authors garner higher AAS than their male counterparts.Footnote 3
Figure 3 demonstrates the effect of such outliers by applying a log transformation, which compresses the distribution of the data by converting multiplicative differences into additive ones, allowing for a closer inspection of trends.
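For illustration, the contrast between means, medians, and log-transformed scores can be computed as follows, continuing the hypothetical `sample` DataFrame from the methodology sketch and assuming hypothetical `aas` and `author_gender` columns:

```python
import numpy as np

# Means are pulled upward by viral outliers; medians are robust to them.
stats = sample.groupby("author_gender")["aas"].agg(["mean", "median"])

# The headline gap: with the means reported above, 24.49 / 19.23 ≈ 1.27,
# i.e. exclusively male-authored items score on average about 27% higher
# than exclusively female-authored ones.
ratio = stats.loc["male", "mean"] / stats.loc["female", "mean"]

# log1p (which handles AAS of 0) converts multiplicative differences into
# additive ones, damping the influence of extreme scores.
sample = sample.assign(log_aas=np.log1p(sample["aas"]))
```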
We find that the “viral hits” of political science research are dominated by male authors: of the top 100 highest-scoring publications, 67 are authored exclusively by men, only 7 are exclusively female-authored, and 26 are authored by mixed-gender teams. Figure 4 shows similar patterns among the top 50 highest-scoring publications, where “virality” likewise skews male.
The gendered patterns visible among top scores are also present for publications that have an AAS of 0, i.e. that garner no measured online attention. Just as with top scores, zero scores are dominated by male-authored publications (Fig. 5). More precisely, the share of male-authored publications among zero-score items (67%) mirrors their share of the top 100 (67%, compared to 62.7% overall). Given that female-authored publications make up 15% and mixed-gender authorship 16% of zero-score items (20% and 17.2%, respectively, in the overall dataset), this implies that AAS for male-authored items are more widely dispersed, while those for female-authored items cluster more closely around the mean.
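The comparison of gender shares at the extremes of the distribution can be sketched in the same hypothetical setting:

```python
# Gender shares among the top-100 highest-scoring items and among
# zero-score items, compared against the overall composition.
top100 = sample.nlargest(100, "aas")
zeros = sample[sample["aas"] == 0]

for label, group in [("overall", sample), ("top 100", top100), ("AAS = 0", zeros)]:
    shares = group["author_gender"].value_counts(normalize=True)
    print(label, shares.round(3).to_dict())
```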
Finally, we observe temporal patterns to understand whether these trends are reproduced over time. Figure 6 shows year-on-year AAS by author gender. Over time, mixed-gender teams tend to capture the highest scores on average, reproducing the above findings. Consistently across all years, exclusively female-authored items tend to have the lowest average AAS. We also find that AAS increase over time across all categories. While two items may have the same AAS, the relative weight or importance of that score vis-à-vis others is meaningful in relation to publication year. Older items tend to have lower average AAS, which likely reflects growing levels of online attention and dissemination, e.g. the increasing use of academic social media over the period under investigation. For instance, though Twitter emerged in 2006, “academic Twitter” as a tool for scholarly networking, dissemination, and outreach only began to gain traction around 2013 (Thelwall 2013; Lupton 2014; Mohammadi et al. 2018). In sum, our dataset reveals a relationship between AAS and gender, whereby male-authored scholarship dominates both the highest and the lowest scores, while female-authored scholarship performs better overall outside of these extremities.
To sum up the initial findings related to our first question, AAS seem to reflect the gendered practices and norms that also organise academic research and scholarship in political science more broadly. These dynamics represent existing disciplinary patterns that are reproduced online, as well as dynamics unique to online spaces (e.g. social media sharing, exposure, and reactions). Consider the finding above that male-authored publications dominate the highest and lowest scores while female authors do better at the median, outside of the extremes. This gendered dynamic aligns with literature showing that the gender citation gap is driven by male-dominated publications at the top of the distribution (Zigerell 2015).
High AAS for male-authored research may reproduce the outsized influence that seniority can have in disciplinary and wider political networks: perceived “superstars” in the discipline are quite often established male scholars, as the profession, especially in its higher rungs, remains relatively homogenous and slow to change (Tolleson-Rinehart and Carroll 2006, p. 511; Maliniak et al. 2008, p. 122; Pflaeger Young et al. 2021). Though women have overtaken men in terms of university entrants and in attaining political science degrees up to the PhD level, the academic career ladder through to full professor continues to fail them, and their experience of disciplinary spaces remains substantially different to that of their male colleagues (Alper 1993; Tolleson-Rinehart and Carroll 2006, pp. 510–511; Østby et al. 2013, p. 493; Beaulieu et al. 2017, p. 779; Ray 2018). While the field’s slow change is accompanied by an increasingly diverse academic Twittersphere (e.g. with networks like #WomenAlsoKnowStuff), virality in online research dissemination continues to evade women. While we cannot assess the type and content of online engagement female academics experience, it is likely to differ substantially from that of their male colleagues (Barlow and Awan 2016).
In addition to having larger audiences and therefore garnering more attention through social and other online channels, research authored by senior scholars attracts “mentioning up” dynamics (Bisbee et al. 2020). In turn, measurement techniques that reflect male academic “superstardom” or success through “virality” can work to reinforce a research environment (whether through funding, opportunities, career progression, etc.) that privileges (the visibility of) male scholarship. Indeed, a publication’s social popularity can be easily influenced by factors aside from its content (Zhang and Wang 2021). Reifying these gendered dynamics through a numerical ranking, represented visually as a neutral indicator of impact, risks producing gendered effects of its own. For example, extremes and outliers may be recalled better and more often than averages (following Kahneman and Tversky 2000). That “academic superstars” (whether online or offline) are more likely to be male may thus cement a belief that male researchers are the best bet a department or funding agency can make to maximise visibility. Yet these outliers are not representative: most articles that gain no traction and no visibility are also written by men, and publications by women and by mixed-gender teams do better on average in terms of online visibility.
How do AAS relate to gendered social media dynamics?
To further interrogate gendered dynamics in the AAS, we investigate the components that constitute it. More specifically, we first unpack the individual components that comprise the score; second, we focus on Twitter trends and other social media dimensions that amplify scores; third, we map changes in AAS against Twitter trends to understand the importance of Twitter in online research dissemination. Figures 7 and 8 deconstruct the AAS into its component parts. Note that while Mendeley (reader counts) and Dimensions (traditional citations, akin to those found in Web of Science) register high counts, they do not contribute to the calculation of the AAS.
With respect to the composition of the AAS, two characteristics are notable: first, the overall score relies mainly on mentions from Twitter; second, other sources frequently register very few mentions or none at all. The absence of mentions from these sources is not evidence of a lack of online engagement—for example, Altmetric.com can only track public Facebook pages, excluding those (like most user-level pages) that are visible only to private audiences. Indeed, focusing only on social media demonstrates that Twitter is by far the most relevant online platform for measuring online attention.
We examine temporal patterns to investigate whether AAS and Twitter mentions develop similarly year-on-year. Figure 9 suggests that both the average AAS and average Twitter mentions have steadily increased over time (based on date of publication). We also find that the AAS moderates the effect of Twitter mentions. Between 2013 and 2016, relatively low average Twitter mentions coincide with comparatively higher AAS: Twitter had slightly less influence on the overall score. This relationship reverses from 2017, when higher average Twitter mentions accompany continuously increasing, but comparatively lower, average AAS.
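The year-on-year comparison can be sketched analogously, assuming a hypothetical `twitter_mentions` column holding the per-item Twitter mention count:

```python
# Average AAS and average Twitter mentions per publication year; the
# ratio serves as a rough index of how strongly Twitter pulls the score.
yearly = sample.groupby("year")[["aas", "twitter_mentions"]].mean()
yearly["aas_per_tweet"] = yearly["aas"] / yearly["twitter_mentions"]
print(yearly.round(2))
```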
Finally, we compare the gendered nature of overall AAS and Twitter mentions. Figure 10 suggests that the AAS indeed closely matches the distribution of Twitter mentions for female-authored, male-authored, and mixed-gender-authored items.
In sum, our findings support existing research on altmetrics that finds Twitter currently plays an outsized role in measurements of online research impact, attention, and dissemination. The results raise questions about the indicator’s value for researchers, for instance its capacity to capture online knowledge exchange rather than simply reflect social media popularity and follower counts.
Notably, research dissemination online has a recency bias: emerging research is more likely to be tweeted simply because Twitter was not available as a platform before 2007, and because Twitter’s academic user base has grown substantially since. While older research can still be shared and thus accumulate high AAS (e.g. Alexander Wendt’s 1992 article “Anarchy is what states make of it: The social construction of power politics” was republished online by International Organization in 2009, drawing an AAS of 76), newer work is more likely to garner social media attention, generating higher scores. In turn, we know that the more recent the research output, the more likely it is to be attributed to female authors or co-authored teams, owing to trends towards increased co-authorship (Teele and Thelen 2017, pp. 437–439). In combination with the predominance of Twitter in the AAS, this may explain a relative increase in scores for female-authored pieces over time. Whereas in the professional discipline a higher ratio of male scholars, combined with practices of self-citation and citing other men, produces significant gender bias (Kristensen 2018), this may be mediated among more diverse online audiences. For example, female political scientists (and those on the tenure track) are more likely to use Twitter (Bisbee et al. 2020), and younger user populations, compared to the offline professional discipline, might also shift gendered dynamics (Wojcik and Hughes 2019; Bisbee et al. 2020). This would be good news if this online community translated its higher willingness to engage with female-authored research into its offline citation and reward practices.
Recent trends towards a further diversification of online disciplinary spaces, e.g. the more widespread use of Mastodon or Substack for research dissemination, but also the possibility that private platforms such as Twitter restrict their data, may well change both how altmetrics are calculated and how gendered dynamics play out online. For example, younger (or more politically left, or more junior, or only European, etc.) academics may move to different platforms, which would affect how (often) female authors are mentioned on Twitter. Given the current outsized importance of Twitter, this would raise questions not only about the AAS algorithm (which would need updating), but also about the feasibility of calculating the AAS in general, as data would need to be collected across more platforms, which is computationally intensive. Indeed, should there be substantial differences in engagement between platforms, it may raise doubts as to whether any overall altmetrics score can be meaningfully interpreted at all.
Conclusion
This article introduced the study of altmetrics to political science via an analytical focus on gender. We make three contributions: firstly, our original gender-coded dataset allows us to augment and refine existing research on the internal workings of altmetrics, and offers multiple avenues for further research. Secondly, our analysis meaningfully complements scholarship on gendered dynamics in political science. The results generated by approaching the dataset from multiple angles offer a comparative baseline for scholars working on structural inequalities. Concurrently, our analysis remains preliminary. The dataset and coding process could be expanded to include article abstracts or full texts to complement title length, keywords, and author information. This would provide greater insight into the nature of virality, as well as how online attention, status, subject matter, and gender interrelate (Alter et al. 2020). The analysis could also extend to online mentions themselves, for example the content and sentiment expressed in tweets about research, which we cannot currently capture. We do not control for factors such as the size of the respective author’s network (e.g. number of Twitter followers). Here again, we may see gendered patterns at work (Flaherty 2019). Similarly, controlling for institutional affiliation, rank, seniority, and language—which may well affect virality, but are difficult to reproduce in a dataset—could strengthen results.
Thirdly, our results indicate how metrics are not detached from, and may indeed reify, existing structural inequalities. The AAS potentially broadens the scope of research impact to include the digital social landscape. Yet there are pitfalls to the expanding range of performance indicators used to assess research productivity and popularity. AAS say little about either the quality of the research or the type of engagement it generates (positive or negative). That altmetrics quantify attention and popularity but do not reflect intellectual labour stands to distort and “metrify” scholarly exchange. This is all the more likely to produce inequality if such indicators are treated as neutral, working alongside other deeply gendered academic practices such as citations and reading lists (Phull et al. 2019; Alejandro 2021). Indeed, our results question the neutrality of altmetrics in a way that necessitates further analysis of their use and reception beyond this preliminary study. As academic careers and funding become tied to measures of productivity and impact, a perverse logic of knowledge production aimed at attention generation takes hold. When seeking to capture research excellence and outreach, focusing on individual AAS risks conflating impact with gendered structural inequality in political science. As with any other dimension of research, ethical considerations around who benefits from indicators and who suffers must be taken into account.
Finally, what should political scientists (and academia more broadly) do with this information? One avenue would be to encourage and support women and junior scholars in navigating online research dissemination and network-building, e.g. via workshops at conferences, in departments, or through professional associations, and to raise awareness of the gendered dynamics of the online discipline. Evidently, while workable, this solution largely ignores the more structural hierarchies at play: the game remains the same, only some people become better at playing it. Another avenue would be to reject the use of altmetrics (and other such indicators) altogether, working instead towards rewarding intellectual labour differently and/or creating more supportive online communities. Here, online networks like #WomenAlsoKnowStuff, aimed at supporting gender equity in citations and in wider dissemination to, e.g., journalists or policymakers, have an important role to play. And yet, altmetrics are attractive tools, and here to stay—which means seeking to overcome them entirely is likely to fail. It strikes us as urgent, therefore, that professional associations, universities, and departments develop formal recommendations, informed by social science methodology, as to what constitute suitable uses (and misuses) of indicators, particularly those that privilege non-inclusive practices like social media sharing and virality. This may concern hiring, career progression, and research assessment. Lastly, similar to pedagogical initiatives that aim to tackle gender bias in scholarship, we encourage scholars to critically reflect on how altmetrics and social sharing shape their own research experiences, and to inform students and future scholars of the gendered dynamics inherent in the emerging digital ecosystem that is transforming academic practice today.
Data availability
All data underlying this article can be accessed via: https://doi.org/10.34973/1atx-zt75
Notes
This includes items from international relations (IR). The article classifies IR as a sub-discipline of political science. This classification is methodological and should not be understood as a judgement on IR’s disciplinary status: the data comes pre-coded, with research items associated with one or more disciplines based on keywords and abstracts (e.g. “Political Science”, “Economics”, etc.). “Political Science” thus includes articles from IR (as well as international political economy, some development studies, migration studies, and similar cross-disciplinary fields where their abstracts/keywords suggest a focus on politics, loosely understood).
This included languages for which the algorithm produces low-confidence predictions (e.g. Icelandic) as well as gender-neutral names.
Note that the number of authors in mixed-gender publications might introduce bias to the comparison with single-authored articles. We aim to explore this relationship in future analysis.
References
Alejandro, A. 2021. Reflexive discourse analysis: A methodology for the practice of reflexivity. European Journal of International Relations 27 (1): 150–174. https://doi.org/10.1177/1354066120969789.
Alper, J. 1993. The pipeline is leaking women all the way along. Science 260 (5106): 409–411. https://doi.org/10.1126/science.260.5106.409.
Alter, K.J., et al. 2020. Gender and status in American political science: Who determines whether a scholar is noteworthy? Perspectives on Politics 18 (4): 1048–1067. https://doi.org/10.1017/S1537592719004985.
Altmetric.com. 2011. How it works. Available at: https://www.altmetric.com/about-our-data/how-it-works-2/. Accessed 18 Feb 2022.
Altmetric.com. 2020. Numbers behind numbers: The Altmetric Attention Score and sources explained. Available at: https://www.altmetric.com/blog/scoreanddonut/. Accessed 2 Sep 2021.
Atchison, A.L. 2018. Towards the good profession: Improving the status of women in political science. European Journal of Politics and Gender 1 (1–2): 279–298. https://doi.org/10.1332/251510818X15270068817914.
Bar-Ilan, J., and I. van der Weijden. 2015. Altmetric gender bias? An exploratory study. International Journal of Computer Science: Theory and Application 41 (1): 16–22.
Barlow, C., and I. Awan. 2016. “You need to be sorted out with a knife”: The attempted online silencing of women and people of Muslim faith within academia. Social Media + Society. https://doi.org/10.1177/2056305116678896.
Beaulieu, E., et al. 2017. Women also know stuff: Meta-level mentoring to battle gender bias in political science. PS: Political Science & Politics 50 (3): 779–783. https://doi.org/10.1017/S1049096517000580.
Bisbee, J., J. Larson, and K. Munger. 2020. #polisci Twitter: A descriptive analysis of how political scientists use Twitter in 2019. Perspectives on Politics. https://doi.org/10.1017/S1537592720003643.
Breuning, M., J. Bredehoft, and E. Walton. 2005. Promise and performance: An evaluation of journals in international relations. International Studies Perspectives 6 (4): 447–461. https://doi.org/10.1111/j.1528-3577.2005.00220.x.
Brooke, S.J. 2021. Trouble in programmer’s paradise: Gender-biases in sharing and recognising technical knowledge on stack overflow. Information, Communication & Society 24 (14): 2091–2112. https://doi.org/10.1080/1369118X.2021.1962943.
Colgan, J. 2017. Gender bias in international relations graduate education? New evidence from syllabi. PS: Political Science & Politics 50 (2): 456–460. https://doi.org/10.1017/S1049096516002997.
Crane, A., and S. Glozer. 2022. What’s next for the quantified scholar? Impact, metrics, and (social) media. Business & Society 61 (4): 807–812. https://doi.org/10.1177/00076503211016778.
Daraio, C. 2021. Altmetrics as an answer to the need for democratization of research and its evaluation. Journal of Altmetrics 4 (1): 2–13. https://doi.org/10.29024/joa.43.
Deschouwer, K. 2020. Reducing gender inequalities in ECPR publications. European Political Science 19 (3): 411–415. https://doi.org/10.1057/s41304-020-00249-y.
Dinsmore, A., L. Allen, and K. Dolby. 2014. Alternative perspectives on impact: The potential of ALMs and altmetrics to inform funders about research impact. PLOS Biology 12 (11): e1002003. https://doi.org/10.1371/journal.pbio.1002003.
Dion, M.L., J.L. Sumner, and S.M. Mitchell. 2018. Gendered citation patterns across political science and social science methodology fields. Political Analysis 26 (3): 312–327. https://doi.org/10.1017/pan.2018.12.
Esarey, J., and A.R. Wood. 2018. Blogs, online seminars, and social media as tools of scholarship in political science. PS: Political Science & Politics 51 (4): 811–819. https://doi.org/10.1017/S1049096518000070.
Espeland, W.N., and M. Sauder. 2016. Engines of Anxiety: Academic Rankings, Reputation, and Accountability. New York: Russell Sage Foundation.
Evans, H.K., and A. Moulder. 2011. Reflecting on a decade of women’s publications in four top political science journals. PS: Political Science & Politics 44 (4): 793–798. https://doi.org/10.1017/S1049096511001296.
Flaherty, C. 2019. Women have about half the followers of men on Twitter and otherwise diminished influence. Inside Higher Education, October. Available at: https://www.insidehighered.com/news/2019/10/15/women-have-about-half-followers-men-twitter-and-otherwise-diminished-influence. Accessed 28 July 2021.
Fortin, J., et al. 2021. Digital technology helps remove gender bias in academia. Scientometrics 126 (5): 4073–4081. https://doi.org/10.1007/s11192-021-03911-4.
Ghica, L.A. 2021. Who are we? The diversity puzzle in European political science. European Political Science 20 (1): 58–84. https://doi.org/10.1057/s41304-021-00319-9.
Giles, M.W., and J.C. Garand. 2007. Ranking political science journals: Reputational and citational approaches. PS: Political Science & Politics 40 (4): 741–751. https://doi.org/10.1017/S1049096507071181.
González-Valiente, C.L., J. Pacheco-Mendoza, and R. Arencibia-Jorge. 2016. A review of altmetrics as an emerging discipline for research evaluation. Learned Publishing 29 (4): 229–238. https://doi.org/10.1002/leap.1043.
Greenhow, C., B. Gleason, and K.B. Staudt Willet. 2019. Social scholarship revisited: Changing scholarly practices in the age of social media. British Journal of Educational Technology 50 (3): 987–1004. https://doi.org/10.1111/bjet.12772.
Gumpenberger, C., W. Glänzel, and J. Gorraiz. 2016. The ecstasy and the agony of the altmetric score. Scientometrics 108 (2): 977–982. https://doi.org/10.1007/s11192-016-1991-5.
Hardt, H., et al. 2019. The gender readings gap in political science graduate training. The Journal of Politics 81 (4): 1528–1532. https://doi.org/10.1086/704784.
Haustein, S., et al. 2016. Tweets as impact indicators: Examining the implications of automated “bot” accounts on Twitter. Journal of the Association for Information Science and Technology 67 (1): 232–238. https://doi.org/10.1002/asi.23456.
Hix, S. 2004. A global ranking of political science departments. Political Studies Review 2 (3): 293–313. https://doi.org/10.1111/j.1478-9299.2004.00011.x.
Kahneman, D., and A. Tversky, eds. 2000. Choices, Values, and Frames. New York: Cambridge University Press.
Kaufman-Osborn, T. 2017. Disenchanted professionals: The politics of faculty governance in the neoliberal academy. Perspectives on Politics 15 (1): 100–115. https://doi.org/10.1017/S1537592716004163.
Konkiel, S. 2016. Altmetrics: Diversifying the understanding of influential scholarship. Palgrave Communications 2 (1): 1–7. https://doi.org/10.1057/palcomms.2016.57.
Konkiel, S., C.R. Sugimoto, and S. Williams. 2016. What constitutes valuable scholarship? The use of altmetrics in promotion and tenure. Impact of Social Sciences, 24 March. Available at: https://blogs.lse.ac.uk/impactofsocialsciences/2016/03/24/the-use-of-altmetrics-in-promotion-and-tenure/. Accessed 7 July 2021.
Kristensen, P.M. 2018. International relations at the end: A sociological autopsy. International Studies Quarterly 62 (2): 245–259. https://doi.org/10.1093/isq/sqy002.
Kwok, R. 2013. Research impact: Altmetrics make their mark. Nature 500 (7463): 491–493. https://doi.org/10.1038/nj7463-491a.
Lenine, E., and M. Mörschbächer. 2020. Pesquisa bibliométrica e hierarquias do conhecimento em Ciência Política. Revista Brasileira de Ciência Política. https://doi.org/10.1590/0103-335220203104.
Lin, J. 2020. Altmetrics gaming: Beast within or without? In Gaming the Metrics: Misconduct and Manipulation in Academic Research, ed. M. Biagioli and A. Lippman, 213–227. Boston: MIT Press.
Lundine, J., et al. 2018. The gendered system of academic publishing. The Lancet 391 (10132): 1754–1756. https://doi.org/10.1016/S0140-6736(18)30950-4.
Lupton, D. 2014. ‘Feeling Better Connected’: Academics’ Use of Social Media. Canberra: News and Media Research Centre, University of Canberra. https://www.canberra.edu.au/about-uc/faculties/arts-design/attachments2/pdf/n-and-mrc/Feeling-Better-Connected-report-final.pdf.
Maliniak, D., et al. 2008. Women in international relations. Politics & Gender 4 (1): 122–144. https://doi.org/10.1017/S1743923X08000068.
Maliniak, D., R. Powers, and B.F. Walter. 2013. The gender citation gap in international relations. International Organization 67 (4): 889–922. https://doi.org/10.1017/S0020818313000209.
Mohammadi, E., et al. 2018. Academic information on Twitter: A user survey. PLOS ONE 13 (5): e0197265. https://doi.org/10.1371/journal.pone.0197265.
Nygaard, L.P., and R. Bellanova. 2017. Lost in quantification: Scholars and the politics of bibliometrics. In Global Academic Publishing, ed. M.J. Curry and T. Lilly, 23–36. Bristol: Multilingual Matters. https://doi.org/10.21832/9781783099245-007.
Østby, G., et al. 2013. Gender gap or gender bias in peace research? Publication patterns and citation rates for “Journal of Peace Research”, 1983–2008. International Studies Perspectives 14 (4): 493–506.
Pflaeger Young, Z., et al. 2021. Women in the profession: An update on the gendered composition of the discipline and political science departments in the UK. Political Studies Review 19 (1): 12–36. https://doi.org/10.1177/1478929920905503.
Phull, K., G. Ciflikli, and G. Meibauer. 2019. Gender and bias in the international relations curriculum: Insights from reading lists. European Journal of International Relations 25 (2): 383–407. https://doi.org/10.1177/1354066118791690.
Priem, J., et al. 2010. Altmetrics: A manifesto. Available at: http://altmetrics.org/manifesto.
Ray, V. 2018. The racial exclusions in scholarly citations. Inside Higher Education, 27 April. Available at: https://www.insidehighered.com/advice/2018/04/27/racial-exclusions-scholarly-citations-opinion. Accessed 9 Oct 2020.
Ringel, L. 2021. Challenging valuations: How rankings navigate contestation. Zeitschrift Für Soziologie 50 (5): 289–305. https://doi.org/10.1515/zfsoz-2021-0020.
Roemer, R.C., and R. Borchardt. 2015. Meaningful Metrics: A 21st Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact. Chicago: Association of College and Research Libraries. Available at: https://www.alastore.ala.org/content/meaningful-metrics-21st-century-librarians-guide-bibliometrics-altmetrics-and-research.
Schroeder, J.E. 2021. Reinscribing gender: Social media, algorithms, bias. Journal of Marketing Management 37 (3–4): 376–378. https://doi.org/10.1080/0267257X.2020.1832378.
Stockemer, D. 2022. Introduction: The gendered distribution of authors and reviewers in major European political science journals. European Political Science 21 (3): 413–416. https://doi.org/10.1057/s41304-021-00357-3.
Sud, P., and M. Thelwall. 2014. Evaluating altmetrics. Scientometrics 98 (2): 1131–1143. https://doi.org/10.1007/s11192-013-1117-2.
Teele, D.L., and K. Thelen. 2017. Gender in the journals: Publication patterns in political science. PS: Political Science & Politics 50 (2): 433–447. https://doi.org/10.1017/S1049096516002985.
Thelwall, M., et al. 2013. Do altmetrics work? Twitter and ten other social web services. PLOS ONE 8 (5): e64841. https://doi.org/10.1371/journal.pone.0064841.
Thelwall, M., and T. Nevill. 2018. Could scientists use Altmetric.com scores to predict longer term citation counts? Journal of Informetrics 12 (1): 237–248. https://doi.org/10.1016/j.joi.2018.01.008.
Tolleson-Rinehart, S., and S.J. Carroll. 2006. “Far from ideal”: The gender politics of political science. The American Political Science Review 100 (4): 507–513.
Wais, K. 2018. GenderizeR. Available at: https://kalimu.github.io/project/genderizer/. Accessed 13 May 2022.
Westbrook, L., and A. Saperstein. 2015. New categories are not enough: Rethinking the measurement of sex and gender in social surveys. Gender & Society 29 (4): 534–560. https://doi.org/10.1177/0891243215584758.
Wojcik, S., and A. Hughes. 2019. How Twitter Users Compare to the General Public. Washington, D.C.: Pew Research Center. Available at: https://www.pewresearch.org/internet/2019/04/24/sizing-up-twitter-users/. Accessed 22 June 2020.
Zhang, L., and J. Wang. 2021. What affects publications’ popularity on Twitter? Scientometrics 126 (11): 9185–9198. https://doi.org/10.1007/s11192-021-04152-1.
Zigerell, L. 2015. Is the gender citation gap in international relations driven by elite papers? Research & Politics 2 (2): 2053168015585192. https://doi.org/10.1177/2053168015585192.
Acknowledgements
The authors are grateful to Marion Lieutaud, Mathis Lohaus, and Michael Tierney for their useful feedback, as well as the participants of the Berlin Global Pathways Workshop in 2021 and of two ISA panels on diversity in the discipline in 2021 and 2022 for their questions and comments. The authors are thankful to Stacy Konkiel for her support in the early stages of this research project.
About this article
Cite this article
Meibauer, G., Phull, K., Alejandro, A. et al. Alternative metrics, traditional problems? Assessing gender dynamics in the altmetrics of political science. Eur Polit Sci 23, 179–198 (2024). https://doi.org/10.1057/s41304-023-00431-y