Introduction

In this article, we introduce the study of altmetrics to political science and place it in conversation with scholarship on gender bias in the discipline. Using an original dataset, we analyse the extent to which altmetrics, as an emerging indicator of research impact, reflect the gendered organisation of academia, in a context where the literature raises concerns regarding the institutional and structural factors that limit women’s representation and advancement (Lundine et al. 2018, p. 1755).

Altmetrics (“alternative metrics”) are indicators of research impact that aim to acknowledge the increasingly digital diffusion of research activities via social media. They allow scholars to “see ripples generated by their research that might otherwise go unnoticed” (Kwok 2013, p. 492). For example, the Altmetric Attention Score (hereafter AAS) now features on most publishers’ webpages. The AAS tracks the real-time online attention an individual research item receives, visualised as a colourful wheel containing a dynamic numeric score. Despite having become a ubiquitous part of the digital academic experience (and potentially emerging as a tool of academic governance), altmetrics are rarely discussed outside specialised literature (for a review, see González-Valiente et al. 2016). This literature has argued that, if simply taken for granted, altmetrics may contribute to the reification of sociopolitical inequities, ranging from the naturalisation of gender bias to the legitimation of discrimination in, for example, hiring, promotion, or grant awards.

It strikes us as particularly appropriate to begin interrogating the politics of altmetrics where they concern political science. In this exploratory piece, we therefore investigate what altmetrics do to and for the discipline of political science when it comes to gendered dynamics. Here, we build on political science scholarship that has investigated how gendered hierarchies emerge and are reproduced in the discipline. This literature’s primary concerns have focused on structural barriers to the professional presence and representation of women (Tolleson-Rinehart and Carroll 2006; Atchison 2018; Deschouwer 2020; Pflaeger Young et al. 2021) as well as bias in publication and citation practices (Breuning et al. 2005; Evans and Moulder 2011; Maliniak et al. 2013; Teele and Thelen 2017; Dion et al. 2018; Ghica 2021; Stockemer 2022) and pedagogy and teaching (Colgan 2017; Hardt et al. 2019; Phull et al. 2019). We extend this literature to digital academia through a focus on altmetrics.

To assess the extent to which altmetrics reproduce disciplinary gendered dynamics (as previously documented in political science for publication and citation practices), we ask: do altmetrics vary by gender? Further, how do altmetrics relate to gendered social media dynamics?

To answer these questions, we introduce a novel dataset that combines information on author gender and AAS for all articles published in 67 top peer-reviewed political science journals between 2013 and 2019. We find that, overall, the AAS reflects the same gendered patterns found in broader publication and citation practices (Dion et al. 2018). Journal articles authored exclusively by male scholars score on average 27% higher than exclusively female-authored outputs, meaning that the latter receive less attention through online channels. However, by disaggregating these results and accounting for outliers, we nuance these findings and show that these patterns are shaped by the overwhelming presence of high-scoring male disciplinary “superstars” whose research attracts online attention and viral sharing. If we exclude such outliers, both female-authored and mixed-author research garner higher AAS than male-authored research. Male authors also dominate the category of research that receives little to no online attention (articles with an AAS of zero), which speaks to the role of networked sharing in determining impact. We find that gendered dynamics pervade the online scholarly ecosystem, especially where sharing and dissemination are concerned. Specifically, we show that the AAS closely overlaps with the networked sharing and virality effects of Twitter, complementing earlier scholarship on the strong relationship between altmetrics and social media (Thelwall et al. 2013; Gumpenberger et al. 2016). These insights are useful for understanding what the AAS actually measures, and how this measurement is influenced by pre-existing gendered dynamics, both online and offline. Based on these findings, we suggest that the AAS may in turn contribute to the gendering of knowledge production and the reproduction of patterns of gendered social organisation in the discipline. Bibliometric indicators are not neutral tools, but are capable of influencing and producing norms, behaviours, and practices (Schroeder 2021, p. 376). A more nuanced understanding of altmetrics as non-neutral indicators that increasingly govern research evaluation helps avoid naturalising structural inequalities in the discipline.

Altmetrics as an indicator of research impact

Altmetrics are meta-analytical tools for monitoring scientific research, with the aim of measuring research impact and influence. A direct response to the rise of the digital and social web, altmetrics are motivated by a desire to capture the reach, relevance, and impact of academic research in the digital ecosystem (Priem 2010). The growing prevalence of altmetrics has started to attract considerable scholarly interest (Thelwall 2013; Bar-Ilan and van der Weijden 2015; Konkiel et al. 2016; Thelwall and Nevill 2018). Unlike traditional citation metrics codified at the journal or author level, altmetrics have no fixed or canonical definition but adapt to follow the online life of research (Lin et al. 2020, p. 214). Altmetrics can be thought of as a composite of diverse criteria of engagement with scholarship, including interactions (e.g. clicks, views, and downloads), capture (e.g. bookmarks, saves, and favourites), mentions (e.g. posts, comments, reviews, and attributions), and social media reactions (e.g. likes, shares, and tweets), in addition to citations and rankings (Roemer and Borchardt 2015).
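Conceptually, each research item can thus be pictured as a record of these engagement categories. A minimal sketch in Python (the class and field names are our own illustrative shorthand, not Altmetric.com’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class EngagementRecord:
    # Illustrative grouping of the engagement criteria altmetrics combine;
    # names follow Roemer and Borchardt's taxonomy, not any vendor schema.
    interactions: dict  # e.g. {"views": 120, "downloads": 34}
    captures: dict      # e.g. {"bookmarks": 5, "saves": 2}
    mentions: dict      # e.g. {"blog_posts": 1, "reviews": 0}
    social_media: dict  # e.g. {"tweets": 40, "likes": 12, "shares": 3}
    citations: int = 0

item = EngagementRecord(
    interactions={"views": 120, "downloads": 34},
    captures={"bookmarks": 5},
    mentions={"blog_posts": 1},
    social_media={"tweets": 40, "likes": 12},
    citations=3,
)
```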

Concurrently, altmetrics reflect our digital social behaviour (Lin et al. 2020, p. 215). As a result, they have garnered attention as a means of understanding how gender bias operates in the digital academic sphere (Sud and Thelwall 2014; Bar-Ilan and van der Weijden 2015; Fortin et al. 2021). Academic knowledge production, exchange, and dissemination take place through ever-diversifying digital channels via ubiquitous online platforms like Twitter, YouTube, Reddit, blogs, and collaborative wikis. This development is generally considered a net benefit for academia because it works to democratise scholarship and its evaluation (Daraio 2021). Digital sharing increases the likelihood that research is cited and circulated, which stands to equalise knowledge dissemination. These practices have transformed the environment within which disciplinary debates emerge and circulate (Esarey and Wood 2018; Greenhow et al. 2019). Social networking sites, academic and popular blogs, and podcasts open channels of access and communication between academics and public audiences. They create possibilities for evaluating research, enabling real-time, crowdsourced peer review (Greenhow et al. 2019, p. 992), encouraging transparency, or eliciting policy advice. Importantly, these evolving knowledge platforms work in parallel with traditional “offline” forms of academic exchange, e.g. networking at conferences.

Research indicators, e.g. citation counts, journal impact factors, and institutional rankings, are not socially neutral tools, however. They structure the discipline and shape the academic profession in different, unequal ways (Nygaard and Bellanova 2017; Thelwall and Nevill 2018; Ringel 2021; Crane and Glozer 2022). The higher education landscape has seen a proliferation of instruments intended to measure research productivity and quality, ultimately shaping funding, career advancement, and educational policies through calculative rationalities. Formal and informal practices of academic hiring, tenure, promotion, and evaluation can often be tied to publication status, citation statistics, and research popularity (Giles and Garand 2007; Alter et al. 2020), while publisher and university rankings are determinants of funding and social capital (Hix 2004). The use of performance metrics in research management highlights how measurement can both reproduce and generate disciplinary inequalities (Lenine and Mörschbächer 2020). Such metrics then operate as “engines of anxiety” that promote particular notions of excellence and accountability (Espeland and Sauder 2016). Altmetrics are emerging as tools of academic governance. While altmetrics are not (yet) formally anchored in excellence or evaluation frameworks, institutions can use them to assess impact (e.g. in the UK’s Research Excellence Framework), which influences funding allocation (Kwok 2013, p. 493; Konkiel et al. 2016). Private research funders and charities are increasingly paying attention to altmetrics (Dinsmore et al. 2014), though current uses remain limited to informal channels (e.g. as shorthand for exemplary scholarship).

The use of altmetrics has been met with considerable criticism along three main axes. Firstly, it remains unclear what altmetrics are meant to measure, not least because their proprietary algorithms are often black-boxed (Lin et al. 2020, p. 214). Secondly, it remains unclear whether altmetrics actually depart from, rather than duplicate, traditional measurements (Roemer and Borchardt 2015; Haustein et al. 2016; Fortin et al. 2021). Thirdly, scholars anticipate and critically interrogate the risks of institutionalising altmetrics as tools of academic governance (Kaufman-Osborn 2017; Nygaard and Bellanova 2017, p. 25; Lenine and Mörschbächer 2020; Crane and Glozer 2022, p. 807). Building on this literature, we analyse the most prevalent altmetrics indicator—the AAS—to empirically investigate the extent to which gender bias is reproduced in the field of political science.

Methodology

To investigate gendered dynamics in political science altmetrics, we develop an original dataset containing data from Altmetric.com (via the Altmetric Explorer, access to which was granted in May 2020). Altmetric.com, founded in 2011, is a private, for-profit company and the foremost aggregator of altmetrics in the natural and social sciences. For each item in an expanding global database of over 35 million research outputs (journal articles, whitepapers, reports, datasets, etc.; Altmetric.com 2011), Altmetric.com produces a numerical Altmetric Attention Score (AAS). The AAS is visualised as a distinct, colourful “donut”, or summary badge on publishing and journal webpages. While Altmetric.com cautions against using AAS as a proxy for research excellence (Konkiel 2016), high-scoring articles are often featured on journal webpages, raising their visibility.
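Beyond the commercial Altmetric Explorer we used for bulk access, Altmetric.com also offers a free, rate-limited public REST API that returns the attention data for a single DOI. A minimal sketch of retrieving one score (the DOI shown is a generic placeholder, not an item from our dataset):

```python
import requests

def fetch_aas(doi: str) -> float | None:
    """Return the Altmetric Attention Score for a DOI, or None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:   # Altmetric.com has no record for this DOI
        return None
    resp.raise_for_status()
    return resp.json()["score"]   # the numeric AAS shown in the "donut"

print(fetch_aas("10.1000/xyz123"))  # placeholder example DOI
```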

The AAS is an algorithmically-derived measure of the attention that a research item garners as it is shared across online communication channels. The algorithm remains proprietary and non-replicable (Altmetric.com 2020). The AAS approximates the magnitude and types of mentions or references for an item, assigning unique colours to each source captured (e.g. blue for Twitter mentions, red for news outlets, etc.). It considers volume (number of times mentioned), sources (where mentions derive from), and mentioning authors. Sources of attention include reference tools (e.g. Web of Science, Scopus, and Google Scholar), news media, blogging platforms and wikis (e.g. Wikipedia), policy documents, and the public pages of social media platforms (e.g. Facebook and Twitter).
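While the production algorithm is proprietary, Altmetric.com has publicly described the score as, in essence, a weighted count of mentions by source. The sketch below is illustrative only: the weights approximate the defaults Altmetric.com has published, and the real score additionally adjusts for the reach and profile of mentioning accounts.

```python
# Illustrative approximation only -- the production algorithm is a black box.
# Weights approximate Altmetric.com's published defaults: a mainstream news
# story counts far more than a tweet, which counts more than a Facebook post.
SOURCE_WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "wikipedia": 3.0,
    "policy_documents": 3.0,
    "twitter": 1.0,
    "facebook": 0.25,
}

def approximate_aas(mention_counts: dict[str, int]) -> float:
    """Toy stand-in for the AAS: weighted sum of per-source mention counts."""
    return sum(SOURCE_WEIGHTS.get(source, 1.0) * count
               for source, count in mention_counts.items())

print(approximate_aas({"news": 2, "twitter": 40}))  # 2*8 + 40*1 = 56.0
```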

The AAS amalgamates online reactions to research that might be qualitatively different. For instance, it does not distinguish between positive and negative attention, challenging the assumption that a high AAS reflects better-quality research. A telling example is Bruce Gilley’s controversial and later withdrawn Third World Quarterly article, published in 2017. The article, which makes a case for colonialism, garnered a score of 1653 in our dataset, among the highest recorded in the field of political science and derived primarily from negative attention via Twitter. Altmetrics therefore reflect the virality and rapid circulation of attention surrounding a research output. While the visual simplicity of the AAS gives a semblance of precision, neutrality, and utility (Nygaard and Bellanova 2017, p. 33), this can result in an oversimplification of information and context (Gumpenberger et al. 2016, p. 980).

Operationalising gender and the AAS in political science

Our unit of analysis is the research item (i.e. publication). Our full database contains close to thirty million research items published since 2011, identified via digital object identifiers (DOIs). Because it relies on DOIs, the database includes primarily journal articles, in addition to some books, chapters, reports, and other sources. Each research item has an associated AAS, as well as information such as title, journal or source, publication date, author names and affiliations at the time of writing, funding bodies, subject field, and collected mentions across social media platforms and other sources.

Within this dataset, we narrow our focus to items belonging to the subject field “Political Science” (as opposed to other disciplines). Of these, we focus solely on journal articles to preserve comparative consistency (we exclude books, chapters, and other outputs). We then restrict the sample to items published between 2013 and 2019 for two reasons: first, pre-2013 data is less reliable and comparable on the whole, due to changes in social media measurement; second, post-2019 data may be skewed by recency bias. AAS accumulate over time, meaning that very recent scores may not be directly comparable to older ones. Furthermore, the Covid-19 pandemic may have incentivised different patterns of knowledge dissemination. This precludes us from generalising about the dynamics that have driven online research dissemination since 2020, including those that interrelate with existing inequalities, e.g. unequal burden sharing between female and male colleagues.
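In code terms, this sampling frame amounts to a handful of filters over the raw export. A minimal pandas sketch (the file and column names are ours, not Altmetric’s export schema):

```python
import pandas as pd

# Hypothetical flat export of the full database pull.
raw = pd.read_csv("altmetric_export.csv")

sample = raw[
    (raw["subject_field"] == "Political Science")   # field filter
    & (raw["output_type"] == "journal article")     # drop books, chapters, etc.
    & raw["pub_year"].between(2013, 2019)           # pre-2013 unreliable, post-2019 recency-skewed
].copy()
```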

In addition to the publication time frame, we further ensure the comparability of data by excluding non-peer-reviewed publications. We limit our dataset to items published in one of 67 top peer-reviewed political science journals, selected based on SCImago journal ranking scores in 2020 (accessible via https://www.scimagojr.com/). This excludes items published through preprint servers such as the Social Science Research Network or ResearchGate. This is not to suggest that these publication types lack quality or are insignificant in online knowledge ecosystems (in some cases such publications have very high AAS).

We supplement each included research item with information on author gender. First, we coded the data with the help of genderize.io, an open-access API that predicts gender from first names (Wais 2018). Second, we manually validated the genderize.io results, correcting where necessary. We preserve the sequence in which authors appear, coding for gender by designating authors as “male” or “female” based on full names as well as (where available) institutional/online profiles at the time of publication. These parameters produced a final dataset consisting of 6,856 coded research items.
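The automated first pass of this coding can be reproduced against genderize.io’s public API (we accessed it via the package described in Wais 2018; the sketch below queries the underlying web API directly, and the 0.9 confidence threshold for flagging manual validation is a hypothetical illustration):

```python
import requests

def predict_gender(first_name: str) -> tuple[str | None, float]:
    """Query genderize.io; returns (predicted gender, probability)."""
    resp = requests.get("https://api.genderize.io",
                        params={"name": first_name}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # gender is None for names the service cannot classify
    return data.get("gender"), data.get("probability") or 0.0

gender, probability = predict_gender("Alice")
if probability < 0.9:   # hypothetical cut-off for low-confidence predictions
    print("flag for manual validation")
```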

Importantly, we use the term “gender” rather than “sex” in this analysis to denote categorisations of male/female authorship, allowing us to relate our findings to gender roles and norms in academia. We recognise that disaggregating by male/female is problematic because it excludes non-binary identities and can result in assigning gender/sex erroneously (Brooke 2021, p. 2096). While this approach enables a viable “first cut” into the dataset, it overlooks important dimensions and intersections of variation, including more nuanced understandings of gender in academia, and may thereby perpetuate social inequality (Westbrook and Saperstein 2015).

Results

Do AAS vary by gender?

In the following, we provide descriptive statistics to understand how altmetrics vary by gender. Figure 1 depicts AAS by author gender, disaggregated into female-authored, male-authored, and mixed-gender-authored publications. In absolute terms, there are roughly three times as many exclusively male-authored (62.7%) as exclusively female-authored (20%) research items (17.2% are by mixed-gender teams; Fig. 2).

Fig. 1: Altmetric Attention Score disaggregated by gender

Fig. 2: Distribution of Altmetric Attention Score

Mean AAS is highest for mixed-gender-authored items (Fig. 2). Exclusively female-authored research generates, on average, the lowest AAS (19.23), compared to exclusively male-authored (24.49) and mixed-gender-authored research (30.54). Publications authored exclusively by men thus have, on average, a 27% higher AAS than those authored exclusively by women. Yet median AAS reveal that while mixed-gender-authored items generate the most attention, female-authored publications surpass male-authored ones. This indicates that the higher male average is driven by a small number of high-scoring outliers. If we exclude such outliers, female authors garner higher AAS than their male counterparts.
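The mean/median contrast is straightforward to reproduce. A minimal sketch, assuming the filtered sample from above with our gender coding stored in an author_gender column (column names ours):

```python
# Mean vs. median AAS per authorship category. A mean far above the median
# in the male-authored group signals a heavy right tail of outliers.
stats = (
    sample.groupby("author_gender")["aas"]   # "female", "male", "mixed"
          .agg(["mean", "median", "count"])
)
print(stats)
```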

Figure 3 demonstrates the effect of such outliers by applying a log transformation, which compresses the distribution by rendering multiplicative differences as additive ones, allowing a closer view of the underlying pattern.

Fig. 3: Effects of male-author outliers (axis log-transformed to condense scale)
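Because the scores are heavily right-skewed and include zeros, the transformation behind Figure 3 can be approximated with log(1 + x), which keeps zero-score items defined. A minimal sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

# log(1 + AAS) compresses the long right tail so multiplicative differences
# appear as additive shifts; zero scores map to zero rather than -infinity.
sample = sample.assign(log_aas=np.log1p(sample["aas"]))
sample.boxplot(column="log_aas", by="author_gender")
plt.show()
```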

We find that the “viral hits” of political science research are dominated by male authors: of the top 100 highest-scoring publications, 67 are exclusively male-authored, while only 7 are exclusively female-authored and 26 are authored by mixed-gender teams. Figure 4 shows similar patterns among the top 50 highest-scoring publications, where “virality” skews male.

Fig. 4: Top 50 highest-scoring publications, authors, and journals

The gendered patterns visible among top scores are also present for publications with an AAS of 0, i.e. those garnering no online attention. Just as with top scores, zero scores are dominated by male-authored publications (Fig. 5). More precisely, the share of male-authored publications with an AAS of 0 (67%) mirrors their share of the top 100 (67%, compared to 62.7% overall). Given that female-authored publications make up 15% and mixed-gender authorship 16% of zero-score items (20% and 17.2%, respectively, in the overall dataset), this implies that male-authored items are more widely dispersed across the distribution, while female-authored items are more closely centred around the mean.
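The over-representation of male-authored items at zero can be checked directly by comparing each group’s share among zero-score items with its share of the full sample. A minimal sketch:

```python
# Positive differences indicate over-representation among zero-score items.
zero_share = sample.loc[sample["aas"] == 0, "author_gender"].value_counts(normalize=True)
full_share = sample["author_gender"].value_counts(normalize=True)
print((zero_share - full_share).round(3))
```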

Fig. 5: Distribution of Altmetric Attention Score ‘0’

Finally, we observe temporal patterns to understand whether these trends are reproduced over time. Figure 6 shows year-on-year AAS by author gender. Over time, mixed-gender teams tend to capture the highest average scores, reproducing the above findings. Consistently across all years, exclusively female-authored items tend to have the lowest average AAS. Overall, AAS across all categories increase over time. While two items may have the same AAS, the relative weight of that score vis-à-vis others is meaningful in relation to publication year: older items tend to have lower average AAS. This likely reflects rising levels of online attention and dissemination, e.g. the growing use of academic social media over the period under investigation. For instance, though Twitter emerged in 2006, “academic Twitter” as a tool for scholarly networking, dissemination, and outreach only began to gain traction around 2013 (Thelwall 2013; Lupton 2014; Mohammadi 2018). In sum, our dataset reveals a relationship between AAS and gender, whereby male-authored scholarship dominates both the highest and the lowest scores, while female-authored scholarship performs better outside these extremities.

Fig. 6: Gendered distribution of Altmetric Attention Scores over time

To sum up the initial findings related to our first question, AAS seem to reflect the gendered practices and norms that also organise academic research and scholarship in political science more broadly. These dynamics represent existing disciplinary patterns that are reproduced online, as well as dynamics unique to online spaces (e.g. social media sharing, exposure, and reactions). Consider the finding above that male-authored publications dominate the highest and lowest scores, while female authors do better along the median, outside the extremes. This gendered dynamic aligns with literature showing that the gender citation gap is driven by male-dominated publications at the top of the distribution (Zigerell 2015).

High AAS for male-authored research may reproduce the outsized influence that seniority can have in disciplinary and wider political networks: perceived “superstars” in the discipline are quite often established male scholars, as the profession, especially in its higher rungs, remains relatively homogenous and slow to change (Tolleson-Rinehart and Carroll 2006, p. 511; Maliniak et al. 2008, p. 122; Pflaeger Young et al. 2021). Though women now outnumber men among university entrants and in political science degrees up to PhD level, the academic career ladder through to full professor continues to fail them, and their experience of disciplinary spaces remains substantially different from that of their male colleagues (Alper 1993; Tolleson-Rinehart and Carroll 2006, pp. 510–511; Østby et al. 2013, p. 493; Beaulieu et al. 2017, p. 779; Ray 2018). While an increasingly diverse academic Twittersphere (e.g. networks like #WomenAlsoKnowStuff) corroborates the picture of a slowly changing field, virality in online research dissemination continues to evade women. And while we cannot assess the type and content of the online engagement female academics experience, it is likely to differ substantially from that of their male colleagues (Barlow and Awan 2016).

Senior scholars not only command larger audiences, and therefore garner more attention through social and other online channels; their research also attracts “mentioning up” dynamics (Bisbee et al. 2020). In turn, measurement techniques that reflect male academic “superstardom” or success through “virality” can work to reinforce a research environment (whether through funding, opportunities, career progression, etc.) that privileges (the visibility of) male scholarship. Indeed, a publication’s social popularity can easily be influenced by factors other than its content (Zhang and Wang 2021). Reifying these gendered dynamics through a numerical ranking, represented visually as a neutral indicator of impact, risks producing gendered effects of its own. For example, extremes and outliers may be recalled better and more often than averages (following Kahneman and Tversky 2000). That “academic superstars” (whether online or offline) are more likely to be male may thus reinforce a belief that male researchers are the best bet a department or funding agency can make if it wants to maximise visibility. Yet these outliers are not representative: most articles that get no traction and garner no visibility are also written by men, while publications by female-led and mixed-gender teams do better on average in terms of online visibility.

How do AAS relate to gendered social media dynamics?

To further interrogate gendered dynamics in the AAS, we investigate the components that constitute it. More specifically, we first unpack the individual components that comprise the score; second, we focus on Twitter trends and other social media dimensions that amplify scores; third, we map changes in AAS against Twitter trends to understand the importance of Twitter in online research dissemination. Figures 7 and 8 deconstruct the AAS into its component parts. Note that while Mendeley (reference-manager readership) and Dimensions (traditional citations, e.g. those also indexed in Web of Science) register high counts, they do not contribute to the calculation of the AAS.

Fig. 7: Sources of online attention, ordered by mean; (*) denotes social media sources

Fig. 8: Altmetric Attention Score and mentions across sources

With respect to the composition of the AAS, two characteristics are notable: first, the overall score relies mainly on mentions from Twitter; second, other sources frequently register very low mention counts or none at all. The absence of mentions from these sources is not evidence of a lack of online engagement. For example, Altmetric.com can only track public Facebook pages, excluding those (like most user-level pages) that are visible only to private audiences. Indeed, focusing on social media sources alone demonstrates that Twitter is by far the most relevant online platform for measuring online attention.
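Twitter’s dominance among the score’s inputs shows up in a simple per-source aggregate. A minimal sketch, assuming one mention-count column per source (column names ours):

```python
# Mean mention counts per source: Twitter dwarfs the rest, and most other
# sources are zero or missing for the majority of items.
source_cols = ["twitter", "news", "blogs", "facebook", "wikipedia", "policy"]
print(sample[source_cols].fillna(0).mean().sort_values(ascending=False))
```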

We examine temporal patterns to investigate whether AAS and Twitter mentions move together year-on-year. Figure 9 suggests that both the average AAS and average Twitter mentions have steadily increased over time (based on date of publication). The ratio between the two shifts, however. Between 2013 and 2016, relatively low average Twitter mention counts coincide with comparatively higher AAS: Twitter had somewhat less influence on the overall score. From 2017, this relationship reverses: higher average Twitter mention counts coincide with continuously increasing, but comparatively lower, average AAS.
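The year-on-year comparison reduces to two grouped averages and their ratio. A minimal sketch (a falling ratio after 2017 would indicate Twitter mentions growing faster than the scores they feed into):

```python
yearly = sample.groupby("pub_year")[["aas", "twitter"]].mean()
yearly["aas_per_tweet"] = yearly["aas"] / yearly["twitter"]
print(yearly.round(2))
```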

Fig. 9: Average Attention Scores compared to average Twitter mentions over time

Finally, we compare the gendered nature of overall AAS and Twitter mentions. Figure 10 suggests that the AAS indeed closely matches the distribution of mentions on Twitter for female-authored, male-authored, and mixed-author teams.

Fig. 10: Altmetric Attention Scores compared to average Twitter mentions, disaggregated by gender

In sum, our findings support existing research finding that Twitter currently plays an outsized role in measurements of online research impact, attention, and dissemination. The results raise questions as to the indicator’s value for researchers; for instance, whether it captures online knowledge exchange or simply reflects social media popularity and follower counts.

Notably, online research dissemination has a recency bias: emerging research is more likely to be tweeted simply because Twitter was not available as a platform before 2006, and because Twitter’s academic user base has grown substantially since. While older research can still be shared and thus accumulate a high AAS (e.g. Alexander Wendt’s 1992 article “Anarchy is what states make of it: The social construction of power politics” was republished online by International Organization in 2009, drawing an AAS of 76), newer work is more likely to garner social media attention, generating higher scores. In turn, we know that the more recent the research output, the more likely it is to be attributed to female authors or co-authored teams, due to trends towards increased co-authorship (Teele and Thelen 2017, pp. 437–439). In combination with the predominance of Twitter in the AAS, this may explain a relative increase in scores for female-authored pieces over time. Whereas in the professional discipline a higher ratio of male scholars, combined with practices of self-citation and citing other men, produces significant gender bias (Kristensen 2018), this may be mediated among more diverse online audiences. For example, female political scientists (and those on the tenure track) are more likely to use Twitter, and younger user populations, compared to the offline professional discipline, might also shift gendered dynamics (Wojcik and Hughes 2019; Bisbee et al. 2020). This would be good news if this online community translated its higher willingness to engage female-authored research into its offline citation and reward practices.

Recent trends towards a further diversification of online disciplinary spaces, e.g. the more widespread use of Mastodon or Substack for research dissemination, but also the possibility that private platforms such as Twitter restrict access to their data, may well change both how altmetrics are calculated and how gendered dynamics play out online. For example, younger (or more politically left, or more junior, or predominantly European, etc.) academics may move to different platforms, which would affect how often female authors are mentioned on Twitter. Given the current outsized importance of Twitter, this would raise questions not only about the AAS algorithm (which would need to be updated) but also about calculating the AAS in general, as data would need to be collected across more platforms, which is computationally intensive. Indeed, should there be substantial differences in engagement between platforms, it may raise doubts as to whether any overall altmetrics score can be meaningfully interpreted at all.

Conclusion

This article introduced the study of altmetrics to political science via an analytical focus on gender. We make three contributions. Firstly, our original gender-coded dataset allows us to augment and refine existing research on the internal workings of altmetrics, and offers multiple avenues for further research. Secondly, our analysis meaningfully complements scholarship on gendered dynamics in political science. The results generated by approaching the dataset from multiple angles offer a comparative baseline for scholars working on structural inequalities. At the same time, our analysis remains preliminary. The dataset and coding process could be expanded to include article abstracts or full texts to complement title length, keywords, and author information. This would provide greater insight into the nature of virality, as well as how online attention, status, subject matter, and gender interrelate (Alter et al. 2020). The analysis could also extend to online mentions themselves, for example, the content and sentiment expressed in tweets about research, which we cannot currently capture. We do not control for factors such as the size of an author’s network (e.g. number of Twitter followers); here again, we may see gendered patterns at work (Flaherty 2019). Similarly, controlling for institutional affiliation, rank, seniority, and language, which may well affect virality but are difficult to reproduce in a dataset, could strengthen the results.

Thirdly, our results indicate how metrics are not detached from, and may indeed reify, existing structural inequalities. The AAS potentially broadens the scope of research impact to include the digital social landscape. Yet there are pitfalls to the expanding range of performance indicators used to assess research productivity and popularity. AAS say little about either the quality of the research or the type of engagement it generates (positive or negative). That altmetrics quantify attention and popularity but do not reflect intellectual labour stands to distort and “metrify” scholarly exchange, and is all the more likely to produce inequality if the indicators are treated as neutral. This works alongside other deeply gendered academic practices such as citations and reading lists (Phull et al. 2019; Alejandro 2021). Indeed, our results question the neutrality of altmetrics in a way that necessitates further analysis of their use and reception beyond this preliminary analysis. As academic careers and funding become tied to measures of productivity and impact, a perverse logic of knowledge production aimed at attention generation takes hold. When seeking to capture research excellence and outreach, focusing on individual AAS risks conflating impact with gendered structural inequality in political science. As with any other dimension of research, ethical considerations around who benefits from indicators and who suffers must be taken into account.

Finally, what should political scientists (and academia more broadly) do with this information? One avenue would be to encourage and support women and junior scholars in navigating online research dissemination and network-building, e.g. via workshops at conferences, in departments, or through professional associations, and to raise awareness of the gendered dynamics of the online discipline. Evidently, while workable, this solution largely ignores the more structural hierarchies at play: the game remains the same, only some people become better at playing it. Another avenue would be to reject the use of altmetrics (and other such indicators) altogether, working instead towards rewarding intellectual labour differently and/or creating more supportive online communities. Here, online networks like #WomenAlsoKnowStuff, aimed at supporting gender equity in citations and in wider dissemination to, e.g. journalists or policymakers, have an important role to play. And yet altmetrics are attractive tools, and here to stay, which means that seeking to overcome them entirely is likely to fail. It strikes us as urgent, therefore, that professional associations, universities, and departments develop formal recommendations, informed by social science methodology, as to what constitute suitable uses (and misuses) of indicators, particularly those that privilege non-inclusive practices like social media sharing and virality. This may concern hiring, career progression, and research assessment. Finally, similar to pedagogical initiatives that aim to tackle gender bias in scholarship, we encourage scholars to reflect critically on how altmetrics and social sharing shape their own research experiences, and to inform students and future scholars of the gendered dynamics inherent in the emerging digital ecosystem that is transforming academic practice today.