The Need for an Ethical Perspective on AI

Artificial Intelligence (AI), i.e., computing machines designed to mimic multiple human intelligences, such as the capabilities to do, to think, and to feel (Huang & Rust, 2018), and able to interpret external data, learn from such data, and use those learnings to achieve specific goals and tasks through flexible adaptation (Kaplan & Haenlein, 2019), has become one of the most popular topics across a variety of academic disciplines, industry sectors, and business functions. AI widely influences, and often amplifies, what happens in society at large (Haenlein & Kaplan, 2020).

Such amplification can go in positive as well as negative directions. On the positive side, AI helps companies spot unethical behavior that previously might have gone unnoticed. For example, firms can use AI to identify implicit racial bias, as in the case of Airbnb, where guests with distinctively African-American names are less likely to have a booking request accepted than guests with more mainstream names (Edelman et al., 2017). On the negative side, companies can use AI for purposes such as employee surveillance. Software such as Status Today can scrutinize staff behavior on a minute-to-minute basis by collecting data on who sends emails to whom at what time, who accesses and edits files, and who meets whom, and it allows firms to compare such activity data with employee performance. Such use of AI raises ethical concerns and alters company-employee relationships.
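To illustrate the statistical reasoning behind such a bias audit, the sketch below compares acceptance rates between two groups of guests with a standard two-proportion z-test. It is a minimal illustration with purely hypothetical counts, not the actual method or data of Edelman et al. (2017).

```python
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Compare two groups' booking-acceptance rates with a pooled z-test."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under H0: equal rates
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts (accepted requests, total requests) per guest-name group
rate_a, rate_b, z, p = two_proportion_z_test(271, 640, 222, 635)
print(f"acceptance: {rate_a:.1%} vs {rate_b:.1%}; z = {z:.2f}, p = {p:.4f}")
```

A statistically significant gap in acceptance rates of this kind is what grounds the claim of implicit bias; in practice, such audits also control for listing, timing, and guest characteristics.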

AI also allows the analysis of customer information at a much more granular level (Kosinski et al., 2013), which opens up the possibility of unethical marketing practices that firms should actively discourage. Dynamic pricing could be pushed to the extreme by using past information to estimate willingness to pay at the individual level (Shartsis, 2019). Impulse buying could be triggered by presenting items that the customer previously touched, or examined closely but did not buy. Firms could target customers who are particularly susceptible to addictive products to boost sales in such categories (e.g., tobacco, alcohol, high-calorie food). Consumers need to trust that firms make good use of their data (Rossi, 2019). If such trust is violated, the consequences can be substantial (Hirschman, 1970; Klein et al., 2004).

Given these concerns, several researchers in the field have called for regulation (Crawford & Calo, 2016; Haenlein & Kaplan, 2019; Kopalle et al., 2022). The ethical regulation of AI, its design, and its possible uses is complex but necessary (Taddeo & Floridi, 2018). However, the regulatory and ethical frameworks needed in a time of rapid AI growth are largely absent (Scherer, 2016). Several solutions have been proposed. For example, Crawford and Calo (2016) advocate a self-governance approach in which AI developers engage in social-systems analysis, carefully considering the multiple possible effects of AI-driven systems on all parties involved. In cases where consumers cannot defend themselves and firms are unwilling to regulate themselves, laws may be imposed by regulators (Kaplan, 2022).

As AI advances from performing routine, repetitive mechanical tasks, to handling analytical, cognitive thinking, to potentially playing an essential role in interactions and communications involving humans (Huang & Rust, 2018; Huang et al., 2019), more dilemmas arise that make such regulation and decision-making difficult (Kaplan & Haenlein, 2020). For example, while AI creates new jobs and opportunities, many jobs are being replaced by increasing AI automation (Huang & Rust, 2018), and new skills (e.g., social skills) are required for humans to remain in the workforce. How to manage AI-human teams thus becomes a challenge for businesses (Huang et al., 2019; Rust & Huang, 2021). There is also a trade-off between data protection and innovation for businesses worldwide: the more (big) data available, the better the AI systems companies can train on it. Therefore, the less regulation on data privacy and security a country has in place, the more competitive it is likely to be on the world scene.

In such an environment, ethical leadership becomes an imperative and should serve as a call to action for the educational system. Teaching ethical behavior in universities and schools is more crucial than ever (Kaplan, 2021), and learning how to work with AI and acquiring social and interpersonal skills form the survival kit for the Feeling Economy, in which thinking AI pushes human workers toward feeling-oriented work (Huang & Rust, 2018; Huang et al., 2019; Rust & Huang, 2021). Potentially high unemployment will challenge societies worldwide and will most likely lead to tensions among different socioeconomic groups within and across countries (Kaplan & Haenlein, 2020). Universities are called upon to offer courses combining artificial intelligence and the humanities, independently of the academic area. Such courses may become as much a part of the core curriculum as mathematics or history (Kaplan, 2021).

Special Issue on Business Ethics in the Era of Artificial Intelligence

Many of the concerns, dilemmas, and questions above are the subjects of this special issue. From an exceptional number of submissions, we selected eleven articles applying various methodological and disciplinary perspectives. Kelley provides a general overview and identifies several components that affect the effective adoption of AI principles in organizations. Toth and colleagues investigate the ethical implications of applying artificial intelligence from a conceptual angle, paying particular attention to the question of accountability. Sullivan and Wamba examine who should be held accountable when AI leads to negative and harmful outcomes. In this context, John-Mathews, Cardon, and Balagué provide a new perspective on AI fairness, laying the groundwork for new models of corporate responsibility. Drawing on moral foundations theory, Telkamp and Anderson theorize that a person will perceive an organization's use of AI as ethical if it resonates with the individual's moral foundations.

Turning to specific areas of business research, in human resources management, Hunkenschroer and Luetge systematically review the existing literature on the ethicality of AI-enabled employee recruiting, showcasing ways to mitigate ethical risks in practice. Sharif and Ghodoosi suggest how blockchain technology could ethically improve current organizational practices. Within the scope of retailing, Giroux et al. examine how individuals behave morally toward AI agents and self-service machines, and Rodgers and Nguyen discuss six dominant algorithmic online purchase decision pathways that align with ethical philosophies. Seele and Schultz conceptually develop a mapping that allows the transfer of existing knowledge on greenwashing to machinewashing. Finally, Ma, Tojib, and Tsarenko analyze the general public's receptiveness toward AI-driven sex robots.

Conclusion

As early as 1993, the Journal of Business Ethics published an article dealing with the ethical concerns of artificial decision-making (Khalil, 1993). A lot has changed in the nearly three decades since then. Responsible Management has received increasing attention in all areas of business research and has broadened the scope of research for faculty around the world (Tsui, 2016). AI has moved into its harvesting season (Haenlein & Kaplan, 2019), with many ethical questions remaining and new ones piling up. To date, research on the ethics of AI still seems to be emerging and scattered across many domains, and thus lacks a coherent theoretical perspective. This makes research in this area all the more important, and advancing it was this special issue's objective.