5. Artificial Intelligence, ethics and empathy: How empathic AI applications impact humanity

Linda Aulbach

© 2024 Linda Aulbach, CC BY-NC 4.0 https://doi.org/10.11647/OBP.0423.05

Abstract

The development of Artificial Intelligence (AI) has sparked a huge debate about its impacts on individuals, cultures, societies and the world. Through AI, we can now support, manipulate or even replace humans at a level we have not seen before.

One of the core values of happy and thriving relationships between humans is empathy, and understanding another person’s feelings builds the foundation of human connection. Within the past few years, the field of AI has taken on the challenge of becoming empathic towards humans to create more trust, acceptance and attachment towards its applications. There are now ‘carebots’ with simple empathic chat features, which seem to be ‘nice to have’, but there is also a concerning development in the field of erobotics—the next (empathic) generation of sex robots, made for humans to fall in love with. The increase in emotional capacity within AI brings into focus how good or bad empathy really is. There is a high risk of manipulation of humans on a deep psychological level, yet there is also reason to believe that empathy is necessary to truly reach an ethical ‘gold’ standard. This chapter will examine empathic AI and its ethical issues with a focus on humanity. It will also touch on the question of what happens if AI becomes more human than humans.

Keywords

Artificial intelligence (AI); empathy; humanity; posthumanism; erobotics.

Introduction

The development of Artificial Intelligence (AI) has sparked a huge debate about its potential impacts on individuals, cultures and societies more broadly. Through AI, we can now support, manipulate or even replace humans at levels we have not seen before. AI has already infiltrated various aspects of our daily lives, significantly shaping how we live, work, and engage with the world. With the continuous development of more and better technologies, the effects will intensify and the ethical debates surrounding AI will become even more complex (Popkova & Sergi, 2020; Schwab, 2017).

To date, the evolution of AI has led to the development of increasingly intelligent systems that can analyse, predict, calculate, automate, and perform tasks faster, cheaper, and often more effectively than humans. Consequently, ‘intelligence’ is no longer solely attributed to the human species, challenging the conventional notion of what it means to be human. While ‘hard skills’ were previously highly valued in the labour market, there is now a shift towards promoting ‘soft skills’, interpersonal abilities such as empathy, which give humans an advantage over their computational counterparts. Soft skills include the ability to experience and express emotions as well as creative and innovative thinking. However, AI technologies are already emerging that specifically target these human traits. Applications like DALL-E and Midjourney in the field of visual arts exemplify the potential for machine-driven creative outcomes (Miller, 2019). The field of emotional AI or empathic AI (AIE) is rapidly emerging, presenting both new possibilities and ethical dilemmas.

Definition

AIE attempts to recognise human emotions and to exhibit appropriate empathic responses to them. “Empathy accounts for the naturally occurring subjective experience of similarity between the feelings expressed by self and others without losing sight of whose feelings belong to whom” (Decety & Jackson, 2004, p. 71). Empathy works as an umbrella term for recognising, understanding and expressing emotions—three distinct areas in computer science that must each be developed to create empathic AI (Spezialetti et al., 2020). At the centre of empathy are emotions, a concept for which multiple theoretical models exist; these models differ significantly in how emotional signals are perceived and conveyed, as well as in how emotional data are interpreted and evaluated (Yalçın & DiPaola, 2020). It seems that “there are as many theories of emotions as there are emotion theorists” (Beck, 2015). The concept of emotions lacks consensus in both philosophy and the sciences (Stark & Hoey, 2021).
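
To make this three-part decomposition concrete, the following minimal sketch (in Python) separates recognition, understanding and expression into distinct steps. Every name and the toy keyword heuristic are hypothetical illustrations, not a description of any existing system.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str         # e.g. "sadness", "joy"
    confidence: float  # 0.0 - 1.0

def recognise(utterance: str) -> EmotionEstimate:
    """Recognition: infer an emotion from an input signal (here, text).
    A toy keyword heuristic stands in for a trained recognition model."""
    if any(w in utterance.lower() for w in ("sad", "lonely", "upset")):
        return EmotionEstimate("sadness", 0.8)
    return EmotionEstimate("neutral", 0.5)

def appraise(estimate: EmotionEstimate) -> str:
    """Understanding: decide which empathic strategy fits the estimate."""
    return "comfort" if estimate.label == "sadness" else "acknowledge"

def express(strategy: str) -> str:
    """Expression: render the chosen strategy as an empathic response."""
    responses = {
        "comfort": "I'm sorry you're feeling this way. I'm here with you.",
        "acknowledge": "I see. Tell me more about that.",
    }
    return responses[strategy]

print(express(appraise(recognise("I feel so lonely today"))))
```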

Terms like ‘feelings’, ‘emotions’, ‘empathic’ and ‘affective’ are often used interchangeably in public and even academic discussions (Stark & Hoey, 2021; Shouse, 2005). According to Shouse, these terms reflect different perspectives: feelings are personal and tied to individual experiences, emotions are social in nature, and affects are pre-personal (Shouse, 2005). In contrast, computer science understands human emotions from a physiological perspective, focusing on factors such as changes in heart rate, sweat, skin colour, or other bodily signals that can indicate specific emotions. Consequently, it may not be necessary to gather the same emotional data as in other models of human behaviour. The categorisation of emotions still heavily relies on the specific empathy model used as the foundation (Stark & Hoey, 2021; Bråten, 2007). Regardless of the specific definition, it is clear that AI is taking on the challenge of understanding human emotions and expressing feelings in a way that makes humans think they are being met with empathy.
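
As an illustration of this physiological perspective, the short sketch below maps two bodily signals to a coarse emotion category. The thresholds and labels are invented for demonstration only; real affective-computing systems learn such mappings from labelled sensor data.

```python
from dataclasses import dataclass

@dataclass
class PhysiologicalSample:
    heart_rate_bpm: float       # beats per minute
    skin_conductance_us: float  # microsiemens (sweat response)

def categorise(sample: PhysiologicalSample) -> str:
    """Map raw bodily signals to a coarse arousal-based category.
    The thresholds here are illustrative assumptions, not empirical values."""
    aroused = sample.heart_rate_bpm > 100 or sample.skin_conductance_us > 8.0
    return ("high arousal (e.g. fear, excitement)" if aroused
            else "low arousal (e.g. calm, boredom)")

print(categorise(PhysiologicalSample(heart_rate_bpm=115, skin_conductance_us=9.5)))
```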

Apart from discussing emotions from either computational or psychological perspectives, there is also an ethical and philosophical debate about what it means for AI to have feelings (and not just cause feelings). Rust and Huang claim that “machines are more likely to experience emotions in a machine way, […] it will be just as though machines can ‘experience’ emotions. In other words, machines will pass the emotional Turing test” (Rust & Huang, 2021, p. 161). Whether machines genuinely have emotions or merely simulate them, computational empathy may still impact humans significantly.

Empathic robots

The exploration of human emotions in computer science was pioneered by Rosalind Picard with her book Affective Computing (1997). Picard argued that for computers to possess genuine intelligence and interact naturally with humans, they must be equipped with the ability to recognise, understand and even express emotions. As the groundwork for computational emotion recognition advanced, the field of social robotics also experienced significant growth.

Historically, the field of robotics has primarily focused on industrial and professional service applications such as those found in the automobile and mining industries. However, there has been a notable shift in recent times towards robots designed for human interaction. These personal service robots are gaining popularity, largely due to advancements in AIE (Bartneck et al., 2020). What was once a simple task-oriented robot, designed to improve personal productivity or reduce workloads, has now transformed into a social robot that places greater emphasis on personal interactions and experiences (Bartneck et al., 2020; Vincent et al., 2015).

At the 13th IEEE International Workshop on Robot and Human Interactive Communication in 2004, Bartneck and Forlizzi defined social robots as autonomous or semi-autonomous robots that interact and communicate with humans by adhering to expected behavioural norms. The authors developed a framework for classifying social robots along properties such as form, modality, social norms, autonomy, and interactivity (Bartneck & Forlizzi, 2004).

Park and Whang (2022) expanded Bartneck and Forlizzi’s work, presenting a literature review on empathic AI and proposing a design concept for AIE robots. Their three-level evolution of AIE provides an overview of the current state of the technology and its future trajectory: Type I AIE robots (domain-restricted, limited modality) are already in use; ongoing research and development focuses on Type II (multi-modal but still domain-restricted); and Type III (domain-independent, with intertwined multi-modality) remains a theoretical exploration in academic literature.
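
Read as a classification scheme, the typology can be expressed along the two dimensions named above, domain restriction and modality. The sketch below is an interpretive illustration of Park and Whang’s levels, not their formal definition.

```python
from enum import Enum

class Modality(Enum):
    LIMITED = "limited"          # e.g. text or voice only
    MULTI = "multi"              # several channels (voice, face, gesture)
    INTERTWINED = "intertwined"  # channels fused into one emotional model

def aie_type(domain_restricted: bool, modality: Modality) -> str:
    """Classify an empathic robot into the three AIE levels
    (an interpretive sketch of Park and Whang's typology)."""
    if domain_restricted and modality is Modality.LIMITED:
        return "Type I"    # already in commercial use (e.g. greeter robots)
    if domain_restricted and modality is Modality.MULTI:
        return "Type II"   # current research and development
    if not domain_restricted and modality is Modality.INTERTWINED:
        return "Type III"  # so far a theoretical construct
    return "outside the typology"

print(aie_type(domain_restricted=True, modality=Modality.LIMITED))  # Type I
```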

Currently, many ‘low-level’ (Type I) empathic social robot applications are in commercial and private use in various countries. One notable example is SoftBank Robotics’ semi-humanoid robot ‘Pepper’, which can recognise basic human emotions and is deployed in restaurants, banks, and retail stores worldwide to welcome and entertain visitors. Although its design is relatively simple, resembling a human in terms of its face, upper body, and arms, it is far less intimidating than the humanoid robots portrayed in the dystopian future depicted in the Netflix series Black Mirror. Other robots, with varying degrees of human-like appearance, are utilised in elderly or disability care as well as child education and entertainment. The design and technology of all types of AIE robots are rapidly improving, and as a result, their impact on humans is likely to increase as well.

Relationship between AIE and humans

The field of Human-Robot Interaction (HRI) draws from various disciplines such as robotics, psychology, the social sciences and the humanities (Bartneck et al., 2020; Billard & Grollman, 2012). HRI research aims to understand motivations, expectations, relationships and the impact of robot interactions in order to improve communication processes and enhance the human experience. The media equation theory suggests that humans respond to technologies in a similar manner to how they respond to other humans (Kolling et al., 2016). Building upon this idea, the computers as social actors (CASA) theory specifically focuses on human-to-machine (H2M) interaction, proposing that humans unconsciously treat machines as if they were social entities (Lee & Nass, 2010; Nass & Moon, 2000). Consequently, insights from human-to-human (H2H) relationships are applied to human-to-machine communication, further driving the humanisation of robots (Lee & Nass, 2010; Kolling et al., 2016). The ultimate aim is to make machines more human-like, so that H2M communication becomes similar to H2H interaction, with potentially comparable psychological and social impacts (Bartneck et al., 2020; Nass & Moon, 2000).

Research has already demonstrated that people can form attachments to objects that are significantly less human-like, as they anthropomorphise (attribute human characteristics to) pets and objects (Hermann, 2022). Pet owners form deep emotional bonds with their animals, giving them names and talking to them in human language (Prato-Previde et al., 2022; Lindgren & Öhman, 2019). Owners of AI devices like the virtual assistants Siri or Alexa, which resemble abstract social robots, also develop emotions throughout the lifecycle of these devices (e.g., purchase, use, disposal) and often experience emotional distress if anything happens to them (Hermann, 2022). In early 2023, the chatbot application Replika received a software update that removed its erotic roleplay function, resulting in a massive backlash for the company, as users were left heartbroken and devastated when their AIE companion suddenly ended their emotional relationship (Tong, 2023). The depth of psychological and emotional bonding varies with an individual’s personality and mental health; however, it seems that anyone can (and will) become attached to an AIE robot to some degree once it is in use (Wan & Chen, 2021; Yap & Grisham, 2019). Considering that AIE robots are becoming increasingly human-like and may soon act as equal social actors, any relationship with these applications could potentially impact humans in much the same way that humans impact each other.

This topic, and the broader question of how technology and human-machine relationships impact individuals and humanity, is not new. Numerous media and communication studies have explored human relationships with technology, ranging from early internet use to recent developments in augmented and virtual reality (Rochadiat et al., 2020; Bullock & Colvin, 2017). These studies examined, for instance, the impact of online interactions on offline relationships, documenting changes in sexual activity or the increased number of complaints about a partner’s online behaviour (Cooper et al., 2000; Underwood & Findlay, 2004). The diversity of research approaches and perspectives, and the significant impacts identified, demonstrate both the opportunity and the necessity of interdisciplinary exploration in the emerging field of AIE.

The promises and challenges of AIE

While AIE is already a significant part of ethical debates, it is useful to consider why empathic AI applications are being developed and, if done correctly, how they can positively impact humanity. AI that can show empathy promises an enhanced user experience by dynamically adjusting to individual emotional states, fostering personalised interactions that heighten customer satisfaction and engagement (Rust & Huang, 2021). Unlike human agents, empathic AI remains consistently attuned to emotions without fatigue, bias or fluctuations in mood, ensuring that every customer interaction is handled with the same level of care and attention. Customer service already benefits greatly from AIE, as seen in the adoption of applications such as the aforementioned social robot ‘Pepper’, which is used as a receptionist in offices around the world.

The service industry is not the only beneficiary: empathy is also going to be a crucial element in AI applications for other fields, for example, education and health and elderly care (McStay, 2018). Empathic AI applications can revolutionise learning by adapting educational content and approach to each student’s emotional state and learning style, enabling more personalised education and offering promising solutions for children with additional needs (McStay, 2020). Incorporating AIE into assistive technology can ease the inclusion of individuals with disabilities or special needs, offering emotional support and assisting with daily tasks in a way that promotes independence and increases quality of life. Empathic AI applications can also serve as companions for humans dealing with mental health issues or loneliness (Potamianos & Narayanan, 2020), and AI in general is expected to have a hugely positive impact on the healthcare system, as it can greatly assist in analysing and monitoring health conditions (Topol, 2019). Beyond personal assistance, AIE can support professional settings where humans and robots collaborate: enhancing machines with AIE fosters more positive interaction, which may lead to more effective and efficient teamwork between humans and robots (Lyons et al., 2021; McStay, 2018).

However, all of these opportunities become risks if developed and applied unethically. Customer service could become manipulative; education and healthcare applications could harm vulnerable people. While the threat of job loss due to AI is not new, emotional AI heightens the threat to roles that rely heavily on empathy and human connection, making the displacement of workers in customer-facing or care industries even more profound. Additionally, attachment to AIE applications may have a detrimental impact on individuals’ mental health, or even spark an existential crisis for humanity if humans come to prefer robotic sex over human reproduction. If not carefully designed and implemented, AIE could also exacerbate racism and discrimination by perpetuating biases in data and algorithms and by failing to recognise emotions accurately across diverse populations (Johnson, 2006).
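
One way to make the last of these risks concrete and testable is to audit an emotion recogniser’s accuracy separately for each demographic group. The sketch below is a minimal illustration of such an audit; the data, group names and disparity threshold are invented for demonstration and do not reflect any real system.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation data: (demographic group, true emotion, prediction).
records = [
    ("group_a", "joy", "joy"), ("group_a", "anger", "anger"),
    ("group_a", "sadness", "sadness"), ("group_a", "joy", "joy"),
    ("group_b", "joy", "neutral"), ("group_b", "anger", "anger"),
    ("group_b", "sadness", "neutral"), ("group_b", "joy", "joy"),
]

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
print(scores, f"accuracy gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a recognised standard
    print("Warning: recognition accuracy differs markedly across groups.")
```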

The ethical dimensions of emotional AI are closely entwined with broader concerns about privacy, fairness, transparency, and accountability in AI technology (Powers, 2012). Emotional data, being among the most intimate aspects of the human experience, magnify privacy concerns and underscore the necessity for robust data protection measures and informed consent protocols. Moreover, the potential for emotional manipulation and harm presents significant risks that warrant careful assessment.
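
As a minimal sketch of what an informed-consent protocol around emotional data could look like in software, the following hypothetical store refuses to record emotion estimates for users who have not opted in and deletes their data upon revocation. The class and method names are assumptions for illustration, not an existing API.

```python
class ConsentError(PermissionError):
    pass

class EmotionalDataStore:
    """Stores emotion estimates only for users who have explicitly opted in.
    A hypothetical sketch of consent gating and data minimisation."""

    def __init__(self):
        self._consented = set()
        self._records = {}

    def grant_consent(self, user_id: str) -> None:
        self._consented.add(user_id)

    def revoke_consent(self, user_id: str) -> None:
        # Revocation also deletes what was collected (data minimisation).
        self._consented.discard(user_id)
        self._records.pop(user_id, None)

    def record(self, user_id: str, emotion: str) -> None:
        if user_id not in self._consented:
            raise ConsentError(f"No informed consent on file for {user_id!r}")
        self._records.setdefault(user_id, []).append(emotion)

store = EmotionalDataStore()
store.grant_consent("user_1")
store.record("user_1", "sadness")   # allowed: consent on file
try:
    store.record("user_2", "joy")   # rejected: no consent
except ConsentError as e:
    print(e)
```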

As emotional AI technologies develop a greater resemblance to humans, the ease with which individuals may form relationships with AI robots increases. However, these emotional bonds carry the potential for negative impacts on mental health, akin to the devastating effects that can arise from human relationships. This complex landscape emphasises the importance of ongoing research to understand the impact of emotional AI and the development of appropriate regulatory frameworks to address ethical concerns (Park & Whang, 2022; Bartneck et al., 2020; Nass & Moon, 2000).

The approach to ethical empathic AI also includes a philosophical discussion, as AIE may challenge the way humanity defines itself. If intelligent agents take over not only physical tasks and abilities but are capable of reading and expressing emotions, the definition of what it means to be human is called into question. This is at the core of the posthumanism discourse, which critically examines the concepts of human identity and existence (Braidotti, 2019; Nimmo et al., 2020; Ferrando, 2019). Delving deeper into the “crisis of the human” (Ferrando, 2019) and the ethical implications of AIE, one may find valuable insights by exploring the most sophisticated version of empathic AI applications: sex robots. These robots go far beyond physical construction, as AIE now allows an emotional component to be added to what is already an advanced compilation of human-like appearance and simulated human movement. The emotional capabilities render the name ‘sex robots’ obsolete; in its place, a discipline called ‘erobotics’ has emerged (Dube, 2021), shifting the focus onto ‘eros’ (love) and the many questions relating to humanity and its emotions. Imagine a user of an erobot developing feelings for it and treating it (or her/him?) as a companion or even officially as a partner. What psychological effects may arise? How are human-to-human relationships impacted? What does it mean for the value of sex, love and intimacy within society? What other effects does it have on humanity? These questions and examples only scratch the surface of emergent debates in the field of erobotics (Sullins, 2021; Danaher & McArthur, 2017; Devlin, 2018) and exemplify the ethical discussion surrounding AIE. Here, the Digital Humanities can provide valuable insights into the socio-cultural impacts, ethical considerations, and humanistic perspectives bearing on the development and deployment of AIE technology, particularly within the context of intimate human-robot interactions and the evolving dynamics of human-technology relationships in general.

The impact on both micro and macro aspects of society could be substantial. Ethicists have already voiced their concerns about erobots increasing the objectification of women, the potential use of child-bots in relation to paedophilia, and the possible problems and pressures on long-term relationships in relation to the potential of erobots always providing “what one desires” (González-González et al., 2021; Zhou & Fischer, 2019). These concerns fuel the emerging need to program consent into a sex robot—another example of a complex multilayered discussion that involves questions about how much power we should retain or give to AI. However, the use of such applications may have therapeutic benefits or could lead to a decrease in human trafficking and exploitation (Belk, 2022; Sullins, 2012). Research in this area is ongoing and evidence can be found for both the promise and the challenges of AIE. With fast-evolving technology, the ethical debate and the quick and ongoing development of guidelines and policies are necessary to ensure an ethical deployment of empathic AI systems.

Incorporating empathy

As noted, there is a lack of consensus about how to define emotions or empathy. Empathy has the potential to introduce a more individualistic approach to ethics (Nallur & Finlay, 2022). While the effects of AI are often discussed at a societal level (‘ethics in the large’), a focus on the individual level (‘ethics in the small’) is seen as necessary, and AIE could enable this approach. With the capabilities of emotion recognition software, even large-scale applications can now take individual circumstances into account. This opportunity to incorporate empathy into AI applications empowers developers not only to avoid harm but also to assess the specific needs of each individual interacting with these systems. Multiple studies suggest that the inclusion of empathy in AI systems is a crucial factor for universal ethical AI (Srinivasan & Gonzales, 2021; Batista, 2021; Damiano, Dumouchel & Lehmann, 2015). Affective computing might serve as the “key to a human-friendly singularity” if AI reaches the level of singularity in the future (Hanson Robotics, 2022). Thus, empathy seems to be necessary to reach an ethical ‘gold’ standard.

There is growing recognition of the value of empathy and improved mental health in society. This shift signifies a new perspective that considers emotions as complementary to rationality, influencing both prudential and ethical decision-making processes (Nallur & Finlay, 2022). In the traditional legal sphere, however, emotions have often been disregarded and invalidated in favour of the perceived rationality associated with the Rule of Law (Henderson, 1987). Henderson advocates for the integration of empathy into legal practice, asserting that a more comprehensive understanding of situations necessitates acknowledging their emotional dimension, which ultimately leads to more informed and improved decision-making processes. However, incorporating empathy into the realm of legal practice is complex and raises concerns about potential wrongdoing or exploitation due to cultural, individual or confirmation biases. This suggests that empathy, if taken into account when determining legal outcomes, could potentially yield harmful results.

As described earlier, there are various definitions of emotions and empathy, which make it incredibly hard, if not impossible, to create a ‘gold’ standard. Prinz (2011) discusses AI and emotions and presents a theory that addresses the “problem of parts” and the “problem of plenty”. The former refers to the challenge of selecting the necessary components for detecting emotions in a specific context, while the latter pertains to how these components interact with each other. The fragmentation and interconnectedness of these components have given rise to multiple definitions of emotions. The absence of consensus regarding the precise nature of emotions and the true essence of empathy poses a significant obstacle in formulating universal guidelines, whether for the practice of law or the AI(E) industry.

As AI becomes increasingly capable of demonstrating empathy, the ethical considerations surrounding its use are struggling to keep up. McStay and Pavliscak offer emotion-specific ethics guidelines, calling upon practitioners to take action in their daily lives after considering certain ethics-related questions for their product or project, rather than providing a new standard similar to other ethical guidelines (McStay & Pavliscak, 2019). This proposal, while still vague, creates space for individuality and therefore manifests the core principles of an ethics of care. This normative ethical theory revolves around the individual, holding that generalised standards are “morally problematic, since [they] breed moral blindness or indifference” (Gilligan in Bailey & Cuomo, 2008). However, there is still no official guideline that incorporates empathy. This leads to the question: what happens if AI becomes more human than humans?

Conclusion: AI(E) and humanity

Machines have already caused a significant transformation in what was once called the “physical economy”, an era dominated by mechanical tasks during the Industrial Revolution of the 19th century. This transformation shifted the economy towards a more cognitive approach, often referred to as the “thinking economy” (Rust & Huang, 2021).

The emergence of intelligent technology led to ongoing debates about the very definition of ‘intelligence’, as there was a constant quest to identify characteristics exclusive to humans. With AI, even tasks such as solving complex calculations ceased to be classified as ‘intelligent’. Rust and Huang claim that we have now moved into a “feeling economy” (Rust & Huang, 2021), which is yet again challenged by the emergence of AIE. The aim in developing intelligent technologies is to make them perfect, allowing no mistakes, as mistakes would be deemed unethical. This also means that, at some point, AIE may be ‘better’ or more empathetic towards someone than a human would be, as tiredness, lack of concentration or any other human factor might inhibit a human’s ability to detect someone else’s feelings. If empathy is currently the distinguishing factor between humans and machines, it may soon be time to find a new characteristic of being human. With that in mind, the question of whether AI will ever become ‘more human’ is impossible to answer, as the definition of such is constantly changing and may always be just one step away. Additionally, the pursuit of AI perfection might drive humans to explore their own progression into ‘more-human humans’, engineering and augmenting themselves towards perfection to bridge the gap between humans and flawless AI systems. All of this is currently speculative; however, these hypothetical scenarios necessitate ethical discussions and regulations to navigate the potential implications effectively. Integrating humanistic perspectives into the design and development of AIE technologies is essential for ensuring that these systems align with human values, needs and experiences. Digital humanities scholars can contribute insights from the humanities and social sciences to inform the design, evaluation, and critique of affective AI systems, fostering more ethically and culturally sensitive approaches to technology development.

Ultimately, the notion of AI ever becoming ‘more human’ remains elusive, as the definition of humanity constantly evolves. It seems, however, that empathy is the key both to ethical AI and to humanity itself, for which it becomes increasingly important to focus on emotional abilities. Embracing this empathic humanity in the age of empathic AI applications involves leveraging such applications as catalysts for self-reflection, self-exploration and a redefinition of what it means to be human, so that we can ensure we stay ‘more human’ than any simulation of humans.

Works Cited

Bartneck, C., Belpaeme, T., Eyssel, F., Kanda, T., Keijsers, M., & Šabanović, S. (2020). Human-robot interaction: An introduction. Cambridge University Press.

Bartneck, C. & Forlizzi, J. (2004). A design-centred framework for social human-robot interaction. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759), pp. 591–594. https://doi.org/10.1109/ROMAN.2004.1374827

Beck, J. (2015, February 4). Hard feelings: science’s struggle to define emotions. The Atlantic. https://www.theatlantic.com/health/archive/2015/02/hard-feelings-sciences-struggle-to-define-emotions/385711/

Belk, R. (2022). Artificial emotions and love and sex doll service workers. Journal of Service Research. https://doi.org/10.1177/10946705211063692

Billard, A., & Grollman, D. (2012). Human-Robot Interaction. In N. Seel (Ed.). Encyclopedia of the Sciences of Learning. Springer. https://doi.org/10.1007/978-1-4419-1428-6_760

Bisconti, P. (2021). Will sexual robots modify human relationships? A psychological approach to reframe the symbolic argument. Advanced Robotics 35(9), 561–571. https://doi.org/10.1080/01691864.2021.1886167

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press, Incorporated. https://doi.org/10.1007/s11023-015-9377-7

Braidotti, R. (2019). Posthuman Knowledge. Polity Press.

Bråten, S. (2007). On being moved: From mirror neurons to empathy. John Benjamins Publishing Company. https://doi.org/10.1075/aicr.68

Bullock, A.N., & Colvin, A.D. (2017). Technology, human relationships and human interaction. Social Work. https://doi.org/10.1093/obo/9780195389678-0249

Cooper, A., McLoughlin, I.P., & Campbell, K.M. (2000). Sexuality in cyberspace: Update for the 21st century. Cyberpsychology & Behavior 3, 521–536. https://doi.org/10.1089/109493100420142

Danaher, J., & McArthur, N. (2017). Robot Sex: Social and Ethical Implications. MIT Press.

Decety, J., & Jackson, P.L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews 3(2), 71–100. https://doi.org/10.1177/1534582304267187

Devlin, K. (2018). Turned On: Science, Sex and Robots. Bloomsbury Sigma.

Ferrando, F. (2019). Philosophical Posthumanism. Bloomsbury Academic.

Kolling, T., Baisch, S., Schall, A., Selic, S., Rühl, S., Kim, Z., Rossberg, H., Klein, B., Pantel, J., Oswald, F. & Knopf, M. (2016). What is emotional about emotional robotics? In S.Y. Tettegah & Y.E. Garcia (Eds). Emotions, Technology and Health. Academic Press. (pp. 85–103). https://doi.org/10.1016/B978-0-12-801737-1.00005-6

González-González, C.S., Gil-Iranzo, R.M., & Paderewski-Rodríguez, P. (2021). Human-robot interaction and sexbots: A systematic literature review. Sensors (Basel) 21(1), 216. https://doi.org/10.3390/s21010216

Korotayev, A., & LePoire, D. (2020). The 21st Century Singularity and Global Futures. Springer. https://doi.org/10.1007/978-3-030-33730-8

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin Books. https://doi.org/10.1057/9781137349088_26

Johnson, D.G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology 8(4), 195–204. https://doi.org/10.1007/s10676-006-9111-5

Lee, J.-E., & Nass, C. (2010). Trust in computers: The computers-are-social-actors (CASA) paradigm and trustworthiness perception in human-computer communication. In D. Latusek & A. Gerbasi (Eds). Trust and Technology in a Ubiquitous Modern Environment: Theoretical and Methodological Perspectives. IGI Global. (pp. 1–15). https://doi.org/10.4018/978-1-61520-901-9.ch001

Lee, S.-K., Kavya, P., & Lasser, S.C. (2021). Social interactions and relationships with an intelligent virtual agent. International Journal of Human-Computer Studies 150. https://doi.org/10.1016/j.ijhcs.2021.102608

Lindgren, N., & Öhman, J. (2019). A posthuman approach to human-animal relationships: Advocating critical pluralism. Environmental Education Research 25(8), 1200–1215. https://doi.org/10.1080/13504622.2018.1450848

Lunceford, B. (2018). Love, Emotion and the singularity. Information (Basel) 9(9), 221. https://doi.org/10.3390/info9090221

Lyons, J.B., Sycara, K., Lewis, M., & Capiola, A. (2021). Human-autonomy teaming: Definitions, debates, and directions. Frontiers in Psychology 12, 589585–589585. https://doi.org/10.3389/fpsyg.2021.589585

McStay, A. (2018). Emotional AI: The Rise of Empathic Media. SAGE. http://digital.casalini.it/9781526451323

McStay, A. (2020). Emotional AI and EdTech: Serving the public good? Journal of Educational Media: The Journal of the Educational Television Association 45(3), 270–283. https://doi.org/10.1080/17439884.2020.1686016

Miller, A.I. (2019). The Artist in the Machine: The World of AI-powered Creativity. MIT Press. https://doi.org/10.7551/mitpress/11585.001.0001

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153

Nimmo, R., Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J.W., & Williams, R.A. (2020). Posthumanism. SAGE Publications Ltd.

Park, S., & Whang, M. (2022). Empathy in human-robot interaction: Designing for social robots. International Journal of Environmental Research and Public Health 19(3), 1889. https://doi.org/10.3390/ijerph19031889

Picard, R. (1997). Affective Computing. MIT Press.

Popkova, E., & Sergi, B.S. (2020). Scientific and Technical Revolution: Yesterday, Today and Tomorrow. Springer.

Potamianos, A., & Narayanan, S. (2020). Why emotion AI is the key to mental health treatment. The Data Warehousing Institute. https://tdwi.org/articles/2020/04/07/adv-all-why-emotion-ai-key-to-mental-health-treatment.aspx

Prato-Previde, E., Basso Ricci, E., & Colombo, E.S. (2022). The complexity of the human-animal bond: Empathy, attachment and anthropomorphism in human-animal relationships and animal hoarding. Animals (Basel) 12(20). https://doi.org/10.3390/ani12202835

Prinz, J. (2011). Against empathy. The Southern Journal of Philosophy 49(1), 214–233. https://doi.org/10.1111/j.2041-6962.2011.00069.x

Rochadiat, A., Tong, S., & Corriero, E. (2020). Intimacy in the app age: Romantic relationships and mobile technology. In R. Ling, L. Fortunati, G. Goggin, S. Lim, & Y. Li. (2020). The Oxford Handbook of Mobile Communication and Society. Oxford Academic. https://doi.org/10.1093/oxfordhb/9780190864385.001.0001

Rust, R., & Huang, M.-H. (2021). The feeling economy: How Artificial Intelligence is creating the era of empathy. Springer. https://doi.org/10.1007/978-3-030-52977-2

Schwab, K. (2017). The Fourth Industrial Revolution. Portfolio Penguin.

Shouse, E. (2005). Feeling, emotion, affect. M/C Journal 8(6). https://doi.org/10.5204/mcj.2443

Spezialetti, M., Placidi, G., & Rossi, S. (2020). Emotion recognition for human-robot interaction: Recent advances and future perspectives. Frontiers in Robotics and AI 7. https://doi.org/10.3389/frobt.2020.532279

Stark, L., & Hoey, J. (2021). The ethics of emotion in artificial intelligence systems. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445939

Sullins, J.P. (2012). Robots, love, and sex: The ethics of building a love machine. IEEE Transactions on Affective Computing 3(4), 398–409. https://doi.org/10.1109/T-AFFC.2012.31

Thomas, J.C., & Thomas, J. (2016). Turing’s nightmares: Multiple scenarios of the Singularity. CreateSpace Independent Publishing Platform.

Tong, A. (2023, March 26). AI company restores erotic role play after backlash from users ‘married’ to their bots. Sydney Morning Herald. https://www.smh.com.au/world/north-america/ai-company-restores-erotic-roleplay-after-backlash-from-users-married-to-their-bots-20230326-p5cvao.html

Topol, E.J. (2019). Deep medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.

Underwood, H., & Findlay, B. (2004). Internet relationships and their impact on primary relationships. Behaviour Change 21, 127–140. https://doi.org/10.1375/bech.21.2.127.55422

Vincent, J., Taipale, S., Sapio, B., Lugano, G., & Fortunati, L. (2015). Social Robots from a Human Perspective. Springer International Publishing AG.

Wan, E., & Chen, R.P. (2021). Anthropomorphism and object attachment. Current Opinion in Psychology 39, 88–93. https://doi.org/10.1016/j.copsyc.2020.08.009

Yap, K., & Grisham, J.R. (2019). Unpacking the construct of emotional attachment to objects and its association with hoarding symptoms. Journal of Behavioral Addictions 8(2), 249–258. https://doi.org/10.1556/2006.8.2019.15

Yalçın, Ö.N., & DiPaola, S. (2020). Modelling empathy: Building a link between affective and cognitive processes. Artificial Intelligence Review 53, 2983–3006. https://doi.org/10.1007/s10462-019-09753-0

Zhou, Y., & Fischer, M.H. (2019). Intimate relationships with humanoid robots: Exploring human sexuality in the twenty-first century. In Y. Zhou & M. Fischer (Eds). AI Love You. Springer, Cham.
