One Hundred Years of Robotics: Fostering Resilience in Response to Technological Disruption

By JoAnn Oravec

To call for social and personal resilience against the encroachments of robotics and artificial intelligence (AI) may seem like the plot of a science fiction film rather than an urgent practical recommendation.  However, the theme of ubiquitous robotics and AI applications in everyday life has indeed moved from science fiction speculation to real-world implementation.  Deaths, injuries, and major resource damage have resulted from some of these applications, raising serious questions about the responsibility of developers and implementers (De Pagter, 2021; Oravec, 2021).  Disruptions from substantial changes in employment situations have engendered forms of “automation anxiety” as well (Akst, 2013), along with racial and class biases related to the problematic use of AI applications (Noor, 2020).  As robots, autonomous vehicles, and other AI-enhanced entities become larger factors in developed and developing nations, individuals also face questions about what it is to be fully human in settings increasingly framed and controlled by intelligent technologies.  This short essay explores why robotics, autonomous vehicles, and AI are relevant to peace and justice-related research and curricula on a variety of levels, from how individuals are given care, food is delivered, and military explosives are detonated to the philosophical consideration of humanity’s relationship with intelligent technologies.  It proposes some ways that these issues can be introduced to students, workers, and community participants in order to expand the range of voices heard on these matters.

Infusing countless humanlike robotic and AI-enhanced entities into societies that are already strained in terms of equity, human rights, and basic safety presents formidable issues, and some level of peril is to be expected.  In some troubled wartime settings, many individuals have already experienced robot-inflicted terror, as autonomous drones and “killer robots” have affected their basic existence.  Wernli et al. (2021) characterize societal resilience as the “capacity of societies to maintain their core social functions and reduce the social impact of a shock” (p. 1).  The notion of “human resilience” extends these capacities to the defense of essential human functions, including the ability to experience joy and love.  Robotics and AI initiatives can indeed challenge the resilience of societies if they are not carefully implemented.  Maintaining awareness of how intelligent technologies are shaping our lives is increasingly a part of societal and human resilience, on the level of the individual as well as of society as a whole.  Many forces keep individuals from gaining critical distance from technologies; robotics has been associated with positive futuristic advances by marketers and developers, and questioning its value is often construed as being uninformed (Payr, 2019).  Significant cultural differences have also emerged that present challenging complications; some nations are more involved with robotics than others, often reflecting economic and social priorities.  Resilience is indeed needed to maintain human values and support human rights in the face of sociotechnological changes and disruptions; however, this resilience may be hard to come by when the changes that occur are framed as essential or inevitable by those with power and economic clout.
However, many of the critical issues involved in such technological shifts have been displaced in public discourse by other important societal concerns, such as inflation and economic inequality.  

Rather than well-tempered resilience, sabotage against robots or even related violence against other humans can be unfortunate outcomes as technological disruptions leave some individuals with reduced employment and reputational options.  Forms of “robo-rage” are already emerging, with individuals acting out their aggressions and frustrations by attacking autonomous vehicles or delivery robots (Oravec, 2022).  Those who express fears and misgivings about robotics and other technologies are often considered unknowledgeable or even dismissed as “Luddites,” opponents of a supposedly inevitable technological permeation of everyday life (Jones, 2013).  The prospect that “robophobia” would leave individuals less equipped to deal with modern society has resulted in a number of research and training initiatives (Woods, 2022).  Many managerial efforts are devoted to reducing workers’ resistance to dealing with robots, attempting to achieve “optimal synergy” between the two (Libert, Mosconi, & Cadieux, 2020).  Simple narratives that present robots as unproblematic companions of individuals have also emerged (Payr, 2019).  The notion that skilled individuals or university and college students will not need to worry about losing their jobs to automation may dissuade individuals from focusing on the potential ills of technologies; education in STEM (science, technology, engineering, and mathematics) subjects has often been presented as inoculating individuals against the brunt of technological disruption.


One hundred years of robotics

Matters of humanity’s relationship to robotics have been salient for a long while.  Nearly one hundred years ago, the play RUR (Rossum’s Universal Robots) stimulated thinking about the associations between workers and robots (Čapek, 2004).  Automated and mechanical characters were part of fictional treatments and theatrical demonstrations for many centuries before that; “three thousand years of robots” have been documented by historians (Cave & Dihal, 2018).

Long before the RUR treatment of robot issues, the Luddites framed technological change as a central problem in their nineteenth-century labor confrontations (Jones, 2013).  In past decades, vigorous discussions of how automation would affect society blossomed, often triggered by the developers of technological initiatives themselves; Norbert Wiener’s 1954 The Human Use of Human Beings is one example.  Wiener was deeply fearful of the social and ethical implications of the “cybernetics” that he pioneered.  The inventor of the first chatbot, Joseph Weizenbaum, wrote Computer Power and Human Reason (1976), which outlined his reservations about the encroachments of artificial intelligence upon society.  Donna Haraway’s (1987) “manifesto for cyborgs” presented a pioneering perspective on how humans and robots would meld.  A renewal of passionate discourse in which the needs of humanity are outlined and the impacts of robotics and AI projected may indeed generate intense controversies, but it could help ensure that important factors are not overlooked.  Many unforeseen or disregarded negative aspects of robotics and AI applications are emerging as these technologies take on larger roles in automating various aspects of workplaces, homes, and communities.

In the past hundred years, robots have often been associated with terrifying images, with fearsome movie robots and combative BattleBots leaving indelible traces in some cultures.  Creepiness in the realm of robotics and AI has been given a name, the “uncanny valley” phenomenon (Mori, 1970), with individuals often more repelled by robots the closer they come in appearance to human beings.  In recognition of such strong feelings, some researchers have endeavored to make robots more friendly and accommodating, even though these efforts may eventually endanger humans who drop their guard when dealing with potentially lethal industrial or transportation robots.  Campaigns featuring highly positive images of robotics are indeed emerging; for example, marketers of high-tech products and services are already characterizing a future of robotics and automation in which there are few negative dimensions.  However, in order to be resilient against potential disruption, societies need to enable their participants to explore the negative sides of robotics and automation as well as whatever benefits may be provided.  Automation-related disruptions that require substantial retraining and relocation are already occurring in certain occupations.

How can students, workers, and community participants engage in effective discourse about robots and AI?  Teachers, researchers, and community leaders can help individuals avoid trite answers to the question “will a robot replace me?” and frame issues in ways that capture the nuances of these very complex and emerging situations.  For example, in educational contexts, specific examples of robot- and AI-related injuries and fatalities can be integrated into curricula for non-engineering majors as well as for students who study robotics as part of a technologically-oriented academic program.  Reflecting on the biases and stereotypes that AI can reinforce over time can also enlighten students about the social impacts of high technology.  Students should be empowered with historical context and technical background to contemplate these emerging issues; science fiction can also help in fleshing out futuristic scenarios for discussion and debate.  Some developers of robots have utilized design and implementation strategies that emphasize values and reflection (Seibt, Damholdt, & Vestergaard, 2020), providing useful models for how developers can assimilate the interests of their communities into their efforts.  Individuals can produce and share “robot blogs” or diaries that narrate the changes over time in their own perspectives as they explore these technologies or encounter them in the workplace.  Producing these blogs may reveal insights about the future of humanity itself as well as its technological imprints.


Robots as essential workers

Of special interest to the peace and justice studies communities is how robotics is reframing certain kinds of employment, and how the voices of those involved should be heard as these changes take place.  Such commonplace necessities as food preparation and delivery, lawn care, and facilities cleaning have been transformed in many settings through the use of robotics and AI technologies.  The COVID-19 pandemic served to demonstrate the importance of “essential workers” in societies, but it also stimulated many initiatives to replace humans with robots and other intelligent entities such as chatbots and drones.  Autonomous vehicles and complex robotics installations in military operations are also playing larger roles in many venues, often exacerbating societal stresses as the power of “killer robots” to destroy is guided by algorithms rather than by human decency.  The potential for robots apparently to “outclass” and displace humans in job performance, and even in some social interactions, presents psychological as well as economic issues.  With today’s “compulsive robotics,” many individuals are forced to deal with robots and other AI-enhanced entities as a part of their employment or participation in certain organizations, potentially disempowering them and creating a kind of “learned helplessness” in terms of technology.

The choices that educational institutions, manufacturing facilities, and community outreach centers make in terms of technologies deliver strong messages about the future to their participants.  Perceptions of technological inevitability can harden into learned helplessness if individuals are not allowed to question the kinds of technologies they use in their working and playing environments.  Important questions raised by participants concerning the security of robots’ operations and the privacy of the humans involved often go unanswered.  The number and severity of deaths and injuries inflicted by robots and other AI-enabled entities are increasing, fomenting worker fears in many manufacturing, service, and transportation settings (Oravec, 2021, 2022).  Occasional news stories about a robot that breaks a child’s finger at a chess match or about an employee who contends that a particular AI system has become sentient emerge in journalistic channels and social media; science fiction movies with rogue robots are commonplace in theatres and on streaming services.  However, what is often lacking is focused attention on the current state and future potential of robotics and AI research and development, the kind of attention that teachers and community leaders can inspire in their students and their communities.  Critical decisions about robotics and AI implementations are already being made in public policy and legal venues, from courtrooms where liability for robot-involved accidents is being determined to legislative efforts that decide how many tax dollars will be spent on robotics and AI research (Bertolini & Episcopo, 2022).


Some conclusions and reflections: The role of peace and justice studies

Humans have debated the prospects for modern robotics for more than a hundred years, with the play RUR (1923) stimulating controversy even when the field of robotics was in its infancy.  Many of the debates on the extent to which automation should be encouraged by governments have been contentious, with the value of human employment and other activity weighed against whatever projected productivity gains automation might provide.  What kinds of discussions of robotics and other intelligent entities in society will emerge in the coming centuries, and who will initiate this discourse?  Today, the discussion of robotics and automation issues is often displaced by politically-charged rhetoric that bypasses concerns about the appropriateness, safety, and security of robotic implementation as well as the resilience of the societies facing potential disruptions.  Themes of technological inevitability are laced with the positive futuristic images provided by marketers and corporate leaders, many of whom will personally benefit from economic investments in these technologies.

From anti-robot attacks to human-robot marriages, and from “killer robots” to robots that make errors performing surgeries, individuals will soon confront substantial social and ethical challenges concerning robots and AI.  Some of the concerns that individuals have about robotics are anticipatory and speculative, since the impacts of robotics and AI are just emerging; however, these concerns often reflect considerable personal insight along with well-supported economic and social projections.  Peace and justice studies educators, researchers, and practitioners can help to shape public discourse by empowering individuals to think critically about the impacts of technological development and implementation, especially in military and security arenas.  With so many political, economic, and social issues competing for attention, the focused attention of the public on the prospects for robotics and automation may be difficult to maintain, but it is essential to ensuring fair and just societies to come.



Akst, D. (2013). Automation anxiety. The Wilson Quarterly, 37(3), 65-78.

Bertolini, A., & Episcopo, F. (2022). Robots and AI as legal subjects? Disentangling the ontological and functional perspective. Frontiers in Robotics and AI, 9.

Čapek, K. (2004). RUR (Rossum’s universal robots). Penguin.

Cave, S., & Dihal, K. (2018). Ancient dreams of intelligent machines: 3,000 years of robots. Nature, 559(7715), 473-475.

De Pagter, J. (2021). Speculating about robot moral standing: On the constitution of social robots as objects of governance. Frontiers in Robotics and AI, 8.

Haraway, D. (1987). A manifesto for cyborgs: Science, technology, and socialist feminism in the 1980s. Australian Feminist Studies, 2(4), 1-42.

Jones, S. E. (2013). Against technology: From the Luddites to neo-Luddism. New York: Routledge.

Libert, K., Mosconi, E., & Cadieux, N. (2020, January). Human-machine interaction and human resource management perspective for collaborative robotics implementation and adoption. In Proceedings of the 53rd Hawaii International Conference on System Sciences (pp. 533-542).

Mori, M. (1970). The uncanny valley: The original essay by Masahiro Mori. IEEE Spectrum.

Noor, P. (2020). Can we trust AI not to further embed racial bias and prejudice? British Medical Journal, 368.

Oravec, J. A. (2021). Robots as the artificial “other” in the workplace: Death by robot and anti-robot backlash. Change Management, 21(2), 65-78. DOI: 10.18848/2327-798X/CGP/v21i02/65-78

Oravec, J. A. (2022). Good robot, bad robot: Dark and creepy sides of robotics, autonomous vehicles, and AI. New York: Palgrave Macmillan.

Payr, S. (2019). In search of a narrative for human–robot relationships. Cybernetics and Systems, 50(3), 281-299.

Seibt, J., Damholdt, M. F., & Vestergaard, C. (2020). Integrative social robotics, value-driven design, and transdisciplinarity. Interaction Studies, 21(1), 111-144.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: WH Freeman.

Wernli, D., Clausin, M., Antulov-Fantulin, N., et al. (2021). Building a multisystemic understanding of societal resilience to the COVID-19 pandemic. BMJ Global Health, 6. doi:10.1136/bmjgh-2021-006794

Wiener, N. (1954). The human use of human beings: Cybernetics and society. Da Capo Press.

Woods, A. K. (2022). Robophobia. University of Colorado Law Review, 93(1), 51-114.



Jo Ann Oravec (MA, MS, MBA, PhD) is a full professor at the University of Wisconsin at Whitewater. She is also affiliated with the Holtz Center for Science and Technology Studies at UW-Madison. She has written over eighty peer-reviewed articles on computing, peace studies, ethics, public policy, disability studies, and related topics. She is currently working on artificial intelligence and lie detection research. Her publications include Good Robot, Bad Robot: Dark and Creepy Sides of Robotics, Autonomous Vehicles, and AI (Springer) and Virtual Individuals, Virtual Groups: Human Dimensions of Groupware and Computer Networking (Cambridge University Press). Her next book, on the “smart home of the future,” will be published in 2023. Jo Ann was the first chair of the Privacy Council of the State of Wisconsin, the US’s first state-level council on information privacy issues. She was a visiting fellow at Oxford and Cambridge. She can be reached at and