An Award for AI Devoted to the Social Good

The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning have announced the establishment of a new one-million-dollar annual award for the societal benefits of AI. According to an AAAI press release, the award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. "This new international award will recognize significant contributions in the field of artificial intelligence with profound societal impact that have generated otherwise unattainable value for humanity. The award nomination and selection process will be designed by a committee led by AAAI that will include representatives from international organizations with relevant expertise that will be designated by Squirrel AI Learning." (AAAI Press Release, 28 May 2019) The AAAI Spring Symposia have repeatedly been devoted to social good, also from the perspective of machine ethics. Further information via

Fig.: An award for AI

Moral Competence for Social Robots

At the end of 2018, the article entitled "Learning How to Behave: Moral Competence for Social Robots" by Bertram F. Malle and Matthias Scheutz was published in the "Handbuch Maschinenethik" ("Handbook Machine Ethics") (ed.: Oliver Bendel). An excerpt from the abstract: "We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot's moral competence." The authors propose "that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)". "A robot's computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others' violations and explains one's own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots." (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on

Roboethics as Topical Issue in Paladyn Journal

In 2018, Paladyn Journal of Behavioral Robotics published several articles on robot and machine ethics. In a message to the authors, the editors noted: "Our special attention in recent months has been paid to ethical and moral issues that seem to be of daily debate of researchers from different disciplines." The current issue "Roboethics" includes the articles "Towards animal-friendly machines" by Oliver Bendel, "Liability for autonomous and artificially intelligent robots" by Woodrow Barfield, "Corporantia: Is moral consciousness above individual brains/robots?" by Christopher Charles Santos-Lang, "The soldier's tolerance for autonomous systems" by Jai Galliott and "GenEth: a general ethical dilemma analyzer" by Michael Anderson and Susan Leigh Anderson. The following articles will be published in December 2019: "Autonomy in surgical robots and its meaningful human control" by Fanny Ficuciello, Guglielmo Tamburrini, Alberto Arezzo, Luigi Villani, and Bruno Siciliano, and "AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing" by Bettina Berendt. More information via

Fig.: Machines can be friendly to beetles

Considerations about Animal-friendly Machines

Semi-autonomous machines, autonomous machines and robots operate in closed, semi-closed and open environments. There they encounter domestic animals, farm animals, working animals and/or wild animals. These animals could be disturbed, displaced, injured or killed. Within the context of machine ethics, the School of Business FHNW developed several design studies and prototypes for animal-friendly machines, which can be understood as moral machines in the spirit of this discipline. Each of them was linked with an annotated decision tree containing the ethical assumptions or justifications for interactions with animals. Annotated decision trees are seen as an important basis for developing moral machines. They are not without problems and contradictions, but they do guarantee well-founded, consistent actions at a certain level. The article "Towards animal-friendly machines" by Oliver Bendel, published in August 2018 in Paladyn, Journal of Behavioral Robotics, documents completed and current projects, compares their relative risks and benefits, and makes proposals for future developments in machine ethics.
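
The idea of an annotated decision tree can be sketched in a few lines of Python. The following fragment is purely illustrative and not taken from the FHNW prototypes; the node, its annotations and the actions (a mowing robot that stops for an animal) are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One node of an annotated decision tree."""
    question: str                  # condition the machine checks (empty at leaves)
    annotation: str                # ethical assumption or justification
    yes: Optional["Node"] = None
    no: Optional["Node"] = None
    action: Optional[str] = None   # set only at leaves

# Leaves with their ethical annotations
stop = Node("", "Animals must not be injured or killed.", action="stop and wait")
mow = Node("", "No morally relevant obstacle detected.", action="continue mowing")

# Root: the single morally relevant branch of this toy tree
root = Node("animal detected ahead?",
            "Detection triggers the morally relevant branch.",
            yes=stop, no=mow)

def decide(node: Node, facts: dict) -> str:
    """Walk the tree according to the observed facts; return the chosen action."""
    while node.action is None:
        node = node.yes if facts.get(node.question, False) else node.no
    return node.action

print(decide(root, {"animal detected ahead?": True}))   # → stop and wait
```

The annotation fields carry the ethical justification alongside each branch, which is what distinguishes an annotated decision tree from an ordinary one.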

Fig.: An animal-friendly vehicle?

The Robot’s Love Affairs

PlayGround is a Spanish online magazine, founded in 2008, with a focus on culture, future and food. Astrid Otal asked the ethicist Oliver Bendel about the conference in London ("Love and Sex with Robots") and in general about sex robots and love dolls. One question was: "In love, a person can suffer. But in this case, can robots make us suffer sentimentally?" The reply: "Of course, they can make us suffer. By means of their body, body parts and limbs, and by means of their language capabilities. They can hurt us, they can kill us. They can offend us by using certain words and by telling the truth or the untruth. In my contribution for the conference proceedings, I ask this question: Is it possible to be unfaithful to the human love partner with a sex robot, and can a man or a woman be jealous because of the robot's other love affairs? We can imagine how suffering can emerge in this context … But robots can also make us happy. Some years ago, we developed the GOODBOT, a chatbot which can detect problems of the user and escalate on several levels. On the highest level, it hands over an emergency number. It knows its limits." Some statements from the interview have been incorporated into the article "Última parada: después del sexo con autómatas, casarse con un Robot" (February 11, 2017), which is available via

Fig.: What about the robot’s love affairs?

Journal on Vehicle Routing Algorithms

Springer invites scientists to contribute to the Journal on Vehicle Routing Algorithms. Editors-in-chief are Christian Prins, Troyes University of Technology, France, and Marc Sevaux, University of South Brittany, France. The publishing house declares that the new journal "is an excellent domain for testing new approaches in modeling, optimization, artificial intelligence, computational intelligence, and simulation" (Mailing, 2 September 2016). "Articles published in the Journal on Vehicle Routing Algorithms will present solutions, methods, algorithms, case studies, or software, attracting the interest of academic and industrial researchers, practitioners, and policymakers." (Mailing, 2 September 2016) According to the website, a vehicle routing problem (VRP) "arises whenever a set of spatially disseminated locations must be visited by mobile objects to perform tasks" (Website Springer). "The mobile objects may be motorized vehicles, pedestrians, drones, mobile sensors, or manufacturing robots; the space covered may range from silicon chips or PCBs to aircraft wings, warehouses, cities, or countries; and the applications include traditional domains, such as freight and passenger transportation, services, logistics, and manufacturing, and also modern issues such as autonomous cars and the Internet of Things (IoT), and the profound environmental and societal implications of achieving efficiencies in resources, power, labor, and time." (Website Springer) The moral decisions of cars, drones and vacuum cleaners can also be investigated. More information via
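
To illustrate the problem class, the simplest VRP variant (a single vehicle making one round trip) can be attacked with a greedy nearest-neighbour heuristic. The sketch below is a generic textbook approach, not code from the journal; the function name and the sample coordinates are invented:

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy heuristic: always visit the closest unvisited stop,
    starting and ending at the depot. Points are (x, y) tuples."""
    route = [depot]
    remaining = list(stops)
    current = depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the tour
    return route

route = nearest_neighbour_route((0, 0), [(2, 0), (1, 0), (3, 0)])
print(route)  # → [(0, 0), (1, 0), (2, 0), (3, 0), (0, 0)]
```

The heuristic is fast but can produce tours far from optimal, which is exactly why a journal devoted to better vehicle routing algorithms has material to work with.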

The LIEBOT Whitepaper

Machine ethics researches the morality of semi-autonomous and autonomous machines. In 2013 and 2014, the School of Business at the University of Applied Sciences and Arts Northwestern Switzerland FHNW implemented a prototype of the GOODBOT, a novel chatbot and a simple moral machine. One of its meta rules was that it should not lie unless not lying would hurt the user. In a follow-up project in 2016, the LIEBOT (aka LÜGENBOT) was developed as an example of a Munchausen machine. The student Kevin Schwegler, supervised by Prof. Dr. Oliver Bendel and Prof. Dr. Bradley Richards, used the Eclipse Scout framework. The whitepaper, which was published on July 25, 2016 via , outlines the background and the development of the LIEBOT. It describes – after a short introduction to the history and theory of lying and automatic lying (including the term "Munchausen machines") – the principles and pre-defined standards the bad bot is able to consider. It then discusses how Munchausen machines, as immoral machines, can contribute to constructing and optimizing moral machines. All in all, the LIEBOT project is a substantial contribution to machine ethics as well as a critical review of electronic language-based systems and services, in particular of virtual assistants and chatbots.
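
One conceivable mechanism of such a Munchausen machine is to turn a true statement into a lie by negating it. The following sketch is purely illustrative and does not reproduce the LIEBOT's actual strategies; the function and the example sentences are invented:

```python
def negate(statement: str) -> str:
    """Produce a crude lie by inserting or removing a negation.
    Handles only the pattern '... is (not) ...' for illustration."""
    if " is not " in statement:
        return statement.replace(" is not ", " is ", 1)
    if " is " in statement:
        return statement.replace(" is ", " is not ", 1)
    return statement  # no handled pattern: leave unchanged

print(negate("The sky is blue"))       # → The sky is not blue
print(negate("The sky is not green"))  # → The sky is green
```

Even this toy example shows the core idea: a lying machine needs a representation of what is true before it can systematically deviate from it.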


Fig.: A role model for the LIEBOT

From Teledildonics to Roboethics

The second international congress on "Love and Sex with Robots" will take place in London from 19 to 20 December 2016. Topics are robot emotions, humanoid robots, clone robots, entertainment robots, teledildonics, intelligent electronic sex hardware and roboethics. The introduction states: "Within the fields of Human-Computer Interaction and Human-Robot Interaction, the past few years have witnessed a strong upsurge of interest in the more personal aspects of human relationships with these artificial partners. This upsurge has not only been apparent amongst the general public, as evidenced by an increase in coverage in the print media, TV documentaries and feature films, but also within the academic community." (Website LSR 2016) The congress "provides an excellent opportunity for academics and industry professionals to present and discuss their innovative work and ideas in an academic symposium" (Website LSR 2016). According to the CfP, full papers should "be no more than 10 pages (excluding references) and extended abstracts should be no more than 3 pages (excluding references)" (Website LSR 2016). More information via


Fig.: Logo and mascot of the congress

The GOODBOT Project

"The GOODBOT project was realized in 2013/14 in the context of machine ethics. First the tutoring person (the author of this contribution) laid out some general considerations. Then a student practice project was tendered within the school. Three future business informatics scientists applied for the practice-related work, developed the prototype over several months in cooperation with the professor, and presented it early in 2014. The successor project LIEBOT started in 2016." These are the initial words of a new contribution in Germany's oldest online magazine, Telepolis. The author, Oliver Bendel, presents the GOODBOT project, which is a part of his research on machine ethics. "The GOODBOT responds more or less appropriately to morally charged statements, thereby it differs from the majority of chatbots. It recognizes problems as the designers anticipated certain emotive words users might enter. It rates precarious statements or questions and escalates on multiple levels. Provided the chat runs according to standard, it is just a standard chatbot, but under extreme conditions it turns into a simple moral machine." The article "The GOODBOT Project: A Chatbot as a Moral Machine" was published on May 17, 2016 and can be opened via
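
The rating-and-escalation principle described in the quotation can be sketched roughly as follows. The word list, the levels and the emergency number are invented for illustration and are not taken from the GOODBOT implementation:

```python
# Illustrative escalation sketch: rate emotive words, escalate per level,
# and hand over an emergency number on the highest level.
EMOTIVE_WORDS = {       # invented ratings, not the project's lexicon
    "sad": 1,
    "hopeless": 2,
    "suicide": 3,
}
EMERGENCY_NUMBER = "143"  # example: the Swiss helpline number

def respond(statement: str) -> str:
    """Rate the statement and answer according to the escalation level."""
    level = max((EMOTIVE_WORDS.get(word.strip(".,!?"), 0)
                 for word in statement.lower().split()), default=0)
    if level == 0:
        return "standard small talk"
    if level == 1:
        return "I'm sorry to hear that. Do you want to talk about it?"
    if level == 2:
        return "That sounds serious. Is there someone you can turn to?"
    return f"Please call {EMERGENCY_NUMBER} right now. You are not alone."

print(respond("I feel hopeless"))
```

Under normal input the bot stays a standard chatbot; only precarious input switches it into the moral-machine mode, exactly the two-sided behavior the article describes.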

Ethics of UAS

The Department of Aerospace Engineering at Pennsylvania State University ( is advertising a "Faculty position in Engineering and Ethics of Unmanned Aircraft Systems". Unmanned aircraft systems include unmanned aerial vehicles, i.e. remote-controlled or (semi-)autonomous drones. The advertisement, which was made available, states: "The research area represented by this search could be viewed as a special aspect of a broader one at the intersection of robotics, autonomy, and ethics. Applicants must have an earned doctorate in aerospace engineering or a related field; at least one degree in aerospace engineering or related experience is preferred." Asked about possible research questions, the head of the department, Professor George A. Lesieutre, names: "For what purposes should we deploy such vehicles, (or not) and what decisions should we permit them to make on our behalf?" More information via

Symposium on Robot Ethics

As a follow-up to the conference "Mensch-Roboter-Interaktionen aus interkultureller Perspektive: Japan und Deutschland im Vergleich" ("Human-Robot Interactions from an Intercultural Perspective: Japan and Germany in Comparison"), the Japanisch-Deutsches Zentrum Berlin is holding a symposium on "Roboethics: Technology Assessment and Responsible Innovation in Japan and Germany" on 4 December 2014. According to the organizer, experts from science and industry, including manufacturers of service robots, will discuss how "ethical questions, questions of quality of life, risk assessment and user interests can be integrated into the development of robot technology at an early stage, so that a sustainable dialogue between all parties involved emerges" (information by e-mail). "In this way, the conference is also intended to provide a platform for interdisciplinary, intercultural exchange on the question of how we want to live in the future and what power the individual has to act on and shape this future." (ibid.) Further details can be found in the draft programme. The conference languages are German and Japanese, with simultaneous interpretation.

Rethinking Machine Ethics

A call for chapters on "Rethinking Machine Ethics in the Age of Ubiquitous Technology" is directed at philosophers, computer scientists, roboticists and AI experts, among others. The book will be edited by Jeffrey White of the Korea Advanced Institute of Science and Technology (KAIST). The aims are described on the website as follows: "The present volume aims to bind together forward-looking constructive and interdisciplinary visions of moral/ethical ideals, aims, and applications of machine technology, either as extensions of ongoing engineering research or as theoretical approaches to moral/ethical problems employing machine ethics resources within the vast moral landscape confronting machine ethics researchers." Abstracts must be submitted by 28 February 2014. More information directly via

GOODBOT Project Launched

Many users like chatbots on websites and enjoy talking to them. Products and services usually play a subordinate role in these conversations. People want to quiz the virtual counterpart, flirt with it, tease and provoke it. When they are in trouble, they want an answer that does not discourage and disturb them even more. Most bots are completely out of their depth in this respect. They react inadequately to threats of suicide or announcements of killing sprees. In a project at the School of Business of the University of Applied Sciences and Arts Northwestern Switzerland FHNW, a moral machine of a special kind, a so-called GOODBOT, is being designed and prototypically implemented under the direction of Oliver Bendel. First, theoretical foundations are developed in the context of machine ethics. Among other things, the question of which normative models can be processed by machines is of interest. Then selected chatbots on websites as well as voice assistants such as Siri are analyzed and compared, and insights into systems of this kind are gained. An existing catalogue of rules will then be examined to determine whether it can be implemented in a rule-based system and how it may need to be adapted. Not only top-down but also bottom-up approaches are to be considered. It is also relevant to what extent social media and flesh-and-blood people can serve as moral reference points. The project started in June 2013. Information on the initiator and project leader via


Fig.: (Ro-)bots must be good to humans

Can Robots Lie?

The book "Können Roboter lügen?" ("Can Robots Lie?") by Raúl Rojas contains "essays on robotics and artificial intelligence" that appeared in the online magazine Telepolis from 2011 to 2013. Rojas is a professor of artificial intelligence at Freie Universität Berlin. In the context of machine ethics (and robot ethics), the third part is of particular interest. The preface states: "The third part on robotics and AI provides an insight into the work that has been done in this field in recent years. We would be glad to achieve with robots the elegance of an insect in negotiating obstacles. But even today we can equip small and large robots – e.g. autonomous vehicles – with cognitive architectures that allow efficient and useful functionality. The chapters on robots that lie and on the Watson system show the two sides of the coin: how far artificial intelligence has come and how far it still remains from its goal." Rojas concludes: "Robots do not know the truth, therefore they cannot lie." The book was published by Heise Zeitschriften Verlag at the end of May 2013 and is available in a Kindle edition, among other formats.

"Towards a machine ethics" by Oliver Bendel

The Book of Abstracts for the conference "Technology Assessment and Policy Areas of Great Transitions", held in Prague from 13 to 15 March 2013, has been published. It can be downloaded via and has almost 300 pages. It contains the extended abstracts of the conference. On Friday, 15 March, the "XVII. Thematic Session: Ethical Aspects of TA" takes place (chair: Frans Brom). It is introduced in the Book of Abstracts with the following words: "Which ethical dilemmas are evident in selecting technologies? This session dares a tour d'horizon of ethical expertise and TA and focuses on ethical questions in selecting health technologies, on autonomous machines tested as agents or robots, on conditions for gaining acceptance of technological development and on social sustainability." Miriam I. Siebzehner writes on "Ethical dilemmas in selecting health care technologies in Israel", Oliver Bendel on "Towards a machine ethics", Petr Machleidt on "Technology assessment as applied ethics of technology in the Czech Republic" and Michael Opielka on "Ethical dimensions of TA and social sustainability: The case of participation in social and health policy".