From 18 to 21 August 2020, the Robophilosophy conference took place. Due to the pandemic, participants could not meet in Aarhus as originally planned, but only in virtual space. Nevertheless, the conference was a complete success. At the end of the year, the conference proceedings were published by IOS Press, including the paper "The Morality Menu Project" by Oliver Bendel. From the abstract: "The discipline of machine ethics examines, designs and produces moral machines. The artificial morality is usually pre-programmed by a manufacturer or developer. However, another approach is the more flexible morality menu (MOME). With this, owners or users replicate their own moral preferences onto a machine. A team at the FHNW implemented a MOME for MOBO (a chatbot) in 2019/2020. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu could be a valuable extension for certain moral machines." The book can be ordered on the publisher's website. An author's copy is available here.
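The core idea of a morality menu can be sketched in a few lines: the owner toggles moral rules on or off, and the chatbot consults them before answering. The following is a minimal illustration only; the rule names and the `MoralityMenu` class are invented for this sketch and are not taken from the actual MOBO-MOME implementation.

```python
# Minimal sketch of a morality menu (MOME): the owner switches moral
# rules on or off, and the chatbot checks them before replying.
# Rule names are hypothetical, not from the real MOBO catalogue.
class MoralityMenu:
    def __init__(self):
        self.rules = {
            "use_polite_forms": True,
            "admit_being_a_machine": True,
        }

    def set_rule(self, name: str, value: bool) -> None:
        if name not in self.rules:
            raise KeyError(f"unknown rule: {name}")
        self.rules[name] = value


def reply(menu: MoralityMenu, user_says: str) -> str:
    # The bot's behavior follows the owner's proxy morality.
    if "are you human" in user_says.lower():
        if menu.rules["admit_being_a_machine"]:
            return "No, I am a chatbot."
        return "Let's talk about something else."
    return "Good day!" if menu.rules["use_polite_forms"] else "Hi."


menu = MoralityMenu()
print(reply(menu, "Are you human?"))  # -> No, I am a chatbot.
```

Flipping a single switch changes the machine's behavior in detail, which is exactly what "replicating one's own moral preferences onto a machine" means in practice.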
Social robots are robots that come close to animals and humans and interact and communicate with them. They reproduce characteristics of animals and humans in their behavior and appearance. They can be implemented both as hardware robots and as software robots. The SPACE THEA project was originally scheduled to start in March 2020 but had to be postponed because of COVID-19. Prof. Dr. Oliver Bendel (School of Business FHNW) has now begun the preparatory work. The voicebot will then be programmed in winter 2020/2021 and spring 2021. SPACE THEA is designed to accompany astronauts to Mars and to show them empathy and emotions. In the best case, she should also be able to provide psychological counseling, for example based on cases from the literature. The project will use findings from social robotics, but also from machine ethics. The results will be available by summer 2021.
In many cases it is important that an autonomous system acts and reacts adequately from a moral point of view. There are some artifacts of machine ethics, e.g., GOODBOT or LADYBIRD by Oliver Bendel or Nao as a care robot by Susan Leigh Anderson and Michael Anderson. But there is no standardization in the field of moral machines yet. The MOML project, initiated by Oliver Bendel, is trying to work in this direction. In the management summary of his bachelor thesis Simon Giller writes: "We present a literature review in the areas of machine ethics and markup languages which shaped the proposed morality markup language (MOML). To overcome the most substantial problem of varying moral concepts, MOML uses the idea of the morality menu. The menu lets humans define moral rules and transfer them to an autonomous system to create a proxy morality. Analysing MOML excerpts allowed us to develop an XML schema which we then tested in a test scenario. The outcome is an XML-based morality markup language for autonomous agents. Future projects can use this language or extend it. Using the schema, anyone can write MOML documents and validate them. Finally, we discuss new opportunities, applications and concerns related to the use of MOML. Future work could develop a controlled vocabulary or an ontology defining terms and commands for MOML." The bachelor thesis will be publicly available in autumn 2020. It was supervised by Dr. Elzbieta Pustulka. There will also be a paper with the results next year.
Fig.: Skeleton of the top elements (Illustration: Simon Giller)
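To make the idea of an XML-based proxy morality concrete, here is a purely hypothetical sketch of how such a document could be written and read back. The element and attribute names are invented for this illustration and do not come from Simon Giller's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical MOML-like excerpt; element names invented for this sketch.
MOML_DOC = """
<moml version="0.1">
  <agent id="mobo"/>
  <rules>
    <rule name="admit_being_a_machine" value="true"/>
    <rule name="use_informal_address" value="false"/>
  </rules>
</moml>
"""

def load_proxy_morality(xml_text: str) -> dict:
    """Parse a MOML-like document into a name -> bool rule mapping."""
    root = ET.fromstring(xml_text)
    return {
        rule.get("name"): rule.get("value") == "true"
        for rule in root.find("rules")
    }

rules = load_proxy_morality(MOML_DOC)
print(rules["admit_being_a_machine"])  # -> True
```

In a real deployment, the document would first be validated against the XML schema mentioned in the summary before an autonomous agent adopts the rules.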
"Once we place so-called 'social robots' into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions are currently impossible to gauge." (Website Robophilosophy Conference) With these words the next Robophilosophy conference was announced. It would have taken place in Aarhus, Denmark, from 18 to 21 August 2020, but due to the COVID-19 pandemic it is being conducted online. One lecture will be given by Oliver Bendel. The abstract of the paper "The Morality Menu Project" states: "Machine ethics produces moral machines. The machine morality is usually fixed. Another approach is the morality menu (MOME). With this, owners or users transfer their own morality onto the machine, for example a social robot. The machine acts in the same way as they would act, in detail. A team at the School of Business FHNW implemented a MOME for the MOBO chatbot. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu can be a valuable extension for certain moral machines." In 2018 Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel were keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information via conferences.au.dk/robo-philosophy/.
Again and again one hears, often from theologians, sometimes from philosophers, that machines are not autonomous, not intelligent, not moral, etc. They transfer the concept they know from their own field onto technical sciences such as computer science, artificial intelligence (AI) and machine ethics (which is technically oriented and works closely with AI and robotics). They do not acknowledge that every discipline can have (and usually does have) its own concepts. At a conference in 2015, Bundestag President Prof. Dr. Norbert Lammert, a deeply religious man, berated the speakers, saying that machines were not autonomous, since they had not given themselves a law. But computer science and robotics do speak of autonomous systems and machines, and of course they may do so, as long as they explain what they mean by it. Such clarification and appropriation of terms in fact stands at the beginning of every scientific activity, and the fact that the terms sound the same as those of other fields by no means implies that they mean, or must mean, the same thing. A new graphic by Prof. Dr. Oliver Bendel, building on earlier drafts, shows what the subject matter of the disciplines or fields of AI, machine ethics and machine consciousness is, and makes terminological proposals for them. At their core, these disciplines are concerned with depicting, replicating or simulating something in certain respects. Artificial intelligence thus manages to produce artificial intelligence, for instance dialogue systems or machines that solve certain problems. Whether these are "really" intelligent or not is not a meaningful question, and it is no coincidence that the technical term uses the adjective "artificial" – here it would be even easier to understand than in the case of "autonomous" that a "new" meaning is involved (albeit one declared more than 50 years ago).
Space travel includes travel and transport to, through and from space for civil or military purposes. The take-off on earth is usually done with a launch vehicle. The spaceship, like the lander, is manned or unmanned. The target can be the orbit of a celestial body, a satellite, planet or comet. Humans have been to the moon several times; now they want to go to Mars. The astronaut will not greet the robots that are already there as if he or she had been lonely for months, for on the spaceship he or she was in the best of company. SPACE THEA spoke to him or her every day. When she noticed that he or she had problems, she changed her tone of voice – it became softer and happier – and what she said gave the astronaut hope again. How SPACE THEA really sounds and what she should say is the subject of a research project that will start in spring 2020 at the School of Business FHNW. Under the supervision of Prof. Dr. Oliver Bendel, a student is developing a voicebot that shows empathy towards an astronaut. The scenario is a proposal that can also be rejected. Maybe in these times it is more important to have a virtual assistant for crises and catastrophes in case one is in isolation or quarantine. Either way, the project in the fields of social robotics and machine ethics is entitled THE EMPATHIC ASSISTANT IN SPACE (SPACE THEA). The results – including the prototype – will be available by the end of 2020.
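The mechanism described above – noticing that the astronaut has problems and softening the voice – can be illustrated with a few lines of code. This is a hypothetical sketch, not the project's implementation; the mood score, thresholds and voice parameters are all assumed for the example.

```python
# Hypothetical sketch: map a detected mood score (-1.0 = distressed,
# 1.0 = cheerful) to voice parameters and an empathic opening line.
# Not the actual SPACE THEA implementation.
def choose_voice(mood: float) -> dict:
    if mood < -0.3:  # the astronaut seems to have problems
        return {"pitch": "soft", "tempo": "slow",
                "opening": "I am here for you. Tell me what weighs on you."}
    if mood > 0.3:
        return {"pitch": "bright", "tempo": "normal",
                "opening": "You sound great today!"}
    return {"pitch": "neutral", "tempo": "normal",
            "opening": "How was your day?"}


print(choose_voice(-0.8)["pitch"])  # -> soft
```

In a real voicebot the mood score would come from speech analysis, and the parameters would drive a text-to-speech engine rather than a dictionary.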
The first phase of the HUGGIE project will start at the School of Business FHNW in March 2020. Oliver Bendel was able to recruit two students from the International Management program. The project idea is to create a social robot that contributes directly to a good life and economic success by touching and hugging people, especially customers. HUGGIE should be able to warm up in some places, and it should be possible to change the materials it is covered with. One research question will be: What are the possibilities besides warmth and softness? Are optical stimuli (also on displays), vibrations, noises, voices etc. important for a successful hug? HUGGIE could also play a role in crises and disasters, in epidemics and pandemics, and in cases of permanent social distancing. Of course it would be bad if only a robot hugged us, and of course it would be good if humans could hug us every day, if we wanted them to – but maybe in extreme situations a hug by a robot is better than nothing. The HUGGIE project is located in the heart of social robotics and on the periphery of machine ethics. By summer 2020, the students will conduct an online survey to find out the attitudes and expectations of potential users.
The paper "Co-Robots as Care Robots" by Oliver Bendel, Alina Gasser and Joel Siebenmann was accepted at the AAAI 2020 Spring Symposia. From the abstract: "Cooperation and collaboration robots, co-robots or cobots for short, are an integral part of factories. For example, they work closely with the fitters in the automotive sector, and everyone does what they do best. However, the novel robots are not only relevant in production and logistics, but also in the service sector, especially where proximity between them and the users is desired or unavoidable. For decades, individual solutions of a very different kind have been developed in care. Now experts are increasingly relying on co-robots and teaching them the special tasks that are involved in care or therapy. This article presents the advantages, but also the disadvantages of co-robots in care and support, and provides information with regard to human-robot interaction and communication. The article is based on a model that has already been tested in various nursing and retirement homes, namely Lio from F&P Robotics, and uses results from accompanying studies. The authors can show that co-robots are ideal for care and support in many ways. Of course, it is also important to consider a few points in order to guarantee functionality and acceptance." The paper had been submitted to the symposium "Applied AI in Healthcare: Safety, Community, and the Environment". Oliver Bendel will present the results at Stanford University between 23 and 25 March 2020.
Fig.: With or without a robot – the teddy bear must not be missing
The book chapter "The BESTBOT Project" by Oliver Bendel, David Studer and Bradley Richards was published on 31 December 2019. It is part of the 2nd edition of the "Handbuch Maschinenethik", edited by Oliver Bendel. From the abstract: "The young discipline of machine ethics both studies and creates moral (or immoral) machines. The BESTBOT is a chatbot that recognizes problems and conditions of the user with the help of text analysis and facial recognition and reacts morally to them. It can be seen as a moral machine with some immoral implications. The BESTBOT has two direct predecessor projects, the GOODBOT and the LIEBOT. Both had room for improvement and advancement; thus, the BESTBOT project used their findings as a basis for its development and realization. Text analysis and facial recognition in combination with emotion recognition have proven to be powerful tools for problem identification and are part of the new prototype. The BESTBOT enriches machine ethics as a discipline and can solve problems in practice. At the same time, with new solutions of this kind come new problems, especially with regard to privacy and informational autonomy, which information ethics must deal with." The BESTBOT is an immoral machine in a moral one – or a moral machine in an immoral one, depending on the perspective. The book chapter can be downloaded from link.springer.com/referenceworkentry/10.1007/978-3-658-17484-2_32-1.
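The combination of text analysis and emotion recognition for problem identification can be sketched as two signals feeding one decision. This is a hypothetical illustration of the principle only; the keyword list, emotion labels and responses are invented and do not reflect the BESTBOT's actual implementation.

```python
# Hypothetical sketch of the BESTBOT principle: combine a text signal
# and a facial emotion signal to detect a user problem and react.
# Keywords, labels and responses are invented for this illustration.
DISTRESS_WORDS = {"sad", "alone", "hopeless", "afraid"}

def text_signal(message: str) -> bool:
    return any(word in message.lower() for word in DISTRESS_WORDS)

def detect_problem(message: str, facial_emotion: str) -> bool:
    # A problem counts as identified if either channel raises a flag.
    return text_signal(message) or facial_emotion in {"sadness", "fear"}

def respond(message: str, facial_emotion: str) -> str:
    if detect_problem(message, facial_emotion):
        return "You seem to be troubled. Would you like the number of a help line?"
    return "Glad to hear from you. How can I help?"


print(respond("I feel so alone.", "neutral"))
```

The sketch also hints at the privacy tension the abstract mentions: the moral reaction only works because the machine continuously observes the user's face.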
In his lecture at the Orient-Institut Istanbul on 18 December 2019, Oliver Bendel dealt with care robots as well as therapy and surgery robots. He presented well-known and less known examples and clarified the goals, tasks and characteristics of these service robots in the healthcare sector. Afterwards he investigated current and future functions of care robots, including sexual assistance functions. Against this background, the lecture considered both the perspective of information ethics and machine ethics. In the end, it became clear which robot types and prototypes or products are available in healthcare, which purposes they fulfil, which functions they assume, how the healthcare system changes through their use and which implications and consequences this has for the individual and society. The lecture took place within the series "Human, medicine and society: past, present and future encounters" … The Orient-Institut Istanbul is a turkological and regional scientific research institute in the association of the Max Weber Foundation. In close cooperation with Turkish and international scientists, it dedicates itself to a multitude of different research areas. More information via www.oiist.org.
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: "Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots." The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design and privacy and ethical issues in chatbot design and implementation. The submission deadline for CONVERSATIONS 2019 was extended to September 10. More information via conversations2019.wordpress.com/.
Robophilosophy or robot philosophy is a field of philosophy that deals with robots (hardware and software robots) as well as with enhancement options such as artificial intelligence. It is concerned not only with the practice and history of their development, but also with the history of ideas, starting with the works of Homer and Ovid and extending to science fiction books and movies. Disciplines such as epistemology, ontology, aesthetics and ethics, including information and machine ethics, are involved. The platform robophilosophy.com was founded in July 2019 by Oliver Bendel. He invited several authors to write with him about robophilosophy, robot law, information ethics, machine ethics, robotics and artificial intelligence. All of them have a relevant background. Oliver Bendel studied philosophy as well as information science and wrote his doctoral thesis on anthropomorphic software agents. He has been researching in the fields of information ethics and machine ethics for years.