Tags: Social Robots

Robot Enhancement

Social robots and service robots usually have a defined locomotion system, a defined appearance, and defined abilities of facial expression and gesture. On the one hand, this leads to a certain familiarization effect. On the other hand, it limits what the robots can do, for example in the household or in a shopping mall. Robot enhancement is used to extend and improve social robots and service robots. It changes their appearance and expands their scope. Attachments can be applied to the hardware, limbs can be extended, and components can be exchanged. A skin made of silicone can be pulled over the face or head, giving the robots a more human-like appearance. The software can also be changed and the robot connected to AI systems, as is already done in many cases (see the sketch below). The project or thesis, announced by Oliver Bendel in August 2020 at the School of Business FHNW, is intended, first, to present the principles and functional possibilities of robot enhancement. Second, concrete examples are to be given and described. One of these examples, e.g., the silicone skin, is to be implemented. Robots like Pepper or Atlas would be completely changed by such a skin. They could look uncanny, but also appealing. The project will start in September 2020.
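
The software side of such an enhancement can be illustrated in a few lines. The following is a minimal sketch and not part of the project: it assumes Pepper's NAOqi Python SDK (ALProxy) and a hypothetical HTTP endpoint, AI_SERVICE_URL, standing in for an arbitrary AI system whose answer the robot then speaks aloud.

```python
# Software-side robot enhancement: routing a robot's dialogue through an
# external AI system. Sketch only; it assumes Pepper's NAOqi Python SDK
# (ALProxy) and a hypothetical HTTP endpoint AI_SERVICE_URL standing in
# for an arbitrary AI system.

import requests                   # generic HTTP client
from naoqi import ALProxy         # NAOqi SDK shipped with Pepper/NAO

ROBOT_IP = "192.168.1.10"                     # placeholder robot address
AI_SERVICE_URL = "https://example.org/chat"   # hypothetical AI backend

def enhanced_reply(utterance):
    """Send the user's utterance to the external AI and return its answer."""
    response = requests.post(AI_SERVICE_URL, json={"text": utterance}, timeout=10)
    response.raise_for_status()
    return response.json()["reply"]

if __name__ == "__main__":
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)  # Pepper's speech service
    tts.say(enhanced_reply("What can you do?"))      # robot speaks the AI's answer
```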

Fig.: Robot Enhancement

The Morality Menu Project

"Once we place so-called 'social robots' into the social practices of our everyday lives and lifeworlds, we create complex, and possibly irreversible, interventions in the physical and semantic spaces of human culture and sociality. The long-term socio-cultural consequences of these interventions are currently impossible to gauge." (Website Robophilosophy Conference) With these words the next Robophilosophy conference was announced. It would have taken place in Aarhus, Denmark, from 18 to 21 August 2020, but due to the COVID-19 pandemic it is being conducted online. One lecture will be given by Oliver Bendel. The abstract of the paper "The Morality Menu Project" states: "Machine ethics produces moral machines. The machine morality is usually fixed. Another approach is the morality menu (MOME). With this, owners or users transfer their own morality onto the machine, for example a social robot. The machine acts in the same way as they would act, in detail. A team at the School of Business FHNW implemented a MOME for the MOBO chatbot. In this article, the author introduces the idea of the MOME, presents the MOBO-MOME project and discusses advantages and disadvantages of such an approach. It turns out that a morality menu can be a valuable extension for certain moral machines." In 2018, Hiroshi Ishiguro, Guy Standing, Catelijne Muller, Joanna Bryson, and Oliver Bendel had been keynote speakers. In 2020, Catrin Misselhorn, Selma Sabanovic, and Shannon Vallor will be presenting. More information via conferences.au.dk/robo-philosophy/.
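
How a morality menu might be realized in software can be sketched briefly. The following Python fragment is an illustration only; the rule names and the example behavior are invented and do not reproduce the actual MOBO-MOME settings.

```python
# Minimal sketch of a morality menu (MOME): the owner toggles moral rules,
# and the chatbot consults them before answering. The rule names and the
# example behavior are invented; they are not the actual MOBO-MOME settings.

DEFAULT_MENU = {
    "may_use_flattery": False,       # may the bot compliment instead of criticize?
    "warns_before_criticism": True,  # should honest criticism be announced?
}

class MoralityMenuBot:
    def __init__(self, menu=None):
        # The owner's menu choices fix the machine's behavior in detail.
        self.menu = dict(DEFAULT_MENU, **(menu or {}))

    def respond(self, honest_answer, flattering_answer):
        if self.menu["may_use_flattery"]:
            return flattering_answer
        if self.menu["warns_before_criticism"]:
            return "Honestly: " + honest_answer
        return honest_answer

# Two owners transfer two different moralities onto the same machine.
frank = MoralityMenuBot({"may_use_flattery": False})
kind = MoralityMenuBot({"may_use_flattery": True})
print(frank.respond("The text needs work.", "A great text!"))  # Honestly: The text needs work.
print(kind.respond("The text needs work.", "A great text!"))   # A great text!
```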

Fig.: The morality menu project

From Co-Robots to Care Robots

The paper "Co-Robots as Care Robots" by Oliver Bendel, Alina Gasser and Joel Siebenmann, accepted at the AAAI 2020 Spring Symposium "Applied AI in Healthcare: Safety, Community, and the Environment", can be accessed via arxiv.org/abs/2004.04374. From the abstract: "Cooperation and collaboration robots, co-robots or cobots for short, are an integral part of factories. For example, they work closely with the fitters in the automotive sector, and everyone does what they do best. However, the novel robots are not only relevant in production and logistics, but also in the service sector, especially where proximity between them and the users is desired or unavoidable. For decades, individual solutions of a very different kind have been developed in care. Now experts are increasingly relying on co-robots and teaching them the special tasks that are involved in care or therapy. This article presents the advantages, but also the disadvantages of co-robots in care and support, and provides information with regard to human-robot interaction and communication. The article is based on a model that has already been tested in various nursing and retirement homes, namely Lio from F&P Robotics, and uses results from accompanying studies. The authors can show that co-robots are ideal for care and support in many ways. Of course, it is also important to consider a few points in order to guarantee functionality and acceptance." Due to the outbreak of the COVID-19 pandemic, the physical meeting to be held at Stanford University was postponed. It will take place in November 2020 in Washington (AAAI 2020 Fall Symposium Series).

Moral Competence for Social Robots

At the end of 2018, the article entitled "Learning How to Behave: Moral Competence for Social Robots" by Bertram F. Malle and Matthias Scheutz was published in the "Handbuch Maschinenethik" ("Handbook Machine Ethics") (ed.: Oliver Bendel). An excerpt from the abstract: "We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot's moral competence." The authors propose "that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)". "A robot's computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others' violations and explains one's own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots." (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
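
The first constituent, a computational representation of norms, can be made concrete with a toy example. The following sketch invents a graded, context-specific norm structure and a minimal judgment step over it; the chapter does not prescribe this data structure, and the norm contents and strengths are made up for illustration.

```python
# Toy rendering of the first constituent, computational representations of
# norms: graded, context-specific norms plus a minimal moral-judgment step.
# Norm contents and strengths are invented for illustration; the chapter
# does not prescribe this data structure.

from dataclasses import dataclass

@dataclass
class Norm:
    action: str      # e.g. "interrupt_person"
    context: str     # norms are context-specific ("care_home", "factory", ...)
    strength: float  # graded: 0.0 (weak preference) to 1.0 (strict prohibition)

NORMS = [
    Norm("interrupt_person", "care_home", 0.9),
    Norm("interrupt_person", "factory", 0.3),
]

def judge(action, context, threshold=0.5):
    """Moral judgment: report a violation if a sufficiently strong norm applies."""
    for norm in NORMS:
        if norm.action == action and norm.context == context and norm.strength >= threshold:
            return f"violation (norm strength {norm.strength})"
    return "permissible"

print(judge("interrupt_person", "care_home"))  # violation (norm strength 0.9)
print(judge("interrupt_person", "factory"))    # permissible

# A learning component, as called for in the abstract, would update
# Norm.strength from feedback rather than fixing it in advance.
```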