Social robots and service robots usually have a defined locomotor system, a defined appearance, and defined facial and gestural abilities. On the one hand, this leads to a certain familiarization effect. On the other hand, it limits the robots' range of actions, for example in the household or in a shopping mall. Robot enhancement extends and improves social robots and service robots. It changes their appearance and expands their scope. It is possible to apply attachments to the hardware, extend limbs, and exchange components. One can pull a skin made of silicone over the face or head, making the robots look humanoid. One can also change the software and connect the robot to AI systems – this is already being done in many cases. The project or thesis, announced by Oliver Bendel in August 2020 at the School of Business FHNW, should first present the principles and functional possibilities of robot enhancement. Second, concrete examples should be given and described. One of these examples, e.g. the silicone skin, has to be implemented. Robots like Pepper or Atlas would be completely changed by such a skin. They could look uncanny, but also appealing. The project will start in September 2020.
The paper "Care Robots with Sexual Assistance Functions" by Oliver Bendel was accepted at the AAAI 2020 Spring Symposia. From the abstract: "Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects." The paper was submitted to the symposium "Applied AI in Healthcare: Safety, Community, and the Environment". Oliver Bendel will present the paper at Stanford University between 23 and 25 March 2020.
Fig.: Should care robots have sexual assistance functions?
Some universities strive to use holograms in their teaching. Through this technology, the lecturer's representative would have a physical presence in space. Even interactions and conversations would be possible if the holograms or projections were connected to speech systems. Dr. David Lefevre, director of Imperial's Edtech Lab, told the BBC one year ago: "The alternative is to use video-conferencing software but we believe these holograms have a much greater sense of presence". American Samoa Community College (ASCC) has now switched on a digital platform that will stream 3D holograms of University of Hawai'i faculty members to deliver classes and engage with ASCC students in real time. According to the website, students at the HoloCampus launch on August 20 received a lecture by UH Mānoa Water Resources Research Center researcher Chris Shuler on the subject of "sustainability and resilience" – a theme "with special significance for the people of American Samoa and Pacific Islands nations as they face challenges such as increasing plastic waste and more dramatic weather systems brought about by climate change" (Website University of Hawai'i). Holograms could play a role in all sorts of areas, including social and sexual relationships.
CONVERSATIONS 2019 is a full-day workshop on chatbot research. It will take place on November 19, 2019 at the University of Amsterdam. From the description: "Chatbots are conversational agents which allow the user access to information and services through natural language dialogue, through text or voice. … Research is crucial in helping realize the potential of chatbots as a means of help and support, information and entertainment, social interaction and relationships. The CONVERSATIONS workshop contributes to this endeavour by providing a cross-disciplinary arena for knowledge exchange by researchers with an interest in chatbots." The topics of interest that may be explored in the papers and at the workshop include humanlike chatbots, networks of users and chatbots, trustworthy chatbot design, and privacy and ethical issues in chatbot design and implementation. The submission deadline for CONVERSATIONS 2019 was extended to September 10. More information via conversations2019.wordpress.com/.
Robophilosophy or robot philosophy is a field of philosophy that deals with robots (hardware and software robots) as well as with enhancement options such as artificial intelligence. It is concerned not only with the practice and history of their development, but also with the history of ideas, from the works of Homer and Ovid up to science fiction books and movies. Disciplines such as epistemology, ontology, aesthetics and ethics, including information and machine ethics, are involved. The new platform robophilosophy.com was founded in July 2019 by Oliver Bendel. He invited several authors to write with him about robophilosophy, robot law, information ethics, machine ethics, robotics and artificial intelligence. All of them have a relevant background. Oliver Bendel studied philosophy as well as information science and wrote his doctoral thesis on anthropomorphic software agents. He has been researching in the fields of information ethics and machine ethics for years.
The article "Hologram Girl" by Oliver Bendel deals first of all with the current and future technical possibilities of projecting three-dimensional human shapes into space or into vessels. Then examples of holograms from literature and film are mentioned, drawn from the fiction of past and present. Furthermore, the present and future reality of holograms is included, i.e., what technicians and scientists all over the world are trying to achieve in eager efforts to close the enormous gap between the imagined and the actual. A very specific aspect is of interest here, namely the idea that holograms serve us as objects of desire, that they step alongside love dolls and sex robots and support us in some way. Different aspects of fictional and real holograms are analyzed, namely pictoriality, corporeality, motion, size, beauty and speech capacity. There are indications that three-dimensional human shapes could be considered as partners, albeit in a very specific sense. The genuine advantages and disadvantages need to be investigated further, and a theory of holograms in love could be developed. The article is part of the book "AI Love You" by Yuefang Zhou and Martin H. Fischer and was published on 18 July 2019. Further information can be found via link.springer.com/book/10.1007/978-3-030-19734-6.
The 23rd Berlin Colloquium of the Daimler and Benz Foundation took place on May 22, 2019. It was dedicated to care robots, not only from familiar positions but also from new perspectives. The scientific director, Prof. Dr. Oliver Bendel, invited two of the world's best-known machine ethicists, Prof. Dr. Michael Anderson and Prof. Dr. Susan L. Anderson. Together with Vincent Berenz, they had programmed a Nao robot with a set of values that determine its behavior while it assists a person in a simulated elderly care facility. A contribution to this appeared some time ago in the Proceedings of the IEEE. For the first time, they presented the results of this project to a European audience, and their one-hour presentation, followed by a twenty-minute discussion, can be considered a great moment in machine ethics. Other internationally renowned scientists, such as the Japan expert Florian Coulmas, also took part. He dealt with artefacts from Japan and relativized the frequently heard assertion that the Japanese consider all things to be inspirited. Several media reported on the Berlin Colloquium, for example Neues Deutschland.
The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning announced the establishment of a new annual award of one million dollars for societal benefits of AI. According to a press release of the AAAI, the award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. "This new international award will recognize significant contributions in the field of artificial intelligence with profound societal impact that have generated otherwise unattainable value for humanity. The award nomination and selection process will be designed by a committee led by AAAI that will include representatives from international organizations with relevant expertise that will be designated by Squirrel AI Learning." (AAAI Press Release, 28 May 2019) The AAAI Spring Symposia have repeatedly devoted themselves to social good, also from the perspective of machine ethics. Further information via aaai.org/Pressroom/Releases//release-19-0528.php.
Machine ethics produces moral and immoral machines. The morality is usually fixed, e.g., by programmed meta-rules and rules. The machine is thus capable of certain actions and not of others. However, another approach is the morality menu (MOME for short). With this, the owner or user transfers his or her own morality onto the machine. The machine then behaves, down to the details, as he or she would behave. Together with his teams, Prof. Dr. Oliver Bendel developed several artifacts of machine ethics at his university from 2013 to 2018. For one of them, he designed a morality menu that has not yet been implemented. Another concept exists for a virtual assistant that can make reservations and orders for its owner more or less independently. In the article "The Morality Menu" the author introduces the idea of the morality menu in the context of two concrete machines. Then he discusses advantages and disadvantages and presents possibilities for improvement. A morality menu can be a valuable extension for certain moral machines. You can download the article here.
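The basic mechanism of such a morality menu can be sketched in a few lines of code. The following is a minimal, purely illustrative sketch – the rule names, the `MoralityMenu` class, and the decision logic are assumptions for the sake of the example, not taken from Bendel's actual artifacts: the owner toggles behavioral rules, and the machine consults these settings instead of a fixed, built-in morality.

```python
# Hypothetical sketch of a morality menu (MOME): the owner switches
# behavioral rules on or off, and the machine defers to these settings.

from dataclasses import dataclass, field


@dataclass
class MoralityMenu:
    # Each entry maps a rule name to the owner's on/off preference.
    rules: dict = field(default_factory=lambda: {
        "spare_animals": True,            # e.g. a vacuum robot evades insects
        "disclose_machine_status": True,  # the agent admits it is a machine
    })

    def set_rule(self, name: str, enabled: bool) -> None:
        if name not in self.rules:
            raise KeyError(f"Unknown rule: {name}")
        self.rules[name] = enabled


class Machine:
    def __init__(self, menu: MoralityMenu):
        self.menu = menu

    def decide(self, situation: str) -> str:
        # The machine acts as its owner would: it looks up the owner's
        # setting for the relevant rule instead of applying a fixed morality.
        if situation == "insect_detected":
            return "evade" if self.menu.rules["spare_animals"] else "continue"
        if situation == "asked_if_human":
            return ("admit_machine"
                    if self.menu.rules["disclose_machine_status"]
                    else "evade_question")
        return "default_action"


menu = MoralityMenu()
robot = Machine(menu)
print(robot.decide("insect_detected"))   # owner chose to spare animals: "evade"
menu.set_rule("spare_animals", False)
print(robot.decide("insect_detected"))   # owner changed the setting: "continue"
```

The point of the design is that the same machine produces different behavior for different owners: the morality is not hard-coded but read from the owner's menu at decision time.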
At the end of 2018, the article entitled "Learning How to Behave: Moral Competence for Social Robots" by Bertram F. Malle and Matthias Scheutz was published in the "Handbuch Maschinenethik" ("Handbook Machine Ethics") (ed.: Oliver Bendel). An excerpt from the abstract: "We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot's moral competence." The authors propose "that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)". "A robot's computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others' violations and explains one's own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots." (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.
Robots have no rights from a philosophical and ethical point of view and cannot currently acquire any. An entity can only have such rights if it can feel or suffer, if it has a consciousness or a will to live. Accordingly, animals can have certain rights; stones cannot. Only human beings have human rights. Certain animals, such as chimpanzees or gorillas, can be granted basic rights. But granting these animals human rights makes no sense: they are not human beings. If one day robots can feel or suffer, if they have a consciousness or a will to live, they must be granted rights. However, Oliver Bendel does not see any way to get there at the moment. According to him, one could at best develop "reverse cyborgs", i.e., let brain and nerve cells grow on technical structures (or in a robot). Such reverse or inverted cyborgs might at some point feel something. The newspaper Daily Star dealt with this topic on 28 December 2018. The article can be accessed via www.dailystar.co.uk/news/latest-news/748890/robots-ai-human-rights-legal-status-eu-proposal.
Fig.: A human brain could be part of a reverse cyborg
In 2018, Paladyn Journal of Behavioral Robotics published several articles on robot and machine ethics. In a message to the authors, the editors noted: "Our special attention in recent months has been paid to ethical and moral issues that seem to be of daily debate of researchers from different disciplines." The current issue "Roboethics" includes the articles "Towards animal-friendly machines" by Oliver Bendel, "Liability for autonomous and artificially intelligent robots" by Woodrow Barfield, "Corporantia: Is moral consciousness above individual brains/robots?" by Christopher Charles Santos-Lang, "The soldier's tolerance for autonomous systems" by Jai Galliott, and "GenEth: a general ethical dilemma analyzer" by Michael Anderson and Susan Leigh Anderson. The following articles will be published in December 2019: "Autonomy in surgical robots and its meaningful human control" by Fanny Ficuciello, Guglielmo Tamburrini, Alberto Arezzo, Luigi Villani, and Bruno Siciliano, and "AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing" by Bettina Berendt. More information via www.degruyter.com/page/1498.