Two Talks on GenAI

On March 26, 2024, Oliver Bendel (School of Business FHNW) gave two talks on generative AI at Stanford University. The setting was the AAAI Spring Symposia, more precisely the symposium „Impact of GenAI on Social and Individual Well-being (AAAI2024-GenAI)“. One presentation was based on the paper „How Can Generative AI Enhance the Well-being of the Blind?“ by Oliver Bendel himself. It was about the GPT-4-based feature Be My AI in the Be My Eyes app. The other presentation was based on the paper „How Can GenAI Foster Well-being in Self-regulated Learning?“ by Stefanie Hauske (ZHAW) and Oliver Bendel. The topic was GPTs used for self-regulated learning. Both talks were received with great interest by the audience. All papers of the AAAI Spring Symposia will be published in spring. The proceedings are edited by the Association for the Advancement of Artificial Intelligence itself.

Fig.: At Stanford University

Robots in Bars, Cafés, and Restaurants

As part of the AAAI 2023 Spring Symposia in San Francisco, the symposium „Socially Responsible AI for Well-being“ is organized by Takashi Kido (Teikyo University, Japan) and Keiki Takadama (The University of Electro-Communications, Japan). The paper „How Can Bar Robots Enhance the Well-being of Guests?“ by Oliver Bendel and Lea K. Peier was accepted. The talk will take place between March 27 and 29, 2023 at the Hyatt Regency, San Francisco Airport. The symposium website states: „For our happiness, AI is not enough to be productive in exponential growth or economic/financial supremacies but should be socially responsible from the viewpoint of fairness, transparency, accountability, reliability, safety, privacy, and security. For example, AI diagnosis system should provide responsible results (e.g., a high-accuracy of diagnostics result with an understandable explanation) but the results should be socially accepted (e.g., data for AI (machine learning) should not be biased (i.e., the amount of data for learning should be equal among races and/or locations). Like this example, a decision of AI affects our well-being, which suggests the importance of discussing ‚What is socially responsible?‘ in several potential situations of well-being in the coming AI age.“ (Website AAAI) According to the organizers, the first perspective is „(Individually) Responsible AI“, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to design Responsible AI for well-being. The second perspective is „Socially Responsible AI“, which aims to clarify what kinds of mechanisms or issues should be taken into consideration to implement social aspects in Responsible AI for well-being. More information via www.aaai.org/Symposia/Spring/sss23.php#ss09.

Fig.: Humans and robots can bring water

AAAI Spring Symposia at Hyatt Regency, SFO Airport

The Association for the Advancement of Artificial Intelligence (AAAI) is pleased to present the AAAI 2023 Spring Symposia, to be held at the Hyatt Regency, San Francisco Airport, California, March 27–29. According to the organizers, Stanford University cannot act as host this time because of insufficient staff. Symposia of particular interest from a philosophical point of view are „AI Climate Tipping-Point Discovery“, „AI Trustworthiness Assessment“, „Computational Approaches to Scientific Discovery“, „Evaluation and Design of Generalist Systems (EDGeS): Challenges and methods for assessing the new generation of AI“, and „Socially Responsible AI for Well-being“. According to AAAI, symposia generally range from 40 to 75 participants each. „Participation will be open to active participants as well as other interested individuals on a first-come, first-served basis.“ (Website AAAI) Over the past decade, the conference has become one of the most important venues in the world for discussions on robot ethics, machine ethics, and AI ethics. From 2024, it will again be held at Stanford University’s History Corner. Further information via www.aaai.org/Symposia/Spring/sss23.php.

Fig.: The conference will be held in California

Programming Machine Ethics

The book „Programming Machine Ethics“ (2016) by Luís Moniz Pereira and Ari Saptawijaya is available for free download from Z-Library. Luís Moniz Pereira is among the best-known machine ethicists. „This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative readings paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail.“ (Information by Springer) The download link is eu1lib.vip/book/2677910/9fd009.

Fig.: Programming machine ethics

Conversational Agent as Trustworthy Autonomous System

On February 18, 2022, the Dagstuhl Report „Conversational Agent as Trustworthy Autonomous System (Trust-CA)“ was published. Editors are Effie Lai-Chong Law, Asbjørn Følstad, Jonathan Grudin, and Björn Schuller. From the abstract: „This report documents the program and the outcomes of Dagstuhl Seminar 21381 ‚Conversational Agent as Trustworthy Autonomous System (Trust-CA)‘. First, we present the abstracts of the talks delivered by the Seminar’s attendees. Then we report on the origin and process of our six breakout (working) groups. For each group, we describe its contributors, goals and key questions, key insights, and future research. The themes of the groups were derived from a pre-Seminar survey, which also led to a list of suggested readings for the topic of trust in conversational agents. The list is included in this report for references.“ (Abstract Dagstuhl Report) The seminar, attended by scientists and experts from around the world, was held at Schloss Dagstuhl from September 19 to 24, 2021. The report can be downloaded via drops.dagstuhl.de/opus/volltexte/2022/15770/.

Fig.: A conversational agent on the smartphone

Ethics of Conversational User Interfaces

The Ethics of Conversational User Interfaces workshop at the ACM CHI 2022 conference „will consolidate ethics-related research of the past and set the agenda for future CUI research on ethics going forward“. „This builds on previous CUI workshops exploring theories and methods, grand challenges and future design perspectives, and collaborative interactions.“ (CfP CUI) From the Call for Papers: „In what ways can we advance our research on conversational user interfaces (CUIs) by including considerations on ethics? As CUIs, like Amazon Alexa or chatbots, become commonplace, discussions on how they can be designed in an ethical manner or how they change our views on ethics of technology should be topics we engage with as a community.“ (CfP CUI) Paper submission deadline is 24 February 2022. The workshop is scheduled to take place in New Orleans on 21 April 2022. More information is available via www.conversationaluserinterfaces.org/workshops/CHI2022/.

Fig.: Machine ethics also develops conversational agents

Sex Robots are Not Just for Sex

„Anthropomorphic love dolls – the successors of basic blowup dolls – are widely used these days, both in brothels and at home. While they can offer physical comfort and sexual satisfaction, they certainly cannot engage in more complex interactions with their counterparts. However, sex robots can – or at least they ought to. For now, the offer is not extensive and prices are high. The motor abilities of current models are limited and mainly focus on the head, while the body is usually identical to that of a love doll. Obviously, sex robots are made primarily to have sex with. But the user can also talk to and even form a relationship with them. This in mind, we are now starting to think about other applications of humanoid sex robots in the future – at least when their motor skills have improved. The possibilities might surprise you …“ (De Gruyter Conversations, 23 April 2021) The full article is available via blog.degruyter.com/what-we-can-do-with-sex-robots-besides-the-obvious/.

Fig.: Sex robots are not just for sex

Love Dolls and Sex Robots

On 24 October 2020 the article „Love Dolls and Sex Robots in Unproven and Unexplored Fields of Application“ by Oliver Bendel was published in Paladyn, Journal of Behavioral Robotics. From the Abstract: „Love dolls, the successors of blow-up dolls, are widespread. They can be ordered online or bought in sex shops and can be found in brothels and households. Sex robots are also on the rise. Research, however, has been slow to address this topic thoroughly. Often, it does not differentiate between users and areas of application, remaining vague, especially in the humanities and social sciences. The present contribution deals with the idea and history of love dolls and sex robots. Against this background, it identifies areas of application that have not been investigated or have hardly been investigated at all. These include prisons, the military, monasteries and seminaries, science, art and design as well as the gamer scene. There is, at least, some relevant research about the application of these artefacts in nursing and retirement homes and as such, these will be given priority. The use of love dolls and sex robots in all these fields is outlined, special features are discussed, and initial ethical, legal and pragmatic considerations are made. It becomes clear that artificial love servants can create added value, but that their use must be carefully considered and prepared. In some cases, their use may even be counterproductive.“ The article is available here for free as an open access publication.

Fig.: Love dolls and sex robots

Robot Enhancement

Social robots and service robots usually have a defined locomotor system, a defined appearance, and defined mimic and gestural abilities. This leads, on the one hand, to a certain familiarization effect. On the other hand, it limits what the robots can do, for example in the household or in a shopping mall. Robot enhancement is used to extend and improve social robots and service robots. It changes their appearance and expands their scope. It is possible to apply attachments to the hardware, extend limbs, and exchange components. One can pull a skin made of silicone over the face or head, making the robots look humanoid. One can also change the software and connect the robot to AI systems – this is already being done in many cases. The project or thesis, announced by Oliver Bendel in August 2020 at the School of Business FHNW, should first present the principles and functional possibilities of robot enhancement. Second, concrete examples should be given and described. One of these examples, e.g., the silicone skin, is to be implemented. Robots like Pepper or Atlas would be completely changed by such a skin. They could look uncanny, but also appealing. The project will start in September 2020.
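To illustrate the software side of such an enhancement, here is a minimal, purely hypothetical sketch in Python: what the robot hears is forwarded to an external AI system, and the robot speaks the answer. The names RemoteLanguageModel, RobotSpeech, and the example endpoint are placeholders, not part of any vendor SDK; a real integration would use the robot's own API (for Pepper, for instance, the NAOqi framework) and a concrete model service.

# Hypothetical sketch: routing a robot's dialogue through an external AI system.
# RemoteLanguageModel and RobotSpeech are placeholders, not a real vendor SDK.

from dataclasses import dataclass


@dataclass
class RemoteLanguageModel:
    """Placeholder for an external AI service the robot is connected to."""
    endpoint: str  # assumed URL of the language model service

    def reply(self, utterance: str) -> str:
        # In a real system, this would be an HTTP request to the model endpoint.
        return f"(response from {self.endpoint} to: {utterance!r})"


class RobotSpeech:
    """Placeholder for the robot's text-to-speech interface."""

    def say(self, text: str) -> None:
        # A real robot would synthesize speech here instead of printing.
        print(f"[robot says] {text}")


def enhanced_dialogue(utterance: str, model: RemoteLanguageModel, speech: RobotSpeech) -> None:
    """Forward what the robot heard to the AI system and speak the answer."""
    speech.say(model.reply(utterance))


if __name__ == "__main__":
    enhanced_dialogue(
        "Where do I find the dairy products?",
        RemoteLanguageModel(endpoint="https://example.org/llm"),
        RobotSpeech(),
    )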

Fig.: Robot Enhancement

From Cobots to Care Robots

The paper „Co-Robots as Care Robots“ by Oliver Bendel, Alina Gasser and Joel Siebenmann was accepted at the AAAI 2020 Spring Symposia. From the abstract: „Cooperation and collaboration robots, co-robots or cobots for short, are an integral part of factories. For example, they work closely with the fitters in the automotive sector, and everyone does what they do best. However, the novel robots are not only relevant in production and logistics, but also in the service sector, especially where proximity between them and the users is desired or unavoidable. For decades, individual solutions of a very different kind have been developed in care. Now experts are increasingly relying on co-robots and teaching them the special tasks that are involved in care or therapy. This article presents the advantages, but also the disadvantages of co-robots in care and support, and provides information with regard to human-robot interaction and communication. The article is based on a model that has already been tested in various nursing and retirement homes, namely Lio from F&P Robotics, and uses results from accompanying studies. The authors can show that co-robots are ideal for care and support in many ways. Of course, it is also important to consider a few points in order to guarantee functionality and acceptance.“ The paper had been submitted to the symposium „Applied AI in Healthcare: Safety, Community, and the Environment“. Oliver Bendel will present the results at Stanford University between 23 and 25 March 2020.

Fig.: With or without a robot, the teddy bear must not be missing

Care Robots with Sexual Assistance Functions?

The paper „Care Robots with Sexual Assistance Functions“ by Oliver Bendel was accepted at the AAAI 2020 Spring Symposia. From the abstract: „Residents in retirement and nursing homes have sexual needs just like other people. However, the semi-public situation makes it difficult for them to satisfy these existential concerns. In addition, they may not be able to meet a suitable partner or find it difficult to have a relationship for mental or physical reasons. People who live or are cared for at home can also be affected by this problem. Perhaps they can host someone more easily and discreetly than the residents of a health facility, but some elderly and disabled people may be restricted in some ways. This article examines the opportunities and risks that arise with regard to care robots with sexual assistance functions. First of all, it deals with sexual well-being. Then it presents robotic systems ranging from sex robots to care robots. Finally, the focus is on care robots, with the author exploring technical and design issues. A brief ethical discussion completes the article. The result is that care robots with sexual assistance functions could be an enrichment of the everyday life of people in need of care, but that we also have to consider some technical, design and moral aspects.“ The paper had been submitted to the symposium „Applied AI in Healthcare: Safety, Community, and the Environment“. Oliver Bendel will present the paper at Stanford University between 23 and 25 March 2020.

Fig.: Should care robots have sexual assistance functions?

Basic Income and Basic Property

Automation is advancing relentlessly. Digitization has been its partner for decades. In industry, innovative robots, for example co-robots, are used. Service robots are beginning to spread in various areas. Systems of artificial intelligence perform tasks of all sorts, even creative activities. Studies on the development of the labor market reach different results. In any case, it can be said that certain jobs will disappear and many people will have to do without their familiar work. It can also be assumed that in many areas less human work will have to be performed on behalf of others (e.g., customers and employers). As possible solutions to the economic and social problems, an unconditional basic income and a robot tax are suggested. The paper „Are Robot Tax, Basic Income or Basic Property Solutions to the Social Problems of Automation?“ by Oliver Bendel presents, discusses and criticizes these approaches in the context of automation and digitization. Moreover, it develops a relatively unknown proposal, unconditional basic property, and presents its potentials as well as its risks. The lecture took place on 26 March 2019 at Stanford University (AAAI Spring Symposium „Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness“) and led to lively discussions. It was nominated for „best presentation“. The paper has now been published as a preprint and can be downloaded here.

Fig.: Basic property as a solution?

Stephen A. Schwarzman Centre to Be Established

„The British elite university Oxford has received a donation of 150 million pounds (around 168 million euros) from US billionaire Stephen A. Schwarzman. With the largest single donation in the university’s history, the ‚Stephen A. Schwarzman Centre‘ for the humanities is to be created.“ (SPON, June 19, 2019) This was reported by Spiegel on June 19, 2019. The article continues: „In the building, the faculties for … history and linguistics, philosophy, music and theology, among others, are to be brought together. Around a quarter of all Oxford students are enrolled in these subjects. In addition, a new institute for ethics in dealing with artificial intelligence is to be established there, as the university announced.“ (SPON, June 19, 2019) The focus seems to be on information ethics and robot ethics. According to Spiegel, Schwarzman himself said that universities must help to develop ethical principles for rapid technological change. The origin of the funds is a matter of debate. More information via www.spiegel.de/lebenundlernen/uni/oxford-elite-uni-erhaelt-150-millionen-pfund-spende-a-1273161.html.

Workshop for a Free and Beautiful World


Dr. Mathilde Noual (Freie Universität Berlin) and Prof. Dr. Oliver Bendel (School of Business FHNW) are organizing a workshop on the social implications of artificial intelligence (AI) and robotics. They are looking for constructive proposals of technological and conceptual utopias, of counter-cultures and counter-systems offering strategies for preserving privacy, individuality, and freedom in a technological world, for going beyond AI’s present limitations and frustrations, and for emphasising the beauty of the world and of humans’ way of accessing it (with a high degree of nuance, contextuality, subjectivity, adaptability, and actuality). The tracks are: The territory today: core limitations and prospects of AI; technologies and approaches against surveillance technologies (examples: the virtual burka; the hacked social credit system); technologies and approaches for an intact environment (examples: AI and robots for clean waters and seas; animals with weapons for self-defence); technologies and approaches for a new policy (example: AI as a president); technologies and approaches for shared knowledge and education (example: open research solutions). The workshop will take place on the 29th and 30th of June 2019 in Berlin, at the Weizenbaum Institute. The CfP is addressed exclusively to the invited persons.

Fig.: Free and beautiful

Moral Competence for Social Robots

At the end of 2018, the article entitled „Learning How to Behave: Moral Competence for Social Robots“ by Bertram F. Malle and Matthias Scheutz was published in the „Handbuch Maschinenethik“ („Handbook Machine Ethics“) (ed.: Oliver Bendel). An excerpt from the abstract: „We describe a theoretical framework and recent research on one key aspect of robot ethics: the development and implementation of a robot’s moral competence.“ The authors propose „that moral competence consists of five elements, two constituents (moral norms and moral vocabulary) and three activities (moral judgment, moral action, and moral communication)“. „A robot’s computational representations of social and moral norms is a prerequisite for all three moral activities. However, merely programming in advance the vast network of human norms is impossible, so new computational learning algorithms are needed that allow robots to acquire and update the context-specific and graded norms relevant to their domain of deployment. Moral vocabulary is needed primarily for moral communication, which expresses moral judgments of others’ violations and explains one’s own moral violations – to justify them, apologize, or declare intentions to do better. Current robots have at best rudimentary moral competence, but with improved learning and reasoning they may begin to show the kinds of capacities that humans will expect of future social robots.“ (Abstract) An overview of the contributions that have been published electronically since 2017 can be found on link.springer.com/referencework/10.1007/978-3-658-17484-2.

The Spy who Loved and Nursed Me

Robots in the health sector are important, valuable innovations and supplements. As therapy and nursing robots, they take care of us and come close to us. In addition, other service robots are widespread in nursing and retirement homes and hospitals. With the help of their sensors, all of them are able to recognize us, to examine and classify us, and to evaluate our behavior and appearance. Some of these robots will pass on our personal data to humans and machines. They invade our privacy and challenge our informational autonomy. This is a problem for institutions and people that needs to be solved. The article „The Spy who Loved and Nursed Me: Robots and AI Systems in Healthcare from the Perspective of Information Ethics“ by Oliver Bendel presents robot types in the health sector, along with their technical possibilities, including their sensors and their artificial intelligence capabilities. Against this background, moral problems are discussed, especially from the perspective of information ethics and with respect to privacy and informational autonomy. One of the results shows that such robots can improve personal autonomy, but that informational autonomy is endangered in an area where privacy is of special importance. At the end, solutions are proposed from various disciplines and perspectives. The article was published in Telepolis on December 17, 2018 and can be accessed via www.heise.de/tp/features/The-Spy-who-Loved-and-Nursed-Me-4251919.html.

Fig.: What have I got to hide?