@article{Umbrello2023d,
title = {The Intersection of Bernard Lonergan’s Critical Realism, the Common Good, and Artificial Intelligence in Modern Religious Practices},
author = {Umbrello, Steven},
url = {https://doi.org/10.3390/rel14121536},
doi = {10.3390/rel14121536},
year = {2023},
date = {2023-12-13},
journal = {Religions},
volume = {14},
number = {1536},
issue = {12},
pages = {1–19},
abstract = {Artificial intelligence (AI) profoundly influences a number of societal structures today, including religious dynamics. Using Bernard Lonergan’s critical realism as a lens, this article investigates the intersections of AI and religious traditions in their shared pursuit of the common good. Beginning with Lonergan’s principle that humans construct their understanding through cognitive processes, we examine how AI-mediated realities align with or challenge traditional religious tenets. By delving into specific cases, we spotlight AI’s role in reshaping religious symbols, rituals, and even creating novel spiritual meanings. Using Lonergan’s insights on the balance between subjectivity and objectivity, I analyze AI’s potential to both create new sacred spaces and challenge religious orthodoxy. The crux of the discussion centers on the negotiation between religious values and technological innovation, assessing how AI can bolster religious life while maintaining its core essence. Ultimately, this article underscores the importance of the common good in the age of AI-driven religious evolution.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Umbrello2023b,
title = {Sociotechnical Infrastructures of Dominion in Stefan L. Sorgner’s We Have Always Been Cyborgs},
author = {Umbrello, Steven},
url = {http://www2.units.it/etica/2023_1/UMBRELLO.pdf},
year = {2023},
date = {2023-04-26},
journal = {Etica & Politica / Ethics & Politics},
volume = {XXV},
issue = {1},
pages = {336–351},
abstract = {In We Have Always Been Cyborgs (2021), Stefan L. Sorgner argues that, given the growing economic burden of desirable welfare programs, in order for Western democratic societies to continue to flourish it will be necessary that they establish some form of algocracy (i.e., governance by algorithm). This is argued to be necessary both in order to maintain the sustainability and efficiency of these programs, but also due to the fact that further integration of humans into technical systems provides the only effective means to bridge gaps in functionality and governance. However, Sorgner’s position is entirely insensitive to the design turn in applied ethics, which argues against the neutrality of technology, instead maintaining that technology and society co-construct each other with persistent feedback loops. This, I argue, is a problem for his account inasmuch as technologies, as they become more ubiquitous, likewise become pervasive and inextricable from our sociotechnical infrastructures. As such, less-than-beneficent forces, as current trends illustrate, can appropriate these seemingly banal infrastructures to gear them towards oppressive ends, thereby ultimately threatening the social democracies that Sorgner’s position aims to buttress.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Balistreri2023,
title = {Modifying the environment or human nature? What is the right choice for space travel and Mars colonisation?},
author = {Balistreri, Maurizio and Umbrello, Steven},
url = {https://link.springer.com/article/10.1007/s11569-023-00440-7},
doi = {10.1007/s11569-023-00440-7},
year = {2023},
date = {2023-04-22},
journal = {Nanoethics},
volume = {17},
abstract = {As space travel and intentions to colonise other planets are becoming the norm in public debate and scholarship, we must also confront the technical and survival challenges that emerge from these hostile environments. This paper aims to evaluate the various arguments proposed to meet the challenges of human space travel and extraterrestrial planetary colonisation. In particular, two primary solutions have been present in the literature as the most straightforward solutions to the rigours of extraterrestrial survival and flourishing: (1) geoengineering, where the environment is modified to become hospitable to its inhabitants, and (2) human (bio)enhancement where the genetic heritage of humans is modified to make them more resilient to the difficulties they may encounter as well as to permit them to thrive in non-terrestrial environments. Both positions have strong arguments supporting them but also severe philosophical and practical drawbacks when exposed to different circumstances. This paper aims to show that a principled stance where one position is accepted wholesale necessarily comes at the opportunity cost of the other where the other might be better suited, practically and morally. This paper concludes that case-by-case evaluations of the solutions to space travel and extraterrestrial colonisation are necessary to ensure moral congruency and the survival and flourishing of astronauts now and into the future.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Umbrello2023,
title = {Emotions and Automation in a High-Tech Workplace: A Commentary},
author = {Umbrello, Steven},
url = {https://link.springer.com/article/10.1007/s13347-023-00615-w},
doi = {10.1007/s13347-023-00615-w},
year = {2023},
date = {2023-03-02},
journal = {Philosophy & Technology},
volume = {36},
number = {12},
abstract = {In a recent article, Madelaine Ley evaluates the future of work, specifically robotised workplaces, via the lens of care ethics. Like many proponents of care ethics, Ley draws on the approach and its emphasis on relationality to understand ethical action necessary for worker wellbeing. Her paper aims to fill a research gap by shifting away from the traditional contexts in which care ethics is employed, i.e., health and care contexts and instead appropriates the approach to tackle the sociotechnicity of robotics and how caring should be integrated into non-traditional contexts. This paper comments on that of Ley’s, making the case that the author does, in fact, achieve this end while still leaving areas of potential future research open to buttressing the approach she presents.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Seskir2023,
title = {Democratization of Quantum Technologies},
author = {Seskir, Zeki C. and Umbrello, Steven and Coenen, Christopher and Vermaas, Pieter E.},
url = {https://iopscience.iop.org/article/10.1088/2058-9565/acb6ae},
doi = {10.1088/2058-9565/acb6ae},
year = {2023},
date = {2023-02-07},
journal = {Quantum Science and Technology},
volume = {8},
number = {024005},
issue = {2},
abstract = {As quantum technologies (QT) advance, their potential impact on and relation with society has been developing into an important issue for exploration. In this paper, we investigate the topic of democratization in the context of QT, particularly quantum computing. The paper contains three main sections. First, we briefly introduce different theories of democracy (participatory, representative, and deliberative) and how the concept of democratization can be formulated with respect to whether democracy is taken as an intrinsic or instrumental value. Second, we give an overview of how the concept of democratization is used in the QT field. Democratization is mainly adopted by companies working on quantum computing and used in a very narrow understanding of the concept. Third, we explore various narratives and counter-narratives concerning democratization in QT. Finally, we explore the general efforts of democratization in QT such as different forms of access, formation of grassroot communities and special interest groups, the emerging culture of manifesto writing, and how these can be located within the different theories of democracy. In conclusion, we argue that although the ongoing efforts in the democratization of QT are necessary steps towards the democratization of this set of emerging technologies, they should not be accepted as sufficient to argue that QT is a democratized field. We argue that more reflexivity and responsiveness regarding the narratives and actions adopted by the actors in the QT field and making the underlying assumptions of ongoing efforts on democratization of QT explicit, can result in a better technology for society.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{UMBRELLO2023102325,
title = {From speculation to reality: Enhancing anticipatory ethics for emerging technologies (ATE) in practice},
author = {Steven Umbrello and Michael J. Bernstein and Pieter E. Vermaas and Anaïs Resseguier and Gustavo Gonzalez and Andrea Porcari and Alexei Grinbaum and Laurynas Adomaitis},
url = {https://www.sciencedirect.com/science/article/pii/S0160791X23001306},
doi = {10.1016/j.techsoc.2023.102325},
issn = {0160-791X},
year = {2023},
date = {2023-01-01},
journal = {Technology in Society},
volume = {74},
pages = {102325},
abstract = {Various approaches have emerged over the last several decades to meet the challenges and complexities of anticipating and responding to the potential impacts of emerging technologies. Although many of the existing approaches share similarities, they each have shortfalls. This paper takes as the object of its study Anticipatory Ethics for Emerging Technologies (ATE) to technology assessment, given that it was formatted to address many of the privations characterising parallel approaches. The ATE approach, also in practice, presents certain areas for retooling, such as how it characterises levels and objects of analysis. This paper results from the work done with the TechEthos Horizon 2020 project in evaluating the ethical, legal, and social impacts of climate engineering, digital extended reality, and neurotechnologies. To meet the challenges these technology families present, this paper aims to enhance the ATE framework to encompass the variety of human processes and material forms, functions, and applications that comprise the socio-technical systems in which these technologies are embedded.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Umbrello2021h,
title = {Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles},
author = {Steven Umbrello},
url = {https://doi.org/10.1007/s12369-021-00790-w},
doi = {10.1007/s12369-021-00790-w},
year = {2022},
date = {2022-06-01},
urldate = {2022-06-01},
journal = {International Journal of Social Robotics},
volume = {14},
issue = {2},
pages = {313–322},
abstract = {One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways of autonomous systems. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper proposes the Value Sensitive Design (VSD) approach as a principled framework for incorporating these values in design. The example of autonomous vehicles is used as a case study for how VSD offers a systematic way for engineering teams to formally incorporate existing technical solutions towards ethical design, while simultaneously remaining pliable to emerging issues and needs. It is concluded that the VSD methodology offers at least a strong enough foundation from which designers can begin to anticipate design needs and formulate salient design flows that can be adapted to changing ethical landscapes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Umbrello2022b,
title = {The Role of Engineers in Harmonising Human Values for AI Systems Design},
author = {Steven Umbrello},
url = {https://doi.org/10.1016/j.jrt.2022.100031},
doi = {10.1016/j.jrt.2022.100031},
year = {2022},
date = {2022-04-12},
journal = {Journal of Responsible Technology},
volume = {10},
number = {100031},
issue = {July},
abstract = {Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Capasso2022,
title = {Responsible Nudging for Social Good: New Healthcare Skills for AI-Driven Digital Personal Assistants},
author = {Capasso, Marianna and Umbrello, Steven},
url = {https://doi.org/10.1007/s11019-021-10062-z},
doi = {10.1007/s11019-021-10062-z},
year = {2022},
date = {2022-03-01},
journal = {Medicine, Health Care and Philosophy},
volume = {25},
issue = {1},
pages = {11–22},
abstract = {Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regards to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can adopt to design these systems to avoid harming and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence. In contrast, higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills is used to illustrate this design approach. It provides an exemplar of how designers and engineers can begin to orientate their design programs of these technologies towards the social good.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Umbrello2021k,
title = {Shikake: The Japanese Art of Shaping Behavior Through Design},
author = {Steven Umbrello},
url = {https://www.igi-global.com/pdf.aspx?tid=297020&ptid=254309&ctid=17&title=review%20of%20shikake:%20the%20japanese%20art%20of%20shaping%20behavior%20through%20design&isxn=9781799862314},
year = {2021},
date = {2021-12-01},
journal = {International Journal of Art, Culture and Design Technologies},
volume = {10},
issue = {2},
pages = {57-60},
abstract = {A new book by Naohiro Matsumura is reviewed. Shikake are described as designs that open up new behavioral options to people and that positively allow them to choose those options freely. Matsumura explores the motivations, philosophy, and implementations of shikake in the real world, providing numerous examples and illustrations. This book appeals to numerous audiences, ranging from the general interest reader who wishes to understand nudging from a traditional perspective ranging through the history of Japanese design, as well as the specialist designer who wishes to employ nudging techniques in a positive and fair manner.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A new book by Naohiro Matsumura is reviewed. Shikake are described as designs that open up new behavioral options to people and that positively allow them to choose those options freely. Matsumura explores the motivations, philosophy, and implementations of shikake in the real world, providing numerous examples and illustrations. This book appeals to numerous audiences, ranging from the general-interest reader who wishes to understand nudging from a traditional perspective rooted in the history of Japanese design, to the specialist designer who wishes to employ nudging techniques in a positive and fair manner. |
| Pirni, Alberto; Balistreri, Maurizio; Capasso, Marianna; Umbrello, Steven; Merenda, Federica Robot Care Ethics - Between Autonomy and Vulnerability: Coupling Principles and Practices in Autonomous Systems for Care Journal Article In: Frontiers in Robotics and AI, vol. 8, no. 654298, 2021. @article{Pirni2021,
title = {Robot Care Ethics - Between Autonomy and Vulnerability: Coupling Principles and Practices in Autonomous Systems for Care},
author = {Alberto Pirni and Maurizio Balistreri and Marianna Capasso and Steven Umbrello and Federica Merenda},
url = {https://www.frontiersin.org/articles/10.3389/frobt.2021.654298/},
doi = {10.3389/frobt.2021.654298},
year = {2021},
date = {2021-06-08},
journal = {Frontiers in Robotics and AI},
volume = {8},
number = {654298},
abstract = {Technological developments involving robotics and artificial intelligence devices are being employed ever more in elderly care and the healthcare sector more generally, raising ethical issues and practical questions warranting closer consideration of what we mean by “care” and, subsequently, how to design such software coherently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan Tronto. In doing so, it suggests an alternative to their non-principled approach, an alternative suited to tackling some of the issues raised by Tronto and van Wynsberghe, while allowing for the inclusion of two orientative principles. Our proposal centres on the principles of autonomy and vulnerability, whose joint adoption we deem able to constitute an original revision of a bottom-up approach in care ethics. Conclusively, the ethical framework introduced here integrates more traditional approaches in care ethics in view of enhancing the debate regarding the ethical design of care robots under a new lens.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Technological developments involving robotics and artificial intelligence devices are being employed ever more in elderly care and the healthcare sector more generally, raising ethical issues and practical questions warranting closer consideration of what we mean by “care” and, subsequently, how to design such software coherently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan Tronto. In doing so, it suggests an alternative to their non-principled approach, an alternative suited to tackling some of the issues raised by Tronto and van Wynsberghe, while allowing for the inclusion of two orientative principles. Our proposal centres on the principles of autonomy and vulnerability, whose joint adoption we deem able to constitute an original revision of a bottom-up approach in care ethics. Conclusively, the ethical framework introduced here integrates more traditional approaches in care ethics in view of enhancing the debate regarding the ethical design of care robots under a new lens. |
| Umbrello, Steven; Capasso, Marianna; Pirni, Alberto; Balistreri, Maurizio; Merenda, Federica Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots Journal Article In: Minds and Machines, vol. 31, iss. 3, pp. 395–419, 2021. @article{Umbrello2021b,
title = {Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots},
author = {Steven Umbrello and Marianna Capasso and Alberto Pirni and Maurizio Balistreri and Federica Merenda},
url = {https://doi.org/10.1007/s11023-021-09561-y},
doi = {10.1007/s11023-021-09561-y},
year = {2021},
date = {2021-05-23},
urldate = {2021-05-23},
journal = {Minds and Machines},
volume = {31},
issue = {3},
pages = {395–419},
abstract = {The increasing automation and ubiquity of robotics deployed within the field of care boast promising advantages. However, challenging ethical issues also arise as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also those specific to AI as well as higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed, as are examples of specific design requirements to ameliorate those issues.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The increasing automation and ubiquity of robotics deployed within the field of care boast promising advantages. However, challenging ethical issues also arise as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also those specific to AI as well as higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed, as are examples of specific design requirements to ameliorate those issues. |
| Umbrello, Steven; Wood, Nathan Gabriel Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status Journal Article In: Information, vol. 12, no. 5, pp. 216, 2021. @article{Umbrello2021,
title = {Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status},
author = {Steven Umbrello and Nathan Gabriel Wood},
url = {https://doi.org/10.3390/info12050216},
doi = {10.3390/info12050216},
year = {2021},
date = {2021-05-20},
journal = {Information},
volume = {12},
number = {5},
pages = {216},
abstract = {Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving increasing attention in public discourse and scholarship. Much of this interest is connected with policy makers and the emerging ethical and legal problems linked to the full autonomy of weapons systems; however, there is a general lack of recognition for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and more capable than ground troops, soldiers will be at the mercy of enemy AWS and unable to defend themselves. We argue that these soldiers ought to be considered hors de combat, and not targeted. We contend that hors de combat status must be viewed contextually, with close reference to the capabilities of combatants on both sides of any engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, each individual AWS will need its own standard for when enemy soldiers are deemed hors de combat. The difficulties of achieving this with the limits of modern technology should also be acknowledged. We conclude by examining how these nuanced views of hors de combat status might impact on the “meaningful human control” of AWS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving increasing attention in public discourse and scholarship. Much of this interest is connected with policy makers and the emerging ethical and legal problems linked to the full autonomy of weapons systems; however, there is a general lack of recognition for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and more capable than ground troops, soldiers will be at the mercy of enemy AWS and unable to defend themselves. We argue that these soldiers ought to be considered hors de combat, and not targeted. We contend that hors de combat status must be viewed contextually, with close reference to the capabilities of combatants on both sides of any engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, each individual AWS will need its own standard for when enemy soldiers are deemed hors de combat. The difficulties of achieving this with the limits of modern technology should also be acknowledged. We conclude by examining how these nuanced views of hors de combat status might impact on the “meaningful human control” of AWS. |
| LaGrandeur, Kevin Are We Ready for Direct Brain Links to Machines and Each Other?: A Real-World Application of Posthuman Bioethics Journal Article In: Journal of Posthumanism, vol. 1, no. 1, pp. 87-91, 2021. @article{LaGrandeur2021,
title = {Are We Ready for Direct Brain Links to Machines and Each Other?: A Real-World Application of Posthuman Bioethics},
author = {Kevin LaGrandeur},
url = {https://doi.org/10.33182/jp.v1i1.1185},
doi = {10.33182/jp.v1i1.1185},
year = {2021},
date = {2021-05-16},
journal = {Journal of Posthumanism},
volume = {1},
number = {1},
pages = {87-91},
abstract = {Neuralink, a company founded by Elon Musk three years ago, is the most notable of several companies developing a new type of Brain-Computer Interface (BCI): a direct, two-way, digital system that is robust, compact, and wireless. BCI is already being used therapeutically to reduce seizures in severe epileptics, resolve tremors in Parkinson’s patients, and to stabilize mood disorders in psychiatric patients. But the devices used to do this are bulky and hardwired, causing difficulty of use for patients and requiring invasive surgeries and large incisions to implant them. So, researchers have been trying to make these devices more compact, easier to implant, and wireless.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Neuralink, a company founded by Elon Musk three years ago, is the most notable of several companies developing a new type of Brain-Computer Interface (BCI): a direct, two-way, digital system that is robust, compact, and wireless. BCI is already being used therapeutically to reduce seizures in severe epileptics, resolve tremors in Parkinson’s patients, and to stabilize mood disorders in psychiatric patients. But the devices used to do this are bulky and hardwired, causing difficulty of use for patients and requiring invasive surgeries and large incisions to implant them. So, researchers have been trying to make these devices more compact, easier to implant, and wireless. |
| Umbrello, Steven Coupling Levels of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach Journal Article In: Ethics and Information Technology, 2021. @article{Umbrello2021g,
title = {Coupling Levels of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach},
author = {Steven Umbrello},
url = {https://doi.org/10.1007/s10676-021-09588-w},
doi = {10.1007/s10676-021-09588-w},
year = {2021},
date = {2021-04-09},
journal = {Ethics and Information Technology},
abstract = {The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focuses on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC) termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focuses on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC) termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic. |
| Umbrello, Steven The Ecological Turn in Design: Adopting a Posthumanist Ethics to Inform Value Sensitive Design Journal Article In: Philosophies, vol. 6, no. 2, pp. 29, 2021. @article{Umbrello2021f,
title = {The Ecological Turn in Design: Adopting a Posthumanist Ethics to Inform Value Sensitive Design},
author = {Steven Umbrello},
url = {https://www.mdpi.com/2409-9287/6/2/29},
doi = {10.3390/philosophies6020029},
year = {2021},
date = {2021-04-02},
journal = {Philosophies},
volume = {6},
number = {2},
pages = {29},
abstract = {Design for Values (DfV) philosophies are a series of design approaches that aim to incorporate human values into the early phases of technological design to direct innovation into beneficial outcomes. The difficulty and necessity of directing advantageous futures for transformative technologies through the application and adoption of value-based design approaches are apparent. However, questions of whose values to design for are of critical importance. DfV philosophies typically aim to enrol the stakeholders who may be affected by the emergence of such a technology. However, regardless of which design approach is adopted, all enrolled stakeholders are human ones who propose human values. Contemporary scholarship on metahumanisms, particularly that on posthumanism, has decentred the human from its traditionally privileged position among other forms of life. Arguments that the humanist position is not (and has never been) tenable are persuasive. As such, scholarship has begun to provide a more encompassing ontology for the investigation of nonhuman values. Given the potentially transformative nature of future technologies as they relate to the earth and its many assemblages, it is clear that the value investigations of these design approaches fail to account for all relevant stakeholders (i.e., nonhuman animals). This paper has two primary objectives: (1) to argue for the cogency of a posthuman ethics in the design of technologies; and (2) to describe how existing DfV approaches can begin to envision principled and methodological ways of incorporating non-human values into design. To do this, the paper provides a rudimentary outline of what constitutes DfV approaches. It then takes up a unique design approach called Value Sensitive Design (VSD) as an illustrative example. Out of all the other DfV frameworks, VSD most clearly illustrates a principled approach to the integration of values in design.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Design for Values (DfV) philosophies are a series of design approaches that aim to incorporate human values into the early phases of technological design to direct innovation into beneficial outcomes. The difficulty and necessity of directing advantageous futures for transformative technologies through the application and adoption of value-based design approaches are apparent. However, questions of whose values to design for are of critical importance. DfV philosophies typically aim to enrol the stakeholders who may be affected by the emergence of such a technology. However, regardless of which design approach is adopted, all enrolled stakeholders are human ones who propose human values. Contemporary scholarship on metahumanisms, particularly that on posthumanism, has decentred the human from its traditionally privileged position among other forms of life. Arguments that the humanist position is not (and has never been) tenable are persuasive. As such, scholarship has begun to provide a more encompassing ontology for the investigation of nonhuman values. Given the potentially transformative nature of future technologies as they relate to the earth and its many assemblages, it is clear that the value investigations of these design approaches fail to account for all relevant stakeholders (i.e., nonhuman animals). This paper has two primary objectives: (1) to argue for the cogency of a posthuman ethics in the design of technologies; and (2) to describe how existing DfV approaches can begin to envision principled and methodological ways of incorporating non-human values into design. To do this, the paper provides a rudimentary outline of what constitutes DfV approaches. It then takes up a unique design approach called Value Sensitive Design (VSD) as an illustrative example. Out of all the other DfV frameworks, VSD most clearly illustrates a principled approach to the integration of values in design. |
| Umbrello, Steven Can humans dream of electric sheep? Journal Article In: Metascience, vol. 30, no. 2, pp. 269-271, 2021. @article{Umbrello2021e,
title = {Can humans dream of electric sheep?},
author = {Steven Umbrello},
url = {https://link.springer.com/article/10.1007/s11016-021-00629-0},
doi = {10.1007/s11016-021-00629-0},
year = {2021},
date = {2021-02-26},
journal = {Metascience},
volume = {30},
number = {2},
pages = {269-271},
abstract = {As an idea, transhumanism has received increasing attention in recent years and across numerous domains. Despite presidential candidates such as Zoltan Istvan, who ran on an explicitly Transhumanist platform in 2016 but later dropped out to endorse Hillary Clinton, transhumanism has taken root more recently in the conspiratorial imaginations of the dark web. Given the philosophy’s central emphasis on technology as an inherent good, imaginations in supposed alt-right internet circles have criticised it as an ideological gateway to global, fully automated Communism. This is not to say that such discussions on transhumanism are exclusively siloed and on the margins of society. Related discussions are happening at various well-known institutions and research centres such as the Institute for Ethics and Emerging Technologies, a non-profit think tank dedicated to techno-progressivism where I have been managing director for half a decade. What I mean to say here is that transhumanism is not monolithic. It is best described as multi-faceted and existing in different instantiations across multiple domains. James Michael MacFarlane’s recent book, Transhumanism as a New Social Movement: The Techno-Centred Imagination, is an attempt to trace the history, meaning, and practices that characterise this variegated term.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
As an idea, transhumanism has received increasing attention in recent years and across numerous domains. Despite presidential candidates such as Zoltan Istvan, who ran on an explicitly Transhumanist platform in 2016 but later dropped out to endorse Hillary Clinton, transhumanism has taken root more recently in the conspiratorial imaginations of the dark web. Given the philosophy’s central emphasis on technology as an inherent good, imaginations in supposed alt-right internet circles have criticised it as an ideological gateway to global, fully automated Communism. This is not to say that such discussions on transhumanism are exclusively siloed and on the margins of society. Related discussions are happening at various well-known institutions and research centres such as the Institute for Ethics and Emerging Technologies, a non-profit think tank dedicated to techno-progressivism where I have been managing director for half a decade. What I mean to say here is that transhumanism is not monolithic. It is best described as multi-faceted and existing in different instantiations across multiple domains. James Michael MacFarlane’s recent book, Transhumanism as a New Social Movement: The Techno-Centred Imagination, is an attempt to trace the history, meaning, and practices that characterise this variegated term. |
| Umbrello, Steven; van de Poel, Ibo Mapping value sensitive design onto AI for social good principles Journal Article In: AI and Ethics, pp. 1-14, 2021. @article{Umbrello2021d,
title = {Mapping value sensitive design onto AI for social good principles},
author = {Umbrello, Steven and van de Poel, Ibo},
url = {https://link.springer.com/article/10.1007/s43681-021-00038-3},
doi = {10.1007/s43681-021-00038-3},
year = {2021},
date = {2021-02-01},
journal = {AI and Ethics},
pages = {1-14},
abstract = {Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good, and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good, and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app. |
| Umbrello, Steven Reckoning with assessment: can we responsibly innovate? Journal Article In: Metascience, pp. 1-3, 2021. @article{Umbrello2021c,
title = {Reckoning with assessment: can we responsibly innovate?},
author = {Steven Umbrello},
url = {https://link.springer.com/article/10.1007/s11016-021-00605-8},
doi = {10.1007/s11016-021-00605-8},
year = {2021},
date = {2021-01-15},
journal = {Metascience},
pages = {1-3},
abstract = {Assessment of Responsible Innovation argues, contrary to common imagination, that the profit motive underpinning private sector decision-making about innovation neither excludes—nor is even necessarily in tension with—responsible innovation. Responsible innovation is not a clear-cut thing, principle, or clearly formulated grouping of practices. Rather, it consists in a plurality of engagements, strategies, and interactions oriented around the general goal of technological development towards socially desirable ends. The assessment of responsible innovation faces a lacuna partly due to this plurality, and partly because responsible research and innovation (RRI) has primarily been the domain of research institutions, higher education, and public sector entities—those who are not responsible for the majority of innovations. There is thus a gap between past RRI research and the actual nexus of innovation programmes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Assessment of Responsible Innovation argues, contrary to common imagination, that the profit motive underpinning private sector decision-making about innovation neither excludes—nor is even necessarily in tension with—responsible innovation. Responsible innovation is not a clear-cut thing, principle, or clearly formulated grouping of practices. Rather, it consists in a plurality of engagements, strategies, and interactions oriented around the general goal of technological development towards socially desirable ends. The assessment of responsible innovation faces a lacuna partly due to this plurality, and partly because responsible research and innovation (RRI) has primarily been the domain of research institutions, higher education, and public sector entities—those who are not responsible for the majority of innovations. There is thus a gap between past RRI research and the actual nexus of innovation programmes. |