These blogs and videos are intended for everyone interested in the work of the SSIT SC and the ethical and social implications of technologies. All blog posts and videos represent the opinions of the authors and do not necessarily reflect the official policy or position of the IEEE, its societies and affiliates, or the authors’ institutions. Submissions are welcome.
Why is SSIT involved in Standards Making?
Submitted by Dr. Beth-Anne Schuelke-Leech and Dr. Sara Jordan
SSIT Standards Committee
Published originally in the IEEE SSIT Newsletter, July 2019, https://newsletters.ieee.org/society/SIT/2019/July/ES_TA_1033_2019_07_SSIT_Newsletter_Email%20%281%29.html
Why should ethical and societal concerns be of interest to engineers and IEEE members?
April 17, 2019
By Beth-Anne Schuelke-Leech
For most engineers, exposure to the ethical and societal impacts of technologies occurs during a mandatory undergraduate course on the topic. After 1986, such courses often included the case of Roger Boisjoly and the Challenger Space Shuttle disaster.1 Boisjoly believed that there was a danger in launching the space shuttle at low temperatures. However, he was overruled and the Challenger launched, resulting in the loss of the shuttle and the seven astronauts on board. The case of Boisjoly and the Challenger offers a convenient opportunity to look back on a decision and analyze its ethical implications.
With a few notable exceptions, ethical and societal impacts have never been the focus of engineering work. We agree to professional and organizational codes of conduct that espouse protection of public safety and compliance with applicable laws and regulations. There is no question that these are essential components of our training, but they often seem tangential to our core activities, which rarely present us with the kind of stark safety choice that Boisjoly faced.
Many of the technologies currently under development and emerging in the marketplace have the potential to be disruptive to society. Artificial intelligence, machine learning, automation, and robotics have the potential to displace humans, leaving some workers struggling to find meaningful employment while freeing up others to focus on value-added activities. Autonomous vehicles have the potential to transform transportation, making it safer by reducing the number of traffic fatalities and injuries, but also eliminating the need for human truck drivers. Biometrics, biotechnologies, personalized medicine, and genetic engineering have the potential to improve and individualize healthcare, but may also result in manipulation of embryos and human life. Smart cities, ubiquitous connectivity, and the Internet of Things may improve the provision and convenience of services for citizens and customers, but perhaps at the cost of privacy, diversity, and democracy.
However, it is not just computer technologies that can be disruptive. Biotechnology and materials science are other rapidly changing fields with the potential for significant impacts and benefits. Innovations and new technologies have been changing individuals and society for generations. What is different now is that many new technologies are less transparent and have the potential to create larger changes than in the past. Consider the recent case of the Boeing 737 Max 8: engineers used software to compensate for hardware changes, assuming that pilots would not notice the difference between the aircraft's original behavior and the newer, software-simulated one. When a sensor malfunctioned, pilots could not get the plane to respond as they expected.2 Another example is the Volkswagen diesel emissions software, designed to detect a regulatory emissions test and engage an emissions reduction system that otherwise did not operate.3 The engineers at VW clearly understood that what they were doing was illegal, but it is unclear whether they understood how it was unethical, grasped the societal impacts of their decision (i.e., increased emissions and associated health problems), or acknowledged that these impacts were their responsibility. Likewise, Elizabeth Holmes of Theranos created a literal black box that was supposed to take a small drop of blood and run thousands of diagnostic tests from the convenience of the customer's home.4 The box never worked. It remains an open question whether this was fraud or simply an overly ambitious entrepreneur who never quite got the technology to work.
Were these engineers and developers unethical? The question is not as easy as it first appears. Ethics is a function of both the individual and the group to which the individual belongs. Evaluating the ethical and societal impacts of one's work and of the technologies being developed is not an easy task. Engineers and developers are often so focused on the tasks they have been given that it is difficult to contemplate the longer-term impacts their work may have.
The recently released IEEE Ethically Aligned Design,5 published as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, provides an excellent overview of the challenges and issues of incorporating ethical and societal impacts into the development of AI and autonomous systems. The first paragraph establishes the importance of the work:
As the use and impact of autonomous and intelligent systems (A/IS) becomes pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles. These systems must be developed and should operate in a way that is beneficial to people and the environment, beyond simply reaching functional goals and addressing technical problems. This approach will foster the heightened level of trust between people and technology that is needed for its fruitful use in our daily lives. (p. 2)
Technologies bring both opportunities and challenges. Our job is to figure out how to develop and deploy technologies in ways that truly benefit people and reflect our values and aspirations. The SSIT Standards Committee was created to help incorporate ethical and societal concerns into standards, and to develop and oversee standards that address specific ethical and societal issues. Our goal is to build bridges between technical experts and all stakeholders who see the value, opportunities, challenges, and issues of these technologies. We all have a role to play in the successful, prosperous, sustainable, and ethical future of our society.
Notes
1. Boisjoly, Roger M. (1987), “Ethical Decisions: Morton Thiokol and the Space Shuttle Challenger Disaster,” paper presented at the American Society of Mechanical Engineers Winter Annual Meeting, Boston, Massachusetts.
2. Kaste, Martin (2019), “After Boeing Crashes, New Attention On The Potential Flaws Of Software,” NPR. Retrieved from https://www.npr.org/2019/03/24/705966447/software-is-everywhere-but-its-not-always-an-upgrade on April 17, 2019.
3. Ewing, Jack (2017), Faster, Higher, Farther: The Volkswagen Scandal, New York, NY: W.W. Norton & Co.
4. Carreyrou, John (2018), Bad Blood: Secrets and Lies in a Silicon Valley Startup, New York, NY: Knopf.
5. Available at https://ethicsinaction.ieee.org/