Canada Research Chair Jason Millar is an engineer and philosopher who studies social and ethical issues related to new innovations in technology. Below is an overview and discussion of some of his most recent work.

Following its product launch in 2013, Google Glass saw two years of poor sales and heavy criticism before being officially shelved in 2015. Alongside other social and ethical considerations, critics were concerned about personal privacy; most notably, Google Glass gave users the ability to seamlessly record private conversations and interactions with others, as well as the ability to employ facial recognition software.

In 1979, St. George’s Hospital Medical School designed a computer program to automate the screening of medical school applicants. By 1988, St. George’s had been found guilty of racial and gender discrimination in its admissions process: the program had been built on historical data, sourced from a time when the school openly discriminated against certain groups of applicants, and so had inadvertently been designed to reiterate discriminatory human biases.

March 2018 marked one of the most high-profile fatal accidents involving an autonomous vehicle to date, when a self-driving Uber test vehicle struck and killed a pedestrian in Tempe, Arizona. The US National Transportation Safety Board (NTSB) determined in November 2019 that the collision had resulted from a series of decisions by Uber ATG, an organization whose safety culture the NTSB found inadequate and which had failed to make clear the abilities and limitations of its vehicles. Federal regulators have since been called upon to establish a formal review process before allowing companies to test automated vehicles on public roads.

In each of the cases above, individuals responsible for the design and deployment of new, innovative technologies failed to consider the full spectrum of social and ethical implications including, but not limited to, justice, bias, fairness, interpretability, explainability, control, power, gender, privacy, discrimination, truth, and equality (Millar, 2019).

St. George’s Hospital Medical School failed to consider the ethical implications of using biased historical data in its admissions process; Uber ATG failed to establish clear lines of responsibility and accountability before testing automated vehicles on public roads; and Google failed to consider personal privacy in designing Google Glass.

With a background in both engineering and ethics, Canada Research Chair Jason Millar is uniquely positioned to perform cutting-edge research in this area. Studying the various ways designers and engineers tend to overlook the ethical and social dimensions of their work, Millar has found ethical and social analysis to be crucial to realizing the benefits of new innovations like machine learning algorithms, driverless cars, and robots.

Baked into the practice of engineering is an in-depth understanding of the various ways materials and mechanical systems fail: corrosion, erosion, fatigue, and overload, to name a few. In engineering, these breakdowns are referred to as failure modes, generally classified as either material or mechanical in nature. From this body of knowledge, engineers have developed an extensive set of tools, codes, standards, risk assessments, and other best practices aimed at preventing material and mechanical failures in engineering and design.
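To make the failure mode idea concrete, here is a minimal sketch of a conventional failure mode and effects analysis (FMEA) worksheet, a standard engineering risk tool. The risk priority number (RPN) convention shown is general FMEA practice; the specific modes and scores are illustrative, not drawn from Millar's work.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row in a failure mode and effects analysis (FMEA) worksheet."""
    name: str        # e.g. "fatigue", "corrosion"
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (nearly undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: higher means address first."""
        return self.severity * self.occurrence * self.detection

# Illustrative scores only.
modes = [
    FailureMode("fatigue", severity=8, occurrence=5, detection=6),
    FailureMode("corrosion", severity=6, occurrence=7, detection=3),
    FailureMode("overload", severity=9, occurrence=2, detection=4),
]

# Rank failure modes so mitigation effort goes to the riskiest first.
for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{mode.name:10s} RPN={mode.rpn}")
```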

Alarmingly, Millar has found existing approaches to ethical analysis to be out of step with these new and emerging risks. Unlike with material and mechanical failure, there are no universally accepted tools, codes, standards, or risk assessments aimed at preventing social and ethical problems related to AI, automation, and autonomous robots (though there have been ample efforts to establish a common set of high-level ethical principles to guide decision making around autonomous and intelligent systems). In response, Millar has developed a thoughtful set of tools and techniques for engineers and designers to incorporate into their daily practice, three of which are explained below.

  1. Social Failure Mode Analysis

At the core of his research, Millar argues that in addition to failing materially or mechanically, new technologies may also fail socially: social failure occurs when an artefact’s design conflicts with the accepted social norms of its users or environment to the extent that its intended use is prevented or diminished (Millar, 2019). In other words, products and tools may be designed in such a way that they transgress fundamental social norms and ethical expectations, ultimately causing their benefits to go unrealized. In line with this argument, Millar has begun compiling a list of common social failure modes for engineers and designers to draw on when creating tools, codes, standards, and risk assessments.
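Millar's full list is still being compiled, but a hypothetical sketch can suggest how social failure modes might slot into the same worksheet discipline engineers already apply to material and mechanical failure. The mode names below are inferred from the three cases that open this article; they are not Millar's published taxonomy.

```python
# Hypothetical social failure mode checklist, in the style of an FMEA
# worksheet. Mode names and descriptions are illustrative only.
SOCIAL_FAILURE_MODES = {
    "privacy transgression": "design enables recording or surveillance that "
                             "violates users' or bystanders' privacy norms",
    "bias reiteration": "system built on historical data reproduces past "
                        "discrimination",
    "accountability gap": "no clear line of responsibility when the system "
                          "causes harm",
}

def review_checklist(artefact: str) -> None:
    """Print the social failure modes a design review should assess."""
    print(f"Social failure mode review for: {artefact}")
    for mode, description in SOCIAL_FAILURE_MODES.items():
        print(f"  [ ] {mode}: {description}")

review_checklist("wearable camera with facial recognition")
```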

  2. Value Maps and Worksheets

In hopes of establishing a practical way to conduct ethical analysis in engineering and design, Millar and his team at the University of Ottawa’s Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL) are developing value maps and other worksheets for designers and engineers to use in their daily practice. These worksheets are intended to guide engineers and designers through a process Millar calls value exploration. This process first seeks to identify the full range of stakeholders involved in the development of a given technology, along with their respective values, and then explore any existing value tensions that may need to be addressed during the engineering and design process.

One common example of value tension occurs in the context of automated decision-making systems. While some stakeholders may value transparency and the ability to understand how the algorithms behind automated decision-making systems work, others may value intellectual property rights and the ability to keep valuable, proprietary information private. In this context, value maps and other kinds of worksheets may assist designers and engineers in identifying the right balance of transparency and IP protection for their products.
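CRAiEDL's worksheets themselves are not reproduced here, but a simple sketch can illustrate the underlying idea of value exploration: map each stakeholder to the values they hold, then flag pairs of values that a single design cannot fully satisfy at once. The stakeholders, values, and conflict pairs below are illustrative assumptions.

```python
# Hypothetical value map. A "tension" is flagged wherever two
# stakeholders hold values declared to conflict with each other.
CONFLICTS = {
    frozenset({"transparency", "IP protection"}),
    frozenset({"data collection", "privacy"}),
}

stakeholder_values = {
    "end users": {"transparency", "privacy"},
    "vendor": {"IP protection", "data collection"},
    "regulator": {"transparency"},
}

def find_value_tensions(value_map: dict) -> list:
    """Return (stakeholder_a, value_a, stakeholder_b, value_b) tuples
    for every conflicting value pair held by different stakeholders."""
    tensions = []
    names = list(value_map)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for va in value_map[a]:
                for vb in value_map[b]:
                    if frozenset({va, vb}) in CONFLICTS:
                        tensions.append((a, va, b, vb))
    return tensions

for a, va, b, vb in find_value_tensions(stakeholder_values):
    print(f"tension: {a} value '{va}' vs {b} value '{vb}'")
```

On the illustrative map above, the transparency vs IP protection tension described in the preceding paragraph surfaces between end users and the vendor, and again between the regulator and the vendor, signalling design work to be done before either value is silently sacrificed.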

  3. Evaluating Automated Ethical Decision Making

Other tools developed by Millar are more specific to their intended applications. For example, Millar developed a tool to evaluate automated ethical decision making in autonomous robots, such as autonomous vehicles, virtual assistants, and social robots. Millar sought a tool that was user-centred and proportional in its approach, that acknowledged and accepted the psychology of user-robot relationships, that helped designers satisfy the principles contained in the human-robot interaction (HRI) Code of Ethics, and that helped designers distinguish between acceptable and unacceptable design features (Millar, 2016). The result was a series of 12 questions for engineers, designers, and policymakers to consider when evaluating automated, ethical decision-making systems.
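Millar's 12 questions are set out in the 2016 paper and are not reproduced here; the sketch below simply illustrates how a question-driven evaluation of this kind might be run as a pass/fail checklist. The three questions shown are hypothetical placeholders, not Millar's.

```python
# Hypothetical placeholder questions illustrating a question-driven
# review; the actual 12 questions appear in Millar (2016).
EVALUATION_QUESTIONS = [
    "Is the user informed that the system makes ethical decisions on their behalf?",
    "Can the user inspect or override the decision in proportion to its stakes?",
    "Is responsibility for a bad outcome clearly assigned in advance?",
]

def evaluate(answers: list) -> str:
    """All questions must be answered 'yes' for the feature to pass review."""
    assert len(answers) == len(EVALUATION_QUESTIONS)
    failed = [q for q, ok in zip(EVALUATION_QUESTIONS, answers) if not ok]
    if not failed:
        return "acceptable: all evaluation questions satisfied"
    return "unacceptable:\n" + "\n".join(f"  - {q}" for q in failed)

print(evaluate([True, True, False]))
```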

The Government of Canada developed its own Algorithmic Impact Assessment tool in 2019: a questionnaire designed to help public service employees assess and mitigate the risks associated with deploying an automated decision system. Notably, Canada was the first country in the world to develop this kind of procedure.
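As a rough illustration of how such an assessment can work, the sketch below maps weighted yes/no answers to an impact level. The questions, weights, and thresholds are illustrative assumptions, not the Government of Canada's actual scoring scheme.

```python
# Illustrative impact questionnaire: positive weights add risk,
# negative weights model mitigations. Not the official tool.
QUESTIONS = [
    ("Does the system make decisions about individuals?", 3),
    ("Is the decision difficult for a person to appeal or reverse?", 3),
    ("Does the system use personal or sensitive data?", 2),
    ("Is the system's logic explainable to affected people?", -2),  # mitigation
]

def impact_level(answers: list) -> int:
    """Map yes/no answers to an impact level from 1 (low) to 4 (very high)."""
    score = sum(w for (_, w), yes in zip(QUESTIONS, answers) if yes)
    max_score = sum(w for _, w in QUESTIONS if w > 0)
    ratio = max(score, 0) / max_score
    if ratio <= 0.25:
        return 1
    if ratio <= 0.50:
        return 2
    if ratio <= 0.75:
        return 3
    return 4

print(impact_level([True, True, True, False]))  # all risks, no mitigation -> 4
print(impact_level([True, True, True, True]))   # explainability lowers it -> 3
```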

As new technologies and new applications for existing technologies emerge over the coming years, it will be vital to continue developing and refining practical tools for ethical and social analysis in engineering and design.

Mairead Matthews is a Research and Policy Analyst at the Information and Communications Technology Council of Canada (ICTC), a national centre of expertise on the digital economy. With ICTC, Mairead brings her longstanding interest in Canadian policy to the conversation on technology and 21st century regulatory challenges. Mairead’s areas of interest include internet policy, data governance, and the social and ethical impacts of emerging tech.

Works Cited

Government of Canada. Algorithmic Impact Assessment. 2019. https://open.canada.ca/data/en/dataset/748a97fb-6714-41ef-9fb8-637a0b8e0da1

Levin, Sam and Wong, Julia Carrie. Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. March 2018. The Guardian. https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

Lowry, Stella and Macpherson, Gordon. A blot on the profession. 1988. British Medical Journal.

Millar, Jason. An Ethics Evaluation Tool for Automating Ethical Decision-Making in Robots and Self-Driving Cars. 2016. Applied Artificial Intelligence, Vol. 30, Issue 8, pp. 787-809.

Millar, Jason. Biography. University of Ottawa. https://droitcivil.uottawa.ca/en/jason-millar

Millar, Jason. Data is people! Ethics Capacity-Building to Overcome Data-Agnosticism in AI. 2019. University of Ottawa. https://commonlaw.uottawa.ca/health-law/sites/commonlaw.uottawa.ca.health-law/files/millar_5.pdf

Millar, Jason. Social Failure Modes in Technology – Implications for AI. March 2019. Centre for Ethics. https://www.youtube.com/watch?v=xpYrsghqkXs

National Transportation Safety Board Office of Public Affairs. ‘Inadequate Safety Culture’ Contributed to Uber Automated Test Vehicle Crash - NTSB Calls for Federal Review Process for Automated Vehicle Testing on Public Roads. November 2019. US National Transportation Safety Board. https://www.ntsb.gov/news/press-releases/Pages/NR20191119c.aspx

Naughton, John. The rebirth of Google Glass shows the merit of failure. July 2017. The Guardian. https://www.theguardian.com/commentisfree/2017/jul/23/the-return-of-google-glass-surprising-merit-in-failure-enterprise-edition