Actionable AI Ethics #1: Adopting an MLOps mindset for AI ethics
Why current approaches to AI ethics are failing
Welcome to the first edition of this newsletter! This week we will be talking about:
Adopting an MLOps mindset for AI Ethics
What I am reading:
Fairness Definitions Explained
Automating informality: On AI and labour in the global South
Hazard Contribution Modes of Machine Learning Components
AI Ethics Tool of the Week:
The first tool pick lands next week; see the section at the end of this edition if you’d like to suggest one.
What is Actionable AI Ethics?
A few quick words on what Actionable AI Ethics is all about:
The aim of the book is to guide everyday practitioners in applying principles and guidelines from the domain of AI ethics to the AI development and deployment lifecycle, translating high-level, abstract ideas into concrete, measurable techniques with minimal friction. The goal is to build ethical, safe, and inclusive AI systems that are aligned with the values of their organization. What makes the book unique is that it surfaces the signal from the noise surrounding the fragmented tooling and framework landscape in AI ethics.
The book is meant to serve as an ageless text that readers can return to whenever they are wrestling with the domain of AI ethics and looking for practical guidance. While some tools may fade over a long time horizon, the core principles and their practical foundations as expressed in this book will continue to serve readers long into the future, allowing them to easily integrate any new tools that arise.
You can read more on what this newsletter is going to be all about here.
To learn more about the book itself and stay up-to-date with other content, you can take a look at the book’s website.
An MLOps approach to AI ethics
Photo by Adrien Robert on Unsplash
There is a great deal of talk about “operationalizing” AI ethics.
There is also a ton out in the world about how AI ethics should be implemented.
Yet, and correct me if I am wrong, there seems to be a lack of concrete recommendations on how to actually go about putting this into practice. I think part of it is the divide between where the research is being done and where it is supposed to be put into practice.
It isn’t as simple as this (some would certainly hope that it is!):

from sklearn.linear_model import LogisticRegression
import ai_ethics  # a magical, entirely fictional module

# the code from here on is all “ethical” because we magically fixed it
Ok so give me a little bit of rope here …
If we go back to some of the early days of cybersecurity, what were some of the impediments when it came to effectively putting in place cybersecurity measures? One of them was that software developers believed that it was outside their purview to implement those best practices. It was believed that the quality assurance (QA) team and security testing team at the end would find potential vulnerabilities and flag them to be addressed. We certainly haven’t reached a place where all our software is secure (in fact, far from it!). But what we do have is a greater awareness of secure coding practices that are now firmly in the purview of those writing the code in the first place.
With the rise of DevOps, we have reached a place where a lot of the deployment concerns regarding how the software system will behave in practice are pushed upstream right to the place where code is being written.
What is DevOps?
The contraction of “Dev” and “Ops” refers to replacing siloed Development and Operations to create multidisciplinary teams that now work together with shared and efficient practices and tools. Essential DevOps practices include agile planning, continuous integration, continuous delivery, and monitoring of applications.
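To make that concrete, here is a minimal sketch of what pushing a concern upstream looks like in code. The compute_discount function and its tests are entirely hypothetical; the point is that under continuous integration, tests like these run on every commit, and a failure blocks the change before it ever reaches production.

import pytest

def compute_discount(price: float, rate: float) -> float:
    """Apply a percentage discount, rejecting nonsensical rates up front."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return price * (1.0 - rate)

def test_rejects_bad_rates():
    # the developer writing the feature encodes the failure mode here,
    # rather than leaving it for a downstream QA team to discover
    with pytest.raises(ValueError):
        compute_discount(100.0, 1.5)

def test_happy_path():
    assert compute_discount(100.0, 0.25) == 75.0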
What does MLOps have to do with this?
Slowly, with the rise of MLOps, people are also coming to terms with how we need to think about failures and potential ethical violations not after the system is deployed, but all through the lifecycle.
What is MLOps?
The term MLOps is defined as “the extension of the DevOps methodology to include Machine Learning and Data Science assets as first-class citizens within the DevOps ecology”
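To ground that definition, here is a minimal sketch, using made-up synthetic data and made-up thresholds, of what treating a model as a first-class citizen could look like: a deployment gate that checks both raw accuracy and a crude demographic parity gap before a model is promoted. This is the shape of the idea, not a reference implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic data: three features plus a binary sensitive attribute that the
# model never sees, but that the gate uses for its fairness check
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

accuracy = (preds == y_test).mean()
parity_gap = abs(preds[g_test == 0].mean() - preds[g_test == 1].mean())

# the gate itself: in a real pipeline this runs automatically on every
# retrain, exactly like any other CI check
if accuracy >= 0.75 and parity_gap <= 0.10:
    print(f"PROMOTE: accuracy={accuracy:.2f}, parity gap={parity_gap:.2f}")
else:
    print(f"BLOCK: accuracy={accuracy:.2f}, parity gap={parity_gap:.2f}")

The thresholds themselves are a policy decision, not an engineering one; the engineering contribution is that the check runs on every retrain instead of relying on someone remembering to do it.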
And yes! You will hear the word lifecycle repeated time and again. Yet beyond an abstract notion of what that lifecycle is and how models actually get developed and put into practice, there is a disconnect between this stated desire and the exhibited behaviours.
This is where I believe AI ethics is failing today.
We discount the needs of practitioners, either by not speaking with enough of them during the research phase or by kicking the proverbial can down the road, hoping that others will come along and devise clever methods to address these challenges.
I believe MLOps, as a discipline, a methodology, an ideology, a practice, however you perceive it, presents our best option for taking these notions of ethics and directly putting them into practice.
But haven’t we been warned that purely technical approaches don’t work?
MLOps is heavily steeped in engineering and this is where I want to sound the alarm.
Purely technical approaches won’t work. We need to engage the social sciences and scholars who have experience in these domains before making unfounded assumptions.
But, this also doesn’t take away from the fact that ultimately engineering is what is going to realize our goals of building ethical, safe, and inclusive AI systems. Instead of viewing the engineering mindset as problematic, I invite the wider AI ethics community to view this as a diagnostic and pragmatic realization of well-reasoned ideas that the interdisciplinary research community is producing. I borrow heavily in this mindset from the stellar work by Abebe et al.
There are some great resources on MLOps if you aren’t yet familiar with the primary ideas in the domain. What’s more, my forthcoming book (apologies for the shameless plug!) dives into the details of applying this very mindset, along with concrete examples of different tools that build ideas of privacy, fairness, auditability, and more natively into your workflows.
We’ll pick up again next week with a bit more on this idea. In the meantime, here are some things that I have been reading this past week that I think you will find interesting.
Here is what I am reading
Fairness Definitions Explained
The basic premise of the paper is that, because there are many competing definitions of fairness, the same scenario can be fair according to one definition and unfair according to another.
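To see the tension concretely, here is a tiny hand-built example (the numbers are invented purely for illustration) in which the very same predictions satisfy demographic parity while violating equal opportunity:

import numpy as np

# group A: half the individuals are qualified, and predictions match labels
y_true_a = np.array([1, 1, 0, 0])
y_pred_a = np.array([1, 1, 0, 0])

# group B: everyone is qualified, but only half are selected
y_true_b = np.array([1, 1, 1, 1])
y_pred_b = np.array([1, 1, 0, 0])

# demographic parity compares selection rates and ignores the true labels
print("selection rate A:", y_pred_a.mean())  # 0.5
print("selection rate B:", y_pred_b.mean())  # 0.5 -> "fair" by this measure

# equal opportunity compares true positive rates among the actually qualified
print("TPR A:", y_pred_a[y_true_a == 1].mean())  # 1.0
print("TPR B:", y_pred_b[y_true_b == 1].mean())  # 0.5 -> "unfair" by this one

Group B’s qualified individuals are selected at half the rate of group A’s, yet the overall selection rates match exactly; which definition you privilege changes the verdict.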
Automating informality: On AI and labour in the global South
A much-needed perspective on the specific labor impacts of AI in the Global South, this paper lays out the implications of India’s prevalent informal labor markets and their associated social hierarchies, which impose a double precarity on workers’ lives: marginalization through digital inequities layered on top of reinforced societal ones.
Hazard Contribution Modes of Machine Learning Components
This paper provides a categorization framework for assessing the safety posture of a system that consists of embedded machine learning components. It additionally ties that in with a safety assurance reasoning scheme that helps to provide justifiable and demonstrable mechanisms for proving the safety of the system.
AI Ethics Tool of the Week
Watch this space the following week for a tool that I think is going to be particularly useful for your ML workflow. If you know of a tool that you think should be shared with the community, feel free to hit reply to this email or reach out to me through one of the ways listed here.
These are weekly musings on the technical applications of AI ethics. A word of caution: these are fluid explorations as I discover new methodologies and ideas to make AI ethics more actionable, and as such they will evolve over time. Hopefully you’ll find the nuggets of information here useful to move you and your team from theory to practice.
You can support my work by buying me a coffee! It goes a long way in helping me create great content every week.
If something piques your interest, please do leave a comment and let’s engage in a discussion!
Did you find the content from this edition useful? If so, share this with your colleagues to give them a chance to engage with AI ethics in a more actionable manner!
Finally, before you go: this newsletter is a supplement to the book Actionable AI Ethics (Manning Publications). Make sure to grab a copy, and learn more at the book’s website!