Actionable AI Ethics #3: Taking small steps to RAI, Disability and Bias in AI, Black boxes and more ...
How do we shift power dynamics in the field of responsible AI towards those who are doing meaningful work?
Welcome to this week’s edition of the newsletter!
What I am thinking:
Prudent Public-Sector Procurement of AI Products
Small steps to *actually* achieving Responsible AI
What I am reading:
Disability, Bias, and AI
Examining the Black Box: Tools for Assessing Algorithmic Systems
AI Ethics Tool of the Week:
Watch this space in the following weeks for a tool that I think is going to be particularly useful for your ML workflow. If you know of a tool that you think should be shared with the community, feel free to hit reply to this email or reach out to me through one of the ways listed here.
If this newsletter has been forwarded to you, you can get your own copy by clicking on the following button:
So the past couple of weeks have whisked by in a blur, probably emblematic of the pandemic and work and personal life blending into one continuum. Apologies for not having a chance to connect with you all last week. I’m happy to share that progress on the book is going well, and perhaps sometime early this summer you’ll all be able to get a first peek at it!
There are some critical ideas that I am working on this week and next that will be presented at a few conferences in March. Thank you to those who have been generous with their time, letting me bounce those ideas off them. One of them said they were particularly pumped after our conversation and can’t wait to see a practical demonstration in the book of one of the ideas that we discussed! That makes me hopeful and happy that what I am writing will potentially be useful to you all :)
Let me know in the comments, or by hitting reply to this email, if you would like to be kept abreast of some of that as well. I’d be happy to include mentions of those conferences, and even links to the presentations I am giving there, in case you want a sneak peek into what might end up in the book.
What I am thinking
Prudent Public-Sector Procurement of AI Products
Governments across multiple sectors are turning to AI-enabled systems to streamline services that are often labor- and time-intensive. However, procuring these systems, and the way they are deployed, carries significant implications. Because of gaps in understanding those implications and in knowing how to properly measure the risks, governments will oftentimes procure and deploy solutions that are biased and risk-heavy, with the potential to cause significant harm to the public.
Small steps to *actually* achieving Responsible AI
Responsible AI can seem overwhelming to achieve. I am with you on that. It comes with so many challenges that it is easy to get lost and feel disheartened in trying to get anything done. But, as they say, a journey of a thousand miles begins with a single step, and I believe there are some small steps we can take toward actually achieving Responsible AI in a realistic manner.
Essential to this strategy is an emphasis on starting with partial solutions. These may be imperfect to begin with, but they provide the fertile ground needed to overcome the inertia that practitioners in the field often experience, given the large number of variables (both technical and organizational) involved in putting principles into practice.
What I am reading
Disability, Bias, and AI
A seminal paper that provides the most comprehensive discussion of how people with disabilities are excluded from the design and development of AI systems. It also situates this exclusion in existing research from the field and provides concrete recommendations on how AI practitioners can do better, so that we don’t just engage in ethics washing but actually centre our processes around those with lived experiences, building systems that don’t just include but also empower people.
Examining the Black Box: Tools for Assessing Algorithmic Systems
The paper provides some much needed clarity on what assessment of algorithmic systems can look like, breaking it apart along four axes: when the assessment activities are carried out, who needs to be involved, the pieces being evaluated, and the maturity of the techniques. It also explains some key terms used in the field and identifies the gaps in the current crop of methods as they relate to the axes mentioned above.
AI Ethics Tool of the Week
Watch this space next week for a tool that I think is going to be particularly useful for your ML workflow. If you know of a tool that you think should be shared with the community, feel free to hit reply to this email or reach out to me through one of the ways listed here.
See you next week!
I hope that the ideas this week were interesting! Let me know what I can do to bring you even more value; you can always hit reply to this email, and I assure you I read each and every one. See you again in a few days. In the meantime, stay safe and healthy!
These are weekly musings on the technical applications of AI ethics. A word of caution: these are fluid explorations that will evolve over time as I discover new methodologies and ideas to make AI ethics more actionable. Hopefully you’ll find the nuggets of information here useful in moving you and your team from theory to practice.
You can support my work by buying me a coffee! It goes a long way in helping me create great content every week.
If something piques your interest, please do leave a comment and let’s engage in a discussion:
Did you find the content from this edition useful? If so, share this with your colleagues to give them a chance to engage with AI ethics in a more actionable manner!
Finally, before you go, this newsletter is a supplement to the book Actionable AI Ethics (Manning Publications) - make sure to grab a copy and learn more from the website for the book!