Actionable AI Ethics #4: Large language models, data statements, upstanders in AI ethics, and more ...
Have you ever participated in a design charrette?
Welcome to this week’s edition of the newsletter!
What I am thinking:
Introduction to the ethics of the use of AI in war
Becoming an upstander in AI ethics
What I am reading:
Understanding the Capabilities, Limitations, and Societal Impacts of Large Language Models
Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science
AI Ethics Tool of the Week:
Watch this space in the following weeks for a tool that I think is going to be particularly useful for your ML workflow. If you know of a tool that you think should be shared with the community, feel free to hit reply to this email or reach out to me through one of the ways listed here.
If this newsletter has been forwarded to you, you can get your own copy by clicking on the following button:
It's been another one of those very busy weeks, which means this edition of the newsletter is a couple of days late to your inbox. Apologies for that!
I’ve been working hard on wrapping up Chapter 3 of the book and will be sending it to my editor this weekend, so I'm hoping for some positive feedback. To give you a sneak peek: the chapter includes tools that will help you establish the foundations of your ML project from day one, in a way that makes responsible AI the norm rather than the exception! These include ideas like an enhanced data dictionary, a cookie-cutter template, and another really cool idea that I have been incubating for 4 months now!
If you want to know what that new idea is, shoot me a quick reply to this email and I’d be happy to share an advance copy of a presentation that I am giving on that idea at York University soon!
In other news, my contribution titled “Making Responsible AI the norm rather than the exception” has been featured heavily in the newly published report from the National Security Commission on AI.
Finally, I am also working on the idea of using a design charrette for AI ethics and would love to hear from you if you have any cool resources or ideas you would like to share with other readers of this newsletter. Drop a comment using the button below and let’s chat!
What I am thinking
Introduction to the ethics of the use of AI in war
Advances in AI have spilled over into defense applications, and this rightly raises many ethical concerns. While there are many detailed documents available that discuss specific areas of concern in the use of AI in warfighting applications, I’d like to offer an overview of those issues here and cover some basic ideas that will help you discuss and address issues in this space in a more informed manner.
Let’s talk through the following items to build up an understanding of why the use of AI in war raises so many ethical concerns:
Quick basics
Advantages vs. costs
Current limitations of ethics principles
Key issues
Open questions
Becoming an upstander in AI ethics
Given the enormous upheaval in the field of AI ethics over the past 3 months, I think it behooves us to think a little more deeply about the role each of us can play in making a meaningful, positive impact on the world. This idea of becoming an upstander in AI ethics is particularly powerful, and I believe that in 2021 it is the right way to help create a healthier ecosystem for us all.
What I am reading
Understanding the Capabilities, Limitations, and Societal Impacts of Large Language Models
The paper provides insights and different lines of inquiry on the capabilities, limitations, and societal impacts of large-scale language models, specifically in the context of GPT-3 and other such models that might be released in the coming months and years. It also dives into questions of what constitutes intelligence and how such models can be better aligned with human needs and values. All of this is based on a workshop convened by the authors, with participants from a wide variety of backgrounds.
Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science
The paper provides a new methodological instrument that gives people using a dataset a better idea of its generalizability, the assumptions behind it, the biases it might contain, and the implications of its use in deployment. It also details some of the accompanying changes required in the field writ large for this to function effectively.
AI Ethics Tool of the Week
Watch this space in the coming weeks for a tool that I think is going to be particularly useful for your ML workflow. If you know of a tool that you think should be shared with the community, feel free to hit reply to this email or reach out to me through one of the ways listed here.
These are weekly musings on the technical applications of AI ethics. A word of caution: these are fluid explorations, written as I discover new methodologies and ideas for making AI ethics more actionable, and as such they will evolve over time. Hopefully you’ll find the nuggets of information here useful in moving you and your team from theory to practice.
You can support my work by buying me a coffee! It goes a long way in helping me create great content every week.
If something piques your interest, please do leave a comment and let’s engage in a discussion:
Did you find the content from this edition useful? If so, share this with your colleagues to give them a chance to engage with AI ethics in a more actionable manner!
Finally, before you go, this newsletter is a supplement to the book Actionable AI Ethics (Manning Publications) - make sure to grab a copy and learn more from the website for the book!