Actionable AI Ethics #2: Civic competence, believability gaps, AI safety, defense applications of AI
What are some of the most important gaps in AI ethics implementations that are unresolved right now?
Howdy and welcome back to the newsletter - let’s talk about how you can become an Actionable AI Ethics practitioner!
What I am thinking:
Why civic competence in AI ethics is needed in 2021
To achieve Responsible AI, close the “believability gap”
What I am reading:
AI Safety, Security, and Stability Among Great Powers
“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?”
AI Ethics Tool of the Week:
If this newsletter has been forwarded to you, you can get your own copy by clicking on the following button:
Howdy!
It’s been a few weeks since we last had a chance to communicate through this newsletter. As is the case for everyone else, the pandemic caused a lot of disruptions, but I am getting back to writing at a more regular cadence, so you can expect new content from this newsletter every week now!
What I am thinking
Why civic competence in AI ethics is needed in 2021
Civic competence refers to the ability of everyday people, from all walks of life, to participate meaningfully in discussions on a particular subject. Meaningful participation here means showing up with a solution mindset and a fundamental understanding of the issues, what has been tried before, and what realistic actions we can take to move the discussion forward in a productive manner.
To achieve Responsible AI, close the “believability gap”
We’ve had an outpouring of interest in the field of AI ethics over 2019 and 2020 which has led to many people sharing insights, best practices, tips and tricks, etc. that can help us achieve Responsible AI.
But, as we head into 2021, it seems that there are still huge gaps in how AI ethics is being operationalized. Part of this stems from what I call the believability gap, which needs to be bridged before we can realize our goal of widespread adoption of these practices in a way that actually creates positive change.
Fragmentation in the field, along with the wide-ranging impacts of AI, means that we are often grappling with domains and areas where we know little about the ins and outs.
What I am reading
AI Safety, Security, and Stability Among Great Powers
The paper takes a critical view of the international relations between countries with advanced AI capabilities and recommends grounding discussions of AI capabilities, limitations, and harms in traditional avenues of transnational negotiation and policy-making. Instead of framing AI development as an arms race, it advocates cooperation to ensure a more secure future as this technology becomes more widely deployed, especially in military applications.
“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?”
This research study examines whether there is indeed an adversarial dynamic between the tech industry and the Department of Defense (DoD) and other US government agencies. It finds wide variability in how the tech industry perceives the DoD, and that willingness to work with it depends on the area of work and on prior exposure to DoD funding and projects.
AI Ethics Tool of the Week
Watch this space in the following weeks for a tool that I think is going to be particularly useful for your ML workflow. If you know of a tool that you think should be shared with the community, feel free to hit reply to this email or reach out to me through one of the ways listed here.
See you next week!
I hope you found this week’s ideas interesting! Let me know what I can do to bring you even more value; you can always hit reply to this email, and I assure you I read each and every one. See you again in a few days, and in the meantime, stay safe and healthy!
These are weekly musings on the technical applications of AI ethics. A word of caution: these are fluid explorations that will evolve over time as I discover new methodologies and ideas for making AI ethics more actionable. Hopefully you’ll find the nuggets of information here useful for moving you and your team from theory to practice.
You can support my work by buying me a coffee! It goes a long way in helping me create great content every week.
If something piques your interest, please do leave a comment and let’s engage in a discussion:
Did you find the content from this edition useful? If so, share this with your colleagues to give them a chance to engage with AI ethics in a more actionable manner!
Finally, before you go, this newsletter is a supplement to the book Actionable AI Ethics (Manning Publications) - make sure to grab a copy and learn more from the website for the book!