Notes on the Bletchley AI Safety Conference

Here are some observations and notes I had on the Bletchley AI Safety meeting. I'm looking at this from the perspective of AI systems in information security, government and commercial benefits administration, safety-critical industrial automation, and medical systems.

Including you in government benefits and economic success, and avoiding your exclusion, is important for the country. Inclusive institutions are a vital societal-stability and national-security objective.

As security practitioners in machine learning and artificial intelligence, we will often be involved in setting, assessing, and responding to critical functions and incidents identified by artificial intelligence.

We are called both to use artificial intelligence where it can help effectively and to limit its potential harms.

fig. We shouldn't assume a techno-optimist future, but we shouldn't discount the benefits of Artificial Intelligence either. We need to actively manage the transnational risks and benefits of Artificial Intelligence. [Robert McCall, Prologue and the Promise]

What does that mean for an engineer solving problems?

fig. Effective management of the risks and benefits of AI needs to be human-scale, i.e. connected to what matters to you in your life and what can help or hinder you in pursuing your life goals.

What would I like to see in future conferences on "regulating the commons" that is our research and knowledge of AI, or any other transnational commons?

I don't think these things need to be part of the text of an agreement; they can also be part of how a commons agreement is talked about.

AI has huge potential to build inclusion and wealth broadly, so it makes sense to limit the potential harms as well. The conference shows there is international will to do this, so compliments to the countries involved.