Here are some observations and notes I had on the Bletchley AI Safety Summit. I'm looking at this from the perspective of AI systems in information security, government and commercial benefits administration, safety-critical industrial automation, and medical systems.
Including people in government benefits and economic success, and avoiding their exclusion, is important for the country. Inclusive institutions are a vital objective for societal stability and national security.
As security practitioners working in machine learning and artificial intelligence, we will often be involved in defining critical functions, assessing them, and responding to incidents identified by AI systems.
We are called both to use artificial intelligence where it can genuinely help and to limit its potential harms.
fig. We shouldn't assume a techno-optimist future, but we shouldn't discount the benefits of Artificial Intelligence either. We need to actively manage the transnational risks and benefits of Artificial Intelligence. [Robert McCall, Prologue and the Promise]
What does that mean for an engineer solving problems?
In many of the AI systems I build, AI is involved in design or the user interface, but the safety-critical elements are tested, simulated, and in some cases even subjected to formal proofs. Not every problem AI gets applied to can or will have those safeguards in place.
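As a minimal sketch of what that testing can look like, here's a hypothetical property-based check (using Python's hypothesis library) that a deterministic safety limiter wrapped around a model's output never lets a command leave its engineering-approved envelope, no matter what the model emits. The limiter, bounds, and names are illustrative assumptions, not any particular system.

```python
# Illustrative sketch only: a deterministic safety limiter around an
# untrusted model output, plus a property-based test of the invariant.
# The valve bounds and function names here are hypothetical.
from hypothesis import given, strategies as st

VALVE_MIN, VALVE_MAX = 0.0, 100.0  # hard safety envelope, set by engineering review

def clamp_valve_command(model_output: float) -> float:
    """Deterministic guard: the plant only ever sees values inside the envelope."""
    if model_output != model_output:  # reject NaN (NaN != NaN is true)
        return VALVE_MIN
    return max(VALVE_MIN, min(VALVE_MAX, model_output))

@given(st.floats(allow_nan=True, allow_infinity=True))
def test_command_always_within_envelope(model_output):
    # Property: for ANY float the model could emit, the command stays in bounds.
    command = clamp_valve_command(model_output)
    assert VALVE_MIN <= command <= VALVE_MAX
```

The point of the sketch is that the guarantee lives in a small, deterministic, exhaustively testable layer rather than in the model itself.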
Not all machine learning or artificial intelligence functions are complex or autonomous enough to be significantly harmful. The context, and a good risk assessment, matter.
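As a toy illustration of that kind of context-driven triage, here's a sketch that maps two hypothetical axes, how autonomously the AI function acts and how badly a wrong output could hurt, to an assurance level. The scales and thresholds are my own illustrative assumptions, not from any standard or from the summit.

```python
# Illustrative sketch only: a toy triage of how much assurance an AI
# function might need, based on its autonomy and the impact of a wrong
# output. All scales and thresholds are hypothetical.
from enum import IntEnum

class Autonomy(IntEnum):
    ADVISORY = 1      # a human reviews every output
    SUPERVISED = 2    # a human can intervene in time
    AUTONOMOUS = 3    # acts without review

class Impact(IntEnum):
    MINOR = 1         # inconvenience, easily reversed
    SERIOUS = 2       # financial or service harm
    CRITICAL = 3      # safety-of-life or irreversible harm

def assurance_tier(autonomy: Autonomy, impact: Impact) -> str:
    """Higher autonomy x impact calls for heavier assurance methods."""
    score = autonomy * impact
    if score >= 6:
        return "simulation plus formal verification of safety properties"
    if score >= 3:
        return "systematic testing plus a monitored rollout"
    return "standard code review and unit tests"

# Example: an autonomous function with serious impact warrants heavy assurance.
print(assurance_tier(Autonomy.AUTONOMOUS, Impact.SERIOUS))
```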
AI has tremendous potential to improve things, and like any system we should protect that potential by limiting harms. It's apparent that the potential goods and harms of AI are both large.
Nonetheless, as we move towards Artificial General Intelligence, I think we will need much more cultural infrastructure and engineering guidance to keep us successful at facing this challenge together.
fig. Effective management of the risks and benefits of AI needs to be human-scale, i.e., connected to what matters to you in your life and to what can help or hinder your life goals.
What would I like to see in future conferences regulating the "commons" that is our shared research and knowledge of AI, or any other transnational commons?
Identify a friendly shared identity that the population involved will be a part of.
Identify something concrete that stands to be lost if we don't team up, and a common opposing circumstance. Not every item presents a transnational risk, but for the things that do, this is important. Countries, communities, and companies will compete on many things; the things that are transnational risks should be singled out concretely, in my humble opinion.
I don't think these things need to be part of the text of an agreement; they can also be part of how a commons agreement is talked about.
AI has huge potential to build inclusion and wealth broadly, so it makes sense to limit the potential harms as well. The conference shows there's an international will to do this, so compliments to the countries involved.