AI Safety and Responsible AI in Practice
A rough summary of the presenters' views
- They open with cars as a mental model of risk: if you proposed a new technology that would make everyone's life better and more productive, but would kill 40,000 people a year in the country, would society be ok with that?
- And that about sums up everything else that is discussed. Some notes from the presenters:
  - Safety is use case specific. Safety when recommending movies is different from safety when advising a doctor on a prognosis.
  - Safety requires iteration.
  - No one agrees on what responsible use of AI is, but a useful test is: "if this were published on the front page of the NYT, would you be ok with it?"
  - Meta says they take safety very seriously, because they open-source their model weights.
  - Google had a leading chatbot but, due to safety concerns, did not make it externally available. OpenAI did... (and look at their success).
All in all, these thoughts on safety seemed pragmatically aimed at models that lack agentic capabilities and predate the 5.0 class of OpenAI models.
There is one core issue with thinking about safety this way: the next iteration of a car does not have the ability to cause catastrophe. The next iteration of an AI model could be better than any human alive at physics, mathematics, manufacturing weapons of mass destruction, creating biological weapons in a garage, hacking Cloudflare to bring down the majority of the internet, sneaking anthrax through the White House mail room, and so on.
AI is a horizontal layer with more use cases than we can imagine. It is closer to the discovery of fire than to any other technological improvement it is compared to.