Governments need to provide more oversight and guardrails around how AI technology is being deployed across society or risk a potentially catastrophic situation, which could include an economic bloodbath or “some crazy person” using it to make biological weapons.
That was the bleak warning from Anthropic co-founder and head of policy Jack Clark, who said rapidly growing AI technology could either provide huge societal benefits or wreak dangerous consequences if politicians and regulators don’t take it more seriously.
“I am deeply anxious that we won’t put enough attention on it,” he said. “The thing I’m most scared of is it just happens through companies building and deploying products without a much larger societal conversation. That’s the thing I truly worry about the most.”
Anthropic is the AI technology start-up that produces Claude, and positions itself as a tech company focused on humanity’s long-term wellbeing.
Speaking on The News Agents podcast with British journalist Lewis Goodall, Clark said AI systems need to be designed with human values so that “these systems care about us in the same way that we care about these systems”.
In response to a question about whether AI needed to care about humans, he said: “If these systems have no grounding in human society or like what humans want then I think they could very easily you know accidentally or perhaps maliciously do things which contradict our interests or go against our interests.”
Anthropic is analysing and mapping out the values that its systems display to ensure they are helpful rather than deceptive.
“It’s possible if you get a large sequence of things wrong in how you build the machines or if you intentionally made malicious machines, which is not the goal of, you know, these at the frontier, but could be something that could be done,” he said.
Greater oversight needed
Clark is calling for greater transparency and government oversight about how AI technology is deployed and warned it would be a grave mistake for AI companies to make all of the judgment calls about how to build these tools and study their consequences.
He believes the AI technology revolution will have far greater economic and societal impacts than the industrial revolution or any other technological advance in history.
Artificial intelligence is expected to fundamentally transform the global workforce by 2050, with up to 60 per cent of jobs either displaced or requiring adaptation, according to reports from PwC, McKinsey and the World Economic Forum.
The advertising industry has already seen huge upheaval as AI technology is used to create advertising, plan and buy media, and power modelling and measurement tools.
Clark believes that unlike other technological advances, the AI S-curve is far steeper and hasn’t yet begun to flatten, and society needs to prepare for what Anthropic CEO Dario Amodei has warned will be “a white collar bloodbath”.
“A technological revolution that takes place within one generation rather than multiple generations would be radically different in terms of how we deal with it,” Clark said. “So yes, I think there’s the potential for really large scale employment impacts to show up.”
Nurses & teachers
This doesn’t mean it’s all doom and gloom. An abundance of labour could free up space in the economy to focus on, and place greater value on, human-centric roles like teaching, mentoring, social care and nursing.
“Across the western world, you have too few teachers and too few nurses and too few what you might think of as human centric jobs, elder care. And we tend to pay these people very poorly,” he said.
“But getting that right requires a huge change in policy in how we approach the economy and reckoning with the arrival of technology that really might free up enough space in the economy for us to change this in a really positive way.”
Clark is optimistic that governments can put in place the infrastructure and policy settings needed to deliver transparency and oversight of AI technology.
He cites the UK’s AI Safety Institute and the response to Covid as examples of governments acting swiftly for the greater good.
Nonetheless he believes AI technology and regulation has arrived at a fork in the road.
On one side there is a positive future, which improves the delivery of public services, healthcare and education, and provides a “greater level of abundance”.
On the other side there are two bad states. One is where a failure of politics allows technology companies to “make a large amount of money and take a large role in public life similar almost to the social media companies and then everyone sits around and says how do we let this happen?”.
Even worse is a doomsday scenario: “The actual technology gets away from us in the form of some crazy person misusing it in the most sci-fi-like scenario…and an accident occurs that could be incredibly damaging and scary.”