Artificial intelligence is increasingly part of how we live, work, and play. For example, large language models – known as LLMs – are helping students distil the highlights of dense academic texts. Likewise, business teams collaborating across time zones are leveraging AI to summarise their colleagues’ contributions to shared documents when review time is short. And at home, AI can even devise mouth-watering recipes based on a few ingredients from your kitchen. While we may have once bookmarked pages, we are now learning to memorise prompts.
Of course, not all AI use cases present the same risk. Policymakers are increasingly concerned about intellectual property, privacy, and cybersecurity issues related to AI, to name a few. Operating in this space requires fleet-footedness, a deep technical understanding and broad horizons. It’s one of the most challenging policy remits anywhere.
As governments take interest in AI, Australia-based Atlassian sees opportunities for productive dialogue about the future. “There are very few opportunities to tackle these issues in practice at the same time as the governance and the regulation is evolving,” said Anna Jaffe, Atlassian’s Head of Regulatory Affairs & Responsible Technology.
Indeed, Atlassian’s approach is rooted in its vision for Responsible Technology, which the company outlined in its Responsible Technology Principles. As Jaffe explained: “These Principles were heavily informed by, and designed to align with, a number of similar principles embedded in policy and regulatory frameworks globally. But they are also uniquely Atlassian. We drew on our company mission and values as well as our commitments to our customers, employees, and stakeholders.” In 2024, Atlassian also published its No BS Guide to Responsible Tech Reviews, which describes the company’s learnings from applying its Responsible Tech Review Template across its own AI products and use cases.
Recently, Atlassian made Rovo, its AI solution for search, chat and agents, generally available. The assistant helps teams quickly discover knowledge across Atlassian and third-party SaaS apps—making working between different SaaS products a breeze. It’s the kind of interoperability that makes AI such a potential boon for enterprise businesses, slicing through silo walls and reducing inefficiencies. Atlassian also introduced new AI-powered updates for its Trello and Jira products, surfacing information faster and more easily to make work more efficient.
Atlassian’s CEO Mike Cannon-Brookes and CFO Joe Binz told shareholders, with the release of the company’s Q2 results, that the number of AI interactions on its platform had grown 25 times year-on-year. The company also found that customers were increasingly opting for more feature-heavy plans to take advantage of its AI smarts.
“We see AI capabilities as core to our offerings, helping teams in every corner of an organisation push work forward,” the pair wrote.
Those AI capabilities help teams in every corner of the world, too, which makes Jaffe and her team’s job all the more challenging.
“We operate in that environment where we need to look at every single jurisdiction where we’re providing our products or employing people and understand what they’re doing, how that fits with what we’re doing,” she explained.
Approaches to AI development and regulation differ around the world.
The EU, for instance, is currently seeking feedback on its Code of Practice on General-Purpose AI. This consultation is an outgrowth of its landmark AI Act, passed in 2024, which is the world’s most comprehensive AI law. In the US, there’s currently no unified framework at the national level, so states are taking matters into their own hands. Research from the Business Software Alliance indicates that there were over 700 legislative proposals related to AI in state legislatures during 2024, with even more anticipated for 2025.
The Australian government set out its stall on AI regulation in September last year with its proposals paper for introducing mandatory guardrails for AI in high-risk settings. The paper, which sought to determine the correct way of regulating AI in Australia—whether that’s a domain-specific approach, a framework approach or a whole-of-economy approach—received more than 300 submissions from bodies, businesses and organisations. In its submission, Jaffe and Atlassian’s head of global policy and regulatory affairs, David Masters, favoured the framework approach. This would feature a clearinghouse to serve as a central repository for information on the guardrails, including best practices, guidelines and compliance resources, as well as an expanded role for the government’s AI advisory body.
It’s a minefield, to say the least. Jaffe noted that lawmakers in Australia, at least, are trying to “balance public concern” about “where the technology is coming from” whilst also ensuring that Australia does not fall out of step with other jurisdictions.
“It’s a fascinating time to be working at the intersection of law and emerging technologies, particularly when it comes to AI because so many of the rules are still being written or being interpreted or understood in practice as the technology evolves,” Jaffe added.
However, she noted that while it is tempting to see AI as a “lawless space”, all existing laws apply to AI technologies just as they do to everything else, whether privacy, consumer protection or intellectual property laws.
In that context, creating AI-specific regulations might seem like the right thing to do—at least to ensure that businesses and voters know where they stand. However, Jaffe believes such an approach risks broadly oversimplifying the technology, ultimately widening the regulatory gap.
“It’s not as simple as saying, ‘Well, this is a vertical of companies that operate the same, they have the same business model and are driving at the same aims’. That’s the interesting thing about regulating technology: it’s not a homogenous group or even a single thing,” she said.
“AI isn’t one technology, it’s a bunch of technologies. So simply saying ‘We’re going to regulate AI’ raises the question: what do you mean?”
Given how transformative AI could be for any number of industries, the approach has to be industry-specific or even use-case-specific. Jaffe’s advice for regulators? It’s complicated, so treat it as such.
“Sometimes you have to move slowly to move quickly, particularly to build that understanding [of the technology], understand the problem and craft a solution that is targeted at that problem,” she said.
“It’s not always about being the first. Sometimes it is great to be first but sometimes being the most thoughtful and considered is more important. That’s not to say that you should be last but there’s merit in fast following.”
There is also an ethical dimension to AI that often, and sadly, gets sidelined. Jaffe’s job encompasses both the legal and the ethical sides of the technology—something she describes as both “interesting” and “unusual”. It’s a role that has transformed since she joined the business in 2021, when it was mainly focused on machine learning.
“By late 2022, the [ethics] side of my role grew as we started to understand the promise of generative AI, not just as consumers more broadly but also as a company and what it meant for our products,” she explained.
“It’s unusual to be doing both but I do see them as really complementary. Where we see the ethical and responsible side of the work is actually running ahead of the law and filling in those gaps that the law either can’t fill or hasn’t filled yet. Bringing the dual perspectives means that we’re doing what we should do, not just what we can do and that’s ultimately where the law is going to get to anyway,” she explained.
Atlassian’s approach continues to emphasise the responsibilities that technology providers should uphold, not only what they must do from a compliance perspective. While some in the space warn of ‘unintended consequences’ from the technology and wield the phrase as something like a ‘get out of jail’ card, Jaffe is proud of the company’s work on doing the right thing, and that its teams are putting the tools she has helped create to use.
“Experts rightly pointed out that ‘unintended consequences’ is just another word for ‘harms’. It’s a way to absolve yourself from responsibility for harms that you are in fact causing. We took that feedback onboard but kept the term unintended consequences, because if you present the word ‘harms’ to a team that strongly believes in what they’re doing and are really driving towards the best-case scenario, they don’t love thinking about the worst-case scenario,” she said.
“Our job is to make them think about the worst cases, take a step back and open their minds to all these potential possibilities, good and bad, and work through them.”
It remains to be seen how AI regulations will shake out, and whether businesses will pay attention to them as they strive to steal a march on the next competitive advantage or game-changing feature.
As is sometimes said in journalism, it’s important to be first, but it’s more important to be right first. Atlassian seems to be taking that same approach.
Does shaping the future of humanity’s work sound interesting? Search Atlassian’s open roles or download its Responsible Tech Guide.