AI tools have quickly become part of everyday work. People use them to summarise documents, draft emails, prepare presentations, and make sense of large amounts of information.
For many organisations, the question is no longer whether AI will be used, but which tools people are allowed to use and how safely they are introduced.
What often gets missed is that choosing an AI tool is not just a productivity decision. It is a decision about how data moves, where it ends up, and who might be able to access it. Most AI use begins with small, well-intentioned choices made without a full understanding of what happens behind the scenes.
That is why procurement choices need to be clear and intentional.
Start with how AI actually handles information
A useful way to think about AI tools is to put the focus on data flow. Every time you use an AI tool, something goes in, something happens to it, and something comes back out. The risk lives in the middle.
For example, when you ask an AI tool to summarise a document, you are not just asking it to “read” something. You are sending that document to a system run by a third party outside of your organisation. The tool may process large amounts of text, store temporary copies, log the interaction for monitoring purposes, or keep records for a period of time. Even if the output looks innocuous, the input may have travelled further than you expect.
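To make that data flow concrete, here is a minimal sketch of what a "summarise this document" request can look like behind the scenes. The endpoint, headers, and payload shape are hypothetical rather than any particular vendor's API, but the pattern is representative: the entire document travels to a system the vendor controls.

```python
import requests

# Hypothetical endpoint and payload: a sketch of the pattern, not a real vendor API.
document_text = "…full text of a confidential internal report…"

response = requests.post(
    "https://api.example-ai-vendor.com/v1/summarise",  # a third-party system
    headers={"Authorization": "Bearer <api-key>"},
    json={
        "prompt": "Summarise this document in three paragraphs.",
        "document": document_text,  # the entire input leaves your organisation
    },
    timeout=30,
)
print(response.json().get("summary"))
# You see the summary; the vendor may also hold the request body,
# interaction logs, and temporary copies on its side.
```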
The same is true when, instead of uploading a document, you type detailed instructions into a prompt window. Prompts often contain sensitive context: internal decisions, client issues, or commercial strategy. From a data perspective, prompts can be just as valuable (and just as risky) as a document.
This is why the most important question to ask about any AI tool is not how smart it is or how many features it has, but what happens to the data that is put into it. This question matters because of data sovereignty laws, under which data is subject to the laws, regulations, and government access rights of the country in which it resides.
The "where" and "which laws" of data processing and storage
Data residency, which is different to data sovereignty, is often described in technical or legal language, but the idea itself is simple. It is all about the "where": where your information is processed and stored. This, in turn, shapes data sovereignty and which laws will apply as a result.
If your data is processed in another country, different access rules and legal obligations may apply. Some governments have broader rights to access data held within their borders. Some industries place restrictions on where data can be handled. In Australia this includes sectors such as legal, government, healthcare, financial services, and critical infrastructure.
The challenge is that AI tools rarely involve a single, obvious location. Information may pass through several systems: the AI model itself, logging tools, safety systems, analytics platforms, or third-party service providers. When vendors say, “we use a major cloud provider”, that does not automatically answer the residency question. What matters is which regions are used, and for which parts of the process. Understanding this does not require technical expertise; it requires clarity from vendors and asking the right questions.
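One practical way to get that clarity is to ask the vendor to name a region for every stage of processing, not just the model, and compare the answers against an approved list. A simple sketch of that exercise follows; the stages and region names are illustrative, not any vendor's actual architecture.

```python
# Due-diligence sketch: map each disclosed processing stage to its region
# and flag anything outside the regions your organisation has approved.
# All stages and regions below are illustrative.

APPROVED_REGIONS = {"australia-east", "australia-southeast"}

vendor_data_flow = {
    "model inference": "australia-east",
    "prompt logging": "us-west",      # stages like this are often overlooked
    "safety filtering": "us-west",
    "analytics": "eu-central",
    "backups": "australia-east",
}

for stage, region in vendor_data_flow.items():
    status = "OK" if region in APPROVED_REGIONS else "REVIEW: offshore"
    print(f"{stage:18} -> {region:20} {status}")
```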
Why “trying it out” can quietly increase risk
Most AI tools enter organisations through experimentation. Someone tries a new tool on a low-risk task. It works well. Confidence grows. Over time, people may start using it for more important or sensitive work.
This progression is natural, but it is also where risk quietly increases. The tool itself has not changed, but the type of information being shared has. What started as harmless drafting can become summarising confidential reports or analysing sensitive issues.
Without clear guidance, people make reasonable assumptions: if a tool is allowed, it must be safe for real work. This is why security and governance decisions need to happen early, even when the initial use cases seem small.
Choosing AI tools: think less about features, more about fit
A common mistake in AI procurement is focusing on what a tool can do, rather than how it fits into the organisation’s existing environment.
A well-chosen AI tool should work naturally with the systems people already use, such as email, document management, and identity controls, and it should respect existing guardrails around access, retention, and confidentiality.
When a tool sits outside of these systems, people are more likely to copy and paste information manually, bypass safeguards, or use personal accounts to get work done faster.
This is especially important where organisations already have enterprise-grade AI tools in place. Adding another tool may be justified, but only if it genuinely adds value and does not undermine existing controls. Convenience should never come at the cost of oversight.
Contracts matter because AI tools change
Unlike traditional software, AI tools are constantly evolving. Models are updated, safety systems are adjusted, and behaviour can shift over time. These changes are often improvements, but not always for every use case.
That is why contracts play a critical role. Organisations should not be surprised by major changes to how a tool behaves, what model it uses, or how data is handled. Clear commitments about notification, testing, and fallback options make it possible to adapt safely when upgrades occur.
Without these protections, organisations can find themselves using a tool that no longer behaves the way it did when it was approved, with no easy way to manage the change.
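In practice, some of these contractual protections translate directly into configuration. As a rough sketch (the model names and settings below are invented for illustration), an organisation might pin the model version it approved and keep a known-good fallback, so an unannounced upgrade cannot silently change behaviour:

```python
# Illustrative only: pin the model version approved at procurement and
# define a fallback, rather than accepting silent vendor upgrades.
MODEL_CONFIG = {
    "primary": "vendor-model-2024-06",   # version reviewed and approved
    "fallback": "vendor-model-2024-01",  # known-good version for rollback
    "auto_upgrade": False,               # upgrades require explicit review
}

def select_model(vendor_reported_model: str) -> str:
    # If the vendor reports a version we have not reviewed, fall back.
    if vendor_reported_model != MODEL_CONFIG["primary"]:
        return MODEL_CONFIG["fallback"]
    return vendor_reported_model
```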
Questions to ask before approving an AI tool
For non-technical teams, a practical approach is to slow down and ask five key questions before approving or expanding use of an AI tool:
- What data will be put into the tool, and how sensitive is it? Identify whether the input includes personal information, client-confidential details, commercial strategy, or regulated data (e.g., health, financial, government). If the answer is “anything beyond public or fully anonymised content”, the tool must have strong data protection guarantees.
- Where does that data go, and under which law does it fall once it leaves our organisation? Map the full data flow: inference endpoint, logging, safety filtering, analytics, and any third-party subprocessors. Confirm the geographic regions used for each step and which jurisdictions’ access rules (data sovereignty) apply. In Australia this is critical for the legal, health, government, and financial services sectors.
- What does the vendor do with the data behind the scenes, and can we opt out of training or retention? Ask whether inputs are stored permanently, used to fine-tune the model, shared with partners, or retained for monitoring. Look for explicit opt-in/opt-out clauses, data deletion guarantees, and evidence (audit reports, data flow diagrams) that no cross-border replication occurs without consent.
- How will the tool behave when the underlying model or safety system changes, and what fallback options do we have? GenAI models evolve rapidly; a tool that performed safely today may "hallucinate" more or expose data differently after an update. Verify that the contract requires advance notice of material changes, provides a testing window, and offers a rollback or alternative model so work is not disrupted.
- Does it integrate with our existing identity, access control, and document management systems, or will it encourage shadow workarounds? A tool that sits outside SSO, retention, and confidentiality guardrails pushes users to copy and paste content into personal accounts, bypassing security. Prefer solutions that embed via API, respect existing role-based access, and log interactions in your central audit trail (a minimal sketch of that pattern follows this list).
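On that last point, here is a minimal sketch of the gateway pattern: route AI use through an internal function that records who asked what, and when, before anything leaves the organisation. The send_to_approved_model function is a placeholder for whichever enterprise AI endpoint has actually been approved.

```python
import json
import logging
from datetime import datetime, timezone

# Sketch of an internal AI gateway: every interaction is logged to a
# central audit trail before being forwarded to the approved endpoint.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def send_to_approved_model(prompt: str) -> str:
    # Placeholder: wire this to your organisation's approved AI endpoint.
    raise NotImplementedError

def ask_ai(user_id: str, prompt: str) -> str:
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        # Log the size rather than the content if prompts are sensitive.
        "prompt_chars": len(prompt),
    }))
    return send_to_approved_model(prompt)
```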
If those questions can be answered clearly, most of the major risks are already visible. If they cannot, that is usually a sign that more work is needed before moving ahead.
The bigger picture
AI tools can absolutely make work faster, clearer, and more efficient. The goal of careful procurement is not to slow adoption or discourage experimentation. It is to ensure that productivity gains are not undermined by avoidable risk.
For most organisations, responsible AI use does not require deep technical knowledge. It requires clear thinking about information, early decisions about boundaries, and realistic assumptions about how people will actually use the tools provided to them.
When these foundations are in place, AI can be adopted with confidence.
All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.