When Generative AI Use Should Trigger Privacy Review in Australia
By Dom Jocubeit

Not every use of generative AI requires the same level of governance attention. But many organisations are still underestimating how quickly a simple experiment can turn into a privacy-sensitive use case.
The OAIC’s guidance makes clear that privacy obligations continue to apply when organisations use AI products and when they develop and train generative AI models involving personal information.¹ ²
That means privacy review becomes more important whenever generative AI changes how personal information is handled, analysed, disclosed, stored, reused, or inferred.¹ ² ³
Key takeaways
- Generative AI use should trigger privacy review when personal information is involved.¹ ²
- The OAIC says, as a matter of best practice, organisations should not enter personal information, especially sensitive information, into publicly available generative AI tools.¹ ⁴
- If AI systems are used to generate or infer personal information, that is a collection of personal information and must comply with APP 3.¹
- The OAIC recommends privacy impact assessments (PIAs) as part of risk management and planning processes.³ ⁵
- Informal experimentation is often where privacy risk begins, which is why early review pathways matter.¹ ²
Why this question matters now
Generative AI adoption often begins informally.
A team starts testing a drafting tool. Someone pastes in notes. A manager uses a transcript. A project team uploads case details to speed up review work. None of this may look like a major privacy event at first.
But if the use of generative AI changes how personal information is handled, analysed, disclosed, stored, reused, or inferred, privacy questions are already in play.¹ ²
The clearest trigger: personal information
The first and most obvious trigger for privacy review is whether the use case involves personal information.
The OAIC’s guidance on commercially available AI products says that if AI systems are used to generate or infer personal information, this is a collection of personal information and must comply with APP 3.¹
That point is easy to miss, especially where a team assumes the system is only generating insights rather than collecting data. But if the output creates or infers personal information, the organisation has collected personal information and APP 3 obligations are engaged.¹
Publicly available tools deserve stronger caution
The OAIC has made one recommendation especially clear: as a matter of best practice, organisations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools due to the significant and complex privacy risks involved.¹ ⁴
That means a privacy review is strongly warranted where staff propose using publicly available generative AI tools with:
- customer information
- employee information
- health information
- case records
- complaint details
- identifiable transcript content
- any other sensitive or regulated information.¹ ⁴
Common situations that should trigger privacy review
A privacy review should usually be considered where generative AI is being used for:
Drafting or analysis using real records
If prompts contain real client, customer, employee, patient, student, or citizen details, privacy issues arise quickly.¹
Summarising conversations or transcripts
Meeting notes, calls, interviews, and transcripts can easily contain personal information, even when users do not think of them that way.¹ ²
Decision support about individuals
If outputs may influence decisions about individuals, privacy and fairness concerns become more significant.¹ ⁶
Training or fine-tuning models
The OAIC’s separate guidance on developing and training generative AI models makes clear that the fact that data is publicly available or otherwise accessible does not mean it can lawfully be used to train or fine-tune generative AI models.²
Tools with unclear data handling practices
If the organisation cannot clearly explain what the provider does with prompts, outputs, retained data, or model improvement processes, that alone can justify stronger review.¹
A practical trigger table
| Situation | Why privacy review is likely needed |
|---|---|
| Prompts include personal information | Personal information handling is directly involved¹ |
| Prompts include sensitive information | OAIC best practice strongly cautions against this in publicly available tools¹ ⁴ |
| AI outputs may infer information about a person | Inferred personal information may still amount to collection under APP 3¹ |
| Real-world records are used for drafting or summaries | The use case may change how information is disclosed or processed¹ ² |
| Model training or fine-tuning uses accessible data | Publicly available does not automatically mean lawful for training² |
| The vendor’s data handling is unclear | The organisation may not be able to justify the privacy position¹ |
Where a PIA fits
The OAIC recommends that organisations conduct PIAs as part of their risk management and planning processes.³ ⁵
If a generative AI use case involves personal information in a meaningful way, or could have material privacy impacts on individuals, a PIA can provide a structured way to:
- identify what information is involved
- map how it is being handled
- assess the privacy risks
- record recommendations and controls
- support a more defensible decision about whether the use should proceed.³ ⁵
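The steps above also lend themselves to consistent documentation. The sketch below is a minimal illustration in Python of one way to record them; the `PIARecord` shape is a hypothetical for this article, not an OAIC template, and the OAIC’s PIA guide and 10-step process remain the authoritative references.³ ⁵

```python
from dataclasses import dataclass, field


@dataclass
class PIARecord:
    """Hypothetical record mirroring the PIA elements listed above."""
    use_case: str                                             # what the AI is being used for
    information_involved: list[str] = field(default_factory=list)   # what information is in scope
    information_flows: str = ""                               # how it is collected, used, disclosed, stored
    privacy_risks: list[str] = field(default_factory=list)    # risks identified in the assessment
    recommendations: list[str] = field(default_factory=list)  # controls and mitigations
    decision: str = ""                                        # proceed, proceed with limits, or prohibit
```

Even a lightweight record like this makes it easier to show, later, why a use case was approved, limited, or prohibited.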
Why informal experimentation is risky
One of the hardest governance problems with generative AI is that adoption often starts outside formal review channels.
Governance teams discover the use case only after people have already built habits around it. At that point, the organisation may be dealing with a mix of convenience, unclear permissions, inconsistent restrictions, and incomplete knowledge about where personal information has already gone.
This is why clearer internal review pathways matter. The issue is not only whether a particular use is technically possible. It is whether the organisation has a reliable way to identify when privacy review should happen.¹ ³
Practical questions to ask early
Before a team uses generative AI in a workflow involving real-world information, it should be able to answer:
- Does the use case involve personal or sensitive information?¹ ²
- Is the tool publicly available or privately controlled?¹
- What does the provider do with prompts, outputs, and retained data?¹
- Could the system generate or infer personal information?¹
- Could outputs affect decisions about individuals?¹ ⁶
- Do we need a structured privacy assessment before proceeding?³ ⁵
A simple review logic
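The triggers and questions above can be folded into a simple triage flow. The sketch below is a minimal, hypothetical illustration in Python of one way to operationalise them; the field names, outcome labels, and thresholds are assumptions for this example, not an OAIC-prescribed process.¹ ³ ⁴

```python
from dataclasses import dataclass
from enum import Enum


class ReviewOutcome(Enum):
    BLOCK_PENDING_REVIEW = "Do not proceed; escalate to the privacy team"
    PIA_RECOMMENDED = "Privacy review with a PIA before proceeding"
    PRIVACY_REVIEW = "Lightweight privacy review before proceeding"
    STANDARD_APPROVAL = "Normal tool-approval process"


@dataclass
class UseCase:
    """Hypothetical intake record for a proposed generative AI use case."""
    involves_personal_info: bool          # personal information in prompts or uploads
    involves_sensitive_info: bool         # health, case records, complaints, etc.
    publicly_available_tool: bool         # public tool vs. privately controlled
    may_infer_personal_info: bool         # outputs could create or infer info about a person
    affects_decisions_about_people: bool  # outputs may influence decisions about individuals
    vendor_data_handling_clear: bool      # prompts, retention, and model improvement understood


def triage(uc: UseCase) -> ReviewOutcome:
    # OAIC best practice: do not enter personal information, especially
    # sensitive information, into publicly available generative AI tools.
    if uc.publicly_available_tool and (
        uc.involves_personal_info or uc.involves_sensitive_info
    ):
        return ReviewOutcome.BLOCK_PENDING_REVIEW

    # Generating or inferring personal information is a collection under
    # APP 3, so treat it the same as entering personal information.
    handles_personal_info = uc.involves_personal_info or uc.may_infer_personal_info

    if handles_personal_info and (
        uc.involves_sensitive_info or uc.affects_decisions_about_people
    ):
        return ReviewOutcome.PIA_RECOMMENDED

    if handles_personal_info or not uc.vendor_data_handling_clear:
        return ReviewOutcome.PRIVACY_REVIEW

    return ReviewOutcome.STANDARD_APPROVAL


if __name__ == "__main__":
    case = UseCase(
        involves_personal_info=True,
        involves_sensitive_info=False,
        publicly_available_tool=False,
        may_infer_personal_info=False,
        affects_decisions_about_people=True,
        vendor_data_handling_clear=True,
    )
    print(triage(case).value)  # -> Privacy review with a PIA before proceeding
```

The ordering matters: the publicly available tool check runs first because the OAIC’s best-practice caution applies regardless of how the rest of the use case looks, and inferred personal information is treated the same as entered personal information because both amount to collection under APP 3.¹ ⁴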
Conclusion
Generative AI use should trigger privacy review in Australia whenever it meaningfully changes how personal information is handled, disclosed, analysed, stored, reused, or inferred.¹ ²
The strongest practical triggers are the presence of personal information, the use of publicly available tools, decision-support uses involving individuals, and training or fine-tuning scenarios involving accessible data.¹ ² ⁴
The organisations that manage this well will not wait until informal use becomes embedded. They will create clearer pathways for early review, stronger restrictions where needed, and better documentation for why a use case is acceptable, limited, or prohibited.¹ ³ ⁵
References
1. OAIC, Guidance on privacy and the use of commercially available AI products.
2. OAIC, Guidance on privacy and developing and training generative AI models.
3. OAIC, Guide to undertaking privacy impact assessments.
4. OAIC, New AI guidance makes privacy compliance easier for business.
5. OAIC, 10 steps to undertaking a privacy impact assessment.
6. OAIC, Checklist: Privacy considerations when selecting a commercially available AI product.