Privacy · April 2026 · 6 min read

OAIC Guidance on Commercially Available AI Products: What Organisations Should Actually Do

Dom Jocubeit


The OAIC’s guidance on privacy and the use of commercially available AI products is one of the clearest signals that Australian privacy obligations continue to apply even when an organisation is using an external AI tool rather than building a model itself.¹

That point matters because many organisations still treat vendor-supplied AI products as though the main risk sits with the vendor. In practice, the organisation using the tool may still have privacy obligations of its own.¹ ²

Key takeaways

  • The OAIC says privacy obligations still apply when organisations use commercially available AI products.¹
  • As a matter of best practice, the OAIC recommends that organisations do not enter personal information, especially sensitive information, into publicly available generative AI tools.¹ ³
  • If an AI system is used to generate or infer personal information, that can amount to a collection of personal information, which must comply with APP 3.¹
  • The OAIC has published practical checklists to help organisations assess AI product selection and deployment.² ⁴
  • In many cases, the right response is not a blanket ban but a more disciplined review process.¹ ²

Why the guidance matters

The OAIC’s guidance is important because it addresses a very common real-world scenario: an organisation is not building an AI model itself, but it is using a commercially available AI product in ways that may involve personal information.¹

That scenario is easy to underestimate. Teams often see these tools as productivity or workflow tools first, and privacy-regulated systems second. The OAIC’s position makes clear that organisations still need to think carefully about how personal information is handled when these products are used.¹

The strongest practical message from the OAIC

The clearest operational message in the guidance is also the simplest.

The OAIC says that, as a matter of best practice, organisations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools due to the significant and complex privacy risks involved.¹ ³

That recommendation is highly relevant for governance teams because many informal AI use cases begin with staff entering prompts, notes, transcripts, case details, or customer content into publicly available tools without a structured assessment of the privacy consequences.

What the OAIC is really asking organisations to consider

The guidance is not only about whether a tool is useful. It is about whether the organisation understands:

  • what information is being entered into the tool
  • whether that information includes personal or sensitive information
  • what the provider may do with submitted data
  • whether the proposed use is consistent with the APPs
  • whether the use of the product creates high privacy risk
  • whether the organisation can explain and justify the choice of tool.¹ ² ⁴

This is why commercially available AI products should not be treated as casual technology choices when they interact with regulated information or sensitive workflows.

Product selection is part of privacy governance

The OAIC has also published a checklist on privacy considerations when selecting a commercially available AI product. That checklist asks whether the system is appropriate and reliable for the organisation’s intended uses, whether the organisation understands the limitations of the product, and whether the intended use involves high privacy risk, including decisions that may have legal or similarly significant effects on individuals.⁴

That is a useful reminder that privacy governance starts before deployment. Product selection itself can be a governance decision.

Deployment is not the same thing as low risk

A tool being commercially available does not mean it is low risk.

The OAIC’s deployment checklist makes clear that organisations using commercially available AI products should assess privacy issues in the actual deployment context.² This includes looking at how the product interacts with personal information, whether privacy notices and internal guidance are sufficient, and whether additional governance controls are needed.²

When organisations should slow down and review

A stronger privacy review is likely to be warranted when:

  • staff propose entering customer, employee, patient, student, or client information into a tool
  • prompts may contain case details or records tied to identifiable individuals
  • the AI product may retain, reuse, or further process submitted content
  • outputs may influence decisions about individuals
  • the organisation cannot clearly explain how the product handles data
  • the proposed use would be high privacy risk in an ordinary non-AI setting.¹ ² ⁴

Practical governance responses

In many cases, the right response is not a blanket ban on AI products. It is a more disciplined review process.

That may include:

  1. restricting use of publicly available generative AI tools for personal information
  2. conducting privacy impact assessments for proposed higher-risk use cases⁵ ⁶
  3. using the OAIC’s AI product checklists during selection and deployment² ⁴
  4. documenting approvals, limitations, and controls
  5. issuing clearer internal guidance on acceptable use.

A simple decision table

Situation | Likely response
Publicly available AI tool with personal information in prompts | Strong caution; OAIC best practice is not to enter personal information¹
AI product used for low-risk drafting with no personal information | Lower privacy concern, but still needs internal guidance
AI product used for decisions about individuals | Stronger governance and privacy review likely needed¹ ⁴
Product selection involves unclear data handling or retention practices | Further diligence and review needed² ⁴

The wider lesson for governance teams

The wider lesson is that commercially available AI products should not be treated as casual productivity tools if they interact with regulated information or sensitive workflows.

They should be treated as governance-relevant technology decisions.

That means the organisation needs a way to assess proposed uses, apply restrictions where appropriate, and document why a given use is acceptable, limited, or prohibited.¹ ² ⁵

Conclusion

The OAIC’s guidance on commercially available AI products is useful because it turns a vague concern into a practical governance question.

The issue is not simply whether AI tools are useful. It is whether the organisation understands how they affect privacy obligations, risk exposure, and accountability.

The organisations that respond well will not rely on informal judgement alone. They will apply more disciplined review, clearer restrictions, and stronger decision-making around how commercially available AI products are selected and used.¹ ² ⁴

References

  1. OAIC, Guidance on privacy and the use of commercially available AI products.
  2. OAIC, Checklist: Privacy considerations when using commercially available AI products.
  3. OAIC, New AI guidance makes privacy compliance easier for business.
  4. OAIC, Checklist: Privacy considerations when selecting a commercially available AI product.
  5. OAIC, Guide to undertaking privacy impact assessments.
  6. OAIC, 10 steps to undertaking a privacy impact assessment.

Need support turning governance intent into operational execution?

Talk to Beacon & Stone about local advisory support, deployment, and practical governance implementation.