The Voluntary AI Safety Standard and Mandatory Guardrails: What’s the Difference?
By Dom Jocubeit

Australia’s AI policy conversation often gets blurred into one broad idea of “AI regulation.” In practice, two different things are happening at once.
On one hand, Australia has published voluntary guidance intended to support safe and responsible AI adoption. On the other, the Australian Government has also consulted on possible mandatory guardrails for AI in high-risk settings.¹ ²
Understanding the difference matters for any organisation trying to work out what is optional, what is policy direction, and what may become binding in the future.
Key takeaways
- The Voluntary AI Safety Standard is voluntary, not binding economy-wide law.¹
- The government later published Guidance for AI Adoption with 6 essential practices for safe and responsible AI governance.¹
- Mandatory guardrails are a separate consultation track focused on high-risk AI.²
- Existing Australian laws still apply now, regardless of whether a future AI-specific regime emerges.¹
The Voluntary AI Safety Standard is voluntary
The Australian Government’s legal landscape guidance makes this point clearly: the Voluntary AI Safety Standard, and the earlier 10 guardrails associated with it, are voluntary.¹
That means they are not, by themselves, a binding economy-wide legal regime.
The standard is useful because it signals the Australian Government’s view of what safe and responsible AI practice should look like. It gives organisations a practical framework they can use to improve internal governance, risk management, and deployment practices.¹
The guidance evolved in 2025
The government’s legal landscape page also notes that on 21 October 2025 it published Guidance for AI Adoption, which outlines 6 essential practices for safe and responsible AI governance. According to the same page, this updated and simplified the earlier Voluntary AI Safety Standard.¹
That matters for two reasons:
- it shows the framework is not static
- it suggests the government is interested in practical adoption and implementation, not just high-level principle statements.¹
Mandatory guardrails are a separate policy track
Separate from the voluntary framework, the Australian Government has also consulted on introducing mandatory guardrails for AI in high-risk settings.²
The consultation materials describe the proposed guardrails as intended to support safe and responsible AI use in Australia, with a focus on testing, transparency, and accountability requirements for high-risk AI.²
This is not the same thing as the voluntary standard.
Voluntary vs mandatory at a glance
| Question | Voluntary AI Safety Standard / Guidance for AI Adoption | Mandatory guardrails consultation |
|---|---|---|
| Current status | Guidance | Consultation on possible future regulation |
| Scope | Broad safe and responsible AI governance practices | High-risk AI settings |
| Binding economy-wide? | No | Not yet settled or enacted |
| Practical use today | Benchmark for internal governance and maturity | Signal of likely future regulatory direction |
Why the distinction matters
Many organisations hear the phrase “AI guardrails” and assume everything under that label is either already mandatory or entirely optional.
Neither is accurate.
The current position is more nuanced:
- some guidance is voluntary
- some policy consultation is exploring future mandatory rules
- some existing laws already apply today, regardless of whether a new AI-specific regime exists.¹ ²
That last point matters. Even if mandatory AI guardrails are still under development, organisations may already face obligations under privacy law, consumer law, anti-discrimination law, online safety requirements, sector-specific rules, or other existing frameworks identified in official guidance.¹
Why high-risk settings are the focus
The consultation on guardrails has focused on high-risk AI settings because those are the contexts where harm is more likely to be serious, difficult to reverse, or significant for individuals and communities.²
This is consistent with the broader direction of Australian AI policy, which has tended to differentiate between lower-risk adoption and higher-risk uses that need stronger scrutiny.¹ ²
What organisations should do now
The existence of both voluntary guidance and possible future mandatory guardrails creates an understandable temptation to wait for more certainty.
That is usually the wrong move.
A better approach is to use the current voluntary guidance as a practical benchmark while also identifying higher-risk use cases that may be more exposed if a future mandatory regime emerges.
In practice, that means:
- understanding which AI use cases are already significant from a governance perspective
- identifying which existing laws apply now
- documenting who is accountable for higher-risk use cases
- building stronger review and escalation for more sensitive applications
- treating transparency, testing, and oversight as practical design questions rather than abstract themes.¹ ²
Voluntary does not mean unimportant
A voluntary framework can still have real influence.
It can shape procurement expectations, internal governance standards, industry norms, and the kinds of questions boards, executives, and customers ask. It can also serve as a reference point when organisations build internal policies or assess whether their current controls are mature enough.¹
Mandatory does not mean immediate certainty
At the same time, consultation on mandatory guardrails does not mean organisations can already point to a final settled regime.
The policy direction is serious, but consultation is not the same thing as enacted legislation. Organisations should therefore avoid claiming a level of finality that does not yet exist.²
Conclusion
The Voluntary AI Safety Standard and mandatory guardrails are related, but they are not the same thing.
The voluntary framework provides practical guidance and signals the direction of Australian AI governance thinking. Mandatory guardrails, if introduced, would create a stronger regulatory response for high-risk settings. Alongside both of these, existing laws already apply to many AI use cases today.¹ ²
For organisations, the smartest response is not to wait for a single definitive AI law. It is to strengthen governance now, especially for higher-risk use cases, while staying alert to how the policy framework continues to evolve.
References
1. Australian Government, Legal landscape for AI in Australia.
2. Australian Government consultation, Mandatory guardrails for AI in high-risk settings.