What Makes an AI Use Case “Low Risk” or “Higher Risk” Under the DTA’s Assessment Model?
Author: Dom Jocubeit

One of the most common questions organisations now ask is whether an AI use case is “low risk” or “high risk”. In the Australian Government context, the Digital Transformation Agency’s AI impact assessment guidance gives a practical framework for thinking about that question.
The key point is that the DTA model does not treat risk as a vague label. It treats risk as something to be assessed systematically through a threshold assessment, supported by likelihood and consequence analysis, and tied to practical governance consequences.¹ ² ³
Key takeaways
- Under the DTA model, risk is assessed through a structured threshold assessment rather than intuition alone.¹ ²
- Risk ratings are based on likelihood and consequence for each listed risk category.³
- If all risks are low, the assessing officer may recommend that a full assessment is not required.²
- If one or more risks are medium or higher, the assessing officer must either complete a full assessment, reduce the risk to low, or decide not to proceed.²
- High-risk AI use cases require additional governance actions under the AI policy, including creating or reusing a governance body for high-risk AI.⁴ ⁵
Why the “low risk vs high risk” question matters
The DTA’s updated Policy for the responsible use of AI in government says the principles and requirements in the AI use case impact assessment section are intended to assess potential impacts of AI use cases and ensure additional oversight of higher-risk AI.⁵
That matters because risk rating is not just an academic exercise. It affects whether a use case can proceed with a lighter level of review, whether it needs fuller assessment, and whether additional governance bodies or senior oversight should be involved.² ⁴ ⁵
The DTA model begins with threshold assessment
The AI impact assessment process includes a threshold assessment that helps determine whether the use case can remain at a lighter level of review or whether it should move into fuller assessment.¹ ²
This is important because it means the model does not assume that every AI use case needs the same depth of governance. Instead, it uses a practical triage process.
How the DTA guidance expects risk to be assessed
The DTA’s inherent risk assessment guidance says that, for each listed risk category in the assessment table, the team should determine the likelihood and consequence of the risk occurring for the AI use case.³
The guidance also says the inherent risk assessment should reflect the intended scope and function of the AI use case.³
That means a risk rating is not only about the technology in the abstract. It is about the actual use case in its intended operating context.
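The likelihood-and-consequence step described above can be sketched as a simple rating matrix. Note this is an illustrative sketch only: the scale labels and the low/medium/high cut-offs below are assumptions for demonstration, not the DTA's published scales.

```python
# Illustrative likelihood x consequence rating (labels and cut-offs are
# assumptions for this sketch, not the DTA's published scales).
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost_certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "severe"]

def rate_risk(likelihood: str, consequence: str) -> str:
    """Combine likelihood and consequence into a low/medium/high rating."""
    # Multiply 1-based positions on each scale to get a rough score.
    score = (LIKELIHOOD.index(likelihood) + 1) * (CONSEQUENCE.index(consequence) + 1)
    if score <= 4:
        return "low"
    if score <= 12:
        return "medium"
    return "high"
```

Because the rating is computed per use case, the same capability can score differently in different operating contexts, which is the point the guidance makes about scope and function.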
What counts as “low risk” under the threshold logic
The clearest official threshold statement appears in the DTA’s threshold assessment outcome guidance.
It says that if the assessing officer is satisfied all risks are low, they may recommend that a full assessment is not required and that the approving officer accept the low risks and endorse the use case.²
So in the DTA model, “low risk” does not simply mean a use case feels harmless. It means the threshold assessment has been completed, the listed risks have been assessed, and all of them are low enough that a fuller assessment is not considered necessary.²
What moves a use case into the “higher risk” path
The same threshold assessment outcome guidance also makes the next step explicit.
If one or more risks are medium or higher, the assessing officer must either:
- complete a full assessment
- amend the scope or function so the threshold assessment results in a low risk rating
- decide not to accept the risk and not proceed with the AI use case.²
This means the DTA’s model treats “medium or higher” as the point at which the use case no longer qualifies for the simplest pathway.²
Medium risk vs high risk in the policy setting
The AI policy gives some practical direction on what happens after a higher rating is reached.
Digital.gov.au says that if an agency determines that an in-scope AI use case has an inherent medium-risk rating when completing an AI use case impact assessment, it should consider whether the use case would benefit from governance through a designated board or a senior executive.⁵
For high-risk AI use cases, the supporting guidance says agencies are required under the AI policy to apply specific actions, including creating or reusing a governance body for the purpose of governing high-risk AI.⁴
That means the DTA model does not collapse everything into “safe” or “unsafe”. It creates a graduated governance response.
A practical reading of the model
| Risk position | Practical implication under the DTA model |
|---|---|
| All risks low | Assessing officer may recommend that a full assessment is not required.² |
| One or more risks medium | A full assessment is required unless the risk is reduced to low or the use case does not proceed, and agencies should consider stronger oversight such as a designated board or senior executive.² ⁵ |
| One or more risks high | Additional governance actions apply, including governance body review for high-risk AI.⁴ ⁵ |
What kinds of issues tend to matter in the assessment
The DTA framework is broader than a purely technical safety review. Across the AI impact assessment guidance, related policy materials, and privacy/security guidance, teams are expected to consider issues including:
- accountability and ownership⁵ ⁶
- privacy and security⁷
- transparency and contestability¹
- broader impacts and risks against Australia’s AI Ethics Principles¹
- whether the use case should proceed, be modified, or not proceed at all.²
This matters because a use case can move into a higher-risk pathway for governance reasons, not only because of technical complexity.
Why context matters more than labels
One important lesson from the DTA model is that risk is contextual.
The guidance does not say that a technology is inherently low risk or high risk in every setting. Instead, the risk assessment should reflect the intended scope, function, and operating context of the specific use case.³
That is why two uses of similar AI capability may justify different governance responses.
Questions teams should ask early
Before assuming an AI use case is “low risk”, a team should be able to answer:
- Have we completed the threshold assessment properly?¹ ²
- Are all listed risks genuinely low?²
- Have likelihood and consequence been assessed for the actual use case, not just the technology in general?³
- Are privacy, security, accountability, and oversight issues fully considered?⁵ ⁶ ⁷
- If any risk is medium or higher, do we need fuller assessment, stronger governance, or a decision not to proceed?² ⁴ ⁵
Threshold logic at a glance
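The threshold decision path described in this article can be summarised as a small decision function. This is a sketch only: the rating labels and outcome strings are illustrative shorthand for the threshold assessment outcome guidance, not DTA terminology.

```python
from typing import Iterable

def threshold_outcome(risk_ratings: Iterable[str]) -> str:
    """Sketch of the DTA threshold logic: all-low risks may be endorsed
    without a full assessment; any medium-or-higher risk forces one of
    the more demanding paths. Outcome strings are illustrative only."""
    if all(rating == "low" for rating in risk_ratings):
        return "recommend no full assessment; approving officer may endorse"
    return "full assessment, amend scope to reach low risk, or do not proceed"
```

The key design point is that the function never returns a bare "safe/unsafe" verdict: a medium-or-higher rating always routes the use case to a governance decision rather than ending the process.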
Conclusion
Under the DTA’s assessment model, an AI use case is not “low risk” simply because a team feels comfortable with it.
A low-risk outcome means the threshold assessment has been completed and all assessed risks are low. Once one or more risks are medium or higher, the use case moves into a more demanding path: fuller assessment, stronger oversight, redesign to lower the risk, or a decision not to proceed.² ⁵
That is what makes the DTA approach useful. It turns the language of AI risk into a structured governance process with practical consequences.
References
1. Digital Transformation Agency, Artificial intelligence impact assessment tool.
2. Digital Transformation Agency, Threshold assessment outcome guidance.
3. Digital Transformation Agency, Inherent risk assessment guidance.
4. Digital Transformation Agency, Case review and next steps guidance.
5. Digital Transformation Agency, Policy for the responsible use of AI in government – AI use case impact assessment.
6. Digital Transformation Agency, Basic information and impact assessment responsibilities guidance.
7. Digital Transformation Agency, Privacy protection and security guidance.