
GOVERNANCE

Automated decisions under the Privacy Act 2026: 12 AU examples

What counts as an 'automated decision' under the Privacy Act 2026 — and 12 real-world examples from Australian mid-market firms in professional services, finance, and healthcare.

Published 16 May 2026 · 10 min read


title: "Automated decisions under the Privacy Act 2026: 12 AU examples" dek: "What counts as an 'automated decision' under the Privacy Act 2026 — and 12 real-world examples from Australian mid-market firms in professional services, finance, and healthcare." category: "GOVERNANCE" publishedAt: "2026-05-16" readTime: "10 min read" author: "EasiraAI editorial team" keywords:

  • Privacy Act 2026 automated decisions
  • AU AI compliance
  • automated decision-making transparency Australia

The Privacy Act 2026 automated-decision transparency obligation is one of the most discussed — and least clearly understood — reforms in the Act. Most mid-market firms know the deadline (10 December 2026). Far fewer have a working definition of which of their existing processes actually trigger the obligation.

This article focuses on that question: what counts as an "automated decision" under the Privacy Act 2026, and what does that mean in practice for a 50–500 person Australian firm? For the compliance roadmap — the 90-day plan for getting your organisation ready — see the companion article Privacy Act 2026: a 90-day plan to comply.

What the Act actually says

The Privacy and Other Legislation Amendment Act 2024 introduced a new APP 7A (the numbering used in the draft legislation; the final section numbering is confirmed in the Privacy Act 2026 as enacted), creating transparency obligations for "substantially automated decisions" that have a "legal or significant effect" on an individual.

The key test has two limbs:

Limb 1 — Substantially automated. The decision must be made using an automated process with minimal or no human review of the individual circumstances. A decision that a human reviews and approves is less likely to meet this test. A decision made by an algorithm, rule engine, or AI model where the output is acted upon without meaningful human review is more likely to.

Limb 2 — Legal or significant effect. This is broader than "legal effect" in the strict sense. The OAIC's draft guidance covers decisions that affect rights, financial position, access to services, or employment standing. It does not require the decision to be legally binding — a decision that effectively determines whether someone gets access to credit, insurance, employment, or a service is within scope even if there is technically an appeal mechanism.

When both limbs are met, the obligation is one of transparency: you must tell the individual that a substantially automated decision has been or will be made, what personal information was used, and how they can seek a review or an explanation.
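To make the two limbs concrete, here is a minimal screening sketch in Python. The `Process` fields and the `triggers_obligation` function are our own illustrative shorthand for the statutory test, not language from the Act or the OAIC guidance.

```python
from dataclasses import dataclass

@dataclass
class Process:
    """One candidate process from your inventory. Field names are illustrative."""
    name: str
    automated_output: bool         # an automated system produces the outcome
    meaningful_human_review: bool  # a human genuinely reviews the individual circumstances
    significant_effect: bool       # affects rights, finances, services access, or employment

def triggers_obligation(p: Process) -> bool:
    """Both limbs must be met: substantially automated AND legal/significant effect."""
    limb_1 = p.automated_output and not p.meaningful_human_review  # Limb 1
    limb_2 = p.significant_effect                                  # Limb 2
    return limb_1 and limb_2

# Example 10 below: AI shortlisting that auto-rejects candidates with no human review
ats = Process("ATS shortlisting", automated_output=True,
              meaningful_human_review=False, significant_effect=True)
print(triggers_obligation(ats))  # True -> the transparency obligation applies
```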

The firms most at risk from the automated-decision obligation aren't the ones running sophisticated AI models — they're the ones with rule-based systems and scoring models that have been quietly making consequential decisions for a decade without anyone thinking of them as "automated."

What does not count

Before the examples, it is worth being clear about what the obligation does not cover.

Internal operational automations — invoice processing, document routing, scheduling, payroll calculations — generally do not involve "decisions with significant effect on an individual" in the sense the Act targets. They affect the business's operations, not an individual's rights or access to services.

Fully human decisions with AI assistance. If a human reviews the AI output and makes an independent judgment, the automated decision obligation is less likely to apply. The more meaningful the human review, the weaker the case for the obligation. "Rubber-stamping" an AI recommendation without genuine review is unlikely to escape the obligation, however.

Aggregated analytics and reporting. Population-level analysis that does not produce individual-level decisions is not in scope.

12 real-world examples from AU mid-market

These examples cover the sectors where the obligation is most commonly triggered — financial services, legal, accounting, healthcare, and professional services broadly.

Financial services and insurance

1. Credit decisioning with a rules engine

A non-bank lender or financial services firm uses a rules engine that scores a credit application based on income, existing obligations, repayment history, and risk band. The output is "approved," "declined," or "refer." If the "approved" or "declined" outcomes are acted upon without a human reviewing the individual application, this is a substantially automated decision with significant financial effect. The obligation applies.
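A sketch of the routing pattern this example describes, with hypothetical thresholds. The compliance point is structural: only the "refer" band reaches a human, so the "approved" and "declined" paths are substantially automated.

```python
def credit_decision(score: float, approve_at: float = 0.75, decline_at: float = 0.40) -> str:
    """Illustrative rules-engine output; the thresholds are hypothetical."""
    if score >= approve_at:
        return "approved"  # acted on without human review: substantially automated
    if score < decline_at:
        return "declined"  # likewise automated, and a significant financial effect
    return "refer"         # only this band is routed to a human credit officer
```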

2. Insurance premium pricing by risk model

A general insurer uses a statistical risk model to price a home and contents policy. The premium is generated automatically based on property data, claims history, and risk band. The individual sees a price; there is no human underwriter involved. This is a substantially automated decision — the individual's financial access to insurance is affected by the automated output.

3. Claims pre-assessment and initial outcome

A claims processing workflow uses document AI to ingest a claim, cross-reference policy terms, and produce an initial "covered/not covered" classification. If the initial classification is communicated to the claimant before a human review, or if the human review is superficial, this likely meets the threshold. Any build of a claims pre-assessment agentic AI system should include an explicit human-review step with documented criteria.

4. Transaction fraud scoring and account restriction

A payments or banking firm's fraud model flags a transaction and automatically restricts account access pending review. The account restriction is a significant effect, even if temporary. The automated flag and restriction together constitute a substantially automated decision. The question is whether the restriction is communicated with adequate transparency about why and how to contest it.

Legal and accounting practices

5. Conflict-of-interest check and matter intake

A law firm's client intake system runs an automated conflict check against a matter database and either clears the intake or flags it as conflicted. If the automated flag results in a declined client engagement without a solicitor reviewing the specific conflict details, this could meet the threshold — particularly if the client experiences it as a refusal of service.

6. AML/KYC automated screening

Accounting firms subject to AUSTRAC obligations often use automated screening tools to assess client identity against PEP (politically exposed person) and sanctions lists. An automated result of "high risk" that causes a client to be declined services or subject to enhanced due diligence is a decision with significant effect triggered by an automated process. The obligation to explain and provide review applies.

7. Tax position scoring and audit flag

Where a firm uses an automated model to score a client's tax return for audit risk and routes high-risk clients to a different service tier or charges different fees, this is likely a substantially automated decision — particularly where the individual client is unaware of the scoring.

Healthcare

8. Referral triage and appointment priority

A healthcare practice or hospital uses an automated triage tool that classifies incoming referrals and assigns appointment priority or waiting list position. A referral classified as "routine" that would otherwise be classified as "urgent" by a clinician is a decision with significant effect. The automation of clinical triage without explicit clinician review is one of the higher-risk categories in healthcare.

9. Prior authorisation for insurance-funded treatment

A health insurer's system automatically assesses a treatment request against policy terms and clinical criteria, generating a prior authorisation decision. If the decision is "not approved" based on automated rules without a clinician review, this is in scope — the individual's access to funded healthcare treatment is directly affected.

HR and employment

10. Automated CV screening and shortlisting

A firm using an ATS (applicant tracking system) with AI-powered shortlisting that automatically rejects candidates below a threshold score, without a human reviewing the individual's application, is making substantially automated employment decisions. These are squarely in scope under the Act, and this is one of the obligations mid-market firms most commonly trigger without realising it.

11. Performance scoring and remuneration band assignment

Where performance management software generates a score based on automated metrics (sales data, call volumes, attendance records) and that score is used to determine a pay band or performance rating without meaningful human review, the automated decision obligation is likely triggered for employment-related decisions.

Property and tenancy

12. Automated rental application assessment

A property management firm's platform automatically scores rental applications based on income verification, credit history, and rental history, generating an "approved" or "declined" result communicated to the applicant. This is a substantially automated decision with significant effect on an individual's housing access. The obligation applies in full.

What compliance actually requires

For each process that meets the two-limb test, the Privacy Act 2026 requires you to:

  1. Notify the individual — before or at the time of the decision — that a substantially automated decision will be or has been made, and what that decision is.

  2. Disclose the information used — the categories of personal information that were used in the automated decision (not necessarily the specific scores or model weights, but the categories: income data, claims history, credit history).

  3. Provide a review mechanism — a clear pathway for the individual to seek an explanation of the decision and to have it reviewed.

The notification does not need to be complex. In most cases, a disclosure in the relevant service agreement or at the point of application, combined with a documented review process, is sufficient. What is not sufficient is silence — the most common current state.
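As a sketch of what "not silence" looks like in a system of record, the structure below captures the three elements per decision. The class and field names are our own illustration, not a statutory form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AutomatedDecisionDisclosure:
    """Illustrative record of the three required elements for one decision."""
    process_name: str                  # e.g. "rental application scoring"
    decision: str                      # the outcome communicated to the individual
    notified_on: date                  # element 1: notified before/at decision time
    information_categories: list[str]  # element 2: e.g. ["income data", "credit history"]
    review_pathway: str                # element 3: how to seek an explanation or review
```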

Building the inventory

The first practical step is an inventory of your automated decision-making processes. This is not a technology audit — it is a process audit. The questions to answer for each process are:

| Question | Purpose |
|----------|---------|
| Is there an automated system producing an output that determines or heavily influences an individual outcome? | Establish automation |
| Does a human review the individual circumstances before the outcome is acted on? | Assess substantiality |
| Does the outcome affect financial access, services access, employment, or legal rights? | Assess significance |
| Is there a documented disclosure to the individual? | Current compliance status |
| Is there a review or contestation mechanism? | Current compliance status |
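Folding the five questions into triage logic gives each inventory row one of three statuses. This is a sketch of a working heuristic under the two-limb reading above, not OAIC guidance.

```python
def inventory_status(automated: bool, human_review: bool, significant_effect: bool,
                     disclosed: bool, review_mechanism: bool) -> str:
    """Triage one inventory row against the five questions above (illustrative)."""
    in_scope = automated and not human_review and significant_effect
    if not in_scope:
        return "out of scope"
    return "compliant" if (disclosed and review_mechanism) else "gap"
```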

For most mid-market firms, completing this inventory takes two to three weeks with the right facilitation. It is the prerequisite for any Privacy Act 2026 automated-decision compliance program.

The AI Readiness Audit includes a governance gap analysis that covers this inventory as a standard deliverable, mapped against the Privacy Act 2026 APP obligations and the AU Voluntary AI Safety Standard's transparency guardrail.

The cost of inaction

The statutory tort introduced by the Privacy Act 2026 allows individuals to bring a claim for serious invasions of privacy without requiring OAIC involvement. The civil penalty regime runs up to the greater of $50 million, three times the benefit obtained, or 30% of adjusted turnover, and it applies to organisations of all sizes, not only large ones.
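A quick worked illustration of the "greater of" formula, with hypothetical figures for a mid-market firm:

```python
def max_civil_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Greater of $50m, 3x the benefit obtained, or 30% of adjusted turnover."""
    return max(50_000_000, 3 * benefit_obtained, 0.30 * adjusted_turnover)

# Hypothetical: $2m benefit from the contravention, $80m adjusted turnover
print(f"${max_civil_penalty(2_000_000, 80_000_000):,.0f}")  # $50,000,000 -> the floor dominates
```

For firms of this size, the $50 million floor is almost always the binding figure.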

The OAIC has signalled an increased enforcement focus. The automated-decision transparency obligation is one of the provisions it is most likely to pursue in the first 12 months after commencement, because it is clear, testable, and affects a large number of individuals.

For most mid-market firms, the cost of building a compliant automated-decision disclosure and review process into their key systems is modest — a few weeks of work on documentation and process design. The cost of waiting until the OAIC comes asking is not.


Next steps

If you are not certain which of your current processes trigger the automated-decision obligation, the right starting point is the AI Readiness Audit — a two-week, fixed-fee engagement that includes a governance gap analysis against Privacy Act 2026 obligations. Or contact us to discuss your specific situation.

Want this applied to your business?

Book a discovery call. We'll map your specific exposure to the rules and lay out the 90-day plan to address it.
