EasiraAI.

STRATEGY

How to design an AI pilot that qualifies for the R&D Tax Incentive

The R&DTI 43.5% refundable offset applies to qualifying AI pilots — but only if the project is designed and documented correctly from day one. Here's how.

Published 16 May 2026 · 9 min read


By the EasiraAI editorial team

The R&D Tax Incentive is one of the most underused mechanisms available to Australian mid-market firms investing in AI. A firm with under $20 million aggregated turnover can claim a 43.5% refundable tax offset on qualifying R&D expenditure — meaning the government effectively funds 43.5 cents of every qualifying dollar, including cash refunds where the company has no tax liability.

Most mid-market firms either don't know this applies to AI work, or they find out at tax time and try to retrofit documentation onto work that wasn't designed to be claimed. Neither produces a defensible claim.

This article covers how to design an AI pilot from the start to qualify for R&DTI — what the ATO and AusIndustry actually require, where mid-market AI projects typically fall short of the standard, and the specific documentation practices that produce a claim that survives scrutiny.

What the R&DTI actually requires

The core test is straightforward, but the application requires precision.

A "core R&D activity" — defined in section 355-25 of the Income Tax Assessment Act 1997, with registration administered under the Industry Research and Development Act 1986 — must satisfy three criteria:

  1. Experimental in nature — the activity must be designed to generate new knowledge through experiment. It must have a hypothesis (or hypotheses) that you are testing.

  2. Technically uncertain outcome — the outcome must not be knowable in advance by a competent professional in the field, even with access to current knowledge. If you are deploying a known AI architecture in a standard way, the outcome is not technically uncertain. If you are doing something genuinely novel — a new combination of architectures, a novel training approach for your domain, a new integration method — there is a legitimate uncertainty argument.

  3. Systematic progression — the activity must follow a systematic approach: form a hypothesis, design and run an experiment, observe and evaluate the results, and draw logical conclusions that confirm or revise the hypothesis.

The ATO's guidance on software R&D (specifically the 2017 guidance note on software as a core R&D activity, updated in 2022) makes clear that routine software development, debugging, and system integration do not qualify. What can qualify is genuinely experimental work — work where the technical outcome is not determinable in advance.

The R&DTI is not a subsidy for AI adoption. It is a subsidy for AI experimentation — and that distinction matters enormously for how you design the project and document the work.

Where mid-market AI projects typically qualify

The following categories of AI work commonly have legitimate R&DTI eligibility for mid-market firms:

Custom model development and fine-tuning. Training or fine-tuning a model on a domain-specific dataset — particularly where the right architecture, training approach, or data augmentation strategy is genuinely uncertain. For example: fine-tuning a classification model on a corpus of AU legal clauses where the training data size, label taxonomy, and optimal architecture are all experimental decisions.

Novel RAG pipeline design. Retrieval-augmented generation is well-understood as a category, but the specific design decisions for a novel domain can involve genuine technical uncertainty: the optimal chunking strategy for a specific document type, the right embedding model for a domain with unusual vocabulary, the retrieval and re-ranking approach for a specific query pattern. These are testable hypotheses, not engineering choices.
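One way to make "testable hypotheses, not engineering choices" concrete is to frame the design decisions as an explicit experiment grid before any runs happen. A minimal sketch, with entirely illustrative strategy and model names (not from any specific library):

```python
from itertools import product

# Hypothetical experiment grid for a RAG pilot: each combination is a
# testable hypothesis about chunking and embedding choices, recorded
# before the runs begin. All names below are illustrative.
chunk_strategies = ["fixed_512_tokens", "per_clause", "sliding_256_overlap"]
embedding_models = ["general_purpose", "domain_finetuned"]

experiments = [
    {"id": f"rag-{i:02d}", "chunking": chunking, "embedding": embedding}
    for i, (chunking, embedding) in enumerate(
        product(chunk_strategies, embedding_models), start=1
    )
]
# Six runs: each one measures retrieval quality against a fixed query
# set, so every result confirms or invalidates a specific hypothesis.
```

Enumerating the grid up front also gives you the experiment IDs that the later documentation (hypothesis log, experiment records, timesheets) can reference consistently.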

Novel automation architectures. Where a workflow automation or agentic AI system involves genuine experimentation — a new approach to multi-step agent design, a novel integration between systems that has not been done before, an experimental approach to exception handling or decision logic — there is an eligibility argument.

Domain-specific model evaluation and bias assessment. Developing novel evaluation frameworks for AI systems in AU-specific regulatory contexts (financial services, healthcare, legal) — where the right metrics and benchmarks are themselves uncertain — can qualify as core R&D.

The categories that typically do not qualify:

  • Deploying an existing model via API with standard configuration
  • Building automations using documented tools (n8n, Power Automate) in standard ways
  • System integration and data migration without experimental components
  • UI and application layer development around an AI model

The documentation requirements

This is where most retrospective claims fail. AusIndustry and the ATO require contemporaneous documentation — records made at the time of the activity, not reconstructed at year end.

For a qualifying AI project, the minimum documentation framework is:

Hypothesis log

For each experimental component, a written record of:

  • What technical problem or question you are trying to answer
  • Your hypothesis about the approach that will work and why
  • The alternative approaches you considered and why they were uncertain
  • The specific outcome you are trying to achieve and how you will measure it

This does not need to be long — a one-page technical log entry per experiment is sufficient. What it cannot be is: "we tried it and it worked" written after the fact.
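If your team prefers structured records over free-form notes, the one-page log entry can be captured as a small data structure. A sketch, with illustrative field names and an invented example entry (the substance mirrors the four bullets above):

```python
from dataclasses import dataclass, field
from datetime import date

# A minimal sketch of a contemporaneous hypothesis log entry.
# Field names are illustrative; what matters is that each entry is
# dated and written before the experiment runs, not after.
@dataclass
class HypothesisEntry:
    logged_on: date
    question: str              # the technical problem being answered
    hypothesis: str            # the approach expected to work, and why
    alternatives: list[str] = field(default_factory=list)
    success_metric: str = ""   # how the outcome will be measured

entry = HypothesisEntry(
    logged_on=date(2026, 7, 14),
    question="Can a fine-tuned classifier reach 90% F1 on AU legal clauses?",
    hypothesis="A domain-adapted base model will outperform a general one",
    alternatives=["zero-shot prompting of a hosted general-purpose LLM"],
    success_metric="macro F1 >= 0.90 on a held-out clause set",
)
```

Whether this lives in code, a wiki, or a shared document matters less than the date stamp and the fact that the hypothesis predates the result.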

Experiment records

For each experiment run:

  • Date and personnel involved
  • Configuration used (model, architecture, hyperparameters, data splits as relevant)
  • Results obtained (metrics, outputs, failure modes)
  • Conclusion: did the result confirm or invalidate the hypothesis, and what does that mean for the next experiment?

In an AI development context, this maps naturally to model training logs, evaluation run records, and version control history — if you instrument them correctly from the start.
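One lightweight way to instrument this from the start is an append-only log of experiment runs, one JSON line per run. A sketch (function and field names are illustrative; the fields mirror the list above):

```python
import json
from datetime import datetime, timezone

def record_experiment(path, config, results, conclusion, personnel):
    """Append one contemporaneous experiment record as a JSON line.

    Illustrative sketch: an append-only file gives you a dated,
    tamper-evident trail alongside version control history.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "personnel": personnel,    # who ran the experiment
        "config": config,          # model, hyperparameters, data splits
        "results": results,        # metrics, outputs, failure modes
        "conclusion": conclusion,  # confirmed/invalidated, and next step
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Calling this at the end of every training or evaluation run costs a few lines in the pipeline and produces exactly the dated evidence an audit asks for.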

Expenditure records

For the claim to be quantified, you need clear time records showing which personnel worked on which R&D activities (as distinct from non-qualifying development work), and which costs (including subcontractor costs for qualifying work) were incurred on those activities.

The distinction between qualifying R&D time and non-qualifying development time needs to be made contemporaneously. "The whole project was R&D" is not a defensible position. Neither is "we decided what was R&D at the end of the year." You need a project plan that identifies the R&D components and a time-tracking practice that records against them.
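The contemporaneous split described above reduces to tagging each time entry with an activity code from the project plan and apportioning hours at claim time. A sketch, with invented activity codes:

```python
# Sketch: apportioning contemporaneous time records between qualifying
# R&D activities and non-qualifying development work. The activity
# codes are illustrative and should come from the project plan's
# identified R&D components, agreed before work starts.
QUALIFYING = {"rag-experiments", "model-finetuning"}

def split_hours(entries):
    """entries: iterable of (activity_code, hours) pairs.
    Returns (qualifying_hours, non_qualifying_hours)."""
    qualifying = sum(h for code, h in entries if code in QUALIFYING)
    non_qualifying = sum(h for code, h in entries if code not in QUALIFYING)
    return qualifying, non_qualifying

timesheet = [
    ("rag-experiments", 12.0),
    ("ui-build", 20.0),          # application-layer work: not claimable
    ("model-finetuning", 8.0),
]
# Only half of this week's 40 hours would support a claim.
```

The point of the code is the discipline, not the arithmetic: the split is only defensible because each entry was coded when the work happened.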

Technical narrative

At the end of the project (or at the AusIndustry registration point — registration must occur by 30 April for the preceding income year), a technical narrative describing:

  • The knowledge gap the project addressed
  • Why the knowledge was not publicly available
  • What experiments were conducted and what was learned
  • The new knowledge generated (even if the experiments partly failed)

The narrative does not require the project to have succeeded. A failed experiment that generated genuine learning about technical uncertainty can qualify. A successful project that just deployed known approaches does not.

The AusIndustry registration process

Registration with AusIndustry (the Industry R&D portal, now administered through the Business.gov.au R&D portal) must be completed by 30 April following the income year in which the activities were conducted. This means:

  • For activities in FY2026 (ending 30 June 2026), registration is due 30 April 2027
  • Registration is prospective or concurrent — you can register activities as you plan them — but the claim is submitted in the tax return after the income year closes
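The 30 April deadline reflects the general rule that registration is due 10 months after the end of the income year. A small sketch of that arithmetic (illustrative helper, standard 30 June year end assumed; day-of-month edge cases for other year ends are ignored):

```python
from datetime import date

def registration_deadline(income_year_end: date) -> date:
    """Sketch: AusIndustry registration is due 10 months after the
    income year ends (30 April for a standard 30 June year end)."""
    month = income_year_end.month + 10
    year = income_year_end.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    return date(year, month, income_year_end.day)

# For FY2026 (ending 30 June 2026), the deadline is 30 April 2027.
```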

The registration requires: a description of the core R&D activities, the technical hypotheses, the experimental approach, and the estimated eligible expenditure. The ATO may request supporting documentation; the hypothesis log and experiment records are your primary evidence.

Most mid-market firms engaging an R&DTI specialist (a tax adviser with specific AI R&D experience) for the first time should allow four to six weeks for the first registration. Subsequent years are faster once the template is established.

What the 43.5% offset means in practice

For a company with under $20 million aggregated turnover:

| Scenario | Qualifying expenditure | Offset (43.5%) | Cash position |
|----------|------------------------|----------------|---------------|
| $60K AI pilot (mixed R&D/dev) | $35K qualifying | $15,225 | Cash refund if no tax liability |
| $120K custom LLM build | $80K qualifying | $34,800 | Credit against tax or cash refund |
| $200K legacy modernisation + AI layer | $100K qualifying | $43,500 | Credit against tax or cash refund |

The refundable nature is significant. A company with no current tax liability — which includes many mid-market firms in growth investment mode — receives a cash payment from the ATO for the qualifying offset. This is not a deduction; it is a cash return.
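The refund mechanics above reduce to simple arithmetic: the offset first reduces tax payable, and any excess is paid out in cash for eligible companies. A sketch (illustrative function, not a tax calculation tool):

```python
# Sketch of the refundable-offset arithmetic for a company under the
# $20M aggregated turnover threshold, using the 43.5% rate from the
# table above. Illustrative only; a registered tax agent does the
# actual claim calculation.
REFUNDABLE_RATE = 0.435

def rd_offset(qualifying_expenditure: float, tax_liability: float):
    """Return (offset, cash_refund). The offset offsets tax payable
    first; the remainder, if any, is a cash refund."""
    offset = qualifying_expenditure * REFUNDABLE_RATE
    cash_refund = max(0.0, offset - tax_liability)
    return offset, cash_refund

# $80K qualifying spend with no current tax liability yields an
# offset of $34,800, all of it received as cash.
```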

For firms at or above $20 million aggregated turnover, the offset is non-refundable (unused amounts can be carried forward) and, since July 2021, is calculated as the company's tax rate plus an intensity premium of 8.5 or 16.5 percentage points, depending on R&D expenditure as a proportion of total expenses. The calculation above the threshold is more complex and warrants specialist advice.

How the Readiness Audit fits in

The AI Readiness Audit includes an R&DTI eligibility memo as a standard deliverable. This memo reviews your planned AI activities against the core R&D activity criteria, identifies which components are likely to qualify, and outlines the documentation requirements for those components.

This is worth having before you start the project, not at the end of the financial year. The documentation practices need to be in place from day one. A memo that says "yes, this might qualify" written at tax time is not the same as a project that was instrumented to produce qualifying evidence.

The memo is not a formal R&DTI advice document — you will need a registered tax agent to lodge the claim. It is a preliminary assessment that tells you whether engaging a specialist is warranted and what they will need.

The compliance risk of getting it wrong

The ATO audits R&DTI claims and has increased scrutiny of software and AI claims since 2021. The consequences of a claim that doesn't survive audit are not just the offset being reversed — they can include penalties and interest.

The risk is not in claiming the offset for genuinely qualifying work. The risk is in claiming work that does not qualify: routine software development, standard system integration, or documentation work, presented as core R&D. The documentation framework described above is your protection against this risk — if every experiment has a hypothesis, contemporaneous records, and a clear technical uncertainty argument, the claim is defensible.


Planning an AI investment in the next 12 months?

The AI Readiness Audit includes an R&DTI eligibility memo as a standard deliverable — worth knowing before you commit the budget. Contact us to discuss your planned program and whether R&DTI applies.

Want this applied to your business?

Book a discovery call. We'll map your planned AI program against the R&DTI criteria and the documentation practices you'll need in place from day one.