#ai #politics #tech

How I Accidentally Started Trying to Start a Political Party

Paul Seymour · March 3, 2026 · 7 min read

The train from the Sunshine Coast to Brisbane takes about 1.5 hours if you're lucky with connections. Dan Dekel and I make this commute regularly — he's my co-CEO at Patient Zero alongside Demelza Green — and over the years we've developed a habit of solving all of Australia's problems on the way to meetings. Economics, deglobalisation, politics, housing, climate change. The usual Tuesday morning routine for two tech nerds with too much time and strong opinions.

This particular trip started normally enough. We were discussing something mundane — probably microservices architecture or why Change Advisory Boards are the devil — when the conversation somehow pivoted to the quality of political discourse in Australia. Dan mentioned watching some parliamentary question time the night before and how utterly theatrical it all was.

"Here's what I don't get," he said, looking out at the passing sugarcane fields. "These are supposedly intelligent people with access to Treasury modelling and departmental briefings, but the policy discussion is just... performance art."

I knew exactly what he meant. You watch question time and it's all political positioning and gotcha moments. Someone asks about housing affordability, and somehow it becomes a shouting match about industrial relations law from 2019. Meanwhile, we spend our days building systems that have to actually work, where evidence and logic matter because if you're wrong, things break in measurable ways.

"What if," Dan said, "you asked AI the same policy questions they're pretending to debate?"

This is how trouble starts. One of us has an idea that sounds reasonable, and before you know it, we're three beers deep at the Caloundra RSL registering a political party. But I was curious, so I opened Claude on my phone and typed: "What are the most evidence-based approaches to housing affordability in Australia?"

Thirty seconds later, I had a structured analysis covering supply-side constraints from zoning laws, demand-side impacts of negative gearing, social housing models from Vienna and Singapore, with citations to AHURI research and RBA economic modelling. The response was longer than most ministerial press releases and infinitely more useful than anything I'd heard in parliament.

Dan read it over my shoulder. "That's actually... good policy work."

And that was the moment we accidentally started thinking seriously about this.

The Research Rabbit Hole

Being software architects means we can't leave these things alone. By the time we reached Central Station, we'd agreed to systematically test AI responses across policy domains. Climate change adaptation. Digital infrastructure. Healthcare workforce planning. Tax reform. Every query returned detailed analyses with stakeholder impact assessments, overseas case studies, and implementation timelines.

This wasn't generic ChatGPT output either. When Dan asked about carbon pricing mechanisms, it referenced Sabine Hossenfelder's physics work on climate sensitivity, compared the EU ETS implementation with our failed 2012 carbon tax, cited economic modelling from Nicholas Stern's review, and outlined a phased approach that specifically addressed regional impacts on mining communities in the Hunter Valley and Pilbara.

We hadn't asked for that level of detail. We'd asked a policy question and received what looked suspiciously like actual policy work. The kind you'd expect from a competent department with time to think through implementation details rather than just political messaging.

The more we experimented over subsequent commutes, the more obvious the disconnect became. We were getting better policy analysis from a language model than we were seeing from the institutions we pay to develop policy. This felt like a problem worth understanding.

Systems Thinking

Running Patient Zero has taught me that most organisational problems aren't technical problems — they're process and incentive problems. Political parties optimise for winning elections, not for developing good policy. They hire communications advisers and focus group moderators, not policy researchers. They measure success in polling numbers and media mentions, not in evidence quality or implementation outcomes.

The AI wasn't better because it was smarter. It was better because it had different constraints. It wasn't trying to avoid criticism or placate donor groups or position for the next election cycle. It was just trying to answer the question I'd asked using the best available evidence.

This started me thinking about system design. What if you structured political decision-making differently? What if you separated the research function from the political function? AI handles evidence gathering and analysis. Humans handle value judgements and trade-off decisions. It's basically the same pattern we use in software development — automate the parts that can be automated, keep humans focused on the parts that require judgement.
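Since we think in code, here's a minimal sketch of that separation of concerns. All the names here are illustrative, not from any real system: a research phase produces an evidence brief, and a distinct decision phase takes that brief plus a human-supplied judgement function.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EvidenceBrief:
    """Output of the automated research phase: findings plus caveats."""
    question: str
    findings: list[str]
    caveats: list[str] = field(default_factory=list)

def research_phase(question: str) -> EvidenceBrief:
    # In a real system this would call a language model and collate
    # citations; here it returns a canned brief just to show the shape.
    return EvidenceBrief(
        question=question,
        findings=["supply-side constraints", "demand-side incentives"],
        caveats=["costings unverified for local conditions"],
    )

def decision_phase(brief: EvidenceBrief,
                   decide: Callable[[EvidenceBrief], str]) -> str:
    # Humans own the value judgement; the brief is only an input to it.
    return decide(brief)

brief = research_phase("How do we improve housing affordability?")
decision = decision_phase(brief, lambda b: f"Prioritise: {b.findings[0]}")
print(decision)  # Prioritise: supply-side constraints
```

The point of the split is the same as in any pipeline: the research function can be audited, tested, and swapped out without touching the part where humans weigh trade-offs.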

The Accidental Registration

A few weeks later, we were explaining this concept to some mates at the Caloundra RSL — policy development as a two-phase process with AI doing research and humans making decisions — when someone said something that changed everything: "Mate, you've basically designed a political party. You should just register the thing."

Under normal circumstances, this would have been the point where reasonable people laugh and move the conversation on to rugby league or property prices. But we'd spent the evening debugging a particularly messy API integration, we were three beers in, and the idea had the same appeal as most 11pm software architecture decisions: it seemed both obvious and inevitable.

The Australian AI Party. A political organisation that uses artificial intelligence for policy research and evidence analysis, with human representatives making final decisions based on community values and democratic mandate. The AI provides the homework. Humans provide the judgement. Transparent about the process, clear about the constraints.

Dan and I decided to try registering it the following week. Here's what we learned: starting a political party isn't as simple as registering a business name. The Australian Electoral Commission requires 1,500 eligible voters to sign up as members before they'll even consider your application for registration. We've got a long way to go.

Reality Testing

Of course, the AI research wasn't perfect. It couldn't be — these models have training data cutoffs, knowledge gaps, and occasional spectacular blind spots that reminded me why human oversight matters.

When we asked it to analyse broadband infrastructure policy for regional Australia, it produced a comprehensive plan for fibre rollout to rural communities, complete with cost estimates and implementation timelines. Impressive work, except the costings were based on US labour rates and didn't account for the fact that getting NBN contractors to some of these locations involves four-hour drives and hazard pay rates. Off by about 300%.

Another time, it confidently recommended housing affordability solutions that involved "encouraging population migration to high-opportunity regional centres" and then listed three towns whose major employers had closed in the past five years. Apparently the AI thought encouraging people to move to economically depressed areas was sound population policy.

Dan's favourite error was a detailed analysis of Australian agricultural exports that kept referencing our "major rice industry." We grow rice. We're not a major rice exporter. That's like describing my weekend home brewing as a craft beer enterprise.

Each time, we'd push back. "Those cost estimates are wrong for Australian conditions." "Those towns don't have the employment base to support population growth." "Rice isn't a major export sector." The AI would acknowledge the correction, adjust its analysis, and produce better output. This is exactly the kind of iterative process that makes sense — AI provides research capabilities, humans provide local knowledge and reality checking.
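That review loop is simple enough to sketch too. Again, every name below is hypothetical: a human reviewer flags claims against local knowledge, and flagged claims are sent back for correction before the analysis is accepted.

```python
def reality_check(draft: dict, known_issues: dict) -> list[str]:
    """Return the claims a human reviewer has flagged as wrong."""
    return [claim for claim in draft["claims"] if claim in known_issues]

def revise(draft: dict, flagged: list[str], known_issues: dict) -> dict:
    # Replace only the flagged claims with the human-supplied correction.
    corrected = [known_issues[c] if c in flagged else c
                 for c in draft["claims"]]
    return {**draft, "claims": corrected}

draft = {"claims": ["US labour rates apply", "rice is a major export"]}
known_issues = {
    "US labour rates apply": "use Australian regional labour rates",
    "rice is a major export": "rice is a minor export sector",
}

flagged = reality_check(draft, known_issues)
draft = revise(draft, flagged, known_issues)
print(draft["claims"])
# ['use Australian regional labour rates', 'rice is a minor export sector']
```

In practice the "revise" step is the model regenerating its analysis, but the control flow is the same: nothing ships until the human pass comes back clean.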

The Irony Problem

Yes, we used AI to create a political party about using AI responsibly. The recursion is not lost on us. It's like using a computer to write software that manages computers, or hiring consultants to advise on consultant management. The meta-level absurdity is part of the point.

But the irony actually reinforces the core argument. AI tools are powerful and useful, but they're not infallible. They need human oversight, reality testing, and value-based decision making. The AAIP concept exists because we tried the tools, found them valuable but flawed, and concluded they'd work best in a system designed around those characteristics.

Most political organisations either ignore these capabilities entirely or use them without transparency. We're suggesting a middle path: use AI for what it's good at (research and analysis), acknowledge its limitations (context gaps and value judgements), and design processes that account for both.

What Actually Matters

Even if we manage to collect 1,500 signatures and get the Australian AI Party officially registered, it'll probably never win a federal seat. The major parties have resources, brand recognition, and institutional advantages that make electoral success extremely unlikely for new entrants. But electoral success was never the primary objective.

The objective is demonstrating that policy development can be more evidence-based, more transparent, and more systematically rigorous than current approaches. That research and analysis can be separated from political positioning. That you can use powerful tools responsibly if you design appropriate processes around them.

We have technology that can make democratic institutions more effective. Instead of pretending it doesn't exist or using it secretly, we're proposing to make the process transparent. AI does the research, humans make the decisions, everyone can see how both parts work.

This is what happens when you give two software architects a long train commute, parliamentary question time, and API access. Eventually, you start thinking about better system designs. Now we just need to convince 1,500 Australians that it's worth a shot.
