Claude Code Austria screenshot used for the mortgage broker automation article

A Beautiful Idea for a Mortgage Broker That Will Break on the Third Client

Stanislav Kapustin Apr 22, 2026 ai · automation · mortgage · broker · workflow

I was scrolling through job postings and came across one from an Australian mortgage broker looking for a developer. I opened it, started reading — and couldn’t close it.

Not because the offer was interesting. But because I immediately saw how this would end.

But let’s first understand exactly what this person wants to build. Because the scale of the idea matters — without it, neither the excitement nor the problem makes sense.

What the Broker Wants

Picture a mortgage broker. Every day clients come to him wanting to buy property with a mortgage. Every client brings a folder of documents. Or drops files into Dropbox. Or sends them scattered across a messenger app.

Then the manual work begins. The same work, deal after deal, over and over again.

First, figure out what actually arrived. This is a payslip. This is a bank statement. This is a tax return. And this is missing — no ATO notice, without which the application won’t even be looked at.

Then rename everything. Because the client sent files called “scan 3”, “document final 2”, “photo_2024”. The bank won’t accept that.

Then extract the numbers from all these papers. How much the person earns. What loans they have. What expenses. What assets. All of it needs to be consolidated into one table — clean, structured, error-free.

Then calculate borrowing power — the maximum the bank is willing to lend this client. Then compare several scenarios — what if 25 years, what if 30, what if a different deposit amount.
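The scenario-comparison step, at least, is ordinary annuity arithmetic. A minimal sketch — the rate and surplus figures are illustrative, not any lender's actual servicing policy:

```python
def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    """Standard annuity repayment: P * r / (1 - (1 + r)^-n), r = monthly rate."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def max_loan(monthly_surplus: float, annual_rate: float, years: int) -> float:
    """Invert the annuity formula: the largest principal whose repayment
    fits inside the client's assessed monthly surplus."""
    r = annual_rate / 12
    n = years * 12
    return monthly_surplus * (1 - (1 + r) ** -n) / r

# Compare a 25- vs 30-year term for a client with a $3,000/month assessed
# surplus at a 6% p.a. assessment rate (illustrative numbers only).
for years in (25, 30):
    print(years, round(max_loan(3000, 0.06, years)))
```

The longer term buys more borrowing power at the cost of more total interest — exactly the trade-off the broker writes up for each scenario.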

And finally — write the credit submission. Not just fill out a form, but write submission notes — a living document that explains to the bank why this client is reliable, why the deal is good, why it should be approved. Every broker has their own style, their own logic, their own language.

And all of this — for every client. Again and again.

The broker wants one system to do all of this. Automatically.

Here’s specifically what he describes in the job post:

  • The system should pull files from Dropbox or Google Drive on its own. No manual uploading — it simply reads the client folder and works with whatever is there.
  • The system should identify the document type. Is this a payslip or a bank statement? Is it from January or March? Is it CBA or Westpac? And rename files into a consistent format: Payslip_Jan_2026_ClientName.pdf, BankStatement_CBA_Mar2026.pdf.
  • The system should check completeness. Compare what arrived against a checklist — and clearly flag what’s missing. No bank statement for the last three months. No tax return for 2024. This saves the broker hours of back-and-forth with clients.
  • The system should extract financial data. From PDFs, from scans, from spreadsheets — income, liabilities, expenses, assets. All clean, structured, consistent across every deal.
  • The system should calculate borrowing power and compare scenarios. With a list of risks, pros and cons for each option.
  • The system should generate submission notes. Ready-to-send text for the bank — in a fixed structure, in the broker’s own style, clean and ready to go.
  • And finally — the system should learn. The broker wants to upload his past approved deals, his old submission notes. And have the system gradually absorb his logic, his style, his approach to underwriting. Getting smarter with every new deal.
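The classification-and-renaming items on that list are buildable with very ordinary code. A minimal sketch, with hypothetical patterns and helper names — a real system would classify from the document's extracted text (for example, via an LLM call on the first page), not just keyword cues:

```python
import re

# Illustrative patterns only — a real checklist of types would be longer.
DOC_PATTERNS = {
    "Payslip": re.compile(r"payslip|pay slip|salary", re.I),
    "BankStatement": re.compile(r"statement|cba|westpac|anz|nab", re.I),
    "TaxReturn": re.compile(r"\btax\b|\bato\b|notice of assessment", re.I),
}

def classify(text: str) -> str:
    """Guess a document type from text cues; fall back to Unknown."""
    for doc_type, pattern in DOC_PATTERNS.items():
        if pattern.search(text):
            return doc_type
    return "Unknown"

def target_name(doc_type: str, month: str, year: int, client: str) -> str:
    """Build the consistent filename the broker wants."""
    return f"{doc_type}_{month}_{year}_{client}.pdf"

# "scan 3.pdf" tells us nothing, but the first page's text usually does.
print(classify("CBA transaction statement, March 2026"))   # BankStatement
print(target_name("BankStatement", "Mar", 2026, "Smith"))  # BankStatement_Mar_2026_Smith.pdf
```

Note that `Unknown` is a first-class outcome here: anything the rules can't place gets flagged for a human, not guessed at.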

This isn’t just automating routine work. This is a full AI underwriter that thinks like an experienced broker.

Where This Idea Comes From

Here’s what matters: the broker didn’t pull this out of thin air. He’s already working with Claude Cowork — an Anthropic product that lets you work with Claude as a smart desktop assistant, connect files, ask complex questions, get structured answers.

He’s already tried it. Already uploaded a client’s documents, asked a question — and got an answer in seconds. Clean, structured, professional.

It feels like magic. I understand that feeling. When you first see AI take a stack of scanned documents and hand you back a complete financial summary — it genuinely looks like something impossible just happened.

And now the broker wants to harness that magic. Package it into a system. So it runs on its own, without his involvement, equally well for every client.

The desire is completely understandable. More than that — most of this list is genuinely buildable. I’ve assembled things like this myself, piece by piece.

But there’s one line that changes everything.

The One Line That Breaks Everything

“The system should learn from my past approved deals and improve over time.”

I stopped at this sentence. Because this is exactly where the beautiful idea starts to fall apart.

Let me explain why.

There are two working approaches to AI automation.

The first is the workflow approach. You build clear chains: if a document of type X arrives, do Y. If the data looks like this, generate that template. Everything is predictable, everything is controlled. Every step is transparent. If something breaks — you know where and why. This works. Reliably and in real-world conditions.
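Here is the workflow approach in miniature — an explicit, inspectable mapping from document type to handler. The handler names are illustrative stubs, not a real extraction library:

```python
# Each handler is a plain function; the routing table is the whole "brain".
def handle_payslip(doc: str) -> dict:
    return {"income": "extract from " + doc}

def handle_statement(doc: str) -> dict:
    return {"liabilities": "extract from " + doc}

def handle_unknown(doc: str) -> dict:
    # Unrecognized input fails loudly into a review queue instead of
    # being silently "interpreted".
    return {"flag_for_review": doc}

HANDLERS = {
    "Payslip": handle_payslip,
    "BankStatement": handle_statement,
}

def route(doc_type: str, doc: str) -> dict:
    return HANDLERS.get(doc_type, handle_unknown)(doc)
```

If something breaks, the failure points at one named function — that transparency is the entire argument for this approach.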

The second is the agentic approach. You give the AI a task and say: figure it out yourself. The agent decides what to do, which tools to use, how to interpret the data. This looks impressive in demos. But in real business processes — especially where the cost of error is high, financial data, legal documents, credit decisions — this is still a rough approach. The agent can make mistakes, and you often won’t know exactly where things went wrong.

The combination sounds tempting: several clear workflows, with an agent in the middle that calls on them and requests the right data. It’s an interesting architecture. But it requires very clear thinking about what exactly the agent receives, which tools it has access to, and where its authority ends.

Now add a “learning system” on top of that.

What does that actually mean in practice? The AI looks at a new case, compares it to past ones, and then — what? If it can’t change the workflow logic, it isn’t learning, it’s just using references. That’s retrieval, not learning. Useful, but a completely different thing.

And if it can change the logic — the problems begin. Because every client is different. Every deal has its own nuances. The system adapts to a case with a self-employed client with a complex income structure — and breaks the standard scenario for a salaried employee. This is exactly how agents fail. Not from the complexity of any single task. From the sheer diversity of real people and real situations.

Why AI Doesn’t Learn the Way We Think It Does

There’s another fundamental problem that rarely gets talked about openly.

AI treats everything as a signal.

A client phrased their application slightly differently than usual — signal. Documents arrived in a different order — signal. The broker added a new phrase to the submission notes — signal. The system starts reacting to this, adapting, adjusting its logic.

But most of these “signals” are noise. Random variations, one-off exceptions, quirks of a specific client that will never repeat. AI still cannot reliably separate signal from noise. This is one of the hardest open problems in the field.

Which means a “learning system” in practice often means a system that degrades under the pressure of real-world data diversity.

What Actually Works

Does this mean the broker’s idea is impossible? No. Most of it is completely achievable.

Reading files from Dropbox and Google Drive via API — buildable. Identifying document type and renaming — buildable. Checking completeness against a checklist — buildable. Extracting financial data from PDFs using Claude — buildable. Generating submission notes from a template — buildable.
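The completeness check, for example, is just set arithmetic once documents are classified. A sketch with an illustrative checklist — not an actual lender's requirements:

```python
# Hypothetical checklist entries; a real one would come from the broker's
# own intake requirements, possibly varying by lender and client type.
REQUIRED = {
    "Payslip_recent",
    "BankStatement_3_months",
    "TaxReturn_2024",
    "ATO_NoticeOfAssessment",
    "ID_document",
}

def missing_documents(received: set[str]) -> list[str]:
    """Diff what arrived against the checklist; sorted for a stable report."""
    return sorted(REQUIRED - received)

print(missing_documents({"Payslip_recent", "ID_document"}))
```

That list of missing items is exactly the email the broker currently writes by hand after every intake.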

All of this can be assembled into a reliable workflow. Through Make or Zapier, through the Claude API, through the Google Drive API. Without agents, without complex architecture, without trying to teach the system to think independently.

And what about learning? This is where retrieval works — RAG (retrieval-augmented generation). The broker uploads his past approved deals and submission notes into a knowledge base. When the system generates a new application, it finds similar past cases and uses them as reference. This is not machine learning. This is not model training. But it genuinely improves the quality of generation, preserves the broker’s style, and reflects his logic.
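A minimal sketch of that retrieval step, using bag-of-words cosine similarity as a stand-in for a real embedding model and vector store (in practice you would embed the notes and query with an embedding API and keep them in a vector database):

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy bag-of-words vector; a real system would use embeddings."""
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_matches(query: str, past_notes: list[str], k: int = 2) -> list[str]:
    """Rank past submission notes by similarity to the new deal's summary."""
    q = vectorize(query)
    ranked = sorted(past_notes, key=lambda n: cosine(q, vectorize(n)), reverse=True)
    return ranked[:k]

# Illustrative past notes — stand-ins for the broker's approved deals.
past = [
    "self-employed applicant, strong servicing, approved with two years of financials",
    "salaried applicant, 90 percent LVR, approved with LMI",
    "investment property, interest-only, declined on servicing",
]
print(top_matches("self-employed applicant with two years of financials", past, k=1))
```

The retrieved notes then go into the generation prompt as reference material. The knowledge base only ever grows by the broker adding deals to it — nothing rewrites the workflow logic behind his back.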

Honest and predictable. No magic, but real results.

What to Actually Expect

Here’s the most important thing to understand before starting any project like this.

There will be no system that improves itself. There will be a system that constantly needs to be updated by hand. The type of borrower changed — update the logic. Bank requirements changed — update the templates. A new document type appeared — update the detection.

This isn’t a flaw in the system. This is the nature of AI automation right now. And the sooner you accept it as a given, the more honest the conversation will be — with the client, with the developer, with yourself.

The broker has already experienced the magic. And that magic is real — Claude can genuinely do things that seemed impossible three years ago. But magic and an autonomous self-learning system are two different things.

One is buildable. The other isn’t. Not yet.

Better to understand that before you start than to explain after the third client why everything went wrong.


Need a similar system in your business?

If you have a manual workflow between tools, I can help map the logic, design the system, and automate it in a way your team can actually use.
