Outward portfolio company, Orbital, is building an all-in-one property diligence automation platform for real estate lawyers and property professionals.
Orbital has been on a multi-year journey from building classical machine learning to embedding frontier LLMs and agentic workflows, in one of the most trust-sensitive categories in software: legal. Backed by Outward VC since 2020, Orbital has now closed a $60m Series B round, led by Brighton Park Capital.
Outward VC Principal Sanchit Dhote sat down with Andrew Thompson, Orbital’s CTO, to discuss his leap into legaltech, what it takes to “ship early and often” in a regulated, trust-heavy market, and the operating principles he uses to keep product, engineering and commercial reality in lockstep.
Previously CTO at Appear Here and VP of Engineering at Yoyo Wallet, Andrew now applies his execution discipline to building domain-specific AI workflows for real estate legal due diligence.
1. Journey to Orbital: what drew you to real estate and legal tech, and to Orbital specifically?
Before Orbital, I was at Appear Here, which was an Airbnb-for-the-high-street model. We worked with landlords across cities like London, Paris and New York, and helped brands run pop-ups and short-term retail. It was a rapidly growing business, but when Covid hit and the high street shut overnight, the whole model got tested very quickly. We had to cut deep to survive.
Around that time, I got introduced to Will Pearce, CEO at Orbital, and the team. I dealt with a lot of real estate transactions globally at Appear Here and I could clearly see the problems that needed solving. I knew real estate due diligence was an important industry that had not been properly disrupted for a long time. Real estate is also the world’s largest asset class and the workflows around transactions are incredibly consequential.
From an engineering perspective, it was also a genuinely hard yet tractable problem. It was one of those rare intersections where the domain is deep, the stakes are high, and the opportunity to build something meaningfully better was real. After a few conversations with Will and the team, I decided to get involved as CTO in the company’s infancy.
2. You joined relatively early. What gave you conviction, especially working with young founders?
At the time, I was not actively looking for an early-stage challenge. I’d been a founder before, and I knew I could do pre-product-market-fit work, but I didn’t think that was my sweet spot.
Orbital had something interesting though: one foot on solid ground and one foot in the unknown. There was a product that customers already loved, but the bigger mission, automating real estate due diligence properly, still needed true product-market fit. That combination was exciting and honestly a bit scary.
On the “young founders” point: age is not the issue, but inexperience can be both an asset and a liability. The asset is energy, tenacity and a willingness to stare down problems that an entire industry has accepted for decades. The liability is getting out over your skis with limited best practice around selling, product delivery, and the realities of building in a trust-heavy market.
Orbital’s two founders indexed highly on the asset side. To mitigate the liability side, we found a rhythm of very honest, direct conversations about building a category-defining business by deeply solving customer problems amidst a ton of uncertainty. I was trusted for my prior experience while also being held accountable for the impact I delivered on the job.
3. On your personal blog you say you “ship early and often”. What does that mean in practice at Orbital?
Part of it is a personal mantra, and part of it is a very practical operating principle.
I’ve seen, and sometimes lived, the disconnect between engineering and the business. When engineering and commercial reality fall out of sync, the company suffers. Shipping early and often is a way to keep everyone anchored on outcomes: build the smallest thing that tests the hypothesis, put it in users’ hands, and learn repeatedly.
No matter how smart the people who make product decisions are, the market is always smarter. Until you ship, you are making educated guesses. You might be guessing with experience and good data, but you are still guessing. The fastest way to get out of opinion and into truth is to ship something, then iterate or, alternatively, bin it and start again.
4. How do you balance speed with reliability, security, and trust, given your users are law firms handling sensitive data?
There are non-negotiables, and we’re very clear about them.
We go through serious InfoSec processes, we’re ISO 27001 accredited, and we deal with highly sensitive information. Law firms often act for public companies where new information can move markets. So yes, security, permissions, and data handling have to be robust and audited regularly.
The trap is letting “non-negotiables” become the culture for everything. If every change gets treated like a core security risk, you end up blocking harmless improvements: small UX changes, feature-flagged experiments for a single customer, incremental workflow tweaks, or the ability to improve accuracy.
What we try to do is separate signal from noise: be uncompromising where we must, and move fast everywhere else. In AI particularly, the model capabilities can change materially in weeks. If we cannot adapt quickly, we are not serving our customers properly. Our customers choose us because we help them navigate the current complexity of the AI landscape.
5. Orbital started with classical supervised machine learning (ML). What changed, and how did you decide to go all-in on LLMs and Agents?
We were doing what many ML companies did at the time – using models like BERT and T5, and building training pipelines around data labelled by real estate legal professionals. At one point I had a team of about 10 paralegals trawling through real estate legal documents, classifying clauses and answering questions.
One of the pivotal moments was realising that we were sitting on something more powerful than just labelled legal examples: we had “rulebooks” and structured legal thinking captured by real estate lawyers and paralegals. One of our early data scientists, Henry, had what felt like a slightly crazy idea at the time: shove the “rulebook” thinking into BERT’s context window and see what happens. Context windows were tiny back then, but he basically saw the future, and it changed our thinking about what was now possible with transformer models.
When the newer generation of LLMs arrived, the penny dropped. We made two big bets which seem obvious now but at the time were a calculated gamble:
- Token costs would fall continuously over time and eventually make our product commercially viable
- LLMs weren’t yet good enough for legal reasoning, but the capability curve would keep rising, bringing more intelligence, faster inference, and greater steerability
Once we truly believed this, the hard part was organisational, not technical. We’d invested a lot in classical ML. But sunk cost is not a strategy. We decided to put a bunch of that work in the bin and build everything for a world driven by LLMs and, later, agentic systems. In hindsight it was clearly the right call, but at the time it was a leap of faith. I give huge credit to our VP of AI, Matt Westcott, for seeing what he did back then and for helping to educate me and the rest of the business about what we could rely on with LLMs and what was still uncertain territory.
6. How did that shift change the shape of the team, from data labelling to “legal engineering”?
In the classical ML approach, we needed real estate legal professionals to label documents at scale: highlight clauses, extract attributes, produce training data. That meant a lot of contractor paralegals and a machine that had to be set up to label more, train more, and push the F1 scores up and to the right.
With LLMs, we still care about examples and correctness, but the centre of gravity moved up the stack. The AI labs handled the training data, by pretraining on the corpus of publicly available knowledge on the web, while we provided experienced real estate lawyers who could teach the system “how to think”, not just what to pick out.
That’s where “legal engineering” comes in. On the domain side, we turn experienced real estate lawyers into effective prompt engineers. On the technical side, we have AI engineers thinking about agentic tooling, orchestration and accuracy. On the legal side, we have legal engineers who encode the reasoning patterns of good real estate lawyers so the system can handle ambiguity without making things up.
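To make the idea concrete, here is a minimal sketch of what encoding a “rulebook” entry into agent instructions might look like. Everything here is hypothetical illustration: the `Rule` structure, the field names and the `build_instructions` helper are invented for this example, not Orbital’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: a "rulebook" entry written by a legal engineer,
# compiled into the instructions an agent receives alongside a document.
@dataclass
class Rule:
    topic: str         # e.g. "break clauses"
    guidance: str      # how an experienced lawyer reasons about this topic
    on_ambiguity: str  # what to do instead of guessing

def build_instructions(rules: list[Rule], question: str) -> str:
    """Compile rulebook entries into a single set of instructions.

    The key idea: encode *how to think* (reasoning steps, fallbacks for
    ambiguity) rather than just labelled examples of what to extract.
    """
    lines = ["You are assisting with real estate legal due diligence."]
    for r in rules:
        lines.append(f"- When reviewing {r.topic}: {r.guidance}")
        lines.append(f"  If the document is ambiguous: {r.on_ambiguity}")
    lines.append(f"Question to answer: {question}")
    lines.append("Cite the clause you relied on; never invent provisions.")
    return "\n".join(lines)
```

The point of the design is that the lawyer’s reasoning lives in data, not in code, so domain experts can refine it without an engineering release.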
7. Everyone is talking about “agentic systems”. What have you learned building them for real work?
There’s often a disconnect between what the industry talks about at a high level and what actually works day to day.
For a while the dominant conversation was “evals and fine-tuning”. We looked at that and felt it didn’t match the reality of how quickly models change. You can spend a lot of time fine-tuning on domain data, and then a new frontier model arrives and outperforms it with good prompting.
Instead, we focused on building an environment where improvements can ship to customers incredibly quickly, in hours rather than weeks or months. If a customer finds an issue, a legal engineer can digest it and update the prompt or workflow fast, and that improvement can benefit every other user.
The trade-off is a form of “prompt tax” when upgrading to newer models. A couple of years ago, moving between model providers required updating prompts in a ton of small ways, with very subtle tweaks to natural language. Some of the old “prompt hacks” have faded as models have improved and become more steerable, but managing prompts, tools and orchestration is still an engineering discipline in its own right. The goal is to keep the system adaptable without turning it into a fragile mess. I’m not sure everyone in the industry would agree with our technical approach, but we’re okay with that: we follow an “extreme pragmatism” philosophy for how we build systems, and so far it’s working well amidst the chaos and constant learning of this post-ChatGPT era of agentic application development.
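One way to make hour-scale fixes practical is to treat prompts as versioned configuration rather than code. The sketch below is an assumed, simplified illustration of that pattern, not Orbital’s system: the `PromptRegistry` class and its methods are invented for this example.

```python
import datetime

# Hypothetical sketch: prompts as versioned configuration, so a legal
# engineer can publish a fix (or roll one back) without retraining a
# model or waiting on a full software release cycle.
class PromptRegistry:
    def __init__(self):
        # prompt name -> list of (utc timestamp, prompt text), oldest first
        self._versions = {}

    def publish(self, name, text):
        """Record a new prompt version; returns the version number."""
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self._versions.setdefault(name, []).append((stamp, text))
        return len(self._versions[name])

    def current(self, name):
        """The live prompt is simply the latest published version."""
        return self._versions[name][-1][1]

    def rollback(self, name):
        """Drop the latest version if a fix made things worse."""
        self._versions[name].pop()
        return self.current(name)
```

Because the live prompt is just the latest version, an improvement reaches every user of that workflow as soon as it is published, and a bad change is one `rollback` away.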
8. Accuracy matters enormously in legal due diligence. How do you think about accuracy and iteration loops?
Accuracy is not optional in legal, but speed of improvement is also part of accuracy in practice.
A fine-tuning approach can work, but it often introduces a significant lag and cost trade-off. If a customer is reviewing a portfolio of thousands of properties and finds a significant issue, saying “we’ll fix it in a month once the model retrains” is not good enough.
With prompt and workflow engineering, we can often address issues in minutes or hours. That means customers can keep moving, and the system improves in near real-time.
At Orbital, shipping early and often is not just a product philosophy – it’s part of how we maintain trust. Customers see that when something is wrong, it gets fixed quickly and systematically.
9. What have you learned from customers adopting AI, and how do you differentiate now that “everyone has an agent”?
Customers are on a complicated journey. Even for technologists, it’s noisy: new models, new techniques, and a lot of misinformation and hype. For legal professionals, it’s harder. They’re making large investments, not just in software, but in change management and time from very senior people.
We try to take an opinionated stance. Customers are not only buying a feature called “AI”. They’re buying a roadmap with a technology partner who will help them make sense of what matters, what doesn’t, and what “good” looks like in real estate legal workflows.
On differentiation: early on, people dismissed agents as toys. Now everyone has an agent you can upload your data to and get complex questions answered in a fraction of the time a human would take. That raises the question: where is a generic agent good enough, and where do you need genuine expertise (human or agent-driven)?
Clients transacting real estate go to real estate specialists for a reason. The same idea applies here. We focus on deeply domain-specific workflows that reflect how real estate legal experts work, not just generic chat over documents. Real estate law has not only a textual component but also a visual one: maps, surveys and plans are used throughout the diligence process.
10. What is keeping you up at night as CTO right now?
The hardest product question in AI right now is: where are the models going to get to and by when? You can take a model today and wrap it in lots of scaffolding to make it safe, controlled and reliable. That might produce a great feature today. But in a few months, new models may be faster, cheaper and more capable, and that heavy scaffolding can become a constraint that stops you from riding the AI curve upward.
There’s an art to building features that work for customers today but get vastly better as models improve. You have to make bets about where model capabilities are going in six to eighteen months, and you will not always get it right. This one still pains me as a CTO: you might need to build something today to solve a problem, knowing that in maybe six months you’ll throw it in the bin because the models have perfected the capability you built. If you wait for the models to improve and don’t build it, you lose out to competitors; if you build it, there’s a high likelihood it will be redundant in the future.
What helps us is trust and partnership with the right customers. We work closely with forward-looking teams, put things in their hands earlier, learn quickly, and then roll out more broadly when it’s ready. Our ability to adapt in real time to both our customers’ needs and AI advancement is our superpower.
11. How do you see the future of software engineering and legal liability evolving in an AI-first world?
On the engineering side, a lot of engineers are having a mild existential moment. The tools started as autocomplete, and now they can write meaningful chunks of code better and substantially faster than engineers. The question becomes: where are engineers most valuable?
My view is that engineers increasingly become orchestrators of fleets of agents, verifying the impact those agents have, similar to what a manager of people does today. Engineers will have a set budget for tokens, and their job is to apply that resource to the highest-leverage problems. If they spend it badly, they get very little value back; if they spend it wisely, they unlock huge value in return.
On the legal side, liability matters. Even if AI gets incredibly capable, it is not a human. If something goes wrong, some legal entity must be accountable. For AI to take on more responsibility in legal work, we need a framework that answers: who is liable, how is it regulated, and what does insurance look like?
Today, lawyers are backed by insurance structures. Something similar will need to evolve for AI-generated outputs to be trusted at the same level, particularly in high-value transactions.
12. Looking ahead, what does the next phase for Orbital look like, and how are you thinking about scaling the team?
We’re scaling product and engineering significantly over the next phase of the company’s growth. Today we operate in cross-functional squads with product, design, software engineering, AI engineering and legal engineering working tightly together, and we expect to increase the number of squads as we grow.
The hiring focus is broad: strong product managers who translate customer problems into crisp bets, AI engineers who craft agentic tooling and know where model capabilities are going, designers who reimagine workflows beyond “just a chat box”, engineers who build systems, not just prototypes, and legal engineers who teach models how to think like real estate lawyers.
A core part of Orbital will remain the combination of frontier AI capability and deep real estate legal expertise, wrapped in great product design and software engineering. That “marriage” is what makes the work defensible and useful for customers.
13. Finally, what leadership principles are you most intentional about as the company grows?
Culture is critical. Startups are emotionally intense: high highs and low lows. We are building an environment where exceptional people can do their best work, without it turning into ego and division.
A few principles we come back to again and again:
- Bet on the model. Understand and stay just ahead of the model curve, building a product that leverages future model leaps seamlessly.
- Ship, Shipmate, Self. We succeed together when the mission comes first, we support each other, and personal wins follow from collective wins.
- Fight with the weapons you have. Resourcefulness beats resources. The best teams win with what they have while building for what’s next.
- Truth over tact. We are here to fix problems, not to avoid discomfort. Truth is our lever while empathy is our balance.

Those are the kinds of things that keep teams aligned when the pace is high and the environment keeps changing.
Closing thoughts
Orbital’s evolution from classical ML to agentic systems is a reminder that technical strategy is inseparable from operating cadence: fast iteration, customer feedback, and the ability to adapt as the frontier moves. For Andrew, “shipping early and often” is not a slogan, it is a mechanism for learning fast while protecting the non-negotiables of trust, security, and accountability. As AI reshapes both software creation and professional services, the winners will be the teams that combine deep domain expertise with an architecture and culture designed to evolve.
Orbital are currently hiring across the board – check out their open positions here.