Build vs. Buy in the Age of AI: What the SaaSpocalypse Gets Wrong for CRE

Written by: Adam Dermer, Director of Product Marketing

Why institutional real estate teams that 'just build it' will pay far more than they expect — and carry risk they cannot justify.

TL;DR

An AI model on its own draws from whatever it was trained on — broad internet data your firm never approved, none of which is specific to your assets, your markets, or your investment criteria. When it gives you an answer, you don't know where that answer came from. You can't verify it. You can't defend it to an LP, a board, or a regulator.

For AI to work inside an institutional investment process, it needs three things: a dataset your firm has reviewed and approved, clear rules governing how it is permitted to use that data, and a structure that shows its work. Without those three things, you don't have a tool — you have a liability.

That is not an AI problem. It is a governance problem. And it is one that purpose-built platforms solve by design.

  • The recent "SaaSpocalypse" narrative correctly identifies that generic workflow software is vulnerable to AI automation. It does not apply to governed platforms that support investment decision-making in regulated environments.
  • For institutional CRE teams, the question is no longer whether an internal tool can be built. It is whether that tool can support auditability, data governance, and investment-committee accountability over time.
  • Purpose-built platforms provide the structured data model, workflow controls, validated market intelligence, and defensible audit trail required for AI to operate safely inside an investment process. Internal tools rarely do.
  • Build vs. buy in 2026 is not about capability. It is about where your firm chooses to carry risk.

Executive Summary

The most important question in enterprise AI right now is not how capable the model is. It is whether the model knows what it is allowed to use, why it is giving you a particular answer, and how you would defend that answer if challenged.

For institutional CRE firms, that question is not abstract. Every investment recommendation, every underwriting assumption, every market comp that informs a bid on a nine-figure asset has to be traceable. It has to be defensible — to an investment committee, an LP, an auditor, or a regulator. "The AI told me" is not an answer at that level. But "our governed platform, operating on our approved data, within our compliance framework, surfaced this" is.

This is the distinction the SaaSpocalypse narrative misses entirely. In early 2026, a wave of AI product launches triggered a sharp market selloff in enterprise software stocks, as investors concluded that AI agents could replace the workflows SaaS tools were built to support. That conclusion is partly right — for generic, repetitive-task software with low stakes and reversible failure, the disruption thesis holds. It does not hold for platforms built around institutional decision-making, where the data, the governance, and the audit trail are the product.

This paper makes three arguments. First, that the SaaSpocalypse narrative is wrong when applied to regulated, institutional contexts like CRE — because what makes these platforms valuable is not the interface AI can replicate, but the governed data infrastructure it cannot.

Second, that the true cost of building internal tools — once maintenance, compliance, security, and governance are accounted for — significantly exceeds the cost of buying purpose-built platforms over any meaningful time horizon.

Third, and most importantly, that ungoverned AI use in institutional investment management is not just operationally risky — it is increasingly a regulatory and fiduciary liability. The firms that get this right will be the ones where AI operates inside a governed system of record. The firms that don't will find out the hard way.

What Actually Happened — and Why It Matters

In early 2026, Anthropic released Claude Cowork — a desktop agent capable of sustained, autonomous, multi-step enterprise workflows — alongside Claude Code, which can write and deploy production software from natural language prompts. The market reaction was swift. Within days, approximately $285 billion evaporated from software stocks in a single trading session.[1] Within seven days, the damage had expanded to roughly $1 trillion across the S&P Software and Services Index.[2][3] Individual names were hit far harder: Atlassian collapsed 35% in a single week, HubSpot dropped 57% from its peak, and Salesforce lost 26% year-to-date despite posting $800 million in Agentforce ARR.

The moment that crystallized investor fear was a live experiment published by CNBC in which two reporters with no coding experience used Claude Code to build a functional replica of Monday.com in under an hour. The story sent Monday.com's stock sharply lower.[4]

Markets drew a straightforward conclusion: if AI agents can replace the humans who use SaaS tools, and AI can now build the tools themselves, then per-seat software pricing is structurally broken.

There is truth in this. Generic SaaS built around repetitive human workflows that AI can execute autonomously faces genuine disruption. The market's instinct to reprice that category is not irrational. But the more important point for this audience is not whether markets overreacted. It is whether the underlying logic — that AI makes purpose-built platforms less valuable — holds when applied to institutional CRE deal management. It does not.

Where the Narrative Falls Short
The companies building AI still buy enterprise SaaS

The most powerful counter-evidence to the SaaSpocalypse narrative does not come from incumbents defending their turf. It comes from the companies doing the disrupting.

Anthropic — the company whose products triggered the SaaS selloff — uses Salesforce and Slack as operational pillars. As of early 2026, Anthropic was actively hiring a Salesforce Administrator.[5] Anthropic and Salesforce have an active strategic partnership.[7] OpenAI expanded its own Salesforce partnership at Dreamforce 2025. Marc Benioff confirmed publicly that both Anthropic and OpenAI use Salesforce tools.[8]

These organizations have the best AI engineers in the world, access to every coding tool, and the lowest marginal cost to build software in history. They still write checks to Salesforce every month. Not because they can't build it — but because they understand the importance of reliability, governance, maintenance, security, and the operational reality of running production software at scale.

A wrong answer in CRE is not a bad user experience — it is a capital event

The SaaSpocalypse argument makes sense for a certain kind of software: tools built around repetitive tasks, with low stakes, where failure is cheap and reversible. Think scheduling tools, basic reporting dashboards, email automation. For those categories, the disruption thesis has real merit.

CRE deal management is not that kind of software. A wrong comp doesn't produce a bad report — it informs a bid on a nine-figure asset. A missed approval gate isn't a workflow inconvenience — it is a governance failure with legal and fiduciary consequences. An untraced data source isn't sloppy — it is a liability in a regulatory examination.

The value of purpose-built platforms in this context is not the interface. It is the constraints — the approval workflows, the audit trails, the access controls, the validated data structures. AI does not replace these things. It operates inside them, and it becomes dramatically more useful when it does.
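
To make those constraints concrete, here is a minimal, purely illustrative sketch of an approval gate with a built-in audit trail. The stage names, the `Deal` and `Approval` classes, and the gate logic are hypothetical examples for this paper, not any platform's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Approval:
    """A recorded sign-off: which stage was entered, by whose authority, and when."""
    stage: str
    approved_by: str
    timestamp: str

class Deal:
    # Illustrative pipeline stages. A missed gate is a governance failure,
    # so advancing a deal requires an explicit, recorded approval.
    STAGES = ["screening", "underwriting", "ic_review", "closing"]

    def __init__(self, name):
        self.name = name
        self.stage = self.STAGES[0]
        self.audit_trail = []  # every gate crossing is recorded as an Approval

    def advance(self, approved_by):
        nxt = self.STAGES[self.STAGES.index(self.stage) + 1]
        if not approved_by:
            raise PermissionError(f"advancing to {nxt} requires a named approver")
        self.audit_trail.append(Approval(
            stage=nxt,
            approved_by=approved_by,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        self.stage = nxt
        return self.stage
```

The point of the structure is that the audit trail is not optional: a deal cannot reach IC review without a named approver on record, whether the actor driving the workflow is a person or an AI agent.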

The True Cost of Building

Here is the conversation happening inside CRE firms right now.

A sharp associate discovers that tools like Lovable, Cursor, or Claude Code can produce working software from plain English instructions — no engineering background required. Over a weekend, they build something that looks remarkably like a deal pipeline tracker. Clean interface, live data, automated notifications. They demo it to the team on Monday. Everyone is impressed. The question surfaces: why are we paying for a platform when we can build this ourselves?

It is a fair question. And the demo is real — those tools are genuinely capable of producing something that looks production-ready in hours. That part of the argument is not wrong.

What the demo does not show is everything that comes after it.

Month three, the tool breaks mid-diligence on a live deal. The associate who built it has moved on, and nobody else understands how it works. Month six, your IT team flags it as a security liability — it hasn't been patched since it was built. Month nine, a prospective LP or institutional partner asks for your data governance documentation, and you don't have any. Month twelve, a regulator asks how a specific investment decision was made, and you cannot produce an audit trail because the system was never designed to create one.

The weekend build cost nothing. What followed was far more expensive than the platform it was meant to replace.

This plays out the same way regardless of firm size. The hidden costs fall into three categories that hit everyone.

  • Maintenance. Software is never finished. Whoever built it has to keep building it — patching security vulnerabilities, updating integrations when a third-party API changes, fixing the edge cases that only surface when real deals flow through the system. When that person leaves, the knowledge walks out with them. What looked like a free tool becomes an unowned liability.
  • Iteration. A tool that works for three people breaks when twelve people use it. A workflow that handles ten deals a month creates bottlenecks at fifty. Fixing these problems costs more than the original build — and unlike a purpose-built platform, there is no vendor absorbing that cost on your behalf.
  • Accountability. When something goes wrong on a live deal — and eventually something always does — "the tool broke" is not an answer anyone wants to give a client, a partner, or an LP. Purpose-built platforms carry institutional accountability for reliability, security, and data integrity. Internal tools carry none.

As Jason Lemkin, one of the most cited voices in enterprise software, put it plainly: shipping a v1 is maybe 2% of the work.[9] Build vs. buy was never really about whether you can build something. In 2026, almost anyone can. It has always been about who carries the risk when it breaks — and whether the people carrying that risk can afford to.

The Governance Imperative: "I Asked AI" Is Not an Answer

This is the argument that matters most for the audience reading this paper.


When a CRE investment team is managing institutional capital — capital that belongs to pension funds, endowments, sovereign wealth funds, and family offices — every decision they make must be justifiable. Not just to themselves, but to an LP, a board, an auditor, a regulator, or a court.

General AI models are trained on broad internet data of unknown accuracy, unverifiable provenance, and no domain specificity. In CRE, where a wrong cap rate assumption or an inaccurate rent roll can influence a nine-figure transaction, that is not an acceptable foundation for analysis.


The institutional solution is AI that operates on firm-approved, validated, domain-specific data — curated deal histories, proprietary comp databases, reviewed transaction records. The AI retrieves from a governed knowledge base, not its own training memory. That knowledge base must be built, maintained, and validated by a purpose-built platform. It cannot be reconstructed from scratch each time a new model is released.
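
The retrieval discipline described above can be sketched in a few lines. This is an illustrative assumption about how such a layer might look, with invented field names, not a description of any real vendor's schema:

```python
def retrieve(question, knowledge_base):
    """Return only firm-approved records relevant to the question,
    each carrying the provenance an IC or auditor would ask for."""
    hits = [
        r for r in knowledge_base
        if r["status"] == "approved"                # firm-reviewed data only
        and question.lower() in r["topic"].lower()  # naive relevance match, for illustration
    ]
    # Provenance travels with every answer: source, reviewer, as-of date.
    return [
        {"fact": r["fact"],
         "source": r["source_document"],
         "reviewed_by": r["reviewed_by"],
         "as_of": r["as_of"]}
        for r in hits
    ]
```

Because provenance travels with every answer, "where did that number come from?" is answered by the payload itself, not by the model's training memory — and swapping in a newer model leaves the knowledge base untouched.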


The regulatory trajectory reinforces this. The SEC's 2026 Examination Priorities direct examiners to assess, across virtually all examinations, whether firms' policies and procedures adequately govern AI use.[10] The SEC has already levied penalties against investment advisers for misrepresenting their AI capabilities.[11] Venable LLP's December 2025 analysis found that using AI without explainability or validation could constitute a breach of the duty of care — analogous to relying on an unverified analyst without performing due diligence.[12]


The firms that get this right will be those where AI operates inside a governed system of record — where the data is approved, the process is documented, and the output is traceable. That is not a technology requirement. It is a fiduciary one.

What This Means for CRE Deal Teams

For institutional investors and brokers evaluating technology strategy in 2026, the questions have changed — but not in the direction the SaaSpocalypse narrative suggests.

The old question was: can we build it? AI has made that question almost irrelevant. The answer is usually yes, and the initial cost has never been lower.

The new questions are more precise:

  • Can we maintain it?  Every tool built in a weekend requires engineers, security patches, compliance audits, and infrastructure management — indefinitely. That cost does not appear in the demo.
  • Can we govern it?  Enterprise clients, LPs, and regulators require SOC 2 compliance, audit trails, and data governance documentation as baseline conditions. These are not features that can be added later.
  • Can we justify it?  When a managing director presents an investment recommendation informed by AI analysis, the question from the investment committee is not "what did AI say?" It is: what data did it use, who approved that data, what is the audit trail, and who is accountable if the conclusion is wrong?
  • Can we afford the risk if it breaks?  A custom tool that fails during due diligence on a $300M acquisition is not a bad user experience. It is a business-critical incident with real capital consequences.

The largest financial institutions have already answered these questions — by building AI governance frameworks around enterprise platforms, not in place of them. JPMorgan runs its AI strategy through an $18 billion technology infrastructure with a dedicated AI council and centralized governance framework. Goldman Sachs built a proprietary, firewalled AI environment serving all 46,000 employees — on top of, not instead of, its enterprise systems of record.[13]


Institutional CRE firms do not have those resources. What they have is Origin: a platform that provides the governed data backbone, the validated comps infrastructure, the structured workflow architecture, and the compliance framework that allows AI to operate reliably and defensibly — without requiring a firm to build and maintain any of it themselves.

Conclusion

The SaaSpocalypse is real as a financial event. It correctly reprices overvalued, generic, workflow-replacement software that AI can legitimately substitute. Those categories deserve the scrutiny they are receiving.

What it does not correctly price is the value of governed, purpose-built platforms embedded in institutional decision-making workflows. The firms using Altrio's platform are not paying for a UI that clicks buttons. They are paying for years of proprietary CRE market data, a validated data extraction service with human analyst review at every stage, a connected data model built around how real estate deals actually work, institutional-grade workflow governance, and a compliance infrastructure that turns AI from a liability into an asset.

In CRE deal management, the risk of getting it wrong is not a slow load time or a bad user experience. It is a mispriced asset, a missed IC approval, an untraced data source in a regulatory examination, or a decision that cannot be defended to an LP.

That risk does not belong in a vibe-coded internal tool. It belongs in a platform purpose-built to govern it.

[1] Bloomberg, "What's Behind the SaaSpocalypse Plunge in Software Stocks," February 4, 2026. https://www.bloomberg.com/news/articles/2026-02-04/what-s-behind-the-saaspocalypse-plunge-in-software-stocks

[2] FinancialContent / MarketMinute, "Software Stocks Under Siege by New AI Tools," February 17, 2026. https://markets.financialcontent.com/stocks/article/marketminute-2026-2-17-software-stocks-under-siege-by-new-ai-tools-the-saaspocalypse-of-2026

[3] AI2.work, "The 2026 SaaS Apocalypse: Why Wall Street Is Dumping Software Stocks," February 14, 2026. https://ai2.work/technology/the-2026-saas-apocalypse-why-wall-street-is-dumping-software-stocks/

[4] CNBC, "How exposed are software stocks to AI tools? We put vibe-coding to the test," February 5, 2026 (Deidre Bosa and Jasmine Wu). https://www.cnbc.com/2026/02/05/how-exposed-are-software-stocks-to-ai-tools-we-tested-vibe-coding.html

[5] Salesforce Ben, "If AI Will Replace 50% of SaaS, Why Is Anthropic Hiring a Salesforce Admin?" https://www.salesforceben.com/if-ai-will-replace-50-of-saas-why-is-anthropic-hiring-a-salesforce-admin/

[7] Salesforce Press Release, "Anthropic and Salesforce Expand Strategic Partnership," October 14, 2025. https://www.salesforce.com/news/press-releases/2025/10/14/anthropic-regulated-industries-partnership-expansion-announcement/

[8] Constellation Research, "Salesforce expands OpenAI, Anthropic partnerships, eyes Agentforce everywhere." https://www.constellationr.com/blog-news/insights/salesforce-expands-openai-anthropic-partnerships-eyes-agentforce-everywhere

[9] SaaStr, "The 2026 SaaS Crash: It's Not What You Think," January 30, 2026 (Jason Lemkin). https://www.saastr.com/the-2026-saas-crash-its-not-what-you-think/

[10] Proskauer Rose LLP, "2026 SEC Examination Priorities for Investment Advisers," December 2025. https://www.proskauer.com/alert/2026-sec-examination-priorities-for-investment-advisers

[11] SEC.gov, "SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of AI," March 2024. https://www.sec.gov/newsroom/press-releases/2024-36

[12] Venable LLP, "Artificial Intelligence in Investment Management: Regulatory Challenges and Fiduciary Implications," December 2025. https://www.venable.com/insights/publications/2025/12/artificial-intelligence-in-investment-management

[13] SparkCo AI, "AI Adoption: Goldman Sachs vs JPMorgan Benchmark." https://sparkco.ai/blog/ai-adoption-goldman-sachs-vs-jpmorgan-benchmark

Ready to manage deals better?

Take a tour of Origin to see what it can do for you.
Request a Demo