Why a Connected Tech Stack Isn’t a Decision Engine
Executive Summary
Multifamily PMS integrations are working. The data is moving. The data strategy underneath them isn’t. Operators today have more connectors stitching their stacks together than at any point in the industry’s history, and most still can’t answer a basic portfolio-level question without a multi-week analyst sprint.
The hard truth: ninety-five percent of AI pilots are failing across industries. JLL’s 2025 Global Real Estate Technology Survey found that only five percent of CRE companies have achieved all their AI program goals. The operators pulling ahead, with AvalonBay the cleanest public example, built an operator-owned data layer first, normalized it across PMS, CRM, maintenance, leasing, and accounting, and let AI run on top of it. Two more layers sit above that warehouse, workflow intelligence and agent orchestration, and that’s where decisions actually get made.
That’s where the multifamily conversation should be heading.
Connection isn’t unification
The PMS platforms have made real moves. Yardi shipped an MCP connector for Claude. AppFolio launched a marketplace of integrated tools. Entrata overhauled its API program and gave developers a sandbox environment for the first time.
Each is a sign of progress. But a recent Thesis Driven survey scored the major platforms on openness and found that none cleared 6.5 out of 10 — and the lowest averaged just over 3.
Editor-in-chief Brad Hargreaves’ conclusion was direct: none of the recent moves has meaningfully changed how operators experience these platforms day to day.
The three questions every multifamily operator should be able to answer
A Thesis Driven piece written by Propexo’s Remen Okoruwa in response to the survey frames a useful diagnostic — three questions:
- Can your BI team query across all your operational tools today?
- If you wanted to switch AI vendors next year, could you?
- Do you own your data, or do your vendors own it?
If the honest answer to any of them is no, the problem isn’t your integrations. It’s the data architecture underneath them.
It’s also why so many AI pilots are failing. The models aren’t broken. They’re being pointed at fragmented data across six vendor silos and asked to produce confident answers, and a confidently wrong answer is worse than no answer at all.
What integration alone created
Connectors moved bytes between systems. Nothing in that movement made the data mean the same thing on the other side.
Yardi’s chart of accounts isn’t RealPage’s. Knock’s qualified-lead definition isn’t asset management’s qualified lead. EliseAI’s resident chat sentiment has no native home in the PMS at all; those signals routinely go unlogged, so neither Yardi nor Entrata can query them.
The dashboards multiplied. The decisions still required a human to reconcile four systems before anyone trusted the answer.
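To make the divergence concrete, here is a minimal sketch of the problem. Every field name and GL code below is hypothetical, not the real Yardi or RealPage schema; the point is that nothing in the connector layer asserts the two entries mean the same thing, so the equivalence has to live in an operator-owned mapping:

```python
# Hypothetical sketch: the "same" ledger entry as two PMS platforms might emit it.
# Field names and GL codes are illustrative, not the actual vendor schemas.
yardi_entry = {"gl_code": "4010", "desc": "Rent Income", "amt": 1850.00}
realpage_entry = {"account": "40-100", "label": "Rental Revenue", "amount": 1850.00}

# A canonical chart of accounts is an explicit, operator-owned mapping
# from (source system, vendor code) to one canonical account name.
CANONICAL_GL = {
    ("yardi", "4010"): "rent_income",
    ("realpage", "40-100"): "rent_income",
}

def canonical_account(source: str, code: str) -> str:
    """Resolve a vendor-specific GL code to the operator's canonical account."""
    return CANONICAL_GL.get((source, code), "unmapped")

print(canonical_account("yardi", "4010"))       # -> rent_income
print(canonical_account("realpage", "40-100"))  # -> rent_income
```

Until a table like `CANONICAL_GL` exists somewhere the operator controls, that reconciliation happens in an analyst’s head, one report at a time.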
What AvalonBay built — and why it’s working
The operators pulling ahead aren’t waiting for their PMS vendors to solve this.
Rukevbe “Rukus” Esi, SVP and Chief Digital Officer at AvalonBay Communities, has organized the company’s entire technology stack around an operator-owned data layer principle. He told Thesis Driven: “Flexibility and control genuinely matter to us.”
In the same conversation, he described the data layer as the one element that has to stay consistent regardless of which platforms surround it.
The result is an architecture where AvalonBay can pivot vendors, evaluate new AI tools, and expose data to other services on its own terms — without rebuilding from scratch every time the market shifts.
Okoruwa notes in the article that this approach isn’t unique to AvalonBay’s scale. Any operator with portfolio-level ambitions can adopt it, but most haven’t gotten there yet.
The next phase of work — and where decisions actually happen
The Thesis Driven piece correctly names the foundation: an operator-controlled warehouse (Snowflake, Databricks, BigQuery, Microsoft Fabric; the platform matters less than the principle), automated pipelines feeding it on a daily schedule, and a normalization layer that resolves a Yardi lease and an Entrata lease into the same data structure with the same field names.
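What that normalization layer does can be sketched in a few lines. The vendor payload shapes below are invented for illustration, not the actual Yardi or Entrata APIs; what matters is that both funnel into one canonical record the operator defines:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a normalization layer. The vendor payload
# shapes below are illustrative, not the real Yardi or Entrata schemas.

@dataclass
class Lease:
    """The canonical, operator-owned lease record."""
    property_id: str
    unit: str
    monthly_rent: float
    start: date
    end: date
    source: str

def from_yardi(p: dict) -> Lease:
    return Lease(p["PropertyCode"], p["UnitNumber"], float(p["RentAmount"]),
                 date.fromisoformat(p["LeaseFrom"]), date.fromisoformat(p["LeaseTo"]),
                 "yardi")

def from_entrata(p: dict) -> Lease:
    return Lease(p["propertyId"], p["unit"]["name"], float(p["rent"]["monthly"]),
                 date.fromisoformat(p["startDate"]), date.fromisoformat(p["endDate"]),
                 "entrata")

# Two differently shaped payloads resolve to the same structure:
yardi_payload = {"PropertyCode": "AVB-001", "UnitNumber": "12B",
                 "RentAmount": "2450.00",
                 "LeaseFrom": "2026-01-01", "LeaseTo": "2026-12-31"}
entrata_payload = {"propertyId": "AVB-001", "unit": {"name": "12B"},
                   "rent": {"monthly": 2450.0},
                   "startDate": "2026-01-01", "endDate": "2026-12-31"}

print(from_yardi(yardi_payload))
print(from_entrata(entrata_payload))
```

Once every lease in the portfolio lands in a shape like `Lease`, a BI query across PMS platforms stops being a reconciliation project and becomes a single SELECT.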
That’s necessary, but two more layers sit on top:
Workflow intelligence — turning normalized data into specific, repeatable decisions. Lease renewal pricing. Vendor SLA flags. Capex prioritization. IC memo generation. Anomaly remediation across third-party PMs.
Agentic orchestration — routing the right finding to the right human at the right moment, automatically, based on context. The article describes this with an inspection example: one flag, evaluated against multiple risk dimensions — liability, structural integrity, compliance, facilities — and routed to the right stakeholders.
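The inspection example above can be sketched as a routing step. Everything here is hypothetical (the role names, the dimension scores, the threshold); the point is that one flag fans out to multiple stakeholders based on which risk dimensions it trips:

```python
# Hypothetical sketch of agentic orchestration's routing step: one
# inspection flag, scored against several risk dimensions, fanned out
# to the stakeholders who need to see it. Roles and scores are invented.
ROUTES = {
    "liability": "risk_team",
    "structural": "capex_team",
    "compliance": "legal",
    "facilities": "regional_maintenance",
}

def route_finding(finding: dict, threshold: float = 0.5) -> list[str]:
    """Return the stakeholders whose risk dimension scores at or above threshold."""
    return sorted(
        ROUTES[dim] for dim, score in finding["risk_scores"].items()
        if score >= threshold
    )

flag = {
    "id": "insp-0147",
    "note": "water intrusion at stairwell B",
    "risk_scores": {"liability": 0.8, "structural": 0.7,
                    "compliance": 0.3, "facilities": 0.9},
}

print(route_finding(flag))
# -> ['capex_team', 'regional_maintenance', 'risk_team']
```

In production the scores would come from models running over the normalized warehouse, not hardcoded dicts, but the shape of the decision is the same: one finding, several dimensions, automatic fan-out.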
The article calls that vision aspirational. We call it the work. It’s what we build for multifamily operators on top of the data layer the article describes.
The next layer up is where the conversation should be heading
The operators figuring this out first aren’t the ones with the biggest tech budgets. They’re the ones who stopped treating connection as the goal and started treating the data layer as strategic infrastructure they own and control.
The differentiator over the next twenty-four months won’t be how many AI tools an operator has plugged into the stack. It will be whether the data underneath the AI is unified, owned, and ready for the workflow intelligence and orchestration layers that turn data into decisions.
Three “no” answers to the three questions above mean the same thing in every multifamily firm we’ve worked with: a data strategy gap, sitting underneath a stack of integrations everyone assumed was solving it.
Updated May 2026
