
The January Blueprint: News and Topics from the Edge

Esper Team
January 29, 2026


Every week, the Esper team sends an edge technology topic straight to your inbox. Here’s what we’ve been talking about this month:

The Gigawatt Campuses are Coming

Written by Cam Summerson

The “AI campus” trend has moved from rumor to concrete megaprojects. Forget megawatts — we’re talking about gigawatts.

But if you think these massive new datacenters are coming to save edge fleet latency issues, think again. 

Companies like Google, xAI, Meta, and Microsoft/OpenAI aren’t dropping 1GW+ clusters into specific zip codes to help process your kiosk’s voice order or your warehouse inventory scans. They’re building them to train the next generation of LLMs. 

These campuses are “gravity wells” for compute power. They’re designed to ingest the ‘net and spit out intelligence — not answer API calls from a drive-thru in Des Moines in under 200ms. 

That’s why your strategy should still prioritize edge inference instead of relying on these behemoths to process every decision your devices make.

The LLM Training Grounds

There are four main projects set to turn the American South/Midwest into the world’s AI backbone: 

  • Google’s "Project Pyramid" (West Memphis, AR): A confirmed $4B, 1,000-acre megacampus. With 600MW of solar and 350MW of battery storage backing it, this turns the Memphis region into a dual-threat AI hub.
  • xAI’s "Colossus" (Memphis, TN): Just across the river, Elon Musk is pushing this supercluster to ~2GW. Between this and Google, the "Digital Delta" is becoming the densest compute region in the US.
  • Meta’s "Hyperion" (Richland Parish, LA): A $10B, 4-million-sq-ft campus designed to scale to 2GW, tapping directly into Gulf Coast power reserves.
  • Microsoft & OpenAI’s "Stargate" (Texas & Wisconsin): A $500B project aiming for 10GW by 2026.

When these projects are complete, we’ll have no shortage of LLM training grounds. But here’s the kicker: In 2026, the bottleneck will be inference. Research firms like IDC are predicting a massive pivot to "AgenticOps" — running lighter AI agents directly on edge networks or devices to avoid the round-trip to these massive training centers.

Why? Because physics is stubborn. You can’t beat the speed of light. If your devices are stuck in a latency loop trying to round-trip simple data to a centralized "Sun" that is hundreds of miles away, your customer experience is going to feel like 1999 dial-up.
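The latency math here can be sanity-checked in a few lines. The distance, the ~200 km/ms signal speed in fiber, and the route-overhead factor below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope check on the physics: round-trip propagation time
# over fiber between a device and a distant data center.

FIBER_SPEED_KM_PER_MS = 200  # light in fiber covers roughly 200 km per millisecond

def network_rtt_ms(distance_km: float, route_overhead: float = 1.5) -> float:
    """Best-case round-trip time in ms. route_overhead is an assumed factor
    for non-straight fiber paths and switching hops."""
    one_way_ms = (distance_km * route_overhead) / FIBER_SPEED_KM_PER_MS
    return 2 * one_way_ms

# Des Moines, IA to Memphis, TN is very roughly 800 km as the crow flies.
rtt = network_rtt_ms(800)
print(f"Best-case RTT: {rtt:.1f} ms")  # propagation alone, before any inference time
```

Even this best-case ~12 ms floor is per round trip: a workflow that makes several sequential calls multiplies it, and real-world RTTs run well above the propagation floor once queuing, TLS, and inference time are added.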

Check Your “Gigawatt-Readiness”

You can’t move the data centers, but you can move the inference. Here is your Gigawatt Readiness Checklist:

  1. Check your NPUs: Do your current device specs prioritize on-device NPU performance (Snapdragon/Apple Silicon)?
    • Why: You need hardware that can handle "Small Language Models" (SLMs) locally. The big campus teaches the model; your device runs the distilled version.
  2. Audit your "Offline" Intelligence:
    • Test: Can your kiosks or handhelds process a complex query without hitting the cloud?
    • Goal: If the answer is "no," you might be vulnerable. The goal is to keep the "execution" local and only send the "learning" back to the mothership.
  3. Map your Latency: Look at where these new campuses are (Memphis, Louisiana, Texas, Wisconsin).
    • Why: If you operate near them, you might get better training throughput. If you don't, your edge strategy is your only safety net.

The grid is finite. Your fleet is the pressure valve. We are entering an era where the grid simply can't keep up with AI demand. When the data center hits a power wall, the workload gets pushed out.

The companies that win in 2026 and beyond won't be the ones renting the most space in Stargate or Pyramid. They'll be the ones whose devices are smart enough to stay cool (and functional) when the data center runs hot.

Symbiosis Isn’t Soft. It’s Revenue.

Written by Maddie Gainza

If you’re buying into an “ecosystem,” don’t get distracted by the logo wall. Your job is to pick a machine that keeps delivering after the deal — through rollouts, updates, device swaps, policy changes, and expansion.

Most ecosystems don’t break at purchase. They break at change.

The OEM ships.
The ISV installs.
The VAR “owns the customer.”

And you own the outcome, especially when the device layer gets ignored until it’s the loudest thing in the room.

So here’s the question for IT: what should you look for to know an ecosystem will actually perform? You’re looking for one thing above all: a shared control plane. This means the ability to enforce fleet state across partners, devices, and time.

Device Ecosystems Are Expanding

Ecosystems are expanding, not contracting. The market rewards partners who can orchestrate them.

KPMG describes partner ecosystems as networks built for mutual benefit. They’re stronger together than alone. This correlates directly to growth and resilience.

Forrester’s tracking the same pattern: partner ecosystems are expanding, and partner-driven revenue is expected to rise.

Translation: the “one partner does it all” era is dying. The winners are the ones who can run multi-party delivery without chaos (Omdia, 2025).

That’s why co-sell programs and marketplace ecosystems exist in the first place. They provide structured collaboration, shared plans, shared execution, and ultimately, shared value (Microsoft, 2025). 

What Buyers Should Look for (MDM as the Proof)

If you want the ecosystem to hold, treat device management like infrastructure — not a bolt-on. Device management ecosystems are one of the clearest signals that partners can deliver outcomes without chaos.

Here’s a quick gut-check you can run this week:

  1. Look for outcomes, not the tool: Don’t accept “MDM included.” Require: “Zero-touch provisioning and enforced compliance for every device in scope.”
  2. Standardize your deployment blueprint: Ask for one repeatable baseline per environment (retail, QSR, logistics): apps, policies, kiosk posture, update rules. If every site is bespoke, you’re buying future instability.
  3. Define shared accountability with your partners: OEM owns hardware. ISV owns app. Someone must own fleet state. If that’s not written down, you’ll be arbitrating blame at 2:17 a.m.
  4. Bake in rollback and staged change control: You don’t fear updates. You fear surprise. Demand staged rollouts, validation gates, and rollback plans as part of the operating model.
  5. Turn “support” into telemetry: If nobody can see drift, nobody can prevent it. If nobody can prevent it, you’ll be stuck reacting.
  6. Verify there’s an operating motion, not a one-time setup: MDM isn’t a project. It’s enforcement over time: patch posture, app version control, policy compliance, fleet health.
  7. Focus on uptime and stability: When fleet state stays stable, everything above it behaves. 
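Point 4 — staged rollouts, validation gates, and rollback — can be sketched in a few lines. The ring sizes, health model, and zero-failure budget below are illustrative, not any vendor's actual mechanism:

```python
# Sketch of a staged rollout: push ring by ring, gate on health, roll back on failure.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    version: str = "1.0"
    healthy: bool = True

def deploy(device: Device, version: str) -> None:
    device.version = version

def rollback(device: Device, version: str) -> None:
    device.version = version

def staged_rollout(fleet: list[Device], new_version: str,
                   ring_sizes=(1, 3), failure_budget: float = 0.0) -> bool:
    """Push new_version ring by ring; halt and roll back everything on failure."""
    baseline = fleet[0].version
    done: list[Device] = []
    rings = list(ring_sizes) + [len(fleet) - sum(ring_sizes)]  # remainder is the last ring
    start = 0
    for size in rings:
        for d in fleet[start:start + size]:
            deploy(d, new_version)
            done.append(d)
        start += size
        failures = [d for d in done if not d.healthy]  # validation gate
        if len(failures) > failure_budget * max(len(done), 1):
            for d in done:
                rollback(d, baseline)  # the rollback plan, exercised automatically
            return False
    return True
```

The key property: a single unhealthy device in an early ring stops the rollout before the whole fleet is touched, and every touched device returns to the baseline.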

Bottom line: when you control the fleet state, you control the customer experience — and that’s the goal. 

(And if you happen to be a VAR reading this: treat the checklist as your packaging guide. This is how you stop selling parts and start selling outcomes.)

What Makes Strong Partnerships

The strongest partnerships aren’t built on handshakes. They’re built on shared control.

Buyers don’t get trapped by “bad technology.” They get trapped by vendor lock-in disguised as simplicity: an ecosystem that looks unified on day one, then quietly becomes a gated compound where your vendor’s roadmap becomes your strategy.

That’s the warning in Device Diversity or Digital Dictatorship? The promise of “standardization” can harden into dependency: fewer hardware choices, fewer integration paths, and switching costs that spike right when you need agility most.

A healthy ecosystem is the opposite. It’s vendor-agnostic by design: you can pick the best OEMs, the best ISVs, the right delivery partners, and still run the fleet with consistent workflows, controlled change, and real visibility into drift.

That’s the long-game value of MDM for buyers: It’s not just how partners cooperate. It’s how you keep leverage and change devices, partners, and strategy without blowing up operations.

When Your Best People Become the Bottleneck

Written by Ali Clawson

Human-in-the-loop workflows made sense when infrastructure was centralized and change happened in controlled, infrequent cycles. When something broke, teams investigated, applied a fix, and moved on. The system tolerated latency because the environment wasn’t moving fast enough to care.

Human touch alone doesn't cut it anymore.

Hybrid architectures now span cloud, on-prem, and edge environments. Devices run persistent workloads, receive frequent updates, and operate in unreliable network conditions. In this reality, every manual approval, SSH session, or ad-hoc remediation introduces delay and inconsistency. At scale, human intervention often becomes the limiting factor on reliability.

Hybrid Is the Destination

Hybrid environments aren’t a stepping stone anymore; they’re the destination. TechRepublic’s look at 2026 infrastructure trends suggests that enterprise leaders must now manage cloud, on-prem, and distributed systems as one unified fabric. This shift — accelerated by AI-powered workloads that increase both pace and risk — means traditional manual intervention is too slow when systems drift from their specified parameters.

As compute moves closer to where data is generated, devices stop behaving like passive endpoints and start functioning as independent execution environments. At that scale, traditional practices like hands-on provisioning, manual updates, or reactive troubleshooting don’t just create overhead — they introduce inconsistency across the fleet. Distributed systems require distributed control, and that control has to be automated by default.

At the end of the day, operating models built around human intervention can’t keep up with systems that are designed to run continuously, autonomously, and everywhere at once.

No More Relying on Human Heroics

To nail operational reliability when you're scaling, it's less about the initial design and more about ditching anything that needs a person to step in. The path to resilience isn’t adding more tools. It’s removing the manual steps that accidentally determine how systems fail and how they recover.

Where to start:

  • Identify intervention-dependent workflows: Look for tasks that only proceed once someone gets paged or steps in: provisioning, updates, certificate rotation, device recovery. These are your first targets.
  • Count the human touchpoints: Every approval, SSH session, spreadsheet, or Slack message adds latency and inconsistency. If a workflow involves three manual steps, you have three failure modes.
  • Turn procedures into enforced workflows: Procedures don’t enforce anything. If a task lives in a runbook, it’s a candidate for automation. Systems enforce state; documentation describes intent. There’s a critical difference.
  • Automate failure paths before happy paths: Assume devices will drift, lose connectivity, or misconfigure themselves. Design recovery, rollback, and re-enrollment to run without supervision. The happy path is easy, but it’s the edge cases that get the best of you.
  • Standardize to unlock scale: Snowflake configurations and one-off fixes undermine automation. Consistency across the fleet is what makes distributed control possible. Every exception you preserve is a future outage.
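"Systems enforce state; documentation describes intent" can be made concrete as a reconciliation loop: compare each device's reported config to a desired baseline and remediate automatically, instead of waiting for someone to notice drift. The config keys and device shape here are illustrative, not a real MDM schema:

```python
# Minimal desired-state reconciliation loop.

DESIRED = {"os_patch": "2026-01", "kiosk_mode": True, "agent_version": "3.2"}

def reconcile(reported: dict) -> dict:
    """Return the remediation plan: every key that drifted from DESIRED."""
    return {k: v for k, v in DESIRED.items() if reported.get(k) != v}

def enforce(fleet: dict[str, dict]) -> dict[str, dict]:
    """Apply remediations and return what was fixed per device."""
    fixes = {}
    for device_id, reported in fleet.items():
        plan = reconcile(reported)
        if plan:
            reported.update(plan)  # the "apply" step; a real system pushes via MDM
            fixes[device_id] = plan
    return fixes

fleet = {
    "kiosk-01": {"os_patch": "2026-01", "kiosk_mode": True, "agent_version": "3.2"},
    "kiosk-02": {"os_patch": "2025-11", "kiosk_mode": False, "agent_version": "3.2"},
}
print(enforce(fleet))  # only kiosk-02 needed remediation
```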

Redefining Operations With Distributed Infrastructure

Distributed infrastructure changes what “good operations” actually looks like. When systems span cloud, on-prem, and the edge, reliability comes from enforcing consistency automatically, not relying on people to restore it after the fact. The sooner teams treat automation as an operational requirement rather than an optimization, the more durable their environments become.

Read More: The Edge Visibility Gap: Why IT Teams Go Blind When Devices Scale Up

Slaying the Edge AI Energy Vampire

Written by Kelsey Milligan

From food-delivery bots to edge robotics in manufacturing, the edge is where AI’s potential stands out. But there’s a huge mismatch: AI models are big, compute-heavy, and energy-hungry. Meanwhile, edge devices are small and bandwidth-constrained. 

The options: Run models on the device itself, and watch the device’s memory (and battery) drain fast. Or push everything to the cloud, and see latency, bandwidth costs, and security risks rise. 

Either way, deployment times get costly, updates suck up far too many resources, and potential is gone before you can say “proof-of-concept.”

AI Goes Over the Radio

Researchers at Duke University and MIT asked: Can AI models be embedded into radio waves? Turns out, yes. And the result is near-perfect AI model accuracy at the edge with far less energy and bandwidth. It’s an early concept, but it’s promising.

It’s not just the physics that stand out. It’s the philosophy shift: AI can be distributed, shared, and delivered efficiently, with infrastructure that already exists. 

And we see this idea in other solutions that are already in use. For example, on-premise gateway devices can also cut bandwidth constraints and run AI models without major hardware upgrades. 

Stop Edge AI From Sucking the Life Out of Your Strategy

You may not need to wait for experimental radio waves (or better batteries) to start overcoming the AI power problem. You do need to rethink how intelligence, updates, and compute move across your edge stack:

Audit Where Energy Is Really Being Burned:

  • How often do devices pull the same data or updates from the cloud?
  • How much bandwidth is consumed per provisioning or OTA cycle?
  • What’s the battery impact of on-device inference vs. cloud round trips?

Centralize Intelligence — Not Workloads

  • Centralize those big assets (models, packages, content)
  • Distribute them locally if you can
  • Where possible, let devices pull from nearby infrastructure instead of the cloud

This mirrors the same idea behind AI-over-radio: share intelligence once, reuse it everywhere.

Introduce an On-Prem Edge Gateway

If your environment supports it: Add a local gateway for provisioning, updates, and content delivery. This reduces latency, bandwidth usage, and energy consumption, with less impact on device hardware.
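The bandwidth case for a gateway is simple arithmetic: pull a model artifact over the WAN once, then fan it out locally. The model size and device count below are assumptions for illustration:

```python
# WAN bytes consumed for one model update across a site, with and without a gateway.

def wan_transfer_gb(model_gb: float, devices: int, via_gateway: bool) -> float:
    """Direct: every device pulls from the cloud. Gateway: one pull, local fan-out."""
    return model_gb if via_gateway else model_gb * devices

model_gb = 2.0   # assumed size of a distilled on-device model
devices = 150    # assumed devices at one site

direct = wan_transfer_gb(model_gb, devices, via_gateway=False)
gateway = wan_transfer_gb(model_gb, devices, via_gateway=True)
print(f"WAN saved per update cycle: {direct - gateway:.0f} GB")  # 298 GB
```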

Read more: How CIOs Can Scale Retail AI

