
The April Blueprint: News from the Edge

Esper Team
April 30, 2026


Every week, the Esper team sends an edge technology topic straight to your inbox. Here’s what we’ve been talking about this month: 

Apple Built a Great MDM…For Someone Else

By Cam Summerson | April 1, 2026

Last week, Apple announced Apple Business: a unified bundle that replaces Apple Business Manager (ABM) and includes built-in MDM, email, a company directory, and even Maps advertising, all under one roof. It’s clean, well-designed, and genuinely impressive. It’s a very tight offering.

But here’s the catch: It’s probably not for you. Hear me out. 

Apple built this product for the business owner who’s also the IT department. A founder managing a couple dozen iPhones and iPads. A COPE or BYOD mid-market IT team managing an org where every employee packs an iPhone and has a MacBook on their desk. For these uses, it’s hard to beat Apple Business (look out, JAMF). You get zero touch deployment, managed accounts, and policy enforcement. Simple IT use case. No six-figure platform. 

But then there’s the other side of that coin: Fleets that move beyond employee use, Apple’s ecosystem, or both. The moment you move outside of those walls, you outgrow Apple Business (and similar products). Dedicated Android kiosks, Linux edge nodes, Windows endpoints, and unattended devices in the field all force Apple Business to crack under the pressure because it was never designed to handle it in the first place. 

From the Field: The new Apple Business

Here’s the skinny on what Apple Business includes: Bundled MDM with Blueprints* for preconfig’d device settings and zero-touch deployment, managed Apple accounts with cryptographic separation of work and personal data, employee group management, app distribution through the App Store, and an Admin API for larger deployments. Truly a solid set of features. 

Note the asterisk above? Apple calls its preconfiguration feature “Blueprints.” So does Esper. They’re not the same thing, but the shared name is no coincidence — both reflect the same philosophy about what good fleet management looks like.

The problem is that “set up” and “manage” aren’t the same thing. Supporting multiple operating systems under a single policy framework is tough, and it only gets harder when those devices live in the field, unattended, across multiple locations. That’s a gap Apple Business was never intended to close — it knows its lane and owns it well. 

Just make sure you know where that lane is, too. 

Practical Application: Know Your Weight Class

Apple Business is indeed the correct tool for many organizations. Here’s a quick gut-check to understand why yours might not be one of them: 

  • Do you manage more than one OS? Android kiosks, Linux edge nodes, Windows endpoints — the moment your fleet leaves the Apple ecosystem, Apple Business can't follow you.
  • Do your devices live in the field, unattended, across multiple locations? A restaurant with 200 kiosks across 80 cities needs orchestration, not configuration.
  • Do you need staged rollouts and rollback? Pushing an update to your entire fleet at once isn't a deployment strategy — it's a liability. If your MDM can't stagger, pause, and reverse, it's not enterprise-grade.
  • Is your MDM expected to stay ahead of your fleet, not react to it? Drift detection, desired state enforcement, proactive alerts. You want your platform to catch problems before your field team does.

If you answered “yes” to even one of these questions, guess what? Apple Business is not for your business, friend. That’s not a knock, either — it just means you need something built for where your fleet is going. Not where it started. 
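The multi-OS question is the clearest dividing line. A sketch of what “one policy framework, many operating systems” means in practice — all names here (the `Policy` model, the per-OS renderers, the config keys) are illustrative, not any vendor’s real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """One declarative policy, rendered differently per OS."""
    allowed_apps: tuple[str, ...]
    auto_update: bool
    kiosk_mode: bool

def render_android(p: Policy) -> dict:
    # Map the abstract policy onto Android-managed-config style keys.
    return {
        "applications": list(p.allowed_apps),
        "systemUpdate": "AUTOMATIC" if p.auto_update else "POSTPONE",
        "kioskCustomLauncherEnabled": p.kiosk_mode,
    }

def render_linux(p: Policy) -> dict:
    # The same intent expressed as a hypothetical Linux agent config.
    return {
        "allowed_units": [f"{app}.service" for app in p.allowed_apps],
        "unattended_upgrades": p.auto_update,
        "single_app_session": p.kiosk_mode,
    }

policy = Policy(allowed_apps=("pos",), auto_update=True, kiosk_mode=True)
```

An Apple-only tool has no equivalent of the second renderer — that is the whole gap.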

Final Thought

Apple Business is a great product for the devices it was designed for. The question worth asking isn't whether it's good — it's whether your fleet is that simple. If you're reading this newsletter, it probably isn't.

Worth a Look

Curious what "two steps ahead" actually looks like in practice? We put together a breakdown of what enterprise-grade fleet management covers that basic MDM doesn't. It's a useful reference whether you're evaluating platforms or just trying to explain the gap to a stakeholder. Give it a read.

Your Kiosk Doesn't Know It Has a Houseguest

By Maddie Gainza | April 8, 2026

Purpose-built device security runs on specificity. A kiosk runs one approved app, calls out to known endpoints, and operates on a locked runtime with no room for deviation. That narrowness is the whole point. AI agents exploit that framework's blind spot.

Agentic AI can browse the web, connect to SaaS platforms, execute commands, read and write data, and chain together multi-step actions without a human in the loop. Enterprises are deploying them at scale to automate workflows. Backend systems are being wired with agents to orchestrate decisions. 

And as those agents proliferate across enterprise infrastructure, they're reaching the device layer — and a purpose-built device was never designed to tell the difference between its own POS app making an API call and an agent framework that hitched a ride inside a dependency update. Its policy baseline, network behavior, and telemetry profile don't flag what looks legitimate — which is exactly how agent frameworks operate.

A Dark Reading readership poll found that 48% of cybersecurity professionals identify agentic AI and autonomous systems as the top attack vector heading into 2026, ranking above deepfakes, above breach response, above everything else. For operators running fleets of unattended, purpose-built devices, the exposure is uniquely brutal to detect: there's no user to notice when something's wrong.

From the Field: The AI Agent Attack Vector

When open-source AI agent OpenClaw went viral early this year, it surpassed React's GitHub star count in just weeks. SecurityScorecard's STRIKE team found over 135,000 OpenClaw instances exposed to the public internet across 82 countries. More than 15,000 of those were directly vulnerable to remote code execution.

The fleet-relevant detail: OpenClaw didn't live in the cloud. It ran locally, on devices, and it connected to whatever integrations it was handed — email, calendars, file systems, developer tools. Enterprise spillover was confirmed. Bitdefender GravityZone telemetry documented OpenClaw deployments reaching corporate environments, where agents were operating with full OAuth access to connected services, invisible to standard endpoint security.

The marketplace made it uglier. Out of 2,857 skills on ClawHub, more than 340 were malicious. Keyloggers. Infostealers. All dressed up with professional documentation and innocent names like "solana-wallet-tracker." The more troubling architectural point: OpenClaw doesn't become dangerous only when a vulnerability drops. It becomes dangerous the moment it starts running before governance catches up.

Gartner analysts, as reported by The Register, called it a "dangerous preview of agentic AI," citing "insecure by default" risks including plaintext credential storage. That's a rough thing to read about a device class your team specifically locked down.

Practical Application: Five Gut Checks for Your Fleet Before Agents Arrive Uninvited

You don't need to ban AI agents across your organization. That ship has sailed. But your purpose-built fleet needs to be hardened for a world where agents are everywhere, and nobody asked your kiosk if it was cool with that.

1. Does your baseline actually lock down the runtime?

Purpose-built devices should only run what they're supposed to run. If your policy baseline allows arbitrary process execution or broad network calls, an agent framework can take up residence through a dependency update, a compromised app, or a supply chain slip without triggering a single alert. Audit your allowlists. Tighten what can execute.
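That audit can be mechanically simple. A minimal sketch, assuming a flat set of process names (the names themselves are made up for illustration):

```python
# Processes this device is allowed to run — everything else is a finding.
ALLOWLIST = {"pos-app", "mdm-agent", "watchdog"}

def audit_processes(running: set[str]) -> set[str]:
    """Return processes that are running but not on the allowlist."""
    return running - ALLOWLIST

# An agent framework that arrived via a dependency update shows up
# here even though nothing else alerted on it.
violations = audit_processes({"pos-app", "mdm-agent", "openclaw-agent"})
```

The hard part isn’t the set difference — it’s keeping the allowlist current and actually enforcing it at the runtime.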

2. Do you have visibility into what's calling out from your devices?

Agents are noisy. They make API calls, pull external data, and phone home constantly. That traffic looks totally normal to most network tools. You need telemetry that can distinguish your POS app's expected call pattern from something new, unexpected, and persistent making outbound connections. If your fleet monitoring surfaces downtime but not behavioral anomalies, you have a blind spot worth fixing yesterday.
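One way to make that concrete: baseline the destinations a device normally talks to, then flag anything new and persistent. Hostnames and the repeat-count threshold below are illustrative, not a recommendation:

```python
from collections import Counter

# Destinations this device is expected to reach.
BASELINE = {"payments.example.com", "telemetry.example.com"}

def flag_anomalies(connections: list[str], min_count: int = 3) -> set[str]:
    """Destinations outside the baseline seen at least min_count times."""
    counts = Counter(connections)
    return {
        host for host, n in counts.items()
        if host not in BASELINE and n >= min_count
    }

# A new endpoint phoning home repeatedly is the signature to catch.
observed = [
    "payments.example.com", "telemetry.example.com",
    "api.unknown-agent.example", "api.unknown-agent.example",
    "api.unknown-agent.example",
]
```

A one-off connection is noise; the same unknown host, over and over, is behavior.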

3. Can you detect config drift fast enough to matter?

When compromised or misused, agents can autonomously discover, invoke, and install additional components at machine speed. On a purpose-built device, any new process, new dependency, or unexpected outbound connection should register as drift. The question is whether your tooling catches that in minutes or weeks.
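Drift detection is, at its core, a diff between desired state and reported state. A sketch with illustrative field names:

```python
def detect_drift(desired: dict, reported: dict) -> dict:
    """Return {field: (desired, reported)} for every mismatched field."""
    return {
        k: (desired[k], reported.get(k))
        for k in desired
        if reported.get(k) != desired[k]
    }

desired = {"os_version": "14.2", "packages": ("pos-app",), "ssh_enabled": False}
# A new dependency appeared between check-ins — that's the drift to catch.
reported = {"os_version": "14.2", "packages": ("pos-app", "agent-sdk"), "ssh_enabled": False}
drift = detect_drift(desired, reported)
```

The diff is trivial; what decides minutes-versus-weeks is how often devices report and whether anything acts on the result.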

4. What's your blast radius if one device gets got?

Unattended fleets are laterally connected. A compromised kiosk at Location #47 can pivot into the same backend systems serving Locations #1 through #500. Map your connectivity. Know what a single compromised endpoint can reach, and make sure your network segmentation is actually enforced and not just assumed.
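Mapping that reach is a graph traversal over your who-can-talk-to-whom topology. A sketch with a made-up topology:

```python
from collections import deque

def blast_radius(graph: dict[str, set[str]], start: str) -> set[str]:
    """Everything reachable from `start`, excluding start itself."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# Illustrative topology: one kiosk reaches a shared backend, which
# reaches other locations and the payments database.
topology = {
    "kiosk-47": {"backend"},
    "backend": {"kiosk-1", "kiosk-500", "payments-db"},
}
```

If the traversal from a single kiosk touches your payments database, segmentation is assumed, not enforced.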

5. Are your OTA pipelines part of the threat model?

Agents often enter environments through software supply chains: a dependency in an update, a compromised third-party SDK. According to Tenable's Cloud and AI Security Risk Report 2026, 18% of organizations have granted AI services administrative permissions that are rarely audited — and that's just in cloud environments where someone is at least nominally watching. On unattended device fleets, the governance gap is wider. Staged rollouts, canary groups, and rollback capabilities aren't just good ops hygiene. In an agentic world, they're your early warning system.
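The staged-rollout logic itself is small; the stage names and the error-rate threshold below are illustrative, not a real deployment policy:

```python
def next_action(stage: str, error_rate: float, threshold: float = 0.02) -> str:
    """Decide whether to advance, hold at done, or roll back a staged rollout."""
    if error_rate > threshold:
        # Canary (or any stage) failed: revert before the fleet-wide push.
        return "rollback"
    order = ["canary", "10_percent", "50_percent", "full"]
    if stage == "full":
        return "done"
    return f"advance_to_{order[order.index(stage) + 1]}"
```

The value isn’t the code — it’s that a compromised update dies in the canary group instead of reaching Location #500.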

Final Thought: Check Your Assumptions

Purpose-built device management is built on the assumption that you know exactly what's running. Agentic AI operates by making that assumption unreliable — not through brute force, but through legitimate-looking behavior that existing telemetry wasn't designed to catch.

Tighter process allowlisting, behavioral baselining, and tested OTA rollback won’t prevent every scenario. But they’re the difference between catching an unauthorized agent in hours versus finding out about it from a security vendor months later. Make sure your tooling puts you in the first camp.

Want to see how 1,000+ organizations are managing visibility, compliance, and device health across distributed fleets? Check out our 2025 State of Device Management Report.

Inside the Checkout Lab

By Ali Clawson | April 15, 2026

Self-checkout has never been a "set it and forget it" solution; it’s a live, high-stakes experiment. Even within the same corporate family, there is zero consensus on the "right" way to let customers handle their own transactions.

Take Walmart and Sam’s Club. They share a parent company and overlapping customer data, yet their front-end strategies are worlds apart. These big retailers have learned to treat the checkout experience as a business model variable, not a one-time installation decision. Strategy now follows the operational winds of customer experience, store location, labor mix, and shrink profiles. 

For those managing the stack, this creates a volatile reality. If your self-checkout configuration is constantly in motion across hundreds of locations, the question isn't just what you’re deploying, but whether your management layer can handle the velocity of change. 

From the Field: How Retailers are Changing the Edge

Kiosk Industry recently published a case study on how three retail titans are navigating dynamic edge environments. Their divergent paths offer a roadmap for anyone managing distributed hardware:

Sam’s Club is replacing kiosks with member phone-based checkout. It’s experimental and forward-thinking, with AI exit arches replacing manual receipt checks.

Costco is people-first with limited self-checkout, strict item counts, and heavy ID enforcement. Self-service options are in the mix, but meant to aid, not replace trained staff.

Walmart runs a dynamic "permanent pilot": They test configuration variations across thousands of stores simultaneously, shifting the mix based on real-time shrink risk. They’ll swap kiosks for staffed lanes in high-theft zones while running lean, AI-assisted automation elsewhere. 

All three retailers are still adjusting, and that puts specific, ever-changing demands on the management layer underneath: edge infrastructure teams have to manage multiple valid configurations simultaneously and surface enough per-location data to tell which one is actually working.

That’s a high bar to clear. 

Practical Application: Can You Quickly (and Safely) Change Lanes?

Managing location-specific variation across a fleet of thousands requires a more advanced strategy than uniform deployment. Here’s what it means to spin up a unique deployment for a specific store cluster on a razor-thin timeline:

You can run two configurations simultaneously and measure the difference: Testing a new approach across a subset of locations only works if your stack holds both states cleanly. Layer in usage data — transaction volume, session length, error rates — and you have the feedback loop that tells you whether the new approach earns a broader rollout, and at what cost.

Your edge visibility goes deep: In a store running kiosks, POS terminals, and inventory scanners simultaneously, aggregate health metrics hide too much. A configuration issue hitting one device type at one location needs to surface with enough resolution to act on. 

A new location can go from shipped to operational fast: Retailers opening a new store every month can't treat provisioning as a project. Devices need to arrive pre-configured and self-activate into the right state — no one on-site adjusting settings by hand. Activation speed is where fleet management strategy either earns its keep or becomes the bottleneck.
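The first capability above — holding two configurations live and measuring the difference — might look like the sketch below. Config names, location IDs, and the error-rate metric are all made up for illustration:

```python
from statistics import mean

def compare_configs(metrics: dict[str, tuple[str, float]]) -> dict[str, float]:
    """metrics maps location -> (config_name, error_rate).
    Returns the mean error rate per configuration."""
    buckets: dict[str, list[float]] = {}
    for config, value in metrics.values():
        buckets.setdefault(config, []).append(value)
    return {config: mean(values) for config, values in buckets.items()}

# Two configurations running simultaneously across a store cluster.
fleet = {
    "store-001": ("kiosk-heavy", 0.021),
    "store-002": ("kiosk-heavy", 0.019),
    "store-003": ("staffed-hybrid", 0.012),
    "store-004": ("staffed-hybrid", 0.014),
}
```

The comparison is only meaningful if the management layer can hold both states cleanly — which is the point of the section above.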

Final Thought

For operators outside retail, the management question is structurally identical. Whether you're deploying kiosks, scanners, or purpose-built edge devices, static deployment tooling becomes a constraint the moment your operating model starts shifting locations, device types, or use cases. Build the configuration layer to absorb that variance before the next deployment forces the issue.


Edge AI Confidence, Meet Reality

By Kelsey Milligan | April 22, 2026

Ninety-seven percent of enterprise leaders say they can scale AI workloads in the next three years. Why shouldn’t they? The use cases are compelling, the budgets are flowing, and the technology is genuinely impressive. 

Right now, leaders are very confident in their edge AI maturity, and bullish on the fast-landing tech — in restaurants, on retail floors, in healthcare settings, and on factory floors. But early adopters are finding that there’s a difference between scaling up fast and getting it right.

The sticking point is operational infrastructure — and the leap may be wider than most IT budgets let on.

From the Field: Edge Deployment Cracks Emerge

Deloitte has released its annual enterprise AI infrastructure survey, and companies are confident and investing big in edge AI: 36% report current edge implementations, and almost three-quarters say they’ll add edge AI by 2028.

At the same time, respondents may be overestimating their AI maturity. For one, Deloitte’s research shows that most organizations are only using basic automations, yet 96% of respondents say their AI workloads are medium or high complexity.

They note that token consumption — a sign of advanced reasoning models — is exploding, with 61% anticipating 10 billion tokens per month by 2028. But Deloitte cautions that “token growth can signal inefficient solution patterns.” In other words, there’s a good chance that high-token use (tokenmaxxing or not — and yes, it’s a thing) signals strategic gaps rather than sophistication.

Finally, AI-related skills gaps are a likely hurdle. Everyone from AI engineers and security specialists to change management experts is currently in demand to help meet ambitious AI targets.

Practical Application: It Starts With Edge Operations

As AI moves to the edge, skills gaps and operational processes might be the biggest difference between advanced and rudimentary deployments. But the health of your edge device foundation is often the biggest tell. Here’s what to assess:

  1. Is your hardware stack audited and standardized? Define your device lifecycle, standardize purchasing to cut variations (they’re an operational tax), and keep a real-time device inventory.
  2. Is your device management platform deployed and ready? Remote provisioning, OTA updates with staged rollout and automatic rollback, and centralized policy enforcement. And when you can’t standardize, make sure you have unified visibility.
  3. Do you have device health baselines under AI workload conditions? CPU, memory, thermals, and latency at normal operating load — not idle. Deviations are only detectable if you know what normal looks like. 
  4. Can you design for failure before it happens? Automate device updates and remediation workflows for common edge AI failures — model process crashes, storage threshold breaches, or connectivity loss — so that when issues arise, devices recover faster with less human intervention. 
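Point 3 above is worth making concrete: a deviation only registers if you know the baseline under load. A minimal sketch — metric names, baseline values, and the 20% tolerance are all illustrative:

```python
# Baseline measured under normal AI workload, not at idle.
BASELINE = {"cpu_pct": 65.0, "mem_pct": 70.0, "temp_c": 58.0}

def deviations(sample: dict, tolerance: float = 0.20) -> dict:
    """Metrics more than `tolerance` (fractional) above their baseline."""
    return {
        k: sample[k]
        for k, base in BASELINE.items()
        if sample.get(k, 0.0) > base * (1 + tolerance)
    }

# Under load the model process leaks memory and thermals creep up:
alerts = deviations({"cpu_pct": 68.0, "mem_pct": 93.0, "temp_c": 71.0})
```

Without the baseline, 93% memory use is just a number; with it, it’s an alert before the device falls over.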

Final Thought: Hitting Your AI Target

The organizations that will hit their 2028 edge AI targets will be the ones that know the difference between ambitious and effective. And getting it right means treating operational infrastructure as a prerequisite. After all, confidence is cheap, but a hardened edge foundation is really worth bragging about. 

Broken Restaurant Tech Isn’t the Problem

72% of drive-thru workers deal with tech issues.

By Cam Summerson | April 30, 2026

QSRs have spent the last several years upping their tech game to speed up service, cut labor costs, and modernize the customer experience. All good things. 

The problem is that a lot of this tech is failing. Constantly. At scale. 

A new report from QSR Magazine makes the point clear: 72% of drive-thru employees deal with recurring tech issues. Nearly 4 in 10 say credit card readers fail to process transactions. A third say the POS goes offline regularly. And more than half have watched both kiosk and drive-thru customers simply abandon their order because the tech didn’t work. 

Half. That’s an outrageous number. 

The irony is hard to miss: QSRs deployed this tech to enhance the customer experience, but now the tech itself is becoming the problem. Every kiosk that freezes, payment terminal that goes offline, or screen that just goes dark is a direct hit on revenue. Worse, it goes against everything that “digital transformation” promises to deliver. 

The problem is that operators bought the hardware and the software, but nobody thought about the infrastructure that keeps it all running after day one. Duct tape and prayers hold it all together. 

Kiosks aren’t products. POS terminals aren’t one-and-done. These are devices that need to be managed. 

From the Field: Payment Terminal Failure Points

The QSR Magazine report, which draws on survey data from frontline employees at major QSR chains like McDonald’s, Burger King, Chick-Fil-A, and Domino’s, paints a pretty damning picture for the industry. 

The most common failure point is the payment terminal: 38% of drive-thru employees say readers regularly fail to process transactions. That’s a structural failure of a load-bearing function. 

But that’s not even the worst part. Operators don’t find this out through monitoring or proactive alerts — they’re hitting walls when customers complain, a crew member invents a workaround to get the job done, or a manager notices a line isn’t moving. This isn't just a short-term issue where you lose a transaction, either. This is how you lose customers. Full stop. 

The report’s conclusion is razor sharp: restaurants that are gaining ground on these problems see it as an infrastructure solution — the restaurant is a single, connected system. They’re managing tech on-site holistically, with remote visibility, automated troubleshooting, and proactive alerts. 

Everyone else is just playing tech whack-a-mole when things go sideways.

Practical Application: Check Your Device Health

So, what can you do? This week, today, right now? Here’s your gut check: 

  1. Do you know when a device goes down — before a customer does? If the answer is "when someone calls us," that's a monitoring gap. Real-time alerting on device health should be table stakes for any managed fleet.
  2. Can you reboot, reconfigure, or recover a device without rolling a truck? Remote remediation — reboots, config pushes, app restarts — is the difference between a 30-second fix and a $300 service call. Every device in your fleet should be reachable remotely.
  3. Do you know the OS and app version running on every device, right now? Version drift is silent and cumulative. One device on an old OS version is a curiosity. A hundred of them is an incident waiting to happen.
  4. What's your MTTR when something breaks at 11 AM on a Saturday? Mean time to resolution matters more in QSR than almost any other environment. The lunch rush doesn't wait for your ticketing system.

If any of these questions made you squirm in your seat a little bit, you know your answer. There’s a gap. But you can close it. 
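Question 3 — version drift — is the easiest to answer with data you likely already have. A sketch over a device inventory, with made-up device IDs and versions:

```python
from collections import Counter

def drift_report(inventory: dict[str, str], target: str) -> dict:
    """Count devices per version and list those off the target version."""
    versions = Counter(inventory.values())
    stragglers = sorted(d for d, v in inventory.items() if v != target)
    return {"versions": dict(versions), "stragglers": stragglers}

fleet = {"kiosk-1": "2.4.0", "kiosk-2": "2.4.0", "kiosk-3": "2.1.7"}
report = drift_report(fleet, target="2.4.0")
```

One straggler is a curiosity; run this across a real fleet and the “incident waiting to happen” usually has a list attached.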

Final Thought: Mapping the Problem

The QSR tech crisis isn’t a vendor problem. It’s not a hardware problem. It’s a management problem, and it’s 100% solvable. You bought the kiosks, POS system, KDS, and everything else. But you didn’t bank on the monitoring systems, remote operations, and incident response being the actual differentiator. Time for a mindset shift.

The good news? You don’t have to build it from scratch. 

Worth a Look

Leading QSR operators are already closing this gap. Here’s why it all starts with treating devices like infrastructure. 


Esper Team

The Esper Team is on a mission to power exceptional device experiences by revolutionizing the way companies manage their device fleets. Through advanced capabilities, Esper is leading the market beyond standard MDM practices into the modern era of DevOps for devices and beyond — where workflow automation is standard, re-provisioning is a thing of the past, and managing by exception unlocks unprecedented operational efficiency.
