Every week, the Esper team sends an edge technology topic straight to your inbox. Here’s what we’ve been talking about this month:
The Kiosk Tipping Point is Real, but What Comes After is What Matters
Written by Cam Summerson
The self-service kiosk market is blowing past $16 billion this year on its way to a projected $45 billion by 2030. Installations have surged 43% in recent years, and big QSR brands like McDonald's, KFC, and Taco Bell are deploying aggressively across thousands of locations.
If you're in restaurants, retail, or hospitality, none of this is news. What is news is how few operators are thinking about what happens after the kiosk ships.
Because here's the thing: Buying a kiosk is a POS decision. Running 500 kiosks across 200 locations is an infrastructure decision. And that's a fundamentally different problem.
No turning back for restaurant kiosks
GRUBBRR recently published a solid breakdown of why 2026 is the “no turning back” point for restaurant kiosks. The data tracks with what we're seeing across the industry: 61% of restaurant customers want more self-service options, and operators are reporting 20–30% average ticket increases thanks to consistent, intelligent upselling that even the best (and most stressed) cashier can't match during a lunch rush.
The business case is settled. What's not settled is the operational backbone.
Every one of those kiosks is an unattended endpoint running in a high-traffic, high-abuse environment. They need to be provisioned, locked down, updated, monitored, and troubleshot remotely across dozens or hundreds of locations. When one goes down during the lunch rush, that's not a "tech issue." That's lost revenue per minute.
And it's not just kiosks. It's the kitchen displays behind them. The digital menu boards above them. The order progress screens next to them. You're not deploying a kiosk. You're deploying a fleet.
Take your fleet readiness snapshot
If you're rolling out self-service kiosks (or already have them in the field), here's a quick sanity check:
- No-touch provisioning: Can you ship a kiosk to Store #147 and have it self-configure? If a tech needs to be on-site, you’ve already lost the scale game.
- Remote diagnostics: When a kiosk freezes at Store #10, can you see what happened from your desk, or are you dispatching someone with a laptop?
- OTA updates at scale: Can you push a firmware patch, app update, or config change to every kiosk in the fleet at 2 AM without waking anyone up? And if something goes wrong, can you roll it back before the morning shift?
- Lockdown and compliance: Are your kiosks actually locked down to their intended purpose, or is there a chance someone could break out of the app and start watching YouTube on your $3,000 ordering terminal?
- Fleet-level visibility: Do you know — right now, before the tickets start flying — which devices are online, drifting, or outdated?
If the answer to any of these is "not really," you're not running a kiosk fleet. You're running a collection of individual kiosks and hoping for the best.
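The fleet-level visibility check above is the easiest one to automate. As a sketch, here's what a minimal health report might look like, assuming your MDM can export a device inventory. All field names, device IDs, and thresholds here are hypothetical, purely for illustration; in practice the snapshot would come from your management platform's inventory API.

```python
# Hypothetical fleet snapshot — in a real deployment this would be pulled
# from your MDM's device-inventory API. Field names are illustrative.
FLEET = [
    {"id": "kiosk-147-01", "last_seen_min": 2,  "os_build": "2024.06", "target_build": "2024.06"},
    {"id": "kiosk-010-03", "last_seen_min": 95, "os_build": "2024.06", "target_build": "2024.06"},
    {"id": "kiosk-052-02", "last_seen_min": 4,  "os_build": "2023.11", "target_build": "2024.06"},
]

def classify(device, offline_after_min=30):
    """Bucket one device: offline (silent too long), drifting (wrong build), or online."""
    if device["last_seen_min"] > offline_after_min:
        return "offline"
    if device["os_build"] != device["target_build"]:
        return "drifting"
    return "online"

def fleet_report(fleet):
    """Group device IDs by health bucket so problems surface before tickets do."""
    report = {"online": [], "offline": [], "drifting": []}
    for device in fleet:
        report[classify(device)].append(device["id"])
    return report

if __name__ == "__main__":
    print(fleet_report(FLEET))
```

The point isn't this particular script; it's that "which devices are online, drifting, or outdated" should be a query you can answer in seconds, not a spreadsheet you rebuild after an outage.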
From kiosk to growth driver
The kiosk tipping point isn't about hardware. It's about what happens when every physical location becomes a high-stakes technology deployment. Restaurants and retail stores are becoming device companies, whether they planned for it or not. The ones that treat their hardware like infrastructure will be the ones that actually capture the revenue upside everyone's promising.
The rest will be troubleshooting frozen screens during the lunch rush and wondering where the 20% ticket lift went.
Adding new kiosks or POS devices? Learn why the procurement and deployment phase is the first step in cutting TCO.
The Real Operating System of Your Company
Written by Maddie Gainza
Retail technology is moving faster than ever.
AI is already showing up in everyday workflows across industries, especially in retail edge environments. According to the National Bureau of Economic Research, about 23% of workers say they use generative AI at work each week, and nearly one in 10 are using it daily.
At the same time, trust in how that technology is governed hasn’t caught up yet. A global study by KPMG and the University of Melbourne found that only about two in five people believe current safeguards around AI are sufficient.
Enter the human side of the equation: UKG found that 79% of frontline retail employees report feeling burned out at work. Put all that together, and you get the environment most retail technology teams are living in right now: technology is accelerating, expectations are rising, and the people keeping everything running are stretched thin.
That’s why culture suddenly matters a lot more than people expect. Not the “culture” you see on a careers page, not the mission statement posted in the break room. Real culture shows up in how your organization actually tackles problems. Culture is how decisions are made under pressure.
The first hour of an incident is the truth
With AI on the edge, technology failures rarely arrive politely. Systems slow down as tickets pile up. Authorizations start failing. Dashboards light up like a Christmas tree. Your team suddenly has to make decisions fast, with incomplete information.
The first hour after something goes wrong is revealing. It immediately exposes team dynamics: whether teams collaborate or descend into finger-pointing, whether individuals escalate problems proactively or wait and see, and whether frontline teams are empowered to act or stuck waiting for approvals.
In fast-paced retail environments where downtime directly translates into lost revenue, the difference between a minor incident and a full-blown crisis often comes down to how teams respond in the first 60 minutes. Not the technology. The culture around it.
Stress test your culture
Here’s a simple exercise most organizations rarely attempt: simulate a real incident against your edge AI systems. Pick something realistic:
- Payment authorization fails during peak hours
- Network outages impact multiple stores
- A bad device update bricks POS terminals
- Inventory systems fall out of sync
Then, run a tabletop scenario with the teams responsible for responding to the test incident. Don’t just watch the technical fixes. Watch how the culture emerges. Who owns the decision? Who has authority to act? How quickly does information move between teams? You’ll usually find the same things: unclear ownership that slows escalation, and quick-thinking contributors blocked by approval processes unfit for a SEV1 moment.
These are cultural problems disguised as operational ones. The organizations that handle incidents well establish clear decision paths and foster a culture where teams trust each other enough to move quickly.
The first hour of incident response
Retail technology is evolving quickly. AI is moving into everyday operations. Customer expectations keep rising. And frontline teams are under more pressure than ever.
In that environment, the organizations that win won’t just be the ones with the best tools. They’ll be the ones with cultures designed for reality. Because you don’t really know how your company operates when everything is calm. You find out when something breaks.
And the first hour of an incident can tell your whole culture story.
Further Reading
Take this to the next level: add analytics. It’s one thing to perform under pressure; it’s another to track it. Read our CTO Sudhir Reddy’s blog post, The Power of AI and Analytics at the Edge for Long-Term Results.
Attack Vector Cleanup in Aisle 4
Written by Kelsey Milligan
Your payment terminals or self-order kiosks keep the checkout line moving and the orders coming. These devices also store sensitive information about your customers, making them significant security risks the moment their operating systems lose support.
But far too many edge devices were never built with a retirement plan. Devices are simply deployed and put to work. As long as they do their job, everything is great!
That is, until the updates stop. That’s when the vulnerabilities pile up, while the new hardware budget doesn’t. That’s when the bad actors take note.
CISA orders purge of unsupported edge devices
In February, CISA ordered federal agencies to remove unsupported edge devices. Agencies now have 90 days to take inventory, report back, and replace old hardware.
The rationale: State-sponsored threats are increasingly targeting unsupported edge devices as a preferred access pathway into target networks.
The adversaries don’t always distinguish between a government router and a retail payment terminal. While your organization might not be holding many government secrets, for attackers, extortion never goes out of style.
A kiosk on outdated firmware, a signage display that hasn't been patched in two years, a logistics tablet still running an unsupported OS — these are exactly the devices CISA is concerned about. As CISA Acting Director Madhu Gottumukkala stated: "Unsupported devices pose a serious risk to federal systems and should never remain on enterprise networks."
Swap "federal systems" for yours, and the logic still holds.
Build your lifecycle baseline
Prioritize visibility and strategic planning over jumping straight into a massive remediation project. Here are three steps to take right away:
- Step 1: Inventory everything. Pull a full list of every managed device — model, OS version, firmware, last update date. Hint: In a properly configured MDM, this is a report you can run in minutes.
- Step 2: Flag end-of-support candidates. Cross-reference your inventory against OEM support timelines. Anything reaching support discontinuation within 12 months needs to be on your radar now — not when the patch stops arriving.
- Step 3: Define your response for each. Three options: update (patch to a supported version), replace (migrate to supported hardware), or isolate (network-segment until replacement is possible). Nothing should sit in an ambiguous middle state.
Bonus points: Make this a quarterly review. Lifecycle risk shifts as vendors change support commitments and your fleet evolves.
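Steps 1 and 2 are straightforward to script once you have the inventory export. Here's a minimal sketch of the cross-referencing pass, assuming a list of devices with OEM end-of-support dates. The device IDs, models, and dates are invented for illustration; your real inventory would come from the MDM report mentioned in Step 1.

```python
from datetime import date

# Hypothetical inventory rows — in practice, exported from your MDM and
# joined against OEM support timelines. All values are illustrative.
INVENTORY = [
    {"id": "pos-004", "model": "TabA8",  "eol": date(2026, 1, 31)},
    {"id": "pos-019", "model": "TabA7",  "eol": date(2025, 6, 30)},
    {"id": "kds-002", "model": "BoxX86", "eol": date(2028, 3, 1)},
]

def flag_eol(inventory, today, horizon_days=365):
    """Split devices into already-unsupported and approaching end-of-support.

    Anything past EOL needs a decision now (update, replace, or isolate);
    anything inside the horizon goes on the radar per Step 2.
    """
    past, upcoming = [], []
    for device in inventory:
        days_left = (device["eol"] - today).days
        if days_left < 0:
            past.append(device["id"])
        elif days_left <= horizon_days:
            upcoming.append(device["id"])
    return past, upcoming
```

Run this as part of the quarterly review and the output becomes your Step 3 worklist: every ID in the "past" bucket must land in update, replace, or isolate, with nothing left in the ambiguous middle.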
How enterprises are owning the hardware lifecycle
If your hardware is out of date, rip-and-replace isn’t the only option. When your OEM no longer supports updates, you can keep your hardware and swap the OS.
We’ve had customers pull it off: One retailer in particular was obsessed (in the best way) with their customer experience. But a good portion of cash flow depended on soon-to-be unsupported x86-based POS devices. Poor data security was a non-starter, and rip-and-replace would have been costly and extremely disruptive.
The answer: Keep the hardware, and install a custom Android OS. Learn how it’s done.



