Speak Up to Shape Next-Gen Edge AI

Ten years ago, when deep neural networks started leaping from academic papers onto embedded processors, many of us assumed the hardest problems would soon be solved. Hardware accelerators would get faster, frameworks would mature, and training tricks would trickle down from the cloud. Yet every year, when the Edge AI and Vision Alliance asks product teams where they struggle in building physical products that incorporate computer vision and perceptual AI, the same pain points glare back at us, some sharper than ever.

In our 2024 Computer Vision and Perceptual AI Developer’s Survey, 232 end-product developers told us which obstacles slow them most when using neural networks for vision. Three themes dominated:

Getting (and keeping) the right data. 58% of respondents say training data is a top challenge. That’s not just about getting enough training data; it’s about variety, quality, labeling consistency, privacy compliance, and the cost of updating datasets when the real world changes.

Squeezing models into real products. 44% struggle with “the cost of processing performance required to run the model in the target system.” Closely related concerns include memory footprint (35%) and power consumption (34%). Put differently, it’s one thing to run your model on a desktop with a GPU, and quite another when the product has to fit into a tight space, the BOM cost must hit €50, and the battery needs to last a week (a quantization sketch below makes this concrete).

Turning models into running code. 43% cite the effort required to train a model, 35% the effort to find the right model, and 33% the effort to implement the model on the target system. These numbers represent overlapping headaches: evaluating candidate models, running training jobs, tuning hyperparameters, pruning for size (sketched below), recompiling kernels for the NPU du jour, discovering that accuracy has dipped below spec… You get the idea.

Respondents can pick multiple answers, so totals exceed 100%, proof that most teams wrestle with several of these challenges at once.
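
To make the second and third pain points concrete, here are two minimal sketches in PyTorch. Both are illustrations under stated assumptions, not anyone’s production recipe: the model and layer shown are hypothetical toys, and every real deployment needs its own accuracy re-check. First, post-training dynamic quantization, a common first step when memory footprint is the constraint:

```python
# Minimal sketch: post-training dynamic quantization in PyTorch.
# "TinyClassifier" is a hypothetical stand-in for a real vision model.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(),                  # (N, 3, 32, 32) -> (N, 3072)
            nn.Linear(32 * 32 * 3, 256),
            nn.ReLU(),
        )
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyClassifier().eval()

# Convert nn.Linear weights to int8; activations are quantized on the
# fly at inference. Weights shrink roughly 4x, but accuracy must be
# re-measured against your product spec afterward.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # torch.Size([1, 10])
```

And second, the “pruning for size” step, sketched with PyTorch’s built-in magnitude-pruning utility:

```python
# Minimal sketch: L1 magnitude pruning with torch.nn.utils.prune.
# A single toy layer stands in for a real model's layers.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)

# Zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Bake the mask into the weight tensor and drop the reparametrization.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")  # ~30%
```

Note the caveat pruning tutorials often skip: zeroed weights alone don’t shrink a dense tensor, so you still need sparse storage, structured pruning, or a runtime that exploits sparsity, followed by another accuracy check against spec. That loop is exactly the accuracy-dipped-below-spec headache respondents describe.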

Why spotlight these numbers? Because they remind us that while ImageNet-beating accuracy is old news, practical productization is not. Data and deployment remain the trenches where schedules slip, budgets bleed, and engineers lose sleep.

Turning pain points into progress

Collecting these candid reports isn’t an academic exercise. Each year, we share the anonymized results of our survey with Edge AI and Vision Alliance member companies—the processor vendors, tool suppliers, camera makers, and algorithm startups that populate the supplier ecosystem. The conversation often starts with, “Wow, interesting to see that developers are still struggling with X,” and ends with new features, roadmap tweaks, improved documentation, and fresh tutorials—all aimed at fixing X.

The virtuous cycle works because system developers—engineers and engineering managers building kitchen appliances, robots, medical devices, and industrial equipment—take the time to answer. Suppliers know what their evaluation kits can do. But they don’t always know what happens when a medical-grade Linux kernel, multiple 30 FPS cameras, a transformer-based neural network, a cutting-edge SoC, and a cramped thermal envelope collide. Your survey responses supply that reality check.

Just as important, the dataset helps us at the Alliance sharpen our own content. A few years ago, a surge of “dataset pain” spurred us to adjust our Embedded Vision Summit conference tracks to focus on this critical problem. We don’t guess which topics matter to our attendees; we follow the numbers.

Help us shape the 2025 landscape

That brings me to this year’s ask. The 2025 edition of the Computer Vision and Perceptual AI Developer’s Survey is now open, and we’re eager to hear from the people who make vision work at the edge, especially those building finished products for consumers, businesses, or governments. If you’re developing a CV or sensor-based AI product and you spec hardware or software, collect data, train models, optimize, integrate, or ship, your perspective is gold.

The sooner we gather a critical mass of responses, the sooner suppliers can align their roadmaps with your real-world needs: lower-cost processors, easier-to-use tools, lower-power NPUs, smarter memory hierarchies, or whatever else you tell us still hurts.

Ten years from now, we’ll look back on today’s challenges as quaint. But getting there faster requires clear signals from the front lines. Please add your voice. Spend a coffee break with the survey, then forward the link to a colleague who wrestles with computer vision and perceptual AI gremlins of their own.

I can’t promise every gripe will vanish next quarter. But I can promise that shining light on the bottlenecks is the surest way to get them fixed—because history shows that when developers speak with data, the industry listens.
