Edge AI Development: Real-World Challenges You'll Face Early On
August 20, 2024

Peter Morales
Code Metal

If you're stepping into edge AI development, let me share some things you'll run into early on. Transitioning AI models from desktop environments to edge devices isn't as straightforward as it might seem. The edge brings unique challenges that demand a different mindset, and the sooner you're aware of them, the better prepared you'll be.

The ONNX Reality

One of the most significant challenges you'll face is the limited support for AI frameworks on edge devices. While frameworks like PyTorch and TensorFlow dominate desktop AI development, they don't always play nicely with the edge. Many edge devices, including FPGAs, are adopting ONNX (Open Neural Network Exchange) as a standard format for running AI models. However, this isn't without its complications.

For example, Qualcomm's Neural Processing SDK supports ONNX, but only a specific version. This means if your model relies on operators introduced in newer ONNX versions, you could be out of luck. Even more frustrating, we've encountered errors in operator implementations — bugs that can be hard to track down because the community of developers working on these edge devices is much smaller. We've also spent a lot of time recreating advanced operators that aren't available in our ONNX version by combining subsets of simpler, supported operators. This is an extremely time-consuming process that requires a deep understanding of both the model and the hardware.
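To make the operator-decomposition idea concrete, here's a minimal, illustrative sketch. Suppose the target opset lacks a GELU activation but does support tanh, multiply, add, and power style primitives: the well-known tanh approximation rebuilds GELU from exactly those pieces. (This is a generic example, not the specific operators we hit; whether a given decomposition applies depends on your model and opset.)

```python
import math

def gelu_exact(x):
    # Reference GELU, defined via the Gaussian error function.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_decomposed(x):
    # The same activation rebuilt from tanh/mul/add/pow-style primitives,
    # the way an unsupported op can be emulated on a limited opset.
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))

# The decomposition tracks the reference closely across a typical range.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert abs(gelu_exact(x) - gelu_decomposed(x)) < 1e-3
print("decomposition matches reference within 1e-3")
```

The catch, as noted above, is that doing this for real models means validating numerical accuracy operator by operator on the actual hardware, which is where the time goes.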

The takeaway here is that when you're porting models to the edge, you need to thoroughly test them on the actual hardware as early as possible. Be prepared to debug low-level issues that you might not have anticipated, and don't assume that everything will "just work" after a simple export to ONNX.
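One cheap way to catch some of these surprises before you're on the device is a pre-flight check that compares the operators your model uses against what the target runtime claims to support. The sketch below is hypothetical — the supported-operator list is illustrative, not taken from any real SDK's documentation — but the pattern is the useful part.

```python
# Hypothetical pre-flight check: compare the operators a model needs
# against the operator set a target SDK supports. SUPPORTED_OPS below
# is illustrative only; a real check would use the vendor's documented list.

SUPPORTED_OPS = {"Conv", "Relu", "Add", "Mul", "MaxPool", "Reshape", "Tanh"}

def missing_operators(model_ops, supported_ops=SUPPORTED_OPS):
    """Return the operators the target runtime cannot execute."""
    return sorted(set(model_ops) - set(supported_ops))

# A model using Gelu would fail this check, flagging the need to
# decompose it into supported primitives before deployment.
print(missing_operators(["Conv", "Relu", "Gelu"]))
```

With the onnx package installed, you could populate model_ops from the loaded model with something like `[n.op_type for n in model.graph.node]`. A passing check still isn't a guarantee — buggy operator implementations won't show up here — but it turns some on-device failures into desk-side ones.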

The Ease of Popular Models

While edge AI development is full of challenges, there are some areas where things are a bit smoother. Many edge devices are optimized for popular models like YOLO (You Only Look Once), a common computer vision model. It seems to be the first thing that every edge hardware accelerator company wants to demo because it's well-known and has been heavily optimized for edge environments.

That said, deploying even a custom version of YOLO isn't without its own challenges. However, you're likely to find more community support, better documentation, and pre-existing optimizations if you start with these popular models. It's worth looking at what's already popular on your target platform rather than immediately jumping to the latest model that leads on performance benchmarks but hasn't been thoroughly tested on edge hardware.

This approach can save you time and effort, especially if you're new to edge AI development. Once you're comfortable with the platform and its quirks, you can start experimenting with more complex models and custom implementations.

Dependency Constraints

Another major hurdle is dealing with dependencies in edge environments. On a desktop, you can freely install libraries with a simple pip install and not think twice about it. But on an edge device, you need to be much more strategic. The libraries you rely on might not be available, or they might exist in a stripped-down version that lacks critical features.

For example, we've seen cases where a seemingly small piece of code on the desktop pulls in thousands of lines of dependency code. What's manageable on a desktop can become a bloated, resource-heavy burden on an edge device. This is where many developers get tripped up — assuming that porting code will be straightforward, only to discover that those few lines of import statements come with massive overheads that edge devices can't handle.

Before you start porting, map out exactly what your project depends on. Understand the entire dependency chain, because on the edge, every extra line of code can be a problem. Consider using tools that allow you to visualize these dependencies or manually trace them to ensure they're essential. This upfront effort can save you from significant headaches later on.
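For Python projects, the standard library's modulefinder gives a quick first look at what an import actually drags in. A minimal sketch: trace a one-line script and count the modules behind a single, innocuous-looking import.

```python
import os
import tempfile
from modulefinder import ModuleFinder

# Write a tiny script whose only job is one import, then trace every
# module that import actually pulls in behind the scenes.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import json\n")
    script_path = f.name

finder = ModuleFinder()
finder.run_script(script_path)
os.unlink(script_path)

pulled_in = sorted(finder.modules)
print(f"'import json' pulled in {len(pulled_in)} modules, e.g.:")
print(pulled_in[:5])
```

Even a stdlib import fans out to multiple modules; a heavyweight third-party import fans out far more, and on an edge device each of those transitive modules is something you have to ship, fit, and trust.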

Key Takeaways for Edge AI Developers

So, what does all this mean for your development process?

1. Test Early and Often on Target Hardware: Don't wait until the end of your project to test on the actual edge device. Start early, and be prepared for unexpected issues with framework support and operator compatibility.

2. Leverage Popular Models: Start with models like YOLO that are widely used and well-supported in edge environments. This can make your initial foray into edge AI development smoother and help you avoid some of the pitfalls of working with more experimental models.

3. Be a Master of Your Dependencies: Before porting, deeply understand what your code is bringing along with it. Strip down to the essentials and be ready to replace or optimize parts of your code that are too heavy for edge devices.

4. Adapt and Debug: The edge environment is less forgiving than the desktop, and you'll need to be ready to adapt quickly. Whether it's dealing with ONNX quirks or tracking down obscure bugs, your ability to debug and optimize will be crucial.

Embrace the Challenge

Edge AI development isn't just desktop AI on a smaller scale — it's a different world with its own rules and limitations. But for those who are ready to embrace the challenge, it offers the opportunity to push AI into real-world applications where it can make an immediate impact. With the right preparation and mindset, you can overcome these obstacles and contribute to the cutting edge of AI development.

Peter Morales is CEO and Founder of Code Metal