Edge AI Development: Real-World Challenges You'll Face Early On
August 20, 2024

Peter Morales
Code Metal

If you're stepping into edge AI development, let me share some things you'll run into early on. Transitioning AI models from desktop environments to edge devices isn't as straightforward as it might seem. The edge brings unique challenges that demand a different mindset, and the sooner you're aware of them, the better prepared you'll be.

The ONNX Reality

One of the most significant challenges you'll face is the limited support for AI frameworks on edge devices. While frameworks like PyTorch and TensorFlow dominate desktop AI development, they don't always play nicely with the edge. Many edge devices, including FPGAs, are adopting ONNX (Open Neural Network Exchange) as a standard format for running AI models. However, this isn't without its complications.

For example, Qualcomm's Neural Processing SDK supports ONNX, but only a specific version. This means if your model relies on operators introduced in newer ONNX versions, you could be out of luck. Even more frustrating, we've encountered outright bugs in operator implementations, which can be hard to track down because the community of developers working on these edge devices is much smaller. We've spent a lot of time getting more advanced operators that aren't available in our version of ONNX to work by composing them from a subset of supported operators. This is an extremely time-consuming process that requires a deep understanding of both the model and the hardware.
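If you're exporting from PyTorch, pinning the opset at export time and then inspecting what actually ended up in the graph is a cheap first check against these version mismatches. Here's a minimal sketch; the toy model and the opset number are illustrative, and you'd substitute whatever opset your vendor SDK documents:

```python
import torch
import onnx

# Stand-in network; substitute your own model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)

# Pin the opset to the version your edge SDK actually supports.
# Newer operators may simply not exist in older opsets, so the export
# itself can fail here -- which is useful to learn early.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)

# Inspect what the exporter produced.
exported = onnx.load("model.onnx")
onnx.checker.check_model(exported)
print("Opsets:", [(imp.domain or "ai.onnx", imp.version) for imp in exported.opset_import])
print("Operators used:", sorted({node.op_type for node in exported.graph.node}))
```

Comparing that operator list against your SDK's supported-ops documentation tells you up front which pieces you'll have to rework.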

The takeaway here is that when you're porting models to the edge, you need to thoroughly test them on the actual hardware as early as possible. Be prepared to debug low-level issues that you might not have anticipated, and don't assume that everything will "just work" after a simple export to ONNX.
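Even before you get near the device, a quick numerical comparison between the original framework and an ONNX runtime on the desktop catches a surprising number of operator problems. This is a sketch of that parity check using onnxruntime, with a stand-in model; the same comparison should then be repeated against the vendor's runtime on the actual hardware:

```python
import numpy as np
import onnxruntime as ort
import torch

# Stand-in model; in practice compare against the exact network you exported.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)

# Reference output from the original framework.
with torch.no_grad():
    reference = model(dummy_input).numpy()

# Output from the ONNX graph.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
onnx_output = session.run(None, {input_name: dummy_input.numpy()})[0]

# Loose tolerances: quantized or fused kernels on the device will drift
# further, but gross mismatches here usually point at operator bugs.
np.testing.assert_allclose(reference, onnx_output, rtol=1e-3, atol=1e-5)
print("Desktop parity check passed; repeat the same check on the target runtime.")
```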

The Ease of Popular Models

While edge AI development is full of challenges, there are some areas where things are a bit smoother. Many edge devices are optimized for popular models like YOLO (You Only Look Once), a common computer vision model. It seems to be the first thing that every edge hardware accelerator company wants to demo because it's well-known and has been heavily optimized for edge environments.

That said, deploying even a custom version of YOLO isn't without its own challenges. However, you're likely to find more community support, better documentation, and pre-existing optimizations if you start with these popular models. It's worth looking at what's already popular on your target platform rather than immediately jumping to the latest model that leads on performance benchmarks but hasn't been thoroughly tested on edge hardware.
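As a concrete starting point, the Ultralytics package can export a pretrained YOLO checkpoint straight to ONNX at a chosen opset, which makes it easy to see what a well-supported model looks like on your toolchain before bringing your own. A short sketch, assuming the ultralytics package is installed and that your SDK wants an older opset (the version number and image size here are illustrative):

```python
from ultralytics import YOLO

# Start from a small pretrained checkpoint rather than a custom model.
model = YOLO("yolov8n.pt")

# Export to ONNX at an opset your edge SDK supports; many vendor toolchains
# publish reference conversion flows built around exactly this model.
model.export(format="onnx", opset=12, imgsz=640)
```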

This approach can save you time and effort, especially if you're new to edge AI development. Once you're comfortable with the platform and its quirks, you can start experimenting with more complex models and custom implementations.

Dependency Constraints

Another major hurdle is dealing with dependencies in edge environments. On a desktop, you can freely install libraries with a simple pip install and not think twice about it. But on an edge device, you need to be much more strategic. The libraries you rely on might not be available, or they might exist in a stripped-down version that lacks critical features.

For example, we've seen cases where a seemingly small piece of code on the desktop pulls in thousands of lines of dependency code. What's manageable on a desktop can become a bloated, resource-heavy burden on an edge device. This is where many developers get tripped up — assuming that porting code will be straightforward, only to discover that those few lines of import statements come with massive overheads that edge devices can't handle.

Before you start porting, map out exactly what your project depends on. Understand the entire dependency chain, because on the edge, every extra line of code can be a problem. Consider using tools that allow you to visualize these dependencies or manually trace them to ensure they're essential. This upfront effort can save you from significant headaches later on.
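One low-effort way to do this is the standard library's modulefinder, which reports everything a script actually imports, including transitive imports. A minimal sketch, assuming a hypothetical entry point called inference.py:

```python
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("inference.py")  # hypothetical entry point

# Everything the script pulls in, including transitive imports.
print(f"{len(finder.modules)} modules loaded:")
for name in sorted(finder.modules):
    print(" ", name)

# Imports that could not be resolved locally -- on a stripped-down edge
# image, these are the gaps you'll have to fill, replace, or cut.
print("Missing or skipped:", sorted(finder.badmodules))
```

Package-level tools like pipdeptree give a similar picture one level up, at the distribution level rather than the module level.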

Key Takeaways for Edge AI Developers

So, what does all this mean for your development process?

1. Test Early and Often on Target Hardware: Don't wait until the end of your project to test on the actual edge device. Start early, and be prepared for unexpected issues with framework support and operator compatibility.

2. Leverage Popular Models: Start with models like YOLO that are widely used and well-supported in edge environments. This can make your initial foray into edge AI development smoother and help you avoid some of the pitfalls of working with more experimental models.

3. Be a Master of Your Dependencies: Before porting, deeply understand what your code is bringing along with it. Strip down to the essentials and be ready to replace or optimize parts of your code that are too heavy for edge devices.

4. Adapt and Debug: The edge environment is less forgiving than the desktop, and you'll need to be ready to adapt quickly. Whether it's dealing with ONNX quirks or tracking down obscure bugs, your ability to debug and optimize will be crucial.

Embrace the Challenge

Edge AI development isn't just desktop AI on a smaller scale — it's a different world with its own rules and limitations. But for those who are ready to embrace the challenge, it offers the opportunity to push AI into real-world applications where it can make an immediate impact. With the right preparation and mindset, you can overcome these obstacles and contribute to the cutting edge of AI development.

Peter Morales is CEO and Founder of Code Metal