Edge AI Revolutionizes Real-Time Data Processing and Automation
Edge AI delivers lower latency, faster processing, less dependence on constant internet connectivity, and reduced privacy risk. As demand for real-time intelligence grows, the technology represents a significant shift in how data is processed.
Edge AI runs artificial intelligence directly on a device, computing near the data source rather than in an off-site data center as with cloud computing. Edge AI offers reduced latency, faster processing, less need for constant internet connectivity, and can lower privacy risks. This shift in how data is processed impacts a wide range of technologies, from consumer applications to advanced vehicle functions to factory automation.
The greatest value of edge AI is the high-speed functionality it can provide. Unlike cloud or data-center AI, edge AI does not send data over network links; it computes locally, often on a real-time operating system. For time-sensitive situations, such as a factory line that uses machine vision or an automotive application, edge AI offers fast, deterministic response times without the variable delays that come with routing data through networks and cloud servers.
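The latency argument above can be made concrete with a back-of-the-envelope budget calculation. All timing figures in this sketch are assumed, round-number examples for illustration, not measurements of any specific system.

```python
# Illustrative per-frame latency budget for a 60 fps machine-vision line.
# Every timing figure below is an assumed, round-number example.

FRAME_RATE_HZ = 60
frame_budget_ms = 1000 / FRAME_RATE_HZ  # time available per frame (~16.7 ms)

# Assumed cost of shipping each frame to a cloud endpoint:
cloud_round_trip_ms = 80  # network transit + server queueing (assumed)
cloud_inference_ms = 5    # inference on a data-center GPU (assumed)

# Assumed cost of running a small model locally on an edge device:
edge_inference_ms = 10    # on-device inference (assumed)

cloud_total = cloud_round_trip_ms + cloud_inference_ms
edge_total = edge_inference_ms

print(f"Per-frame budget: {frame_budget_ms:.1f} ms")
print(f"Cloud path: {cloud_total} ms "
      f"({'misses' if cloud_total > frame_budget_ms else 'meets'} budget)")
print(f"Edge path:  {edge_total} ms "
      f"({'misses' if edge_total > frame_budget_ms else 'meets'} budget)")
```

With these assumed numbers, the network round trip alone blows the per-frame budget before any inference happens, while local computation fits comfortably inside it.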
Edge AI is already impacting every industry. Supply chains, including warehousing and factories, utilize the technology, which also extends to the transportation industry, including delivery drones. Med-tech is another area of great promise; for example, engineers developing pacemakers and other cardiac devices can design in tools that look for abnormal heart rhythms and offer guidance on when to seek medical assistance.
Engineers and developers plan the future of user interactions
Edge AI is a form of machine learning (ML) that matches patterns based on a statistical algorithm. There are several pathways to generating an ML model: a framework such as TensorFlow or PyTorch, or a SaaS platform such as Edge Impulse. Most of the work in building a good ML model lies in creating a representative dataset and labeling it well.
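The dataset side of that workflow can be sketched in a few lines. The feature values and class names below are invented for illustration; a real project would collect far more samples and label them with a tool such as the platforms named above.

```python
import random

# Minimal sketch of dataset preparation: pair each sample (a feature
# vector, e.g. sensor readings) with a label, then hold out a portion
# for validation. The values and labels here are invented.
labeled_data = [
    ([0.90, 0.10], "pass"),
    ([0.80, 0.20], "pass"),
    ([0.85, 0.15], "pass"),
    ([0.20, 0.90], "defect"),
    ([0.10, 0.80], "defect"),
    ([0.15, 0.85], "defect"),
]

random.seed(0)
random.shuffle(labeled_data)

# Hold out ~20% so the trained model can later be scored against
# known-correct labels.
split = int(0.8 * len(labeled_data))
train_set, val_set = labeled_data[:split], labeled_data[split:]

print(f"{len(train_set)} training samples, {len(val_set)} validation samples")
```

The point of the sketch is proportion: the model-training call is usually one line in a framework, while assembling and labeling `labeled_data` representatively is where most of the engineering time goes.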
The most popular ML model for edge AI is a supervised model, a type of training based on labeled and tagged sample data, where the output is a known value that can be checked for correctness. This type of training is typically used in applications such as classification or regression. Supervised training can be highly accurate, but it depends heavily on the quality of the tagged dataset and may be unable to handle inputs unlike anything in its training data.
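As a minimal sketch of the supervised idea, the snippet below trains a nearest-centroid classifier on labeled samples and then checks its predictions against known answers. The two-feature "vibration" data and labels are invented for illustration; a production edge model would be trained in a framework and deployed to the device.

```python
# Supervised classification sketch: a nearest-centroid rule learned
# from labeled samples. All data values here are invented.

def centroid(samples):
    """Mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

# Labeled training data: each sample has a known, checkable output.
train = {
    "normal":   [[0.10, 0.20], [0.20, 0.10], [0.15, 0.15]],
    "abnormal": [[0.90, 0.80], [0.80, 0.90], [0.85, 0.85]],
}
centroids = {label: centroid(samples) for label, samples in train.items()}

def classify(x):
    """Assign x to the label whose centroid is nearest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

print(classify([0.12, 0.18]))  # resembles the "normal" training data
print(classify([0.88, 0.82]))  # resembles the "abnormal" training data
```

Note the limitation the article mentions: an input far from both centroids (say, a fault mode never seen in training) is still forced into one of the two known labels, which is why the tagged dataset must be representative.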
Hardware for edge AI workloads
Edge AI implementations generally run on microcontrollers, FPGAs, and single-board computers (SBCs). Recent hardware releases that support these functions include NXP’s MCX N series and STMicroelectronics’ STM32MP25 series. Development boards for running edge AI include SparkFun’s Edge Development Board Apollo3 Blue, Adafruit’s EdgeBadge, Arduino’s Nano 33 BLE Sense Rev 2, and the Raspberry Pi 4 or 5. Neural processing units (NPUs) are also gaining ground in edge AI. These specialized ICs are designed to accelerate the processing of ML and AI applications based on neural networks: structures inspired by the human brain, with many interconnected layers of nodes called neurons that process and move information. The latest generation of devices with dedicated neural math processing includes NXP’s MCX N series and ADI’s MAX78000. AI accelerators for edge devices, such as Google’s Coral line and Hailo’s modules, are also emerging.
ML sensors
High-speed cameras with machine learning models are used for supply chain processes such as locating products within a warehouse or identifying defective products on a production line. Low-cost AI vision modules can run ML models to recognize objects or people. Running an ML model requires an embedded system, which includes AI-enabled sensors, also known as ML sensors. Adding an ML model to most sensors will not make them more efficient at their application, but a few types of sensors with ML training can perform significantly more efficiently: camera sensors whose ML models track objects and people in the frame, or IMU, accelerometer, and motion sensors that detect activity profiles.
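The activity-profile case can be illustrated with a toy example: reduce a window of raw accelerometer magnitudes to one feature and label it on-device. The sample data and threshold below are invented; a deployed ML sensor would use a trained model rather than a hand-set threshold.

```python
# Sketch of on-device activity detection from accelerometer data.
# The windows and the variance threshold are invented for illustration.

def variance(window):
    """Population variance of a list of readings."""
    mean = sum(window) / len(window)
    return sum((x - mean) ** 2 for x in window) / len(window)

STILL_THRESHOLD = 0.05  # assumed tuning value, not from any datasheet

def classify_window(accel_window):
    """Label a window of acceleration magnitudes as 'still' or 'active'."""
    return "active" if variance(accel_window) > STILL_THRESHOLD else "still"

resting = [1.00, 1.01, 0.99, 1.00, 1.02]  # ~1 g, little movement
walking = [0.60, 1.40, 0.80, 1.60, 0.70]  # large swings in magnitude

print(classify_window(resting))  # -> still
print(classify_window(walking))  # -> active
```

The efficiency win is that only the tiny label ("still"/"active"), not the raw sample stream, needs to leave the sensor.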
Some AI sensors come preloaded with an ML model. For example, SparkFun’s people-sensing evaluation board is preprogrammed to detect faces and return information over its QWIIC I2C interface. Other AI sensors, like the Nicla Vision from Arduino or the OpenMV Cam H7 from Seeed Technology, must first be trained to recognize the defects and objects of interest.

Image: Seeed Technology’s OpenMV Cam H7 (Source: DigiKey)
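To show what "return information over I2C" can look like on the host side, the snippet below decodes a detection result packet. The 4-byte layout (face count, confidence, x, y) is a hypothetical example, not the actual register map of the SparkFun board; consult the vendor datasheet for the real interface.

```python
# Decode a hypothetical face-detection result packet from an
# AI-enabled sensor. The [count, confidence, x, y] layout is an
# invented example, NOT a real device's register map.

def parse_detection(packet: bytes) -> dict:
    """Decode a hypothetical 4-byte [count, confidence, x, y] packet."""
    if len(packet) != 4:
        raise ValueError("expected a 4-byte result packet")
    count, confidence, x, y = packet
    return {
        "faces": count,
        "confidence": confidence / 255,  # scale raw 0..255 to 0..1
        "position": (x, y),              # coarse location in the frame
    }

# In firmware, the packet would come from an I2C read over the QWIIC
# bus; here we simply decode a hard-coded example.
result = parse_detection(bytes([1, 204, 120, 88]))
print(result)
```

The design point this illustrates is that the sensor ships a few bytes of inference results, not image frames, so even a small host MCU can act on them.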
By enabling faster, more secure data processing at the device level, edge AI will drive profound innovation. Areas likely to expand in the near future include dedicated processor logic for neural-network arithmetic; lower-power alternatives to cloud computing’s significant energy consumption; and integrated module options, such as AI vision parts, that combine built-in sensors with embedded hardware.
For more edge AI information, products, and resources, visit DigiKey.com/edge-ai.
Shawn Luke is a technical marketing engineer at DigiKey.
Published January 28, 2025