TinyML and IoT Make Good Partners
TinyML is enabling AI to be delivered on edge devices in Internet of Things networks, offering the advantages of low power and low latency.
By John P. Desmond, Editor, AI in Business

TinyML is being incorporated into edge devices on Internet of Things (IoT) networks, bringing compact, low-power AI processing to devices without requiring internet connectivity.
TinyML works on low-power microcontrollers that respond to data in real time without consuming significant resources. TinyML is an optimized machine learning technique that allows ML model software to run on embedded systems, according to a recent account in KDnuggets.
An embedded system is made up of hardware and software designed for a specific function, such as an electronic calculator or an ATM. The software can execute on a microcontroller, which offers the benefits of low power, low price and multifunctional use. “Microcontrollers can be used in all devices, gadgets and appliances,” stated the author of the account, Nahla Davies, software developer and tech writer. Some 28 billion microcontrollers were shipped in 2020, she stated.
From a software execution standpoint, TinyML allows data to be processed with low latency, meaning almost no delay, because the analytics happen on the device without requiring any server connection. Another advantage is high data security, since all the data stays on the device with no connectivity.
Machine learning frameworks that support TinyML include: Edge Impulse, a free ML development platform for edge devices; TensorFlow Lite, a library of tools that enable developers to put ML software on embedded systems, edge devices and other standalone devices; and PyTorch Mobile, an open-source ML framework for mobile platforms.
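A core technique these toolchains rely on to fit models onto microcontrollers is quantization: storing weights as 8-bit integers instead of 32-bit floats, as TensorFlow Lite does in its post-training quantization flow. The NumPy sketch below illustrates the idea with symmetric per-tensor quantization; the function names and layer size are illustrative, not a real framework API:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine-quantize float32 weights to int8 (symmetric, per-tensor).

    Returns the int8 tensor plus the scale needed to dequantize.
    """
    scale = float(np.max(np.abs(weights))) / 127.0  # map largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # a small dense layer

q, scale = quantize_int8(w)
print(q.nbytes, "vs", w.nbytes, "bytes")  # 4096 vs 16384 bytes: a 4x reduction
print("max rounding error:", np.max(np.abs(dequantize(q, scale) - w)))
```

The trade-off is a bounded rounding error of at most half the quantization scale per weight, which small networks typically tolerate with little accuracy loss.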
TinyML is gaining in popularity. ABI Research predicts that 2.5 billion devices using TinyML systems will ship in 2030. Among its advantages, “TinyML models can be used as an alternative to a cloud environment, reducing costs, using less power, and offering more data privacy,” stated the author.
On the flip side, TinyML has a memory capacity limited to megabytes or sometimes kilobytes, which restricts the complexity of the models that can execute. Also, troubleshooting cannot be conducted remotely, as it can be in cloud environments.
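The kilobyte-scale memory limit is easy to see with back-of-the-envelope arithmetic. The sketch below sizes a small, hypothetical three-layer dense network (the layer sizes are assumptions for illustration only) and shows why 8-bit storage can be the difference between fitting and not fitting on a microcontroller:

```python
# Back-of-envelope model footprint for a tiny dense network.
# Layer sizes are hypothetical, chosen only for illustration.
layers = [(490, 64), (64, 32), (32, 12)]  # (inputs, outputs) per dense layer

params = sum(i * o + o for i, o in layers)  # weights + biases = 33,900
float32_kb = params * 4 / 1024              # 4 bytes per float32 weight
int8_kb = params * 1 / 1024                 # 1 byte per int8 weight

print(f"{params} params -> {float32_kb:.0f} KB as float32, {int8_kb:.0f} KB as int8")
```

Even this tiny network needs roughly 132 KB in float32, which would crowd out everything else on a microcontroller with 256 KB of flash; quantized to int8 it drops to about 33 KB.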
Industries where TinyML has taken root include: agriculture, where it is used to collect crop and livestock data in real time; retail, to monitor inventories; healthcare, for health monitoring; and in manufacturing, for predictive maintenance systems. “Its popularity is predicted to grow,” stated Davies.
Syntiant Announces New Chip for Deploying TinyML
One recent announcement that exemplifies the trend came from Imagimob, a provider of TinyML platforms, which has partnered with Syntiant Corp., an end-to-end AI chip company. Syntiant makes edge AI chips, positioned as a secure and easy way to deploy TinyML applications on edge devices.
Syntiant designed its low-power NDP120 neural decision processor specifically for deep learning applications that will see the most use in battery-powered devices. The combined Imagimob-Syntiant board supports a range of applications, including sound event detection, keyword spotting, fall detection, anomaly detection, gesture detection and many others.
“The collaboration with Syntiant will be very valuable for our customers because it allows them to quickly develop and deploy powerful, production-ready deep learning models on the Syntiant NDP120,” stated Anders Hardebring, CEO and co-founder at Imagimob, in a press release. “We see a lot of market demand in sound event detection, fall detection, and anomaly detection.”
“Pairing our NDP120 with the Imagimob platform will enable developers to quickly and easily deploy deep learning models,” stated Kurt Busch, CEO of Syntiant. “Studies suggest that there are more than 35 million falls in the U.S. alone that require some kind of medical attention, so there is significant opportunity for applications across both consumer and industrial use cases.”
TinyML Enables Deep Learning on IoT Devices
In a recent paper published by MDPI, author Dina M. Ibrahim of the Department of Computer Engineering at Tanta University, Egypt, provided a technical overview. “A new technology called Tiny Machine Learning (TinyML) has paved the way to meet the challenges of IoT devices,” she stated, adding, “The integration of ML algorithms with the IoT device has aimed to process huge amounts of data and make the devices intelligent in order to make decisions.”
Deep learning (DL), a subset of ML built on deep neural networks and considered among the most advanced ways to process data today, has been applied successfully in applications including image classification, object detection and speech recognition. DL is enabling many applications on IoT edge devices, such as mobile phones, which become intelligent microcontrollers when equipped with DL, the author stated.
The integration of DL with IoT faces many challenges that TinyML can help to meet. “It is computationally expensive, consuming massive CPU and GPU resources, power, memory, and time,” especially to train the DL model, Ibrahim stated. This is where TinyML has advantages in overcoming resource constraints of limited computation, small memory and a few milliwatts of power.
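The “few milliwatts” figure has concrete consequences for battery life. A rough sketch (the battery capacity and power draw below are illustrative assumptions, not figures from the paper):

```python
# Illustrative energy budget: why a milliwatt-scale power draw matters.
# Battery and draw figures are assumptions for the sketch.
battery_mah, battery_v = 225, 3.0                 # CR2032 coin cell, nominal
energy_j = battery_mah / 1000 * 3600 * battery_v  # capacity in joules (2430 J)

draw_w = 0.001                                    # 1 mW always-on inference
days = energy_j / draw_w / 86400                  # seconds of runtime -> days

print(f"{days:.0f} days of continuous operation")
```

At a 1 mW draw, a single coin cell sustains roughly a month of always-on inference; a cloud-connected radio drawing tens of milliwatts would exhaust the same cell in about a day.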
Case study experience cited by the author includes efforts at speech recognition incorporating a new set of tools called TinySpeech. “TinySpeech aims to build a deep convolutional network that has low architecture, low computation on the devices and requires low storage,” stated Ibrahim.
TinySpeech was introduced by a research team investigating “attention condensers” for deep speech recognition neural networks on edge devices, in a paper posted to arXiv, the preprint server operated by Cornell University. Alexander Wong, lead author of the paper, is a professor at the University of Waterloo.
The authors stated in their related paper, “We introduce TinySpeech, low-precision deep neural networks comprising largely of attention condensers tailored for on-device speech recognition using a machine-driven design exploration strategy, with one tailored specifically with microcontroller operation constraints.”
Results on the Google Speech Commands benchmark dataset for limited-vocabulary speech recognition were encouraging. “These results not only demonstrate the efficacy of attention condensers for building highly efficient networks for on-device speech recognition, but also illuminate its potential for accelerating deep learning on the edge and empowering TinyML applications,” the authors stated.
Read the source articles and information in KDnuggets, in a press release from Syntiant, in the recent paper published by MDPI, and in the recent TinySpeech paper on arXiv.