One is good, two is better and three is better still. The maxim applies to the world of technology when we think of the Internet of Things (IoT), Machine Learning (ML) and TinyML. The first concept, which refers to physical items connected in a network to collect and exchange data, is already familiar to professionals in the field and is becoming commonplace among laypeople. The second, Machine Learning, is on everyone's lips thanks to the huge buzz around ChatGPT, which reproduces human language to answer general questions. The third combines the first two: a field that uses Machine Learning algorithms to turn the large volumes of data generated by IoT into insights. This intelligence can inform decision-making and raise automation initiatives to new heights across industries.

How does the integration of the two technologies generate so many benefits? On the one hand, IoT data from varied sources can be used to train Machine Learning algorithms. On the other, Machine Learning algorithms can enhance the ability of IoT devices to process and analyse real-time data at the edge of the network, reducing latency. In practice, sensors in industrial equipment, once trained with Machine Learning algorithms, could analyse temperature data in real time and flag the need for preventive repairs, even across different seasons of the year or after the equipment is relocated.

But what about the third concept mentioned at the beginning of the text, TinyML? This technology seeks to bring the power of Machine Learning to extremely small devices with very limited processing, memory and power resources. TinyML algorithms are developed specifically for these compact devices, common on the Internet of Things, and are highly optimized so they can still perform complex tasks such as image and audio recognition.
The potential of tiny TinyML is reflected in huge numbers. The number of TinyML device installations is expected to grow from nearly 2 billion in 2022 to more than 11 billion in 2027, according to a study by ABI Research. "A common theme of the TinyML market is the idea of taking ML everywhere. There are many possible use cases. Think about any kind of sensory data and there will likely be an ML model to apply to that information. Sound and environmental condition sensors remain the most prominent and should drive the huge growth of TinyML device installations," predicts Lian Jye Su, principal research analyst at ABI Research.

Integrating TinyML into IoT therefore brings together three promising technologies: the Internet of Things itself, Machine Learning capabilities, and the miniaturization of devices that can still perform complex tasks while consuming minimal power. This demands expertise in hardware and software optimization, data science and Artificial Intelligence. And if three integrated technologies are already great, how about adding 5G, Edge Computing and increasingly sophisticated sensors? Faster data transfer with 5G, lower latency through more advanced edge computing resources, and more sophisticated sensors capturing ever more varied and accurate data will enable transformations hitherto unimaginable.

Learning to learn

Training a Machine Learning model directly on tiny devices, such as IoT sensors, lets them use more data to make better predictions. The training process, however, requires far more memory than these devices usually have. To resolve this impasse, researchers at MIT and the MIT-IBM Watson AI Lab have developed a new training technique that needs less than a quarter of a megabyte of memory; other solutions can use as much as 500 megabytes, far exceeding the capacity of most IoT devices.
The new approach can be applied in a matter of minutes, preserves privacy by keeping the data on the device itself, and can even improve the accuracy of the results. As if that were not enough, it allows the model to be customized to user demands. "Our solution enables IoT devices to not only perform inference, but also continuously update AI models according to newly collected data, paving the way for learning throughout the life of devices. Low resource utilisation makes deep learning more accessible and can have a wider reach, especially for low-power edge devices," says Song Han, a member of the MIT-IBM Watson AI Lab and senior author of the paper describing the innovation.

The researchers employed two algorithmic solutions to make training more efficient and less memory-hungry. The first, known as sparse updating, uses an algorithm that identifies the most important weights to update in each training round. The second involves quantized training, simplifying the weights, which typically occupy 32 bits: an algorithm rounds them to only eight bits, reducing the memory needed for both training and inference. A technique called quantization-aware scaling (QAS) is then applied to avoid the drop in accuracy that quantized training could otherwise cause. In addition, the researchers developed a training engine that can run these algorithms on a simple microcontroller with no operating system.

The new framework was used to train a computer vision model for detecting people in images, and after only 10 minutes of training the solution had learned the task. The method trained a model 20 times faster than other approaches, according to the researchers, who now want to apply it to language models and other types of data, as well as to shrink larger models without sacrificing accuracy.
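To make the two ideas concrete, here is a minimal pure-Python sketch of what 8-bit weight quantization and sparse updating look like in principle. This is an illustration only, not the MIT researchers' code: the function names, the per-tensor scaling scheme and the fixed "top fraction" threshold are assumptions made for the example.

```python
import random

def quantize_int8(weights):
    # Map 32-bit float weights onto signed 8-bit integers with a shared
    # scale factor -- the rounding step behind quantized training.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    quantized = [max(-127, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    # Recover approximate float weights; the rounding error per weight
    # is bounded by half the scale factor.
    return [q * scale for q in quantized]

def sparse_update(weights, grads, lr=0.01, top_frac=0.25):
    # Apply the gradient only to the weights with the largest gradient
    # magnitude, leaving the rest frozen -- the idea behind sparse
    # updating, which shrinks the memory needed per training round.
    k = max(1, int(top_frac * len(weights)))
    important = sorted(range(len(weights)), key=lambda i: abs(grads[i]))[-k:]
    updated = list(weights)
    for i in important:
        updated[i] -= lr * grads[i]
    return updated

random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(16)]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
print(f"max rounding error: {max_err:.4f} (bound: scale/2 = {scale / 2:.4f})")
```

In a real system the scale factor would be chosen more carefully (this is roughly what quantization-aware scaling addresses), and the "important" weights would be selected by a learned policy rather than a simple top-k over gradient magnitudes.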
The project is financed by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company and Google.