MIT students unlock potential of smart IoT devices
The term “Internet of Things” is not just about connections between things – that is, sensors and devices exchanging data. Managing the data flowing from these devices often means connecting, in new and more complex ways, to a wide range of systems capable of solving everyday problems, such as monitoring air pollution or assisting people with motor difficulties – as projects devised by engineering and computer science students at the Massachusetts Institute of Technology reveal.
The spaces in which we live, work and learn are heavily polluted by toxic emissions from traffic and industry, and tackling this represents a huge task for cities and businesses. Students from MIT’s New Engineering Education Transformation (NEET) program recently developed a system to improve the way urban centers monitor air pollution levels. The idea is to have fleets of autonomous drones patrol sections of a city’s skies, report their sample readings in real time, and then return to their docks to recharge for the next flight.
The autonomous drones fly about 100 meters above densely populated urban residential areas to capture data, which is then sent to a central communication module. The processed information is compared with wind and traffic patterns and with historical records of pollution hot spots. The fleet is then directed to new sampling points and the process restarts.
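As a rough illustration of this sample-and-retask loop, the sketch below ranks the latest readings – with a bonus for historically polluted locations – and sends the fleet to the highest-priority points. All names and values here (`Reading`, `next_waypoints`, the hotspot bonus) are hypothetical, not taken from the MIT project:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    x: float          # grid position (meters)
    y: float
    pm25: float       # measured PM2.5 (micrograms per cubic meter)

def next_waypoints(readings, historical_hotspots, k=2):
    """Rank candidate points by current reading plus a bonus for
    historically polluted locations; return the top k as waypoints."""
    def score(r):
        bonus = 5.0 if (round(r.x), round(r.y)) in historical_hotspots else 0.0
        return r.pm25 + bonus
    ranked = sorted(readings, key=score, reverse=True)
    return [(r.x, r.y) for r in ranked[:k]]

# One sampling pass: three readings, one known historical hot spot.
readings = [Reading(0, 0, 12.0), Reading(15, 0, 35.5), Reading(0, 15, 20.1)]
hotspots = {(0, 15)}
print(next_waypoints(readings, hotspots))  # highest-priority points first
```

A real system would fold in the wind and traffic models the students mention; this only shows the shape of the re-tasking decision.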
According to the student group, the mobile approach was chosen because its social impact was expected to be greater than that of existing stationary pollution monitoring systems. The students found that static models often fail to detect spatial heterogeneity in pollution levels. “Because of the limited distribution and lack of mobility, they are only an indicator of the air quality directly around each monitoring point, not of the air quality in the whole city, for example,” the students said.
The air quality data is transmitted in real time at a resolution of 15 meters and can be accessed publicly through an interface that combines the collected readings with socioeconomic data for the area, such as income, family composition, housing types, and means of transportation. According to the students, this helps reveal patterns and disparities in exposure to air pollution and supports more precise decisions to address them.
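The kind of aggregation described – joining gridded air-quality readings with socioeconomic attributes for the same area – amounts to a keyed merge. The cell ids and field names below are illustrative, not from the actual interface:

```python
# Two data layers keyed by the same grid cell id (illustrative values).
air_quality = {                 # cell id -> mean PM2.5 (ug/m^3)
    "cell_01": 8.2,
    "cell_02": 27.9,
}
census = {                      # cell id -> median household income (USD)
    "cell_01": 91_000,
    "cell_02": 42_000,
}

# Merge the layers so exposure can be compared across income levels.
merged = {
    cell: {"pm25": pm, "income": census.get(cell)}
    for cell, pm in air_quality.items()
}
for cell, attrs in sorted(merged.items()):
    print(cell, attrs)
```

Once merged, a disparity such as higher PM2.5 in lower-income cells falls straight out of a comparison over the joined records.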
The solution monitors a form of pollution called PM2.5, composed of particles small enough to enter the bloodstream when inhaled, which can lead to lung and heart disease over time.
A tactile sensing mat developed by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) could support treatment in healthcare settings or smart homes. The mat exploits the many interactions people have with the ground every day to better understand their movements.
To preserve privacy as much as possible, cameras are used only to create the dataset on which the system is trained, capturing the person performing the activities. Afterward, to infer the person’s 3D pose on the mat, a deep neural network uses only the tactile information to determine whether the person is doing, for example, sit-ups, stretching, or another activity.
The researchers trained the system on paired tactile and visual data, such as a video and a corresponding heat map of someone doing a push-up. The AI model uses the visual data as the ground truth and the pressure the person exerts on the mat to reconstruct 3D human poses, so it can produce an image or video of a person performing a particular action on the mat without actually filming the person carrying it out.
Beyond accompanying gym classes, the mat can be used, for example, for monitoring high-risk individuals, detecting falls, and supporting rehabilitation activities.
“We have built a low-cost, high-density, large-scale smart mat that enables real-time recordings of human tactile floor interactions on a continuous basis,” the researchers note.
Made of commercial pressure-sensitive film and conductive wires carrying more than 9,000 sensors, the mat measures 11 × 0.60 meters. Each sensor converts body pressure into an electrical signal. The model takes the pose extracted from the synchronized visual data as ground truth, uses the tactile data as input, and thus produces the person’s 3D pose.
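A toy version of this supervised setup can make the idea concrete: the camera-derived pose is the label, the flattened pressure frame is the input, and a model learns the mapping between them. Here a single linear layer trained by gradient descent stands in for the real deep network, and all sizes and names are illustrative, not from the CSAIL system:

```python
import random

random.seed(0)
N_SENSORS = 16          # flattened pressure frame (the real mat has >9,000 sensors)
N_COORDS = 3            # one 3D keypoint (x, y, z) for brevity

def predict(w, frame):
    """Linear map from a pressure frame to one 3D keypoint."""
    return [sum(w[c][i] * frame[i] for i in range(N_SENSORS))
            for c in range(N_COORDS)]

# Synthetic training pairs: (tactile frame, camera-derived 3D keypoint).
true_w = [[random.uniform(-1, 1) for _ in range(N_SENSORS)]
          for _ in range(N_COORDS)]
data = []
for _ in range(200):
    frame = [random.random() for _ in range(N_SENSORS)]
    data.append((frame, predict(true_w, frame)))

# Train by stochastic gradient descent on squared error.
w = [[0.0] * N_SENSORS for _ in range(N_COORDS)]
lr = 0.05
for _ in range(300):
    for frame, target in data:
        out = predict(w, frame)
        for c in range(N_COORDS):
            err = out[c] - target[c]
            for i in range(N_SENSORS):
                w[c][i] -= lr * err * frame[i]

frame, target = data[0]
print(predict(w, frame), target)   # predictions approach the labels
```

At inference time only the tactile frames are needed, which mirrors the privacy property described above: the camera matters only while building the training labels.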
According to the researchers, the model can predict poses with a margin of error of less than 10 centimeters, and it classified specific actions correctly 97% of the time.
In the future, the team plans to improve the metrics for multiple users, for example when two people are dancing on the mat. They also intend to extract more information from the tactile signals, such as a person’s height or weight.