
Imagimob

  • Hardware & IoT
  • 2 Case Studies
Leading the way in Edge AI applications

Oct 28

Agritech: Monitoring cattle with IoT and Edge AI

By Tech. Dr. Alex Jonsson

How do you sample high-resolution biometric data, use low-power, long-range networks (LPWAN) and still achieve high-quality results on trickle-feed battery power alone? By using Edge AI (aka tinyML). Here's how Imagimob implemented an unsupervised training model, a GMM (Gaussian Mixture Model) solution, to do just that.

Introduction

Within the AFarCloud project [1], Imagimob has for the past two years had the privilege of working with some of the most prominent organisations and scientists in precision agriculture - Universidad Politécnica de Madrid, the Division of Intelligent Future Technologies at MDH, Charles University in Prague and RISE Research Institutes of Sweden - to create an end-to-end service for real-time monitoring of the health and well-being of cattle. Indoors, chiefly for milk-giving cows, there are commercially available methods that combine multi-sensor (camera) systems, radio beacons and specialist equipment such as ruminal probes, allowing farmers to keep their livestock mostly indoors under continuous surveillance. In many European countries, however, much of the beef cattle is kept outdoors, often on large pastures spanning hundreds or thousands of hectares. This makes short-range technologies that require high data rates and close networking range impractical for aggregating sensor data, since access to power lines and electrical outlets is scarce and most equipment is battery-powered only.

With a small form factor worn around the neck, a sealed enclosure crams in a 32-bit microcomputer, 9-axis movement sensors from Bosch sampling at a full 50 Hz, a long-range radio and a 40-gram Tesla-style battery cell. The magic comes in when all of the analysis is performed on that same microcomputer, then and there at the edge - literally on-cow AI - rather than the traditional transfer of raw data to a cloud for processing and storage. Every hour, a small data set containing a refined result is sent over the airwaves, allowing the farmer to see how much of the animal's time has gone into feeding, ruminating, resting, romping around et cetera, in the form of an activity pie chart. By monitoring many animals, an individual cow can be compared both to its peers and to its own activity over time. Periods of being in heat (fertility) are important for the farmer to monitor, as cows only give milk after becoming pregnant [2]. Cows in heat are, among other things, more restless and alert - standing while the rest of the herd is lying down resting, nuzzling one another and bellowing frequently.

Training & AI models

We use the Imagimob AI SaaS and desktop software in-house too, and it takes care of much of the heavy lifting once you have an H5 file (Hierarchical Data Format, holding multi-dimensional scientific data sets). The tricky part is gathering data from the field. For this purpose, another device is used which contains essentially the same movement sensors - e.g. the Bosch BMA 280, 456 or 490L chip - plus an SD card, a clock chip and a battery allowing weeks of continuous measurements. Some of the time, the cows were videotaped, allowing us to line up sensor readings with the goings-on in the meadows. Using the Imagimob capture system, activities are then labelled for the training phase.
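The article does not include code, but the core idea - fitting a Gaussian Mixture Model to features computed over short windows of accelerometer data, so that each mixture component ends up covering one activity mode - can be sketched in a few lines. The 50 Hz sample rate comes from the article; the window length, feature set and number of components are assumptions for illustration only, not details of Imagimob's implementation.

    # Hypothetical sketch: clustering cattle accelerometer data with a GMM.
    # Window length, features and component count are illustrative assumptions.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    FS = 50          # sample rate in Hz, as stated in the article
    WIN = 2 * FS     # 2-second windows (assumption)

    def windowed_features(acc):
        """acc: (n_samples, 3) raw accelerometer data -> (n_windows, 6) features."""
        n = (len(acc) // WIN) * WIN
        windows = acc[:n].reshape(-1, WIN, 3)
        # Per-window mean and standard deviation per axis, a cheap feature set
        return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

    # Simulated stand-in for an hour of recordings; real data comes from the collar
    acc = np.random.randn(FS * 3600, 3)
    X = windowed_features(acc)

    # Unsupervised fit: each mixture component ideally captures one activity mode
    gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(X)

    # Hourly summary comparable to the activity pie chart sent over LPWAN
    counts = np.bincount(labels, minlength=4)
    print(counts / counts.sum())

In practice the cluster-to-activity mapping (feeding, ruminating, resting, and so on) would be established once against the video-aligned labels, after which only the compact hourly summary needs to leave the device.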

Read full article >


Jan 19

The Future is Touchless: Radical Gesture Control Powered by Radar and Edge AI

As the pandemic has swept over the world, gesture control and touchless user interfaces have become a hot topic. Both provide the ability to interact with devices without physically touching them. In addition to rising hygiene awareness, another primary driver of the touchless gesture control market today is a demand for lower maintenance costs. Here, we'll explore the different types of gesture control technologies, provide some examples of how they are used in specific cases, and explain why, in our opinion, radar technology powered by Edge AI stands out above the rest for certain use cases.

What is gesture control?

Gesture control, or gesture recognition, is a topic of both computer science and language technology, where the primary goal is the interpretation of human gestures via algorithms. Gesture control devices can recognize and interpret movements of the human body, allowing users to interact with and control a system without direct physical contact. Gestures can originate from any bodily motion or state, but normally originate from the hand.

Types of touchless gesture control technologies

There are several different types of touchless gesture control technologies used today to enable devices to recognize and respond to gestures and movements. These range from cameras to radar, and each comes with pros and cons depending on the application.

Cameras (2D/3D)

Many gesture control applications use a camera as input. In fact, there are already a number of products on the market that use smartphone cameras to build mobile apps with gesture control features. In the automotive sector, BMW has led the way, featuring gesture control in some of its latest models. Its solution allows drivers to control select functions in the infotainment (iDrive) system using hand gestures captured by a 3D camera. Located in the roof lining, the camera scans an area in front of the dashboard to detect any gestures performed. Various functions can be operated, depending on the equipment.

Infrared sensors

An infrared (IR) sensor is an electronic device that measures and detects infrared radiation in the surrounding environment. Anything that emits heat gives off infrared radiation. IR is invisible to the human eye, since its wavelength is longer than that of visible light. There are two types of infrared sensors: active and passive. Active infrared sensors both emit and detect infrared radiation, while passive infrared sensors only detect it. Active IR sensors have two parts: a light-emitting diode (LED) and a receiver. When an object gets close to the sensor, the infrared light from the LED reflects off the object and is detected by the receiver. Active IR sensors make excellent proximity sensors and are commonly used in obstacle detection systems. With the ability to detect a variety of simple gestures at low cost, infrared sensing technology can be a good match for many industrial, consumer, and automotive applications.

Radar

Of all the touchless gesture control technologies, radar is the most robust. Radar has some unique properties. One is that it is extremely accurate: even the tiniest motions or gestures can be detected. Another is that it works through materials such as plastic and glass. Furthermore, with radar there are no lenses that can become dirty, which is not the case with cameras or infrared technologies. If a camera or infrared sensor becomes dirty, it doesn't work.
In fact, cameras often face many of the same limitations we find with the human eye. They require a clear lens in order to see properly, limiting where you can position them, and they don't always provide a crisp or reliable picture in bad weather, particularly in heavy rain or snow. We believe that gesture control using radar is a great solution that can be applied in many use cases. In-ear headphones are one great example, and there are many other excellent application examples in consumer electronics, automotive, and Industry 4.0. However, for devices that already have a camera, such as smartphones, where the extra cost of a radar component must be justified, radar may not be the best solution. There are a couple of well-known projects that use radar for gesture control. For instance, Project Soli by Google was announced at the Google I/O developers' conference back in 2015. Project Soli includes a radar sensor and gesture control software, and was launched commercially in the Google Pixel 4 smartphone in 2019.

The role of Edge AI in gesture control applications

Sensors capture the data, but, of course, for gesture control to work, that data needs to be decoded. Today, the vast majority of software for processing and interpreting sensor data is based on traditional methods such as transformation, filtering, and statistical analysis. Such methods are designed by humans who, drawing on their personal domain knowledge, look for some kind of "fingerprint" in the data. Quite often, this fingerprint is a complex combination of events in the data, and machine learning is needed to resolve the problem successfully. To be able to process sensor data in real time, the machine learning model needs to run locally on the chip, close to the sensor itself - usually called "the edge."

Combining Edge AI with Acconeer radar for an exciting new class of embedded gesture control applications

In the pursuit of radically new and creative embedded gesture control applications, we found our ultimate match in Acconeer, a leading innovator in radar sensors. Imagimob and Acconeer share the mission of supplying solutions for small battery-powered devices with extreme requirements on energy efficiency, processing capacity, and cost. In 2019, we teamed up to create an embedded gesture control application. Acconeer produces the world's smallest and most energy-efficient radar sensor, the A1. The data from the sensor contains a lot of information, and for advanced use cases such as gesture control, complex interpretation is needed - a perfect task for Imagimob's ultra-efficient Edge AI software. The goal of our first project with Acconeer was to create an embedded application for gesture-controlled headphones that could classify five different hand gestures in real time using radar data. With its small size, the radar could be placed inside a pair of headphones, and the gestures could function as virtual buttons for functionality that is usually assigned to physical buttons. The end product of the project was a robust demo shown at CES in Las Vegas in January 2020. Later in 2020, we decided to take it a step further. Together with Acconeer, we joined forces with OMS Group with the goal of presenting a fully functional prototype of gesture-controlled in-ear headphones. Gesture control is a perfect fit for in-ear headphones, since the earbud is small and invisible to the user, which makes physical buttons difficult to use.
This project expands on our original concept to include selecting an MCU and developing firmware, while keeping everything small enough to fit into the form factor of in-ear headphones. The result? The end product - and the great step forward in the exciting future of touchless technology it represents - will be demonstrated in Q1 2021. Note: The project is part of the strategic innovation program Smarter Electronic Systems - a joint investment by Vinnova, Formas and the Swedish Energy Agency.
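The article does not describe the model itself, so the following is only a rough sketch of the general approach: a small neural network classifying a short window of radar sweeps into one of the five gestures, compact enough that it could plausibly be converted to run on a microcontroller. Only the five-gesture output matches the text; the input dimensions, layer sizes and framework choice are assumptions for illustration.

    # Hypothetical sketch of a tiny gesture classifier over radar sweeps.
    # Input/window dimensions and layer sizes are illustrative assumptions;
    # only the five-gesture output matches the article.
    import numpy as np
    import tensorflow as tf

    SWEEPS = 30        # radar sweeps per classification window (assumption)
    BINS = 64          # range bins per sweep (assumption)
    GESTURES = 5       # five hand gestures, as stated in the article

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SWEEPS, BINS, 1)),
        tf.keras.layers.Conv2D(8, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(GESTURES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # A single fake window of sweep amplitudes stands in for real A1 data
    window = np.random.rand(1, SWEEPS, BINS, 1).astype("float32")
    probs = model.predict(window, verbose=0)
    print("predicted gesture:", int(np.argmax(probs)))

In a real headphone application, a trained model of roughly this size would be quantized and exported as C code so that each incoming window can be classified on the MCU itself, with the predicted class mapped to a virtual button press.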

Read full article >

Case Studies
Case Study: Automotive - The Learning Intelligent Steering Wheel

The learning intelligent steering wheel is a steering wheel with optical touch sensors and LEDs (in various colours) around the rim.

Case Study: Karolinska Institutet (KI)

Karolinska Institutet (KI) is assessing physical activity, sitting and screen-time behavior in children and adults. To do this, they have teamed up with Imagimob,...

POSTS FROM RELATED COMPANIES
Pure Storage - Jan 20 - Blog Post:
Great Tech and a Great Education for Muskogee Students

Students in Oklahoma’s Muskogee Public Schools benefit from leadership that uses technology to put learning first. 

AWS - Machine Learning - Jan 20 - Blog Post:
Redacting PII from application log output with Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to find insights and relationships in...

AWS - Machine Learning - Jan 19 - Blog Post:
Building, automating, managing, and scaling ML workflows using Amazon SageMak...

We recently announced Amazon SageMaker Pipelines, the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for machine learning...
