Fighting Diabetes, Fake Food and Food Waste

FoodPhone™ Case: Advanced Vision Technology Turns Regular Mobile Phones into Precise Food & Nutrition Analysis Devices Using AI, 3D and Hyperspectral Approaches

More than 200 million people worldwide live with Type-1 or Type-2 diabetes or obesity. Vision technology now helps in fighting these conditions: take a smartphone picture of a meal on a plate and instantly know its nutritional content with near-scientific precision – as easy as sharing a photo on social media. FoodPhone™ provides a mobile solution that uses Intel®’s RealSense™ (RS) 3D technology to determine the volume, texture and shape of all types of food from what is seemingly a single image capture. Besides identifying carbohydrates and counting calories, this new phone case, equipped with embedded vision, detects the ingredients of commercially prepared food based on their chemical composition and instantly displays FDA-formatted nutrition labels. The FoodPhone™ device’s NIR (near-infrared) capabilities also recognize natural imperfections, both visible and invisible, and help assess a food’s quality and freshness at the grocery store. Selecting the freshest fruits or vegetables, such as avocados, becomes as simple as reading the “freshness” level displayed instantly on the user’s smartphone.

 

Billions of People Worldwide are Watching Their Nutritional Intake

Diabetics, athletes, fitness enthusiasts and many others struggling with their weight need to watch what they eat. In the US alone, over 100 million people use smartphones and smart devices to monitor their weight, fitness and diet every day. In both established Western and emerging Eastern societies, diabetes cases are rising rapidly. For diabetics in particular, counting carbohydrates is key to managing the disease and, for them, it is a matter of life and death: their carbohydrate intake determines their insulin dosages. The carbohydrate total is the key meal information a diabetic needs in order to use emerging technologies such as CGM (Continuous Glucose Monitoring) and automated tubeless insulin pumps. And there lies the problem: these health-saving monitoring tools only work with the right user input. Manual entry and users’ own food estimations tend to be very inaccurate, leading to incorrect insulin dosages that can be dangerous and even life-threatening for a diabetic.

 

Only One Smartphone Image Needed to Create FDA-Like Nutrition Fact Label

Today, smartphones have cameras and internet access, run modern, powerful Artificial Intelligence (AI) algorithms, and are used to take millions of food images every minute. The FoodPhone idea is to turn these smartphones into diet helpers by analyzing the food directly on the plate. By snapping what is seemingly one image of the meal, the FoodPhone app, with its SpectraPixel™ technology, connects to the company’s proprietary cloud-based AI to recognize the meal’s content, specifically its chemical composition, quantity in ounces/grams and quality, while also segmenting mixed meals. With the help of the multispectral cameras and NIR sensors embedded in FoodPhone’s new smartphone case design, the user instantly gets a scientific analysis of their meal.
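As a rough illustration of this capture-and-analyze workflow, the sketch below shows how a companion app might upload the captured RGB, depth and NIR data to a cloud analysis service and read back the results. The endpoint URL, field names and response format are hypothetical; FoodPhone’s actual cloud API is proprietary and not described here.

```python
import requests

# Hypothetical endpoint, for illustration only; FoodPhone's real cloud API is not public.
ANALYSIS_URL = "https://api.example.com/v1/analyze-meal"

def analyze_meal(rgb_path, depth_path, nir_path):
    """Upload one set of captured images and return the cloud's meal analysis."""
    with open(rgb_path, "rb") as rgb, \
         open(depth_path, "rb") as depth, \
         open(nir_path, "rb") as nir:
        response = requests.post(
            ANALYSIS_URL,
            files={"rgb": rgb, "depth": depth, "nir": nir},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: one entry per segmented food item with its
    # estimated weight and macronutrient breakdown.
    return response.json()

# result = analyze_meal("meal_rgb.png", "meal_depth.png", "meal_nir.png")
```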

 

FoodPhone’s FDA-like Nutrition Fact Label for a lunch serving

This one-shot analysis provides details on the carbohydrates, fats, proteins and other nutritional contents, along with the true portion size of the prepared meal. By combining different imaging technologies with AI, FoodPhone precisely identifies the amount and composition of the food. There is no need to input any information into the application, nor to touch or probe the food or guess its volume. The analysis is fast: in a very user-friendly and efficient way, the FoodPhone case delivers exact Nutrition Fact sheets with accuracies beyond 90%. With its mix of AI and AR, the technology aims to help curb the spread of diabetes and support food-watchers in reaching their goals.


 

Idea, Innovation and Implementation

“Originally, I was only looking for an easy way to count calories,” says Christopher M. Mutti, CEO & Founder of FoodPhone. As a passionate hockey player, he had to watch his nutrition and monitor his weight and calories to stay in shape. But for him, this multi-step procedure was not only inaccurate but also extremely complicated. A qualified mechanical engineer, Mutti wanted an easy way to watch what he was eating. He started in 2013 with the basic idea of capturing the nutritional content of a meal with one image from either a camera or a smartphone.

Christopher M. Mutti, FoodPhone’s CEO, is a passionate hockey player and was originally looking for an easy way to count calories.

The technologies were ready: the computing power of smartphones had evolved to the point that they could take on complex algorithms, and more advanced vision-based AI technology was available through projects like ImageNet from Stanford University. Together with a team of scientists experienced in neural-network AI, 3D and hyperspectral imaging, along with engineers experienced in camera and lens design, Mutti set out to develop the patents that would turn his idea into reality.

The first prototype was built with off-the-shelf components and had dimensions of 8” x 7” x 3”, which Mutti refers to as the “Million Dollar Blue Box”. At the time, it was the smallest available solution merging 3D, RGB and NIR, and it cost about $3,000. It took more than five years for technological advancements to reach the level of performance and affordability that makes FoodPhone a practical solution. Mutti often refers to this perfectly timed “technological storm” as the basis for bringing his idea to reality.

Nowadays, Intel®’s RealSense™ cameras are the size of a little finger, enabling new devices, like smartphones, to enter a new world of 3D data collection and processing. With this, Mutti and his team have found the right product, in both size and price, to provide the information needed for food recognition and analysis. They embed the Intel® RealSense™ cameras into a normal-looking phone case, maintaining a form factor similar to standard phone cases. With all these technological advancements now available at lower cost, the product can be offered for just a few hundred dollars. Users simply swap their existing phone case for the FoodPhone case with its embedded cameras and download the app. From there, they can start capturing images of their actual meals and get the nutritional information within seconds.

Design Study and prototype of the FoodPhone mobile phone case

Combining Multiple Vision Data Types with AI

The FoodPhone solution uses multispectral imaging to precisely identify macronutrients and volume or portion size. Mutti developed FoodPhone’s measurement IP without using a “fiducial object” on the plate of food or within the FOV (field of view) of the imaging system. The engineers chose the Intel® RealSense™ D435 depth camera because it is a USB-powered depth camera consisting of a stereo pair of depth sensors, an RGB sensor and an infrared projector. Mutti holds a patent for producing hyperspectral images by merging the output of multiple cameras. This vision technology closely emulates the way humans identify their food.
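As a minimal sketch of how such a camera is driven, the snippet below grabs one aligned depth, color and infrared frame from a D435 using Intel’s pyrealsense2 Python wrapper. The resolutions, frame rate and stream choices are assumptions; the article does not document FoodPhone’s actual capture settings.

```python
import numpy as np
import pyrealsense2 as rs

# Configure the D435: stereo depth, the RGB sensor and one infrared imager.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)
profile = pipeline.start(config)

# Align depth to the color stream so every RGB pixel has a depth value.
align = rs.align(rs.stream.color)
try:
    frames = align.process(pipeline.wait_for_frames())
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

    depth_m = np.asanyarray(frames.get_depth_frame().get_data()) * depth_scale  # meters
    color = np.asanyarray(frames.get_color_frame().get_data())                  # BGR image
    infrared = np.asanyarray(frames.get_infrared_frame(1).get_data())           # 8-bit IR
finally:
    pipeline.stop()
```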

Color is the first element people notice in a meal, so FoodPhone uses the RGB camera to identify the colors in the captured image. The 3D stereo pair then generates the data needed to identify the shape, outline and texture of the elements, much as people experience them. The raw 3D image data yields the dimensions and the total volume or portion size of the food on the plate. Using the NIR data in the images captured by the multiple cameras and sensors, FoodPhone’s image-processing (IP) algorithms interpret the chemical composition of the food, much as people taste and smell the food they eat.
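To make the volume step concrete, here is a minimal sketch, assuming the food and the empty plate have already been segmented into masks, of how portion volume can be estimated from the aligned depth image by integrating the food’s height above the plate. The function and mask inputs are illustrative, not FoodPhone’s actual implementation.

```python
import numpy as np

def food_volume_cm3(depth_m, food_mask, plate_mask, fx, fy):
    """Estimate food volume from a depth map (simplified sketch).

    depth_m    : HxW depth image in meters (e.g. from the RealSense depth stream)
    food_mask  : boolean HxW mask of pixels classified as food
    plate_mask : boolean HxW mask of empty-plate pixels around the food
    fx, fy     : focal lengths in pixels, taken from the depth camera intrinsics
    """
    # Approximate the plate as a flat surface at the median depth of the plate pixels.
    plate_depth = np.median(depth_m[plate_mask])

    # Height of the food above the plate at every food pixel (clamped at zero).
    z = depth_m[food_mask]
    height = np.clip(plate_depth - z, 0.0, None)

    # Each pixel covers roughly (z/fx) * (z/fy) square meters at depth z.
    pixel_area = (z / fx) * (z / fy)

    # Integrate height * area over the food region; convert m^3 to cm^3.
    return float(np.sum(height * pixel_area) * 1e6)
```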

An overlay of more than ten images and their raw data is categorized into components of visible light, color, spectral data and 3D information. The optical, spectral and physical information retrieved from these images is used to find the specific, individual characteristics of each morsel. The spectral profiles captured from the images are compared against and classified into the different food types, as each food has a unique spectral fingerprint.

Different spectral profiles serve as unique fingerprints for vegetables and meat
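A common way to exploit such fingerprints is spectral-angle matching: compare the shape of an observed spectrum against stored reference spectra and pick the closest one. The five-band reflectance values below are made up purely for illustration; FoodPhone’s real classifier is trained on millions of images.

```python
import numpy as np

# Hypothetical reference "fingerprints": mean reflectance per NIR band for a few foods.
REFERENCE_SPECTRA = {
    "avocado": np.array([0.42, 0.47, 0.55, 0.61, 0.58]),
    "chicken": np.array([0.35, 0.38, 0.44, 0.52, 0.57]),
    "rice":    np.array([0.55, 0.58, 0.60, 0.62, 0.63]),
}

def classify_spectrum(pixel_spectrum):
    """Match an observed spectrum to the closest reference fingerprint
    using the spectral angle (a smaller angle means a more similar shape)."""
    best_name, best_angle = None, np.inf
    for name, ref in REFERENCE_SPECTRA.items():
        cos = np.dot(pixel_spectrum, ref) / (
            np.linalg.norm(pixel_spectrum) * np.linalg.norm(ref))
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name, best_angle

food, angle = classify_spectrum(np.array([0.41, 0.46, 0.54, 0.60, 0.59]))
print(food, round(float(angle), 3))
```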

“Millions of images are used to train this powerful AI machine,” says Mutti. “To reach accuracy levels beyond 90%, it was a lot of hard work performing tens of thousands of food classifications, confusion tables and other processing steps.” In order to calculate the individual food labels and weights correctly, the color, texture, spectral signature and volume all have to match. Raw image data are first processed by an Intel® Edison, a very small computer-on-module, to identify carbohydrates, proteins, fats and water content. From there, the collected information is sent to the cloud and processed by FoodPhone’s AI-driven database. The smartphone receives the results and displays the nutrition fact label.
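The final step, turning recognized foods and measured volumes into a nutrition label, can be sketched with simple arithmetic: volume times an assumed density gives weight, per-100 g composition values give macronutrient grams, and the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat) give calories. The density and composition numbers below are placeholders, not values from FoodPhone’s database.

```python
# Hypothetical per-100 g composition values (grams), for illustration only;
# the real system derives these from its trained models and food database.
COMPOSITION_PER_100G = {
    "rice":    {"carbs": 28.0, "protein": 2.7,  "fat": 0.3},
    "chicken": {"carbs": 0.0,  "protein": 27.0, "fat": 3.6},
}
DENSITY_G_PER_CM3 = {"rice": 0.9, "chicken": 1.0}  # assumed densities

def nutrition_label(portions_cm3):
    """portions_cm3: dict mapping food name -> measured volume in cm^3."""
    totals = {"weight_g": 0.0, "carbs": 0.0, "protein": 0.0, "fat": 0.0}
    for food, volume in portions_cm3.items():
        grams = volume * DENSITY_G_PER_CM3[food]
        totals["weight_g"] += grams
        for macro, per_100g in COMPOSITION_PER_100G[food].items():
            totals[macro] += per_100g * grams / 100.0
    # Atwater factors: 4 kcal/g for carbs and protein, 9 kcal/g for fat.
    totals["calories"] = 4 * totals["carbs"] + 4 * totals["protein"] + 9 * totals["fat"]
    return totals

print(nutrition_label({"rice": 180.0, "chicken": 120.0}))
```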

 

Additional Technological Benefits

Grocery shopping can be optimized: less time is wasted looking for the freshest and healthiest products, and money is saved in the process. FoodPhone’s technology can also be used to assess a food’s quality and freshness, with consumers receiving a freshness rating in real time, displayed on their smartphone. People with food allergies can check the actual ingredients of their food with a single image capture instead of deciphering “cryptic” ingredient lists on packages. A simple scan of the food provides a more detailed list of the ingredients along with a more precise shelf life. By using information in the NIR spectrum, FoodPhone’s technology helps detect properties such as ripeness and the presence of bacteria, irrespective of the product’s “best before” date. Picking the freshest and ripest avocado is just a click away.

Forward-looking yet already realistic scenarios include FoodPhone’s technology integrated into every home kitchen. The company’s cabinet-mounted device could scan each food item or a meal’s separate ingredients as it is being prepared and instantly give the user a scientific analysis, displaying a Nutrition Fact Label of their daily menu. All of this while FoodPhone’s software automatically updates the user’s weekly grocery shopping list or food delivery list.

Cabinet-mounted FoodPhone device scans a freshly prepared meal

An intelligent IoT-based refrigerator equipped with FoodPhone’s device could track what was purchased and stored in it and what has been consumed, linking directly to shopping and delivery lists and keeping them up to date. A family could easily see when a popular item is running low by having their fridge do the tracking for them.

This vision- and AI-based food check can fight food fraud as well. The FoodPhone technology can help people make better, healthier choices by recognizing added chemicals or undeclared ingredients used to prolong shelf life or to make products more appealing or addictive.

A Top-Notch Example of the Forward-Looking Use of Vision

The innovative FoodPhone technology is a perfect example of how vision-based solutions with a very small footprint can bring cutting-edge new applications to everyday devices such as smartphones and smart home appliances. Merging different types of AI-based imaging, in this case 3D, RGB and NIR, to extract additional information shows how powerful modern vision can be. Off-the-shelf cameras with integrated depth technology and infrared projectors are available at reasonable prices, ready to use and in very small packages. Easy-to-handle, affordable vision technology, like that provided by the FoodPhone engineers, can take applications in industrial and consumer markets to the next level.