Visualize Amazon Lookout for Vision Inference with Amazon QuickSight

Olalekan Elesin
3 min readMar 13, 2021

In my last post, I demonstrated how Amazon Lookout for Vision could help improve the quality of local food processing, using cassava as a case study.

In this post, I will show how we can capture, in real time, anomalies detected by Amazon Lookout for Vision using a Raspberry Pi Zero camera module. Inferences from Amazon Lookout for Vision are then published to an Amazon Kinesis Data Firehose delivery stream with an Amazon S3 destination. Finally, we create an Amazon Athena table over the S3 data and visualize it with Amazon QuickSight. See the high-level architecture below:

[High-level architecture diagram: Raspberry Pi camera → Amazon Lookout for Vision → Kinesis Data Firehose → S3 → Athena → QuickSight]

For production purposes, you may want to leverage AWS IoT Greengrass to manage numerous IoT devices detecting anomalies with Amazon Lookout for Vision. You may also want to upload the pictures taken from the device camera to an Amazon S3 bucket for human-in-the-loop validations and model retraining.

The Work

Hardware Setup

I must admit that this was my first ever interaction with a Raspberry Pi, and I was pleasantly surprised by how well it worked. I bought a Raspberry Pi Zero W starter kit, a 32GB microSD card, and a camera from Amazon, and followed the setup instructions on the Raspberry Pi website.

Once the camera setup was complete, my Raspberry Pi Zero W was operational, so I moved on to setting up access to AWS resources.

AWS Setup

I created an AWS IAM user with CLI access only for my device, attaching only the policies required for the resources the device would access: permission to detect anomalies with Amazon Lookout for Vision and permission to publish to an Amazon Kinesis Data Firehose delivery stream.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lookoutvision:DetectAnomalies"
            ],
            "Resource": "<my-model-arn-here>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Resource": "<my-firehose-deliverystream-arn-here>"
        }
    ]
}

Once this was completed, I created a simple code snippet following the recipes available in the Picamera documentation.
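A minimal capture loop in the spirit of those recipes might look like the sketch below. It assumes the `picamera` library (only available on Raspberry Pi OS, hence the commented-out import) and a hypothetical local image directory:

```python
import time
from datetime import datetime, timezone

# picamera only works on Raspberry Pi OS; uncomment on the device.
# from picamera import PiCamera

IMAGE_DIR = "/home/pi/captures"  # hypothetical local directory


def image_path(now=None):
    """Build a timestamped file path for the next capture."""
    now = now or datetime.now(timezone.utc)
    return f"{IMAGE_DIR}/capture-{now.strftime('%Y%m%dT%H%M%S')}.jpg"


def capture_loop(camera, interval_seconds=10):
    """Capture a still image every interval_seconds, forever."""
    camera.resolution = (1024, 768)
    camera.start_preview()
    time.sleep(2)  # give the sensor time to adjust to light levels
    while True:
        camera.capture(image_path())
        time.sleep(interval_seconds)
```

On the device, you would construct a `PiCamera()` and hand it to `capture_loop`; each captured frame then becomes the input to the anomaly detection step below.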

Detect Anomalies with Amazon Lookout for Vision

The gist below contains a Python script that detects anomalies with Amazon Lookout for Vision based on images captured from the Raspberry Pi camera, then publishes the detections to an Amazon Kinesis Data Firehose delivery stream.

Explore with Amazon Athena, visualize with Amazon QuickSight

With inference information now arriving in our Amazon S3 data lake, we can set up an AWS Glue Crawler to automatically detect the schema of the data and create a table in the AWS Glue Data Catalog, queryable via Amazon Athena. I can run SQL queries via the Amazon Athena console, or using QueryPal, an open-source web and mobile UI for Amazon Athena.

Furthermore, I can create a dashboard in Amazon QuickSight based on the tables in Amazon Athena. And with the newly launched Amazon QuickSight Q, I can interact with the data using natural language queries, e.g. “Show me anomalies detected in plant XYZ in the last 3 weeks” — imagine the possibilities.

Near Future Work

We have put a lot of work into developing this solution, and we strongly believe that no one should have to do this undifferentiated heavy lifting unless they really want to hack it together themselves. Therefore, we are actively working on making this available as a managed service. Kindly drop a comment if you’re interested in a live demo.

Can’t wait to hear what you’ll build with Amazon Lookout for Vision. You can reach me via email, follow me on Twitter or connect with me on LinkedIn.
