Meraki Facemask Detector
Description
This project integrates Meraki MV Cameras with Amazon Rekognition through their APIs to perform deeper image analysis and detect whether a person is wearing a facemask. The results are posted to a Webex Teams space.
Workflow
The workflow is the following:
(Workflow diagram)
Before you start: What you need
- Access to a Meraki Dashboard (and its API key) with an MV Camera and an MV Sense license available.
- An AWS account with an access key and secret key created (instructions on how to generate them).
- A Webex account. You will need it to create a bot and grab its access token. You'll find instructions on how to do that here.
- An MQTT broker reachable by the MV Camera. It can run on your laptop or on a separate server. In my case, I used a Raspberry Pi 3B+ with a Debian image and installed Mosquitto, an open-source MQTT broker.
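Before moving on, it can help to confirm the broker is reachable from the machine that will later run the trigger script. The sketch below is a minimal check using the paho-mqtt client (1.x constructor style; paho-mqtt 2.x additionally requires a callback_api_version argument). The broker address and port are placeholders.

```python
# Minimal MQTT broker reachability check (sketch).
# Assumes paho-mqtt 1.x; replace the placeholders with your broker's details.
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.50"  # placeholder: your broker's IP or hostname
BROKER_PORT = 1883            # default Mosquitto port

client = mqtt.Client()  # paho-mqtt 2.x needs mqtt.CallbackAPIVersion.VERSION1 here
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)  # raises an exception if the broker is unreachable
print(f"Connected to MQTT broker at {BROKER_HOST}:{BROKER_PORT}")
client.disconnect()
```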
Usage
- Clone this repo to your local machine by typing in your terminal:
git clone https://github.com/agmanuelian/Meraki_Facemask_Detector.git
- Install the required dependencies specified in the requirements.txt file:
pip3 install -r requirements.txt
- Set up your MQTT broker and configure it in the Meraki Dashboard:
  - Select your MV Camera.
  - Go to Settings.
  - Select Sense.
  - Enable your MV Sense license.
  - Select "Add or edit MQTT brokers" and configure your broker's parameters.
  - After adding your broker, select it from the dropdown list.
  - Save your changes.
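Once the broker is selected for the camera, you can check that MV Sense telemetry is actually flowing with a short subscriber sketch like the one below. It assumes the paho-mqtt client and the usual MV Sense topic layout (/merakimv/&lt;camera serial&gt;/0 for the full-frame zone); if nothing arrives, compare it against the topic names listed in the Dashboard for your camera.

```python
# Sketch: subscribe to the camera's MV Sense topic and print incoming telemetry.
# Topic layout and payload format are assumptions; check your Dashboard's MQTT settings.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.50"            # placeholder: your broker's IP or hostname
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"        # placeholder: your MV camera serial
TOPIC = f"/merakimv/{CAMERA_SERIAL}/0"  # zone 0 = full frame

def on_message(client, userdata, msg):
    # Each message is a small JSON document with a timestamp and detection counts.
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()  # paho-mqtt 2.x needs mqtt.CallbackAPIVersion.VERSION1 here
client.on_message = on_message
client.connect(BROKER_HOST, 1883, keepalive=60)
client.subscribe(TOPIC)
client.loop_forever()
```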
- Configure your credentials in the lambda_module/main.py file.
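The variable names below are hypothetical; match them to whatever lambda_module/main.py actually expects. The point is simply which credentials the function needs at runtime.

```python
# Hypothetical credentials block for lambda_module/main.py (names are illustrative).
MERAKI_API_KEY = "<your Meraki Dashboard API key>"
CAMERA_SERIAL = "<your MV camera serial>"
WEBEX_BOT_TOKEN = "<your Webex bot access token>"
WEBEX_ROOM_ID = "<the ID of the Webex room the bot was added to>"
AWS_REGION = "us-east-1"  # region where the Lambda function and Rekognition run
```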
- On your AWS account, set up your Lambda function. When it's time to upload your code, zip the lambda_module directory and upload the .zip file.
(Screenshot: Lambda Setup - Step 1)
After you do this, increase the execution timeout to 15 seconds under the Configuration tab.
(Screenshot: Lambda Setup - Step 2)
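For orientation, the sketch below shows one plausible shape for such a handler: request a snapshot from the camera via the Meraki Dashboard API, send the image to Rekognition's protective-equipment (face cover) detection, and post the verdict to Webex. It is not a copy of lambda_module/main.py; the endpoint choices and variable names are assumptions, and the short sleep is part of why the 15-second timeout above matters (the snapshot URL takes a few seconds to become available).

```python
# Hedged sketch of a Lambda handler for this workflow; the repo's actual
# lambda_module/main.py may differ. The requests library must be bundled
# inside the deployment .zip, since the Lambda runtime does not include it.
import time

import boto3
import requests

MERAKI_API_KEY = "<your Meraki Dashboard API key>"
CAMERA_SERIAL = "<your MV camera serial>"
WEBEX_BOT_TOKEN = "<your Webex bot access token>"
WEBEX_ROOM_ID = "<your Webex room ID>"

rekognition = boto3.client("rekognition")

def lambda_handler(event, context):
    # 1. Ask the Meraki Dashboard API for a fresh snapshot from the camera.
    snap = requests.post(
        f"https://api.meraki.com/api/v1/devices/{CAMERA_SERIAL}/camera/generateSnapshot",
        headers={"X-Cisco-Meraki-API-Key": MERAKI_API_KEY},
    ).json()
    time.sleep(5)  # the snapshot URL needs a few seconds before the image is downloadable
    image_bytes = requests.get(snap["url"]).content

    # 2. Ask Rekognition whether the detected people are wearing a face cover.
    result = rekognition.detect_protective_equipment(
        Image={"Bytes": image_bytes},
        SummarizationAttributes={
            "MinConfidence": 80,
            "RequiredEquipmentTypes": ["FACE_COVER"],
        },
    )
    summary = result["Summary"]
    with_mask = len(summary["PersonsWithRequiredEquipment"])
    without_mask = len(summary["PersonsWithoutRequiredEquipment"])

    # 3. Post the verdict to the Webex room.
    text = f"Facemask check: {with_mask} with mask, {without_mask} without mask."
    requests.post(
        "https://webexapis.com/v1/messages",
        headers={"Authorization": f"Bearer {WEBEX_BOT_TOKEN}"},
        json={"roomId": WEBEX_ROOM_ID, "text": text},
    )
    return {"statusCode": 200, "body": text}
```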
- On your AWS account, set up your API Gateway. Once deployed, grab its public invoke URL. You will need it in the next step.
(Screenshot: API Gateway Setup - Step 1)
(Screenshot: API Gateway Setup - Step 2)
- Replace your credentials in the mqtt_trigger.py file. The API Gateway URL you obtained in the previous step should also be added to the script in this step.
- Add your recently created bot to a Webex room. The bot access token and the room ID should already be configured in the lambda_module/main.py file.
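If you want to confirm the bot token and room ID are valid before wiring everything together, a quick test message through the public Webex messages API looks like this (token and room ID are placeholders):

```python
# Sketch: post a test message to the Webex room to verify the bot's access.
import requests

WEBEX_BOT_TOKEN = "<your Webex bot access token>"  # placeholder
WEBEX_ROOM_ID = "<your Webex room ID>"             # placeholder

resp = requests.post(
    "https://webexapis.com/v1/messages",
    headers={"Authorization": f"Bearer {WEBEX_BOT_TOKEN}"},
    json={"roomId": WEBEX_ROOM_ID, "text": "Facemask detector bot is online."},
)
resp.raise_for_status()  # fails if the token is invalid or the bot is not in the room
print("Test message posted.")
```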
- Run the mqtt_trigger.py script. You should see a real-time feed of the detected people count in your terminal. When a person steps in front of the camera, the script triggers the API call, which runs the Lambda function. The results are posted to the Webex room.
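For reference, the core of that trigger logic can be sketched as below. This is an illustration, not the repo's mqtt_trigger.py: the topic format and payload keys ("counts" → "person") are assumptions, and a real script would rate-limit the API calls so a person standing in frame does not fire the Lambda function on every MQTT message.

```python
# Sketch of the MQTT-driven trigger: print the live people count and call the
# API Gateway endpoint when someone is detected. Placeholders throughout.
import json

import requests
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.1.50"            # placeholder: your broker's IP or hostname
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"        # placeholder: your MV camera serial
API_GATEWAY_URL = "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>"  # from the API Gateway step
TOPIC = f"/merakimv/{CAMERA_SERIAL}/0"  # zone 0 = full frame

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    people = payload.get("counts", {}).get("person", 0)  # payload keys are an assumption
    print(f"People detected: {people}")
    if people > 0:
        # Use GET or POST to match how you configured the API Gateway method.
        requests.post(API_GATEWAY_URL)

client = mqtt.Client()  # paho-mqtt 2.x needs mqtt.CallbackAPIVersion.VERSION1 here
client.on_message = on_message
client.connect(BROKER_HOST, 1883, keepalive=60)
client.subscribe(TOPIC)
client.loop_forever()
```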
Output
These are the results of the image analysis, posted to the Webex room.
Links to DevNet Learning Labs
Meraki Learning Lab
Related Sandbox
Meraki Always On Sandbox
Meraki Enterprise Sandbox