In this assignment we build a mobile web application that provides a crowd-sensing extension to our system.

## Repository Structure
The repo is structured into five main files:
- index.html : view for the cloud-based deployment.
- edge.html : view for the edge-based deployment. Also provides a status section showing the state of the user.
- styles.css : CSS for the beautiful yellow web page.
- cloud.js : script for the cloud-based deployment; it extracts the accelerometer values and sends them to ThingsBoard.
- edge.js : script for the edge-based deployment; it extracts the accelerometer values, processes them locally and sends the result to ThingsBoard.
## Requirements

- Device with an accelerometer sensor
- Chrome 63 or later
- Use of GitHub Pages
Unlike in the previous assignments, data are now sent to ThingsBoard using the HTTP protocol, so we have to perform a POST request to a dedicated URL. Note that I have called this URL TOPIC to keep a sort of symmetry with what I did in the previous assignments, so the code has a conceptually identical structure.
The URL also contains the Access Token of the ThingsBoard device: if you want to reproduce this project you have to replace this token with your own.
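As a minimal sketch of this step: ThingsBoard's device HTTP API accepts telemetry as a JSON POST to `/api/v1/<ACCESS_TOKEN>/telemetry`. The host name and the payload keys below are assumptions for illustration, not taken from the project's actual code.

```javascript
// Hypothetical token and host; replace with your own ThingsBoard instance/device.
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";
const TOPIC = `https://thingsboard.cloud/api/v1/${ACCESS_TOKEN}/telemetry`;

// Build the JSON body for one accelerometer sample.
function buildPayload(x, y, z) {
  return JSON.stringify({ x: x, y: y, z: z });
}

// POST one sample to ThingsBoard (fire-and-forget).
function sendSample(x, y, z) {
  return fetch(TOPIC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPayload(x, y, z),
  });
}
```

Keeping the URL in a single constant named TOPIC is what preserves the structural symmetry with the MQTT-style assignments mentioned above.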
Data are sent as soon as they are extracted. The values are sampled at a frequency of 1 Hz: once per second a reading is taken, processed (if needed) and sent.

## Cloud-based Deployment
In this deployment the activity recognition model runs in the cloud. The web page for this part of the project is available here.
In the cloud.js script the accelerometer values are extracted and then sent to ThingsBoard over HTTP.
Data are read using the Generic Sensor API, which collects readings from the accelerometer sensor of the mobile phone. When the script gets a reading it simply puts the values into the HTML page and then sends them to ThingsBoard.
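The reading step can be sketched with the standard Generic Sensor API `Accelerometer` interface (supported in Chrome 63+ over HTTPS). The `onReading` callback is a hypothetical stand-in for whatever the script does with the values (updating the page, POSTing them).

```javascript
// Start sampling the accelerometer at 1 Hz and hand each reading to a callback.
// Assumes a browser context where the Accelerometer constructor exists.
function startAccelerometer(onReading) {
  const sensor = new Accelerometer({ frequency: 1 }); // one reading per second
  sensor.addEventListener("reading", () => {
    onReading(sensor.x, sensor.y, sensor.z);          // values in m/s^2
  });
  sensor.addEventListener("error", (e) => console.error(e.error));
  sensor.start();
  return sensor;
}
```

Note that the API also requires the page to be served over HTTPS, which GitHub Pages provides.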
To perform model recognition in the cloud it is important to work on the ThingsBoard rule chains.
In my implementation I have simply modified the root rule chain by adding a script node called "Interceptor" that is in charge of recognizing whether the data come from the wanted device.
In our case the device that accepts the raw accelerometer data is the one named "Crowd Sensing G".
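The Interceptor's logic amounts to a filter on the originator's name. In ThingsBoard a script filter node receives `msg`, `metadata` and `msgType` and returns a boolean; assuming the device name is exposed as `metadata.deviceName`, it is written here as a plain function so it can be tested outside ThingsBoard (inside the node only the return expression is needed).

```javascript
// Hypothetical standalone version of the Interceptor filter logic.
// true  -> message routed on to "Cloud Computation RC"
// false -> message ignored by that route
function interceptor(msg, metadata, msgType) {
  return metadata.deviceName === "Crowd Sensing G";
}
```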
If the Interceptor script recognizes the device, it forwards the message to a new rule chain named "Cloud Computation RC". Here the black magic happens: the message passes to a new script node named "Model", where it is processed so that a new field is added to the device message: the moving field, which tells us whether the user is moving or not.
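A sketch of what the "Model" node could look like, written as a plain function for testability; in a ThingsBoard transformation node the body would compute `newMsg` and end with `return {msg: newMsg, metadata: metadata, msgType: msgType};`. The threshold value is hypothetical, not the project's actual tuned value.

```javascript
const THRESHOLD = 12; // m/s^2 -- hypothetical value, tuned empirically

// Add a "moving" field (0 or 1) based on the acceleration magnitude.
function model(msg, metadata, msgType) {
  const magnitude = Math.sqrt(msg.x * msg.x + msg.y * msg.y + msg.z * msg.z);
  const newMsg = { ...msg, moving: magnitude > THRESHOLD ? 1 : 0 };
  return { msg: newMsg, metadata: metadata, msgType: msgType };
}
```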
In this way it becomes possible to take this new information and display it on a dashboard.
NB: in the dashboard you can change the real-time display as you prefer.

## Edge-based Deployment
In this deployment the activity recognition model runs locally. The web page for this part of the project is available here.
In the edge.js script the accelerometer values are extracted, processed locally and then sent to ThingsBoard over HTTP.
When the script gets a reading it simply puts the values into the HTML page and then sends the result to ThingsBoard.
Unlike above, here everything happens locally and only the parameter "moving" is sent, with the computed response that can be 0 or 1.
In the web view, as you can see, there is a dedicated area that is blue when the user is steady and becomes green when they start moving.
To recognize the user's activity I calculate the magnitude of the 3D vector of the acceleration data.
The magnitude of a 3D vector can be calculated as:

magnitude = √(x² + y² + z²)
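The edge-side classification can be sketched directly from this formula; the threshold constant below is a hypothetical placeholder for the empirically tuned value the text describes.

```javascript
const MOVING_THRESHOLD = 12; // m/s^2 -- hypothetical value, tuned empirically

// Magnitude of the 3D acceleration vector.
function magnitude(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z);
}

// Local classification: 1 (moving) or 0 (steady).
// In the edge deployment only this value is sent to ThingsBoard.
function isMoving(x, y, z) {
  return magnitude(x, y, z) > MOVING_THRESHOLD ? 1 : 0;
}
```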
In this simple calculation, based on empirical experience, I set a threshold: when the magnitude exceeds that value the user's activity is recognized as moving.

## Youtube Video