Countless sensors exist that can detect when something is moving in a room, but incorporating one of these sensors into an existing product, or adding one to a new design, can be costly in development time and/or bill of materials. For example, if you were trying to detect that someone had fallen down, a camera would be a great choice. You’d then have to determine how to incorporate that camera into your design (e.g., USB, a new board spin, library dependencies, memory allocation, etc.), which could end up costing you a lot, especially if your hypothesis that knowing when someone falls down provides value ultimately proves false. A camera would definitely get the job done, but Wi-Fi may be a cheaper alternative to consider: if you can read the RSSI and/or CSI from your Wi-Fi module, you can likely infer what caused those signals to change.
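To make that idea concrete, here is a minimal sketch (mine, not from the project described below) of the simplest possible Wi-Fi motion detector: watch the variability of RSSI over a sliding window. A still room yields a near-constant RSSI, while a person moving perturbs the channel. The threshold is a made-up value that would need tuning per environment.

```python
import numpy as np

def motion_score(rssi_window):
    """Simple motion score: standard deviation of RSSI (dBm) over a window.
    Movement in the room perturbs the channel and raises the variance."""
    return float(np.std(rssi_window))

def is_moving(rssi_window, threshold=1.5):
    # threshold is hypothetical; tune it for your environment and hardware
    return motion_score(rssi_window) > threshold

# Synthetic example: flat RSSI vs. RSSI disturbed by movement
quiet = [-52, -52, -53, -52, -52, -53, -52, -52]
moving = [-52, -48, -55, -50, -58, -47, -53, -49]
print(is_moving(quiet), is_moving(moving))  # → False True
```

This is far cruder than a trained classifier (it can't tell standing from sitting), but it shows why a change in the radio channel is a usable signal at all.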
I’ll quickly show what I mean using some visuals. Each figure consists of three graphs, from left to right: (1) a time series of RSSI and CSI, (2) a CSI amplitude heatmap across all subcarriers, and (3) a 3D surface plot of the CSI amplitude. In the first figure, no movement is present:
However, in the second figure, movement is present when I stand (green arrow) and sit (blue arrow) at my desk:
The data used for these figures were collected using an ESP32 devkit and a project started by Steven Hernandez (ESP32 CSI Toolkit). Once the CSI data was collected, I used it to train a model using SensiML that identifies when I stand at my desk and when I sit at my desk. Once trained, I deployed the model to the same ESP32 devkit that I used to collect the data and was able to make these inferences in real-time.
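If you’re curious what the raw data looks like: the ESP32 reports each CSI frame as an array of signed bytes, with an [imaginary, real] pair per subcarrier (imaginary part first, per Espressif’s CSI documentation). Here’s a minimal sketch, assuming that byte layout, of converting one frame into the per-subcarrier amplitudes plotted in the figures above:

```python
import math

def csi_amplitudes(raw_bytes):
    """Convert one ESP32 CSI frame (interleaved [imag, real] signed bytes,
    one pair per subcarrier) into per-subcarrier amplitudes.
    Byte order (imaginary first) follows the ESP-IDF CSI documentation."""
    amps = []
    for i in range(0, len(raw_bytes) - 1, 2):
        imag, real = raw_bytes[i], raw_bytes[i + 1]
        amps.append(math.hypot(real, imag))  # sqrt(real^2 + imag^2)
    return amps

# Toy frame with two subcarriers: (imag=3, real=4) and (imag=0, real=5)
print(csi_amplitudes([3, 4, 0, 5]))  # → [5.0, 5.0]
```

Stacking these amplitude vectors over time gives exactly the kind of subcarrier heatmap shown in the figures.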
Since there isn’t an obvious pipeline to choose for this type of activity in SensiML, I selected the Activity Recognition model and used the 6 most independent CSI subcarriers as inputs, mapping them onto the six axes the pipeline expects (X, Y, and Z for both accelerometer and gyroscope):
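The write-up doesn’t spell out how “most independent” was measured, but one plausible way to pick such a set is to greedily choose subcarriers whose amplitude traces are least correlated with one another. A sketch under that assumption (the function and selection rule are mine, not SensiML’s):

```python
import numpy as np

def pick_independent_subcarriers(csi, k=6):
    """Greedily pick k subcarriers whose amplitude traces are least
    correlated with each other. `csi` has shape (samples, subcarriers).
    This is a plausible reconstruction, not the method SensiML uses."""
    corr = np.abs(np.corrcoef(csi.T))  # subcarrier-by-subcarrier correlation
    # seed with the subcarrier least correlated with everything on average
    chosen = [int(np.argmin(corr.mean(axis=0)))]
    while len(chosen) < k:
        remaining = [i for i in range(csi.shape[1]) if i not in chosen]
        # add the candidate whose worst-case correlation with the chosen
        # set is smallest
        best = min(remaining, key=lambda i: corr[i, chosen].max())
        chosen.append(int(best))
    return sorted(chosen)

# Demo with synthetic CSI amplitudes: 200 samples x 12 subcarriers
rng = np.random.default_rng(0)
demo = rng.normal(size=(200, 12))
selected = pick_independent_subcarriers(demo, k=6)
print(selected)
```

The intuition is that highly correlated subcarriers carry redundant information, so feeding the model six weakly correlated ones gives it the most diverse view of the channel for a fixed input width.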
When labeling, I created STANDING, SITTING, and QUIET classes. The sections of the waveform that are not considered STANDING or SITTING are labeled QUIET.
At this point, I followed the workflow provided by SensiML and trained multiple model types and tested each one in real-time.
The goal of this project was to demonstrate (1) that a physical disturbance can be accurately classified using Wi-Fi and (2) how quick and easy it is to collect data, train a model, and deploy it to operate in real-time. In the end, I spent approximately 60 hours collecting data, training models, deploying each one, and testing their performance.
You can read more here: https://embeddedcomputing.com/application/networking-5g/short-range-wireless-pan/environmental-awareness-with-wi-fi-sensing. If you scroll down about half-way, there’s a video of me showing how I deployed it to the ESP32 and made a real-time inference.
I’d love to hear how you feel about incorporating this into your product! Too risky? Sounds promising?