Our AI determines whether a person is standing, walking, or running. It distinguishes between those classes surprisingly well given the small amount of data we collected.
I realize this post is a bit late, but there is a good reason for that: we tried tackling two other problems before this one, and no matter how hard we tried, we could not make them work. This is our third attempt, and it succeeded because we chose a simpler task than our previous ones (handwriting and Spotify ad recognition).
Getting the data for the project was interesting. The standing data was easy: I put the microcontroller in my sock and stood for four minutes. Walking was slightly more difficult and involved carrying a computer in a backpack, connected to the microcontroller with a rather long USB cable. To start and monitor the data collection we used remote desktop, and then we collected the data by walking around our apartment. Running added another level of difficulty: it is hard to run for any significant length of time inside our apartment, so we had to run outside, where the WiFi connection is spotty. That turned out to be the simplest problem we encountered, and we solved it by tethering a phone to the laptop in the backpack. I then ran around the Davis parking lot for 30 seconds.
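To turn raw recordings like these into training examples, a common approach is to slice the sensor stream into fixed-size windows and summarize each window with a few statistics. The sketch below is a minimal, hypothetical version of that idea (the function name, window size, and features are my illustration, not our exact pipeline), assuming the microcontroller logs a single accelerometer magnitude per sample:

```python
import numpy as np

def window_features(samples, window=50):
    """Split a 1-D stream of accelerometer magnitudes into fixed-size
    windows and compute simple summary statistics for each window."""
    n = len(samples) // window
    windows = np.asarray(samples[: n * window]).reshape(n, window)
    # Mean and standard deviation per window: standing should show low
    # variance in magnitude, while walking and running show more.
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

# Synthetic example: a perfectly steady signal, as if standing still.
standing = [1.0] * 200
print(window_features(standing, window=50))
```

The standing windows all come out with a mean of 1.0 and a standard deviation of 0.0, which is exactly the kind of separation a classifier can exploit.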
After we collected the data, we trained several models. I have only tested two so far, but the gradient boosting machine has worked flawlessly.