Adam Tilton

How I Started Working in Wearables

I first heard about “wearables” in 2013 while participating in the National Science Foundation’s I-Corps program for Rithmio. My co-founder (and PhD adviser) Prashant Mehta and I were hoping to commercialize technology for nonlinear estimation developed in his lab at the University of Illinois at Urbana-Champaign. The I-Corps instructors were trying to help us transition from scientists in the lab to entrepreneurs speaking with customers to understand their problems. It just so happened that the InvenSense Motion Tracking Developers Conference was taking place down the street at the Palace Hotel in San Francisco, and that sounded like as good a place as any to find some “customers.”


Prashant and I split up to cover more ground at the conference, and I attended a presentation on wearable sensing and activity recognition from Sam Massih, the then-Director of Wearable Sensing at InvenSense. The presentation was about the wearable sensors market, and included a demo of an IMU with built-in activity-detection algorithms. The presenter was wearing a wrist-worn motion tracker, and while swinging his arm as if holding a tennis racket he said, “See, we can track tennis!” On the screen behind him flashed the word “walking,” then “running,” until it finally figured out he was imitating “tennis.” I left inspired by the problem of on-body sensing, and confident we could build a better solution.


In the rental car driving south to Stanford on the 101, I shared my idea with Prashant. After listening to me ramble for a few minutes, he replied, “Could you just write down a math problem, please?” Eventually, we wrote down an approach: use state estimation to learn models for patterns in data, and joint data-association probabilities to classify known patterns. This became the kernel of our software at Rithmio, which was developed for low-power wearable devices (e.g., Arm Cortex-M4 or M0).
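
To make that concrete, here’s a minimal Python sketch of the idea, and emphatically not Rithmio’s production code (which ran on Cortex-M-class devices): each known pattern is a statistical template whose mean and uncertainty are refined by a Kalman-style update, and a new window of sensor data is either assigned to the template with the highest data-association probability or, if nothing explains it, learned as a new pattern. The noise variance and threshold below are illustrative placeholders.

```python
import numpy as np

class Pattern:
    """One learned motion template: a per-sample mean and uncertainty."""
    def __init__(self, window, meas_var=0.05):
        self.mean = np.asarray(window, dtype=float)  # template estimate
        self.var = np.ones_like(self.mean)           # estimate uncertainty
        self.meas_var = meas_var                     # assumed sensor noise

    def log_likelihood(self, window):
        s = self.var + self.meas_var
        return -0.5 * np.sum((window - self.mean) ** 2 / s + np.log(2 * np.pi * s))

    def update(self, window):
        # Per-sample scalar Kalman update: blend the window into the template.
        k = self.var / (self.var + self.meas_var)
        self.mean += k * (window - self.mean)
        self.var *= 1.0 - k

def classify(window, patterns, new_threshold=-500.0):
    """Assign a window to a known pattern, or learn it as a new one."""
    window = np.asarray(window, dtype=float)
    if patterns:
        ll = np.array([p.log_likelihood(window) for p in patterns])
        probs = np.exp(ll - ll.max())
        probs /= probs.sum()                  # data-association probabilities
        best = int(np.argmax(probs))
        if ll[best] > new_threshold:          # a known pattern explains the data
            patterns[best].update(window)
            return best, probs
    patterns.append(Pattern(window))          # becomes "New Exercise N"
    return len(patterns) - 1, np.ones(1)
```

A real system also has to handle segmentation, orientation, and time alignment, which this sketch ignores.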


In real time, the software would learn new patterns performed by the user, like a bicep curl. On its own, it couldn’t provide a human-readable label, so for any newly learned exercise it would apply an automatic label, like “New Exercise 3.” The user could change the label on their phone, and when enough users gave the same or a similar label to an exercise we would add it to our list of known exercises. That way, when the one hundred and first user did a bicep curl, it would show up automatically as a “Bicep Curl.” Along with classifying the activity, our software could also count reps as they were occurring. It was a continuous count, too, i.e. we could count reps to the decimal place. Our accuracy for classification and rep counting was greater than 98% for many exercises, and we did all of this with less than 500 KB of memory and 50 MIPS. Here’s a cool video we made in 2016 describing the technology.
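
A continuous count falls out naturally if you treat each rep as one cycle of a roughly periodic signal: unwrap the signal’s instantaneous phase and divide by 2π. Here’s a hedged Python sketch of that idea, using a derivative-based phase estimate for illustration rather than our production algorithm:

```python
import numpy as np

def continuous_rep_count(signal, fs):
    """Fractional rep count over time: accumulated phase / (2 * pi)."""
    x = signal - np.mean(signal)                     # remove offset/gravity
    dx = np.gradient(x) * fs                         # crude derivative
    dx /= (np.std(dx) + 1e-9) / (np.std(x) + 1e-9)   # match amplitudes
    phase = np.unwrap(np.arctan2(-dx, x))            # instantaneous phase
    return (phase - phase[0]) / (2 * np.pi)

fs = 50.0                                            # e.g., one 50 Hz IMU axis
t = np.arange(0, 10, 1 / fs)
reps = continuous_rep_count(np.sin(2 * np.pi * 0.5 * t), fs)
print(round(reps[-1], 2))                            # ~5 reps in 10 seconds
```

The payoff of a phase-based count is that the number is meaningful mid-rep: halfway through a bicep curl reads as 0.5, not 0 or 1.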


I set out to start my next company in 2018, and began by consulting. My goal was to become embedded with customers and learn about their problems; this time, I wanted to find the business problem before I started building technology. I was most interested in the intelligent audio market, which I wrote about in my 2018 Intelligent Audio Market Analysis. We started building relationships with large brands and silicon manufacturers, and the pattern of each engagement was the same: they hired us for our machine learning expertise, but needed our data engineering and infrastructure expertise. You can’t build advanced machine learning without quality data and the plumbing to analyze it, so that’s what we started to build. In 2019, we transitioned out of consulting and into a product company called Aktive.


Our product was a full-stack machine learning platform: software-defined sensors for real-time embedded machine learning, plus scalable data infrastructure to support training and federated learning. If you wanted to learn, classify, or analyze what was happening in a sensor data stream and share knowledge between sensors, we provided firmware libraries and APIs to make that easy. We quickly signed a term sheet to raise some funds, and later an LOI to be acquired by one of the customers we were working with. The time we spent analyzing the market and identifying where technology needed to be built was well rewarded, but the experience wasn’t without its own lessons.
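
For a flavor of what “make that easy” meant, here’s a purely hypothetical Python sketch of the shape such an API could take. The names, types, and model identifier below are illustrative inventions, not Aktive’s actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorStream:
    source: str              # e.g., "wrist.imu.accel" (illustrative name)
    rate_hz: int             # sampling rate

class SoftwareDefinedSensor:
    """Host-side handle for a stream with on-device classification."""
    def __init__(self, stream: SensorStream, model: str):
        self.stream = stream
        self.model = model   # identifier of the on-device model
        self._handlers: List[Callable[[str, float], None]] = []

    def on_classification(self, handler: Callable[[str, float], None]) -> None:
        """Register a callback fired with (label, confidence) per event."""
        self._handlers.append(handler)

    def simulate_event(self, label: str, confidence: float) -> None:
        # In firmware, the real-time classifier would drive this.
        for handler in self._handlers:
            handler(label, confidence)

sensor = SoftwareDefinedSensor(SensorStream("wrist.imu.accel", 50), "activity-v2")
sensor.on_classification(lambda label, p: print(f"{label}: {p:.0%}"))
sensor.simulate_event("bicep_curl", 0.98)  # stand-in for an on-device detection
```

The point of the design was that the application developer declares what they want from the stream, while the library hides the firmware and data-infrastructure plumbing underneath.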


I’m now at Nike working in innovation, where I lead technical architecture for Connected Product software. That’s what Nike calls wearables. One of the products we work on is the Nike Adapt, and our team most recently launched the Gesture Unlace feature, which lets you tap your feet together to unlace your shoes.


Outside of Nike, I’m an On Deck Deep Tech fellow and a mentor for Techstars Chicago, and I evaluate start-up ideas as a hobby (as well as share frameworks for what I’ve seen work). My work sits at the intersection of signal processing, machine learning, and software engineering, and now focuses on exploring the technical frontiers of wearables to shape a healthier future. I’m excited by what I’m seeing and believe there’s still much to learn.