PAL
Personalized Active Learner (PAL) is a closed-loop, context-aware, user-centric wearable device that leverages Artificial Intelligence and Biotechnology for Memory Augmentation, Language Learning, and Behavior Change.

PAL aims to enable people to design their lives (i.e. inspire behavior change) to optimize their cognitive, physical, and emotional well-being. People's actions deeply influence their internal and external bodily states, which in turn reinforce those actions. PAL aims to make people aware of the correlations between their activities and internal states, so that behavioral awareness can drive intrinsically motivated behavior change.
I was in charge of the Artificial Intelligence aspect of the PAL project. Working on wearable computing meant adhering to a unique set of constraints: limited compute, limited data, and limited memory.
  • To overcome the compute constraint, I used USB neural accelerators to run the machine learning and designed my neural network models to be smaller and faster. This achieved real-time, offline, on-device face recognition on the wearable system.
  • I devised a one-shot machine learning technique that onboards a new face from a single picture, without retraining the whole model each time; onboarding takes about 2 seconds on an offline, low-power embedded device (see the sketch after this list).
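
In outline, one-shot onboarding of this kind works by storing a single face embedding per person and matching new faces by nearest-neighbor similarity, so no retraining is ever needed. The sketch below illustrates the idea; `embed_face` is a hypothetical stand-in for the on-device embedding network (which would be served by the USB accelerator), not the actual PAL code.

```python
from typing import Dict, Optional

import numpy as np

known_faces: Dict[str, np.ndarray] = {}  # name -> unit-norm face embedding

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stub: run a face crop through the embedding network
    (on-device, this call would run on the USB neural accelerator)."""
    raise NotImplementedError

def onboard(name: str, image: np.ndarray) -> None:
    # One-shot enrollment: store a single normalized embedding; no retraining.
    vec = embed_face(image)
    known_faces[name] = vec / np.linalg.norm(vec)

def recognize(image: np.ndarray, threshold: float = 0.6) -> Optional[str]:
    # Match by cosine similarity against every enrolled embedding.
    vec = embed_face(image)
    vec = vec / np.linalg.norm(vec)
    best_name, best_sim = None, threshold
    for name, ref in known_faces.items():
        sim = float(ref @ vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name  # None if nobody clears the similarity threshold
```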
This allows the wearable to provide real-time memory assistance (Memory Augmentation): personalized face recognition recognizes and reminds users of the people in their social circle.
I was also in charge of the software pipeline in the PAL project.
I built a modular wearable architecture called PiWear that allows rapid prototyping of wearable experiments involving data collection, processing, and machine learning. I designed it to be highly modular so that researchers after me can easily add or remove modules to suit their experimental setting; a sketch of what such a module interface implies follows below. PiWear is currently used in the PAL project for Memory Augmentation, Language Learning, and Activity Recognition. I also led the software team through an iterative cycle of development, deployment, and feedback, which allowed us to demo fully functional prototypes of Memory Augmentation and Language Learning to Bose and during Members' Week.
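
As an illustration only (the interface below is an assumption for this write-up, not the actual PiWear API), a modular pipeline of this kind can be as simple as a list of pluggable stages that each experiment composes differently:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class Module(ABC):
    """One pluggable stage, e.g. camera capture, preprocessing, or inference."""
    @abstractmethod
    def process(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        ...

class Pipeline:
    """Runs each frame through whatever stages a given experiment needs."""
    def __init__(self, modules: List[Module]):
        self.modules = modules  # researchers add or remove stages here

    def run_once(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        for module in self.modules:
            frame = module.process(frame)
        return frame
```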
I also designed fast, offline object recognition for the wearable device, achieving a frame rate of 9 fps. Real-time object detection enables contextual Language Learning in the real world, so users can learn new languages in a mobile, contextual setting.
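
The core loop is conceptually simple: detect what the user is looking at, then surface the word for it in the target language. The sketch below illustrates that loop; `detect_objects` and `say` are hypothetical stand-ins for the on-device detector and the wearable's audio output, not the actual PAL API.

```python
from typing import List, Tuple

import numpy as np

SPANISH = {"cup": "la taza", "book": "el libro", "chair": "la silla"}

def detect_objects(frame: np.ndarray) -> List[Tuple[str, float]]:
    """Hypothetical stub: returns (label, confidence) pairs for one frame."""
    return []

def say(text: str) -> None:
    """Hypothetical stub: prompt the user through the wearable's audio."""
    print(text)

def teach_from_frame(frame: np.ndarray) -> None:
    # Map each confidently detected object to its target-language word.
    for label, confidence in detect_objects(frame):
        if confidence > 0.5 and label in SPANISH:
            say(f"{label} -> {SPANISH[label]}")
```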

I am also working on a novel Activity Recognition classifier that works similarly to how humans classify their own activities. The wearable can use activity recognition to track a person's daily activities in order to frame effective interventions, or to record attributes like sleep, productivity, and exercise time, gaining better insight into the user's mental and physical state and thereby advancing general well-being and increasing self-awareness.
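
The novel classifier is still in progress, but as a baseline illustration of on-device activity recognition (not the human-like method described above), windowed statistics over accelerometer data fed to a lightweight classifier is a common starting point:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 100  # samples per window, e.g. 2 s of 50 Hz accelerometer data

def featurize(window: np.ndarray) -> np.ndarray:
    # window: (WINDOW, 3) array of x/y/z acceleration.
    # Simple per-axis statistics are cheap enough for an embedded device.
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def train_classifier(windows, labels):
    # windows: list of (WINDOW, 3) arrays; labels: one activity name each.
    X = np.stack([featurize(w) for w in windows])
    return RandomForestClassifier(n_estimators=50).fit(X, labels)
```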