Exploring Machine Teaching with Children
IAM Lab @ UMD, Kidsteam @ UMD
January 2018 - Current
We explored how children use machine teaching interfaces with a team of 14 children (aged 7-13 years) and adult co-designers. Using Cooperative Inquiry, we observed children's behaviors as they used Google Teachable Machine and a custom augmented reality iOS application to train machine learning models. Children trained image classifiers and tested each other's models for robustness. Our study illuminates how children reason about ML concepts, offering insights for designing machine teaching experiences for children.
Publications: VLHCC '21
Accessibility Datasets and Sharing Practices
IAM Lab @ UMD
April 2018 - Current
We conducted a systematic review of 137 accessibility datasets sourced from people with disabilities and older adults. We uncovered patterns in data collection purpose, terminology, sample size, data types, and data sharing practices across communities of focus. Metadata about these datasets was made publicly available through a full-stack website that my collaborators and I designed and developed.
Eye-tracking and Edtech research
IBM Research India
June 2015 - August 2017
As part of the Mobile HCI Research team at IBM Research India, I worked on creating and evaluating technologies that used eye-tracking data and text mining techniques. Quantitative evaluations of these technologies led to conference publications. The major client I developed use cases for was Sesame Street, for which I built iOS augmented reality applications and backend technology for vocabulary learning. These projects also yielded 5 patents, one filed in the US, EU, and Japan, and the rest filed in the US.