I am a Computer Engineering MS student at Arizona State University, with an emphasis in multimedia signal processing and deep learning.
My main areas of interest and experience are speech signal processing, natural language processing, automotive signal processing, robotics, and affective computing.
I want to further human-computer interaction through advancements in computational linguistics, artificial intelligence, and signal processing. I want to use my background in signal processing and machine learning to create software that can naturally relate to humans, bridge cultural and linguistic barriers, and create a better future.
I currently work part-time for Aural Analytics as a Speech Research Engineer, developing ASR- and DSP-based methods for analyzing speech to track, and potentially detect, early-stage degenerative brain disease.
I am supported as a research assistant jointly funded by the Center for Cognitive Ubiquitous Computing and the Brain Behavior Analytics Lab at ASU, where I am pursuing my master's thesis research on biologically inspired deep speech processing.
I assist members of the NSF Center for Efficient Vehicles and Sustainable Transportation Systems at ASU in developing synchronous multisensor automotive data-collection platforms and processing LiDAR data for use in neural networks, continuing the work I did with them for my EE senior design project.
Previously I have worked in The Luminosity Lab, an ASU strategic initiative that rapidly builds student-led teams to attempt ambitious solutions to big problems, where I helped produce projects ranging from drone swarm communication software to chat agents for online learning platforms.
Additionally, I spent a summer at General Dynamics Mission Systems running software-level testing for the HOOK3 combat survival radio, worked in the Social Robotics Lab at the National University of Singapore developing an emotive dialogue management system, and tutored in the ASU Engineering Tutoring Center.