Recent Updates

[July 2024] Our work on higher-order equivariant NNs for charge density prediction was published in npj Computational Materials!

[February 2024] Presented a workshop paper at AAAI on Designing Retrieval Augmented Language Models for Clinical Decision Support

[November 2023] Co-taught a tutorial on Cross-Modality Generative AI at RAAINS 2023

[March 2023] Attended IEEE AeroConf, and gave a talk about CHASER

[November 2022] I spoke at Harvard Medical School’s Clinical Informatics Lecture Series on RadTex, a transfer learning primer, and the current challenges of limited labeled data in medical AI

[September 2022] RadTex: Learning Efficient Radiograph Representations from Text Reports won the Best Paper Award at the MICCAI REMIA Workshop in Singapore!

About

I am an AI researcher and ML research engineer in the Artificial Intelligence Technology group at MIT Lincoln Laboratory. My research focuses on the effective use of multi-modal signals in deep learning and on applications of AI for healthcare. I have over 5 years of experience developing machine learning algorithms in Python, with deep technical experience in NLP, computer vision, and graph modeling. I’m also currently a program lead at Lincoln, leading research on RAG and knowledge bases for clinical decision support in austere environments.

Prior to joining the AI Technology group, I gained experience with many of the most advanced sensor systems available. I have hardware and software experience in optical, infrared, hyperspectral, stereoscopic, radar, LiDAR, and acoustic sensors, and my past experience in these domains drives my current interest in multimodal AI. I’m particularly excited about opportunities to connect LLMs with other modalities and information sources to access richer semantic inputs.

Beyond developing novel deep learning methods, I recognize the importance of working alongside domain experts and everyone who will engage with AI tools, so that we can build greater trust in and understanding of how algorithms arrive at their decisions. This means not only improving AI literacy through outreach and education, but also building explainable AI (XAI) systems. I believe that interpretable models can build more trust in AI systems and empower those systems to make a greater impact on society.

I care about creating AI for positive social impact, whether that means advancing medicine, combating climate change, or fostering community.

LinkedIn GitHub Twitter Scholar