
I am a fourth-year PhD student in the Robotic Musicianship Lab at Georgia Tech’s Center for Music Technology, where I conduct research under the advisement of Dr. Gil Weinberg. My musical foundation is classical trombone, which I began studying in fourth grade and continued through various orchestras.

I have loved building things since probably second grade. Both my parents were programmers, so my act of rebellion was becoming a hardware person. Growing up I participated in Lego robotics competitions, and in high school I moved up to bigger “metal” robot competitions. High school is where I learned the technical details of robotics, which cemented my excitement for robots in particular.

In eighth grade I moved to a music school that introduced me to the technical and scientific aspects of music, including music perception. There was far more engineering behind music than I had realized. This exposure sparked my interest in combining music with engineering and technology, and I have since diversified my musical skills, learning electronic music production, bass guitar, and jazz trombone to enable improvisation with robotic musicians.


I earned my undergraduate degree in Mechanical Engineering with a minor in Music Technology from the Rochester Institute of Technology. During my undergraduate studies, I worked on a project that developed robots to help sailors with disabilities steer boats, and I worked with the campus Tech Crew (TC4L!), where I designed and implemented sound and lighting systems for various campus events. I also ended up working at Yamaha in Japan on loudspeaker design, where I used FINECone FEA to simulate existing speakers and then to design a new speaker driver for Yamaha. I found out about Gil’s lab during my sophomore year of college, and from that moment I knew I had to go there.

My doctoral research focuses on using music to improve safety and fluency in human-robot collaboration. Unlike traditional robotic sonification, which relies on repetitive, attention-demanding beeps, my approach generates continuous musical content that can communicate danger and urgency levels without causing listener fatigue. My methodology combines machine learning techniques with traditional music theory principles to create musical systems that facilitate human-robot synchronization.

As a mechanical engineer in the Robotic Musicianship Lab, I have made many design and manufacturing contributions to the lab’s robotic systems, including Medusai, the lab’s guitar robot, and Shimon, the lab’s marimba-playing robot. I have also worked on Shimi, developing accessibility applications for deaf and hard-of-hearing users and transforming the robot into a tool for experiencing music through visual and haptic feedback.

Outside of the lab, I love going on bike rides, hiking, sailing, and playing with my niece and nephew! I am also practicing new music with friends and designing toys on my 3D printer!