Neurosymbolic AI or: How I Learned to Stop Worrying and Love the Large Language Model
Dr. Lara J. Martin, University of Pennsylvania
12:00–1:00 pm, Monday, Feb. 27, 2023
ITE 459, UMBC
The large language model ChatGPT has shown extraordinary writing abilities. While impressive at first glance, large language models are not perfect and often make mistakes that humans would not make. The core architecture behind ChatGPT differs little from that of early neural networks and, as a consequence, carries some of the same limitations. My work combines neural networks like ChatGPT with symbolic methods from early AI, exploring how these two families of methods can be brought together to create more robust AI. I will talk about some of the neurosymbolic methods I have used for applications in story generation and understanding, with the goal of eventually creating AI that can play Dungeons & Dragons. I will also discuss pain points I have found in accessible communication and show how large language models can supplement such communication.
Dr. Lara J. Martin is a 2020 Computing Innovation Fellow (CIFellow) and postdoctoral researcher at the University of Pennsylvania working with Dr. Chris Callison-Burch. In 2020, they earned a PhD in Human-Centered Computing from the Georgia Institute of Technology, working with Dr. Mark Riedl. They also hold an MS in Language Technologies from Carnegie Mellon University and a BS in Computer Science & Linguistics from Rutgers University–New Brunswick. Dr. Martin's work resides in the field of Human-Centered Artificial Intelligence with a focus on natural language applications. They have worked in the areas of automated story generation, speech processing, and affective computing, publishing in top-tier conferences such as AAAI, EMNLP, and IJCAI. They have also been featured in Wired and BBC Science Focus magazine.