Troubleshooting and Resolving Common LLM Issues In Production
According to a recent survey, 61.7% of enterprise engineering teams either have a generative AI application or plan to deploy one within a year, and 14.1% already have one in production. As enterprises race to bring generative AI into their businesses, ensuring that LLMs are deployed reliably and responsibly is paramount. But how can enterprises and AI engineers evaluate and troubleshoot models in real time? In this session, Greg Chase, a data scientist and machine learning engineer at Arize AI, covers emerging best practices drawn from directly advising enterprises on real-world issues. Whether teams are building LLM apps or using LLMs as an additional tool for human-in-the-loop evaluations, this session will help them mitigate the inevitable problems that arise, such as inaccurate responses and hallucinations.
Greg Chase is a data scientist and machine learning engineer who works directly with clients of Arize AI to help them deliver more successful AI in production. Previously, Greg was a Senior Data Scientist at Visa and built machine learning infrastructure at several startups. He lives in Denver, Colorado.