Understanding the LLM Development Cycle: Building, Training, and Finetuning with Sebastian Raschka
This talk will guide you through the key stages of developing large language models (LLMs), from initial coding to deployment. We will start by explaining how these models are built, including how their architectures are implemented in code. Next, we will walk through pre-training and finetuning, showing what each stage involves and why it matters. Throughout the talk, we will provide real examples and encourage questions, making this a practical and interactive session for anyone interested in how LLMs are created and used.
Sebastian Raschka Bio
Sebastian Raschka, PhD, has been working on machine learning and AI for more than a decade. Sebastian joined Lightning AI in 2022, where he is a Staff Research Engineer focusing on AI and LLM research and development. Prior to that, Sebastian worked at the University of Wisconsin-Madison as an assistant professor in the Department of Statistics, focusing on deep learning and machine learning research. He has a strong passion for education and is best known for his bestselling books on machine learning using open-source software.