Workshop «Exploring instruction fine-tuning techniques and QLoRA»
Topic and target audience:
This workshop covers the theory and some practical examples of Supervised Fine-tuning with QLoRA. The workshop requires an intermediate level of knowledge of Generative AI and an interest in applying fine-tuning techniques to large language models. Knowledge of fine-tuning Transformer models with HuggingFace is beneficial.
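To give a flavour of the kind of code the workshop works with, below is a minimal illustrative sketch (not the workshop's exact material) of a QLoRA setup using the HuggingFace Transformers, PEFT, and bitsandbytes libraries: the base model is loaded in 4-bit NF4 quantization and only small low-rank adapters are trained. The model name and hyperparameters are placeholders chosen for illustration.

    # Illustrative QLoRA setup sketch; model name and hyperparameters are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_name = "mistralai/Mistral-7B-v0.1"  # placeholder; any causal LM works

    # 4-bit NF4 quantization of the frozen base model -- the "Q" in QLoRA
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
    model = prepare_model_for_kbit_training(model)

    # LoRA adapters are the only trainable parameters
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # shows how few parameters are actually trained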
Who’s leading the workshop:
Bilyana Taneva-Popova:
Bilyana is a Senior Applied Scientist at Thomson Reuters Labs in Zug, Switzerland. She received her PhD from the Max-Planck Institute for Informatics in Saarbrücken, Germany. Her main research interests are in NLP, Deep Learning, and Data-centric AI. At Thomson Reuters, she has been working on a variety of internal and external products. Her prior positions include two start-up companies, Telepathy Labs (developing conversational agents) and AVA women (developing health-related products), as well as positions at Nokia Bell Labs in Dublin and at the Grenoble Informatics Laboratory in France.
Timing:
1.5 hours
Prerequisites and what to bring:
- All participants need to bring a laptop
- No special software is required on the laptop
- Participants should be familiar with Python and Jupyter Notebooks
- A Kaggle account is required for GPU usage
- Basic familiarity with the Transformer architecture is required