This tutorial focuses on the principles and practice of fine-tuning Large Language Models (LLMs) on downstream tasks. The session consists of two parts: a 50-minute theoretical introduction followed by a 50-minute hands-on workshop. In the theoretical part, we will review the fundamental structure of LLMs, examine the challenges of adapting these models to new domains and the computational resources this requires, and introduce techniques that make fine-tuning more accessible and resource-efficient. The practical part will guide participants through fine-tuning LLMs on a range of textual tasks, with an emphasis on understanding the parameters and methods that lead to effective training, such as LoRA adapters and quantization. By the end of the tutorial, attendees will be ready to efficiently fine-tune LLMs for their own research or application needs.
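
To give a flavor of the hands-on part, the sketch below shows how LoRA adapters and quantization typically fit together in a fine-tuning setup. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name and hyperparameters are illustrative, not prescribed by the tutorial.

```python
# Minimal sketch: attaching LoRA adapters to a 4-bit quantized base model.
# Checkpoint name and hyperparameter values are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit precision to reduce GPU memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters: only these small low-rank matrices are trained,
# while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling factor for adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The resulting model can be passed to a standard training loop or trainer; the key point the workshop explores is how choices like the adapter rank and quantization precision trade off memory, speed, and task performance.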