
LTU-Machine-Learning/WASP_summer_school


WASP thematic summer school 2025 on "Rethinking and Rescaling LLMs"

This project provides examples of knowledge distillation (a rationale-based method in which the student learns from the teacher's step-by-step reasoning, not just its labels) and of parameter-efficient fine-tuning with QLoRA.
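The rationale-based idea can be summarized as a multi-task loss: the student is trained to predict both the teacher's label and the teacher's rationale tokens. The sketch below is not the repository's code; it is a minimal, self-contained illustration with made-up numbers, using plain cross-entropy and an assumed weighting factor `lam`.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target token under the model's distribution."""
    return -math.log(probs[target_idx])

def step_by_step_kd_loss(label_probs, label_target,
                         rationale_probs, rationale_targets,
                         lam=1.0):
    """Rationale-based distillation loss sketch:
    label loss + lam * average loss over the rationale tokens."""
    label_loss = cross_entropy(label_probs, label_target)
    rationale_loss = sum(
        cross_entropy(p, t) for p, t in zip(rationale_probs, rationale_targets)
    ) / len(rationale_targets)
    return label_loss + lam * rationale_loss

# Toy example over a 3-token vocabulary (all values invented for illustration).
label_probs = [0.7, 0.2, 0.1]                        # student's label distribution
rationale_probs = [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]  # per rationale token
loss = step_by_step_kd_loss(label_probs, 0, rationale_probs, [0, 1])
print(round(loss, 4))  # → 0.7237
```

Because the rationale term supervises every reasoning token, it gives the student a much denser training signal than the label alone.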

The KD examples are based on work by Anaumghori and Google Research; the QLoRA example is based on Schaeffer23.
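The core of QLoRA is to keep the base weights frozen in a quantized format and train only a small low-rank update: y = dequant(Wq)·x + (alpha/r)·B·A·x. The following sketch imitates that structure in plain Python; the crude absmax quantizer stands in for the real NF4 scheme, and all matrices and names are invented for illustration.

```python
def quantize(w, levels=16):
    """Crude absmax quantization to `levels` integer levels (stand-in for NF4)."""
    scale = max(abs(v) for row in w for v in row) / (levels // 2)
    return [[round(v / scale) for v in row] for row in w], scale

def dequantize(wq, scale):
    """Recover approximate floating-point weights from the quantized matrix."""
    return [[v * scale for v in row] for row in wq]

def matvec(m, x):
    return [sum(a * b for a, b in zip(row, x)) for row in m]

def lora_forward(wq, scale, A, B, x, alpha=1.0, r=1):
    """Frozen quantized base plus trainable low-rank adapters:
    y = dequant(Wq) x + (alpha / r) * B (A x)."""
    base = matvec(dequantize(wq, scale), x)
    low_rank = matvec(B, matvec(A, x))
    return [b + (alpha / r) * l for b, l in zip(base, low_rank)]

# Toy usage: a 2x2 frozen weight with a rank-1 adapter (B is 2x1, A is 1x2).
W = [[0.5, -0.25], [0.25, 1.0]]
Wq, s = quantize(W)
A = [[0.1, 0.1]]
B = [[1.0], [-1.0]]
y = lora_forward(Wq, s, A, B, [1.0, 2.0])
print(y)
```

Only A and B would be updated during fine-tuning, so the trainable parameter count scales with the rank r rather than with the full weight matrix.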

About

WASP Summer School 2025 coding examples
