diff --git a/our-initiatives/tutorials/2024-2025/_category_.json b/our-initiatives/tutorials/2024-2025/_category_.json index 49fc1f2..3438085 100644 --- a/our-initiatives/tutorials/2024-2025/_category_.json +++ b/our-initiatives/tutorials/2024-2025/_category_.json @@ -1,6 +1,6 @@ { "label": "2024-2025", - "position": 1, + "position": 2, "link": { "type": "doc", "id": "tutorials/2024-2025/index" diff --git a/our-initiatives/tutorials/2024-2025/index.mdx b/our-initiatives/tutorials/2024-2025/index.mdx index c81fb61..f750c4a 100644 --- a/our-initiatives/tutorials/2024-2025/index.mdx +++ b/our-initiatives/tutorials/2024-2025/index.mdx @@ -19,6 +19,7 @@ This academic year, the tutorial series is being delivered by the following peop - [Paul Chaminieu](#) (ML Officer) - [Anna-Maria](#) (ML Officer) - [Franciszek Nowak](#) (ML Officer - Visual Computing I) +- [James Ray](#) (ML Officer - Generative Visual Computing) ## DOXA Challenges diff --git a/our-initiatives/tutorials/2024-2025/intro_to_transformers.md b/our-initiatives/tutorials/2024-2025/intro_to_transformers.md new file mode 100644 index 0000000..93a787a --- /dev/null +++ b/our-initiatives/tutorials/2024-2025/intro_to_transformers.md @@ -0,0 +1,14 @@ +--- +sidebar_position: 11 +--- + +# 9: Introduction to Transformers + +**Date: 11th December 2024** + +💡 **Transformers** were initially introduced for the purpose of **machine translation**, but are now the most prevalent state-of-the-art architecture used for virtually all deep learning tasks. Unlike traditional neural networks, Transformers rely on a mechanism called **attention**, which allows them to focus on relevant parts of the input sequence. Unlike RNNs, this architecture processes sequential input data in parallel. + +Central to this model are the **encoder-decoder blocks**, where input data undergoes **tokenization** and is embedded into vectors with **positional encodings** to capture word order.
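As a taster of the attention mechanism described above, here is a minimal scaled dot-product self-attention sketch in NumPy (the shapes and random inputs are illustrative only, not taken from the tutorial):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return one context vector per query as an attention-weighted sum of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights

# toy sequence of 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))   # self-attention: Q, K, V come from the same input
output, weights = scaled_dot_product_attention(Q, K, V)
print(output.shape)          # (3, 4) - one context vector per token
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

Multi-headed attention simply runs several of these attention computations in parallel with different learned projections and concatenates the results.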
This week, we will explore the **attention mechanism**, including **multi-headed attention**, the structure of **encoder and decoder blocks**, and the processes involved in **training Transformers**, such as **tokenization, masking strategies**, and managing **computational costs**. +💡 + +You can access our **slides** here: 💻 [**Tutorial 9 Slides**](https://www.canva.com/design/DAGYOwRh8u8/xn2OqkUHgTGClSoYOhSxYQ/view?utm_content=DAGYOwRh8u8&utm_campaign=designshare&utm_medium=link2&utm_source=uniquelinks&utlId=ha097b37913) diff --git a/our-initiatives/tutorials/2024-2025/neural-networks.md b/our-initiatives/tutorials/2024-2025/neural-networks.md index 368b7ee..78e80f7 100644 --- a/our-initiatives/tutorials/2024-2025/neural-networks.md +++ b/our-initiatives/tutorials/2024-2025/neural-networks.md @@ -2,7 +2,7 @@ sidebar_position: 7 --- -# 4: Neural Networks +# 5: Neural Networks **Date: 13th November 2024** diff --git a/our-initiatives/tutorials/2024-2025/rnns.md b/our-initiatives/tutorials/2024-2025/rnns.md new file mode 100644 index 0000000..62ce21f --- /dev/null +++ b/our-initiatives/tutorials/2024-2025/rnns.md @@ -0,0 +1,14 @@ +--- +sidebar_position: 10 +--- + +# 8: Recurrent Neural Networks + +**Date: 4th December 2024** + +💡 **Recurrent Neural Networks (RNNs)** are a class of models designed to handle sequential data, such as **time series** or **language**, by using **feedback loops** to maintain **context** over time. This week, we will explore the fundamentals of RNNs, the challenges of training them—especially backpropagation through time—and the introduction of variants like **Long Short-Term Memory (LSTM)** networks that better capture **long-term dependencies**. 
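The feedback loop described above can be sketched as a vanilla RNN cell in NumPy (the layer sizes and random inputs here are illustrative, not from the tutorial notebook):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One step of a vanilla RNN: the new hidden state mixes the current input with the previous state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

rng = np.random.default_rng(1)
hidden, features, steps = 8, 3, 5
W_xh = rng.normal(scale=0.1, size=(hidden, features))
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))
b = np.zeros(hidden)

h = np.zeros(hidden)                            # initial context is empty
for x_t in rng.normal(size=(steps, features)):  # the sequence is processed one step at a time
    h = rnn_step(x_t, h, W_xh, W_hh, b)         # feedback loop: h carries context forward

print(h.shape)  # (8,) - final hidden state summarising the whole sequence
```

Backpropagation through time unrolls this loop across all steps, which is what makes long sequences hard to train and motivates LSTMs.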
We will briefly contrast these approaches with **transformers**, which have largely replaced RNNs and LSTMs in state-of-the-art applications by using self-attention mechanisms to model sequence elements in parallel, ultimately offering a broader perspective on modern sequence modeling techniques.💡 + +You can access our **demonstration notebook** here: 📘 [**Tutorial 8 Notebook**](https://github.com/UCLAIS/ml-tutorials-season-5/blob/main/week-8/rnn.ipynb) + +You can access our **slides** here: 💻 [**Tutorial 8 Slides**](https://www.canva.com/design/DAGSEPaNv_I/RpD2FqJCqnRyZxwa_cvsGQ/view?utm_content=DAGSEPaNv_I&utm_campaign=designshare&utm_medium=link2&utm_source=uniquelinks&utlId=h053c9bd49f) + diff --git a/our-initiatives/tutorials/2024-2025/visual-computing-1.md b/our-initiatives/tutorials/2024-2025/visual-computing-1.md index 9ebab5a..96ef40a 100644 --- a/our-initiatives/tutorials/2024-2025/visual-computing-1.md +++ b/our-initiatives/tutorials/2024-2025/visual-computing-1.md @@ -2,7 +2,7 @@ sidebar_position: 8 --- -# 5: Visual Computing I +# 6: Visual Computing I **Date: 20th November 2024** diff --git a/our-initiatives/tutorials/2024-2025/visual-computing-2.md b/our-initiatives/tutorials/2024-2025/visual-computing-2.md new file mode 100644 index 0000000..d6ed665 --- /dev/null +++ b/our-initiatives/tutorials/2024-2025/visual-computing-2.md @@ -0,0 +1,27 @@ +--- +sidebar_position: 9 +--- + +# 7: Generative Visual Computing + +**Date: 27th November 2024** + +💡 This week, we'll dive into the exciting world of generative models for computer vision! We'll explore how to create models that can learn the intrinsic features of a dataset and generate new images. We'll focus on building an **auto-encoder**, a powerful tool for capturing the essence of visual data in a compressed **latent space**. + +A latent space is a lower-dimensional representation that encodes the most important features of the data.
By learning this compact representation, generative models can create new images that resemble the original dataset. We'll introduce you to various state-of-the-art models used in industry and research, such as **Variational Auto-Encoders** (VAEs), **Generative Adversarial Networks** (GANs) and **Diffusion Models**!💡 + +Also, a reminder that we are running a [**DOXA challenge**](https://doxaai.com/competition/cifar-10) for which you can win prizes! + +- 1st place will get a Mystery prize 🍫 + AI Society Shirt 👕 + Pen 🖊️ +- 2nd place will receive an AI Society Shirt 👕 + Pen 🖊️ (**NOTE**: 1st and 2nd place must achieve a score greater than 0.8074) +- The remaining participants will receive UCL AI Society Pens, as long as they achieve a score above Jeremy's (0.6077) + +The deadline for submission is Wednesday, December 4th, 5:59 PM, which is right before our next session on RNNs. + +For more information on how to improve your score in the challenge, see the last 10 slides and watch our tutorial recording below.
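The compress-then-reconstruct idea behind the auto-encoder above can be sketched as a single untrained forward pass in NumPy (the layer sizes, weights and input are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(784)  # a flattened 28x28 "image"

# encoder: compress 784 pixels into a 32-dimensional latent vector
W_enc = rng.normal(scale=0.1, size=(32, 784))
z = np.tanh(W_enc @ x)  # the latent representation

# decoder: reconstruct the image from the latent vector
W_dec = rng.normal(scale=0.1, size=(784, 32))
x_hat = W_dec @ z

print(z.shape)      # (32,)  - the compressed latent space
print(x_hat.shape)  # (784,) - reconstruction of the input

# training would adjust W_enc and W_dec to minimise the reconstruction error:
loss = np.mean((x - x_hat) ** 2)
```

A VAE extends this by making the latent vector a sample from a learned distribution, which is what lets you generate new images by decoding random latent points.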
+ +You can access our **slides** here: 💻 [**Tutorial 7 Slides**](https://www.canva.com/design/DAGSEDAKiHs/jRkDsMJRc65jzSe0KgmbYg/view?utm_content=DAGSEDAKiHs&utm_campaign=designshare&utm_medium=link&utm_source=editor) + +The **recording** from this session is available here: 🎤 [**Tutorial 7 Recording**](https://youtu.be/5ceoctSndC0) + +We did not go through the **demonstration notebook** during the session, but you can access it here: 📘 [**Tutorial 7 Notebook**](https://github.com/UCLAIS/ml-tutorials-season-5/blob/main/week-7/VAE_andAE.ipynb) \ No newline at end of file diff --git a/our-initiatives/tutorials/_category_.json b/our-initiatives/tutorials/_category_.json index 95200eb..29375bb 100644 --- a/our-initiatives/tutorials/_category_.json +++ b/our-initiatives/tutorials/_category_.json @@ -1,6 +1,6 @@ { "label": "💻 ML Tutorial Series", - "position": 2, + "position": 1, "link": { "type": "doc", "id": "tutorials/2024-2025/index" diff --git a/our-initiatives/tutorials/index.mdx b/our-initiatives/tutorials/index.mdx index fca712d..9aa2bb4 100644 --- a/our-initiatives/tutorials/index.mdx +++ b/our-initiatives/tutorials/index.mdx @@ -1,5 +1,5 @@ --- -sidebar_position: 0 +sidebar_position: 1 --- import DocCardList from '@theme/DocCardList' @@ -18,6 +18,8 @@ This academic year, the tutorial series is being delivered by the following peop - [Zachary Baker](#) (ML Officer) - [Paul Chaminieu](#) (ML Officer) - [Anna-Maria](#) (ML Officer) +- [Franciszek Nowak](#) (ML Officer - Visual Computing I) +- [James Ray](#) (ML Officer - Generative Visual Computing) ## DOXA Challenges @@ -40,7 +42,7 @@ During the first half term, we aim to cover basic concepts of **classical ML**: - Tutorial 0: **Introduction to AI** - Tutorial 1: **Introduction to Python** - Tutorial 2: **Regression** -- Tutorial 3: **Classification I** +- Tutorial 3: **Classification I** (Doxa) - Tutorial 4: **Classification II** After reading week, we will focus on **Deep Learning**!
@@ -48,14 +50,14 @@ After reading week, we will focus on **Deep Learning**! - Tutorial 5: **Neural Networks** - Tutorial 6: **Visual Computing I** (Doxa) - Tutorial 7: **Generative visual computing** -- Tutorial 8: **Recurrent Neural Networks** (Doxa) +- Tutorial 8: **Recurrent Neural Networks** - Tutorial 9: **Introduction to Transformers** ### Term 2 - Tutorial 10: **Natural Language Processing I** - Tutorial 11: **Natural Language Processing II** -- Tutorial 12: **Graph neural networks / Reinforcement learning** +- Tutorial 12: **Graph Neural Networks / Reinforcement Learning** ## Previous Seasons