My Learning Notes

I created this repo to keep an organized view of my learning notes. I may not need it in the future, or I may end up creating more repos like it, but for now let's start with this.

DevOps

BackEnd

Machine Learning

Blog

Speech Processing

  • 2022-08-25 | MFCC : Short description for MFCC

Statistics

  • 2023-08-25 | Hypothesis Testing : Short description for hypothesis testing
  • 2023-08-25 | z-score : Short description for z-score

System Design

Coding

Paper Summaries

  • 2021-10-16 | Edward Hu et al. | LoRA: Low-Rank Adaptation of Large Language Models : LoRA introduces a resource-efficient approach for adapting large pre-trained language models, such as GPT-3, to specific tasks without the heavy costs of traditional fine-tuning. It maintains model quality, minimizes inference latency, and facilitates quick task-switching. (A minimal sketch of the low-rank update appears after this list.)
  • 2023-12-19 | Lingling Xu et al. | Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment : This paper critically reviews Parameter-Efficient Fine-Tuning (PEFT) methods for large pretrained language models (PLMs), highlighting their benefits in resource-limited settings. It assesses these methods' performance, efficiency, and memory usage across tasks like natural language understanding, machine translation, and generation.
  • 2023-05-17 | Rohan Anil et al. | PaLM 2 Technical Report : PaLM 2 is a new state-of-the-art language model with better multilingual and reasoning capabilities that is more compute-efficient than its predecessor, PaLM.
  • 2023-05-18 | Chunting Zhou et al. | LIMA: Less Is More for Alignment : Large language models are trained in two stages: (1) unsupervised pretraining from raw text to learn general-purpose representations, and (2) large-scale instruction tuning and reinforcement learning to better align with end tasks and user preferences. The paper measures the relative importance of these two stages.
  • 2022-03-29 | Jordan Hoffmann et al. | Training Compute-Optimal Large Language Models : The paper investigates the optimal model size and number of training tokens for a transformer language model under a given compute budget. (A back-of-the-envelope sketch of this trade-off appears after this list.)
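
As a quick reference for the LoRA entry above, here is a minimal PyTorch-style sketch of the core idea: keep the pretrained weight frozen and learn a low-rank update B·A on top of it. The class name `LoRALinear` and the `r`/`alpha` defaults are illustrative choices of mine, not the paper's released code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen

        # Low-rank factors: A is (r, in_features), B is (out_features, r).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init => update starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Only `lora_A` and `lora_B` receive gradients, which is where the parameter-efficiency claim in the summary comes from.

For the compute-optimal training entry, here is a back-of-the-envelope sketch of the trade-off, assuming the commonly cited C ≈ 6·N·D FLOPs estimate and the roughly 20-tokens-per-parameter allocation associated with the paper's results; the function name and the example budget are for illustration only.

```python
def compute_optimal_allocation(compute_budget_flops: float, tokens_per_param: float = 20.0):
    """Split a training compute budget C between parameters N and tokens D,
    assuming C ~= 6 * N * D and D ~= tokens_per_param * N."""
    # C = 6 * N * D and D = tokens_per_param * N  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (compute_budget_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a budget of ~5.9e23 FLOPs lands near 70B parameters and 1.4T tokens,
# roughly the Chinchilla configuration reported in the paper.
params, tokens = compute_optimal_allocation(5.88e23)
print(f"{params:.2e} parameters, {tokens:.2e} tokens")
```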
