paper reading

A summary of the papers I've read, so that I don't forget them.

# Reading habits

  1. Highlight formula variables in green when you run into them!
  2. Read each paper in several passes:
    1. Read the whole paper, work out what each sentence means, record only the important statements, and write down questions
    2. Distill the key points of each chapter
    3. Answer the questions written down in the first pass

Paper collection

  1. FULL PARAMETER FINE-TUNING FOR LARGE LANGUAGE MODELS WITH LIMITED RESOURCES
    1. full-parameter | fine-tuning
  2. Towards A Unified View Of Parameter-Efficient Transfer Learning
    1. fine-tuning | parameter-efficient
  3. 5
  4. 4
  5. 3
  6. LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
  7. TRANSFORMER EXPLAINER: Interactive Learning of Text-Generative Models
    1. Not very substantial: it just visualizes the Transformer pipeline (not all that useful either); its only contribution was making me walk through the Transformer once more
  8. Aligner: Efficient Alignment by Learning to Correct
  9. A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More
    1. Survey paper on alignment
  10. CLLMs: Consistency Large Language Models

Questions that came up while reading; some of them will need experiments to resolve

  1. What does "upstream" refer to?
  2. What does a token actually look like? (see the sketch below)
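
For the second question, a minimal sketch, assuming the HuggingFace `transformers` package and the public "gpt2" tokenizer are available (both are my assumptions, not anything from the papers above): it just prints the subword strings and the integer ids a model actually consumes.

```python
# Minimal sketch, assuming the `transformers` package and the public
# "gpt2" BPE tokenizer (assumptions for illustration only).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Layerwise importance sampling for memory-efficient fine-tuning."

# Subword strings: GPT-2's BPE marks a leading space with 'Ġ',
# e.g. something like ['Layer', 'wise', 'Ġimportance', ...]
print(tokenizer.tokenize(text))

# The integer ids the model actually receives as input.
ids = tokenizer.encode(text)
print(ids)

# Map the ids back to their subword strings to see the correspondence.
print(tokenizer.convert_ids_to_tokens(ids))
```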