ict-llm-seminar-materials
Pretrain

Dense Models

  • [2024/07] The Llama 3 Herd of Models | paper | code
  • [2024/05] The Road Less Scheduled | paper | code
  • [2023/08] Continual Pre-Training of Large Language Models: How to (re)warm your model? | paper
  • [2023/07] Llama 2: Open Foundation and Fine-Tuned Chat Models | paper
  • [2023/03] GPT-4 Technical Report | paper
  • [2023/02] LLaMA: Open and Efficient Foundation Language Models | paper
  • [2022/05] OPT: Open Pre-trained Transformer Language Models | paper
  • [2022/04] PaLM: Scaling Language Modeling with Pathways | paper
  • [2021/04] RoFormer: Enhanced Transformer with Rotary Position Embedding | paper (RoPE sketch after this list)
  • [2020/05] Language Models are Few-Shot Learners (GPT-3) | paper
  • [2019/02] Language Models are Unsupervised Multitask Learners (GPT-2) | paper
  • [2018/10] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | paper
  • [2018/06] Improving Language Understanding by Generative Pre-Training (GPT-1) | paper
  • [2017/06] Attention Is All You Need | paper
  • [2016/07] Layer Normalization | paper
  • [2015/08] Neural Machine Translation of Rare Words with Subword Units | paper
  • [2013/01] Efficient Estimation of Word Representations in Vector Space | paper
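
For orientation, a minimal NumPy sketch of the rotary position embedding from the RoFormer entry above. This uses the half-split (rather than interleaved) pairing common in open implementations; all names and sizes are illustrative, not the paper's reference code.

```python
import numpy as np

def rope(x, base=10000.0):
    """Rotary position embedding (RoFormer), half-split variant:
    each feature pair (x[i], x[i + d/2]) at position p is rotated by
    angle p * base^(-2i/d), so rotated query/key dot products depend
    only on the relative distance between positions."""
    seq_len, d = x.shape
    half = d // 2
    freqs = base ** (-2.0 * np.arange(half) / d)   # (half,) one frequency per pair
    angles = np.outer(np.arange(seq_len), freqs)   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

q = rope(np.random.randn(8, 64))   # apply to queries (and keys) before attention
```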

MoE

  • [2024/05] DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model | paper | code
  • [2024/01] DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models | paper | code
  • [2024/01] Mixtral of Experts | paper
  • [2022/02] ST-MoE: Designing Stable and Transferable Sparse Expert Models | paper
  • [2021/12] GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | paper
  • [2021/01] Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | paper
  • [2020/06] GShard: Scaling Giant Models with Conditional Computation | paper
  • [2017/01] Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | paper (routing sketch after this list)
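
A minimal sketch of the top-k routing idea shared by the MoE papers above, assuming a plain linear router with top-2 selection; the noisy gating and load-balancing losses from the papers are omitted, and `experts`/`W_gate` are illustrative stand-ins.

```python
import numpy as np

def top2_route(x, W_gate, experts):
    """Route one token through its top-2 experts and mix their outputs
    with renormalized softmax gate weights; other experts do no work."""
    logits = x @ W_gate                          # (n_experts,) router scores
    top2 = np.argsort(logits)[-2:]               # indices of the two best experts
    w = np.exp(logits[top2] - logits[top2].max())
    w = w / w.sum()                              # renormalized gate weights
    return sum(wi * experts[i](x) for wi, i in zip(w, top2))

experts = [lambda x, s=s: np.tanh(x * s) for s in (0.5, 1.0, 2.0, 4.0)]
y = top2_route(np.random.randn(16), np.random.randn(16, 4), experts)
```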

Scaling Laws & Emergent Analysis

  • [2024/05] Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations | paper
  • [2023/05] Scaling Data-Constrained Language Models | paper
  • [2023/04] Are Emergent Abilities of Large Language Models a Mirage? | paper
  • [2022/06] Emergent Abilities of Large Language Models | paper
  • [2022/03] Training Compute-Optimal Large Language Models | paper (sizing sketch after this list)
  • [2022/02] Compute Trends Across Three Eras of Machine Learning | paper
  • [2020/01] Scaling Laws for Neural Language Models | paper
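
As a rough illustration of the compute-optimal recipe in Training Compute-Optimal Large Language Models (Chinchilla), a back-of-envelope sketch assuming the common C ≈ 6ND training-FLOPs approximation and the paper's roughly 20-tokens-per-parameter ratio.

```python
def compute_optimal(c_flops, tokens_per_param=20.0):
    """Back-of-envelope sizing: C ~ 6*N*D with D ~ 20*N
    gives N = sqrt(C / 120) and D = 20 * N."""
    n = (c_flops / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

n, d = compute_optimal(5.76e23)              # roughly Chinchilla's training budget
print(f"~{n:.1e} params, ~{d:.1e} tokens")   # ~6.9e10 params, ~1.4e12 tokens
```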

Instruction Tuning / Supervised Fine-tuning

  • [2024/12] Instruction Tuning for Large Language Models: A Survey | paper | code
  • [2024/03] COIG-CQIA: Quality Is All You Need for Chinese Instruction Fine-Tuning | paper
  • [2023/05] LIMA: Less Is More for Alignment | paper
  • [2023/04] WizardLM: Empowering Large Language Models to Follow Complex Instructions | paper
  • [2023/03] Alpaca: A Strong, Replicable Instruction-Following Model | paper | code
  • [2022/12] Self-Instruct: Aligning Language Models with Self-Generated Instructions | paper | code
  • [2021/09] Finetuned Language Models Are Zero-Shot Learners | paper

Parameter-Efficient Fine-Tuning

  • [2023/12] AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | paper | code
  • [2023/05] QLoRA: Efficient Finetuning of Quantized LLMs | paper | code
  • [2022/05] Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning | paper | code
  • [2021/06] BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | paper | code
  • [2021/06] LoRA: Low-Rank Adaptation of Large Language Models | paper | code (sketch after this list)
  • [2021/04] The Power of Scale for Parameter-Efficient Prompt Tuning | paper | code
  • [2021/01] Prefix-Tuning: Optimizing Continuous Prompts for Generation | paper | code
  • [2020/12] Parameter-Efficient Transfer Learning with Diff Pruning | paper
  • [2019/02] Parameter-Efficient Transfer Learning for NLP | paper | code
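
A minimal sketch of the LoRA update from the entry above, assuming the paper's initialization (Gaussian A, zero B) and alpha/r scaling; the class and sizes are illustrative, not the reference implementation.

```python
import numpy as np

class LoRALinear:
    """Frozen pretrained weight W plus a trainable low-rank update:
    y = x W^T + (alpha / r) * x A^T B^T, with A Gaussian-initialized
    and B zero-initialized so training starts from the frozen model."""
    def __init__(self, d_in, d_out, r=8, alpha=16.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen
        self.A = rng.standard_normal((r, d_in)) * 0.01      # trainable
        self.B = np.zeros((d_out, r))                       # trainable, zeros
        self.scale = alpha / r

    def __call__(self, x):
        # Only A and B would receive gradients during fine-tuning.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

y = LoRALinear(512, 512)(np.random.randn(4, 512))  # identical to frozen W at init
```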

Alignment / RLHF

  • [2024/12] The PRISM Alignment Dataset: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models | paper
  • [2024/11] ReST-MCTS*: LLM Self-Training via Process-Reward-Guided Tree Search | paper | code
  • [2024/07] Direct Preference Optimization: Your Language Model is Secretly a Reward Model | paper | code (loss sketch after this list)
  • [2024/06] Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models | paper | code
  • [2024/06] Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning | paper | code
  • [2024/01] Self-Rewarding Language Models | paper
  • [2024/01] Secrets of RLHF in Large Language Models Part II: Reward Modeling | paper
  • [2023/07] Secrets of RLHF in Large Language Models Part I: PPO | paper
  • [2022/03] Training language models to follow instructions with human feedback (InstructGPT) | paper
  • [2017/08] Proximal Policy Optimization Algorithms | paper | code
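
A minimal sketch of the per-pair DPO objective from the Direct Preference Optimization entry above; the log-probabilities in the usage line are placeholder numbers, not real model outputs.

```python
import numpy as np

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin is how much more the policy prefers the chosen
    response over the rejected one, relative to the reference model."""
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    return np.log1p(np.exp(-margin))   # -log(sigmoid(m)) = log(1 + e^-m)

# sequence log-probs under policy and frozen reference for (chosen, rejected)
print(dpo_loss(-12.0, -15.0, -13.0, -14.0))
```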

Inference

  • [2024/08] RULER: What’s the Real Context Size of Your Long-Context Language Models? | paper
  • [2024/04] Better & Faster Large Language Models via Multi-token Prediction | paper
  • [2024/01] The What, Why, and How of Context Length Extension Techniques in Large Language Models — A Detailed Survey | paper
  • [2023/07] Lost in the Middle: How Language Models Use Long Contexts | paper
  • [2022/12] A Length-Extrapolatable Transformer | paper
  • [2022/11] Fast Inference from Transformers via Speculative Decoding | paper (sketch after this list)
  • [2022/10] Contrastive Decoding: Open-ended Text Generation as Optimization | paper
  • [2018/11] Blockwise Parallel Decoding for Deep Autoregressive Models | paper
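
A simplified greedy-acceptance sketch in the spirit of the speculative decoding and blockwise parallel decoding entries above; the paper's rejection-sampling acceptance rule is omitted, and both model callables are toy stand-ins.

```python
import numpy as np

def greedy_speculative_step(target_logits, draft_next, prefix, k=4):
    """One step of a greedy speculative loop: a cheap draft model proposes
    k tokens; the target scores prefix + draft in a single forward pass,
    and we keep draft tokens up to the first disagreement, substituting
    the target's own choice there."""
    ctx = list(prefix)
    draft = []
    for _ in range(k):
        t = draft_next(ctx)            # draft model's greedy next token
        draft.append(t)
        ctx.append(t)
    # logits[j] is the target's distribution for the token after position j.
    logits = target_logits(list(prefix) + draft)
    out = []
    for i, t in enumerate(draft):
        best = int(np.argmax(logits[len(prefix) + i - 1]))
        out.append(best)               # target's pick at this position
        if best != t:                  # first disagreement: stop here
            break
    return out                         # at least one token per target pass

# Toy stand-ins (hypothetical models) just to show the mechanics:
vocab = 10
draft_next = lambda ctx: (sum(ctx) + 1) % vocab
target_logits = lambda seq: np.eye(vocab)[[(s + 2) % vocab for s in seq]]
print(greedy_speculative_step(target_logits, draft_next, [3, 1]))
```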

Reasoning

  • [2023/05] Tree of Thoughts: Deliberate Problem Solving with Large Language Models | paper | code
  • [2022/11] Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks | paper | code
  • [2022/05] Large Language Models are Zero-Shot Reasoners | paper
  • [2022/03] Self-Consistency Improves Chain of Thought Reasoning in Language Models | paper (sketch after this list)
  • [2022/01] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | paper
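
A one-function sketch of the self-consistency entry above: sample several chain-of-thought completions, then majority-vote the extracted final answers (the answers shown are made up).

```python
from collections import Counter

def self_consistency(final_answers):
    """Return the most common final answer across sampled reasoning
    paths instead of trusting a single greedy decode."""
    return Counter(final_answers).most_common(1)[0][0]

print(self_consistency(["18", "18", "17", "18", "20"]))  # -> "18"
```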

AI Infra

  • [2024/07] FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision | paper | code
  • [2023/10] Ring Attention with Blockwise Transformers for Near-Infinite Context | paper
  • [2023/09] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models | paper | code
  • [2023/07] FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | paper | code
  • [2022/05] FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | paper | code
  • [2022/01] Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model | paper | code
  • [2021/04] Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM | paper | code
  • [2019/10] ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | paper

AI4Code

  • [2025/04] QiMeng-GEMM: Automatically Generating High-Performance Matrix Multiplication Code by Exploiting Large Language Models | paper
  • [2025/01] A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond | paper
  • [2024/12] AGON: Automated Design Framework for Customizing Processors from ISA Documents | paper
  • [2024/11] OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models | paper | code
  • [2024/11] From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge | paper
  • [2024/11] A Survey on Large Language Models for Code Generation | paper
  • [2024/11] Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation | paper | code
  • [2024/11] SelfCodeAlign: Self-Alignment for Code Generation | paper | code
  • [2024/09] ComBack: A Versatile Dataset for Enhancing Compiler Backend Development Efficiency | paper
  • [2024/08] DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search | paper | code
  • [2024/07] CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization | paper | code
  • [2024/06] Magicoder: Empowering Code Generation with OSS-Instruct | paper | code
  • [2024/06] Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code | paper
  • [2024/06] McEval: Massively Multilingual Code Evaluation | paper | code
  • [2024/06] LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code | paper | code
  • [2024/05] DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data | paper | code
  • [2024/04] ChipNeMo: Domain-Adapted LLMs for Chip Design | paper
  • [2024/01] A Survey of Large Language Models for Code: Evolution, Benchmarking, and Future Trends | paper
  • [2023/12] VerilogEval: Evaluating Large Language Models for Verilog Code Generation | paper | code
  • [2023/11] Chip-Chat: Challenges and Opportunities in Conversational Hardware Design | paper | code
  • [2023/11] ANPL: Towards Natural Programming with Interactive Decomposition | paper | code
  • [2023/10] LILO: Learning Interpretable Libraries by Compressing and Documenting Code | paper | code
  • [2023/10] CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code | paper
  • [2023/10] WizardCoder: Empowering Code Large Language Models with Evol-Instruct | paper | code
  • [2022/11] CodeT: Code Generation with Generated Tests | paper | code
  • [2022/07] Efficient Training of Language Models to Fill in the Middle | paper
  • [2022/04] InCoder: A Generative Model for Code Infilling and Synthesis | paper | code
  • [2021/11] Measuring Coding Challenge Competence With APPS | paper | code
  • [2021/08] Program Synthesis with Large Language Models | paper
  • [2021/07] Evaluating Large Language Models Trained on Code | paper
  • [2020/09] CodeBLEU: A Method for Automatic Evaluation of Code Synthesis | paper

AIGC

  • [2023/03] Scalable Diffusion Models with Transformers (DiT) | paper | code
  • [2023/02] Adding Conditional Control to Text-to-Image Diffusion Models | paper | code
  • [2022/09] DreamFusion: Text-to-3D using 2D Diffusion | paper
  • [2022/05] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (Imagen) | paper
  • [2022/04] Hierarchical Text-Conditional Image Generation with CLIP Latents (DALL-E 2) | paper
  • [2021/12] High-Resolution Image Synthesis with Latent Diffusion Models | paper | code
  • [2020/06] Denoising Diffusion Probabilistic Models (DDPM) | paper | code

LLM Agent

  • [2025/03] A Survey on Large Language Model based Autonomous Agents | paper
  • [2023/10] MemGPT: Towards LLMs as Operating Systems | paper | code
  • [2023/09] The Rise and Potential of Large Language Model Based Agents: A Survey | paper
  • [2023/05] Voyager: An Open-Ended Embodied Agent with Large Language Models | paper | code
  • [2023/04] Generative Agents: Interactive Simulacra of Human Behavior | paper | code
  • [2023/04] LLM+P: Empowering Large Language Models with Optimal Planning Proficiency | paper | code
  • [2023/03] ReAct: Synergizing Reasoning and Acting in Language Models | paper
  • [2023/03] HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face | paper | code
  • [2023/03] PaLM-E: An Embodied Multimodal Language Model | paper | code
  • [2023/03] Reflexion: Language Agents with Verbal Reinforcement Learning | paper | code
  • [2023/02] Toolformer: Language Models Can Teach Themselves to Use Tools | paper
  • [2022/12] LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models | paper | code
  • [2022/11] PAL: Program-Aided Language Models | paper | code
  • [2022/10] Do As I Can, Not As I Say: Grounding Language in Robotic Affordances | paper | code
  • [2022/09] Code as Policies: Language Model Programs for Embodied Control | paper | code
  • [2022/05] TALM: Tool Augmented Language Models | paper
  • [2015/06] Language Understanding for Text-based Games Using Deep Reinforcement Learning | paper

MLLMs

  • [2023/10] Improved Baselines with Visual Instruction Tuning | paper | code
  • [2023/04] Visual Instruction Tuning (LLaVA) | paper | code
  • [2023/01] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | paper | code
  • [2022/04] Flamingo: a Visual Language Model for Few-Shot Learning | paper
  • [2022/01] BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | paper | code
  • [2021/07] Align before Fuse: Vision and Language Representation Learning with Momentum Distillation | paper | code
  • [2021/01] Learning Transferable Visual Models From Natural Language Supervision | paper | code

About

A collection of papers for the large language model seminar at the Institute of Computing Technology, Chinese Academy of Sciences (中科院计算所).
