LLM Pre-training
- Transformer Architecture
  - Self-Attention Mechanism
  - Multi-Head Attention
  - Feed Forward Network (FFN)
- Positional Encoding
  - Absolute Positional Encoding (Sinusoidal)
  - Rotary Positional Embedding (RoPE)
- Training Objectives
  - Masked Language Modeling (MLM)
  - Causal Language Modeling (CLM)