Lustre 2.15.7 Deployment Guide (EL8)
Abstract: This document details the deployment process of the Lustre 2.15.7 parallel file system on EL8 (RHEL 8, Rocky Linux 8, CentOS Stream 8), covering MGS/MDS/OSS server configuration and client mounting.
1. Environment Preparation
1.1 System Requirements
- OS: RHEL 8 / Rocky Linux 8 / CentOS Stream 8.
- Kernel: Official EL8 kernels are recommended (Lustre 2.15.x supports "patchless" server mode).
- Network: InfiniBand (IB) or 100 Gb RoCE is recommended for optimal performance; a minimal LNet selection sketch follows this list.
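Lustre addresses every node by an LNet NID such as 192.168.1.10@tcp or 192.168.1.10@o2ib. A minimal sketch of static LNet network selection via modprobe options; the interface names (eth0, ib0) are placeholders that must be adjusted to your fabric:
```bash
# /etc/modprobe.d/lnet.conf -- choose exactly ONE networks= line for your fabric
# Plain TCP over Ethernet ("eth0" is a placeholder):
options lnet networks=tcp0(eth0)
# RDMA fabrics (InfiniBand or RoCE) via the o2ib LND ("ib0" is a placeholder):
# options lnet networks=o2ib0(ib0)
```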
1.2 Node Roles
| Role | Description | Hardware Recommendation |
|---|---|---|
| MGS (Management Server) | Manages global config (often co-located with MDS). | High Availability Node |
| MDS (Metadata Server) | Stores metadata (filenames, permissions). | NVMe SSD (Critical) |
| OSS (Object Storage Server) | Stores actual file data. | Large HDD RAID or NVMe |
| Client | Compute nodes accessing the filesystem. | Install Lustre Client |
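For concreteness, the examples in the rest of this guide assume a minimal topology. The hostnames and all addresses except the MGS address 192.168.1.10 (reused in later sections) are hypothetical:
```text
192.168.1.10  mgs-mds01   # MGS + MDT0 (co-located)
192.168.1.11  oss01       # OST0
192.168.1.20  client01    # compute node
```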
1.3 System Optimization
Execute on all Server Nodes (MGS/MDS/OSS):
```bash
# 1. Disable Firewall
systemctl stop firewalld
systemctl disable firewalld
# 2. Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
# 3. Disable Swap (Performance)
swapoff -a
sed -i '/swap/d' /etc/fstab
```
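A quick read-only check that the hardening took effect:
```bash
systemctl is-enabled firewalld   # expect: disabled
getenforce                       # expect: Permissive (Disabled after reboot)
swapon --show                    # expect: no output
```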
2. Configure Whamcloud Repositories
Configure Yum repos on ALL nodes (including clients):
```bash
cat > /etc/yum.repos.d/lustre.repo <<EOF
[lustre-server]
name=Lustre Server
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.15.7/el8/patchless-ldiskfs-server/
enabled=1
gpgcheck=0
[lustre-client]
name=Lustre Client
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.15.7/el8/client/
enabled=1
gpgcheck=0
[e2fsprogs]
name=e2fsprogs
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el8/
enabled=1
gpgcheck=0
EOF
```
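Optionally refresh the metadata now so repo problems (typos, unreachable mirrors) surface before installation:
```bash
dnf clean all
dnf makecache
dnf repolist    # lustre-server, lustre-client, and e2fsprogs should be listed
```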
3. Installation
3.1 Dependencies (All Nodes)
Lustre's ldiskfs backend requires Whamcloud's patched e2fsprogs; the stock EL8 packages are replaced by the versions from the e2fsprogs repo configured above.
```bash
dnf install -y e2fsprogs e2fsprogs-libs libcom_err libss
```
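To confirm the Whamcloud build was installed rather than the stock EL8 one, check the package version (Whamcloud release strings typically carry a "wc" suffix, though the exact naming can vary):
```bash
rpm -q e2fsprogs
```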
3.2 Server Installation (MGS/MDS/OSS)
```bash
# Install kernel modules (Lustre core + ldiskfs OSD)
dnf install -y kmod-lustre kmod-lustre-osd-ldiskfs
# Install userspace tools (mkfs.lustre, lctl, mount.lustre) and the ldiskfs mount helper
dnf install -y lustre lustre-osd-ldiskfs-mount
# Verify Module Loading
modprobe lustre
lsmod | grep -E 'lustre|lnet'
```
Expected: output should include lustre and lnet (ldiskfs is loaded later, when an ldiskfs target is first mounted).
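It is also worth confirming that LNet itself comes up with the expected NID:
```bash
# Bring up LNet (reads the modprobe configuration) and list local NIDs
lnetctl lnet configure
lnetctl net show
```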
3.3 Client Installation
```bash
# kmod-lustre-client supplies the matching client kernel modules
dnf install -y lustre-client kmod-lustre-client
```
4. Filesystem Configuration
Data Loss Warning
mkfs.lustre will wipe the target disk. Ensure device paths (/dev/sdX) are correct.
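Verifying the targets first is cheap; the device names here match the examples below:
```bash
lsblk -f /dev/sdb /dev/sdc   # confirm these are the intended, empty devices
```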
4.1 MGS & MDT Configuration (Metadata)
Assuming device /dev/sdb, co-locating MGS and the first MDT.
```bash
# 1. Wipe signatures
wipefs -a /dev/sdb
# 2. Format
# --fsname: Filesystem name (Must match across cluster)
# --mgs: Enable Management Service
# --mdt: Enable Metadata Target
# --index=0: First MDT index must be 0
mkfs.lustre --fsname=myfs --mgs --mdt --index=0 /dev/sdb
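# (Optional) Read back the parameters just written; --dryrun makes no changes
tunefs.lustre --dryrun /dev/sdb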
# 3. Mount
mkdir -p /mnt/mdt0
mount -t lustre /dev/sdb /mnt/mdt0
```
4.2 OSS & OST Configuration (Object Storage)
Assuming device /dev/sdc and MGS NID 192.168.1.10@tcp.
```bash
# 1. Wipe signatures
wipefs -a /dev/sdc
# 2. Format
# --mgsnode: NID of MGS (IP@tcp)
# --ost: Object Storage Target
# --index=0: OST index (Unique per OST)
mkfs.lustre --fsname=myfs --mgsnode=192.168.1.10@tcp --ost --index=0 /dev/sdc
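# Additional OSTs follow the same pattern with a unique --index;
# e.g. for a second target ("/dev/sdd" is a placeholder):
# mkfs.lustre --fsname=myfs --mgsnode=192.168.1.10@tcp --ost --index=1 /dev/sdd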
# 3. Mount
mkdir -p /mnt/ost0
mount -t lustre /dev/sdc /mnt/ost0
```
5. Client Mounting & Verification
5.1 Mount Filesystem
On compute nodes:
```bash
mkdir -p /mnt/lustre
# Syntax: mount -t lustre <MGS_IP>@<NET>:/<FSNAME> <MOUNT_POINT>
mount -t lustre 192.168.1.10@tcp:/myfs /mnt/lustre
```
5.2 Verification
```bash
# Check Usage
lfs df -h
# Check OSTs
lfs osts
# Check MDTs
lfs mdts
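# Optional functional test: create a file striped across all OSTs (-c -1)
# and read back its layout ("/mnt/lustre/stripe_test" is a placeholder path)
lfs setstripe -c -1 /mnt/lustre/stripe_test
lfs getstripe /mnt/lustre/stripe_test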
# Check Health
lfs check servers
```
6. Persistence (fstab)
Edit /etc/fstab for auto-mount. Use _netdev so these mounts wait for the network and do not fail the boot when it is down.
Servers (MDS/OSS):
```text
/dev/sdb /mnt/mdt0 lustre defaults,_netdev 0 0
/dev/sdc /mnt/ost0 lustre defaults,_netdev 0 0
```
Clients:
```text
192.168.1.10@tcp:/myfs /mnt/lustre lustre defaults,_netdev 0 0
```
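To exercise the client entry without rebooting (assuming /mnt/lustre can be remounted):
```bash
umount /mnt/lustre
mount -a           # mounts everything in fstab, including the Lustre entry
mount -t lustre    # lists mounted Lustre filesystems as confirmation
```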