Intel oneAPI Installation and Environment Configuration
Abstract: This guide details the standardized deployment process for the Intel oneAPI cross-architecture programming model, covering the installation of Base and HPC Toolkits, silent deployment strategies, environment variable management, and MPI functionality verification.
1. Overview and Preparation
1.1 Introduction to Intel oneAPI
Intel oneAPI is a cross-industry, open, standards-based unified programming model designed to simplify development across multiple architectures (CPU, GPU, FPGA). The core toolkits include:
- Intel® oneAPI Base Toolkit: The foundational kit containing the DPC++ compiler, Math Kernel Library (MKL), and performance analysis tools (Advisor, VTune).
- Intel® oneAPI HPC Toolkit: An extension for high-performance computing, containing the Fortran compiler, MPI library, and debugging tools (Inspector, Trace Analyzer).
1.2 Software Acquisition & Planning
It is recommended to install oneAPI in a standard path to ensure compatibility.
| Item | Description | Example Path |
|---|---|---|
| Package Dir | Where .sh scripts are stored | /root/inteloneapi |
| Install Path | Final software destination | /opt/intel2021 |
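Before downloading anything, it can be worth staging the directories from the table and confirming free space on the target filesystem. A minimal sketch, assuming the example paths above and root access:

```bash
# Create the staging directory for the offline installer scripts
mkdir -p /root/inteloneapi

# Check free space on the filesystem that will hold /opt/intel2021;
# the combined Base + HPC Toolkits require a substantial amount of disk space
df -h /opt
```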
Download Channels
The toolkits are available for free from the official Intel website:
2. Installation Process
2.1 Preparation
Upload the installation scripts (e.g., l_BaseKit_p_*.sh and l_HPCKit_p_*.sh) to the server and grant execution permissions:
```bash
cd /root/inteloneapi
chmod a+x l_BaseKit_p_*.sh
chmod a+x l_HPCKit_p_*.sh
```
2.2 Important Notice
Path Consistency
The HPC Toolkit must be installed in the same directory as the Base Toolkit. When installing the Base Toolkit, ensure you define the path (e.g., /opt/intel2021/oneapi) correctly. You cannot change the root path when subsequently installing the HPC Toolkit.
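Before launching the HPC Toolkit installer, it can help to confirm that the Base Toolkit really landed in the intended root. A minimal check, assuming the example path used throughout this guide:

```bash
# setvars.sh sits at the root of a oneAPI installation;
# if this listing fails, the Base Toolkit was not installed where you expected
ls /opt/intel2021/oneapi/setvars.sh
```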
2.3 Installation Methods
Method A: GUI Installation
Suitable for desktop environments or scenarios with X11 Forwarding enabled.
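If the server is remote, the installer GUI can be forwarded to a local desktop over SSH. A minimal sketch, with a hypothetical user and host name:

```bash
# -X enables X11 forwarding so the installer window renders locally
ssh -X root@server.example.com
```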
Install Base Toolkit:
```bash
sh l_BaseKit_p_2021.1.0.2659_offline.sh
```
- Select Accept & configure installation.
- Specify path: /opt/intel2021/oneapi.
- Click Begin Installation.

Install HPC Toolkit:
```bash
sh l_HPCKit_p_2021.1.0.2684_offline.sh
```
- The installer will automatically detect the Base Toolkit path.
- Click Begin Installation.
Method B: Silent/CLI Installation
Suitable for server batch deployment using the --cli argument.
1. Silent Install Base Toolkit
```bash
# Enter interactive command line mode
sh l_BaseKit_p_*.sh -a --cli
```
Workflow:
- Accept the license.
- Select Accept & configure installation.
- Enter custom path: /opt/intel2021/oneapi.
- Opt out of data collection (optional).
- Select Begin Installation.

2. Silent Install HPC Toolkit
```bash
sh l_HPCKit_p_*.sh -a --cli
```
Since the path is locked by the Base Kit, simply select Accept & install.
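For fully unattended batch deployment with no prompts at all, the offline installers also support silent-mode arguments. The flags below are an assumption based on Intel's installer options for the 2021 packages and should be verified against the help output of your specific installer before use:

```bash
# Assumed silent-mode invocation (verify the flags for your installer version)
sh l_BaseKit_p_*.sh -a --silent --eula accept --install-dir /opt/intel2021/oneapi
sh l_HPCKit_p_*.sh -a --silent --eula accept --install-dir /opt/intel2021/oneapi
```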
3. Environment Variables
To invoke tools like icc or mpicc directly from the shell, environment variables must be loaded.
3.1 Source Method (Recommended/Standalone)
The most direct method is to initialize the current shell session using setvars.sh.
```bash
# Syntax: source <install_path>/setvars.sh <arch> --force
source /opt/intel2021/oneapi/setvars.sh intel64 --force
```
Example Output:
```
:: initializing oneAPI environment ...
:: oneAPI environment initialized ::
```
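After sourcing setvars.sh, a quick sanity check is to confirm that the tools mentioned above now resolve from the shell:

```bash
# Both should point to binaries under /opt/intel2021/oneapi once the environment is loaded
which icc mpicc
mpicc --version
```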
3.2 Module Method (HPC Clusters)
In cluster environments (like Slurm), it is recommended to generate Modulefiles for version management.
Generate Modulefiles:
```bash
cd /opt/intel2021/oneapi
# Run the setup script
sh modulefiles-setup.sh
```
Default path: /opt/intel2021/oneapi/modulefiles

Load Modules:
```bash
module use /opt/intel2021/oneapi/modulefiles
module load mpi/2021.1.1
module load compiler/2021.1.1
```
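To confirm that the modulefiles were generated and loaded, the standard Environment Modules commands can be used (the version numbers follow the 2021.1.1 example above):

```bash
# Show everything visible on the module path, then what is currently loaded
module avail
module list
```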
4. Verification (MPI Hello World)
Compile a simple MPI C program to verify that the compiler and MPI library are working correctly.
4.1 Create Test Code
Create a file named hello.c:
```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    MPI_Init(NULL, NULL);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello World from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();
    return 0;
}
```
4.2 Compile and Run
Compile: Ensure environment variables are loaded, then use mpicc:
```bash
mpicc -o hello hello.c
```
Run on Single Node:
```bash
# Launch 5 processes
mpirun -np 5 ./hello
```
Run on Cluster: Assuming hostfile defines the compute nodes and the network uses InfiniBand (OFI):
```bash
mpirun -genv I_MPI_FABRICS shm:ofi -machinefile hostfile -np 6 ./hello
```
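For reference, the machinefile passed via -machinefile is a plain text list of compute nodes, one per line. A minimal sketch that writes a hypothetical two-node file (replace the placeholder host names with real ones):

```bash
# Create a hypothetical machinefile; node01/node02 are placeholders
cat > hostfile <<'EOF'
node01
node02
EOF
```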
