
Intel oneAPI Installation and Environment Configuration

Abstract: This guide details the standardized deployment process for the Intel oneAPI cross-architecture programming model, covering the installation of Base and HPC Toolkits, silent deployment strategies, environment variable management, and MPI functionality verification.

1. Overview and Preparation

1.1 Introduction to Intel oneAPI

Intel oneAPI is a cross-industry, open, standards-based unified programming model designed to simplify development across multiple architectures (CPU, GPU, FPGA). The core toolkits include:

  • Intel® oneAPI Base Toolkit: The foundational kit containing the DPC++ compiler, Math Kernel Library (MKL), and performance analysis tools (Advisor, VTune).
  • Intel® oneAPI HPC Toolkit: An extension for high-performance computing, containing the Fortran compiler, MPI library, and debugging tools (Inspector, Trace Analyzer).

1.2 Software Acquisition & Planning

It is recommended to install oneAPI in a standard path to ensure compatibility.

Item            Description                         Example Path
Package Dir     Where the .sh scripts are stored    /root/inteloneapi
Install Path    Final software destination          /opt/intel2021
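
A minimal shell sketch of this planning step, assuming the example paths in the table above (adjust to your environment):

bash
# Directory that will hold the offline installer scripts
mkdir -p /root/inteloneapi
# Target prefix; the installer creates the oneapi/ subdirectory itself
mkdir -p /opt/intel2021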

Download Channels

Both toolkits are available free of charge from the official Intel oneAPI download pages.

2. Installation Process

2.1 Preparation

Upload the installation scripts (e.g., l_BaseKit_p_*.sh and l_HPCKit_p_*.sh) to the server and grant execution permissions:

bash
cd /root/inteloneapi
chmod a+x l_BaseKit_p_*.sh
chmod a+x l_HPCKit_p_*.sh
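
Before launching the installers, it is also worth confirming that the scripts transferred completely and that the target filesystem has enough free space (the offline kits need several tens of gigabytes during extraction and installation, depending on the components selected):

bash
# Confirm the installer scripts are present and their sizes look reasonable
ls -lh /root/inteloneapi/l_*Kit_p_*.sh
# Check free space on the filesystem that will hold /opt/intel2021
df -h /opt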

2.2 Important Notice

Path Consistency

The HPC Toolkit must be installed in the same directory as the Base Toolkit. When installing the Base Toolkit, ensure you define the path (e.g., /opt/intel2021/oneapi) correctly. You cannot change the root path when subsequently installing the HPC Toolkit.
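
A quick pre-check before running the HPC Toolkit installer, assuming the Base Toolkit was installed to /opt/intel2021/oneapi as described above:

bash
# The HPC Toolkit installer reuses this root; verify it exists
ls -d /opt/intel2021/oneapi
# setvars.sh being present indicates the Base Toolkit installed cleanly
ls /opt/intel2021/oneapi/setvars.sh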

2.3 Installation Methods

Method A: GUI Installation

Suitable for desktop environments or scenarios with X11 Forwarding enabled.

  1. Install Base Toolkit:

    bash
    sh l_BaseKit_p_2021.1.0.2659_offline.sh
    • Select Accept & configure installation.
    • Specify path: /opt/intel2021/oneapi.
    • Click Begin Installation.
  2. Install HPC Toolkit:

    bash
    sh l_HPCKit_p_2021.1.0.2684_offline.sh
    • The installer will automatically detect the Base Toolkit path.
    • Click Begin Installation.

Method B: Silent/CLI Installation

Suitable for servers without a graphical environment; the --cli argument launches the installer's text-mode (command-line) interface.

1. Silent Install Base Toolkit

bash
# Enter interactive command line mode
sh l_BaseKit_p_*.sh -a --cli
  • Workflow:
    1. Accept license.
    2. Select Accept & configure installation.
    3. Enter custom path: /opt/intel2021/oneapi.
    4. Opt-out of data collection (optional).
    5. Select Begin Installation.

2. Silent Install HPC Toolkit

bash
sh l_HPCKit_p_*.sh -a --cli
  • Since the path is locked by the Base Kit, simply select Accept & install.
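
For fully unattended deployment with no prompts at all, the offline installers also support a silent mode. The flags below follow Intel's documented silent-install syntax for the 2021 offline packages; confirm them for your package version with sh l_BaseKit_p_*.sh -a --help before scripting a rollout.

bash
# Fully silent install of both kits into the same root (no prompts)
sh l_BaseKit_p_*.sh -a --silent --eula accept --install-dir /opt/intel2021/oneapi
sh l_HPCKit_p_*.sh -a --silent --eula accept --install-dir /opt/intel2021/oneapi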

3. Environment Variables

To invoke tools like icc or mpicc directly from the shell, environment variables must be loaded.

3.1 Sourcing setvars.sh (Current Session)

The most direct method is to initialize the current shell session by sourcing setvars.sh.

bash
# Syntax: source <install_path>/setvars.sh <arch> --force
source /opt/intel2021/oneapi/setvars.sh intel64 --force

Example Output:

text
:: initializing oneAPI environment ...
:: oneAPI environment initialized ::
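
To confirm that the environment was actually picked up, check that the tools resolve from the new installation path. To make the setup persistent, the same source line can be appended to a login script such as ~/.bashrc.

bash
# Sanity checks after sourcing setvars.sh
which icc mpicc mpirun        # should all resolve under /opt/intel2021/oneapi
mpicc -show                   # prints the underlying compiler and link flags
echo "$MKLROOT"               # should point into the oneAPI install tree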

3.2 Module Method (HPC Clusters)

In cluster environments (e.g., Slurm-managed systems that use Environment Modules or Lmod), it is recommended to generate modulefiles for version management.

  1. Generate Modulefiles:

    bash
    cd /opt/intel2021/oneapi
    # Run the setup script
    sh modulefiles-setup.sh

    Default path: /opt/intel2021/oneapi/modulefiles

  2. Load Modules:

    bash
    module use /opt/intel2021/oneapi/modulefiles
    module load mpi/2021.1.1
    module load compiler/2021.1.1
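
After loading, module list shows what is active, and the compiler and MPI wrappers should resolve to the module-managed paths. The 2021.1.1 versions above match the release used in this guide; check module avail for the versions actually generated on your system.

bash
module avail            # list the generated oneAPI modulefiles
module list             # confirm compiler and mpi are loaded
which mpiicc ifort      # wrappers should resolve under /opt/intel2021/oneapi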

4. Verification (MPI Hello World)

Compile a simple MPI C program to verify that the compiler and MPI library are working correctly.

4.1 Create Test Code

Create a file named hello.c:

c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    MPI_Init(NULL, NULL);
    
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello World from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);
           
    MPI_Finalize();
    return 0;
}

4.2 Compile and Run

  1. Compile: Ensure environment variables are loaded, then use mpicc.

    bash
    mpicc -o hello hello.c
  2. Run on Single Node:

    bash
    # Launch 5 processes
    mpirun -np 5 ./hello
  3. Run on Cluster: Assuming hostfile lists the compute nodes (a sample hostfile is sketched after this list) and the network uses InfiniBand via the OFI provider.

    bash
    mpirun -genv I_MPI_FABRICS shm:ofi -machinefile hostfile -np 6 ./hello
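
The hostfile referenced above is a plain text file listing one compute node per line. A minimal sketch using hypothetical node names (replace them with your cluster's real hostnames; every node must be reachable over passwordless SSH and see the same oneAPI installation path):

bash
# Create a hostfile with hypothetical node names
cat > hostfile <<'EOF'
node01
node02
node03
EOF

With Intel MPI, the total given by -np is distributed across the listed hosts; the per-host process count can additionally be controlled with the -ppn option.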
