
GPU acceleration

When running natural language processing (NLP) models on machine learning (ML) nodes in an OpenSearch cluster, you can use graphics processing unit (GPU) acceleration to get better performance from those nodes. GPUs work alongside the cluster's CPUs to speed up model upload and training.

Supported GPUs

Currently, ML nodes support the following GPU instances:

  - NVIDIA GPU instances
  - AWS Inferentia instances

If you need GPU compute capacity, you can provision GPU instances through Amazon Elastic Compute Cloud (Amazon EC2). For more information about provisioning GPU instances, see Recommended GPU instances.

Supported images

You can use Docker images or Amazon Machine Images (AMIs) with CUDA 11.6 for GPU acceleration.

PyTorch

GPU-accelerated ML nodes require PyTorch 1.12.1 to work with ML models.
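As a quick sanity check (assuming python3 is available in the environment you use alongside OpenSearch), you can confirm the installed PyTorch version from the shell:

```shell
# Print the installed PyTorch version, or a note if torch is missing.
# This is a generic check, not specific to OpenSearch.
python3 -c 'import torch; print(torch.__version__)' 2>/dev/null \
  || echo "torch not installed"
```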

Set up a GPU-accelerated ML node

Depending on your GPU, you can provision a GPU-accelerated ML node either manually or with an automated initialization script.

Prepare an NVIDIA ML node

NVIDIA uses CUDA to improve node performance. To take advantage of CUDA, make sure your driver exposes the nvidia-uvm kernel module in the /dev directory. To check for the module, run ls -al /dev | grep nvidia-uvm.

If nvidia-uvm does not exist, run nvidia-uvm-init.sh:

#!/bin/bash
## Script to initialize nvidia device nodes.
## https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#runfile-verifications
/sbin/modprobe nvidia
if [ "$?" -eq 0 ]; then
  # Count the number of NVIDIA controllers found.
  NVDEVS=`lspci | grep -i NVIDIA`
  N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
  NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`
  N=`expr $N3D + $NVGA - 1`
  for i in `seq 0 $N`; do
    mknod -m 666 /dev/nvidia$i c 195 $i
  done
  mknod -m 666 /dev/nvidiactl c 195 255
else
  exit 1
fi
/sbin/modprobe nvidia-uvm
if [ "$?" -eq 0 ]; then
  # Find out the major device number used by the nvidia-uvm driver
  D=`grep nvidia-uvm /proc/devices | awk '{print $1}'`
  mknod -m 666 /dev/nvidia-uvm c $D 0
  mknod -m 666 /dev/nvidia-uvm-tools c $D 0
else
  exit 1
fi

After you verify that nvidia-uvm exists under /dev, you can start OpenSearch in your cluster.
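A minimal sketch of that check, assuming OPENSEARCH_HOME points at your OpenSearch installation (the start command is commented out so the check can run on its own):

```shell
# Confirm the nvidia-uvm device nodes exist before starting OpenSearch.
if ls /dev | grep -q nvidia-uvm; then
  echo "nvidia-uvm present"
  # "$OPENSEARCH_HOME"/bin/opensearch   # start OpenSearch in the foreground
else
  echo "nvidia-uvm missing - run nvidia-uvm-init.sh first"
fi
```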

Prepare an AWS Inferentia ML node

Depending on the Linux operating system running on your AWS Inferentia instance, you can use the following commands and scripts to provision the ML node and run OpenSearch in your cluster.

First, download and install OpenSearch on your cluster.

Then export the OpenSearch location and set your environment variables. This example exports OpenSearch to the directory opensearch-2.5.0, so OPENSEARCH_HOME = opensearch-2.5.0:

echo "export OPENSEARCH_HOME=~/opensearch-2.5.0" | tee -a ~/.bash_profile
echo "export PYTORCH_VERSION=1.12.1" | tee -a ~/.bash_profile
source ~/.bash_profile

Next, create a shell script file named prepare_torch_neuron.sh. You can copy and customize one of the following examples based on your Linux operating system.

After running the script, exit your current terminal and open a new one to start OpenSearch.

GPU acceleration has been tested only on Ubuntu 20.04 and Amazon Linux 2. However, you can use other Linux operating systems.

Ubuntu 20.04

. /etc/os-release
sudo tee /etc/apt/sources.list.d/neuron.list > /dev/null <<EOF
deb https://apt.repos.neuron.amazonaws.com ${VERSION_CODENAME} main
EOF
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -

# Update OS packages
sudo apt-get update -y

################################################################################################################
# To install or update to Neuron versions 1.19.1 and newer from previous releases:
# - DO NOT skip 'aws-neuron-dkms' install or upgrade step, you MUST install or upgrade to latest Neuron driver
################################################################################################################

# Install OS headers
sudo apt-get install linux-headers-$(uname -r) -y

# Install Neuron Driver
sudo apt-get install aws-neuronx-dkms -y

####################################################################################
# Warning: If Linux kernel is updated as a result of OS package update
#          Neuron driver (aws-neuron-dkms) should be re-installed after reboot
####################################################################################

# Install Neuron Tools
sudo apt-get install aws-neuronx-tools -y

######################################################
#   Only for Ubuntu 20 - Install Python3.7
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get install python3.7
######################################################
# Install Python venv and activate Python virtual environment to install    
# Neuron pip packages.
cd ~
sudo apt-get install -y python3.7-venv g++
python3.7 -m venv pytorch_venv
source pytorch_venv/bin/activate
pip install -U pip

# Set pip repository to point to the Neuron repository
pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

#Install Neuron PyTorch
pip install torch-neuron torchvision
# If you need to trace the neuron model, install torch neuron with this command
# pip install torch-neuron neuron-cc[tensorflow] "protobuf==3.20.1" torchvision

# If you need to trace neuron model, install the transformers for tracing the Huggingface model.
# pip install transformers

# Copy torch neuron lib to OpenSearch
PYTORCH_NEURON_LIB_PATH=~/pytorch_venv/lib/python3.7/site-packages/torch_neuron/lib/
mkdir -p $OPENSEARCH_HOME/lib/torch_neuron; cp -r $PYTORCH_NEURON_LIB_PATH/ $OPENSEARCH_HOME/lib/torch_neuron
export PYTORCH_EXTRA_LIBRARY_PATH=$OPENSEARCH_HOME/lib/torch_neuron/lib/libtorchneuron.so
echo "export PYTORCH_EXTRA_LIBRARY_PATH=$OPENSEARCH_HOME/lib/torch_neuron/lib/libtorchneuron.so" | tee -a ~/.bash_profile

# Increase JVM stack size to >=2MB
echo "-Xss2m" | tee -a $OPENSEARCH_HOME/config/jvm.options
# Increase max file descriptors to 65535
echo "$(whoami) - nofile 65535" | sudo tee -a /etc/security/limits.conf
# max virtual memory areas vm.max_map_count to 262144
sudo sysctl -w vm.max_map_count=262144

Amazon Linux 2

# Configure Linux for Neuron repository updates
sudo tee /etc/yum.repos.d/neuron.repo > /dev/null <<EOF
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF
sudo rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
# Update OS packages
sudo yum update -y
################################################################################################################
# To install or update to Neuron versions 1.19.1 and newer from previous releases:
# - DO NOT skip 'aws-neuron-dkms' install or upgrade step, you MUST install or upgrade to latest Neuron driver
################################################################################################################
# Install OS headers
sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r) -y
# Install Neuron Driver
####################################################################################
# Warning: If Linux kernel is updated as a result of OS package update
#          Neuron driver (aws-neuron-dkms) should be re-installed after reboot
####################################################################################
sudo yum install aws-neuronx-dkms -y
# Install Neuron Tools
sudo yum install aws-neuronx-tools -y

# Install Python venv and activate Python virtual environment to install    
# Neuron pip packages.
cd ~
sudo yum install -y python3.7-venv gcc-c++
python3.7 -m venv pytorch_venv
source pytorch_venv/bin/activate
pip install -U pip

# Set pip repository to point to the Neuron repository
pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install Neuron PyTorch
pip install torch-neuron torchvision
# If you need to trace the neuron model, install torch neuron with this command
# pip install torch-neuron neuron-cc[tensorflow] "protobuf<4" torchvision

# If you need to trace the neuron model, install the transformers library for tracing the Huggingface model.
# pip install transformers

# Copy torch neuron lib to OpenSearch
PYTORCH_NEURON_LIB_PATH=~/pytorch_venv/lib/python3.7/site-packages/torch_neuron/lib/
mkdir -p $OPENSEARCH_HOME/lib/torch_neuron; cp -r $PYTORCH_NEURON_LIB_PATH/ $OPENSEARCH_HOME/lib/torch_neuron
export PYTORCH_EXTRA_LIBRARY_PATH=$OPENSEARCH_HOME/lib/torch_neuron/lib/libtorchneuron.so
echo "export PYTORCH_EXTRA_LIBRARY_PATH=$OPENSEARCH_HOME/lib/torch_neuron/lib/libtorchneuron.so" | tee -a ~/.bash_profile
# Increase JVM stack size to >=2MB
echo "-Xss2m" | tee -a $OPENSEARCH_HOME/config/jvm.options
# Increase max file descriptors to 65535
echo "$(whoami) - nofile 65535" | sudo tee -a /etc/security/limits.conf
# max virtual memory areas vm.max_map_count to 262144
sudo sysctl -w vm.max_map_count=262144

After the script finishes, open a new terminal so the settings take effect. Then start OpenSearch.
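For example (assuming the script was saved in the current directory; the guard keeps the commands safe to paste even if the file is elsewhere):

```shell
# Make the setup script executable and run it, if it is present.
if [ -f prepare_torch_neuron.sh ]; then
  chmod +x prepare_torch_neuron.sh
  ./prepare_torch_neuron.sh
else
  echo "prepare_torch_neuron.sh not found in $(pwd)"
fi
# Then open a new terminal so the ~/.bash_profile changes apply, and start OpenSearch:
# "$OPENSEARCH_HOME"/bin/opensearch
```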

OpenSearch should now be running in your GPU-accelerated cluster. However, if any errors occur during setup, you can install the GPU accelerator drivers manually.

Prepare an ML node manually

If the preceding two scripts did not provision your GPU-accelerated node correctly, you can install the AWS Inferentia drivers manually:

  1. Deploy an AWS accelerator instance based on your chosen Linux operating system. For instructions, see Deploy on AWS accelerator instances.

  2. Copy the Neuron library into OpenSearch. The following command uses a directory named opensearch-2.5.0:

    OPENSEARCH_HOME=~/opensearch-2.5.0
    
  3. Set the PYTORCH_EXTRA_LIBRARY_PATH path. In this example, we created a pytorch virtual environment inside the OPENSEARCH_HOME folder:

    PYTORCH_NEURON_LIB_PATH=~/pytorch_venv/lib/python3.7/site-packages/torch_neuron/lib/
    
    
    mkdir -p $OPENSEARCH_HOME/lib/torch_neuron; cp -r  $PYTORCH_NEURON_LIB_PATH/ $OPENSEARCH_HOME/lib/torch_neuron
    export PYTORCH_EXTRA_LIBRARY_PATH=$OPENSEARCH_HOME/lib/torch_neuron/lib/libtorchneuron.so
    
  4. (Optional) To monitor GPU usage on your accelerator instance, install Neuron Tools, which allows you to monitor the models running on your instance:

    # Install Neuron Tools
    sudo apt-get install aws-neuronx-tools -y
    
    # Add Neuron tools to your PATH
    export PATH=/opt/aws/neuron/bin:$PATH
    
    # Test Neuron tools
    neuron-top
    
  5. To make sure you have enough memory to upload models, increase the JVM stack size to >=2MB:

    echo "-Xss2m" | sudo tee -a $OPENSEARCH_HOME/config/jvm.options
    
  6. Start OpenSearch.

Troubleshooting

Because of the amount of data required to process ML models, you might encounter the following max file descriptors and vm.max_map_count errors when trying to run OpenSearch in your cluster:

[1]: max file descriptors [8192] for opensearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

To resolve the max file descriptors error, run the following command:

echo "$(whoami) - nofile 65535" | sudo tee -a /etc/security/limits.conf

To fix the vm.max_map_count error, run this command to increase the count to 262144:

sudo sysctl -w vm.max_map_count=262144
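To confirm both settings took effect, you can read them back with these read-only commands (note that the nofile limit only applies after you log in again):

```shell
# Current per-process open file limit (should report 65535 after re-login).
ulimit -n
# Current vm.max_map_count (should report 262144 after the sysctl change).
cat /proc/sys/vm/max_map_count
```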

Next steps

If you want to try building a GPU-accelerated cluster with AWS Inferentia and a pretrained Hugging Face model, see Compile and Deploy HuggingFace Pretrained BERT.
