§ 2024-12-13
- Install Anaconda or pip.
- If you need to build PyTorch with GPU support:
  a. For NVIDIA GPUs, install CUDA if your machine has a CUDA-enabled GPU.
  b. For AMD GPUs, install ROCm if your machine has a ROCm-enabled GPU.
- Follow the steps described here: https://github.com/pytorch/pytorch#from-source
To build PyTorch from source with GPU (CUDA) support, you need to follow a series of steps to ensure that you have the necessary tools, libraries, and environment to compile PyTorch with GPU acceleration.
Prerequisites
- CUDA Toolkit: Ensure that you have the appropriate version of the CUDA toolkit installed. PyTorch supports specific CUDA versions, so check which ones are compatible with the PyTorch version you want to build; recent releases support CUDA 11.x and 12.x. You can check your GPU's CUDA compatibility on NVIDIA's CUDA GPUs page.
- NVIDIA Drivers: Install a driver version for your GPU that is compatible with your CUDA version.
- Python: Ensure that Python is installed; a virtual environment is recommended for isolation.
- System Dependencies: You will need build tools such as git, cmake, and gcc.
Step-by-Step Guide to Build PyTorch from Source with GPU Support
Step 1: Install System Dependencies
First, install the necessary development tools and libraries. On Ubuntu or Debian-based systems:
```
sudo apt-get update
sudo apt-get install -y \
  build-essential \
  cmake \
  git \
  libgoogle-glog-dev \
  libgflags-dev \
  libssl-dev \
  libffi-dev \
  python3-dev \
  python3-pip \
  libomp-dev \
  libblas-dev \
  liblapack-dev \
  zlib1g-dev
```
If you're using another Linux distribution, you can install equivalent packages using the respective package manager.
Step 2: Install CUDA and cuDNN
To ensure PyTorch builds with GPU support, you need to install both the CUDA toolkit and cuDNN (CUDA Deep Neural Network library).
CUDA Installation: Follow the installation instructions for your operating system from the official CUDA Toolkit Installation Guide.
- CUDA Toolkit Installer
- Installation Instructions:
- Please ensure your device is configured per the CUDA Tegra Setup Documentation.
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.6.3/local_installers/cuda-tegra-repo-ubuntu2204-12-6-local_12.6.3-1_arm64.deb
sudo dpkg -i cuda-tegra-repo-ubuntu2204-12-6-local_12.6.3-1_arm64.deb
sudo cp /var/cuda-tegra-repo-ubuntu2204-12-6-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update && sudo apt-get -y install cuda-toolkit-12-6 cuda-compat-12-6
```
cuDNN Installation: Download the correct version of cuDNN from the NVIDIA website: cuDNN Download. Follow the installation instructions based on your CUDA version.
- Installation Instructions:
```
wget https://developer.download.nvidia.com/compute/cudnn/9.6.0/local_installers/cudnn-local-tegra-repo-ubuntu2204-9.6.0_1.0-1_arm64.deb
sudo dpkg -i cudnn-local-tegra-repo-ubuntu2204-9.6.0_1.0-1_arm64.deb
sudo cp /var/cudnn-local-tegra-repo-ubuntu2204-9.6.0/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update && sudo apt-get -y install cudnn
```
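As a quick sanity check (my own sketch, not part of NVIDIA's instructions), you can ask the dynamic linker whether it can resolve the cuDNN library after installation:

```python
import ctypes.util

def find_cudnn():
    """Return the cuDNN shared library name if the dynamic linker can
    resolve it (e.g. 'libcudnn.so.9'), or None if it is not visible."""
    return ctypes.util.find_library("cudnn")

print(find_cudnn())
```

If this prints None right after installing the .deb packages, try running `sudo ldconfig` to refresh the linker cache and check again.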
After installation, set the environment variables (adjust paths as needed for your installation):
```
export PATH=/usr/local/cuda-12.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH
export CUDNN_INCLUDE_DIR=/usr/local/cuda/include
export CUDNN_LIB_DIR=/usr/local/cuda/lib64
```
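To confirm which toolkit version the build will pick up, you can parse the banner that `nvcc --version` prints. A minimal sketch (the regex assumes nvcc's usual "release X.Y" line):

```python
import re

def nvcc_release(output):
    """Extract the CUDA release (e.g. '12.6') from `nvcc --version` output."""
    m = re.search(r"release (\d+)\.(\d+)", output)
    return f"{m.group(1)}.{m.group(2)}" if m else None

# Example against the banner format nvcc prints:
sample = "Cuda compilation tools, release 12.6, V12.6.68"
print(nvcc_release(sample))  # 12.6
```

In practice you would feed it `subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout` and compare the result against the toolkit version you installed above.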
Step 3: Set Up a Python Virtual Environment
To avoid conflicts with system packages, it's a good idea to create a virtual environment for PyTorch.
```
python3 -m venv pytorch_env
source pytorch_env/bin/activate
```
Step 4: Install Python Dependencies
Use pip to install the necessary Python build dependencies. (Note: do not `pip install typing` on Python 3.5+; `typing` is part of the standard library, and the backport package can break builds. PyTorch's build uses `typing-extensions` instead.)
```
pip install -U pip setuptools
pip install numpy ninja pyyaml typing-extensions
```
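Before kicking off a multi-hour build, it can save time to confirm the Python-side build dependencies are actually importable in the active environment. A small sketch (the package list is illustrative, not exhaustive):

```python
import importlib.util

def missing_packages(names=("numpy", "yaml", "ninja")):
    """Return the subset of names that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

print(missing_packages())
```

An empty list means the listed build dependencies resolve; anything printed needs a `pip install` before building.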
Step 5: Clone the PyTorch Repository
Clone the official PyTorch repository from GitHub:
```
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt
```
Step 6: Configure Build for CUDA Support
PyTorch uses CMake to configure the build process. To enable CUDA support, you need to set the appropriate environment variables before building.
Make sure the environment matches the CUDA version installed on your system (CUDA 12.6 in this build).
TORCH_CUDA_ARCH_LIST selects the CUDA compute architectures to compile for. The Jetson Orin's integrated Ampere GPU reports compute capability 8.7 (desktop Ampere cards such as the RTX 30 series are 8.6), so set it accordingly:
```
export TORCH_CUDA_ARCH_LIST="8.7"  # adjust to your GPU's compute capability
export CUDNN_INCLUDE_DIR=/usr/local/cuda/include
export CUDNN_LIB_DIR=/usr/local/cuda/lib64
```
You can find a list of CUDA compute architectures here: CUDA GPUs Compute Capability.
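If you are unsure of your GPU's compute capability, one option on systems where `nvidia-smi` supports the `compute_cap` query field (recent drivers; on Jetson, the module datasheet is the authoritative source) is to turn the query output into a TORCH_CUDA_ARCH_LIST value. A sketch:

```python
def arch_list_from_smi(smi_output):
    """Turn the output of
    `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`
    (one 'major.minor' line per GPU) into a TORCH_CUDA_ARCH_LIST value,
    deduplicated and sorted."""
    caps = sorted({line.strip() for line in smi_output.splitlines() if line.strip()})
    return ";".join(caps)

print(arch_list_from_smi("8.7\n"))  # 8.7
```

Restricting the list to exactly the architectures you own keeps build time down; compiling for every architecture is significantly slower.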
Step 7: Build PyTorch
Run the build process using python setup.py. Note the build system's warning: for ARM + CUDA builds, PyTorch strongly recommends enabling linker script optimization by exporting USE_PRIORITIZED_TEXT_FOR_LD=1.
```
pip install pyyaml
export USE_PRIORITIZED_TEXT_FOR_LD=1
time python3 setup.py install > log 2>&1 &
```
Build output (abridged):
```
...
writing torch.egg-info/PKG-INFO
writing dependency_links to torch.egg-info/dependency_links.txt
writing entry points to torch.egg-info/entry_points.txt
writing requirements to torch.egg-info/requires.txt
writing top-level names to torch.egg-info/top_level.txt
writing manifest file 'torch.egg-info/SOURCES.txt'
reading manifest file 'torch.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.*' under directory 'modules'
warning: no previously-included files matching '*.o' found anywhere in distribution
warning: no previously-included files matching '*.dylib' found anywhere in distribution
warning: no previously-included files matching '*.swp' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'NOTICE'
writing manifest file 'torch.egg-info/SOURCES.txt'
Copying torch.egg-info to /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages/torch-2.6.0a0+git57c46af-py3.10.egg-info
running install_scripts
Installing torchfrtrace script to /home/alexlai/PYTHON-3.10.15/bin
Installing torchrun script to /home/alexlai/PYTHON-3.10.15/bin
real 530m51.584s
user 1318m54.142s
sys 107m15.014s
```
This process can take a while, depending on your system's resources. PyTorch will be compiled with GPU support using the CUDA toolkit.
Step 8: Verify Installation
Once the installation completes, you can verify that PyTorch is built correctly and GPU support is enabled by running the following Python script:
```
import torch
print(torch.__version__) # Check PyTorch version
print(torch.cuda.is_available()) # Check if CUDA is available
print(torch.cuda.current_device()) # Get the current device
print(torch.cuda.get_device_name(0)) # Get the name of the GPU
```
If torch.cuda.is_available() returns True, it means the installation is successful, and PyTorch can use the GPU.
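If you script this check (for example in CI), it can help to collapse the four calls above into a single summary line. A sketch with a pure formatting helper (the function names here are my own, not part of torch):

```python
def cuda_report(version, available, device_name=None):
    """Summarize the Step 8 verification results into one line."""
    if not available:
        return f"PyTorch {version}: CUDA NOT available (check drivers and build flags)"
    return f"PyTorch {version}: CUDA available on {device_name}"

# With torch installed you would call:
#   cuda_report(torch.__version__, torch.cuda.is_available(),
#               torch.cuda.get_device_name(0) if torch.cuda.is_available() else None)
print(cuda_report("2.6.0a0", True, "Orin"))
```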
Additional Notes:
- CUDA Architecture: PyTorch is optimized for various CUDA architectures. Make sure to specify the right TORCH_CUDA_ARCH_LIST based on your GPU; you can find the correct architecture for your GPU model in NVIDIA's documentation.
- Dependencies: The build process may require additional libraries depending on the configuration. Ensure that all dependencies are installed as per the requirements.
- CUDA Version Compatibility: PyTorch only supports certain versions of CUDA. Be sure that the CUDA version installed on your system is compatible with the PyTorch version you are building; the official PyTorch documentation and the PyTorch GitHub repository list the compatible CUDA versions.
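The compatibility check itself can be automated. The sketch below compares an installed "major.minor" string against a supported list; the default list is illustrative only — consult the PyTorch release notes for the real support matrix:

```python
def cuda_supported(installed, supported=("11.8", "12.1", "12.4", "12.6")):
    """Return True if the installed CUDA major.minor is in the supported list.

    The default `supported` tuple is a placeholder; fill it from the
    release notes of the PyTorch version you are building."""
    return installed in supported

print(cuda_supported("12.6"))  # True
```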
Troubleshooting:
If you encounter errors during the build, carefully read the error messages, as they often provide information about missing dependencies or incompatible versions.
If you run into issues related to ninja or cmake, try updating them:
```
pip install --upgrade ninja cmake
```
By following these steps, you should be able to build PyTorch from source with GPU support.
---
```
(PYTHON-3.10.15) alexlai@JetsonOrinNano:~/build/pytorch$ python3
Python 3.10.15 (main, Dec 13 2024, 12:44:38) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alexlai/build/pytorch/torch/__init__.py", line 964, in <module>
raise ImportError(
ImportError: Failed to load PyTorch C extensions:
It appears that PyTorch has loaded the `torch/_C` folder
of the PyTorch repository rather than the C extensions which
are expected in the `torch._C` namespace. This can occur when
using the `install` workflow. e.g.
$ python setup.py install && python -c "import torch"
This error can generally be solved using the `develop` workflow
$ python setup.py develop && python -c "import torch" # This should succeed
or by running Python from a different directory.
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
>>>
(PYTHON-3.10.15) alexlai@JetsonOrinNano:~/build/pytorch$ python setup.py develop && python -c "import torch"
Building wheel torch-2.6.0a0+git57c46af
-- Building version 2.6.0a0+git57c46af
cmake --build . --target install --config Release
[1/2] Install the project...
-- Install configuration: "Release"
running develop
/home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running egg_info
writing torch.egg-info/PKG-INFO
writing dependency_links to torch.egg-info/dependency_links.txt
writing entry points to torch.egg-info/entry_points.txt
writing requirements to torch.egg-info/requires.txt
writing top-level names to torch.egg-info/top_level.txt
reading manifest file 'torch.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.*' under directory 'modules'
warning: no previously-included files matching '*.o' found anywhere in distribution
warning: no previously-included files matching '*.dylib' found anywhere in distribution
warning: no previously-included files matching '*.swp' found anywhere in distribution
adding license file 'LICENSE'
adding license file 'NOTICE'
writing manifest file 'torch.egg-info/SOURCES.txt'
running build_ext
-- Building with NumPy bindings
-- Detected cuDNN at ,
-- Detected CUDA at /usr/local/cuda-12.6
-- Not using XPU
-- Not using MKLDNN
-- Building NCCL library
-- Building with distributed package:
-- USE_TENSORPIPE=True
-- USE_GLOO=True
-- USE_MPI=False
-- Building Executorch
-- Not using ITT
Copying functorch._C from functorch/functorch.so to /home/alexlai/build/pytorch/build/lib.linux-aarch64-cpython-310/functorch/_C.cpython-310-aarch64-linux-gnu.so
copying build/lib.linux-aarch64-cpython-310/torch/_C.cpython-310-aarch64-linux-gnu.so -> torch
copying build/lib.linux-aarch64-cpython-310/functorch/_C.cpython-310-aarch64-linux-gnu.so -> functorch
Creating /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages/torch.egg-link (link to .)
Adding torch 2.6.0a0+git57c46af to easy-install.pth file
Installing torchfrtrace script to /home/alexlai/PYTHON-3.10.15/bin
Installing torchrun script to /home/alexlai/PYTHON-3.10.15/bin
Installed /home/alexlai/build/pytorch
Processing dependencies for torch==2.6.0a0+git57c46af
Searching for sympy==1.13.1
Best match: sympy 1.13.1
Adding sympy 1.13.1 to easy-install.pth file
Installing isympy script to /home/alexlai/PYTHON-3.10.15/bin
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for fsspec==2024.10.0
Best match: fsspec 2024.10.0
Adding fsspec 2024.10.0 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for jinja2==3.1.4
Best match: jinja2 3.1.4
Adding jinja2 3.1.4 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for networkx==3.4.2
Best match: networkx 3.4.2
Adding networkx 3.4.2 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for typing-extensions==4.12.2
Best match: typing-extensions 4.12.2
Adding typing-extensions 4.12.2 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for filelock==3.16.1
Best match: filelock 3.16.1
Adding filelock 3.16.1 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for mpmath==1.3.0
Best match: mpmath 1.3.0
Adding mpmath 1.3.0 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Searching for MarkupSafe==3.0.2
Best match: MarkupSafe 3.0.2
Adding MarkupSafe 3.0.2 to easy-install.pth file
Using /home/alexlai/PYTHON-3.10.15/lib/python3.10/site-packages
Finished processing dependencies for torch==2.6.0a0+git57c46af
```
```
(PYTHON-3.10.15) alexlai@JetsonOrinNano:~/build/pytorch$ python3
Python 3.10.15 (main, Dec 13 2024, 12:44:38) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.6899, 0.7354, 0.8878],
[0.3439, 0.8474, 0.5842],
[0.6278, 0.8280, 0.2129],
[0.7684, 0.6015, 0.6022],
[0.1944, 0.6431, 0.8432]])
>>>
```