§2024-12-05
To prepare the NVIDIA Jetson Orin Nano development board for coding with PyTorch, you'll need to go through several steps to ensure that the necessary software and libraries are installed and configured correctly. Here's a step-by-step guide to get you started:
Prerequisites:
- Jetson Orin Nano development board running the Ubuntu-based Jetson OS (flashed via JetPack).
- A monitor, keyboard, and mouse connected to the Jetson Orin Nano.
- An internet connection for downloading required packages and dependencies.

Step-by-Step Guide:
- Set Up Your Jetson Orin Nano (if not already done)
Flash JetPack: If you haven’t already set up your Jetson Orin Nano, you need to install the operating system. NVIDIA provides JetPack, an Ubuntu-based software stack that bundles the OS image with the drivers, CUDA, and libraries the Jetson hardware needs. You can flash the board using the NVIDIA SDK Manager on a host PC, which guides you through the process and also installs dependencies such as CUDA, cuDNN, and TensorRT that PyTorch relies on for efficient execution on Jetson hardware.
Download SDK Manager from NVIDIA's Developer Site. Follow the Setup Instructions to flash your Jetson Orin Nano. Once flashing is complete, boot up your device.
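As an optional sanity check after flashing (not part of the original steps), you can confirm that the JetPack meta-package is present; the package name nvidia-jetpack is what recent JetPack releases use, so treat it as an assumption to verify against your release notes:

```bash
# Assumption: recent JetPack releases install the "nvidia-jetpack" meta-package.
# If nothing is found, check your JetPack release notes for the exact package name.
sudo apt show nvidia-jetpack

# List installed JetPack-related packages
dpkg -l | grep nvidia-jetpack
```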
- Check Jetson OS Version
Once your Jetson Orin Nano is up and running, it’s good to check the installed versions of JetPack (L4T) and Ubuntu:
```bash
# Check JetPack / L4T version
cat /etc/nv_tegra_release

# Check Ubuntu version
lsb_release -a
```
- Update System Packages
It’s important to keep your system updated before proceeding with the installation of libraries.
```bash
sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
```
- Install Required Dependencies
PyTorch on Jetson requires several dependencies, including Python, pip, CUDA, and cuDNN. Since Jetson devices come with CUDA and cuDNN as part of JetPack, you typically don't need to install them manually. However, there are a few additional packages you'll need.
First, make sure Python 3 and pip are installed (they should be by default):
```bash
sudo apt install python3 python3-pip python3-dev
```
Additionally, install some necessary build tools:
```bash
sudo apt install build-essential cmake git
```
- Install PyTorch for Jetson
PyTorch needs to be installed with specific optimizations for the Jetson architecture. NVIDIA provides pre-built PyTorch wheels for the Jetson platform, so we’ll use those instead of building from source (which can be time-consuming).
To install PyTorch on Jetson:
Install NVIDIA's PyTorch Wheel: Go to NVIDIA's PyTorch page for Jetson to find the correct wheel for your version of JetPack (make sure it matches your CUDA version).
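Before downloading, it helps to confirm which Python and CUDA versions your JetPack image ships, since the wheel's cpXX tag and CUDA build must match them. A minimal check, assuming the CUDA toolkit sits at the default /usr/local/cuda path used by typical JetPack installs:

```bash
# Python version determines the cpXX tag of the wheel you need (e.g. cp38 for Python 3.8)
python3 --version

# CUDA toolkit version shipped with your JetPack release
# (nvcc may not be on PATH by default; the standard JetPack install location is assumed here)
/usr/local/cuda/bin/nvcc --version
```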
Example Command for Installing PyTorch: For example, if you're using JetPack 5.x with CUDA 11.4, you might run:
```bash
sudo pip3 install torch-2.x.x-cp38-cp38-linux_aarch64.whl
```
You can download the correct wheel file from NVIDIA’s site. Make sure the wheel corresponds to your exact version of JetPack (the filename above is only an example; JetPack 5.x ships Python 3.8, hence the cp38 tag).
Install torchvision (optional but recommended for computer vision tasks): To install the torchvision library, use the following command (assuming your PyTorch version is compatible):
```bash
sudo pip3 install torchvision
```
Install other dependencies for PyTorch: For extra functionality, such as support for specific deep learning models or general scientific computing, you might also want common libraries like numpy, scipy, and matplotlib.
```bash
sudo pip3 install numpy scipy matplotlib
```
- Verify PyTorch Installation
After installation is complete, verify that PyTorch is installed correctly by opening a Python shell and testing PyTorch:
```bash
python3
```
Then, run the following commands inside the Python shell:
```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if the Jetson GPU is visible to PyTorch
```
torch.__version__ will print the installed version of PyTorch, and torch.cuda.is_available() should return True if PyTorch can detect the CUDA-enabled GPU on your Jetson device.
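A couple of extra checks can make the GPU verification more informative; these are standard PyTorch calls, offered here as an optional sketch rather than part of the official steps:

```python
import torch

if torch.cuda.is_available():
    # Name of the detected GPU (should report the Orin's integrated GPU)
    print(torch.cuda.get_device_name(0))
    # cuDNN version the wheel was built against
    print(torch.backends.cudnn.version())
    # Run a tiny operation on the GPU to confirm kernels actually execute
    x = torch.ones(3, 3, device="cuda")
    print((x @ x).sum().item())  # expect 27.0
```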
- Test a Simple PyTorch Model (Optional)
To confirm everything is working, try running a simple model:
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Create a simple model
model = nn.Linear(10, 5).cuda()

# Create dummy data
input_data = torch.randn(1, 10).cuda()
target = torch.randn(1, 5).cuda()

# Loss and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Forward pass
output = model(input_data)
loss = criterion(output, target)
print(f'Loss: {loss.item()}')
```
If this runs without errors, you're all set to start working with PyTorch on your Jetson Orin Nano!
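To also exercise the backward pass and optimizer on the GPU, the snippet above can be extended into a tiny training loop. This is an illustrative sketch using the same placeholder layer sizes, not part of NVIDIA's instructions:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 5).cuda()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Fixed dummy batch; the loss should shrink over iterations if gradients flow on the GPU
input_data = torch.randn(16, 10).cuda()
target = torch.randn(16, 5).cuda()

for step in range(100):
    optimizer.zero_grad()                              # clear gradients from the previous step
    loss = criterion(model(input_data), target)        # forward pass + loss
    loss.backward()                                    # compute gradients on the GPU
    optimizer.step()                                   # update the weights
    if step % 20 == 0:
        print(f'step {step}: loss {loss.item():.4f}')
```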
- Set Up Jupyter Notebook (Optional)
If you prefer to code in Jupyter Notebook for AI experiments, you can install Jupyter as follows:
```bash
sudo apt install python3-notebook
sudo pip3 install jupyter
```
Start Jupyter with the following command:
```bash
jupyter notebook --ip=0.0.0.0
```
Access it via your browser at http://<Jetson_IP>:8888 (the --ip=0.0.0.0 flag makes the server reachable from other machines on your network; omit it if you only browse locally on the Jetson).
- Optimizing Performance (Optional)
TensorRT: You can use TensorRT to optimize your models for inference on the Jetson hardware (a common route is exporting the model to ONNX first; see the sketch after the summary).
Jetson Accelerated Libraries: Leverage other Jetson-specific libraries, such as cuDNN, CUDA, and TensorRT, to get the most out of your GPU.

Summary:
- Set up your Jetson Orin Nano with JetPack and make sure it is updated.
- Install Python 3 and required tools like pip and the build essentials.
- Install the PyTorch wheel from NVIDIA's site for Jetson.
- Verify the installation and run simple tests to confirm PyTorch is using your GPU.
- Optionally, set up Jupyter Notebook for a better coding experience.

Once all these steps are done, you're ready to start developing AI applications using PyTorch on your Jetson Orin Nano!
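As a starting point for the TensorRT route mentioned above, here is a hedged sketch of exporting a PyTorch model to ONNX, which tools such as trtexec (bundled with JetPack's TensorRT) can then turn into an optimized engine. The model, file name, and trtexec path are placeholders/assumptions, and the exact export arguments depend on your model:

```python
import torch
import torch.nn as nn

# Placeholder model; swap in your trained network
model = nn.Linear(10, 5).cuda().eval()

# Dummy input with the shape the deployed model will see
dummy_input = torch.randn(1, 10).cuda()

# Export to ONNX; TensorRT can consume this file, e.g.:
#   /usr/src/tensorrt/bin/trtexec --onnx=model.onnx   (path assumed from a typical JetPack install)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```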