§2024-12-12

This tutorial was written for the NVIDIA Jetson Orin Nano Developer Kit (8 GB) on 2024-02-13 at 19:42:05. Details of the board are:

| Key | Value |
| --- | --- |
| P-Number | p3767-0005 |
| Module | NVIDIA Jetson Orin Nano (Developer kit) |
| SoC | tegra234 |
| CUDA Arch BIN | 8.7 |
| L4T | 36.2.0 |
| Jetpack | 6.0 DP, 6.1 latest |
| Machine | aarch64 |
| System | Linux |
| Distribution | Ubuntu 22.04 Jammy Jellyfish |
| Release | 5.15.122-tegra |
| Python | 3.11.5 |
| CUDA | 12.2.140 |
| OpenCV | 4.8.0 |
| OpenCV-CUDA | False |
| cuDNN | 8.9.4.25 |
| TensorRT | 8.6.2.3 |
| VPI | 3.0.10 |
| Vulkan | 1.3.204 |

IMPORTANT WARNING: The host must be running Ubuntu 20.04 (or 22.04) to flash the board.

1. Flashing the Board

The board can be flashed using the SDK Manager, which can be downloaded from the NVIDIA website. To flash the board, force it into recovery mode by following the steps in the JetsonHacks tutorial. Then, select the following options:

- Host Machine: Ubuntu 20.04 (mine: Ubuntu 22.04)
- Target Hardware: Jetson Orin Nano Developer Kit 8 GB
- Target OS: JetPack 6.0 DP (I used JetPack 6.1)
- DeepStream: 6.0 (7.1)

In my case, I had already mounted an NVMe SSD as the storage component. Most tutorials use SD cards, but I went with a mainstream 1 TB NVMe SSD for performance reasons. Select the pre-config option to initialize the board with a username and a password. For simplicity, both are set to nvidia. The board will be flashed, and it will be ready to use.

2. Setting up the Board

First, update the system:

sudo apt-get update
sudo apt-get upgrade
sudo reboot

Then, install the following packages:

sudo apt-get install python3 python3-dev python3-distutils python3-venv python3-pip

sudo apt-get install ssh firefox zlib1g software-properties-common lsb-release cmake build-essential libtool autoconf unzip wget htop ninja-build terminator zip

These packages will prepare the board for development. If you are working extensively with Python, install the conda package manager.

The following commands will install miniconda:

cd ~/Downloads/
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
chmod +x Miniconda3-latest-Linux-aarch64.sh
./Miniconda3-latest-Linux-aarch64.sh

Welcome to Miniconda3 py312_24.9.2-0

...


Do you accept the license terms? [yes|no]
Miniconda3 will now be installed into this location:
/home/alexlai/miniconda3

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/alexlai/miniconda3] >>> 
....
 zstandard          pkgs/main/linux-aarch64::zstandard-0.23.0-py312hc476304_0 
  zstd               pkgs/main/linux-aarch64::zstd-1.5.6-h6a09583_0 



Downloading and Extracting Packages:

Preparing transaction: done
Executing transaction: done
installation finished.
Do you wish to update your shell profile to automatically initialize conda?
This will activate conda on startup and change the command prompt when activated.
If you'd prefer that conda's base environment not be activated on startup,
   run the following command when conda is activated:

conda config --set auto_activate_base false

You can undo this by running `conda init --reverse $SHELL`? [yes|no]
[no] >>> no

You have chosen to not have conda modify your shell scripts at all.
To activate conda's base environment in your current shell session:

eval "$(/home/alexlai/miniconda3/bin/conda shell.YOUR_SHELL_NAME hook)" 

To install conda's shell functions for easier access, first activate, then:

conda init

Thank you for installing Miniconda3!

$ ~/miniconda3/bin/conda init    # this updates ~/.bashrc

$ source ~/.bashrc
$ echo $PATH
/home/alexlai/miniconda3/bin:/home/alexlai/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin



3. Checking CUDA and cuDNN

Before we check CUDA and cuDNN, we need to verify gcc and nvidia-smi:

gcc --version
nvidia-smi


(base) alexlai@JetsonOrinNano:~/Downloads/src$ gcc --version
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Thu Dec 12 16:09:13 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0                Driver Version: 540.4.0      CUDA Version: 12.6     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                  N/A  |                  N/A |                  N/A |
| N/A  N/A   N/A               N/A /  N/A |        Not Supported |      N/A         N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+


We will begin with checking CUDA:

mkdir ~/build && cd $_
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery/
make

$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Orin"
  CUDA Driver Version / Runtime Version          12.6 / 12.6
  CUDA Capability Major/Minor version number:    8.7
  Total amount of global memory:                 7620 MBytes (7989907456 bytes)
  (008) Multiprocessors, (128) CUDA Cores/MP:    1024 CUDA Cores
  GPU Max Clock rate:                            624 MHz (0.62 GHz)
  Memory Clock rate:                             624 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.6, CUDA Runtime Version = 12.6, NumDevs = 1
Result = PASS


> - CUDA Toolkit for GPU-accelerated parallel computing.
> - cuDNN, a GPU-accelerated library for deep neural networks.
> - TensorRT for deep learning inference optimization.
> - OpenCV for computer vision applications.


After checking CUDA, we will check cuDNN. Clone the cuDNN samples and install the FreeImage dependency:

$ cd ~/build
$ git clone https://github.com/johnpzh/cudnn_samples_v8.git
$ cd cudnn_samples_v8/mnistCUDNN/
$ ls
data  error_util.h  fp16_dev.cu  fp16_dev.h  fp16_dev.o  fp16_emu.cpp  fp16_emu.h  fp16_emu.o  gemv.h  Makefile  mnistCUDNN  mnistCUDNN.cpp  mnistCUDNN.o  readme.txt
$ sudo apt install libfreeimage3 libfreeimage-dev
$ sudo apt-get update && sudo apt-get upgrade && sudo apt autoremove

$ make clean && make
rm -rf *o
rm -rf mnistCUDNN
CUDA_VERSION is 12060
Linking agains cublasLt = true
CUDA VERSION: 12060
TARGET ARCH: aarch64
HOST_ARCH: aarch64
TARGET OS: linux
SMS: 35 50 53 60 61 62 70 72 75 80 86
/usr/local/cuda/bin/nvcc -ccbin g++ -I/usr/local/cuda/include -I/usr/local/cuda/targets/ppc64le-linux/include -IFreeImage/include -m64 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_86,code=compute_86 -o fp16_dev.o -c fp16_dev.cu
nvcc fatal   : Unsupported gpu architecture 'compute_35'
make: *** [Makefile:221: fp16_dev.o] Error 1

$ nvidia-smi
Thu Dec 12 16:39:26 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0                Driver Version: 540.4.0      CUDA Version: 12.6     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                  N/A  |                  N/A |                  N/A |
| N/A  N/A   N/A               N/A /  N/A |        Not Supported |      N/A         N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

> Based on the output from nvidia-smi, it looks like you're using an NVIDIA Orin device, which is part of the NVIDIA Jetson family of embedded systems. The Orin SoC (System on Chip) uses an Arm-based CPU, and its GPU is based on the NVIDIA Ampere architecture.
>
> Your current CUDA version is 12.6, which supports the latest architectures like sm_80 and sm_86 (Ampere and beyond). Given this, we need to update the Makefile to target the GPU architecture of the Ampere-based Orin: its compute capability is 8.7, so the SMS list should be reduced to 87 (e.g., `make SMS="87"`, assuming the samples Makefile honors the SMS variable it prints above).

cp -r /usr/src/cudnn_samples_v8/ ~/Documents/workspace/
cd cudnn_samples_v8/mnistCUDNN/
sudo apt install libfreeimage3 libfreeimage-dev
sudo apt-get update
sudo apt-get upgrade
make clean && make
./mnistCUDNN

The above did not work either, so we verify cuDNN with a minimal test program (cudnn_test.cpp) instead:

#include <iostream>
#include <cudnn.h>

int main() {
    cudnnHandle_t cudnn;
    cudnnStatus_t status;

    // Initialize cuDNN
    status = cudnnCreate(&cudnn);
    if (status != CUDNN_STATUS_SUCCESS) {
        std::cerr << "cuDNN initialization failed: " << cudnnGetErrorString(status) << std::endl;
        return -1;
    }

    // Get and print cuDNN version
    int version = cudnnGetVersion();
    std::cout << "cuDNN version: " << version << std::endl;

    // Destroy cuDNN handle
    cudnnDestroy(cudnn);
    return 0;
}
(base) alexlai@JetsonOrinNano:~/build/cudnn_test$ g++ cudnn_test.cpp -o cudnn_test -lcudnn -lcuda -lstdc++ -I/usr/include -I/usr/local/cuda/include -L/usr/lib/aarch64-linux-gnu/ -L/usr/local/cuda/lib64
(base) alexlai@JetsonOrinNano:~/build/cudnn_test$ ./cudnn_test 
cuDNN version: 90300
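The integer is cuDNN's packed version number; for cuDNN 9.x it decodes as major*10000 + minor*100 + patch, so 90300 corresponds to cuDNN 9.3.0. A small sketch of the decoding (90300 is just the value printed above):

# Decode a cuDNN 9.x packed version number (major*10000 + minor*100 + patch).
version = 90300
major, rem = divmod(version, 10000)
minor, patch = divmod(rem, 100)
print(f"cuDNN {major}.{minor}.{patch}")  # -> cuDNN 9.3.0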

4. Monitoring the Board

To monitor the board, we will install jetson-stats:

sudo pip3 install -U jetson-stats

Then log out and back in, and run jtop.

To check your board details, the versions of installed software, resource utilization across its computing resources, and power consumption, there are Python scripts that use jtop. For example, jtop_properties.py is a quick way to monitor all of the aforementioned.
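If you prefer to query these values from your own code, jetson-stats also exposes a Python API. A minimal sketch (the jtop service installed by the package must be running):

from jtop import jtop  # Python API of the jetson-stats package

# Open a connection to the jtop service and print one snapshot of board stats.
with jtop() as jetson:
    if jetson.ok():
        print(jetson.board)  # hardware and platform details
        print(jetson.stats)  # CPU/GPU/RAM usage, temperatures, power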

5. VS Code

If you are a Visual Studio Code user, VS Code is supported on the Jetson. Run the following commands to install it:

cd Downloads/
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg
sudo install -o root -g root -m 644 packages.microsoft.gpg /etc/apt/trusted.gpg.d/
sudo sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/trusted.gpg.d/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" > /etc/apt/sources.list.d/vscode.list'
rm -f packages.microsoft.gpg
sudo apt install apt-transport-https
sudo apt update
sudo apt install code

6. Case

For my board, I bought the Yahboom CUBE nano case. On their page, there are also tutorials and code for setting up the case and configuring the OLED screen that comes with it. Finally, there is also a GitHub repo associated with the case.
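If you want to drive the case's OLED from your own code, the screen on these cases is typically a 128x64 SSD1306 on I2C. Here is a minimal sketch using the luma.oled library; the bus number (7) and address (0x3C) are assumptions you should verify with i2cdetect and the case's own tutorials:

from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

# Bus 7 and address 0x3C are assumptions; check with `sudo i2cdetect -y -r 7`.
serial = i2c(port=7, address=0x3C)
device = ssd1306(serial)

# Draw a single frame of text on the display.
with canvas(device) as draw:
    draw.text((0, 0), "Jetson Orin Nano", fill="white")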

7. Install PyTorch

First, we will install the dependencies:

sudo apt-get install libopenblas-base libopenmpi-dev libomp-dev
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev

Then, we initialize the conda environment for PyTorch:

conda create -n torchenv python=3.10 pip

(base) alexlai@JetsonOrinNano:~$ conda activate torchenv
(torchenv) alexlai@JetsonOrinNano:~$ 

7.1. Now, we can install PyTorch:

(torchenv) alexlai@JetsonOrinNano:~$ pip install Cython
Collecting Cython
  Downloading Cython-3.0.11-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.2 kB)
Downloading Cython-3.0.11-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (3.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5/3.5 MB 10.6 MB/s eta 0:00:00
Installing collected packages: Cython
Successfully installed Cython-3.0.11

(torchenv) alexlai@JetsonOrinNano:~$ mkdir -p ~/build/src && cd $_
(torchenv) alexlai@JetsonOrinNano:~/build/src$ wget https://nvidia.box.com/shared/static/0h6tk4msrl9xz3evft9t0mpwwwkw7a32.whl -O torch-2.1.0-cp310-cp310-linux_aarch64.whl


pip install numpy torch-2.1.0-cp310-cp310-linux_aarch64.whl
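Before the (long) torchvision build, it is worth a quick sanity check that the wheel imports and sees the GPU; the full tests follow below:

import torch

# Quick sanity check of the freshly installed wheel.
print(torch.__version__)           # expect 2.1.0
print(torch.cuda.is_available())   # expect True on the Orin's integrated GPU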

Finally, we install torchvision:

(torchenv) alexlai@JetsonOrinNano:~/build/src$ cd ~/build
(torchenv) alexlai@JetsonOrinNano:~/build$ git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision

export BUILD_VERSION=0.16.1
cd torchvision/
python setup.py install --user
cd ../
pip install Pillow

To test PyTorch, run the following:

import torch

print(torch.__version__)
print('CUDA available: ' + str(torch.cuda.is_available()))
print('cuDNN version: ' + str(torch.backends.cudnn.version()))

a = torch.cuda.FloatTensor(2).zero_()
print('Tensor a = ' + str(a))
b = torch.randn(2).cuda()
print('Tensor b = ' + str(b))
c = a + b
print('Tensor c = ' + str(c))

To test torchvision, run the following:

import torch
import torchvision

print(torchvision.__version__)

from torchvision.models import resnet50

m = resnet50(weights=None)
m.eval()
x = torch.randn((4,3,224,224))
m(x)

7.2. Ensure LibTorch

To ensure that libtorch is installed, run the following:

cd ~/Documents/workspace/
mkdir tests
cd tests
mkdir build

Then, create a CMakeLists.txt file with the following content:

cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(torchsc)

find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(torchsc torchsc.cpp)
target_link_libraries(torchsc "${TORCH_LIBRARIES}")
set_property(TARGET torchsc PROPERTY CXX_STANDARD 17)

# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET torchsc
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:torchsc>)
endif (MSVC)

Finally, create a torchsc.cpp file with the following content:

#include <torch/torch.h>
#include <iostream>

int main() {
    torch::Tensor tensor = torch::rand({2, 3});
    std::cout << tensor << std::endl;
}

The directory structure should look like this:

.
├── build
├── CMakeLists.txt
└── torchsc.cpp

1 directory, 2 files

To build the project, run the following:

cd build
cmake -DCMAKE_PREFIX_PATH=`python3 -c 'import torch;print(torch.utils.cmake_prefix_path)'` ..
cmake --build . --config Release

Finally, run the executable:

./torchsc
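
The program prints a random 2x3 tensor. The output will look something like this (your values will differ):

 0.2536  0.1966  0.8359
 0.1141  0.6176  0.0829
[ CPUFloatType{2,3} ]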

7.3. Kineto

Kineto is a library that provides performance analysis for PyTorch and is part of the PyTorch Profiler. However, it is not built into the PyTorch wheel for Jetson. To use it, you would need to first build PyTorch from source, and after that you can build libkineto from source. Otherwise, you will get the following warning:

static library kineto_LIBRARY-NOTFOUND not found

To install Kineto from source, you can run the following:

export CUDA_SOURCE_DIR=/usr/local/cuda-12.2
git clone --recursive --branch v0.4.0 https://github.com/pytorch/kineto.git
cd kineto/libkineto
mkdir build && cd build
cmake ..
make

After ensuring that libkineto is working, you can install it:

sudo make install

To test the libkineto library, run the following:

import torch
import torch.nn as nn

x = torch.randn(1, 1).cuda()
lin = nn.Linear(1, 1).cuda()

with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA]
) as p:
    for _ in range(10):
        out = lin(x)
print(p.key_averages().table(
    sort_by="self_cuda_time_total", row_limit=-1))
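
If the profiler run works, you can also export the collected trace for inspection in Chrome's trace viewer (chrome://tracing); export_chrome_trace is part of the same torch.profiler API:

# After the profiling context has exited, dump the trace to a JSON file.
p.export_chrome_trace("trace.json")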

8. Set Performance Mode

Switch to the maximum-performance power model and lock the clocks (you can query the current mode with sudo nvpmodel -q):

sudo nvpmodel -m 0
sudo jetson_clocks

Also set the fan speed to maximum:

sudo jetson_clocks --fan

9. Install Java

sudo apt-get update
sudo apt-get install openjdk-11-jdk
java -version

10. Install Bazel

To install Bazel, you can run the following:

cd ~/Downloads
wget https://github.com/bazelbuild/bazelisk/releases/download/v1.8.1/bazelisk-linux-arm64
chmod +x bazelisk-linux-arm64
sudo mv bazelisk-linux-arm64 /usr/local/bin/bazel
which bazel

Notes

It is important to note that the NVIDIA SDK Manager must be installed on an Ubuntu 20.04 machine. I tried two different machines running Ubuntu 22.04 and attempted to flash the board, but it yielded errors. I also tried Ubuntu 18.04, but the latest supported JetPack there was 5.x.y, and at the moment of writing, the latest JetPack is 6.z. Therefore, the host machine must be running Ubuntu 20.04.

Helpful Links

- Deep Learning Libraries Compilation on Jetson Nano



---

§2024-11-21

- [How to set up NVIDIA's latest GPU-enabled single-board computer](https://shawnhymel.com/2255/getting-started-with-nvidia-jetson-orin-nano/)

The NVIDIA Jetson Orin Nano is a powerful single-board computer built with an NVIDIA Ampere GPU for performing a variety of parallel-operation tasks, like cryptocurrency mining and AI. In this guide, we will walk you through the process of flashing NVIDIA’s Ubuntu image to the Orin Nano Development Kit.

The Jetson Orin Nano Developer Kit Getting Started Guide defaults to flashing an SD card with the pre-configured Ubuntu image (similar to how you might configure a Raspberry Pi). This approach has two issues:

1. The SD card is much slower and offers less space than an NVMe M.2 SSD. If you are working with large AI models, I highly recommend purchasing and mounting an SSD in one of the available M.2 slots under the dev kit.

2. JetPack 6.0+ (with the newest Ubuntu OS) contains updated QSPI drivers. If you try to flash the SD card, you will likely find that your Orin Nano simply boots to a blank screen due to the outdated drivers. As a result, you must use the NVIDIA SDK Manager from a host computer to flash the OS directly to the dev kit (i.e. over a USB cable) the first time. See this post for more information.

To ensure that you can flash to either SSD or SD card as well as have the most up-to-date drivers, I recommend using the SDK Manager to flash directly to the board. The rest of this guide will show you how to use the SDK Manager to flash Ubuntu to the Orin Nano dev kit.

Required Hardware

- You will need the following hardware:
    - NVIDIA Jetson Orin Nano Developer Kit
    - USB C cable
    - SD Card (64GB UHS-1 or larger) or NVMe M.2 SSD
    - (Optional) DisplayPort to HDMI Adapter and HDMI cable
    - Keyboard, mouse, monitor

Install and Run Required Host Operating System

To flash the operating system (OS) onto the Orin Nano, you should use the NVIDIA SDK Manager running from a host computer. In my experience, you absolutely `must use the supported host operating system` to run the SDK Manager. [See the supported OS chart](https://developer.nvidia.com/sdk-manager) on the SDK Manager page to figure out which host OS you need to use. For this guide, we will use the SDK Manager to install JetPack 6.0, which means we must use exactly Ubuntu 20.04 or Ubuntu 22.04 (not Linux Mint or any other derivative; it must be the official Ubuntu distro).

If you happen to be running either Ubuntu 20.04 or Ubuntu 22.04, great! If not, I recommend creating a bootable USB drive that you can use to try Ubuntu without installing it.

1. Download ubuntu-22.04.4-desktop-amd64.iso (Jammy) from [this page](https://releases.ubuntu.com/22.04/)
2. Follow these instructions to create a bootable USB drive with the Ubuntu image (e.g., using Balena Etcher)

Install NVIDIA SDK Manager

For installation, see JupyterHub/Computer/2024/nVidia/SDK-Manager/01-introduction.md

Connect Orin Nano

To flash the Orin Nano using the SDK Manager, it must first be put into “recovery mode.” To do that, attach a jumper or jumper wire between the `FC_REC` and `GND` pins (pins 2 and 3) on the underside of the Orin Nano card.

![JetsonOrinNanoDeveloperKit-05.png](../images/JetsonOrinNanoDeveloperKit-05.png)

- Connect a cable between the USB-C port on the dev kit and a USB port on your host computer.
- Plug the power adapter into the dev kit.
- In a terminal on the host computer, enter the following command:
    - `lsusb`
    
> You should see `ID 0955:7523 NVIDIA Corp. APX` as one of the items. This ID and name are important! If you do not see this ID/name, it means the board is not in recovery mode, not connected, or not powered. `The SDK Manager looks for this exact name to find the board in recovery mode.`
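
If you want to script this check, a small sketch with pyusb looks for the APX device by its vendor/product ID; the ID 0955:7523 is taken from the lsusb output above:

import usb.core  # from pyusb; install with `pip install pyusb`

# Look for the Orin Nano in recovery mode (NVIDIA APX device, ID 0955:7523).
dev = usb.core.find(idVendor=0x0955, idProduct=0x7523)
print("Recovery mode detected" if dev is not None else "APX device not found")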

Flash OS With the SDK Manager

- Enter the following command to run the SDK Manager:
    - `sdkmanager`