diff --git a/assets/contributors.csv b/assets/contributors.csv index ca9d8d4a27..478fe00629 100644 --- a/assets/contributors.csv +++ b/assets/contributors.csv @@ -94,4 +94,5 @@ Peter Harris,Arm,,,, Chenying Kuo,Adlink,evshary,evshary,, William Liang,,,wyliang,, Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,, -Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,, \ No newline at end of file +Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,, +Ken Zhang,Insyde,,,, \ No newline at end of file diff --git a/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md b/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md index 39f75a71da..f51f506190 100644 --- a/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md +++ b/content/learning-paths/cross-platform/zenoh-multinode-ros2/_index.md @@ -51,7 +51,7 @@ further_reading: type: documentation - resource: title: Zenoh and ROS 2 Integration Guide - link: https://github.com/eclipse-zenoh/zenoh-plugin-ros2 + link: https://github.com/eclipse-zenoh/zenoh-plugin-ros2dds type: documentation diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md new file mode 100644 index 0000000000..4a89b0ce5b --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/1_introduction_rdv3.md @@ -0,0 +1,82 @@ +--- +title: Introducing the Arm RD‑V3 Platform +weight: 2 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Introduction to the Arm RD‑V3 Platform + +This module introduces the Arm [Neoverse CSS‑V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) architecture and the RD‑V3 [Reference Design Platform Software](https://neoverse-reference-design.docs.arm.com/en/latest/index.html) that implements it.
You'll learn how these components enable scalable, server-class system design, and how to simulate and validate the full firmware stack using Fixed Virtual Platforms (FVP)—well before hardware is available. + +Arm Neoverse is designed to meet the demanding requirements of data center and edge computing, delivering high performance and efficiency. Widely adopted in servers, networking, and edge devices, the Neoverse architecture provides a solid foundation for modern infrastructure. + +Using Arm Fixed Virtual Platforms (FVPs), you can explore system bring-up, boot flow, and firmware customization well before physical silicon becomes available. + +This module also introduces the key components involved, from Neoverse V3 cores to secure subsystem controllers, and shows how these elements work together in a fully virtualized system simulation. + +### Neoverse CSS-V3 Platform Overview + +[Neoverse CSS-V3](https://www.arm.com/products/neoverse-compute-subsystems/css-v3) (Compute Subsystem Version 3) is the core subsystem architecture underpinning the Arm RD-V3 platform. It is specifically optimized for high-performance server and data center applications, providing a highly integrated solution combining processing cores, memory management, and interconnect technology. + +CSS V3 forms the key building block for specialized computing systems. It reduces design and validation costs for the general-purpose compute subsystem, allowing partners to focus on their specialization and acceleration while reducing risk and accelerating time to deployment. + +CSS‑V3 is available in configurable subsystems, supporting up to 64 Neoverse V3 cores per die. It also enables integration of high-bandwidth DDR5/LPDDR5 memory (up to 12 channels), PCIe Gen5 or CXL I/O (up to 64 lanes), and high-speed die-to-die links with support for UCIe 1.1 or custom PHYs. Designs can be scaled down to smaller core-count configurations, such as 32-core SoCs, or expanded through multi-die integration. 
+ +Key features of CSS-V3 include: + +* High-performance CPU clusters: Optimized for server workloads and data throughput. + +* Advanced memory management: Efficient handling of data across multiple processing cores. + +* Interconnect technology: Enabling high-speed, low-latency communication within the subsystem. + +The CSS‑V3 subsystem is fully supported by Arm's Fixed Virtual Platform, enabling pre-silicon testing of these capabilities. + +### RD‑V3 Platform Introduction + +The RD‑V3 platform is a comprehensive reference design built around Arm’s [Neoverse V3](https://www.arm.com/products/silicon-ip-cpu/neoverse/neoverse-v3) CPUs, along with [Cortex-M55](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and [Cortex-M7](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m7) microcontrollers. This platform enables efficient high-performance computing and robust platform management: + + +| Component | Description | +|---------------|-----------------------------------------------------------------------------| +| Neoverse V3 | The primary application processor responsible for executing OS and payloads | +| Cortex M7 | Implements the System Control Processor (SCP) for power, clocks, and init | +| Cortex M55 | Hosts the Runtime Security Engine (RSE), providing secure boot and runtime integrity | + +These subsystems work together in a coordinated architecture, communicating through shared memory regions, control buses, and platform protocols. This enables multi-stage boot processes and robust secure boot implementations. + +Here is the Neoverse Reference Design Platform [Software Stack](https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html#sw-stack) for your reference. 
+ +![img1 alt-text#center](rdinfra_sw_stack.jpg "Neoverse Reference Design Software Stack") + + +### Develop and Validate Without Hardware + +In traditional development workflows, system validation cannot begin until silicon is available—often introducing risk and delay. + +To address this, Arm provides the Fixed Virtual Platform ([FVP](https://developer.arm.com/Tools%20and%20Software/Fixed%20Virtual%20Platforms))—a complete simulation model that emulates full Arm SoC behavior on a host machine. The CSS‑V3 platform is available in multiple FVP configurations, allowing developers to select the model that best fits their specific development and validation needs. + + +Key Capabilities of FVP: +* Multi-core CPU simulation with SMP boot +* Multiple UART interfaces for serial debug and monitoring +* Compatible with TF‑A, UEFI, GRUB, and Linux kernel images +* Provides boot logs, trace outputs, and interrupt event visibility for debugging + +FVP enables developers to verify boot sequences, debug firmware handoffs, and even simulate RSE behaviors—all before first silicon. + +### Comparing different versions of the RD-V3 FVP + +To support different use cases and levels of platform complexity, Arm offers three virtual models based on the CSS‑V3 architecture: RD‑V3, RD-V3-Cfg1, and RD‑V3‑R1. While they share a common foundation, they differ in chip count, system topology, and simulation flexibility.
+ +| Model | Description | Recommended Use Cases | +|-------------|------------------------------------------------------------------|--------------------------------------------------------------------| +| RD‑V3 | Standard single-die platform with full processor and security blocks | Ideal for newcomers, firmware bring-up, and basic validation | +| RD‑V3‑R1 | Dual-die platform simulating chiplet-based architecture | Suitable for multi-node, interconnect, and advanced boot tests | +| RD-V3-Cfg1 | Lightweight model with reduced control complexity for fast startup | Best for CI pipelines, unit testing, and quick validations | + + +This Learning Path begins with RD‑V3 as the primary platform for foundational exercises, guiding you through the process of building the software stack and simulating it on FVP to verify the boot sequence. +In later modules, you’ll transition to RD‑V3‑R1 for more advanced system simulation, multi-node bring-up, and firmware coordination across components like MCP and SCP. diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md new file mode 100644 index 0000000000..fd07f2c169 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/2_rdv3_bootseq.md @@ -0,0 +1,160 @@ +--- +title: Understanding the CSS V3 Boot Flow and Firmware Stack +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Firmware Stack Overview and Boot Sequence Coordination + +To ensure the platform transitions securely and reliably from power-on to operating system launch, this module introduces the roles and interactions of each firmware component within the RD‑V3 boot process. +You’ll learn how each module contributes to system initialization and how control is systematically handed off across the boot chain.
+ + +## How the System Wakes Up + +In the RD‑V3 platform, each subsystem—such as TF‑A, RSE, SCP, LCP, and UEFI—operates independently but cooperates through a well-defined sequence. +Each module is delivered as a separate firmware image, yet they coordinate tightly through a structured boot flow and inter-processor signaling. + +The following diagram from the [Neoverse Reference Design Documentation](https://neoverse-reference-design.docs.arm.com/en/latest/shared/boot_flow/rdv3_single_chip.html?highlight=boot) illustrates the progression of component activation from initial reset to OS handoff: + +![img1 alt-text#center](rdf_single_chip.png "Boot Flow for RD-V3 Single Chip") + +### Stage 1. Security Validation Starts First (RSE) + +The first firmware module triggered after BL2 is the Runtime Security Engine (RSE), executing on Cortex‑M55. RSE authenticates all critical firmware components—including SCP, UEFI, and kernel images—using secure boot mechanisms. It performs cryptographic measurements and builds a Root of Trust before allowing any other processors to start. + +***RSE acts as the platform’s security gatekeeper.*** + +### Stage 2. Early Hardware Initialization (SCP / MCP) + +Once RSE completes verification, the System Control Processor (SCP) and Management Control Processor (MCP) are released from reset. + +These controllers perform essential platform bring-up: +* Initialize clocks, reset lines, and power domains +* Prepare DRAM and interconnect +* Enable the application cores and signal readiness to TF‑A + +***SCP/MCP are the ground crew bringing hardware systems online.*** + +### Stage 3. Secure Execution Setup (TF‑A) + +Once the AP is released, it begins executing Trusted Firmware‑A (TF‑A) at EL3, starting from the reset vector address programmed during boot image layout. +TF‑A configures the secure world, sets up exception levels, and prepares for handoff to UEFI. 
+ +***TF‑A is the ignition controller, launching the next stages securely.*** + +### Stage 4. Firmware and Bootloader (EDK2 / GRUB) + +TF‑A hands off control to UEFI firmware (EDK2), which performs device discovery and launches GRUB. + +Responsibilities: +* Detect and initialize memory, PCIe, and boot devices +* Generate ACPI and platform configuration tables +* Locate and launch GRUB from storage or flash + +***EDK2 and GRUB are like the first- and second-stage rockets launching the payload.*** + +### Stage 5. Linux Kernel Boot + +GRUB loads the Linux kernel and passes full control to the OS. + +Responsibilities: +* Initialize device drivers and kernel subsystems +* Mount the root filesystem +* Start user-space processes (e.g., BusyBox) + +***The Linux kernel is the spacecraft—it takes over and begins its mission.*** + +## Firmware Module Responsibilities in Detail + +Now that we’ve examined the high-level boot stages, let’s break down each firmware module’s role in more detail. + +Each stage of the boot chain is backed by a dedicated component—either a secure bootloader, platform controller, or operating system manager—working together to ensure a reliable system bring-up. + +### RSE: Runtime Security Engine (Cortex‑M55) (Stage 1: Security Validation) + +RSE firmware runs on the Cortex‑M55 and plays a critical role in platform attestation and integrity enforcement. +* Authenticates BL2, SCP, and UEFI firmware images (Secure Boot) +* Records boot-time measurements (e.g., PCRs, ROT) +* Releases boot authorization only after successful validation + +RSE acts as the second layer of the chain of trust, maintaining a monitored and secure environment throughout early boot. + + +### SCP: System Control Processor (Cortex‑M7) (Stage 2: Early Hardware Bring-up) + +SCP firmware runs on the Cortex‑M7 core and performs early hardware initialization and power domain control. 
+* Initializes clocks, reset controllers, and system interconnect +* Manages DRAM setup and enables power for the application processor +* Coordinates boot readiness with RSE via MHU (Message Handling Unit) + +SCP is central to bring-up operations and ensures the AP starts in a stable hardware environment. + +### TF-A: Trusted Firmware-A (BL1 / BL2) (Stage 3: Secure Execution Setup) + +TF‑A is the entry point of the boot chain and is responsible for establishing the system’s root of trust. +* BL1 (Boot Loader Stage 1): Executes from ROM, initializing minimal hardware such as clocks and serial interfaces, and loads BL2. +* BL2 (Boot Loader Stage 2): Validates and loads SCP, RSE, and UEFI images, setting up secure handover to later stages. + +TF‑A ensures all downstream components are authenticated and loaded from trusted sources, laying the foundation for a secure boot. + + +### UEFI / GRUB / Linux Kernel (Stage 4–5: Bootloader and OS Handoff) + +After SCP powers on the application processor, control passes to the main bootloader and operating system: +* UEFI (EDK2): Provides firmware abstraction, hardware discovery, and ACPI table generation +* GRUB: Selects and loads the Linux kernel image +* Linux Kernel: Initializes the OS, drivers, and launches the userland (e.g., BusyBox) + +On the FVP, you can observe this process via UART logs, helping validate each stage’s success. + + +### LCP: Low Power Controller (Optional Component) + +If present in the configuration, LCP handles platform power management at a finer granularity: +* Implements sleep/wake transitions +* Controls per-core power gating +* Manages transitions to ACPI power states (e.g., S3, S5) + +LCP support depends on the FVP model and may be omitted in simplified virtual setups. + + +### Coordination and Handoff Logic + +The RD‑V3 boot sequence follows a multi-stage, dependency-driven handshake model, where each firmware module validates, powers, or authorizes the next. 
+ +| Stage | Dependency Chain | Description | +|-------|----------------------|-------------------------------------------------------------------------| +| 1 | RSE ← BL2 | RSE is loaded and triggered by BL2 to begin security validation | +| 2 | SCP ← BL2 + RSE | SCP initialization requires both BL2 and authorization from RSE | +| 3 | AP ← SCP + RSE | The application processor starts only after SCP sets power and RSE permits | +| 4 | UEFI → GRUB → Linux | UEFI launches GRUB, which loads the kernel and enters the OS | + +This handshake model ensures that no firmware stage proceeds unless its dependencies have securely initialized and authorized the next step. + +{{% notice Note %}} +In the table above, arrows (←) represent **dependency relationships**—the component on the left **depends on** the component(s) on the right to be triggered or authorized. +For example, `RSE ← BL2` means that RSE is loaded and triggered by BL2; +`AP ← SCP + RSE` means the application processor can only start after SCP has initialized the hardware and RSE has granted secure boot authorization. +These arrows do not represent execution order but indicate **which component must be ready for another to begin**. +{{% /notice %}} + +{{% notice Note %}} +Once the firmware stack reaches UEFI, it performs hardware discovery and launches GRUB. +GRUB then selects and boots the Linux kernel. Unlike the previous dependency arrows (←), this is a **direct execution path**—each stage passes control directly to the next. +{{% /notice %}} + +This layered approach supports modular testing, independent debugging, and early-stage simulation—all essential for secure and robust platform bring-up. 
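To make the handshake model concrete, the dependency table can be expressed as data and resolved into a valid bring-up order. The sketch below is illustrative bash only; the component names mirror the table (BL2 is seeded as already loaded, via BL1 from ROM), and none of this is real platform code:

```bash
#!/usr/bin/env bash
# Illustrative only: model the boot dependency table as data and derive
# a bring-up order by repeatedly starting any component whose
# dependencies are already up.
declare -A deps=(
  [RSE]="BL2"
  [SCP]="BL2 RSE"
  [AP]="SCP RSE"
  [UEFI]="AP"
  [GRUB]="UEFI"
  [Linux]="GRUB"
)
booted=(BL2)   # BL2 is loaded from ROM via BL1 before this model starts

is_booted() {
  local c
  for c in "${booted[@]}"; do [ "$c" = "$1" ] && return 0; done
  return 1
}

progress=1
while [ "$progress" -eq 1 ]; do
  progress=0
  for comp in RSE SCP AP UEFI GRUB Linux; do
    is_booted "$comp" && continue
    ok=1
    for d in ${deps[$comp]}; do is_booted "$d" || ok=0; done
    if [ "$ok" -eq 1 ]; then
      echo "boot: $comp (deps satisfied: ${deps[$comp]})"
      booted+=("$comp")
      progress=1
    fi
  done
done
```

Running this prints the components in the same order the boot flow diagram shows: RSE first, then SCP, the AP, and finally the UEFI → GRUB → Linux execution path.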
+ + +In this module, you have: + +* Explored the full boot sequence of the RD‑V3 platform, from power-on to Linux login +* Understood the responsibilities of key firmware components such as TF‑A, RSE, SCP, LCP, and UEFI +* Learned how secure boot is enforced and how each module hands off control to the next +* Interpreted boot dependencies using FVP simulation and UART logs + +With the full boot flow and firmware responsibilities now clear, you're ready to apply these insights. +In the next module, you'll fetch the RD‑V3 codebase, configure your workspace, and begin building your own firmware stack for simulation. diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md new file mode 100644 index 0000000000..ea55e55327 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build.md @@ -0,0 +1,245 @@ +--- +title: Build the RD‑V3 Reference Platform +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- +## Building the RD‑V3 Reference Platform + +In this module, you’ll set up your development environment on an Arm server and build the firmware stack required to simulate the RD‑V3 platform. + +### Step 1: Prepare the Development Environment + +First, ensure your system is up-to-date and install the required tools and libraries: + +```bash +sudo apt update +sudo apt install curl git +``` + +Configure Git with your name and email: + +```bash +git config --global user.name "" +git config --global user.email "" +``` + +### Step 2: Fetch the Source Code + +The RD‑V3 platform firmware stack consists of many independent components—such as TF‑A, SCP, RSE, UEFI, Linux kernel, and Buildroot. Each component is maintained in a separate Git repository. To manage and synchronize these repositories efficiently, we use the `repo` tool.
It simplifies syncing the full platform software stack from multiple upstreams. + +If repo is not installed, you can download it manually: + +```bash +mkdir -p ~/.bin +PATH="${HOME}/.bin:${PATH}" +curl https://storage.googleapis.com/git-repo-downloads/repo > ~/.bin/repo +chmod a+rx ~/.bin/repo +``` + +Once ready, create a workspace and initialize the repo manifest. + +We use a pinned manifest to ensure reproducibility across different environments. This locks all component repositories to known-good commits that are validated and aligned with a specific FVP version. + +For this session, we will use `pinned-rdv3.xml` and `RD-INFRA-2025.07.03`. + +```bash +cd ~ +mkdir rdv3 +cd rdv3 +# Initialize the source tree +repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdv3.xml -b refs/tags/RD-INFRA-2025.07.03 --depth=1 + +# Sync the full source code +repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle +``` + +Once synced, you will see a message like: +``` +Syncing: 95% (19/20), done in 2m36.453s +Syncing: 100% (83/83) 2:52 | 1 job | 0:01 platsw/edk2-platforms @ uefi/edk2/edk2-platformsrepo sync has finished successfully. +``` + +{{% notice Note %}} +As of the time of writing, the latest official release tag is RD-INFRA-2025.07.03. +Please note that newer tags may be available as future platform updates are published. +{{% /notice %}} + +This manifest will fetch all required sources including: +* TF‑A +* SCP / RSE firmware +* EDK2 (UEFI) +* Linux kernel +* Buildroot and platform scripts + + +### Step 3: Build the Docker Image + +There are two supported methods for building the reference firmware stack: **host-based** and **container-based**. + +- The **host-based** build installs all required dependencies directly on your local system and executes the build natively.
+- The **container-based** build runs the compilation process inside a pre-configured Docker image, ensuring consistent results and isolation from host environment issues. + +In this Learning Path, we will use the **container-based** approach. + +The container image is designed to use the source directory from the host (`~/rdv3`) and perform the build process inside the container. Make sure Docker is installed on your Linux machine. You can follow this [installation guide](https://learn.arm.com/install-guides/docker/). + + +After Docker is installed, you’re ready to build the container image. + +The `container.sh` script is a wrapper that builds the container using default settings for the Dockerfile and image name. You can customize these by using the `-f` (Dockerfile) and `-i` (image name) options, or by editing the script directly. + +To view all available options: + +```bash +cd ~/rdv3/container-scripts +./container.sh -h +``` + +To build the container image: + +```bash +./container.sh build +``` + +The build procedure may take a few minutes, depending on network bandwidth and CPU performance. For example, on an AWS m7g.4xlarge instance, it took about 250 seconds. + +``` +Building docker image: rdinfra-builder ...
+[+] Building 239.7s (19/19) FINISHED docker:default + => [internal] load build definition from rd-infra-arm64 0.0s + => => transferring dockerfile: 4.50kB 0.0s + => [internal] load metadata for docker.io/library/ubuntu:jammy-20240911.1 1.0s + => [internal] load .dockerignore 0.0s + => => transferring context: 2B 0.0s + => [internal] load build context 0.0s + => => transferring context: 10.80kB 0.0s + => [ 1/14] FROM docker.io/library/ubuntu:jammy-20240911.1@sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97 1.7s + => => resolve docker.io/library/ubuntu:jammy-20240911.1@sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97 0.0s + => => sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97 6.69kB / 6.69kB 0.0s + => => sha256:7c75ab2b0567edbb9d4834a2c51e462ebd709740d1f2c40bcd23c56e974fe2a8 424B / 424B 0.0s + => => sha256:981912c48e9a89e903c89b228be977e23eeba83d42e2c8e0593a781a2b251cba 2.31kB / 2.31kB 0.0s + => => sha256:a186900671ab62e1dea364788f4e84c156e1825939914cfb5a6770be2b58b4da 27.36MB / 27.36MB 1.1s + => => extracting sha256:a186900671ab62e1dea364788f4e84c156e1825939914cfb5a6770be2b58b4da 0.5s + => [ 2/14] RUN apt-get update -q=2 && apt-get install -q=2 --yes --no-install-recommends ca-certificates curl 12.5s + => [ 3/14] RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /etc/apt/trust 0.5s + => [ 4/14] RUN apt-get update -q=2 && apt-get install -q=2 --yes --no-install-recommends acpica-tools autoconf 40.0s + => [ 5/14] RUN pip3 install --no-cache-dir poetry 7.4s + => [ 6/14] RUN curl https://storage.googleapis.com/git-repo-downloads/repo > /usr/bin/repo && chmod a+x /usr/bin/repo 0.3s + => [ 7/14] COPY common/install-openssl.sh /tmp/common/ 0.0s + => [ 8/14] RUN bash /tmp/common/install-openssl.sh /opt 32.7s + => [ 9/14] COPY common/install-gcc.sh /tmp/common/ 0.0s + => [10/14] COPY common/install-clang.sh /tmp/common/ 0.0s + => [11/14] RUN bash 
/tmp/common/install-gcc.sh /opt 13.2.rel1 "arm-none-eabi" 19.8s + => [12/14] RUN bash /tmp/common/install-gcc.sh /opt 13.2.rel1 "aarch64-none-elf" 13.4s + => [13/14] RUN bash /tmp/common/install-clang.sh /opt 15.0.6 101.2s + => [14/14] COPY common/entry.sh /root/entry.sh 0.0s + => exporting to image 9.2s + => => exporting layers 9.2s + => => writing image sha256:3a395c5a0b60248881f9ad06048b97ae3ed4d937ffb0c288ea90097b2319f2b8 0.0s + => => naming to docker.io/library/rdinfra-builder 0.0s +``` + +After the Docker image build completes successfully, you can run `docker images` to find the built image, `rdinfra-builder`. + +``` +REPOSITORY TAG IMAGE ID CREATED SIZE +rdinfra-builder latest 3a395c5a0b60 4 minutes ago 8.12GB +``` + + +### Step 4: Enter the Container and Build Firmware + +You can enter the Docker container interactively to take a quick look at the image. + +```bash +cd ~/rdv3/container-scripts +./container.sh -v ~/rdv3 run +``` + +This mounts your source directory (~/rdv3) into the container and opens a shell at that location. +Inside the container, you’ll see a prompt like: + +``` +Running docker image: rdinfra-builder ... +To run a command as administrator (user "root"), use "sudo ". +See "man sudo_root" for details. + +your-username:hostname:/home/your-username/rdv3$ +``` + +Since building the full firmware stack can involve many components, the more efficient method is to use a single Docker command that runs both build and package steps automatically. + +- **build**: This phase compiles all individual components of the firmware stack, including TF‑A, SCP, RSE, UEFI, Linux kernel, and rootfs. + +- **package**: This phase consolidates the build outputs into simulation-ready formats and organizes boot artifacts for FVP.
+ +To execute the full build and packaging flow: + +```bash +cd ~/rdv3 +docker run --rm \ + -v "$PWD:$PWD" \ + -w "$PWD" \ + --mount type=volume,dst="$HOME" \ + --env ARCADE_USER="$(id -un)" \ + --env ARCADE_UID="$(id -u)" \ + --env ARCADE_GID="$(id -g)" \ + -t -i rdinfra-builder \ + bash -c "./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 build && \ + ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 package" +``` + +The build artifacts will be placed under `~/rdv3/output/rdv3/rdv3/`, where the last `rdv3` corresponds to the selected platform name. + +After a successful build, the following output artifacts will be generated under `~/rdv3/output/rdv3/rdv3/` + +``` +ls ~/rdv3/output/rdv3/rdv3 -al + +total 7092 +drwxr-xr-x 2 ubuntu ubuntu 4096 Aug 12 13:15 . +drwxr-xr-x 4 ubuntu ubuntu 4096 Aug 12 13:15 .. +lrwxrwxrwx 1 ubuntu ubuntu 25 Aug 12 13:15 Image -> ../components/linux/Image +lrwxrwxrwx 1 ubuntu ubuntu 35 Aug 12 13:15 Image.defconfig -> ../components/linux/Image.defconfig +-rw-r--r-- 1 ubuntu ubuntu 7250838 Aug 12 13:15 fip-uefi.bin +lrwxrwxrwx 1 ubuntu ubuntu 32 Aug 12 13:15 lcp_ramfw.bin -> ../components/rdv3/lcp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 26 Aug 12 13:15 lkvm -> ../components/kvmtool/lkvm +lrwxrwxrwx 1 ubuntu ubuntu 32 Aug 12 13:15 mcp_ramfw.bin -> ../components/rdv3/mcp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 26 Aug 12 13:15 rmm.img -> ../components/rdv3/rmm.img +lrwxrwxrwx 1 ubuntu ubuntu 32 Aug 12 13:15 scp_ramfw.bin -> ../components/rdv3/scp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 29 Aug 12 13:15 tf-bl1.bin -> ../components/rdv3/tf-bl1.bin +lrwxrwxrwx 1 ubuntu ubuntu 29 Aug 12 13:15 tf-bl2.bin -> ../components/rdv3/tf-bl2.bin +lrwxrwxrwx 1 ubuntu ubuntu 30 Aug 12 13:15 tf-bl31.bin -> ../components/rdv3/tf-bl31.bin +lrwxrwxrwx 1 ubuntu ubuntu 53 Aug 12 13:15 tf_m_flash.bin -> ../components/arm/rse/neoverse_rd/rdv3/tf_m_flash.bin +lrwxrwxrwx 1 ubuntu ubuntu 46 Aug 12 13:15 tf_m_rom.bin -> 
../components/arm/rse/neoverse_rd/rdv3/rom.bin +lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 12 13:15 tf_m_vm0_0.bin -> ../components/arm/rse/neoverse_rd/rdv3/vm0_0.bin +lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 12 13:15 tf_m_vm1_0.bin -> ../components/arm/rse/neoverse_rd/rdv3/vm1_0.bin +lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 12 13:15 uefi.bin -> ../components/css-common/uefi.bin +``` + +| Component | Output Files | Description | +|----------------------|----------------------------------------------|-----------------------------| +| TF‑A | `bl1.bin`, `bl2.bin`, `bl31.bin`, `fip.bin` | Entry-level boot firmware | +| SCP and RSE firmware | `scp.bin`, `mcp_rom.bin`, etc. | Platform power/control | +| UEFI | `uefi.bin`, `flash0.img` | Boot device enumeration | +| Linux kernel | `Image` | OS payload | +| Initrd | `rootfs.cpio.gz` | Minimal filesystem | + + +### Optional: Run the Build Manually from Inside the Container + +You can also perform the build manually after entering the container: + +In the container shell: +```bash +cd ~/rdv3 +./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 build +./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3 package +``` + +This manual workflow is useful for debugging, partial builds, or making custom modifications to individual components. + + +You’ve now successfully prepared and built the full RD‑V3 firmware stack. In the next module, you’ll install the matching FVP model and simulate the full boot sequence—bringing the firmware to life in a virtual platform. 
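Before moving on, you can sanity-check the output directory from a script. The helper below is a sketch: the file names mirror the listing above and the path assumes the default `rdv3` platform, so adjust both if your build differs.

```bash
#!/usr/bin/env bash
# Sketch: confirm the expected boot artifacts exist after build + package.
check_artifacts() {
  local dir="$1" missing=0 f
  for f in tf-bl1.bin tf-bl2.bin tf-bl31.bin fip-uefi.bin \
           scp_ramfw.bin mcp_ramfw.bin lcp_ramfw.bin uefi.bin Image; do
    if [ ! -e "$dir/$f" ]; then
      echo "MISSING: $f"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "All expected artifacts present."
}

# Default output path for the rdv3 platform (see listing above).
check_artifacts "$HOME/rdv3/output/rdv3/rdv3" || true
```

Any `MISSING:` line means the corresponding component failed to build or package, which is worth resolving before starting the FVP.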
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md new file mode 100644 index 0000000000..5d0553d039 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp.md @@ -0,0 +1,170 @@ +--- +title: Simulate RD‑V3 Boot Flow on Arm FVP +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Simulating RD‑V3 with Arm FVP + +In the previous module, you built the complete CSS‑V3 firmware stack. +Now, you’ll use Arm Fixed Virtual Platform (FVP) to simulate the system—allowing you to verify the boot sequence without any physical silicon. +This simulation brings up the full stack from BL1 to Linux shell using Buildroot. + +### Step 1: Download and Install the FVP Model + +Before downloading the RD‑V3 FVP, it’s important to understand that each reference design release tag corresponds to a specific version of the FVP model. + +For example, the **RD‑INFRA‑2025.07.03** release tag is designed to work with **FVP version 11.29.35**. + +You can refer to the [RD-V3 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) for a full list of release tags, corresponding FVP versions, and their associated release notes, which summarize changes and validated test cases. + +Download the matching FVP binary for your selected release tag using the link provided in this course: + +```bash +mkdir -p ~/fvp +cd ~/fvp +wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3/FVP_RD_V3_11.29_35_Linux64_armv8l.tgz + +tar -xvf FVP_RD_V3_11.29_35_Linux64_armv8l.tgz +./FVP_RD_V3.sh +``` + +The FVP installation may prompt you with a few questions—choosing the default options is sufficient for this learning path. By default, the FVP will be installed in `/home/ubuntu/FVP_RD_V3`. 
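To confirm the model installed correctly, you can ask the launcher for its version string and check that it matches the expected 11.29 release. This is a sketch: the path assumes the default install location mentioned above, and `--version` is assumed to be supported by the FVP binary.

```bash
# Assumes the default install path chosen during ./FVP_RD_V3.sh above.
MODEL="$HOME/FVP_RD_V3/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3"
if [ -x "$MODEL" ]; then
  "$MODEL" --version
else
  echo "FVP model not found at $MODEL"
fi
```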
### Step 2: Remote Desktop Setup + +The RD‑V3 FVP model launches multiple UART consoles—each mapped to a separate terminal window for different subsystems (e.g., Neoverse V3, Cortex‑M55, Cortex‑M7, panel). + +If you're accessing the platform over SSH, these console windows won't open properly. +To interact with all UART consoles, we recommend installing a Remote Desktop environment using XRDP. + +On an AWS Ubuntu 22.04 instance, install the required packages: + + +```bash +sudo apt update +sudo apt install -y ubuntu-desktop xrdp xfce4 xfce4-goodies pv xterm sshpass socat retry +sudo systemctl enable --now xrdp +``` + +To allow remote desktop connections, you need to open port 3389 (RDP) in your EC2 security group: +- Go to the EC2 Dashboard → Security Groups +- Select the security group associated with your instance +- Under the Inbound rules tab, click Edit inbound rules +- Add the following rule: + - Type: RDP + - Port: 3389 + - Source: your local machine IP + +For better security, limit the source to your current public IP instead of 0.0.0.0/0. + + +***Switch to Xorg (required on Ubuntu 22.04):*** + +Wayland is the default display server on Ubuntu 22.04, but it is not compatible with XRDP. +To enable XRDP remote sessions, you need to switch to Xorg by modifying the GDM configuration. + +Open `/etc/gdm3/custom.conf` in a text editor. +Find the line: + +``` +#WaylandEnable=false +``` + +Uncomment it by removing the # so it becomes: + +``` +WaylandEnable=false +``` + +Then restart the GDM display manager for the change to take effect: +```bash +sudo systemctl restart gdm3 +``` + +After the restart, XRDP will use Xorg and you should be able to connect to the Arm server via Remote Desktop.
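Before attempting the connection from your local machine, it helps to confirm that xrdp is actually listening on the instance itself. This is a minimal bash sketch using the shell's `/dev/tcp` feature against the default RDP port, 3389:

```bash
#!/usr/bin/env bash
# Probe localhost:3389 to see whether xrdp is accepting connections.
if (exec 3<>/dev/tcp/127.0.0.1/3389) 2>/dev/null; then
  echo "RDP port 3389 is open"
else
  echo "RDP port 3389 is closed - check 'systemctl status xrdp'"
fi
```

If the port reports closed, fix the xrdp service locally before debugging security-group rules or client settings.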
+ +### Step 3: Launch the Simulation + +Once connected via Remote Desktop, open a terminal and launch the RD‑V3 FVP simulation: + +```bash +cd ~/rdv3/model-scripts/rdinfra +export MODEL=/home/ubuntu/FVP_RD_V3/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3 +./boot-buildroot.sh -p rdv3 & +``` + +The command will launch the simulation and open multiple xterm windows, each corresponding to a different CPU. +You can start by locating the ***terminal_ns_uart0*** window — in it, you should see the GRUB menu. + +From there, select RD-V3 Buildroot in the GRUB menu and press Enter to proceed. +![img3 alt-text#center](rdv3_sim_run.jpg "GRUB Menu") + +Booting Buildroot will take a little while — you’ll see typical Linux boot messages scrolling through. +Eventually, the system will stop at the `Welcome to Buildroot` message on the ***terminal_ns_uart0*** window. + +At the `buildroot login:` prompt, type `root` and press Enter to log in. +![img4 alt-text#center](rdv3_sim_login.jpg "Buildroot login") + +Congratulations — you’ve successfully simulated the boot process of the RD-V3 software you compiled earlier, all on FVP! + +### Step 4: Understand the UART Outputs + +When you launch the RD‑V3 FVP model, it opens multiple terminal windows—each connected to a different UART channel. +These UARTs provide console logs from various firmware components across the system. 
+ +Below is the UART-to-terminal mapping based on the default FVP configuration: + +| Terminal Window Title | UART | Output Role | Connected Processor | +|----------------------------|------|------------------------------------|-----------------------| +| `FVP terminal_ns_uart0` | 0 | Linux Kernel Console (BusyBox) | Neoverse‑V3 (AP) | +| `FVP terminal_ns_uart1` | 1 | TF‑A / UEFI Logs | Neoverse‑V3 (AP) | +| `FVP terminal_uart_scp` | 2 | SCP Firmware Logs (power, clocks) | Cortex‑M7 (SCP) | +| `FVP terminal_rse_uart` | 3 | RSE Secure Boot Logs | Cortex‑M55 (RSE) | +| `FVP terminal_uart_mcp` | 4 | MCP Logs (management, telemetry) | Cortex‑M7 (MCP) | +| `FVP terminal_uart_lcp` | 5 | LCP Logs (per-core power control) | Cortex‑M55 (LCP) | +| `FVP terminal_sec_uart` | 6 | Secure World / TF‑M Logs | Cortex‑M55 | + + +Logs are also captured under `~/rdv3/model-scripts/rdinfra/platforms/rdv3/rdv3`, each UART redirected to its own log file. +You can also explore refinfra-*.txt log files to validate subsystem states. + +For example, if you’d like to verify that each CPU core has its GICv3 redistributor and LPI table correctly initialized, you can refer to the relevant messages in refinfra-24812-uart-0-nsec_.txt. + + +``` +[ 0.000056] Remapping and enabling EFI services. +[ 0.000078] smp: Bringing up secondary CPUs ... 
+[ 0.000095] Detected PIPT I-cache on CPU1 +[ 0.000096] GICv3: CPU1: found redistributor 10000 region 0:0x0000000030200000 +[ 0.000096] GICv3: CPU1: using allocated LPI pending table @0x0000008080200000 +[ 0.000109] CPU1: Booted secondary processor 0x0000010000 [0x410fd840] +[ 0.000125] Detected PIPT I-cache on CPU2 +[ 0.000126] GICv3: CPU2: found redistributor 20000 region 0:0x0000000030240000 +[ 0.000126] GICv3: CPU2: using allocated LPI pending table @0x0000008080210000 +[ 0.000139] CPU2: Booted secondary processor 0x0000020000 [0x410fd840] +[ 0.000155] Detected PIPT I-cache on CPU3 +[ 0.000156] GICv3: CPU3: found redistributor 30000 region 0:0x0000000030280000 +[ 0.000156] GICv3: CPU3: using allocated LPI pending table @0x0000008080220000 +[ 0.000169] CPU3: Booted secondary processor 0x0000030000 [0x410fd840] +[ 0.000185] Detected PIPT I-cache on CPU4 +[ 0.000186] GICv3: CPU4: found redistributor 40000 region 0:0x00000000302c0000 +[ 0.000186] GICv3: CPU4: using allocated LPI pending table @0x0000008080230000 +[ 0.000199] CPU4: Booted secondary processor 0x0000040000 [0x410fd840] +[ 0.000215] Detected PIPT I-cache on CPU5 +[ 0.000216] GICv3: CPU5: found redistributor 50000 region 0:0x0000000030300000 +[ 0.000216] GICv3: CPU5: using allocated LPI pending table @0x0000008080240000 +[ 0.000229] CPU5: Booted secondary processor 0x0000050000 [0x410fd840] +[ 0.000245] Detected PIPT I-cache on CPU6 +[ 0.000246] GICv3: CPU6: found redistributor 60000 region 0:0x0000000030340000 +[ 0.000246] GICv3: CPU6: using allocated LPI pending table @0x0000008080250000 +[ 0.000259] CPU6: Booted secondary processor 0x0000060000 [0x410fd840] +... + +``` + +You can try to identify the SCP, RSE, and kernel boot logs across their respective terminals. + +Successfully tracing these logs confirms your simulation environment and firmware stack are functioning correctly—all without physical silicon. 
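Rather than scanning the log by eye, you can script the check. The snippet below is an illustrative helper; it runs the counting logic against a small sample built from two lines of the excerpt above. To check a real run, point `log` at your actual `refinfra-*-uart-0-nsec_.txt` file instead of the generated sample:

```bash
# Count CPU bring-up events in a UART log. The sample below reuses lines
# from the boot log excerpt; substitute your real log file path.
log="$(mktemp)"
cat > "$log" <<'EOF'
[    0.000096] GICv3: CPU1: found redistributor 10000 region 0:0x0000000030200000
[    0.000109] CPU1: Booted secondary processor 0x0000010000 [0x410fd840]
[    0.000126] GICv3: CPU2: found redistributor 20000 region 0:0x0000000030240000
[    0.000139] CPU2: Booted secondary processor 0x0000020000 [0x410fd840]
EOF
echo "secondary CPUs booted: $(grep -c 'Booted secondary processor' "$log")"
echo "redistributors found: $(grep -c 'found redistributor' "$log")"
rm -f "$log"
```

If the counts match the core count of your FVP configuration (excluding the boot CPU for the secondary count), bring-up and GIC redistributor initialization completed for every core.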
diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md new file mode 100644 index 0000000000..2acfaa811d --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/5_rdv3_modify.md @@ -0,0 +1,135 @@ +--- +title: Simulate Dual Chip RD-V3-R1 Platform +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Build and Run RDV3-R1 Dual Chip Platform + +The RD‑V3‑R1 platform is a dual-chip simulation environment built to model multi-die Arm server SoCs. It expands on the single-die RD‑V3 design by introducing a second application processor and a Management Control Processor (MCP). + +***Key Use Cases*** + +- Simulate chiplet-style boot flow with two APs +- Observe coordination between SCP and MCP across dies +- Test secure boot in a distributed firmware environment + +***Differences from RD‑V3*** +- Dual AP boot flow instead of single AP +- Adds MCP (Cortex‑M7) to support cross-die management +- More complex power/reset coordination + +### Step 1: Clone the RD‑V3‑R1 Firmware Stack + +Initialize and sync the codebase for RD‑V3‑R1: + +```bash +cd ~ +mkdir rdv3r1 +cd rdv3r1 +# Initialize the source tree +repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdv3r1.xml -b refs/tags/RD-INFRA-2025.07.03 --depth=1 + +# Sync the full source code +repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle +``` + +### Step 2: Install RD-V3-R1 FVP + +Refer to the [RD-V3-R1 Release Tags](https://neoverse-reference-design.docs.arm.com/en/latest/platforms/rdv3.html#release-tags) to determine which FVP model version matches your selected release tag. +Then download and install the corresponding FVP binary. 
+ +```bash +mkdir -p ~/fvp +cd ~/fvp +wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3-r1/FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz +tar -xvf FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz +./FVP_RD_V3_R1.sh +``` + +### Step 3: Build the Firmware + +Since you have already created the Docker image for firmware building in a previous module, there is no need to rebuild it for RD‑V3‑R1. + +Run the full firmware build and packaging process: + +```bash +cd ~/rdv3r1 +docker run --rm \ + -v "$PWD:$PWD" \ + -w "$PWD" \ + --mount type=volume,dst="$HOME" \ + --env ARCADE_USER="$(id -un)" \ + --env ARCADE_UID="$(id -u)" \ + --env ARCADE_GID="$(id -g)" \ + -t -i rdinfra-builder \ + bash -c "./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 build && \ + ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package" +``` + +### Step 4: Launch the Simulation + +Once connected via Remote Desktop, open a terminal and launch the RD‑V3‑R1 FVP simulation: + +```bash +cd ~/rdv3r1/model-scripts/rdinfra +export MODEL=/home/ubuntu/FVP_RD_V3_R1/models/Linux64_armv8l_GCC-9.3/FVP_RD_V3_R1 +./boot-buildroot.sh -p rdv3r1 & +``` + +This command starts the dual-chip simulation. +You’ll observe additional UART consoles for components like the MCP, and you can verify that both application processors (AP0 and AP1) are brought up in a coordinated manner. + +![img5 alt-text#center](rdv3r1_sim_login.jpg "RDV3 R1 buildroot login") + +As in the previous section, the terminal logs are stored in `~/rdv3r1/model-scripts/rdinfra/platforms/rdv3r1/rdv3r1`. + +### Step 5: Customize Firmware and Confirm MCP Execution + +To wrap up this learning path, let’s verify that your firmware changes can be compiled and simulated successfully within the RD‑V3‑R1 environment. + +Edit the MCP source file `~/rdv3r1/host/scp/framework/src/fwk_module.c`. + +Locate the function `fwk_module_start()`.
Add the following logging line just before return FWK_SUCCESS;: + +```c +int fwk_module_start(void) +{ + ... + FWK_LOG_CRIT("[FWK] Module initialization complete!"); + + // Custom log message for validation + FWK_LOG_CRIT("[FWK] Customer code here"); + return FWK_SUCCESS; +} +``` + +Rebuild and repackage the firmware: + +```bash +cd ~/rdv3r1 +docker run --rm \ + -v "$PWD:$PWD" \ + -w "$PWD" \ + --mount type=volume,dst="$HOME" \ + --env ARCADE_USER="$(id -un)" \ + --env ARCADE_UID="$(id -u)" \ + --env ARCADE_GID="$(id -g)" \ + -t -i rdinfra-builder \ + bash -c "./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 build && \ + ./build-scripts/rdinfra/build-test-buildroot.sh -p rdv3r1 package" +``` + +Launch the FVP simulation again and observe the UART output for MCP. + +![img6 alt-text#center](rdv3r1_sim_codechange.jpg "RDV3 R1 modify firmware") + + +If the change was successful, your custom log line will appear in the MCP console—confirming that your code was integrated and executed as part of the firmware boot process. + +You’ve now successfully simulated a dual-chip Arm server platform using RD‑V3‑R1 on FVP—from cloning firmware sources to modifying secure control logic. + +This foundation sets the stage for deeper exploration, such as customizing platform firmware or integrating BMC workflows in future development cycles. diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md new file mode 100644 index 0000000000..df177c3f47 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_index.md @@ -0,0 +1,55 @@ +--- +title: CSS-V3 Pre-Silicon Software Development Using Neoverse Servers + +minutes_to_complete: 90 + +who_is_this_for: This Learning Path is for firmware developers, system architects, and silicon validation engineers building Arm Neoverse CSS platforms. 
It focuses on pre-silicon development for the CSS‑V3 reference design. You’ll learn how to build, customize, and validate firmware on the RD‑V3 platform using Fixed Virtual Platforms (FVPs) before hardware is available. + +learning_objectives: + - Understand the architecture of Arm Neoverse CSS‑V3 as the foundation for scalable server-class platforms + - Build and boot the RD‑V3 firmware stack using TF‑A, SCP, RSE, and UEFI + - Simulate multi-core, multi-chip systems with Arm FVP models and interpret boot logs + - Modify platform control firmware to test custom logic and validate it via pre-silicon simulation + +prerequisites: + - Access to an Arm Neoverse-based Linux machine (cloud or local), with at least 80 GB of storage + - Familiarity with Linux command-line tools and basic scripting + - Understanding of firmware boot stages and SoC-level architecture + - Docker installed, or GitHub Codespaces-compatible development environment + +author: + - Odin Shen + +### Tags +skilllevels: Advanced +subjects: Containers and Virtualization +armips: + - Neoverse +tools_software_languages: + - C + - Docker + - FVP +operatingsystems: + - Linux + +further_reading: + - resource: + title: Neoverse Compute Subsystems V3 + link: https://www.arm.com/products/neoverse-compute-subsystems/css-v3 + type: website + - resource: + title: Reference Design software stack architecture + link: https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html + type: website + - resource: + title: GitLab infra-refdesign-manifests + link: https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests + type: gitlab + + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be
surfaced when looking for related content. Only set for _index.md of learning path content. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdf_single_chip.png b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdf_single_chip.png new file mode 100644 index 0000000000..85937be535 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdf_single_chip.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdinfra_sw_stack.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdinfra_sw_stack.jpg new file mode 100644 index 0000000000..780c21f291 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdinfra_sw_stack.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_login.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_login.jpg new file mode 100644 index 0000000000..0bfc8474fc Binary files /dev/null and 
b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_login.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_run.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_run.jpg new file mode 100644 index 0000000000..0178bb8228 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3_sim_run.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_codechange.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_codechange.jpg new file mode 100644 index 0000000000..3278620e94 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_codechange.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_login.jpg b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_login.jpg new file mode 100644 index 0000000000..610e5ac73d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/rdv3r1_sim_login.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/1_introduction_openbmc.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/1_introduction_openbmc.md new file mode 100644 index 0000000000..52d1265576 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/1_introduction_openbmc.md @@ -0,0 +1,54 @@ +--- +title: Introduction to OpenBMC and UEFI +weight: 2 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +OpenBMC and UEFI are foundational components in Arm-based server platforms. 
In this module, you’ll learn what they are, how they interact, and why simulating their integration on a pre-silicon reference design like RD-V3 is valuable for early-stage development and testing. + +## Introduction to OpenBMC + +[OpenBMC](https://www.openbmc.org/) is a collaborative open-source firmware stack for Baseboard Management Controllers (BMC), hosted by the Linux Foundation. +BMCs are embedded microcontrollers on server motherboards that enable both in-band and out-of-band system management. +Out-of-band access allows remote management even when the host system is powered off or unresponsive, while in-band interfaces support communication with the host operating system during normal operation. + +The OpenBMC stack is built using the Yocto Project and includes a Linux kernel, system services, D-Bus interfaces, Redfish/IPMI APIs, and support for hardware monitoring, fan control, power sequencing, and more. + +It is widely adopted by hyperscalers and enterprise vendors to manage servers, storage systems, and network appliances. +OpenBMC is particularly well-suited to Arm-based server platforms like **Neoverse RD-V3**, where it provides early-stage platform control and boot orchestration even before silicon is available. + +**Key features of OpenBMC include:** +- **Remote management:** power control, Serial over LAN (SOL), and virtual media +- **Hardware health monitoring:** sensors, fans, temperature, voltage, and power rails +- **Firmware update mechanisms:** support for signed image updates and secure boot +- **Industry-standard APIs:** IPMI, Redfish, PLDM, and MCTP +- **Modular and extensible design:** device tree-based configuration and layered architecture + +OpenBMC enables faster development cycles, open innovation, and reduced vendor lock-in across data centers, cloud platforms, and edge environments. 
+ +**In this Learning Path**, you’ll simulate how OpenBMC manages the early-stage boot process, power sequencing, and remote access for a virtual Neoverse RD-V3 server. You will interact with the BMC console, inspect boot logs, and verify serial-over-LAN and UART communication with the host. + +## Introduction to UEFI + +The [Unified Extensible Firmware Interface (UEFI)](https://uefi.org/) is the modern replacement for legacy BIOS, responsible for initializing hardware and loading the operating system. +UEFI provides a robust, modular, and extensible interface between platform firmware and OS loaders. It supports: + +- A modular and extensible architecture +- Faster boot times and reliable system initialization +- Large storage device support using GPT (GUID Partition Table) +- Secure Boot for verifying boot integrity +- Pre-boot networking and diagnostics via UEFI Shell or applications + +UEFI executes after the platform powers on and before the OS kernel takes over. +It discovers and initializes system hardware, configures memory and I/O, and launches the bootloader. +It is governed by the UEFI Forum and is now the standard firmware interface across server-class, desktop, and embedded systems. + +In platforms that integrate OpenBMC, the BMC operates independently from the host CPU and manages platform power, telemetry, and recovery. +During system boot, UEFI and OpenBMC coordinate via mechanisms such as IPMI over KCS, PLDM over MCTP, or shared memory buffers. + +These interactions are especially critical in Arm server-class platforms—like Neoverse RD-V3—for secure boot, remote diagnostics, and system recovery during pre-silicon or bring-up phases. + +**In this Learning Path**, you will build and run UEFI firmware on the RD-V3 FVP host platform, observe boot log output, and simulate how UEFI coordinates with OpenBMC via shared interfaces to complete system initialization. 
+ diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/2_openbmc_setup.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/2_openbmc_setup.md new file mode 100644 index 0000000000..4ecd1878b9 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/2_openbmc_setup.md @@ -0,0 +1,322 @@ +--- +title: Set Up OpenBMC Development Environment +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Set Up Development Environment + +This module guides you through installing dependencies, configuring repositories, and preparing your workspace to simulate the OpenBMC and UEFI firmware stack on the Arm Neoverse RD-V3 platform using Fixed Virtual Platforms (FVPs). + +Before getting started, it’s strongly recommended to review the previous Learning Path: [CSS-V3 Pre-Silicon Software Development Using Neoverse Servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack). +That guide walks you through how to use the CSSv3 reference design on FVP to perform early-stage development and validation. + +Ensure your system meets the following requirements: +- Access to an Arm Neoverse-based Linux machine (either cloud-based or local) is required, with at least 80 GB of free disk space, 48 GB of RAM, and running Ubuntu 22.04 LTS. +- Working knowledge of Docker, Git, and Linux terminal tools +- Basic understanding of server firmware stack (UEFI, BMC, TF-A, etc.) 
+- Docker installed, or GitHub Codespaces-compatible development environment + +### Install Required Packages + +Install the base packages for building OpenBMC with the Yocto Project: + +```bash +sudo apt update +sudo apt install git gcc g++ make file wget gawk diffstat bzip2 cpio chrpath zstd lz4 unzip +``` + +### Set Up the repo Tool + +```bash +mkdir -p ~/.bin +PATH="${HOME}/.bin:${PATH}" +curl https://storage.googleapis.com/git-repo-downloads/repo > ~/.bin/repo +chmod a+rx ~/.bin/repo +``` + +### Download the Arm FVP Model (RD-V3-R1) + +Download and extract the RD-V3-R1 FVP binary from Arm: +```bash +mkdir -p ~/fvp +cd ~/fvp +wget https://developer.arm.com/-/cdn-downloads/permalink/FVPs-Neoverse-Infrastructure/RD-V3-r1/FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz +tar -xvf FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz +./FVP_RD_V3_R1.sh +``` + +The FVP installation may prompt you with a few questions—choosing the default options is sufficient for this learning path. By default, the FVP will be installed in `/home/ubuntu/FVP_RD_V3_R1`. + + +### Initialize the Host Build Environment + +Set up a workspace for host firmware builds: + +```bash +mkdir -p ~/host +cd ~/host + +~/.bin/repo init -u "https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git" \ + -m "pinned-rdv3r1-bmc.xml" \ + -b "refs/tags/RD-INFRA-2025.07.03" \ + --depth=1 + +repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle +``` + + + +{{% notice Need Review %}} +We need to describe how/where the patch file can be downloaded. +{{% /notice %}} + +Then, download `patch.zip` from ... into your home directory and unzip it: + +```bash +unzip ~/patch.zip +``` + +Create a file `~/apply_patch.sh` and copy the following content into it.
+ +```bash +#!/bin/bash +FVP_DIR="host" +SOURCE=${PWD} + +GREEN='\033[0;32m' +NC='\033[0m' + +pushd ${FVP_DIR} > /dev/null +echo -e "${GREEN}\n===== Apply patches to edk2 =====\n${NC}" +pushd uefi/edk2 > /dev/null +git am --keep-cr ${SOURCE}/patch/edk2/*.patch +popd > /dev/null + +echo -e "${GREEN}\n===== Apply patches to edk2-platforms =====\n${NC}" +pushd uefi/edk2/edk2-platforms > /dev/null +git am --keep-cr ${SOURCE}/patch/edk2-platforms/*.patch +popd > /dev/null + +echo -e "${GREEN}\n===== Apply patches to edk2-redfish-client =====\n${NC}" +git clone https://github.com/tianocore/edk2-redfish-client.git +pushd edk2-redfish-client > /dev/null +git checkout 4f204b579b1d6b5e57a411f0d4053b0a516839c8 +git am --keep-cr ${SOURCE}/patch/edk2-redfish-client/*.patch +popd > /dev/null + +echo -e "${GREEN}\n===== Apply patches to buildroot =====\n${NC}" +pushd buildroot > /dev/null +git am ${SOURCE}/patch/buildroot/*.patch +popd > /dev/null + +echo -e "${GREEN}\n===== Apply patches to build-scripts =====\n${NC}" +pushd build-scripts > /dev/null +git am ${SOURCE}/patch/build-scripts/*.patch +popd > /dev/null +popd > /dev/null +``` + +Next, apply the patches to the `host` workspace: + +```bash +chmod +x ./apply_patch.sh +./apply_patch.sh +``` + +{{% notice Need Review %}} +We need to describe more about what the patches contain. +{{% /notice %}} + + +### Build RDv3 R1 Host Docker Image + +Before building the host image, update the following line in `~/host/grub/bootstrap` to replace the `git://` protocol. +Some networks or corporate environments restrict `git://` access due to firewall or security policies. Switching to `https://` ensures reliable and secure access to external Git repositories.
+ +``` +diff --git a/bootstrap b/bootstrap +index 5b08e7e2d..031784582 100755 +--- a/bootstrap ++++ b/bootstrap +@@ -47,7 +47,7 @@ PERL="${PERL-perl}" + me=$0 +-default_gnulib_url=git://git.sv.gnu.org/gnulib ++default_gnulib_url=https://git.savannah.gnu.org/git/gnulib.git +usage() { + cat < ../components/linux/Image +lrwxrwxrwx 1 ubuntu ubuntu 35 Aug 18 10:19 Image.defconfig -> ../components/linux/Image.defconfig +-rw-r--r-- 1 ubuntu ubuntu 4402315 Aug 18 10:19 fip-uefi.bin +lrwxrwxrwx 1 ubuntu ubuntu 34 Aug 18 10:19 lcp_ramfw.bin -> ../components/rdv3r1/lcp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 18 10:19 lcp_ramfw_ns -> ../components/rdv3r1/lcp_ramfw_ns +lrwxrwxrwx 1 ubuntu ubuntu 26 Aug 18 10:19 lkvm -> ../components/kvmtool/lkvm +lrwxrwxrwx 1 ubuntu ubuntu 34 Aug 18 10:19 mcp_ramfw.bin -> ../components/rdv3r1/mcp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 18 10:19 mcp_ramfw_ns -> ../components/rdv3r1/mcp_ramfw_ns +lrwxrwxrwx 1 ubuntu ubuntu 28 Aug 18 10:19 rmm.img -> ../components/rdv3r1/rmm.img +lrwxrwxrwx 1 ubuntu ubuntu 34 Aug 18 10:19 scp_ramfw.bin -> ../components/rdv3r1/scp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 18 10:19 scp_ramfw_ns -> ../components/rdv3r1/scp_ramfw_ns +lrwxrwxrwx 1 ubuntu ubuntu 41 Aug 18 10:19 signed_lcp_ramfw.bin -> ../components/rdv3r1/signed_lcp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 41 Aug 18 10:19 signed_mcp_ramfw.bin -> ../components/rdv3r1/signed_mcp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 41 Aug 18 10:19 signed_scp_ramfw.bin -> ../components/rdv3r1/signed_scp_ramfw.bin +lrwxrwxrwx 1 ubuntu ubuntu 31 Aug 18 10:19 tf-bl1.bin -> ../components/rdv3r1/tf-bl1.bin +lrwxrwxrwx 1 ubuntu ubuntu 30 Aug 18 10:19 tf-bl1_ns -> ../components/rdv3r1/tf-bl1_ns +lrwxrwxrwx 1 ubuntu ubuntu 31 Aug 18 10:19 tf-bl2.bin -> ../components/rdv3r1/tf-bl2.bin +lrwxrwxrwx 1 ubuntu ubuntu 32 Aug 18 10:19 tf-bl31.bin -> ../components/rdv3r1/tf-bl31.bin +lrwxrwxrwx 1 ubuntu ubuntu 55 Aug 18 10:19 tf_m_flash.bin -> 
../components/arm/rse/neoverse_rd/rdv3r1/tf_m_flash.bin +lrwxrwxrwx 1 ubuntu ubuntu 48 Aug 18 10:19 tf_m_rom.bin -> ../components/arm/rse/neoverse_rd/rdv3r1/rom.bin +lrwxrwxrwx 1 ubuntu ubuntu 50 Aug 18 10:19 tf_m_vm0_0.bin -> ../components/arm/rse/neoverse_rd/rdv3r1/vm0_0.bin +lrwxrwxrwx 1 ubuntu ubuntu 50 Aug 18 10:19 tf_m_vm0_1.bin -> ../components/arm/rse/neoverse_rd/rdv3r1/vm0_1.bin +lrwxrwxrwx 1 ubuntu ubuntu 50 Aug 18 10:19 tf_m_vm1_0.bin -> ../components/arm/rse/neoverse_rd/rdv3r1/vm1_0.bin +lrwxrwxrwx 1 ubuntu ubuntu 50 Aug 18 10:19 tf_m_vm1_1.bin -> ../components/arm/rse/neoverse_rd/rdv3r1/vm1_1.bin +lrwxrwxrwx 1 ubuntu ubuntu 33 Aug 18 10:19 uefi.bin -> ../components/css-common/uefi.bin +``` + + +{{% notice Note %}} +The other [Arm Learning Path](https://learn.arm.com/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/3_rdv3_sw_build/) provides a complete introduction to setting up the RDv3 development environment — feel free to refer to it for more details. +{{% /notice %}} + + +### Build OpenBMC Image + +You need to use [BitBake](https://docs.yoctoproject.org/bitbake/) to build OpenBMC. + +{{% notice Need Review %}} + + +Do we need to install BitBake? + +```bash +git clone https://git.openembedded.org/bitbake +cd bitbake +``` +How should BitBake be installed? Clone https://git.openembedded.org/bitbake, or something else? +{{% /notice %}} + +Start by cloning and building the OpenBMC image using the BitBake build system: + +```bash +cd ~ +git clone https://github.com/openbmc/openbmc.git +cd openbmc +source setup fvp +bitbake obmc-phosphor-image +``` + +During the OpenBMC build process, you may encounter a native compilation error when building Node.js (especially version 22+) due to high memory usage during the V8 engine build phase, depending on your build machine. + +``` +g++: fatal error: Killed signal terminated program cc1plus +compilation terminated.
+ERROR: oe_runmake failed +``` + +This is a typical Out-of-Memory (OOM) failure, where the system forcibly terminates the compiler due to insufficient available memory. + +To reduce memory pressure, explicitly limit parallel tasks in `conf/local.conf`: + +```bash +BB_NUMBER_THREADS = "2" +PARALLEL_MAKE = "-j2" +``` + +This ensures that BitBake only runs two parallel tasks and that each Makefile invocation limits itself to two threads. It significantly reduces peak memory usage and avoids OOM terminations. + +Once the build succeeds, you will see output like: + +``` +Loading cache: 100% | | ETA: --:--:-- +Loaded 0 entries from dependency cache. +Parsing recipes: 100% |#############################################################################################################| Time: 0:00:09 +Parsing of 3054 .bb files complete (0 cached, 3054 parsed). 5148 targets, 770 skipped, 0 masked, 0 errors. +NOTE: Resolving any missing task queue dependencies + +Build Configuration: +BB_VERSION = "2.12.0" +BUILD_SYS = "aarch64-linux" +NATIVELSBSTRING = "ubuntu-22.04" +TARGET_SYS = "aarch64-openbmc-linux" +MACHINE = "fvp" +DISTRO = "openbmc-phosphor" +DISTRO_VERSION = "nodistro.0" +TUNE_FEATURES = "aarch64 armv8-4a" +TARGET_FPU = "" +meta +meta-oe +meta-networking +meta-python +meta-phosphor +meta-arm +meta-arm-toolchain +meta-arm-bsp +meta-evb +meta-evb-fvp-base = "master:1b6b75a7d22262ec1bf5ab8e2bfa434ac84d981b" + +Sstate summary: Wanted 0 Local 0 Mirrors 0 Missed 0 Current 2890 (0% match, 100% complete)############################### | ETA: 0:00:00 +Initialising tasks: 100% |##########################################################################################################| Time: 0:00:03 +NOTE: Executing Tasks + +``` + +{{% notice Note %}} +The first build may take up to an hour depending on your system performance, as it downloads and compiles the entire firmware stack.
+{{% /notice %}} + +After the build completes, your home directory layout should look like this: + +```text +├── FVP_RD_V3_R1 +├── apply_patch.sh +├── fvp +│   ├── FVP_RD_V3_R1.sh +│   ├── FVP_RD_V3_R1_11.29_35_Linux64_armv8l.tgz +│   └── license_terms +├── host +│   ├── build-scripts +│   ├── buildroot +│   ├── ... +├── openbmc +│   ├── ... +│   ├── build +│   ├── meta-arm +│   ├── ... +│   ├── poky +│   └── setup +├── patch +│   ├── build-scripts +│   ├── buildroot +│   ├── edk2 +│   ├── edk2-platforms +│   └── edk2-redfish-client +├── patch.zip +└── run.sh + +``` diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/3_openbmc_simulate.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/3_openbmc_simulate.md new file mode 100644 index 0000000000..395872dc4e --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/3_openbmc_simulate.md @@ -0,0 +1,102 @@ +--- +title: Run Pre-Silicon OpenBMC + Host (UEFI) Simulation +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Run Pre-Silicon OpenBMC Simulation + +With your environment prepared, you can now simulate the full pre-silicon firmware boot flow using the Arm Neoverse RD-V3 reference design. +This includes building the OpenBMC image, launching the Fixed Virtual Platform (FVP), and validating the boot process of both the BMC and the host UEFI firmware. + + +This simulation launches multiple UART consoles—each mapped to a separate terminal window for different subsystems (e.g., Neoverse V3, Cortex‑M55, Cortex‑M7, and the Cortex-A BMC). + +These graphical terminal windows require a desktop session. If you're accessing the simulation over SSH (e.g., on an AWS instance), they may not display properly. + +To ensure proper display and interactivity, we recommend installing a Remote Desktop environment using XRDP.
+
+On an AWS Ubuntu 22.04 instance, install the required packages:
+
+```bash
+sudo apt update
+sudo apt install -y ubuntu-desktop xrdp xfce4 xfce4-goodies pv xterm sshpass socat retry
+sudo systemctl enable --now xrdp
+```
+
+You may need to follow Step 2 of the RD-V3 [Learning Path](https://learn.arm.com/learning-paths/servers-and-cloud-computing/neoverse-rdv3-swstack/4_rdv3_on_fvp/) to set up the networking and GDM configuration.
+
+Once connected via Remote Desktop, open a terminal and launch the RD-V3 FVP simulation.
+
+Download `run.sh` into your home directory, then execute:
+
+```bash
+./run.sh -m ~/FVP_RD_V3_R1/models/Linux64_GCC-9.3/FVP_RD_V3_R1
+```
+
+{{% notice Need Review %}}
+Explain how/where the reader can download `run.sh`.
+Explain the contents of `run.sh`.
+{{% /notice %}}
+
+The `run.sh` script will:
+- Launch the OpenBMC FVP and wait for the BMC to boot
+- Automatically start the host FVP for RD-V3 (running UEFI)
+- Connect the UART consoles between the BMC and host via virtual pipes
+
+Once the simulation is running, the `OpenBMC FVP console` will stop at the Linux login prompt:
+
+```
+[ OK ] Started phosphor systemd target monitor.
+[ OK ] Started Sensor Monitor.
+ Starting Hostname Service...
+ Starting Phosphor Software Manager...
+ Starting Phosphor BMC State Manager...
+ Starting Phosphor Time Manager daemon...
+[ OK ] Finished SSH Key Generation.
+[ OK ] Finished Wait for /xyz/openbmc_project/state/chassis0.
+[ 27.454083] mctpserial0: invalid tx state 0
+[FAILED] Failed to start OpenBMC ipKVM daemon.
+Phosphor OpenBMC (Phosphor OpenBMC Project Reference Distro) nodistro.0 fvp ttyAMA0
+ Starting Time & Date Service...
+fvp login:
+```
+
+Log in with the OpenBMC default username `root` and password `0penBmc`.
+
+{{% notice Note %}}
+The first character of the password is the number ***0***, not a capital ***O***.
+{{% /notice %}}
+
+Once logged in, you will see a fully functional Linux operating system in the OpenBMC FVP console.
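Note that the boot output above includes a `[FAILED]` line for the ipKVM daemon. A quick way to collect any failed units from a captured console log is sketched below; the `list_failed_units` helper is hypothetical, and the `~/logs/obmc_console.log` path is an assumption based on the log layout this guide uses:

```bash
# Sketch: list failed systemd units from a captured BMC console log.
# The log path is an assumption based on this guide's ~/logs layout.
list_failed_units() {
    local log="$1"
    if [ ! -f "$log" ]; then
        echo "Log not found: $log"
        return 1
    fi
    if grep -F '[FAILED]' "$log"; then
        echo "Some services failed to start (see lines above)"
    else
        echo "No failed units found in $log"
    fi
}
list_failed_units ~/logs/obmc_console.log
```

A single non-critical failure such as the ipKVM daemon does not block the rest of the boot flow, but reviewing the list helps spot regressions between builds.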
+
+On the host side, the CSS-V3-R1 will launch, and the `FVP terminal_ns_uart0` console will show the UEFI Firmware Setup menu. Select `Continue`, and the Linux boot will proceed.
+
+![img2 alt-text#center](openbmc_hostuefi.jpg "Simulation success")
+
+The simulation then continues on the CSS-V3-R1 side and enters the GRUB menu. Press Enter to proceed.
+
+A successful simulation shows login prompts on both the BMC and host consoles. You can also confirm success by checking the final system state in the Web UI or UART output.
+
+![img2 alt-text#center](openbmc_cssv3_sim.jpg "Simulation success")
+
+The whole simulation procedure looks like this:
+
+![img1 alt-text#center](openbmc_cssv3_running.jpg "Simulation running")
+
+After the simulation completes, logs for both the BMC and host are stored in `~/logs`. These are useful for verifying boot success or troubleshooting issues.
+
+- `obmc_boot.log`: BMC boot output
+- `obmc_console.log`: BMC serial output
+- `fvp_boot.log`: Host UEFI boot output
+
+By reviewing the contents of the logs folder, you can verify the expected system behavior or quickly diagnose any anomalies that arise during boot or runtime.
+
+In the next module, you’ll extend this simulation by interacting with the BMC over UART from the host side.
diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/4_openbmc_communicate.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/4_openbmc_communicate.md
new file mode 100644
index 0000000000..b5318255c1
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/4_openbmc_communicate.md
@@ -0,0 +1,71 @@
+---
+title: Extend Simulation Features
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Interact with Host Console via Serial over LAN (SOL)
+
+The OpenBMC platform provides Serial over LAN (SOL) to enable console access to the host system (RD-V3 FVP) through the BMC.
This feature is useful for remotely interacting with the host without needing a direct serial cable.
+
+In this section, you’ll create a virtual serial bridge using `socat`, verify the port mappings, and access the host console via the BMC Web UI.
+
+### Step 1: Connect the BMC and Host Consoles
+
+Run the following command on your development Linux machine (where the simulation is running) to bridge the BMC and host UART ports:
+
+```bash
+socat -x tcp:localhost:5005 tcp:localhost:5067
+```
+
+This command connects the host-side UART port (5005) to the BMC-side port (5067), allowing bidirectional serial communication.
+
+{{% notice Note %}}
+If you see a `Connection refused` error, check the FVP logs to verify the port numbers:
+* In `fvp_boot.log`, look for a line like:
+`terminal_ns_uart0: Listening for serial connection on port 5005`
+* In `obmc_boot.log`, confirm the corresponding line:
+`terminal_3: Listening for serial connection on port 5067`
+{{% /notice %}}
+
+Ensure both ports are active and match the `socat` command arguments.
+
+### Step 2: Manually Set the Host Power State
+
+Once the SOL bridge is established, run the following command from the OpenBMC console shell to simulate the host being powered on:
+
+```bash
+busctl set-property xyz.openbmc_project.State.Host \
+/xyz/openbmc_project/state/host0 xyz.openbmc_project.State.Host \
+CurrentHostState s xyz.openbmc_project.State.Host.HostState.Running
+```
+
+This updates the BMC’s internal host state, allowing UEFI to begin execution.
+
+### Step 3: Access the Host Console from the Web UI
+
+1. Open the BMC Web UI in your browser:
+   https://127.0.0.1:4223
+2. Log in using the default credentials:
+   * Username: `root`
+   * Password: `0penBmc`
+3. From the Overview page, select the SOL Console button.
+4. You’ll see the host console output (a UEFI or Linux prompt) and can interact with it directly through the Web UI terminal.
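If the FVP picked different port numbers than the defaults, the log lookup described in the Step 1 note can be scripted. This is a sketch under stated assumptions: the `uart_port` helper is hypothetical, and the log paths and message format follow the examples above:

```bash
# Sketch: extract UART port numbers from the FVP boot logs and print
# a matching socat command. Paths and log format follow the Step 1 note.
uart_port() {
    # $1 = log file, $2 = terminal name (e.g. terminal_ns_uart0)
    grep -oE "$2: Listening for serial connection on port [0-9]+" "$1" 2>/dev/null \
        | grep -oE '[0-9]+$' | head -n1
}
host_port=$(uart_port ~/logs/fvp_boot.log terminal_ns_uart0)
bmc_port=$(uart_port ~/logs/obmc_boot.log terminal_3)
echo "socat -x tcp:localhost:${host_port} tcp:localhost:${bmc_port}"
```

Run the printed `socat` command to establish the bridge with the ports your FVP instance actually assigned.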
+
+![img3 alt-text#center](openbmc_webui_login.jpg "Web UI login page")
+
+![img3 alt-text#center](openbmc_webui_overview.jpg "Web UI overview page")
+
+![img3 alt-text#center](openbmc_webui_sol.jpg "Web UI SOL console")
+
+From here, you can monitor the UEFI boot sequence, interact with the host shell, and run diagnostic or validation commands, just as if you were connected to the physical serial port.
+
+This console also allows you to verify host-BMC coordination, observe system logs in real time, test UEFI shell commands, or trigger custom boot workflows for pre-silicon validation.
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/5_openbmc_xxx.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/5_openbmc_xxx.md
new file mode 100644
index 0000000000..75b21fa18a
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/5_openbmc_xxx.md
@@ -0,0 +1,10 @@
+---
+title: Simulate Dual Chip RD-V3-R1 Platform
+weight: 6
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Redfish
+
diff --git "a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/Image 8-18-25 at 6.30\342\200\257PM.jpg" "b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/Image 8-18-25 at 6.30\342\200\257PM.jpg"
new file mode 100644
index 0000000000..ecb2c0bd1b
Binary files /dev/null and "b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/Image 8-18-25 at 6.30\342\200\257PM.jpg" differ
diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/_index.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/_index.md
new file mode 100644
index 0000000000..5c0136dddb
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/_index.md
@@ -0,0 +1,60 @@
+---
+title: Simulate OpenBMC and UEFI Integration on Neoverse V3 Reference Design
+
+minutes_to_complete: 90
+
+who_is_this_for: This Learning Path is for firmware developers, platform
software engineers, and system integrators working on Arm Neoverse-based platforms. It is especially useful for those exploring pre-silicon development, testing, and integration of Baseboard Management Controllers (BMC) with UEFI firmware. If you are building or validating server-class reference platforms such as RD-V3 before hardware is available, this guide will help you simulate and debug the full boot path using Fixed Virtual Platforms (FVPs).
+
+learning_objectives:
+    - Explain the roles of OpenBMC and UEFI in modern Arm server boot flows
+    - Set up and simulate firmware integration using the RD-V3 Reference Design
+    - Launch the OpenBMC and UEFI firmware stack on an Arm FVP model
+    - Test system-level interactions between the BMC and host firmware via UART and SOL
+
+prerequisites:
+    - An Arm Neoverse-based Linux machine (cloud-based or local) running Ubuntu 22.04 LTS, with at least 80 GB of free disk space and 48 GB of RAM
+    - Working knowledge of Docker, Git, and Linux terminal tools
+    - Basic understanding of the server firmware stack (UEFI, BMC, TF-A, etc.)
+    - Docker installed, or a GitHub Codespaces-compatible development environment
+
+author:
+    - Odin Shen
+    - Ken Zhang
+
+### Tags
+skilllevels: Advanced
+subjects: Containers and Virtualization
+armips:
+    - Neoverse
+tools_software_languages:
+    - C
+    - Docker
+    - FVP
+operatingsystems:
+    - Linux
+
+further_reading:
+    - resource:
+        title: Neoverse Compute Subsystems V3
+        link: https://www.arm.com/products/neoverse-compute-subsystems/css-v3
+        type: website
+    - resource:
+        title: Reference Design software stack architecture
+        link: https://neoverse-reference-design.docs.arm.com/en/latest/about/software_stack.html
+        type: website
+    - resource:
+        title: OpenBMC website
+        link: https://www.openbmc.org/
+        type: website
+    - resource:
+        title: OpenBMC GitHub repo
+        link: https://github.com/openbmc/openbmc
+        type: website
+
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1 # _index.md always has weight of 1 to order correctly
+layout: "learningpathall" # All files under learning paths have this same wrapper
+learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/_next-steps.md
new file mode 100644
index 0000000000..c3db0de5a2
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/_next-steps.md
@@ -0,0 +1,8 @@
+---
+# ================================================================================
+# FIXED, DO NOT MODIFY THIS FILE
+# ================================================================================
+weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
+title: "Next Steps" # Always the same, html page title.
+layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_cssv3_sim.jpg b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_cssv3_sim.jpg new file mode 100644 index 0000000000..36d70252da Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_cssv3_sim.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_hostuefi.jpg b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_hostuefi.jpg new file mode 100644 index 0000000000..7dc02997af Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_hostuefi.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_login.jpg b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_login.jpg new file mode 100644 index 0000000000..b9051538a7 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_login.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_overview.jpg b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_overview.jpg new file mode 100644 index 0000000000..123b465a18 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_overview.jpg differ diff --git a/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_sol.jpg b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_sol.jpg new file mode 100644 index 0000000000..e696134f79 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/openbmc-rdv3/openbmc_webui_sol.jpg differ