Friday, November 29, 2019

Zynq Ultrascale+ FatFs and Direct Speed Tests with Bare Metal NVMe via AXI-PCIe Bridge

Blue wire PCIe REFCLK still hanging in there...
It's time to return to the problem of sinking 1GB/s of data onto an NVMe drive from a Zynq Ultrascale+ SoC. Last time, I benchmarked the Xilinx Linux drivers and found that they were fast, but not quite fast enough. In the comments of that post, there were many good suggestions for how to make up the difference without having to resort to a hardware accelerator. The consensus is that the hardware, namely the stock AXI-PCIe bridge, should be fast enough.

While a lot of the suggestions were ways to speed up the data transfer in Linux, and I have no doubt those would work, I also just don't want or need to run Linux in this application. The sensor input and wavelet compression modules are entirely built in Programmable Logic (PL), with only a minimal interface to the Processing System (PS) for configuration and control. So, I'm able to keep my entire application in the 256KB On-Chip Memory (OCM), leaving the external DDR4 RAM bandwidth free for data transfer.

After compression, the data is already in the DDR4 RAM where it should be visible to whatever DMA mechanism is responsible for transferring data to an NVMe drive. As Ambivalent Engineer points out in the comments:
It should be possible to issue commands directly to the NVMe from software by creating a command and completion queue pair and writing directly to the command queue.
In other words, write a bare metal NVMe driver to interface with the AXI-PCIe bridge directly for initiating and controlling data transfers. This seems like a good fit, both to this specific application and to my general proclivity, for better or worse, to move to lower-level code when I get stuck. A good place to start is by exploring the functionality of the AXI-PCIe bridge itself.

AXI-PCIe Bridge

Part of the reason it took me a while to understand the AXI-PCIe bridge is that it has many names. The version for Zynq-7000 is called AXI Memory Mapped to PCI Express (PCIe) Gen2, and is covered in PG055. The version for Zynq Ultrascale is called AXI PCI Express (PCIe) Gen 3 Subsystem, and is covered in PG194. And the version for Zynq Ultrascale+ is called DMA for PCI Express (PCIe) Subsystem, and is nominally covered in PG195. But, when operated in bridge mode, as it will be here, it's actually still documented in PG194. I'll be focusing on this version. 

Whatever the name, the block diagram looks like this:

AXI-PCIe Bridge Root Port block diagram, adapted from PG194 Figure 1.
The AXI-Lite Slave Interface is straightforward, allowing access to the bridge control and configuration registers. For example, the PHY Status/Control Register (offset 0x144) has information on the PCIe link, such as speed and width, that can be useful for debugging. When the bridge is configured as a Root Port, as it must be to host an NVMe drive, this address space also provides access to the PCIe Configuration Space of both the Root Port itself, at offset 0x0, and the enumerated Endpoint devices, at other offsets.
PCIe Configuration Space layout, adapted from PG213 Table 2-35.
If the NVMe drive has successfully enumerated, its Endpoint PCIe Configuration Space will be mapped to some offset in the AXI-Lite Slave register space. In my case, with no switch involved, it shows up as Bus 1, Device 0, Function 0 at offset 0x1000. Here, it's possible to check the Device ID, Vendor ID, and Class Codes. Most importantly, the BAR0 register holds the PCIe memory address assigned to the device. The AXI address assigned to BAR0 in the Address Editor in Vivado is mapped to this PCIe address by the bridge.
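
As a minimal sketch of poking at this space (assuming the AXI-Lite Slave is mapped at 0x50000000, as it is in my design; offsets come from PG194 and the standard PCIe configuration header, and the variable names are just placeholders):

// Sketch: sanity-check the link and the enumerated drive through the AXI-Lite Slave (CTL0).
void pcieSanityCheck(void)
{
    volatile u32 *regPhyStatusControl = (volatile u32 *)(0x50000144); // PHY Status/Control: link up, speed, width
    volatile u32 *epConfig = (volatile u32 *)(0x50001000);            // Endpoint Config Space (Bus 1, Dev 0, Func 0)

    u32 phyStatus = *regPhyStatusControl;   // check the link speed/width bits here
    u32 vendorDeviceID = epConfig[0];       // [15:0] Vendor ID, [31:16] Device ID
    u32 classCode = epConfig[2] >> 8;       // 0x010802 = NVM Express
    u32 bar0 = epConfig[4] & 0xFFFFFFF0;    // PCIe memory address assigned to BAR0
    // ...compare these against expected values before going any further...
}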

Reads from and writes to the AXI BAR0 address are done through the AXI Slave Interface. This is a full AXI interface supporting burst transactions and a wide data bus. In another class of PCIe device, it might be responsible for transferring large amounts of data to the device through the BAR0 address range. But for an NVMe drive, BAR0 just provides access to the NVMe Controller Registers, which are used to set up the drive and inform it of pending data transfers.

The AXI Master Interface is where all NVMe data transfer occurs, for both reads and writes. One way to look at it is that the drive itself contains the DMA engine, which issues memory reads and writes to the system (AXI) memory space through the bridge. The host requests that the drive perform these data transfers by submitting them to a queue, which is also contained in system memory and accessed through this interface.

Bare Metal NVMe

Fortunately, NVMe is an open standard. The specification is about 400 pages, but it's fairly easy to follow, especially with help from this tutorial. The NVMe Controller, which is implemented on the drive itself, does most of the heavy lifting. The host only has to do some initialization and then maintain the queues and lists that control data transfers. It's worth looking at a high-level diagram of what should be happening before diving in to the details of how to do it:

System-level look at NVMe data flow, with primary data streaming from a source to the drive.
After BAR0 is set, the host has access to the NVMe drive's Controller Registers through the bridge's AXI Slave Interface. They are just like any other device/peripheral control registers, used for low-level configuration, status, and control of the drive. The register map is defined in the NVMe Specification, Section 2.

One of the first things the host has to do is allocate some memory for the Admin Submission Queue and Admin Completion Queue. A Submission Queue (SQ) is a circular buffer of commands submitted to the drive by the host. It's written by the host and read by the drive (via the bridge AXI Master Interface). A Completion Queue (CQ) is a circular buffer of notifications of completed commands from the drive. It's written by the drive (via the bridge AXI Master Interface) and read by the host. 
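
Each SQ entry is 64 bytes and each CQ entry is 16 bytes, as defined by the specification. A rough sketch of the layouts (the field names here are my own, not necessarily those in the driver source):

typedef struct
{
    u32 CDW0;           // [7:0] opcode, [31:16] Command ID
    u32 NSID;           // Namespace ID
    u64 reserved;       // CDW2/CDW3 (unused here)
    u64 MPTR;           // Metadata Pointer (unused here)
    u64 PRP1;           // data pointer: PRP Entry 1
    u64 PRP2;           // data pointer: PRP Entry 2 or PRP List Pointer
    u32 CDW10, CDW11, CDW12, CDW13, CDW14, CDW15; // command-specific fields
} nvme_sqe_t;           // 64B Submission Queue Entry

typedef struct
{
    u32 DW0;            // command-specific result
    u32 DW1;            // reserved
    u32 SQInfo;         // [15:0] SQ Head Pointer, [31:16] SQ Identifier
    u32 Status;         // [15:0] Command ID, [16] Phase Tag, [31:17] Status Field
} nvme_cqe_t;           // 16B Completion Queue Entry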

The Admin SQ/CQ are used to submit and complete commands relating to drive identification, setup, and control. They can be located anywhere in system memory, as long as the bridge has access to them, but in the diagram above they're shown in the external DDR4. The host software notifies the drive of their address and size by setting the relevant Controller Registers (via the bridge AXI Slave Interface). After that, the host can start to submit and complete admin commands, roughly as sketched in code after this list:
  1. The host software writes one or more commands to the Admin SQ.
  2. The host software notifies the drive of the new command(s) by updating the Admin SQ doorbell in the Controller Registers through the bridge AXI Slave Interface.
  3. The drive reads the command(s) from the Admin SQ through the bridge AXI Master Interface.
  4. The drive completes the command(s) and writes an entry to the Admin CQ for each, through the bridge AXI Master Interface. Optionally, an interrupt is triggered.
  5. The host reads the completion(s) and updates the Admin CQ doorbell in the Controller Registers, through the AXI Slave Interface, to tell the drive where to place the next completion.
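
A rough sketch of steps 1, 2, and 5 in C (the entry types are from the sketch above; the queue arrays, doorbell pointers, and depth are hypothetical placeholders for whatever the real driver allocates, and error/status checking is omitted):

#define ADMIN_QUEUE_DEPTH 16   // assumption: must match the sizes programmed into the AQA register

extern nvme_sqe_t asq[ADMIN_QUEUE_DEPTH];      // Admin Submission Queue, in system memory
extern nvme_cqe_t acq[ADMIN_QUEUE_DEPTH];      // Admin Completion Queue, in system memory
extern volatile u32 *regSQ0TDBL, *regCQ0HDBL;  // Admin doorbells in the Controller Registers (BAR0 + 0x1000/0x1004)
static u16 adminSQTail = 0, adminCQHead = 0;

void adminSubmit(const nvme_sqe_t *cmd)
{
    asq[adminSQTail] = *cmd;       // 1. write the command into the Admin SQ
                                   //    (flush this DCache line here if caching is enabled)
    adminSQTail = (adminSQTail + 1) % ADMIN_QUEUE_DEPTH;
    *regSQ0TDBL = adminSQTail;     // 2. ring the Admin SQ Tail doorbell through the AXI Slave Interface
}

void adminConsumeCompletion(void)
{
    // 5. after processing the new entry at acq[adminCQHead], advance the head and
    //    ring the doorbell so the drive knows that slot is free again
    adminCQHead = (adminCQHead + 1) % ADMIN_QUEUE_DEPTH;
    *regCQ0HDBL = adminCQHead;
}
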
In some cases, an admin command may request identification or capability data from the drive. If the data is too large to fit in the Admin CQ entry, the command will also specify an address to which to write the requested data. For example, during initialization, the host software requests the Controller Identification and Namespace Identification structures, described in the NVMe Specification, Section 5.15.2. These contain information about the capabilities, size, and low-level format (below the level of file systems or even partitions) of the drive. The space for these IDs must also be allocated in system memory before they're requested.

Within the IDs is information that indicates the Logical Block (LB) size, which is the minimum addressable memory unit in the non-volatile memory. 512B is typical, although some drives can also be formatted for 4KiB LBs. Many other variables are given in units of LBs, so it's important for the host to grab this value. There's also a maximum and minimum page size, defined in the Controller Registers themselves, which applies to system memory. It's up to the host software to configure the actual system memory page size in the Controller Registers, but it has to be between these two values. 4KiB is both the absolute minimum and the typical value. It's still possible to address system memory in smaller increments (down to 32-bit alignment); this value just affects how much can be read/written per page entry in an I/O command or PRP List (see below).
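
A sketch of decoding those values (regCAP is a pointer to the Controller Capabilities register in BAR0 and idNamespace is the Identify Namespace buffer; the byte and bit offsets are from the NVMe 1.4 spec and are worth re-checking against the revision you target):

void nvmeDecodeSizes(void)
{
    u64 cap = *regCAP;
    u32 mpsMin = (u32)((cap >> 48) & 0xF);   // minimum page size = 2^(12 + MPSMIN), 0 means 4KiB
    u32 mpsMax = (u32)((cap >> 52) & 0xF);   // maximum page size = 2^(12 + MPSMAX)
    // The chosen page size is written to CC.MPS (bits 10:7); I just use 4KiB (MPS = 0).

    u8 *idNS = (u8 *)idNamespace;            // 4KiB Identify Namespace structure
    u8 flbas = idNS[26] & 0xF;               // FLBAS selects the active LBA Format entry
    u32 lbaFormat = *(u32 *)(idNS + 128 + 4 * flbas);  // LBA Format table starts at byte 128
    u32 lbSize = 1 << ((lbaFormat >> 16) & 0xFF);      // LBADS: LB size as a power of two, typically 512
    // ...store lbSize, mpsMin, and mpsMax for later use...
}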

Once all identification and configuration tasks are complete, the host software can then set up one or more I/O queue pairs. In my case, I just want one I/O SQ and one I/O CQ. These are allocated in system memory, then the drive is notified of their address and size via admin commands. The I/O CQ must be created first, since the I/O SQ creation references it. Once created, the host can start to submit and complete I/O commands, using a similar process as for admin commands.
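
As a sketch (opcodes and DWORD fields per the NVMe spec; the queue buffers, depth, and adminSubmit() helper are the hypothetical ones from the sketches above, and each command's completion should be waited on before proceeding):

#define IO_QUEUE_DEPTH 16    // example depth; must not exceed what the drive supports (CAP.MQES)

extern u16 nextCID;                       // hypothetical: sequentially-assigned Command IDs
extern nvme_sqe_t iosq[IO_QUEUE_DEPTH];   // I/O Submission Queue buffer in system memory
extern nvme_cqe_t iocq[IO_QUEUE_DEPTH];   // I/O Completion Queue buffer in system memory

void nvmeCreateIOQueuePair(void)
{
    nvme_sqe_t cmd = {0};

    // Create I/O Completion Queue (admin opcode 0x05), QID = 1.
    cmd.CDW0  = 0x05 | ((u32)nextCID++ << 16);
    cmd.PRP1  = (u64)iocq;                        // physical address of the I/O CQ buffer
    cmd.CDW10 = ((IO_QUEUE_DEPTH - 1) << 16) | 1; // [31:16] queue size - 1, [15:0] QID
    cmd.CDW11 = 0x1;                              // PC = 1 (physically contiguous), interrupts off
    adminSubmit(&cmd);                            // ...then wait for its completion

    // Create I/O Submission Queue (admin opcode 0x01), QID = 1, tied to CQID = 1.
    cmd = (nvme_sqe_t){0};
    cmd.CDW0  = 0x01 | ((u32)nextCID++ << 16);
    cmd.PRP1  = (u64)iosq;                        // physical address of the I/O SQ buffer
    cmd.CDW10 = ((IO_QUEUE_DEPTH - 1) << 16) | 1;
    cmd.CDW11 = (1 << 16) | 0x1;                  // [31:16] CQID = 1, PC = 1
    adminSubmit(&cmd);                            // ...and wait again before using the queues
}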

I/O commands perform general purpose writes (from system memory to non-volatile memory) or reads (from non-volatile memory to system memory) over the bridge's AXI Master Interface. If the data to be transferred spans more than two memory pages (typically 4KiB each), then a Physical Region Page (PRP) List is created along with the command. For example, a write of 24 512B LBs starting in the middle of a 4KiB page might reference the data like this:

A PRP List is required for data transfers spanning more than two memory pages.
The first PRP Address in the I/O command can have any 32-bit-aligned offset within a page, but subsequent addresses must be page-aligned. The drive knows whether to expect a PRP Address or a PRP List Pointer in the second PRP field of the I/O command based on the amount of data being transferred. It will also only pull as much data as is needed from the last page in the list to reach the final LB count. There is no requirement that the pages in the PRP List be contiguous, so it can also be used as a scatter-gather mechanism with 4KiB granularity. The PRP List for a particular command must be kept in memory until that command completes, so some kind of PRP Heap is necessary if multiple commands can be in flight.
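
A sketch of filling in the write command for the example above (prpHeapAlloc(), ioSubmit(), and nextCID are hypothetical helpers/state; the opcode and DWORD layout are from the NVM command set):

extern u16 nextCID;                    // hypothetical: sequentially-assigned Command IDs
u64 *prpHeapAlloc(void);               // hypothetical: returns a free page-aligned PRP List
void ioSubmit(const nvme_sqe_t *cmd);  // hypothetical: I/O-queue counterpart of adminSubmit()

void exampleWrite24LBs(u64 srcAddr, u64 destLBA)
{
    // srcAddr starts partway into a 4KiB page, so 24 x 512B LBs span four pages and need a
    // PRP List holding the page-aligned addresses of the 2nd through 4th pages.
    u64 *prpList = prpHeapAlloc();
    prpList[0] = (srcAddr & ~0xFFFULL) + 0x1000;   // 2nd page
    prpList[1] = (srcAddr & ~0xFFFULL) + 0x2000;   // 3rd page
    prpList[2] = (srcAddr & ~0xFFFULL) + 0x3000;   // 4th page (only partially used)

    nvme_sqe_t cmd = {0};
    cmd.CDW0  = 0x01 | ((u32)nextCID++ << 16);     // NVM write opcode + Command ID
    cmd.NSID  = 1;
    cmd.PRP1  = srcAddr;                           // any 32-bit-aligned offset is allowed here
    cmd.PRP2  = (u64)prpList;                      // PRP List Pointer, since the data spans > 2 pages
    cmd.CDW10 = (u32)(destLBA & 0xFFFFFFFF);       // Starting LBA [31:0]
    cmd.CDW11 = (u32)(destLBA >> 32);              // Starting LBA [63:32]
    cmd.CDW12 = 24 - 1;                            // Number of Logical Blocks, 0-based
    ioSubmit(&cmd);
}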

Some (most?) drives also have a Volatile Write Cache (VWC) that buffers write data. In this case, an I/O write completion may not indicate that the data has been written to non-volatile memory. An I/O flush command forces this data to be written to non-volatile memory before a completion entry is written to the I/O CQ for that flush command.
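
The flush itself is about as simple as an I/O command gets (NVM opcode 0x00, no data pointers; same hypothetical helpers as above):

void nvmeFlushExample(void)
{
    nvme_sqe_t cmd = {0};
    cmd.CDW0 = 0x00 | ((u32)nextCID++ << 16);  // NVM flush opcode + Command ID
    cmd.NSID = 1;
    ioSubmit(&cmd);  // its completion means earlier write data has reached non-volatile memory
}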

That's about it for things that are described explicitly in the specification. Everything past this point is implementation detail that is much more application-specific.

A key question the host software NVMe driver needs to answer is whether or not to wait for a particular completion before issuing another command. For admin commands that run once during initialization and are often dependent on data from previous commands, it's fine to always wait. For I/O commands, though, it really depends. I'll be using write commands as an example, since that's my primary data direction, but there's a symmetric case for reads.

If the host software issues a write command referencing a range of data in system memory and then immediately changes the data, without waiting for the write command to be completed, then the write may be corrupted. To prevent this, the software could:
  1. Wait for completion before allowing the original data to be modified. (Maybe there are other tasks that can be done in parallel.)
  2. Copy the data to an intermediate buffer and issue the write command referencing that buffer instead. The original data can then be modified without waiting for completion.
Both could have significant speed penalties. The copy option is pretty much out of the question for me. But usually I can satisfy the first constraint: If the data is from a stream that's being buffered in memory, the host software can issue NVMe write commands that consume one end of the stream while the data source is feeding in new data at the other end. With appropriate flow control, these write commands don't have to wait for completion.

My "solution" is just to push the decision up one layer: the driver never blocks on I/O commands, but it can inform the application of the I/O queue backlog as the slip between the queues, derived from sequentially-assigned command IDs. If a particular process thinks it can get away without waiting for completions, it can put more commands in flight (up to some slip threshold).
An example showing the driver ready to submit Command ID 72, with the latest completion being Command ID 67. The doorbells always point to the next free slot in the circular buffer, so the entry there has the oldest ID.
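
With sequentially-assigned 16-bit Command IDs, the slip check itself is trivial. A sketch (the threshold and variable names are placeholders, with the values from the figure above noted in the comments):

#define SLIP_THRESHOLD 16      // example: maximum I/O commands allowed in flight

extern u16 nextCID;            // ID the driver will assign to the next submission (72 in the figure)
extern u16 lastCompletedCID;   // newest ID seen in the I/O CQ (67 in the figure)

int ioSlipOK(void)
{
    u16 ioSlip = (u16)(nextCID - lastCompletedCID);  // 5 in the figure; u16 math handles wraparound
    return (ioSlip < SLIP_THRESHOLD);  // if true, another command can go in flight without waiting
}
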
I'm also totally fine with polling for completions, rather than waiting for interrupts. Having a general-purpose completion polling function that takes as an argument a maximum number of completions to process in one call seems like the way to go. NVMeDirect, SPDK, and depthcharge all take this approach. (All three are good open-source references for light and fast NVMe drivers.)
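
A sketch of such a polling function (the Phase Tag bit in each completion entry toggles every time the drive wraps around the circular buffer, which is how new entries are told apart from stale ones; the queue state variables are again hypothetical):

extern nvme_cqe_t iocq[IO_QUEUE_DEPTH];   // I/O Completion Queue in system memory
extern volatile u32 *regCQ1HDBL;          // I/O CQ Head doorbell (BAR0 + 0x100C)
extern u16 lastCompletedCID;              // feeds the slip calculation above
static u32 iocqHead = 0;
static u32 iocqPhase = 1;                 // expected Phase Tag; starts at 1 on a zeroed queue

u32 nvmePollIOCompletions(u32 maxCompletions)
{
    u32 n = 0;
    while (n < maxCompletions)
    {
        if (((iocq[iocqHead].Status >> 16) & 0x1) != iocqPhase) { break; }  // no new entry yet

        lastCompletedCID = iocq[iocqHead].Status & 0xFFFF;  // Command ID of the completed command
        // ...check the Status Field [31:17] for errors here...

        if (++iocqHead == IO_QUEUE_DEPTH) { iocqHead = 0; iocqPhase ^= 1; }
        *regCQ1HDBL = iocqHead;  // tell the drive which CQ slots are free again
        n++;
    }
    return n;
}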

With this setup, I can run a speed test by issuing read/write commands for blocks of data as fast as possible while trying to keep the I/O slip at a constant value:

Speed test for raw NVMe write/read on a 1TB Samsung 970 Evo Plus.
For smaller block transfers, the bottleneck is on my side, either in the driver itself or by hitting a limit on the throughput of bus transactions somewhere in the system. But for larger block transfers (32KiB and above) the read and write speeds split, suggesting that the drive becomes the bottleneck. And that's totally fine with me, since it's hitting 64% (write) and 80% (read) of the maximum theoretical PCIe Gen3 x4 bandwidth.

Sustained write speeds begin to drop off after about 32GiB. The Samsung Evo SSDs have a feature called TurboWrite that uses some fraction of the non-volatile memory array as fast Single-Level Cell (SLC) memory to buffer writes. Unlike the VWC, this is still non-volatile memory, but it gets transferred to more compact Multi-Level Cell (MLC) memory later since it's slower to write multi-level cells. The 1TB drive that I'm using has around 42GB of TurboWrite capacity according to this review, so a drop-off in sustained write speeds after 32GiB makes sense. Even the sustained write speed is 1.7GB/s, though, which is more than fast enough for my application.

A bigger issue with sustained writes might be getting rid of heat. This drive draws about 7W during max speed writing, which nearly doubles the total dissipated power of the whole system, probably making a fan necessary. Then again, at these write speeds a 0.2kg chunk of aluminum would only heat up about 25°C before the drive is full... In any case, the drive will also need a good conduction path to the rear enclosure, which will act as the heat sink.

FatFs

I am more than content with just dumping data to the SSD directly as described above and leaving the task of organizing it to some later, non-time-critical process. But, if I can have it arranged neatly into files on the way in, all the better. I don't have much overhead to spare for the file system operations, though. Luckily, ChaN gifted the world FatFs, an ultralight FAT file system module written in C. It's both tiny and fast, since it's designed to run on small microcontrollers. An ARM Cortex-A53 running at 1.2GHz is certainly not the target hardware for it. But, I think it's still a good fit for a very fast bare metal application.

FatFs supports exFAT, but using exFAT still requires a license from Microsoft. I think I can instead operate right on the limits of what FAT32 is capable of:
  • A maximum of 2^32 LBs. For 512B LBs, this supports up to a 2TiB drive. This is fine for now.
  • A maximum cluster size (unit of file memory allocation and read/write operations) of 128 LBs. For 512B LBs, this means 64KiB clusters. This is right at the point where I hit maximum (drive-limited) write speeds, so that's a good value to use.
  • A maximum file size of 4GiB. This is the limit of my RAM buffer size anyway. I can break up clips into as many files as I want. One file per frame would be convenient, but not efficient.
Linking FatFs to NVMe couldn't really get much simpler: FatFs's diskio.c device interface functions already request reads and writes in units of LBs, a.k.a. sectors. There's also a sync function that matches up nicely to the NVMe flush command. The only potential issue is that FatFs can ask for byte-aligned transfers, whereas NVMe only allows 32-bit alignment. My tentative understanding is that this can only happen via calls to f_read() or f_write(), so the application can guard against it.

For file system operations, FatFs reads and writes single sectors to and from a working buffer in system memory. It assumes that the read or write is complete when the disk_read() or disk_write() function returns, so the diskio.c interface layer has to wait for completion for NVMe commands issued as part of file system operations. To enforce this, but still allow high-speed sequential file writing from a data stream, I check the address of the disk_write() system memory buffer. If it's in OCM, I wait for completion. If it's in DDR4, I allow slip. For now, I wait for completion on all disk_read() calls, although a similar mechanism could work for high-speed stream reading. And of course, disk_ioctl() calls for CTRL_SYNC issue an NVMe flush command and wait for completion.

Interface between FatFs and NVMe through diskio.c, allowing stream writes from DDR4.
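
A stripped-down sketch of that write path in the diskio.c glue (the FatFs entry points are real, but nvmeWrite(), nvmeFlush(), and nvmeWaitForCompletions() here are illustrative stand-ins for the actual driver calls, the OCM test is just a range check against the ZU+ OCM base, and error handling is omitted):

#include "diskio.h"     // FatFs low-level disk I/O interface
#include "xil_types.h"

#define OCM_BASE 0xFFFC0000U   // assumption: 256KB OCM at its default ZU+ base, holding the FatFs work buffers

// hypothetical prototypes standing in for the real driver interface
void nvmeWrite(const void *src, u64 destLBA, u32 numLBs);
void nvmeFlush(void);
void nvmeWaitForCompletions(void);

DRESULT disk_write(BYTE pdrv, const BYTE *buff, DWORD sector, UINT count)
{
    nvmeWrite(buff, (u64)sector, (u32)count);   // queue an I/O write of `count` LBs (sectors)

    if ((UINTPTR)buff >= OCM_BASE)
    {
        nvmeWaitForCompletions();   // file system metadata from OCM: block until done
    }
    return RES_OK;                  // stream data from DDR4: allow slip, return immediately
}

DRESULT disk_ioctl(BYTE pdrv, BYTE cmd, void *buff)
{
    if (cmd == CTRL_SYNC)
    {
        nvmeFlush();                // NVMe flush command
        nvmeWaitForCompletions();   // ...and wait for it (and everything before it) to complete
    }
    // ...other ioctl commands (sector size/count, etc.) omitted...
    return RES_OK;
}
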
I also clear the queue prior to a read to avoid unnecessary read/write turnarounds in the middle of a streaming write. This logic obviously favors writes over reads. Eventually, I'd like to make a more symmetric and configurable diskio.c layer that allows fast stream reading and writing. It would be nice if the application could dynamically flag specific memory ranges as streamable for reads or writes. But for now this is good enough for some write speed testing:

Speed test for FatFs NVMe write on a 1TB Samsung 970 Evo Plus.
There's a very clear penalty for creating and closing files, since the process involves file system operations, including reads and flushes, that will have to wait for NVMe completions. But for writing sequentially to large (1GiB) files, it's still exceeding my 1GB/s requirement, even for total transfer sizes beyond the TurboWrite limit. So I think I'll give it a try, with the knowledge that I can fall back to raw writing if I really need to.

Utilization Summary

The good news is that the NVMe driver (not including the Queues, PRP Heap, and IDs) and FatFs together take up only about 27KB of system memory, so they should easily run in OCM with the rest of the application. At some point, I'll need to move the .text section to flash, but for now I can even fit that in OCM. The source is here, but be aware it is entirely a test implementation not at all intended to be a drop-in driver for any other application.

The bad news is that the XCZU4 is now pretty much completely full...

We're gonna need a bigger chip.
The AXI-PCIe bridge takes up 12960 LUTs, 17871 FFs, and 34 BRAMs. That's not even including the additional AXI Interconnects. The only real hope I can see for shrinking it would be to cut the AXI Slave Interface down to 32-bit, since it's only needed to access Controller Registers at BAR0. But I don't really want to go digging around in the bridge HDL if I can avoid it. I'd rather spend time optimizing my own cores, but I think no matter what I'll need more room for additional features, like decoding/HDMI preview and the subsampled 2048x1536 mode that might need double the number of Stage 1 Wavelet horizontal cores. 

So, I think now is the right time to switch over to the XCZU6, along with the v0.2 Carrier. It's pricey, but it's a big step up in capability, with twice the DDR4 and more than double the logic. And it's closer to impedance-matched to the cost of the sensor...if that's a thing. With the XCZU6, I think I'll have plenty of room to grow the design. It's also just generally easier to meet timing constraints with more room to move logic around, so the compile times will hopefully be lower.

Hopefully the next update will be with the whole 3.8Gpx/s continuous image capture pipeline working together for the first time!

68 comments:

  1. Replies
    1. Updated link to the lightweight NVMe driver source:

      https://github.com/coltonshane/SSD_Test

      (I moved it to Vivado/Vitis 2021.1, added project creation scripts, and fixed a few minor issues.)

      Delete
  2. Very impressive work on this camera from your side Shane.
    I have been thinking to do something similar for our own cameras at Optomotive, but then I found your blog. I also used some TE0807 ZU7EV FPGA modules for development, but now they are accumulating dust.
    Can you please email me directly to talk?

    ReplyDelete
    Replies
    1. Thanks! I took a look at some of your cameras - they look awesome! You can reach me directly at colton (dot) shane (at) gmail (dot) com. I might have some questions for you about SLVS-EC and about utilizing the EV's VCU, if you're able to share that info.

      Delete
  3. Dear Shane
    Could you share the bd file of the WAVE-vivado project file?

    Have a nice day.
    Thank you & regards

    ReplyDelete
    Replies
    1. New 2021.1 repo with BD and project configuration scripts is here:

      https://github.com/coltonshane/SSD_Test

      Delete
  4. Hi Shane,

    Excellent work! I'm using your driver for a multi-sensor 360 camera. My SSD is the cheaper XPG S11 Pro. I did some modifications to your driver code (mainly for the namespace differences of my SSD), and I was able to achieve 2300MB/s write and 3500MB/s raw read. It's pretty close to the manufacturer's claim.

    Regarding your driver, I think you can restore the CPU DCache. But instead of flushing/invalidating the entire cache, you could just flush/invalidate a single cache line. The I/O SQ/CQ entries are less than 64 bytes in total.

    Right now I'm porting these to the PS-side GTR PCIe. 1.5GB/s of write speed should be good enough for most cases under Gen2 x4. BTW, do you have any experience with that? I could access the controller's BAR space, but namespace probing failed with an NVMe timeout error. My guess is the controller could not access the AXI master port to R/W the main DDR4 memory.

    Best,

    ReplyDelete
    Replies
    1. That's excellent! Do you know how big the S11 Pro's SLC cache is? And what sort of write speeds you can get after the SLC cache is full? Wondering if I should add it to my list of drives to benchmark.

      Thanks for the tip on the DCache flushing, that makes sense.

      I haven't tried porting it to the PS-side GTR PCIe. It should be possible, but I'd have to do a bit of reading on the PS PCIe controller to understand the differences. Maybe the address translation/masking is different?

      Delete
    2. The SLC cache is pretty big on S11 Pro 512G. I got more than a minute (close to 90s) of sustained writes from 1.5GB/s total camera streams before being throttled down to 600MB/s. Then it continued on for a very long time. I probably need to do tests in a more systematic way.

      The register mapping on PS-PCIe looks very different. Probably it's IP from a different vendor.

      Delete
    3. I got it working on PSU PCIe. I needed to manually enable the memory access and bus mastering bits in the PCIe configuration for the root port.

      SSD R/W speed could saturate Gen2 x4 link.

      Delete
    4. Nice! I'm just now starting a new project: building a PCIe controller (down to the MAC) in PL logic. For chips that don't have the Integrated Block for PCI Express, the only other solutions cost as much as a sports car. Insanity. I bought a PCIe book for $90...we'll see how far that takes me.

      Delete
  5. Wow, that'd be a lot of work, in part due to managing all the status/ctrl/CRC on your own for the link layer.

    Are you referring to the NWLogic soft core being very expensive? FYI, I noticed Xilinx will have an Artix Ultrascale+ with a hardened Gen 4.0 x4 core within. That FPGA might be cheaper given its market aim. I also got a Zynq 7015 SoM. Timing closure is hard for Gen2 x4 on Artix-7 fabric. Xilinx did a poor job on their AXI Memory Mapped Bridge IP. I saw something like 14 levels of combinational logic internal to their IP core.

    I recently hacked the SLVS-EC protocol using 7 series HRIO by reducing its frequency. If signal integrity is properly tuned, I believe it's possible to overclock correctly at 1600 or 2304 Mbps on an HP IO bank. Will write that up later on my blog. https://landingfield.wordpress.com/category/cmos-camera-project/

    ReplyDelete
  6. SLVS-EC on HPIO is definitely something I would be interested in. I've only ever seen it done with GTx ports, I guess since those have clock recovery built in. I'll have to catch up on your blog so I can follow along!

    I've looked into NWLogic (Rambus now, I guess) and PLDA PCIe cores. They're all outside of my price range, for sure. My main motivation is to be able to use chips that don't have hard PCIe blocks, since Xilinx supply is so constrained now that you have to take what you can get. But it's also just a fun challenge.

    Interestingly, the ZU+ -2 speed grade GTH can technically do Gen4 with the PHY IP. So maybe with a custom MAC and DMA I could also eventually get Gen4 speeds on future ZU+ projects.

    ReplyDelete
  7. As a follow-up, I've since tested this with exFAT (still through FatFs) and the performance is significantly better than FAT32. Maybe not all the way back to raw disk_write() speeds, but certainly getting up towards that. The main reason is that exFAT handles sequential cluster allocation with a bitmap (0 = free, 1 = allocated) rather than a 32-bit entry in the FAT chain. So, the file system overhead for sequential writing is 32x lower. It also allows cluster sizes greater than the FAT32 limit of 64KiB, which means fewer calls to disk_write() for larger files. At some point I will do some new benchmarking and post an update.

    ReplyDelete
    Replies
    1. Are you going to update your project on github? Also, can you provide your block diagram, or vivado project on github?

      Delete
    2. Finally got around to updating the project to Vivado/Vitis 2021.1 and adding some project creation scripts. New repo is here: https://github.com/coltonshane/SSD_Test.

      Delete
  8. Hey Shane, thanks for the super cool post!

    I'm trying to do something similar, and I've taken your WAVE_TestSSD Vitis application and tried to run it to get the same baseline as you for reading and writing to/from an NVMe SSD. I'm running into an issue, however, in the PCIe Enumeration step; when I run XDmaPcie_EnumerateFabric in my main function, my application is erroring out, and I've traced it to a call of XDmaPcie_ReserveBarMem. It looks like PMemMaxAddr is 0 for me, and since the function checks that it's not zero, it errors out. Did you ever have to set this parameter separately in order for that function call to work for you? Thanks!

    ReplyDelete
    Replies
    1. I think there must be some configuration difference. In my project, PMemMaxAddr is also zero, but the check in the XDMA driver is disabled by an #ifdef. Anyway, I've updated the project to Vivado/Vitis 2021.1 with project creation scripts, so maybe that will resolve the issue. New repo is here:

      https://github.com/coltonshane/SSD_Test.

      Delete
  9. This comment has been removed by the author.

    ReplyDelete
    Replies
    1. I haven't worked with the RFSoC at all - the PCIE and GT primitives are a little different. That said, the format of the constraints looks fine to me. What error message are you getting? If it's not able to resolve the get_cells argument, I usually just keep tweaking it in the TCL console with the Synthesized design open until I get a hit, then move that back to the .xdc.

      Delete
  10. Hi,
    I am trying to do the same project but on a Zynq MYD-C7Z015. I am stuck in the NVMe initialization step, particularly in this part:
    // NVME Controller Registers, via AXI BAR
    // Root Port Bridge must be enabled through regRootPortStatusControl for R/W access.
    u64 * regCAP = (u64 *)(0xB0000000); // Controller Capabilities
    u32 * regCC = (u32 *)(0xB0000014); // Controller Configuration
    u32 * regCSTS = (u32 *)(0xB000001C); // Controller Status
    u32 * regAQA = (u32 *)(0xB0000024); // Admin Queue Attributes
    u64 * regASQ = (u64 *)(0xB0000028); // Admin Submission Queue Base Address
    u64 * regACQ = (u64 *)(0xB0000030); // Admin Completion Queue Base Address
    u32 * regSQ0TDBL = (u32 *)(0xB0001000); // Admin Submission Queue Tail Doorbell
    u32 * regCQ0HDBL = (u32 *)(0xB0001004); // Admin Completion Queue Head Doorbell
    u32 * regSQ1TDBL = (u32 *)(0xB0001008); // I/O Submission Queue Tail Doorbell
    u32 * regCQ1HDBL = (u32 *)(0xB000100C); // I/O Completion Queue Head Doorbell
    I don't know where these addresses are, and where I should map them when using my Zynq device.

    ReplyDelete
    Replies
    1. Those are the addresses for the NVMe controller registers through the AXI Slave interface of the XDMA bridge (PG194/PG195), which is mapped at 0xB0000000. These get translated to the BAR0 address by the bridge.

      For the 7Z015, I think you're using the AXI Memory Mapped to PCI Express (PG055)? I think it works similarly. Wherever its AXI Slave IF is mapped in your design, that's the base address for these registers in the AXI domain.

      Hope that helps!

      Delete
    2. In my Address Editor I can see that I have 2 slave interfaces: the first one with Base Name BAR0 at offset address 0x50000000, and the second one with Base Name CTL0. So I must replace your base address 0xB0000000 with 0x50000000, right?

      Delete
    3. Yes, that sounds correct.

      The other interface (CTL0) is the AXI-Lite Slave that connects to the PCIe controller. That's the one that would have things like the PHY Status and Control register (offset 0x144) and the PCIe Configuration Space (where the BAR0 address is set). Most of that should be handled by the Xilinx driver for the bridge, though.

      Delete
    4. First I would like to thank you for your continued support. I tried using address 0x50000000; this address is also found in the xparameters.h file as XPAR_AXI_PCIE_0_AXIBAR_0. I changed the code to the following:
      // NVME Controller Registers, via AXI BAR
      // Root Port Bridge must be enabled through regRootPortStatusControl for R/W access.
      u64 * regCAP = (u64 *)(0x50000000); // Controller Capabilities
      u32 * regCC = (u32 *)(0x50000014); // Controller Configuration
      u32 * regCSTS = (u32 *)(0x5000001C); // Controller Status
      u32 * regAQA = (u32 *)(0x50000024); // Admin Queue Attributes
      u64 * regASQ = (u64 *)(0x50000028); // Admin Submission Queue Base Address
      u64 * regACQ = (u64 *)(0x50000030); // Admin Completion Queue Base Address
      u32 * regSQ0TDBL = (u32 *)(0x50001000); // Admin Submission Queue Tail Doorbell
      u32 * regCQ0HDBL = (u32 *)(0x50001004); // Admin Completion Queue Head Doorbell
      u32 * regSQ1TDBL = (u32 *)(0x50001008); // I/O Submission Queue Tail Doorbell
      u32 * regCQ1HDBL = (u32 *)(0x5000100C); // I/O Completion Queue Head Doorbell

      But the code now gets stuck in the nvmeInitController function at the following lines:
      *regCC &= ~REG_CC_IOCQES_Msk;
      *regCC |= (0x4) << REG_CC_IOCQES_Pos;

      It seems that the Zynq is unable to access the address of regCC. I am sure that I am missing something. Do I need to initialize BAR0? Do I need to do any kind of initialization to access this address?

      Delete
    5. The bridge must be enabled in order to read and write those addresses. The function nvmeInitBridge checks some registers before enabling the bridge - maybe it's failing those checks? Or maybe the control interface register pointers (regPhyStatusControl, regRootPortStatusControl, regDeviceClassCode) also need to be updated to match your address mapping?

      Delete
    6. I checked that the bridge is enabled by checking that the value is 1. I also tried the bare metal rc_enumerate_example, and it enables the bridge using the following code:
      /* Bridge enable */
      XAxiPcie_GetRootPortStatusCtrl(AxiPciePtr, &RegVal);
      RegVal |= XAXIPCIE_RPSC_BRIDGE_ENABLE_MASK;
      XAxiPcie_SetRootPortStatusCtrl(AxiPciePtr, RegVal);
      But even after that I cannot access address 0x50000000 using the Xil_In32 function or a pointer; the processor completely hangs.
      Maybe if you tell me how you figured out the 0xB0000000 address, I can follow the exact steps to make sure that I am not missing anything.

      Delete
    7. Have you also checked the mapping of the three CTL0 registers in nvme.c (regPhyStatusControl, regRootPortStatusControl, regDeviceClassCode)? In my design, the AXI-Lite Slave (CTL0) is mapped to 0x50000000. So if you now have the AXI-Slave (BAR0) mapped there, then CTL0 must be mapped to a different base address.

      As for why the AXI-Slave BAR0 interface is mapped to 0xB0000000 in my design: There's no specific reason for it. I allowed the Address Editor in Vivado to assign it and it wound up there. In my design, 0x00000000-0x7FFFFFFF is PS DDR4 RAM (2GiB) and then beyond that are some other AXI peripherals, so I guess 0xB0000000 was the first free space.

      Delete
    8. Yes, I have checked the mapping of the three CTL0 registers; I changed the base address to 0x60000000 and I can read the Class Codes of the SSD.
      Also, I debugged the initialization process in depth and found that the processor gets stuck inside the nvmeInitController(u32 tTimeout_ms) function, starting from the following lines:
      // I/O Completion Queue Entry Size
      // TO-DO: This is fixed at 4 in NVMe v1.4, but should be pulled from Identify Controller CQES.
      *regCC &= ~REG_CC_IOCQES_Msk;
      *regCC |= (0x4) << REG_CC_IOCQES_Pos;

      So basically I can access the BAR0 addresses:
      u32 * regAQA = (u32 *)(0x50000024); // Admin Queue Attributes
      u64 * regASQ = (u64 *)(0x50000028); // Admin Submission Queue Base Address
      u64 * regACQ = (u64 *)(0x50000030); // Admin Completion Queue Base Address

      But the processor gets stuck when it tries to access regCC at address 0x50000014 (u32 * regCC = (u32 *)(0x50000014); // Controller Configuration).
      What do you suggest?

      Delete
    9. Another note is that my PCIe NVMe link is x1, not x4, and I modified PHY_OK as follows:

      #define PHY_OK 0x00008b0

      in order to get through the nvmeInitBridge() function.

      Delete
    10. I figured out the problem: it was in the BAR0 vector translation in the IP. By default it is set to 0x40000000; I changed it to 0x00000000 and everything worked.

      Delete
    11. Great! Glad you were able to figure it out.

      Delete
  11. Also, please kindly explain to me how you determined the base address 0x10000000 used for the following pointers in your code:

    // Submission and Completion Queues
    // Must be page-aligned at least large enough to fit the queue sizes defined above.
    sqe_prp_type * asq = (sqe_prp_type *)(0x10000000); // Admin Submission Queue
    cqe_type * acq = (cqe_type *)(0x10001000); // Admin Completion Queue
    sqe_prp_type * iosq = (sqe_prp_type *)(0x10002000); // I/O Submission Queue
    cqe_type * iocq = (cqe_type *)(0x10003000); // I/O Completion Queue

    // Identify Structures
    idController_type * idController = (idController_type *)(0x10004000);
    idNamespace_type * idNamespace = (idNamespace_type *)(0x10005000);
    logSMARTHealth_type * logSMARTHealth = (logSMARTHealth_type *)(0x10006000);

    // Dataset Management Ranges (256 * 16B = 4096B)
    dsmRange_type * dsmRange = (dsmRange_type *)(0x10007000);

    ReplyDelete
    Replies
    1. These are the NVMe queues themselves and some other NVMe data structures. They can be anywhere in system memory, as long as the AXI Master interface of the bridge has access to them. I've put them in PS DDR4 RAM at 0x10000000 in order to reserve the bottom 256MiB (0x00000000-0x0FFFFFFF) for executing code. But they can go anywhere.

      Their location is passed to the NVMe controller during initialization, for example during nvmeInitAdminQueue() and nvmeCreateIOQueues().

      Delete
  12. Hi Shane! This is some great work... an amazing read.
    I am curious though - why 16 commands for the IO slip? Was this a number you found convenient or a compromise you found during testing? I would imagine submitting more commands shifts any bottleneck towards the PCIe core, but then results in a larger backlog (and larger queue) to keep track of.

    ReplyDelete
    Replies
    1. Thanks! I think you're right that as long as the allowed slip is enough to accommodate the latency, it won't be the bottleneck. I just chose a number that clearly got me into drive-write-speed-limited operation. I don't think there's much harm in making it larger, though, up to half the maximum queue depth size. It does increase the time that the data and PRP lists have to stay resident in memory, I guess.

      Delete
  13. Hi Shane,
    Many thanks for the amazing tutorial, it's a bright light one can see at the end of the long and tedious FPGA design tunnel (at least to me).
    I'm currently trying to build a system consisting of the ZynqMP (ZCU102) + AD9361 to capture RF data and offload it directly to the PS-PCIe NVMe drive.
    The issue I'm facing is speed. I target 200MB/s sustained over 15min of continuous write operations. The maximum I could achieve is 75MB/s at the end of 15min. My current configuration utilises:
    * meta-adi-core/ & meta-adi-xilinx/ user layers
    * PS-PCIe x2 using BAR0 = 64MB
    * everything else is set to default

    Is there a way I can push the PS-PCIe to 200MB/s?
    Many thanks in advance

    ReplyDelete
    Replies
    1. You might have to do some A/B testing to figure out what the bottleneck is. I don't think it's the bus: Gen2 x2 would still have a theoretical maximum of 1250MB/s, so even with overhead it should be more than fast enough. A commenter above mentions using the PS-GTR (in x4) for a 360 camera with pretty high data rates.

      I'm not familiar with meta-adi-*, but I guess that means you are in Linux. Have you tried a "dd" speed test, to rule out the ADI layer as a source of slowdown? That was where I started on this project, in this post: https://scolton.blogspot.com/2019/07/benchmarking-nvme-through-zynq.html. There are a bunch of useful comments there on using direct transfers to speed up the Linux NVMe driver as well.

      Delete
  14. This comment has been removed by the author.

    ReplyDelete
  15. Hi Shane, I did have an issue of writing zeros to the file rather than the data. I just found out that I need to set the cluster size of exFAT to 128k for it to actually insert the data correctly into the file. This is on a Samsung 980 Pro 1TB. I believe your example has it set to 1 MB. I just thought I'd throw this out there in case anyone else experiences the same problem.

    ReplyDelete
    Replies
    1. Were you able to troubleshoot this any further? Is it something specific about writing zeros, rather than data? What does the failure look like? I can try to replicate it to see if it's something in the driver itself.

      Delete
  16. Hi Shane, thank you for this. I was able to reproduce it with outstanding results. Now I'm looking to do the same using the PCIe with the processing system. I can get it running in PetaLinux but would like to do it baremetal. Do you happen to know if nvme baremetal drivers are available for PS?

    ReplyDelete
    Replies
    1. There aren't any that I'm aware of, but there is a PS-PCIe driver with example code that gets you up to the point where the bus is enumerated and BAR0 assigned. From there, it should be possible to link this NVMe driver to that BAR0 and make it work. (There are some other minor changes required; for example, the link-up check will now have to read the PS-PCIe registers.) Someone has definitely gotten it working, though. (See comment thread starting from March 20, 2021.)

      Delete
    2. I was able to configure the proper BAR address and got it working. Thank you! Your example helped a lot. I just had to remove the NVME bridge init as the PS isn't using the PCIe subsystem.

      Thanks again!

      Delete
  17. What a brilliant idea and execution thereof! It is exactly what I am looking for on my VCU118 using a 32 bit microblaze. The entire interface communication using SQ and CQ seems to be a popular programming concept (looking at the ERNIC IP / RoCE). I am in the process of porting your SSD_test code to my bare metal microblaze and I am getting a NVME_ERROR_ACQ_TIMEOUT during nvmeIdentifyController() because I assume the admin command is never executed. I believe this is because the driver only writes the SQE to DDR memory but does not trigger the PCIe IP to fetch the new SQE. (Maybe my BAR0 addressing is still wrong) but I am missing a mechanism to ring the SQ doorbell. Do you have a pointer why I am getting the timeout error or how to manually ring the SQ doorbell?

    ReplyDelete
    Replies
    1. I see where my problem is. The queues are not in DDR but in the PCIe memory space, correct? Where exactly is 0x1000_0000? I cannot find it in your AXI address space.

      sqe_prp_type * asq = (sqe_prp_type *)(0x10000000); // Admin Submission Queue

      Delete
    2. The queues can technically be anywhere the PCIe IP's AXI master can access, but in this case they are in fact in DDR. So 0x1000_0000 is a physical DDR address for the PS DDR, with no extra translation or mapping.

      The driver does ring the ASQ doorbell at the end of nvmeSubmitAdminCommand(). If the BAR0 mapping wasn't correct, regSQ0TDBL wouldn't be pointing to the right place. But I don't think you'd make it past nvmeInitController() in that case, since the other NVMe Controller registers would also be wrong or unreadable.

      You can try setting a breakpoint just before nvmeSubmitAdminCommand() and reading regCSTS on the debugger to see if it's readable and, if so, if it reports ready (Bit 0 set). Be careful not to leave the watch open when restarting, though, because it will be a memfault until the PCIe link is up.

      Delete
    3. To clarify further about the BAR0 mapping: the addresses assigned to the NVMe Controller registers should match the address you assign to the BAR0 AXI Slave of the PCIe IP in the Vivado Address Editor. For example,

      u64 * regCAP = (u64 *)(0xB0000000);

      implies that AXI Slave is assigned to 0xB000_0000. It should really be pulled from xparameters.h rather than hard-coded, but I was probably being lazy.

      However if you made it past nvmeInitAdminQueue() and nvmeInitController(), you probably already figured that out.

      Delete
    4. Just reading through the comment history on this post reminded me of two more things:

      One person above ran into an issue where the BAR0 _translation_ was set to 0x4000_0000 by default. This would mean that if you write to 0xB000_0000 in AXI space, it would map to 0x4000_0000 in PCIe space. It should still work correctly if the PCIe driver assigns 0x4000_0000 to the device's BAR0 register, but if for some reason it doesn't, that could also lead to problems. You can check the device's PCIe Config BAR0 register after pcieInit() to see what's actually there, and if it matches the configuration of the PCIe IP in Vivado. The address translation is confusing, so I just set it to 0x0000_0000.

      Second, since you're using a 32-bit processor, you'll have to break up regCAP, regASQ, and regACQ into two parts, I think. I did it like this:

      u32 * regCAP_L = (u32 *)(NVME_CONTROLLER_BASE + 0x0000);
      u32 * regCAP_H = (u32 *)(NVME_CONTROLLER_BASE + 0x0004);
      u32 * regCC = (u32 *)(NVME_CONTROLLER_BASE + 0x0014);
      u32 * regCSTS = (u32 *)(NVME_CONTROLLER_BASE + 0x001C);
      u32 * regAQA = (u32 *)(NVME_CONTROLLER_BASE + 0x0024);
      u32 * regASQ_L = (u32 *)(NVME_CONTROLLER_BASE + 0x0028);
      u32 * regASQ_H = (u32 *)(NVME_CONTROLLER_BASE + 0x002C);
      u32 * regACQ_L = (u32 *)(NVME_CONTROLLER_BASE + 0x0030);
      u32 * regACQ_H = (u32 *)(NVME_CONTROLLER_BASE + 0x0034);
      u32 * regSQ0TDBL = (u32 *)(NVME_CONTROLLER_BASE + 0x1000);
      u32 * regCQ0HDBL = (u32 *)(NVME_CONTROLLER_BASE + 0x1004);
      u32 * regSQ1TDBL = (u32 *)(NVME_CONTROLLER_BASE + 0x1008);
      u32 * regCQ1HDBL = (u32 *)(NVME_CONTROLLER_BASE + 0x100C);

      I guess you already figured that out too, though, or it wouldn't even compile.

      Delete
    5. Thanks for the explanation. I now understand better how the SQ mechanism is supposed to work (regSQ0TDBL). I still cannot get any Admin SQE to complete. My best guess is that I did not do a correct port from the 64-bit to the 32-bit architecture. In my opinion I don't need to split the registers into _H and _L. But perhaps somewhere else I don't have the correct setting yet (maybe in the memcpy of the SQE, endianness problems, or the countless pointer casts). Unfortunately, Vitis bugs prevent me from using a 64-bit MicroBlaze. Do you have a 32-bit version lying around somewhere?

      I am currently also suppressing the assert error in XDmaPcie_ReserveBarMem from XDmaPcie_EnumerateFabric that other users reported. And I am on PCIe x2, but I adjusted the value of PHY_OK accordingly.

      Delete
    6. You're right about the register pointers - they shouldn't need to be split. I forgot that I did that for a completely different reason, as part of a PCIe IP optimization.

      Aside from fixing some casts to clear warnings, I don't think I made any substantial changes for 32-bit compatibility when I set up a version for the Cortex-R5. I will double check.

      Were you able to check regCSTS to see if the device is reporting ready status (0x00000001)?

      Delete
    7. Yes, CSTS is 1, and the other registers seem to be correct too.

      // AXI PCIE bridge IP (AXI-Lite config interface BASE=0x2000_0000)
      0x2000_0144 = 0x0000_1882 // x2 lanes, Gen 3, link is up (PHY Status/Control Register)
      0x2010_0008 = 0x0108_02b0 // class code
      0x2000_0148 = 0x0000_0001 // bridge is enabled (Root Port Status/Control Register)

      // PCIE BAR0 (AXI_B BASE=0x6000_0000) (DDR4 BASE=0x8000_0000)
      // PCIE controller register
      0x6000_0000 = 0x0080_0030_1400_03ff (controler capability, CAP)
      0x6000_0014 = 0x0046_0001 // enabled (controler configuration, CC)
      0x6000_0024 = 0x000f_000f // AQA
      0x6000_001c = 0x0000_0001 // (controler status, CSTS)
      0x6000_0028 = 0x0000_0000_8000_0000 // in DDR offset 0x0000 (ASQ)
      0x6000_0030 = 0x0000_0000_8000_1000 // in DDR offset 0x1000 (ACQ)

      // create SQE in DDR memory
      // ASQ in DDR4 starting at 0x8000_0000
      0x8000_0000 = 0x0000_0006 // opcode 6h: identify
      0x8000_0004 = 0x0000_0000
      0x8000_0008 = 0x0000_0000
      0x8000_000C = 0x0000_0000
      0x8000_0010 = 0x0000_0000
      0x8000_0014 = 0x0000_0000
      0x8000_0018 = 0x8000_4000 // PRP1 = pointer to destination buffer to write results of ID opcode
      0x8000_001C = 0x0000_0000
      0x8000_0020 = 0x0000_0000
      0x8000_0024 = 0x0000_0000
      0x8000_0028 = 0x0000_0001 // CID = 1
      0x8000_002C = 0x0000_0000

      0x6000_1000 = 0x01 // write SQ0TDBL (Admin Submission Queue Tail Doorbell)

      Delete
  18. Hi,

    First of all, I would like to say this post is extremely helpful and appreciate the level of detail you explained PCIe in. I am using your project as a base and building off of it for my own implementation. I am very new to PCIe so I apologize if this is a dumb question, but how would I modify the project to become a one-lane implementation. I understand what has to be done on the PL side, but I am having a hard time digging through the vitis code for where the number of lanes is defined. Any help with this would be amazing. Thank you!

    ReplyDelete
    Replies
    1. I think the only necessary software change for x1 would be to the value of PHY_OK (nvme_priv.h, line 34). The bit definitions for the PHY Status/Control Register are in PG194.

      The rest should be handled automatically by the AXI-PCIe Bridge IP, so that the driver doesn't need to know how many lanes there are.

      Delete
    2. Okay awesome, again I thank you so much for your help! I will look into this. Also, I was wondering if there is any specific format the SSD needs to be in to R/W to the raw disk? Do I format it to cleared?

      Delete
    3. You shouldn't need to do anything to the SSD to be able to do raw disk R/W. There is an admin command for formatting and an I/O command for TRIM that can be used to deallocate all or part of the disk, but this isn't strictly necessary. If you issue a write command to an already-allocated block, it will overwrite the old data. Depending on the drive, link speed, and write workload, this might affect performance and endurance.

      Delete
  19. So I modified the PHY_OK parameter and that didn't work, so now I am digging through the functions again. I believe my issue is before this step, because I am failing to establish the PCIe link. It fails when checking if the Link is up. Code snip:

    /* check if the link is up or not */
    for (Retries = 0; Retries < XDMAPCIE_LINK_WAIT_MAX_RETRIES; Retries++) {
        if (XDmaPcie_IsLinkUp(XdmaPciePtr)) {
            Status = TRUE;
        }
        usleep(XDMAPCIE_LINK_WAIT_USLEEP_MIN);
    }
    if (Status != TRUE) {
        xil_printf("Warning: PCIe link is not up.\r\n");
        return XST_FAILURE;
    }

    I assume that the core is not being able to establish a connection. Any debugging ideas? Maybe I made a mistake in the PL?

    ReplyDelete
    Replies
    1. Some ideas for things to try:

      1. Make sure you're using Lane 0 of the PHY and the SSD. I don't think it can make an x1 link on any other lane. (Lane 3 might also work, depending on lane swapping ability?)

      2. Make sure the REFCLK is present at the PHY GT and SSD. Make sure it's set to the correct channel and frequency in the PL setup.

      3. Make sure the GPIO is set up correctly to drive the SSD's PERST# line.

      4. Make sure the GT Tx diff pairs have correct-sized (220nF) AC coupling capacitors. (If not, Rx detect might fail.)

      5. Check the LTSSM state (regPhyStatusControl[8:3]) - this might give a clue as to how far the PCIe controller is making it through initialization.

      Delete
    2. Thank you so much for these suggestions!

      "1. Make sure you're using Lane 0 of the PHY and the SSD. I don't think it can make an x1 link on any other lane. (Lane 3 might also work, depending on lane swapping ability?)"
      - I am using the "FPGADrive Gen4" card in conjunction with the ZCU104. I have the PCIe constrained to GTH Quad 226 (the zcu104 only has 2 quads, this is the quad that is connected to the FMC_LPC). The PCIe lanes are properly constrained to the FMC pins that connect to the "FPGADrive" card. Is that what you were asking about?

      "2. Make sure the REFCLK is present at the PHY GT and SSD. Make sure it's set to the correct channel and frequency in the PL setup."
      - The REFCLK is properly constrained from the "FPGADrive" card. I will test if the clock is running via an ILA.

      "3. Make sure the GPIO is set up correctly to drive the SSD's PERST# line."
      - I am confident that this is connected properly.

      "4. Make sure the GT Tx diff pairs have correct-sized (220nF) AC coupling capacitors. (If not, Rx detect might fail.)"
      - I will have to get back to you on this.

      "5. Check the LTSSM state (regPhyStatusControl[8:3]) - this might give a clue as to how far the PCIe controller is making it through initialization."
      - Is this the same as the dedicated output of the XDMA_PCIE IP block?

      Thank you so much!

      Delete
    3. Since you're using the ZCU104 and FPGADrive, that rules out a lot of potential hardware issues. The 220nF AC-coupling capacitors are on the FPGADrive board. They also have the lanes properly constrained in their .xdc, as far as I can tell. (PCIe Lane 0 is constrained to B226 CH3.) It might be worth double-checking that in the Implemented Design.

      Have you successfully built and run their ZCU104 reference design? It has the bare-metal example project (xdmapcie_rc_enumerate_example.c) in it that should achieve Link Up state. If that works, then it might be best to replace the C source in that project with this driver as a next step, modifying the addresses as necessary.

      For the LTSSM, I think the register value is the same as the hard-wired output, but I haven't tested that myself. You're looking for 0x10, which is L0 state. Any other state might indicate where the initialization is failing. (The states are listed in PG213.)

      Delete
    4. Okay, I think I have it working now. I went back to the "FPGADrive" example design and then used your PS code for the driver. Thank you for the help!

      Also, I am having a hard time understanding the PF0_BAR0 inside of the XDMA_PCIE IP block. Do I need it? I believe that you have this option checked in your example design, however, the "FPGADrive" example design does not. I was wondering if you could explain why that is?

      Thanks again!

      Delete
    5. Also, what is the difference between a PCIe BAR and an AXI BAR?

      Delete
    6. Great, glad it's working now!

      I just ran create_projects.tcl from scratch in Vivado 2021.1 and PF0_BAR0 is not enabled for me. I think that does address translation in the device-to-host direction, which I don't use for anything. (SSD just sends read and write requests to the physical memory address.) There is some sparse documentation on it in PG194:

      https://docs.xilinx.com/r/en-US/pg194-axi-bridge-pcie-gen3/Root-Port-BAR

      The AXI BAR0 can be used to do address translation in the host-to-device direction. This would be useful if you had multiple functions or devices and wanted to assign them different addresses in PCIe address space that aren't just mapped to a contiguous block in the bridge's memory assignment. But for a single SSD, you can just set it to zero and anything you write to the assigned BAR0 address in the memory manager will go to that SSD's NVMe Controller registers.

      Delete
    7. Thank you so much for all of your help! Thank you for the explanation about BAR0 as well.

      Delete
    8. Does your driver start reading/writing at the beginning of the SSD or does it start at some offset?
      I am asking this because I am attempting to read off an SSD I have written raw data to. I opened the SSD in linux as "/dev/sda" and then wrote some data to it. I know this works fine and have checked it by reading the data off on another linux machine. However, the data is not matching up when I then read it off via the fpga.

      Delete
    9. The srcLBA/destLBA in nvmeRead()/nvmeWrite() are raw LBAs, with no partition offset or anything like that. They should match /dev/sda, I think.

      Does the nvmeRead() actually modify the memory at destByte, just with incorrect values? Or is it not modifying the memory at all?

      Delete
    10. Okay awesome, thank you! Hopefully that won't be an issue then.

      I believe that the destByte is being populated, just with incorrect values.

      I wrote some code to populate the DDR buffer with about a kB of data. I then use your diskWriteTest() function to write that data (still write 1GB so that all of the block size stuff is taken care of) to the SSD. I then clear DDR by setting that kB space to zeros to ensure that I am not just reading off the same data in DDR but rather from the SSD. I then use your diskReadTest() function to read the data from the SSD and the data that I wanted to write is returned back to me.
      Now I am clearing the buffer in DDR for x amount of bytes and then reading 1GB off of the SSD via the diskReadTest() function. I check the data in the buffer and it does not contain non-zero values.

      Delete
    11. So upon further investigation, it looks like it is only writing one block at a time to DDR. I feel dumb because when I look at the code this makes sense. I misunderstood the code originally. I thought that it wrote one GB of data in DDR at a time but this is not the case. I see now that it does individual transfers to the base address of the "data" pointer.

      Delete