Sunday, September 12, 2021

TinyCross: New UI and Front Wheel Traction Control

In the last post, I finally did some actual data logging with TinyCross set up in 4WD, 80A peak per motor, which is the rated current. Based on tinyKart, I know they can handle a bit more for short durations, maybe even up to 120A. But the data logs (and many instances of having rocks flung into my face) demonstrate that the front wheels reach their traction limit somewhere around 60A on asphalt.

The behavior of front wheel slip on a go-kart is something new to me. In a straight line, the initiation of the slip and the acceleration of the wheel actually isn't the biggest problem. It's when the wheel regains traction and slows down that bad things happen. The restored grip combines with the energy being dumped from the wheel's moment of inertia to generate a quick pulse of torque on that side, which creates a lot of torque steer.

To deal with this, I wanted to implement some form of traction control, at least for the front wheels, so that I could get the most torque out of them as possible without the steering disturbances and rock shooting. But first, I needed a way to easily configure both the motor currents and the traction control settings without having to drag around my laptop everywhere. So, I finally built out the steering wheel UI to include a bunch of settings:

Sorry for the exposure; it's the only way to capture the full OLED refresh period.

Anyone familiar with the MōVI Controller might recognize the OLED display. I chose this for daylight visibility and responsiveness (~50Hz update rate). The menu interface is essentially the same as the one I built the day before NAB 2014... The left knob scrolls through the menu. The right knob adjusts settings and, by clicking or holding, performs actions.

In the four corners are three motor parameters for the corresponding motors: S for Status, which shows error codes; F for Forward peak current; and R for Reverse (braking, or actually reversing) peak current. Setting both F and R to zero masks out the CAN command for that motor, triggering a timeout that turns off the gate drivers entirely. A click and hold on S triggers an encoder recalibration for that motor.

In the second column from the left, the first three settings relate to data logging: LS for Logger Status, FN for File Number (click to start a new file), and LT for Logger Time, the time in [ms] for a single row of the data log to be written. Then, there are two parameters for tuning traction control: TT for Traction Threshold, and TG for Traction Gain, which I will explain shortly.

The reason I wanted to be able to adjust peak currents from the steering wheel is because I agree with this early Tesla blog post: "...it's much safer to avoid wheelspin altogether than react to it." If I know the surface supports front wheel current around 60A, there's not much point in setting it higher than that. But, I want to be able to set it higher for testing, or adjust it for different surfaces.

As for the traction control itself, there are a lot of corner cases to think about in 4WD, but the main problem I'm trying to solve is front wheel slip. If I assume the rear wheels are not slipping, then I can use their average speed as a reference. From there, it's easy to see if a front wheel is running faster than that reference, and reduce the current to that motor if so. This only needs two settings: a Traction Threshold (TT) that sets how much wheel slip is allowed, and a Traction Gain (TG) that sets how much to reduce the current per unit slip above the threshold. The Traction Threshold prevents overactuation in normal conditions and allows for speed differential due to turning radius.
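
This foldback law fits in a few lines. Here's a minimal sketch in C, assuming a per-wheel update in the current command path; the names, units, and structure are mine, not the actual firmware:

    // Sketch of the front wheel current foldback described above.
    float traction_limit(float i_cmd,       // requested current [A]
                         float w_front,     // this front wheel's speed
                         float w_rear_avg,  // average rear wheel speed (reference)
                         float tt,          // Traction Threshold: allowed slip
                         float tg)          // Traction Gain: [A] per unit slip
    {
        float slip = w_front - w_rear_avg;
        if (slip > tt) {
            i_cmd -= tg * (slip - tt);       // fold back current above the threshold
            if (i_cmd < 0.0f) i_cmd = 0.0f;  // never command negative torque here
        }
        return i_cmd;
    }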

But what happens if a rear wheel does slip? Well, then the front wheel might slip too. At that point, I'm probably in some kind of a four wheel sideways drift anyway, so alternate control laws are going to apply. Being able to trigger some rear wheel slip with the throttle is part of the fun, too, so having complete 4WD traction control isn't something I necessarily need to solve.

With the new UI setup and the simple front wheel traction control in place, it was time to do some tuning...

...or not.

At first, everything seemed to be going okay. I did a couple of runs at 60A front current and 80A rear current and the traction control seemed to be working as intended. But then during light regenerative braking at around 30mph, I heard the all-too-familiar sound of a FET popping, followed by some more bad noises and smells from the front drive. Upon inspection, only two FETs actually died, but they also took out many of the power traces, meaning this board was trash.

So what happened? Well, unfortunately, the data log was not very helpful in this case. It did show the speed (30mph) and current command (around -10A), but nothing out of the ordinary up until the point of failure. There is only one data point showing a Q-Axis current of 286A on the front left motor, followed by an undervoltage fault, which might have been the battery sagging or the power input traces getting blown up. So whatever happened, happened quick.

It's been a while since I've actually destroyed a motor controller, so I was a little disappointed. But after some thought, I didn't think this was due to the new traction control stuff. That's only applied during acceleration, and this failure definitely happened under braking. I think it's more likely that the front left motor just lost sync and the back EMF at 30mph was high enough to do damage. Up until now, I have only had a relatively slow overcurrent limit of 160A (or more) for 10ms. These FETs have a pretty insane Safe Operating Area (SOA), but that slow limit still leaves room for currents above 400A to exceed it:

This system could easily generate a 400A transient if a motor loses sync at 30mph. And the motor position and speed data does cut out at the same data point as the failure. But that's not enough to determine cause and effect. So for now I can only make changes that might help and hope for the best. I added in several more stages of faster overcurrent protection, up to 300A for a single ADC/PWM cycle (42.7μs). These overlap enough to cover the entire R_DS(on)-limited boundary of the SOA (up to the pulse rating of 1450A for 100μs!).
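
For illustration, staged overcurrent protection can be as simple as a table of thresholds and trip times checked every cycle. Only the 160A/10ms and 300A/single-cycle stages below come from the post; the intermediate stage (and everything else about this sketch) is hypothetical:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        float threshold_a;     // trip threshold [A]
        uint32_t trip_cycles;  // consecutive 42.7us cycles allowed over threshold
        uint32_t count;
    } oc_stage_t;

    static oc_stage_t oc_stages[] = {
        {160.0f, 234, 0},      // ~10ms
        {220.0f,  23, 0},      // ~1ms (hypothetical intermediate stage)
        {300.0f,   1, 0},      // single ADC/PWM cycle
    };

    // Called once per ADC/PWM cycle with the measured current magnitude.
    bool overcurrent_check(float i_meas)
    {
        for (size_t i = 0; i < sizeof(oc_stages) / sizeof(oc_stages[0]); i++) {
            if (i_meas > oc_stages[i].threshold_a) {
                if (++oc_stages[i].count >= oc_stages[i].trip_cycles) {
                    return true;  // trip: shut down the gate drivers
                }
            } else {
                oc_stages[i].count = 0;
            }
        }
        return false;
    }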

A faster overcurrent trip doesn't help with whatever caused the motor to lose sync in the first place (if that is what happened). I have seen at least a couple of previous instances where the encoders, which supply emulated Hall effect sensor signals, have behaved as if they were completely reset. Even though I only use the buffered and optically isolated virtual Hall effect sensor signals for commutation, I was still reading the SPI data anyway. Maybe a SPI read got corrupted by noise and turned into a write that either reconfigured or entirely reset the encoder mid-run? To protect against this, I've now disabled SPI transactions entirely except during initialization and calibration.

So with these changes and my last and only spare drive, I went back out for another try. This time, I ran into no motor drive issues and was actually able to test and tune the front wheel traction control as I originally intended. The difference is immediately obvious while driving and in the data. First, a test at 80A front, 90A rear, with no traction control:

Front wheel traction control off.

As before, the front right wheel starts slipping at about 60A and spins up to 2-3x the actual ground speed. The front right always seems to lose grip first, a mystery to solve another day. When I let off the throttle and it regains traction, the torque pulse creates substantial torque steer, jerking the steering wheel almost 20º to the left, which I then have to counteract immediately to stay on course. Overall, it's impossible to sustain peak acceleration for more than a second or so before having to deal with the wheel spin and torque steer.

And now with the same currents, but front wheel traction control on:

Front wheel traction control on.

The front right (FR) current now averages a bit below 60A and its speed is held to just a small margin above the actual ground speed. It's never able to build up momentum and then "catch", inducing torque steer. This allows continuous acceleration up to and past 30mph. The front left (FL) also starts to slip in the 20-30mph range, but the traction control catches it too. The overall result is a much more controllable launch and far fewer rocks being thrown up by the front wheels.

After finding traction control settings that I liked, I switched back to current settings that more closely match the actual traction limits: 60A front and 100A rear. This still gives a reasonable 0.45g launch, but with less likelihood of triggering the traction control on asphalt. I'd like to push to >0.5g, to match tinyKart's most extreme configuration, but that'll either require 120A on the rear or changing the gear ratio a bit. At 60A / 100A, the front motors still share enough of the load that the rear motors stay at healthy temperature after some acceleration runs:

Rear motors are doing most of the work, but...

...they are at a reasonable temperature.

And finally I did some less structured testing by just driving through the gravel corner in my parking lot and intentionally adding throttle to induce slip. It behaves pretty well, slipping and oversteering about the right amount to be controllable but still fun:


I think at this point most of the handling bottlenecks are back on the mechanical side. There's a small amount of backlash in the steering column that definitely exaggerates the residual torque steer, especially at high speeds. It's almost all coming from the U-joint, which I may try to shim or replace with one that has tighter tolerances. Other than that, I need to do some suspension geometry tweaking to improve handling of lateral transients. Speaking of which, here's one last data capture. See if you can figure out what's going on here...

Mystery data log.

Sunday, August 15, 2021

TinyCross: 4WD 80A Data Logging

It's been a long time since I did a proper test drive with TinyCross, although I've taken it out just for fun a few times. Since I completed the weight/width reduction pass last week, I wanted to get it out again and do some proper data logging in 4WD, with the peak current set to 80A for all four motors. This is still below the ultimate target of 100-120A (for short bursts), but plenty for parking lot testing.

Really enjoying the extra 2" of clearance - I can get through most of the "doors" in my building now.

I had to inflate the tires, but amazingly the air shocks don't seem to have leaked at all after a year of neglect. And they still do a pretty impressive job of soaking up the awful topography of my parking lot.

I wanted to do some more thorough data logging in 4WD to characterize some of the issues I've felt while just driving around for fun. The steering wheel PCB collects data from the front and rear motor drives over CAN, appends some of its own data, and writes the whole thing to a microSD card. When I first set this up, I just had it overwrite the existing data log every power cycle. But in the couple of years since then, I've had to master FatFs, so setting it up to create new files on the fly without messing up any of the real-time stuff was an easy upgrade.
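
The on-the-fly file creation is a one-function job in FatFs. A minimal sketch, with a file naming scheme that's mine, not necessarily the kart's: opening with FA_CREATE_NEW fails with FR_EXIST if the file already exists, so incrementing the file number until the open succeeds yields a fresh log each time.

    #include <stdio.h>
    #include "ff.h"

    static FIL log_file;
    static unsigned int file_num = 0;

    FRESULT log_open_next(void)
    {
        char name[16];
        FRESULT res;
        do {
            snprintf(name, sizeof(name), "LOG%04u.CSV", file_num++);
            res = f_open(&log_file, name, FA_WRITE | FA_CREATE_NEW);
        } while (res == FR_EXIST);
        return res;  // FR_OK if a new log file is open for writing
    }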

Here's what a 4x80A launch looks like:

4x80A launch (attempt).

The main problem is pretty obvious from the data: the front wheels just don't have enough weight on them to support 80A. If there's even a little bit of a loose surface, one or both front wheels will lose grip. Excessive wheel slip is inefficient, so the peak acceleration isn't as high as it could be if all four wheels hugged their grip limit. But front wheel slip is especially bad because it results in massive torque steer. (I actually used this to make remote-control TinyCross.) It also has a habit of throwing rocks up into the driver's face.

I've even debated whether the front wheel drive on TinyCross is worth the extra weight and complexity. tinyKart handled pretty well with RWD only: I could put in a controlled amount of oversteer with the throttle. In fact, I got a chance to test out how TinyCross feels with RWD only when I had - let's call it an 80/20 failure - on the front right upright:

Always check your T-nuts! The only real casualty was the encoder wire.

Although I was able to fix the mechanicals with the single hex driver I always bring with me, a few crimps pulled out of the encoder wire and I didn't have the tools to fix it. I could probably add a failover to sensorless operation for individual motors, but I'm not sure how well it'd work on the front motors, again because of torque steer. (Both fronts would have to agree to not produce torque until the flux estimator converges on the sensorless motor.) For now, I just removed power from the front drive.

In terms of handling, RWD works fine. But the launch is a mere 0.25g at 2x80A. There's no slip, and even if there was, it wouldn't matter as much on the rear since it doesn't induce torque steer.

2x80A launch.

Even at 120A, this would only be about a 0.4g launch. tinyKart, in its last and somewhat scary configuration, was hitting about 0.5-0.6g. Part of this is down to gearing: TinyCross, with 12.5" wheels, has to be geared for higher speeds. I could always ditch the front motors and switch to 80mm motors with more torque on the rear. But I think that goes against the spirit of TinyCross. Having full independent suspension and 4WD has always been the point.

So I think I'll finally have to dive into writing some simple traction/launch control software. Just looking at the 4x80A launch data, it's easy to pick out the wheel that's slipping and imagine that the software could just fold back the current command to that wheel as its speed starts to diverge from the other three. But there are so many logical knots on the path to generalizing that to 4WD, where any subset of the four wheels could be slipping, that it makes my brain hurt to even think about.

There are some amazing technical blog posts from the early days of Tesla (back when it was more of an engineering project than a consumer electronics device) where they talk about how it took months to go from a controller with excellent high-bandwidth torque control to functioning traction control, and even then a lot of it was subjective. One observation I really liked:

This type of feedforward traction control can be hugely beneficial; for instance, it's much safer to avoid wheelspin altogether than react to it.

This was regarding a lateral G observer that was fed into the friction model that the traction control software used to help limit motor torque to what it thought the tires could reasonably handle. This way, wheel slip might be limited to cases where there truly is a sudden drop in friction at one wheel. I think that should be the goal for this as well. I might even be able to just do slip detection on the front wheels. It'll be an interesting experiment, at least.
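
To make the feedforward idea concrete, here's a toy version of a friction-model torque cap. This is purely my illustration of the concept, not Tesla's implementation and not anything running on TinyCross:

    // Cap commanded current at what a simple friction model says the tire
    // can transmit. All names and the model itself are illustrative.
    float feedforward_current_limit(float normal_force,  // estimated wheel load [N]
                                    float mu,            // estimated friction coefficient
                                    float wheel_radius,  // [m]
                                    float kt)            // motor torque constant [Nm/A]
    {
        float torque_limit = mu * normal_force * wheel_radius;  // tire torque capacity
        return torque_limit / kt;  // the current that produces exactly that torque
    }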

Saturday, August 7, 2021

TinyCross Weight and Width Reduction Pass

It's summer, which means it's time to work on go-karts. This round, it's a modification to TinyCross that I've been wanting to make ever since I first got it together about two years ago. The main issue is that I designed it around stock rear 12.5" scooter wheels. These are almost symmetric, with threading on both sides of the hub meant for mounting the drive sprocket and brake disk. But - and this is maybe my favorite bit of packaging on this project - I've got the brake and drive sprocket both mounted to the inboard side, with the brake caliper sitting right in the middle of the belt:

The brake and drive sprocket are both mounted to the inboard side of the wheel, making the outboard side of the hub dead weight.

This makes the extended length of the outboard side of the hub useless. But, I left it as stock for simplicity. I figured if I ever needed to replace the wheels, it would be easier to drop in a new stock 12.5" wheel. But, this drives the overall width of the kart up to about 35" for no good reason:

The total width, about 35", is driven in part by the symmetric 12.5" wheel hubs.

It's also unnecessary weight, especially factoring in the beefier 5"x5/8" hex standoffs I used to close the structural loop around each wheel. I figured I could shave 2" off the total width and about 1lb off the total weight if I just bit the bullet and re-machined the 12.5" wheel hubs. It still wouldn't fit through a 32" door frame, but it would be easier to wiggle through indoor spaces and fit in my car. It also would just look a lot nicer.

One of the reasons I put off this modification for so long is that I thought it would involve disassembling the entire wheel module, but it turns out that it's just barely possible to remove the wheel without removing the motor. I can take off the brake caliper and slip the belt off the pulley to give it just enough slack to pull the wheel off the spindle shaft. I don't remember intentionally designing it this way, but let's pretend I did. It'll be good for fixing flats, too.

The next obstacle to overcome was removing the outboard bearings. I didn't have a bearing puller on-hand, but I discovered that an 80/20 T-Nut (which I obviously have hundreds of...) is just about exactly the right size to push on the outer race of these bearings. So I came up with this improvised tool:

Improvised bearing pusher.

The tool is built inside the hub by slipping the 80/20 T-Nut through the bearing, flipping it horizontal, then dropping in the hex standoff from the other side. After fastening it together with a 1/4-20, it's ready for the press. Luckily, I didn't Loctite these bearings in, so they pressed out pretty easily.

Pressing out the bearings using the makeshift pusher.

The 12.5" wheels don't fit on my mini lathe, but they do just barely fit on my mini-mill. I knew this ahead of time, so I bought a 22mm end mill specifically for cutting the new bearing pocket. (One of the nice features of this mini-mill is its use of a regular R8 spindle, so it's possible to get large tools for it.) I did have to get a little creative with fixturing. The brake disk is bolted down to a piece of 80/20, which is clamped in the mill. But, to make things stiff enough, I also had to ground the rim itself directly to the bed with some long clamping screws.

Clamping situation: not great, not terrible.

Pretty sure this mill was never meant to hold a tool this big.

I decided to extend the bearing pocket by 1.000" first, before machining down the hub by 1.000". I'm not sure if this was the best order of operations, but it all went pretty smoothly. Here's 7:45 of relaxing slow-motion bearing pocket cutting, captured at 4K 420fps with my Wave:

These hubs are cast aluminum, so it wasn't surprising to find that there were some voids in the newly-machined faces. They're nothing that I think would affect the structural integrity, but it's an interesting consequence of the manufacturing process.

Casting voids exposed by re-machining the hubs.

One of the downsides of doing this operation on the mill is that I had no way to machine the new bearing pocket to an interference fit. But I was pleased to see that, with all the extra effort put into stiffening the fixture, it was still a nice slip fit. I can always add Loctite later if needed.

After re-machining, the bearings are now a nice slip fit.

That just leaves the 7075 spindle shafts, which also needed to be shortened by 1.000". Cutting off the extra length and extending the outboard mounting hole was a quick task for the mini-lathe. Then, it just needed to be re-tapped.

Shortening the 7075 spindle shafts...

...and re-tapping.

Finally, I put everything back together, substituting much lighter 4"x1/2" hex standoffs to span the gap at the top of each wheel module. The total process took only about two hours per wheel, including disassembly and reassembly. So something I have put off for two years was really only one day of work...typical. Anyway, the final result is a kart that's now 2" narrower and about 1lb lighter.

The pile at the front is roughly the weight saved. (5"x5/8" standoffs were replaced by 4"x1/2", but an equivalent amount of weight was taken out of each hub.)

I have a few more tasks I want to do on this kart. It still needs to be fully weather-proofed. I have a plan for enclosing the motor drives, but need to figure out something for the steering wheel PCB. I may redesign that board from scratch since I don't think I'll ever get to using the battery balancing circuit on it. It can be much smaller and simpler without that. Lastly, there's always motor drive stuff to fiddle with to squeeze out more torque and/or speed.

For now, though, I'm glad it's a little lighter and a lot narrower. It'll make deployment that much easier, which ultimately means more actual testing and use.

Saturday, April 18, 2020

Full-Speed CMV12000 Subsampled Readout: 1440fps 1080p

Now that I've got a continuous multi-Gpx/s image capture pipeline running, it's time to rearrange some things to break the 1000fps barrier:



For this clip I'm using the CMV12000's X/Y subsampling mode to trade resolution for frame rate, hitting 1440fps at 2048x1088. The overall pixel rate is a little lower than in 4K (3.2Gpx/s vs. 3.8Gpx/s), so it's feasible to send this through the same Zynq Ultrascale+ capture pipeline, with some modifications, to record continuously to an NVMe SSD. With ~4:1 wavelet compression, this writes about 1GB/s to the drive, up to 1000s (16.7min) for a 1TB drive. That would be 16.7 hours of playback at 24fps, though. I figured 30 seconds real-time and 30 minutes of playback was enough water droplet footage for now.
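
For reference, the arithmetic behind those numbers (the sensor outputs 10-bit pixels):

    2048 × 1088 px/frame × 1440 fps ≈ 3.2 Gpx/s
    3.2 Gpx/s × 10 bits/px ÷ 8 ≈ 4.0 GB/s raw
    4.0 GB/s ÷ 4 (wavelet compression) ≈ 1.0 GB/s to the SSD
    1 TB ÷ 1.0 GB/s ≈ 1000 s of record time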

CMV12000 Subsampling

In a previous post, I covered the pipeline architecture for continuously recording 400fps 4K video from a CMV12000 image sensor to an NVMe SSD. That was a 4096x2304 (16:9) frame, slightly larger than 4K UHD. The sensor's native resolution is 4096x3072 (4:3), which it can read in at 300fps. By reading in fewer rows, the maximum frame rate is increased. Going wider than 16:9 would allow frame rates higher than 400fps, but since the sensor always reads in full 4096px-wide rows, the speed gain is only linear.

To go much faster, it's necessary to read in fewer columns as well. Not all sensors can do this; reading whole rows may be baked into the hardware architecture. The CMV12000 doesn't support arbitrary readout width, but it does support 2x subsampling. In this mode, every other four-pixel square (Bayer group) is skipped in both the X and Y directions. The remaining squares are transmitted on the LVDS channels using an alternate packing:

CMV12000 subsampled readout (color, X-flipped).
Each of the 64 LVDS channels alternates between two rows, with the lower 32 channels handling two even (G1/R1) rows and the upper 32 channels handling two odd (B1/G2) rows. This alternate data packing allows the subsampled image, with 1/4 as many total pixels, to be read out nearly 4x faster. There is a small amount of extra overhead time that makes the actual gain not quite 4x.

Subsampling drops the resolution from 4K to 2K but preserves the crop factor of the sensor, since the full width and height are still used. This is preferable to cropping a 2048px-wide image out of the middle. It doesn't give any increase in sensitivity though; to do that would require binning (averaging the larger 4x4 squares to generate the final 2x2). The CMV12000 does support binning, but the overhead is so bad that you might as well read out the 4K image and do it in post (assuming you have the data storage bandwidth, which I certainly do). So to go ~4x faster, I will need ~4x more light.

Light sensitivity of subsampling vs. binning.
Before worrying about a shortage of photons, though, I first need to deal with a shortage of programmable logic. To fit everything on the XCZU4, my main bottlenecks are BRAMs and LUTs. I managed to add the decoder for HDMI output with no increase in either by sacrificing the third wavelet stage. But I've known for a long time that the day would come when I would need to add 128 more Stage 1 horizontal cores to handle the subsampled inputs.

It might seem odd that more cores are needed to process a smaller image. Even at the higher frame rate, the pixel input rate is lower than in 4K. Surely the existing horizontal cores could time-multiplex to handle the data? But, the wavelet cores must operate on groups of adjacent pixels. In this case, adjacency describes the nearest horizontal pixels of the same color, since applying a difference operation to pixels of different colors would not have the desired result. And whatever the color, pixels from another row are not horizontally adjacent. Since each LVDS channel now services two color fields and two rows, it must feed four independent wavelet cores.

In 2K mode, each LVDS channel feeds four independent Stage 1 horizontal cores.
So, the total number of Stage 1 horizontal cores doubles from 128 to 256. This jump has been on my mind since the early stages of the design, and I tried to optimize the horizontal cores as much as possible. A big part of this was reducing the operating pixel width from 16-bit to 12-bit, which brought the per-core LUT count down from 107 to 83. As this is the first stage of the pipeline, it's easy to verify that it won't saturate on 10-bit inputs. The horizontal cores operate in-line with the input using only distributed memory, so no additional BRAMs are required. But there's no way around the additional 10,000 or so LUTs, and that will bring me right up to the limits of this chip.

Since I knew there would be very few LUTs remaining for switching modes, I originally thought the 4K and 2K modes might have to exist as entirely separate PL configurations, their bitstreams loaded as needed by software. I've seen other cameras do this; it looks like a software reset when changing capture formats. And while it only takes a few seconds, I really dislike the workflow and the idea of maintaining two configurations.

So, I spent some time looking at the actual differences between modes at all stages of the pipeline and decided that I could and should build the switch. I had this mode change in mind early in the design, so I tried to minimize the number of touch points required in each of the modules to switch between 4K and 2K. Even so, there are a number of small changes needed in the Wavelet, Encoder, and HDMI modules. They are collectively driven by a master switch in each module's AXI slave registers. I'll go through them in pipeline order below.

Wavelet Stage 4K/2K Switch

First, no actual switching is required to distribute the inputs to the Stage 1 horizontal cores; each channel always connects to the same four cores. Instead, the cores are gated by a master pixel counter based on their color and, when in 2K mode, also their row. The 2K mode switch turns on this extra enable gate and offsets the counter that handles first/last row states by one bit, to account for the half-width rows. Miraculously, this did not add any LUTs to the horizontal cores. I assume the extra logic just got merged into existing smaller LUTs...I'll take it.

The most complicated part of the switch happens next, at the interface between the Stage 1 horizontal and vertical cores. Instead of distributing outputs from four adjacent horizontal cores into a single row of a vertical core BRAM, the 2K interface distributes outputs from eight horizontal cores into two rows of a vertical core BRAM. Since the rows are half as wide, this takes the same number of pixel clock cycles (128). So, as will be the case at many points in the pipeline, this just boils down to rearranging the bits of the BRAM write address:

Aspect ratio change and read/write addressing of the Stage 1 vertical core BRAMs in 4K vs. 2K mode.
Conceptually, the aspect ratio of the vertical core BRAM changes from 8 rows of 256px to 16 rows of 128px. The figure above shows where writes and reads occur in the BRAM at a given relative pixel count. Reads occur on half-counts since the Stage 1 vertical DWT operates at double px_clk frequency. The read address generator is also modified by the switch to account for the new aspect ratio. Only the eight most recent rows are actively written or read, so in 2K mode the BRAM is twice as big as it needs to be. The latency of the vertical core is also halved, since it's determined by the number of rows required to complete the vertical DWT operation. This will come into play later.
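
In software terms, the remap is just a different interpretation of the same address bits. A sketch of the idea (in C rather than the actual Verilog, with bit positions inferred from the row dimensions above):

    #include <stdint.h>

    // Stage 1 vertical core BRAM write address: 8 rows of 256px in 4K mode,
    // 16 rows of 128px in 2K mode, in the same 11-bit address space.
    uint16_t v1_write_addr(uint16_t row, uint16_t col, int mode_2k)
    {
        if (!mode_2k) {
            return (uint16_t)(((row & 0x7u) << 8) | (col & 0xFFu));  // 8 x 256
        } else {
            return (uint16_t)(((row & 0xFu) << 7) | (col & 0x7Fu));  // 16 x 128
        }
    }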

The Stage 1 vertical core buffers the alternating-row 2K mode inputs into a single-row format that's compatible with the rest of the pipeline, so changes after this point are relatively minor. Each Stage 1 vertical core feeds its output row to a Stage 2 horizontal core. The only modification required there is to offset the counter that handles first/last row states by one bit, to account for the half-width rows. Then, the Stage 2 vertical core just needs some more BRAM address rearrangement:

Aspect ratio change and read/write addressing of the Stage 2 vertical core BRAMs in 4K vs. 2K mode.
Like the Stage 1 vertical core BRAM, the aspect ratio is changed from 8 rows of 256px to 16 rows of 128px. But since the first stage already rearranged things into single rows, the write addressing here is more straightforward: In both 4K and 2K mode, only a single row is filled at a time (by two adjacent Stage 2 horizontal cores). The row width is halved, but there's no write interleaving between the two rows. Ultimately, this is just a different arrangement of the write address bits. The read address generator is similarly modified to grab the right data for the Stage 2 vertical DWT. As with the Stage 1 vertical core, the BRAM is twice as big as it needs to be, and the latency is halved.

Encoder 4K/2K Switch

The compression stage doesn't care about the aspect ratio change, since the only context it uses for variable-length encoding is an immediate group of four pixels. However, it does need to know the adjusted latency of both wavelet stages, since the first pixel to be encoded will arrive sooner in 2K mode. For that, I just made all the latency offsets software-defined, through the encoder's AXI slave registers. And that should be the only change required here...

Except things are never that easy. I noticed that after plugging in the expected latency values in 2K mode, two of the four color fields (R1 and G2) were actually dropping one pixel per row. It took a while to isolate this to the encoder, and then even more staring at this module to figure out what the problem was. Since the only change I made was to the latency offsets, I figured there had to be some fundamental difference between how the local pixel counter (px_count_e) drives the encoder states during row transitions with different offsets, and there was:

Encoder gating in 4K mode, showing the difference between sequential and combinational px_count_e_updated.
The above shows px_count_e at the first row overhead time (ROT) in 4K mode. It's negative since pixels haven't made it to the encoder yet, but the same behavior happens at all subsequent row transitions. During ROT, the sensor is not sending pixel data and all the pixel counters (including px_count_e) hold their previous values. A signal called px_count_e_updated is cleared, which gates the encoder from sending pixels to RAM (via an intermediate shift register called e_buffer). This signal was previously sequential, which would add one clock cycle delay between the ROT and when the encoder is gated. It should have been combinational, to line up correctly with the ROT.

But the write to e_buffer also only takes place every other group of four pixel clocks, for reasons discussed here. In 4K mode, the ROT happens to fall in a period where writes don't occur anyway. The sequential vs. combinational difference didn't matter to the final e_buffer_wr_en signal. But in 2K mode, the new latency offsets just happen to put the ROT one cycle before the start of a four-cycle write sequence, where the difference does matter:

Encoder gating in 2K mode, showing the difference between sequential and combinational px_count_e_updated.
After switching over to combinational logic for px_count_e_updated, the missing pixel returned, and things were almost happy again. It turns out there was a similar issue at the quantizer and encoder modules themselves, before the write to e_buffer. This was simply due to them not being enable-gated at all, though. (Again, it must have been working thanks to lucky latency offsets in 4K mode.) Gating each with the same combinational px_count_e_updated signal worked fine.

HDMI 4K/2K Switch

But wait, isn't the HDMI output always 1080p? While that is true, it doesn't mean there's nothing to be done here. In 4K mode, only the Stage 2 wavelet compression is decoded, leaving a 2K preview image (really, four color fields that are each 1024px wide) to be output via HDMI. This greatly reduces the size of the HDMI module, since it only has to decode four of the sixteen codestreams and do one stage of inverse DWT. However, getting to the same preview size in 2K mode would mean complete decoding, requiring all sixteen codestreams and two wavelet stages. I simply don't have room to do that, so I'm going to cheat.

The first step is to change how the viewport is mapped to a pixel count. To achieve arbitrary scaling of the preview image, I first normalize the viewport to 16-bit, i.e. top-left (0, 0) to bottom-right (65535, 65535). The x and y components, vxNorm and vyNorm, are shifted around to create the pixel counters that drive the output pipeline. When switching from 4K to 2K, each component gets right-shifted by one and the split between x and y moves over by one bit in the final counter:
Mapping between 16-bit (vxNorm, vyNorm) coordinates and opx_count in 4K vs. 2K mode.
This remapping means that the entire output pipeline operates at half resolution in 2K mode. The preview will actually just be scaled up from the four LL1 color fields, which are each 512px wide. There will still be bilinear interpolation to help smooth out the result, but it will be blurrier than the 1080p preview in 4K mode. But again there isn't really an alternative, at least not with the resources I have left on this chip.
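
A sketch of that counter remapping, again in C rather than Verilog; the exact bit positions are my inference from the field widths (1024px in 4K mode, 512px in 2K mode):

    #include <stdint.h>

    // Map normalized viewport coordinates to an output pixel counter. In 2K
    // mode, each component is right-shifted by one more bit and the x/y
    // split in the counter moves over by one.
    uint32_t viewport_to_opx(uint16_t vxNorm, uint16_t vyNorm, int mode_2k)
    {
        if (!mode_2k) {
            return ((uint32_t)(vyNorm >> 6) << 10) | (uint32_t)(vxNorm >> 6);  // 1024px fields
        } else {
            return ((uint32_t)(vyNorm >> 7) << 9) | (uint32_t)(vxNorm >> 7);   // 512px fields
        }
    }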

The output pixel counter (opx_count) drives all parts of the decoding process, starting with a RAM reading FIFO through the HDMI module's AXI master. No changes are required there or in the decoder itself, other than modifying the latency offsets accordingly. These have always been software-defined, so I just added the expected values for 2K mode and they worked without any hassle. (There was no equivalent sequential vs. combinational bug, thankfully.)

After this, the modifications to the Stage 2 inverse vertical wavelet cores are pretty simple and almost the same as in the forward direction. Each color field's IV2 core uses a single URAM for row storage. In 2K mode, the aspect ratio is changed from 16 rows of 1024px to 32 rows of 512px, by rearranging read and write address bits:

Aspect ratio change and read/write addressing of the Stage 2 inverse vertical core URAMs in 4K vs. 2K mode.
Unlike the forward direction, the Stage 2 inverse horizontal wavelet cores also use URAMs for row storage and these likewise need address bit rearrangement to change aspect ratios for 2K mode:

Aspect ratio change and read/write addressing of the Stage 2 inverse horizontal core URAMs in 4K vs. 2K mode.
And finally, the bilinear interpolation module needs to be adjusted to automatically scale up the preview image by 2x, so it can fill the viewport using the 512px-wide color field LL1 outputs. This can be done quickly by passing the shifted vxNorm and vyNorm values to the module, although this isn't quite correct, as will be discussed below. It's good enough for now, though.

Debayering

Applying an ordinary debayering algorithm, whatever it is, to the 2K subsampled raw data doesn't really work. This is because the physical spacing between pixels is no longer symmetric. For example, a red pixel is closer to its green and blue neighbors to the left and below than to the right and above. A proper bilinear interpolation needs to take this asymmetry into account, by modifying the location of pixel centers for each color field accordingly. More advanced algorithms are still built on the assumption of symmetric neighbors, so they'd all need modification to some degree.

Asymmetric neighboring pixels in subsampled mode can be handled by modifying interpolation pixel centers (left) or with an intermediate supersampling step (right).
Alternatively, the subsampled data can be supersampled by 2x to estimate the missing pixels (G2' and G1' in the image above) and then run through the ordinary debayer algorithm in 4K. The final output can then be scaled back to 2K to reflect the true information content of the data. This path takes longer for what may be an equivalent result for simpler debayer algorithms, but it might have advantages for more complex algorithms. All this will probably be obsoleted by neural networks that upscale 240p images to 16K in a few years anyway, so I'm not going to worry about it.

It is important to adapt the debayer algorithm for the subsampled pixel locations somehow, though, or there will be significant artifacts. The following comparison shows three different algorithms: nearest-neighbor, bilinear, and a Microsoft 5x5 interpolator that I like. For each, a reference 4K capture and 4K debayer is compared to a 2K subsampled capture with an unmodified 2K debayer and a 2K subsampled capture with a supersampled 4K debayer.

Comparison of three different interpolation algorithms with 4K capture/debayer, 2K subsampled capture with unmodified 2K debayer, and 2K subsampled capture with supersampled 4K debayer.
None of these simple algorithms can do much to recover resolution - for that I defer to the AI supersampling state of the art - but using an unmodified 2K debayer on subsampled raw data creates significant color checkerboarding artifacts on edges. Supersampling the data by 2x and running a simple 4K debayer at least bypasses the problem of neighboring pixel asymmetry.

Resource Utilization

Squeezing in the 4K/2K switch was beyond what I'd hoped to fit on the XCZU4, but it just barely works. The switch itself really only adds LUTs where BRAM/URAM address bits are remapped or where pixel counts are shifted to account for the aspect ratio change. The main addition is the 128 new Stage 1 horizontal wavelet cores, which really push the resource utilization to the limits.

The XCZU4 with everything crammed in.
At this point I'm at 77143 LUTs (87.82%), 93883 FFs (53.44%), 118 BRAMs (92.20%), 14 URAMs (29.17%) and 146 DSPs (20.05%). But, since most of my cores are running at px_clk (60MHz) or HDMI clock (74.25MHz) frequency, the timing constraints are not too difficult to meet. The exception seems to be things that interact with the 250MHz AXI clock, including the encoder and decoder BRAM FIFOs. These need some manual placement help to meet timing.

The good news is I don't really have much else to add to the programmable logic. I've already built in placeholder URAMs for UI overlays in the HDMI module, so those just need to be filled in by software. I might add some more color processing to the HDMI output, but that will mostly use DSPs, and possibly URAMs for color look-up tables, which should be no problem to add. I'm really happy that everything fits on the XCZU4, not just because the bigger chips are way more expensive, but because it's been a much better lesson in optimizing cores to fit resource constraints than if I had just switched to the XCZU7 early on.

Saturday, March 14, 2020

HDMI, the Hard Way

If I were to rank the components of this project in terms of the ratio of their actual vs. expected difficulty, the NVMe interface would probably be lowest, since it was nowhere near as hard as I thought it would be. The CMV12000 input (easy, expected to be easy) and wavelet engine (hard, expected to be hard) would be somewhere in the middle. And the new top of the list, the hardest module that should have been easy, would be the HDMI output.

HDMI

There seem to be two main reference designs for outputting an HDMI signal from a Zynq SoC. Zynq-7000 series boards such as the ZC70x and Zedboard use an external HDMI transmitter, the ADV7511, to convert a parallel RGB interface into serial HDMI TMDS outputs. Zynq Ultrascale+ boards such as the ZCU10x and UltraZed-EV Carrier Card use the built-in serial transceivers of the ZU+ to drive the TMDS outputs through a SN65DP159 HDMI retimer. The latter is a more modern approach, supporting up to 4K60 through the HDMI TX Subsystem IP. But, that IP is not included with Vivado. It also requires three free GTH transceiver channels, which I don't have on the XCZU4. (Its four available channels are in use for PCIe Gen3 to the SSD.)

There's nothing wrong with using an external HDMI transmitter with the ZU+, though. I left a PL GPIO bank open specifically for a parallel RGB pixel bus, either for an LCD controller or an HDMI interface. I opted for the slightly newer ADV7513, which supports up to 1080p60 at 8-bit. This is perfectly acceptable as a preview and local playback resolution. Outputting a full 4K frame over HDMI might be useful for interfacing with a RAW recorder, but that is out of the question anyway at 400fps. In fact, I only really need a 24-30fps HDMI output, which means a very manageable 74.25MHz pixel clock, based on the CEA-861 standard.
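
The CEA-861 numbers work out neatly: the pixel clock is just h_total × v_total × frame rate, and all three 1080p film/video rates share the same 74.25MHz clock:

    // CEA-861 1080p timings sharing a 74.25MHz pixel clock.
    typedef struct { int h_total, v_total, fps; } hdmi_timing_t;

    static const hdmi_timing_t hdmi_1080p[] = {
        {2750, 1125, 24},  // 2750 * 1125 * 24 = 74.25M
        {2640, 1125, 25},  // 2640 * 1125 * 25 = 74.25M
        {2200, 1125, 30},  // 2200 * 1125 * 30 = 74.25M
    };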

HDMI timing parameters for 1920x1080p 24/25/30Hz with a 74.25MHz pixel clock.
Generating the required pixel clock, sync, and dummy RGB signals in the ZU+ Programmable Logic (PL) is pretty simple; I set that up as a module on day one of playing with HDMI. Typically, you'd just point this module at a frame buffered in RAM and let it pull the real data. (There are video DMAs and drivers that will do this more-or-less automatically.) But here's where I run into a slight problem: I don't actually have a frame buffered in RAM.

The Hard Way

While it is possible to write the full 3.8Gpx/s raw frame data to RAM on the ZU+, it would be futile to try doing any significant processing on it there. Even if I used all three 128b AXI bus connections between the PL and the memory controller at 250MHz, that would allow for less than three accesses per pixel...including the initial write. The Processing System (PS) has a similar memory access constraint, although processing pixels serially on the ARM cores is much too slow anyway. So I made the decision early on to implement the wavelet compression engine in PL hardware and write the ~5:1 compressed codestreams to RAM instead, on their way to the SSD.
The capture pipeline, with the main data path highlighted and shown decreasing in width where compression occurs at the PL Encoder, before data is written to DDR4 RAM.
"No problem," you might say, "just split off raw data from the sensor and feed it to the HDMI module." Unfortunately, this doesn't quite work: In the time it takes the HDMI scan to complete one row, the capture pipeline has processed 50+ rows from the CMV12000. The input and output are just not in sync, and any attempt to buffer partial frames between them would require much more block RAM than I have available. It would also cause frame tearing that would ruin any attempt to preview periodic phenomenon with the global shutter.

The only real choice is to put the HDMI output module after the RAM buffer, which means decoding compressed frame data on the way out:
The only logical place to put the HDMI output, and not just because I left space for it there in the block diagram.
The HDMI module reads codestream data from RAM as an AXI Master, decodes the pixel values, and runs an Inverse Discrete Wavelet Transform (IDWT) to recover the raw image. While this is a lot more work, it pays off twofold because the same module can be used for playback by reading frames back out of the SSD into RAM and pointing the decoder at them.

Notwithstanding the design effort, the actual resource utilization of this module should be pretty low. For one, only four of the sixteen codestreams need to be decoded to reconstruct a 2048px-wide image to use for the preview; there's no need to decode any LH1, HL1, or HH1 data. Also, the preview frame rate is at least 10x slower than the capture frame rate, so the amount of parallelism needed in the decoding and IDWT pipeline is much lower. Still, it's more logic on an already-crowded chip.

Kill Your Darlings

At this point I'm stubbornly committed to fitting this design on the XCZU4. With the capture pipeline complete, I was getting pretty close to maxing out this chip, especially the LUTs (65593 / 87840) and BRAMs (122 / 128). And this was after a significant optimization pass on all the cores, including trimming pixel math operations from 16-bit to 12-bit where applicable and removing debug interfaces. These bottlenecks were already causing routing difficulty that was pushing up compile times, so I needed to make more room somehow. And then one day I woke up and decided to delete Wavelet Stage 3.
An example showing the effect of deleting the third DWT stage without changing the target compression ratios of any other stages. The red bars are each sized proportionally to the compressed subband they represent.
Stage 3 only handles 1/16 of the total data throughput, but it is visually the most significant and thus uses the least amount of compression. In the example above, replacing Stage 3's output with a raw 1/4-scale average image (LL2) has a relatively small effect on the overall compression ratio. It's also not a complete loss, since the 1:1 LL2 will yield slightly better visual quality if the other subbands remain unchanged. The distribution of bandwidth that achieves the best image quality with an overall compression ratio of 5:1 is still an unknown, but ditching Stage 3 probably isn't restricting the search space too far.

Although Stage 3 is by far the smallest wavelet core, removing it also simplifies a lot of downstream logic. The "XX3" encoder, which previously handled all four Stage 3 subbands by cycling through different inputs and quantizer settings, now becomes a pass-through for raw LL2 data. It also now has the same latency as the HL2, LH2, and HH2 encoders. This latency is the new maximum and is significantly lower than the former XX3 latency. (It's no longer necessary to wait for six whole LL2 rows for the Stage 3 DWT.) There's a symmetric payoff on the decoder side as well.

So while I'm sad to see it go, I think it's the right call for now. Having three stages probably does improve the compression performance (objectively, the PSNR at a given compression ratio), but I think I can still achieve good image quality at an overall ratio of 5:1 with only two. Not even including prospective decoder savings, the reduction in LUTs (-4575), FFs (-5320), and most crucially BRAMs (-8) is well worth it.

Working Backwards

In many ways, the HDMI output module is just a mirror image of the pixel input pipeline, from the deserialized CMV12000 input pixels to the AXI Master that writes encoded data to RAM. The 74.25MHz HDMI clock runs a master pixel counter that scans across and down the output frame. Whereas the CMV12000 clocks in 64 pixels in parallel, though, the HDMI only has to clock out one.

Or does it? Each HDMI pixel (in RGB 4:4:4 format) consists of an 8-bit red, green, and blue value, whereas the Bayer-masked sensor input is split into four interleaved color fields. Each color field's decoded LL1 image will only be 1024px wide. One option would be to center this in the HDMI frame and pull the 8-bit R, G, and B values directly from each color field's LL1:
1:1 scaling from LL1 color field pixels to HDMI pixels.
In this case, each HDMI clock requires one pixel from each of the four color fields (the two greens are averaged). The logic couldn't really get any simpler. But, it makes poor use of the 1920x1080 HDMI frame, especially for widescreen aspect ratios. An alternative would be to scale everything up by a factor of two:
2:1 scaling from LL1 color field pixels to HDMI pixels.
Now, a debayering method has to be used to reconstruct the missing color values at each pixel. For this application, a simple average of the neighboring pixels would be fine. (The off-line decoder uses a more complex, higher-quality method.) Each HDMI pixel now references as many as four pixels from each color field. But, these pixels don't all update at each HDMI clock. The average pixel consumption from each color field is actually only one per four HDMI clocks, as expected from the 2:1 scaling factor.

But a 2:1 scaled preview doesn't fit in 1920x1080. The cropping isn't too bad for widescreen aspect ratios, but it's unusable for 4:3. Switching between 1:1 and 2:1 scaling depending on the aspect ratio would work, but adds a lot of conditional logic for a still-compromised result. An arbitrary software-controlled scaling between 1:1 and 2:1 would be so much better. So, time to break out the DSPs:
Arbitrary scaling from LL1 color field pixels to HDMI pixels, using bilinear interpolation.
To achieve arbitrary scaling, the four 1024px-wide LL1 color fields are resampled onto a 65536px-wide grid, accounting for the offsets between the centers of pixels of each color. Then, a viewport is defined within the HDMI frame and normalized onto this 16-bit grid (using DSPs). The four pixel centers of each color field that box in the normalized viewport coordinate are used for bilinear interpolation (using more DSPs) to produce the R, G, and B values. This is also the debayer step, thanks to the pixel center offsets.
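
The interpolation arithmetic itself is simple; here's a sketch for one output channel with 16-bit fractional weights, standing in for the DSP implementation. The per-color pixel-center offsets that make this double as the debayer step are assumed to be applied upstream, when computing the fractional coordinates:

    #include <stdint.h>

    // Bilinear interpolation of the four pixels boxing in the normalized
    // viewport coordinate. fx/fy are the 16-bit fractional positions.
    uint8_t bilerp(uint8_t p00, uint8_t p10, uint8_t p01, uint8_t p11,
                   uint16_t fx, uint16_t fy)
    {
        uint32_t top = (uint32_t)p00 * (65536u - fx) + (uint32_t)p10 * fx;  // px * 2^16
        uint32_t bot = (uint32_t)p01 * (65536u - fx) + (uint32_t)p11 * fx;
        uint64_t val = (uint64_t)top * (65536u - fy) + (uint64_t)bot * fy;  // px * 2^32
        return (uint8_t)(val >> 32);
    }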

One thing I actually do have plenty of is DSPs, and this seems like a great use for 14 of them. Being able to reposition and rescale the preview image from software makes life a lot easier. The downside is that sixteen LL1 pixels are required to generate a single HDMI pixel. But as with the 2:1 case, the input pixels don't all change with every HDMI clock. The average LL1 pixel consumption rate will depend on the scale, but if the viewport width is always at least 1024px, it will never exceed one LL1 pixel per color field per HDMI clock. All upstream logic in the decoder is designed with this constraint in mind.

Ultra-IDWT

Next upstream is the Inverse Discrete Wavelet Transform (IDWT). One of the most significant simplifications achieved by deleting Wavelet Stage 3 is that the HDMI output module only has to do one stage of IDWT: Stage 2. This stage recovers LL1 from the LL2, LH2, HL2, and HH2 subbands. The order of operations is reversed in the IDWT: vertical first, then horizontal. Since we're working backwards from the HDMI output, let's look at the horizontal core first.

The forward horizontal DWT core is heavily optimized for speed and size using only FF-based distributed memory. In the inverse direction, there's a lot more breathing room. Only four cores are needed (one per color field) and they only need to process at most one pixel per HDMI clock. So, I am able to combine the horizontal IDWT with a block RAM buffer and output shift register pretty easily. I'm almost completely out of BRAMs, but I have plenty of UltraRAM (URAM) for this.
Horizontal IDWT and output buffer for one color field built around a single URAM.
Each URAM is 32KiB, enough to store 16 rows of LL1 data. The oldest two rows (N+0 and N+1) feed output shift registers that end in the four pixels the bilinear interpolator needs. The horizontal IDWT is performed on data from Row N+3, its result written back to Row N+2. As in the forward direction, pixels are processed in 64-bit groups of four: two interleaved pairs of low-pass and high-pass values become four LL1 outputs. Two half-speed shift registers unpack 64-bit URAM reads for the IDWT and pack the results into 64-bit writes. Running the IDWT as a single combinational step is not as efficient as using sequential lifting steps, as in the forward horizontal DWT, but it's a bit simpler to do with shift registers. Meanwhile, new data from the vertical stage is fed in at Row N+6.
Vertical IDWT for one color field built around a single URAM.
The vertical IDWT cores are also each built around a single URAM. In this case, the URAM is split in half for low-pass (HL2/LL2) and high-pass (HH2/LH2) vertical data. Four pixels each from three rows of low-pass data (N+0 to N+2) and one row of high-pass data (N+9) are processed every four clocks to create two four-pixel outputs to write to horizontal core URAM. In a shameful waste of clock cycles, input rows are scanned twice and the output write alternates between the even and odd IDWT results. (There are other ways to deal with the 2:1 row scanning ratio, but I'm willing to trade power for simpler logic right now.) Meanwhile, raw interleaved LL2, LH2, HL2, and HH2 data are written in to rows somewhere just ahead of the IDWT read pointers.

Decompressor and Distributor

Each horizontal and vertical core operates on a single color field, but the four input codestreams are instead separated by subband (LL2, LH2, HL2, HH2), with all four color fields being present in each codestream. The codestreams also cycle through four different column positions in a given row, since the Stage 2 forward vertical DWT uses four cores in parallel. A distributor remaps decoded subband data to the appropriate write address in one of the vertical IDWT cores. This is also a good place to interleave the high-pass and low-pass data, which facilitates the horizontal IDWT.
After decoding, subband pixels are redistributed to the appropriate location in each color field's vertical IDWT buffer.
The distributor writes four pixels into one of the four vertical core URAMs at most once per HDMI clock, to satisfy the one pixel per color field per clock constraint discussed above. For viewport widths greater than 1024px, the distribution is gated by the master pixel counter, which only updates when the interpolators actually need new pixels.

Continuing upstream, the distributor receives 16-bit signed pixel values from the four codestream decompressors. Each one takes in codestream data from RAM as-needed, decoding four pixels at a time by reversing the variable length code used by the encoder. The pixels are then multiplied by the inverse of the quantizer multiplication factor, using more DSPs, to recover their full range.

Raw codestream data is read in from RAM by an AXI Master into BRAM FIFOs at the entrance to each decompressor. I'm using precious BRAMs here, for the built-in FIFO functionality and to make the decoder RAM reader symmetric to the encoder RAM writer. A round-robin arbiter checks the FIFO levels to see when more data needs to be read. I'm only using a 64-bit AXI Master on the decoder, since the bandwidth already far exceeds the worst-case HDMI output requirement.

Start-Of-Frame Context

So far, the HDMI output pipeline looks a lot like the sensor input pipeline in reverse. But one subtle way in which they differ is in Start-Of-Frame (SOF) context: the state of the pipeline at the beginning of each frame. In the interest of speed, the input pipeline is not flushed between frames. Furthermore, codestream addresses for a given frame are updated during the Frame Overhead Time (FOT) interrupt, while some data is still in the pipeline, so the very bottom of Frame N-1 becomes the top of Frame N in memory.

Overlap between Frame N-1 and Frame N in memory. SOF N marks the sector-aligned start of "Frame N" in RAM, set during the FOT interrupt from the CMV12000. The decoder seeks the actual start of Frame N data.
If the decoder processes every frame, this isn't a problem: it can wrap cleanly through the overlapping region to get the data it needs for both frames. But the HDMI output only processes a subset of the frames captured. It needs to be able to find the start of any individual frame and process it independently. This is needed for seeking in a playback context too. But I can't afford the time it would take to flush the input pipeline between each frame. So instead I need to completely capture the state of the pipeline at the SOF boundary.

As it turns out, this isn't too bad, since there are only a few places where data can remain in the input pipeline at the SOF: 
  1. In the pre-encoder pixel memory: registers or BRAM buffers that are part of sensor input, DWT or quantizer operations. These have a fixed latency of 6336px for Stage 1+2. The decoder can offset its pixel counter by this amount, essentially discarding the overlapping pixels into the space between VSYNC and the start of the viewport.
     
  2. In the 128-bit e_buffer register of each codestream that accumulates encoded data before writing it to that codestream's BRAM FIFO. The number of bits remaining in this register is neatly captured by its write index, e_buffer_idx.
     
  3. In the codestream BRAM FIFO itself. This is captured by the FIFO read level, already used as the AXI write trigger. Since these FIFOs are 64-bit write and 128-bit read, care must be taken to keep track of the write level LSB as well, to know if there's an extra half-word in memory that can't be read yet.
The last two combine to give a number of bits to discard for each codestream: 

e_buffer_idx + 128 * fifo_rd_count + 64 * fifo_wr_count[0] 

To fully capture the SOF context, these three values are written to the frame headers during the FOT interrupt. A VSYNC interrupt from the HDMI module prompts software to read the header of the next frame to be displayed, calculate the number of bits to discard for each codestream, and pass it to the decoder along with the codestream start addresses. That number of bits are then discarded by the decoders prior to attempting to decode any pixels.
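
In software terms, the VSYNC handler does something like the following for each codestream. The formula is the one above; the struct layout is hypothetical:

    #include <stdint.h>

    // Per-codestream SOF context, captured in the frame header during FOT.
    typedef struct {
        uint32_t e_buffer_idx;   // bits left in the 128-bit e_buffer
        uint32_t fifo_rd_count;  // 128-bit words readable in the BRAM FIFO
        uint32_t fifo_wr_count;  // 64-bit write counter (LSB = extra half-word)
    } sof_context_t;

    // Bits the decoder must discard before the first pixel of Frame N.
    uint32_t sof_discard_bits(const sof_context_t *c)
    {
        return c->e_buffer_idx + 128u * c->fifo_rd_count
                               + 64u * (c->fifo_wr_count & 1u);
    }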

High-level architecture of the encoder and decoder interactions with the CPU and RAM.
In total, the HDMI output module (decoder and all) uses 4363 LUTs, 4227 FFs, and 4 BRAMs, less than what was saved by deleting Wavelet Stage 3. It adds 8 URAMs and 26 DSPs, but I'm not running short of those (yet). Except for the AXI Master, it runs entirely on the 74.25MHz HDMI clock, so it shouldn't be too much of a timing burden. There might be room for a bit more optimization, but I'm happy with the functionality it gives for its size.

Focus Assist

The main reason I wanted to get the HDMI module done now, ahead of some of the other remaining tasks, is so I can use the real-time preview for testing. It sucks to have to pull frames off one-by-one through USB to iterate on framing, exposure, and especially focus. Having a 1080p 30fps preview on something like an Atomos Shinobi on-camera monitor makes life a lot easier, and moves in the direction of standalone operation.


One neat trick you can do with wavelets is to overmultiply the high-pass subbands (LH1, HL1, HH1) to highlight edges in the preview image. This effect is useful for focus assist. Most on-camera monitors can do this anyway (by running a high-pass filter on the HDMI data), but it's essentially free to do in the decoder since the subbands pass through a multiplier anyway to undo the quantization division. I'll take free features any day.
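
Something like the following, where the names and fixed-point scaling are mine, is all it takes in the dequantize step:

    #include <stdint.h>

    // The high-pass subbands already pass through an inverse-quantizer
    // multiply, so an extra edge gain is free. edge_gain = 1 for a normal
    // preview, >1 to exaggerate edges for focus assist.
    int16_t dequant_hp(int16_t coded, uint16_t inv_q, uint8_t edge_gain)
    {
        return (int16_t)(((int64_t)coded * inv_q * edge_gain) >> 8);
    }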

Macro Machining

With the newfound ability to actually focus the image in a reasonable amount of time, I'm finally able to play with a new lens: The Irix Cine 150mm T3.0 Macro. I started drooling over this lens for close-up high-speed stuff after watching this review. I'm no lens expert, but I feel like this lens series competes with ones 3x its price in terms of image quality. My first test was to attempt to get some macro shots of my mini mill:

Shooting my mini-mill at 400fps with the Irix 150mm T3.0 Macro lens.
The HDMI output was crucial for this, since the lens has an insanely shallow depth-of-field at T3.0, less than the width of the cutting tool. The CMV12000 is not a particularly good low-light sensor, so with an exposure time of around 1.87ms, I needed to add a good deal of light. To make things more interesting, I threw in some cheap IKEA RGBs as well. It took a while to get set up, but the result was promising:


I'll probably repeat this with a more interesting subject (this was just a piece of scrap aluminum) and a more stable mount. If I can get more light, it might be good to close down to T5.6 or so as well, to get a bit more depth of field, and drop the exposure to 180º shutter for less motion blur on the cutter. But the lens is terrific and I'm happy with the quality of the two-stage wavelet compression so far. The above clip has an average compression ratio of right around 6:1, helped along by the ultra-shallow depth of field.

Next Up

The last major HDL task on this project is modifying the pipeline to accept 2K subsampled frames from the sensor at higher frame rates (up to around 1400fps at 1080p!). This will probably be a separate Vivado project and bitstream, since it requires substantial modifications to the input pipeline. It also needs twice as many Stage 1 horizontal cores, since four rows are being read in simultaneously instead of two.

But I may tackle some simpler but no less important usability tasks first. For one, I still don't have pass-through Mass Storage Device access to the SSD over USB C. This is necessary for getting footage off without opening the camera (or using RAM as intermediate storage). With that and a bit of on-camera UI work (record button, simple menus), I can finally run everything completely standalone soon.