Conversation

@PSUdaemon
Contributor

Fix StreamPriorityKey::cmp implementation so it doesn't violate Rust's Ord trait contract while keeping the intended round-robin behavior.

This is based on the discussion that was had in #2282. @antoniovicente had some concerns about the implementation with respect to the Ord trait contract.

This is my attempt at fixing that and maintaining the round-robin requirement.

  • Added a sequence field to preserve consistent ordering
  • Added cycle_flushable() to assign new sequence numbers to push incremental streams to the back once they have been visited

@PSUdaemon PSUdaemon requested a review from a team as a code owner December 15, 2025 22:11
more_frames: true,
};
assert_eq!(ev, ev_headers);
assert_eq!(s.poll_server(), Ok((0, Event::Data)));
Contributor

Needing to change this test seems a bit fishy to me.

While RFC 9218 primarily targets prioritization of serving responses, we apply stream scheduling to all QUIC streams. In quiche's model, the default priority for a new stream is urgency=127 and incremental=true (see

fn default() -> Self {
). That should trigger round-robin sending of all request data.

In the previous test, we saw the server reading events in the round-robin order of stream sends. That's only partly true, though, because the recv_body() function attempts to greedily read all available data frames independently of the peer's sender scheduling.

The changes to this test seem to imply that sending has turned into FIFO, where HEADERS and DATA are sent together (or the peer is somehow receiving them together). It's not abundantly clear to me what's going on. I think it would pay to get more understanding of why this test triggered a change while others didn't.

Contributor Author

The old "interleaved" test results were an artifact of the broken Ord implementation's undefined ordering behavior. With the fix, the h3 test now shows correct sequential (priority-ordered) per-stream delivery on the receive side.

To demonstrate that round-robin actually works, I added a new end-to-end test (incremental_stream_round_robin) that tracks per-packet send order. It shows the expected interleaving: [4, 8, 12, 4, 8, 12, 4, 8, 12].

Contributor

Interesting, thanks for the explanation. This sounds a bit like a Bob Ross happy accident. It might be an undesirable change, since it can be in the server's interest to "skim" across the headers that it is receiving in order to do some useful work.

It might be that we need to make the choice of that behaviour more application-configurable. That would potentially involve separate read and write priorities (which I appreciate is beyond the scope of a simple fix-up).

A stop gap might be, on the reader side, to cycle the stream after generating an Event::Headers?

Contributor Author

> A stop gap might be, on the reader side, to cycle the stream after generating an Event::Headers?

Yeah, that's an interesting idea. Are you looking to have that added in this PR, or as a followup one?

Contributor

I think for parity with the current behaviour, we'd need that in this PR. Would you mind trying it and seeing if it achieves the old ordering?

Contributor Author

This was simpler than I expected.

@LPardue
Contributor

LPardue commented Dec 16, 2025

Thanks Phil. I appreciate the work to try and make this area more rusty. I'd like to dig more into one of the tests, as I commented above. But I think this is probably a path we can follow through with.

Comment on lines 503 to 505
/// This is used for round-robin scheduling: after processing headers from
/// a stream, it should be moved to the back of its priority group to give
/// other streams a chance.
Contributor

nit: the QUIC layer doesn't know anything about headers. Maybe we can say something like

Suggested change
/// This is used for round-robin scheduling: after processing headers from
/// a stream, it should be moved to the back of its priority group to give
/// other streams a chance.
/// This is used for round-robin scheduling: after processing some data from
/// a stream, it can be moved to the back of its priority group to give
/// other streams a chance of being read.

// order, which may vary; we just need some data to be sent.
let (len, _) = pipe.server.send(&mut buf).unwrap();
assert_eq!(len, 1200);
assert!(len > 0);
Contributor

This also seems suspect: the round-robin should be deterministic, no?

Contributor Author

It is, but I didn't think the specific value here was important, and asserting it could just cause churn if future changes shift it. I will change this to the new value.

@PSUdaemon
Contributor Author

Happy new year! 🎆

Checking back in to see if there are any more concerns for this PR?

// to update last_tx_data.
let (len, _) = pipe.server.send(&mut buf).unwrap();
assert_eq!(len, 1200);
assert_eq!(len, 848);
Contributor

How come this decreased?

Contributor Author

I believe the stream ordering changed. If you look at lines 8241-8242 above, you can see 1200 bytes vs 800 bytes in two different streams. It's 800 bytes + 48 bytes of overhead. Why not 1248 then? The buffer on line 8240 is only 1200 bytes.

If you would like I can change this test more to prove that out.

Contributor Author

I looked into this in detail, and it's a little more complicated (PTO and retransmission are involved), but switching the order in which we buffer the streams shows that it's stream 8 being selected. Specifically, emit_flight had already sent on the other stream (4), which increased its sequence number (sending it to the back), so a different stream (8) is selected.

@PSUdaemon PSUdaemon force-pushed the stream_ordering branch 4 times, most recently from 5e96cc0 to 87d88d7 Compare January 27, 2026 18:36
@PSUdaemon PSUdaemon force-pushed the stream_ordering branch 2 times, most recently from 85599d6 to 9179e4a Compare January 30, 2026 16:12