Conversation

@arihant2math (Collaborator)

No description provided.

struct RwlockState {
    /// `None` when unlocked; otherwise records whether the lock is held by readers or a writer.
    is_locked: Option<RwlockLockStateInner>,
    /// Tasks waiting for a read lock, in FIFO order.
    read_waiters: VecDeque<Waker>,
    /// Tasks waiting for the write lock, in FIFO order.
    write_waiters: VecDeque<Waker>,
}

Owner

Any reason a WakerSet wasn't used here?

Also, do we need to distinguish between readers and writers?

Collaborator Author

I copied it over from the mutex code; perhaps that has to be changed as well.

Ideally, when choosing what to wake, we would wake either all of the readers or a single writer.
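
A minimal sketch of what that wake policy could look like on unlock (wake_next is a hypothetical helper, not code from this PR; only the RwlockState fields above are taken from the diff). This variant prefers a waiting writer to avoid writer starvation, which is a policy choice the comment leaves open:

// Hand the lock to a single queued writer if there is one; otherwise wake
// every queued reader, since readers can all hold the lock concurrently.
fn wake_next(state: &mut RwlockState) {
    if let Some(writer) = state.write_waiters.pop_front() {
        writer.wake();
    } else {
        for reader in state.read_waiters.drain(..) {
            reader.wake();
        }
    }
}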

Owner

Hm, interesting. I'll take a look at the mutex code and modify that, I think. I suspect that if a lock is heavily contended, doing this check every time is worse than a spurious wakeup.

Collaborator Author

Ah, ok, I see why a VecDeque is most efficient. I think for fairness' sake we do FIFO, but WakerSet uses a BTree internally.

})
}
Some(RwlockLockStateInner::Write) => {
    // Only register this task's waker if it isn't already queued, at the
    // cost of scanning the wait list on every contended poll.
    if state.read_waiters.iter().all(|w| !w.will_wake(cx.waker())) {

Owner

Any reason we do this check? I only ask as I don't do this in any of the other sync primitives, Mutex being the main example since it's async. I suppose it's a matter of weighing up the small likelihood of a spurious wakeup vs iterating through this list every time the lock is contended?

Collaborator Author

I got this from the mutex code: https://github.com/hexagonal-sun/moss-kernel/blame/96fe0378b7c183aebb4ba27743ba2e9843fcdd8a/libkernel/src/sync/mutex.rs#L96C30-L96C70. I actually have no idea whether we should do this or not; I suppose it is a big performance hit, though.
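
For reference, a sketch of the two registration strategies being weighed here; this is illustrative, not the PR's or the mutex's exact code, and it uses std paths where the kernel crate would presumably pull the same types from core and alloc:

use std::collections::VecDeque;
use std::task::{Context, Waker};

// Scan the queue so a task polled repeatedly while waiting is only
// registered once. Costs O(n) on every contended poll.
fn register_with_scan(waiters: &mut VecDeque<Waker>, cx: &mut Context<'_>) {
    if waiters.iter().all(|w| !w.will_wake(cx.waker())) {
        waiters.push_back(cx.waker().clone());
    }
}

// Always push: O(1), but a task re-polled while already queued ends up
// with a duplicate entry, which surfaces later as a spurious wakeup.
fn register_unconditionally(waiters: &mut VecDeque<Waker>, cx: &mut Context<'_>) {
    waiters.push_back(cx.waker().clone());
}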

Owner

Interesting. Maybe I need to go back and revisit the Mutex code. That seems like it would be a lot of wasted cycles for a contended lock.

@arihant2math force-pushed the rwlock branch 4 times, most recently from 4978fca to e3c8f6e on January 15, 2026 at 17:30.