Implement rwlock #144
base: master
Conversation
libkernel/src/sync/rwlock.rs
Outdated
struct RwlockState {
    is_locked: Option<RwlockLockStateInner>,
    read_waiters: VecDeque<Waker>,
    write_waiters: VecDeque<Waker>,
Any reason a WakerSet wasn't used here?
Also, do we need to distinguish between readers and writers?
I copied it over from the mutex code, perhaps that has to be changed as well.
Ideally when choosing what to wake, we would wake up all the readers or a single writer.
Hm, interesting. I'll take a look at the mutex code and modify that, I think. I suspect that if a lock is heavily contended, doing this check every time is worse than a spurious wake-up.
Ah, OK, I see why a VecDeque is most efficient. I think for fairness' sake we do FIFO, but WakerSet uses a BTree internally.
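
To make the suggested policy concrete, here is a minimal sketch of a release path that wakes either a single queued writer or, if no writer is waiting, every queued reader. Only RwlockState, RwlockLockStateInner::Write, and the two VecDeque<Waker> queues appear in the diff; the Read variant, the wake_on_release name, and the writer-first preference are assumptions for illustration, and the kernel crate would use core/alloc types rather than std.

use std::collections::VecDeque;
use std::task::Waker;

// Assumed shape of the lock state; Read(usize) (the active reader count) is hypothetical.
enum RwlockLockStateInner {
    Read(usize),
    Write,
}

struct RwlockState {
    is_locked: Option<RwlockLockStateInner>,
    read_waiters: VecDeque<Waker>,
    write_waiters: VecDeque<Waker>,
}

impl RwlockState {
    // One possible policy when the lock becomes free: hand it to a single
    // writer (FIFO order), otherwise wake every queued reader so they can
    // all take the shared lock at once.
    fn wake_on_release(&mut self) {
        debug_assert!(self.is_locked.is_none());
        if let Some(writer) = self.write_waiters.pop_front() {
            writer.wake();
        } else {
            for reader in self.read_waiters.drain(..) {
                reader.wake();
            }
        }
    }
}

A plain VecDeque keeps the FIFO ordering for waiting writers cheaply, which is the fairness property mentioned above.
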
libkernel/src/sync/rwlock.rs
Outdated
    })
}
Some(RwlockLockStateInner::Write) => {
    if state.read_waiters.iter().all(|w| !w.will_wake(cx.waker())) {
Any reason we do this check? I only ask as I don't do this in any other sync primitives, Mutex being the main example since it's async. I suppose it's a matter of weighing up the small likelihood of a spurious wakeup vs iterating through this list every time the lock is contended?
I got this from the mutex code: https://github.com/hexagonal-sun/moss-kernel/blame/96fe0378b7c183aebb4ba27743ba2e9843fcdd8a/libkernel/src/sync/mutex.rs#L96C30-L96C70. I actually have no idea whether we should do this or not; I suppose it is a big performance hit, though.
Interesting. Maybe I need to go back and revisit the Mutex code. That seems like it would be a lot of wasted cycles for a contended lock.
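
To weigh up the two options discussed here, the sketch below contrasts the deduplicating registration used in the diff with an unconditional push. The helper names register_dedup and register_always are hypothetical, and waiters stands in for state.read_waiters / state.write_waiters.

use std::collections::VecDeque;
use std::task::{Context, Waker};

// O(n) registration, as in the current diff and the linked mutex code: scan the
// queue so the same task is never queued twice. This avoids duplicate wakers
// (and hence spurious wakeups) but iterates the queue on every contended poll.
fn register_dedup(waiters: &mut VecDeque<Waker>, cx: &Context<'_>) {
    if waiters.iter().all(|w| !w.will_wake(cx.waker())) {
        waiters.push_back(cx.waker().clone());
    }
}

// O(1) registration: always push the current waker. Re-polling a still-blocked
// future can queue duplicates, so the task may be woken spuriously and must
// simply re-check the lock state the next time it is polled.
fn register_always(waiters: &mut VecDeque<Waker>, cx: &Context<'_>) {
    waiters.push_back(cx.waker().clone());
}

Which variant wins depends on how often a contended future is re-polled before it is woken; if that is rare, the unconditional push keeps the hot path cheap at the cost of the occasional spurious wakeup.
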
Force-pushed from 4978fca to e3c8f6e