Fix panic on closed channel #13
Conversation
Hey, nice catch! I fixed the CI process; it should work now.

The modifications are approved. Thanks for the work here!

I had to add an RWMutex to stop the channel from being closed while a send is in progress:
```go
defer s.chMu.RUnlock()

// check s.ch isn't closed (we are holding the RLock,
// so s.ch won't be closed until the end of this function)
select {
```
You are using the `closed` boolean on the nonBlockingSubscriber; isn't it possible to use the same approach here? I think it's cleaner and makes them consistent.
I read your comment and found a bug in my implementation as well :)
Consider the following sequence of events:
- Set takes the RLock
- s.closed is checked inside Set (s.closed == false)
- Set blocks at the select for a long time
- Close is called and sets s.closed to true
- Close tries to take the write Lock, but it can't, because Set is still holding the RLock
- Deadlock

In other words, to take the write lock, Close would have to interrupt every blocking operation that holds the read lock, but as far as I know the only way to interrupt a blocking channel operation is with a select.
For the problem of closing a channel written to by several goroutines, I think we have two solutions:

The first is the solution described at https://www.leolara.me/blog/closing_a_go_channel_written_by_several_goroutines/: https://gist.github.com/leolara/f6fb5dfc04d64947487f16764d6b37b6
The only thing I would change is that the send method should not spawn a goroutine, because it is already called from a goroutine, and I don't want to make the sends asynchronous and uncontrolled.

The other solution is to change the hub and make Unsubscribe and Publish concurrency-safe: Publish takes a read lock (it only reads the subscribers), while Close, Subscribe, and Unsubscribe take the write lock.

WDYT?
About the codecov, you are right; we will ignore the patch.
Codecov Report

```
@@           Coverage Diff            @@
##             main      #13    +/-   ##
========================================
- Coverage   87.09%   86.50%   -0.60%
========================================
  Files           4        4
  Lines         310      326      +16
========================================
+ Hits          270      282      +12
- Misses         30       33       +3
- Partials       10       11       +1
```

Flags with carried forward coverage won't be shown.
Continue to review the full report at Codecov.
Fix panic in subscribers

When the hub publishes messages, it first finds all matching subscribers and then calls `sub.Set(m)` on them. However, if `Unsubscribe` is called after the filtering but before the `Set`, the subscriber's underlying channel is closed and `Set` tries to write into a closed channel.

This PR adds channels and goroutines responsible for closing the channel.