Redis Lock is only acquired after configured lock timeout although lock is not held by any other instance #3716
Would you mind taking a look here? Thanks
@muellerml
I'm really struggling to put together an example. We are doing nothing too fancy here, just acquiring the lock in various service methods to perform some computations. I'm currently trying to debug deeper into the issue. When I added some logging to the RedisLockRegistry and the RedisUnLockNotifyMessageListener, our builds reliably went green again. To me it seems like the Future.get() call blocks for the whole time although no one else holds the lock. I'm trying to find some sweet spot now where I can better trace the issue; I will continue with this after the weekend.
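Roughly, the usage pattern in our services looks like the following minimal sketch; the service name, lock key, and timeout are made up for illustration and are not our actual code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

import org.springframework.integration.redis.util.RedisLockRegistry;

public class ComputationService {

    private final RedisLockRegistry lockRegistry;

    public ComputationService(RedisLockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    public void compute(String resourceId) throws InterruptedException {
        // Distributed lock keyed by the resource being computed (illustrative key).
        Lock lock = lockRegistry.obtain(resourceId);
        // Wait up to 10 seconds for the lock; the timeout is illustrative only.
        if (lock.tryLock(10, TimeUnit.SECONDS)) {
            try {
                // ... short computation guarded by the lock ...
            } finally {
                lock.unlock();
            }
        } else {
            throw new IllegalStateException("Could not acquire lock for " + resourceId);
        }
    }
}
```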
Thanks for the explanation, @muellerml! Just to be sure that we are on the same page: are you sure that all your instances are upgraded to the same latest version for this?
Pretty sure; nothing (logs, metrics, kubectl) indicates that there is another instance with an older version running, although that would very well explain the behaviour we experience. As I said before, I will investigate the behaviour further next week. Some guesses as to what could be happening:
@muellerml
If the problem occurs only with locks requested before the container is initialized by the first lock: the initialization part of RedisMessageListenerContainer doesn't take long, but in the case you mentioned I want to change it, because we can miss the message.
Yes, it seems like it. I didn't think of the initialization process as being the issue until I added some debug log statements and looked at the code in RedisLockRegistry in more detail. As our tests are executed in parallel and some perform the same steps at nearly the same time, this explains why we frequently ran into the issue. I have also just found the debug logs for when the RedisMessageListenerContainer is started, and they confirm my assumption. I can relate every issue we faced to this behaviour, so I don't think there is another issue after the RedisMessageListenerContainer is initialized.
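To illustrate the timing we suspect, here is a rough sketch; the key name and the two concurrent callers are made up for the example, and it assumes the pub/sub based unlock notification used by the registry in this version:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.locks.Lock;

import org.springframework.integration.redis.util.RedisLockRegistry;

public class StartupLockRace {

    // Two callers obtain the same lock right after the application starts,
    // so the very first lock() call is what triggers the subscription for
    // unlock notifications.
    static void race(RedisLockRegistry registry) {
        CompletableFuture<Void> first = CompletableFuture.runAsync(() -> {
            Lock lock = registry.obtain("shared-key");
            lock.lock();
            try {
                // short critical section
            } finally {
                lock.unlock(); // the unlock notification is published here
            }
        });

        CompletableFuture<Void> second = CompletableFuture.runAsync(() -> {
            Lock lock = registry.obtain("shared-key");
            // If the RedisMessageListenerContainer is still starting at this
            // point, the unlock notification from the first caller can be
            // missed, and this call only returns once the configured lock
            // timeout expires.
            lock.lock();
            try {
                // ...
            } finally {
                lock.unlock();
            }
        });

        CompletableFuture.allOf(first, second).join();
    }
}
```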
Fixes #3716

If the `redisMessageListenerContainer` is starting, wait for it to complete instead of calling `subscribeUnlock()`.

* Introduce an `isRunningRedisMessageListenerContainer` state, since the `running` flag in the `RedisMessageListenerContainer` is set at the beginning of `start()`, which is misleading for concurrent calls to the `RedisLockRegistry`.

**Cherry-pick to `5.5.x`**
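A simplified sketch of the kind of guard described above; the class, field, and method names are illustrative and not the actual Spring Integration code:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: concurrent callers either start the listener container
// themselves or wait until another caller has finished starting it, instead
// of assuming that a container which is merely "starting" can already
// deliver unlock notifications.
class UnlockSubscriptionGuard {

    private final ReentrantLock guard = new ReentrantLock();
    private final Condition containerStarted = guard.newCondition();
    private boolean starting;
    private boolean started;

    void subscribeUnlockIfNecessary(Runnable startContainerAndSubscribe) throws InterruptedException {
        guard.lock();
        try {
            if (started) {
                return;
            }
            if (starting) {
                // Another thread is already starting the container: wait for
                // it to complete rather than subscribing a second time.
                while (!started) {
                    containerStarted.await();
                }
                return;
            }
            starting = true;
        } finally {
            guard.unlock();
        }

        // Start the container outside the guard; its start-up is short, but
        // waiters above are only released once it has actually completed.
        startContainerAndSubscribe.run();

        guard.lock();
        try {
            started = true;
            containerStarted.signalAll();
        } finally {
            guard.unlock();
        }
    }
}
```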
In what version(s) of Spring Integration are you seeing this issue?
Since 5.5.8 (it could also exist in version 5.5.7)
Describe the bug
Since the upgrade to Spring Boot 2.6.3 (which includes Spring Integration 5.5.8), we see some unexpected behaviour during our internal integration tests.
We have deployed two instances of our service and acquire locks in them for short durations. Since the upgrade we notice that we sometimes run into a condition where the lock has already been released by the first instance, but the second instance fails to acquire it immediately and waits for the lock to time out before actually acquiring it, although no other instance is holding the lock at that point.
From our internal logging it seems like the release of the lock and the attempt to acquire it happen at nearly the same time, but we are not able to verify that exactly.
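A rough sketch of how the symptom can be observed from the second instance; the method and key names are made up for illustration:

```java
import java.time.Duration;
import java.util.concurrent.locks.Lock;

import org.springframework.integration.redis.util.RedisLockRegistry;

public class LockTimingCheck {

    // Run on the second instance roughly when the first instance releases the
    // lock. With no other holder, the elapsed time should be close to zero;
    // in the faulty case it is close to the configured lock timeout.
    static Duration timeAcquisition(RedisLockRegistry registry, String key) {
        Lock lock = registry.obtain(key);
        long start = System.nanoTime();
        lock.lock();
        try {
            return Duration.ofNanos(System.nanoTime() - start);
        } finally {
            lock.unlock();
        }
    }
}
```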
To Reproduce
No consistent workflow to reproduce the bug is available.
Expected behavior
Locks are acquired by the second instance immediately.