The padding in a connection shard structure #197
We need to calculate the size difference between the original structures:

const requestsMap = 32
type ConnShard1 struct {
rmut sync.Mutex
requests [requestsMap]*Future
first *Future
last **Future
bufmut sync.Mutex
buf smallWBuf
enc *msgpack.Encoder
bcache smallBuf
}
type ConnShard2 struct {
rmut sync.Mutex
requests [requestsMap]*Future
first *Future
last **Future
bufmut sync.Mutex
buf smallWBuf
enc *msgpack.Encoder
bcache smallBuf
pad [16]uint64 //nolint: unused,structcheck
}
It seems like the last result should be 512. Six years have passed; perhaps something has changed in Go, or I missed something. Anyway, the padding doesn't look useful for the current implementation (and it was broken a long time ago). Let's delete it.
This patch removes the padding field in a connection shard structure. The _pad field is unused in the code, and there is no reason to keep it anymore. Closes #197
I haven't read it yet, but FYI: https://dave.cheney.net/2015/10/09/padding-is-hard. @DifferentialOrange @oleg-jukovec I have also heard that there is a practice in C of adding a byte field after a pointer in a structure, where that field is needed when allocating memory for the pointer, something like that.
It doesn't look like our case at first glance.
You could dig deeper and google something like "C struct padding CPU cacheline", "C struct padding cache miss", "why an array is better than a list in C", or "cache/memory locality". I can't remember a good article right now, but something like this may help: https://medium.com/software-design/why-software-developers-should-care-about-cpu-caches-8da04355bb8a

I assume the optimization here was meant to prevent neighboring connShards inside the array [shards]connShard from sharing a cache line: if a thread modifies a field of connShard[X], it should not affect another thread working with a field of connShard[X+1]. This is just a guess; the padding was broken for a long time, and I don't see the point in finding out the exact reason now.
There is a 128-byte padding field in a connection shard structure, and there is no obvious reason for it.
go-tarantool/connection.go, line 155 at e1bb59c
The output:
Originally posted by @vr009 in #173 (comment)