Noticed after upgrading from 2.5.5 to 2.6.0. If a RedisElementWriter returns a ByteBuffer that is not completely filled (limit() < capacity()), Spring Data uses the entire backing buffer (including the trailing junk bytes) instead of only the filled part. This happens at least for ZSet values; I haven't checked other scenarios.
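For illustration, here is a minimal, Redis-free sketch of the buffer state in question (plain JDK code, assuming a heap buffer from ByteBuffer.allocate):

import java.nio.ByteBuffer;

class BufferStateDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(16); // intentionally oversized heap buffer
        buffer.putInt(42);
        buffer.flip();
        System.out.println(buffer.remaining());     // 4  -> only the filled part is readable
        System.out.println(buffer.capacity());      // 16 -> whole backing array
        System.out.println(buffer.array().length);  // 16 -> reading the raw array picks up 12 trailing junk bytes
    }
}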
Reproducer:
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.ReactiveRedisTemplate;
import org.springframework.data.redis.serializer.RedisElementReader;
import org.springframework.data.redis.serializer.RedisElementWriter;
import org.springframework.data.redis.serializer.RedisSerializationContext;

import java.nio.ByteBuffer;

public class Foo {

    public static void main(String[] args) {
        RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration();
        LettuceConnectionFactory connectionFactory = new LettuceConnectionFactory(configuration);
        connectionFactory.afterPropertiesSet();

        RedisElementReader<Integer> reader = buffer -> { throw new UnsupportedOperationException(); };
        RedisElementWriter<Integer> writer = element -> {
            // Buffer is intentionally larger than necessary.
            ByteBuffer buffer = ByteBuffer.allocate(16);
            buffer.putInt(element);
            buffer.flip();
            System.out.printf("Serialized value (%d) has %d bytes%n", element, buffer.remaining());
            return buffer;
        };

        RedisSerializationContext<Integer, Integer> serializationContext = RedisSerializationContext
                .<Integer, Integer>newSerializationContext()
                .key(reader, writer)
                .value(reader, writer)
                .hashKey(reader, writer)
                .hashValue(reader, writer)
                .build();

        ReactiveRedisTemplate<Integer, Integer> intTemplate =
                new ReactiveRedisTemplate<>(connectionFactory, serializationContext);
        intTemplate.opsForZSet().add(20, 21, 22.0).block();

        ReactiveRedisTemplate<byte[], byte[]> byteTemplate =
                new ReactiveRedisTemplate<>(connectionFactory, RedisSerializationContext.byteArray());
        byteTemplate.opsForZSet()
                .scan(new byte[] { 0, 0, 0, 20 })
                .doOnNext(tuple -> {
                    // This prints "4 bytes" on 2.5.5, but "16 bytes" on 2.6.0.
                    System.out.printf("Deserialized value has %d bytes%n", tuple.getValue().length);
                })
                .then()
                .block();
    }
}
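A possible workaround until this is fixed (my own sketch, not an official recommendation): have the writer copy the readable region into an exactly sized buffer before returning it, so that limit() == capacity():

RedisElementWriter<Integer> exactWriter = element -> {
    ByteBuffer buffer = ByteBuffer.allocate(16);
    buffer.putInt(element);
    buffer.flip();
    // Copy only the readable region [position, limit) into a buffer with no spare capacity.
    ByteBuffer exact = ByteBuffer.allocate(buffer.remaining());
    exact.put(buffer);
    exact.flip();
    return exact;
};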
mp911de changed the title from "ZSet value serialization includes junk bytes" to "ByteUtils.getBytes(ByteBuffer) does not respect buffer position for heap buffers" on Dec 14, 2021.
…er).
We now properly extract the byte array from a ByteBuffer by copying its content respecting the read position and limits.
Closes #2204
Original Pull Request: #2213
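For reference, a copy that respects the read position and limit can look roughly like the following sketch (illustrative only, not the actual ByteUtils implementation):

static byte[] getBytes(ByteBuffer buffer) {
    // Duplicate so the caller's position is left untouched.
    ByteBuffer duplicate = buffer.duplicate();
    byte[] bytes = new byte[duplicate.remaining()];
    duplicate.get(bytes);
    return bytes;
}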