Commit c470262

Rewrite ConcurrentLruCache implementation
Prior to this commit, the `ConcurrentLruCache` implementation would not perform well under certain conditions. As long as the cache capacity was not reached, the cache avoided maintaining an eviction queue (reordering entries depending on how recently they were read). Once the cache capacity was reached, the LRU queue was updated on each read/write operation. This decreased performance significantly under contention when the capacity was reached.

This commit completely rewrites the internals of `ConcurrentLruCache`. `ConcurrentLruCache` is now a specialized version of the `ConcurrentLinkedHashMap` [1]. This change focuses on buffering read and write operations, only processing them at certain times to avoid contention.

When a cached entry is read, a read operation is queued, and buffered operations are drained if the buffer has reached a fixed limit. When a new cache entry is added or removed, a write operation is queued and triggers a drain attempt. When the capacity is exceeded, the cache polls items from the eviction queue, which keeps the least recently used elements first. Entries are removed until the size is back under capacity.

The behavior described here and the buffer sizes are optimized with the number of available processors in mind. Work is localized as much as possible on a per-thread basis to avoid contention on the eviction queue.

The new implementation has been tested with the JMH benchmark provided here, comparing the former `ConcurrentLruCache`, the new implementation, and the `ConcurrentLinkedHashMap` [1]. When testing with a cache at capacity, under contention, with a 10% cache miss rate, we're seeing a 40x improvement over the previous implementation and performance on par with the reference. See [2] for how to replicate the benchmark.

[1] https://github.com/ben-manes/concurrentlinkedhashmap
[2] https://github.com/spring-projects/spring-framework/wiki/Micro-Benchmarks

Closes gh-26320
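The buffering-and-draining scheme described in the commit message can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration of the technique (reads recorded in a buffer, a drain triggered when the buffer hits a fixed limit or a write occurs, eviction from the least-recently-used end of the queue). It is not the actual Spring implementation; the class name, the drain threshold, and the choice of data structures are all assumptions made for this sketch.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

// Hypothetical, simplified sketch of a buffered-read LRU cache.
// Not the actual ConcurrentLruCache implementation.
public class BufferedLruSketch<K, V> {

	private static final int READ_BUFFER_DRAIN_THRESHOLD = 32; // assumed fixed limit

	private final int capacity;
	private final Function<K, V> generator;
	private final Map<K, V> cache = new ConcurrentHashMap<>();
	// LRU order: least recently used entries sit at the head of the deque.
	private final ConcurrentLinkedDeque<K> evictionQueue = new ConcurrentLinkedDeque<>();
	private final Queue<K> readBuffer = new ConcurrentLinkedQueue<>();
	private final AtomicInteger readCount = new AtomicInteger();
	private final ReentrantLock drainLock = new ReentrantLock();

	public BufferedLruSketch(int capacity, Function<K, V> generator) {
		this.capacity = capacity;
		this.generator = generator;
	}

	public V get(K key) {
		V value = this.cache.get(key);
		if (value != null) {
			// Record the read in a buffer instead of reordering the eviction
			// queue directly; drain only once the buffer reaches its limit.
			this.readBuffer.add(key);
			if (this.readCount.incrementAndGet() >= READ_BUFFER_DRAIN_THRESHOLD) {
				drainOperations();
			}
			return value;
		}
		value = this.generator.apply(key);
		this.cache.put(key, value);
		this.evictionQueue.addLast(key);
		// A write triggers a drain attempt immediately.
		drainOperations();
		return value;
	}

	private void drainOperations() {
		if (this.drainLock.tryLock()) {  // only one thread drains at a time
			try {
				K key;
				while ((key = this.readBuffer.poll()) != null) {
					// Move each buffered read to the tail (most recently used).
					if (this.evictionQueue.remove(key)) {
						this.evictionQueue.addLast(key);
					}
				}
				this.readCount.set(0);
				// Evict least recently used entries until back under capacity.
				while (this.cache.size() > this.capacity) {
					K eldest = this.evictionQueue.pollFirst();
					if (eldest == null) {
						break;
					}
					this.cache.remove(eldest);
				}
			}
			finally {
				this.drainLock.unlock();
			}
		}
	}

	public int size() {
		return this.cache.size();
	}
}
```

Note how a thread that fails `tryLock()` simply moves on, which is one way work stays localized per thread: no reader ever blocks on the eviction queue, at the cost of the cache briefly exceeding its capacity until the next successful drain.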
1 parent 706c1ec commit c470262

File tree

4 files changed

+597
-93
lines changed

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
/*
 * Copyright 2002-2022 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.util;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Function;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

/**
 * Benchmarks for {@link ConcurrentLruCache}.
 * @author Brian Clozel
 */
@BenchmarkMode(Mode.Throughput)
public class ConcurrentLruCacheBenchmark {

	@Benchmark
	public void lruCache(BenchmarkData data, Blackhole bh) {
		for (String element : data.elements) {
			String value = data.lruCache.get(element);
			bh.consume(value);
		}
	}

	@State(Scope.Benchmark)
	public static class BenchmarkData {

		ConcurrentLruCache<String, String> lruCache;

		@Param({"100"})
		public int capacity;

		@Param({"0.1"})
		public float cacheMissRate;

		public List<String> elements;

		public Function<String, String> generator;

		@Setup(Level.Iteration)
		public void setup() {
			this.generator = key -> key + "value";
			this.lruCache = new ConcurrentLruCache<>(this.capacity, this.generator);
			Assert.isTrue(this.cacheMissRate < 1, "cache miss rate should be < 1");
			Random random = new Random();
			int elementsCount = Math.round(this.capacity * (1 + this.cacheMissRate));
			this.elements = new ArrayList<>(elementsCount);
			random.ints(elementsCount).forEach(value -> this.elements.add(String.valueOf(value)));
			this.elements.sort(String::compareTo);
		}
	}
}
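As a side note, the `setup()` method above sizes the element list so that the working set exceeds the cache capacity by exactly the configured miss rate, which is how the benchmark produces the "10% cache miss" scenario mentioned in the commit message. With the default parameters the arithmetic works out as follows (reproduced here outside the benchmark; the class name is made up for this snippet):

```java
public class ElementCountExample {
	public static void main(String[] args) {
		// Same sizing arithmetic as in BenchmarkData.setup() above.
		int capacity = 100;
		float cacheMissRate = 0.1f;
		int elementsCount = Math.round(capacity * (1 + cacheMissRate));
		// 110 distinct keys iterated against a 100-entry cache, so roughly
		// 10% of reads cannot be served from the cache once it is full.
		System.out.println(elementsCount);
	}
}
```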
