
Commit 1ba23fe

Merge branch 'master' into dynaminc_startup_nodes
2 parents: bb0a431 + 6da8086

File tree

6 files changed (+384 −14 lines)


CHANGES

Lines changed: 2 additions & 0 deletions

@@ -12,6 +12,8 @@
 * Fix auth bug when provided with no username (#2086)
 * Fix missing ClusterPipeline._lock (#2189)
 * Added dynaminc_startup_nodes configuration to RedisCluster
+* Fix reusing the old nodes' connections when cluster topology refresh is being done
+* Fix RedisCluster to immediately raise AuthenticationError without a retry
 * 4.1.3 (Feb 8, 2022)
 * Fix flushdb and flushall (#1926)
 * Add redis5 and redis4 dockers (#1871)
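
The two new entries concern RedisCluster behaviour during topology refreshes and on authentication failures. A minimal sketch of how this surfaces to callers, assuming the constructor keyword is spelled dynamic_startup_nodes (as in the redis-py API; the changelog line carries a typo) and that AuthenticationError is importable from redis.exceptions; this is illustration only, not code from the diff:

from redis.cluster import RedisCluster
from redis.exceptions import AuthenticationError

try:
    # Assumption: dynamic_startup_nodes=True lets the client replace its
    # startup-node list with the nodes discovered on each topology refresh.
    rc = RedisCluster(
        host="localhost",
        port=16379,
        password="wrong-password",  # deliberately wrong to show the error path
        dynamic_startup_nodes=True,
    )
    rc.ping()
except AuthenticationError as exc:
    # Per the changelog, a bad credential now raises immediately instead of
    # being retried against the rest of the cluster.
    print(f"authentication failed: {exc}")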

docs/examples.rst

Lines changed: 2 additions & 1 deletion

@@ -10,4 +10,5 @@ Examples
 examples/asyncio_examples
 examples/search_json_examples
 examples/set_and_get_examples
-examples/search_vector_similarity_examples
+examples/search_vector_similarity_examples
+examples/pipeline_examples

docs/examples/pipeline_examples.ipynb

Lines changed: 308 additions & 0 deletions

@@ -0,0 +1,308 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Pipeline examples\n",
    "\n",
    "This example briefly shows how to use pipelines in `redis-py`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Checking that Redis is running"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import redis\n",
    "\n",
    "r = redis.Redis(decode_responses=True)\n",
    "r.ping()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Simple example"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating a pipeline instance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "pipe = r.pipeline()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Adding commands to the pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Pipeline<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pipe.set(\"a\", \"a value\")\n",
    "pipe.set(\"b\", \"b value\")\n",
    "\n",
    "pipe.get(\"a\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Executing the pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[True, True, 'a value']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pipe.execute()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The responses of the three commands are stored in a list. In the example above, the first two booleans indicate that the `set` commands were successful, and the last element of the list is the result of the `get(\"a\")` command."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Chained call\n",
    "\n",
    "The same result as above can be obtained in one line of code by chaining the operations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[True, True, 'a value']"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pipe = r.pipeline()\n",
    "pipe.set(\"a\", \"a value\").set(\"b\", \"b value\").get(\"a\").execute()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performance comparison\n",
    "\n",
    "Using pipelines can improve performance; for more information, see the [Redis documentation about pipelining](https://redis.io/docs/manual/pipelining/). Here is a simple performance comparison between basic and pipelined commands (we simply increment a value and measure the time taken by both methods)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "\n",
    "incr_value = 100000"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Without pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "r.set(\"incr_key\", \"0\")\n",
    "\n",
    "start = datetime.now()\n",
    "\n",
    "for _ in range(incr_value):\n",
    "    r.incr(\"incr_key\")\n",
    "res_without_pipeline = r.get(\"incr_key\")\n",
    "\n",
    "time_without_pipeline = (datetime.now() - start).total_seconds()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Without pipeline\n",
      "================\n",
      "Time taken: 21.759733\n",
      "Increment value: 100000\n"
     ]
    }
   ],
   "source": [
    "print(\"Without pipeline\")\n",
    "print(\"================\")\n",
    "print(\"Time taken: \", time_without_pipeline)\n",
    "print(\"Increment value: \", res_without_pipeline)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### With pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "r.set(\"incr_key\", \"0\")\n",
    "\n",
    "start = datetime.now()\n",
    "\n",
    "pipe = r.pipeline()\n",
    "for _ in range(incr_value):\n",
    "    pipe.incr(\"incr_key\")\n",
    "pipe.get(\"incr_key\")\n",
    "res_with_pipeline = pipe.execute()[-1]\n",
    "\n",
    "time_with_pipeline = (datetime.now() - start).total_seconds()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "With pipeline\n",
      "=============\n",
      "Time taken: 2.357863\n",
      "Increment value: 100000\n"
     ]
    }
   ],
   "source": [
    "print(\"With pipeline\")\n",
    "print(\"=============\")\n",
    "print(\"Time taken: \", time_with_pipeline)\n",
    "print(\"Increment value: \", res_with_pipeline)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using pipelines provides the same result in much less time."
   ]
  }
 ],
 "metadata": {
  "interpreter": {
   "hash": "84048e2f8e89effc8610b2fb270e4858ef00e9403d223856d62b05266db287ca"
  },
  "kernelspec": {
   "display_name": "Python 3.9.2 ('.venv': venv)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.2"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
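
For readers who want to try the notebook's benchmark outside Jupyter, here is a rough standalone sketch of the same increment comparison. It is not part of the diff; it assumes a local Redis on the default port and uses pipeline(transaction=False), a redis-py option that skips the MULTI/EXEC wrapper when only batching is wanted:

import redis
from datetime import datetime

r = redis.Redis(decode_responses=True)
iterations = 100_000

# Baseline: one network round trip per INCR.
r.set("incr_key", "0")
start = datetime.now()
for _ in range(iterations):
    r.incr("incr_key")
time_without_pipeline = (datetime.now() - start).total_seconds()

# Pipelined: queue the INCRs locally and send them in one batch.
r.set("incr_key", "0")
start = datetime.now()
pipe = r.pipeline(transaction=False)  # batching only, no MULTI/EXEC
for _ in range(iterations):
    pipe.incr("incr_key")
pipe.execute()
time_with_pipeline = (datetime.now() - start).total_seconds()

print(f"Without pipeline: {time_without_pipeline:.2f}s")
print(f"With pipeline:    {time_with_pipeline:.2f}s")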

0 commit comments
