Commit b47bf78

Merge pull request #158 from wizzat/add_tests
Improve Tests, fix connection error timeout, other issues
2 parents: 3b18043 + b81bf5f; commit: b47bf78

37 files changed (+2730, -2001 lines)

.gitmodules

Lines changed: 6 additions & 3 deletions

@@ -1,3 +1,6 @@
-[submodule "kafka-src"]
-    path = kafka-src
-    url = git://github.com/apache/kafka.git
+[submodule "servers/0.8.0/kafka-src"]
+    path = servers/0.8.0/kafka-src
+    url = https://github.com/apache/kafka.git
+[submodule "servers/0.8.1/kafka-src"]
+    path = servers/0.8.1/kafka-src
+    url = https://github.com/apache/kafka.git

.travis.yml

Lines changed: 14 additions & 11 deletions

@@ -1,20 +1,23 @@
 language: python
 
 python:
-  - 2.7
+  - 2.6
+  - 2.7
+  - pypy
 
 before_install:
-  - git submodule update --init --recursive
-  - cd kafka-src
-  - ./sbt clean update package assembly-package-dependency
-  - cd -
+  - git submodule update --init --recursive
+  - sudo apt-get install libsnappy-dev
+  - ./build_integration.sh
 
 install:
-  - pip install .
-# Deal with issue on Travis builders re: multiprocessing.Queue :(
-# See https://github.com/travis-ci/travis-cookbooks/issues/155
-  - sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm
+  - pip install tox
+  - pip install .
+# Deal with issue on Travis builders re: multiprocessing.Queue :(
+# See https://github.com/travis-ci/travis-cookbooks/issues/155
+  - sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm
 
 script:
-  - python -m test.test_unit
-  - python -m test.test_integration
+  - tox -e `./travis_selector.sh $TRAVIS_PYTHON_VERSION`
+  - KAFKA_VERSION=0.8.0 tox -e `./travis_selector.sh $TRAVIS_PYTHON_VERSION`
+  - KAFKA_VERSION=0.8.1 tox -e `./travis_selector.sh $TRAVIS_PYTHON_VERSION`
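
The tox environment here is derived from `$TRAVIS_PYTHON_VERSION` by `travis_selector.sh`, a helper script whose contents are not shown in this diff. A minimal sketch of the mapping it presumably performs (the function and file names below are hypothetical):

```python
import sys

def tox_env(travis_python_version):
    # Presumed travis_selector.sh behavior: "pypy" maps to the "pypy"
    # tox env; "2.6"/"2.7" map to "py26"/"py27".
    if travis_python_version == "pypy":
        return "pypy"
    return "py" + travis_python_version.replace(".", "")

if __name__ == "__main__":
    # e.g. `python tox_selector.py 2.7` prints "py27"
    print(tox_env(sys.argv[1]))
```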

README.md

Lines changed: 36 additions & 27 deletions

@@ -7,8 +7,6 @@ high-level consumer and producer classes. Request batching is supported by the
 protocol as well as broker-aware request routing. Gzip and Snappy compression
 is also supported for message sets.
 
-Compatible with Apache Kafka 0.8.1
-
 http://kafka.apache.org/
 
 # License
@@ -17,8 +15,17 @@ Copyright 2013, David Arthur under Apache License, v2.0. See `LICENSE`
 
 # Status
 
-The current version of this package is **0.9.0** and is compatible with
-Kafka brokers running version **0.8.1**.
+The current version of this package is **0.9.1** and is compatible with
+
+Kafka broker versions
+- 0.8.0
+- 0.8.1
+- 0.8.1.1
+
+Python versions
+- 2.6.9
+- 2.7.6
+- pypy 2.2.1
 
 # Usage
 
@@ -155,6 +162,7 @@ python setup.py install
 
 Download and build Snappy from http://code.google.com/p/snappy/downloads/list
 
+Linux:
 ```shell
 wget http://snappy.googlecode.com/files/snappy-1.0.5.tar.gz
 tar xzvf snappy-1.0.5.tar.gz
@@ -164,6 +172,11 @@ make
 sudo make install
 ```
 
+OSX:
+```shell
+brew install snappy
+```
+
 Install the `python-snappy` module
 ```shell
 pip install python-snappy
@@ -173,40 +186,36 @@ pip install python-snappy
 
 ## Run the unit tests
 
-_These are broken at the moment_
-
 ```shell
-tox ./test/test_unit.py
-```
-
-or
-
-```shell
-python -m test.test_unit
+tox
 ```
 
 ## Run the integration tests
 
-First, checkout the Kafka source
+The integration tests will actually start up a real local Zookeeper
+instance and Kafka brokers, and send messages in using the client.
 
+Note that you may want to add this to your global gitignore:
 ```shell
-git submodule init
-git submodule update
-cd kafka-src
-./sbt update
-./sbt package
-./sbt assembly-package-dependency
+.gradle/
+clients/build/
+contrib/build/
+contrib/hadoop-consumer/build/
+contrib/hadoop-producer/build/
+core/build/
+core/data/
+examples/build/
+perf/build/
 ```
 
-And then run the tests. This will actually start up a real local Zookeeper
-instance and Kafka brokers, and send messages in using the client.
-
+First, check out the Kafka source:
 ```shell
-tox ./test/test_integration.py
+git submodule update --init
+./build_integration.sh
 ```
 
-or
-
+Then run the tests against supported Kafka versions:
 ```shell
-python -m test.test_integration
+KAFKA_VERSION=0.8.0 tox
+KAFKA_VERSION=0.8.1 tox
 ```
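
The `KAFKA_VERSION` variable selects which locally built broker the integration suite runs against. A minimal sketch of how a test case might guard on it, assuming fixtures skip when no broker build is selected (the class below is hypothetical, not the project's actual fixture code):

```python
import os
import unittest

class KafkaIntegrationTestCase(unittest.TestCase):
    def setUp(self):
        # With no broker build selected there is nothing to start up,
        # so skip instead of failing.
        if "KAFKA_VERSION" not in os.environ:
            self.skipTest("set KAFKA_VERSION (e.g. 0.8.0) to run integration tests")
```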

build_integration.sh

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+#!/bin/bash
+
+git submodule update --init
+(cd servers/0.8.0/kafka-src && ./sbt update package assembly-package-dependency)
+(cd servers/0.8.1/kafka-src && ./gradlew jar)
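
Each supported broker version is built in its own subtree, so tooling can resolve a build by version number. A minimal sketch of that lookup under the `servers/<version>/kafka-src` layout introduced above (the helper function is hypothetical):

```python
import os

def kafka_src_dir(version=None):
    # Locate the per-version Kafka checkout produced by build_integration.sh.
    version = version or os.environ.get("KAFKA_VERSION", "0.8.0")
    path = os.path.join("servers", version, "kafka-src")
    if not os.path.isdir(path):
        raise RuntimeError("no Kafka build at %s; run ./build_integration.sh first" % path)
    return path
```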

kafka/client.py

Lines changed: 22 additions & 33 deletions

@@ -1,15 +1,16 @@
 import copy
 import logging
+import collections
+
+import kafka.common
 
-from collections import defaultdict
 from functools import partial
 from itertools import count
-
-from kafka.common import (ErrorMapping, ErrorStrings, TopicAndPartition,
+from kafka.common import (TopicAndPartition,
                           ConnectionError, FailedPayloadsError,
-                          BrokerResponseError, PartitionUnavailableError,
-                          LeaderUnavailableError,
-                          KafkaUnavailableError)
+                          PartitionUnavailableError,
+                          LeaderUnavailableError, KafkaUnavailableError,
+                          UnknownTopicOrPartitionError, NotLeaderForPartitionError)
 
 from kafka.conn import collect_hosts, KafkaConnection, DEFAULT_SOCKET_TIMEOUT_SECONDS
 from kafka.protocol import KafkaProtocol
@@ -39,29 +40,23 @@ def __init__(self, hosts, client_id=CLIENT_ID,
         self.topic_partitions = {}  # topic_id -> [0, 1, 2, ...]
         self.load_metadata_for_topics()  # bootstrap with all metadata
 
+
     ##################
     #   Private API  #
     ##################
 
     def _get_conn(self, host, port):
         "Get or create a connection to a broker using host and port"
-
         host_key = (host, port)
         if host_key not in self.conns:
-            self.conns[host_key] = KafkaConnection(host, port, timeout=self.timeout)
+            self.conns[host_key] = KafkaConnection(
+                host,
+                port,
+                timeout=self.timeout
+            )
 
         return self.conns[host_key]
 
-    def _get_conn_for_broker(self, broker):
-        """
-        Get or create a connection to a broker
-        """
-        if (broker.host, broker.port) not in self.conns:
-            self.conns[(broker.host, broker.port)] = \
-                KafkaConnection(broker.host, broker.port, timeout=self.timeout)
-
-        return self._get_conn(broker.host, broker.port)
-
     def _get_leader_for_partition(self, topic, partition):
         """
         Returns the leader for a partition or None if the partition exists
@@ -99,10 +94,9 @@ def _send_broker_unaware_request(self, requestId, request):
                 conn.send(requestId, request)
                 response = conn.recv(requestId)
                 return response
-            except Exception, e:
+            except Exception as e:
                 log.warning("Could not send request [%r] to server %s:%i, "
                             "trying next server: %s" % (request, host, port, e))
-                continue
 
         raise KafkaUnavailableError("All servers failed to process request")
 
@@ -130,7 +124,7 @@ def _send_broker_aware_request(self, payloads, encoder_fn, decoder_fn):
 
         # Group the requests by topic+partition
         original_keys = []
-        payloads_by_broker = defaultdict(list)
+        payloads_by_broker = collections.defaultdict(list)
 
         for payload in payloads:
             leader = self._get_leader_for_partition(payload.topic,
@@ -151,7 +145,7 @@ def _send_broker_aware_request(self, payloads, encoder_fn, decoder_fn):
 
         # For each broker, send the list of request payloads
        for broker, payloads in payloads_by_broker.items():
-            conn = self._get_conn_for_broker(broker)
+            conn = self._get_conn(broker.host, broker.port)
             requestId = self._next_id()
             request = encoder_fn(client_id=self.client_id,
                                  correlation_id=requestId, payloads=payloads)
@@ -164,11 +158,11 @@
                     continue
                 try:
                     response = conn.recv(requestId)
-                except ConnectionError, e:
+                except ConnectionError as e:
                     log.warning("Could not receive response to request [%s] "
                                 "from server %s: %s", request, conn, e)
                     failed = True
-            except ConnectionError, e:
+            except ConnectionError as e:
                 log.warning("Could not send request [%s] to server %s: %s",
                             request, conn, e)
                 failed = True
@@ -191,16 +185,11 @@ def __repr__(self):
         return '<KafkaClient client_id=%s>' % (self.client_id)
 
     def _raise_on_response_error(self, resp):
-        if resp.error == ErrorMapping.NO_ERROR:
-            return
-
-        if resp.error in (ErrorMapping.UNKNOWN_TOPIC_OR_PARTITON,
-                          ErrorMapping.NOT_LEADER_FOR_PARTITION):
+        try:
+            kafka.common.check_error(resp)
+        except (UnknownTopicOrPartitionError, NotLeaderForPartitionError) as e:
             self.reset_topic_metadata(resp.topic)
-
-        raise BrokerResponseError(
-            "Request for %s failed with errorcode=%d (%s)" %
-            (TopicAndPartition(resp.topic, resp.partition), resp.error, ErrorStrings[resp.error]))
+            raise
 
     #################
     #   Public API  #
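
The `_raise_on_response_error` rewrite delegates error-code inspection to `kafka.common.check_error`, which raises typed exceptions such as `UnknownTopicOrPartitionError` rather than one generic `BrokerResponseError`; the client catches the two metadata-related errors, resets its cached topic metadata, and re-raises. A minimal sketch of that pattern, assuming `kafka.common` keeps a mapping from broker error codes to exception classes (everything beyond the names visible in the diff is an assumption):

```python
# Sketch of the check_error pattern behind _raise_on_response_error.
class KafkaError(Exception):
    pass

class UnknownTopicOrPartitionError(KafkaError):
    errno = 3   # broker error code for an unknown topic or partition

class NotLeaderForPartitionError(KafkaError):
    errno = 6   # broker error code for stale leader metadata

kafka_errors = {
    UnknownTopicOrPartitionError.errno: UnknownTopicOrPartitionError,
    NotLeaderForPartitionError.errno: NotLeaderForPartitionError,
}

def check_error(response):
    if response.error == 0:
        return  # no error on this response
    # Unmapped codes fall back to the generic base class.
    raise kafka_errors.get(response.error, KafkaError)(response)
```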
