# Topcoder - Member Processor

## Dependencies

- nodejs https://nodejs.org/en/ (v8+)
- Kafka
- ElasticSearch v6
- Docker, Docker Compose

## Configuration

Configuration for the processor is at `config/default.js`.
The following parameters can be set in config files or in env variables:
- DISABLE_LOGGING: whether to disable logging
- LOG_LEVEL: the log level; default value: 'debug'
- KAFKA_URL: comma separated Kafka hosts; default value: 'localhost:9092'
- KAFKA_CLIENT_CERT: Kafka connection certificate, optional; default value is undefined;
  if not provided, then SSL connection is not used and a direct insecure connection is used;
  if provided, it can be either the path to the certificate file or the certificate content
- KAFKA_CLIENT_CERT_KEY: Kafka connection private key, optional; default value is undefined;
  if not provided, then SSL connection is not used and a direct insecure connection is used;
  if provided, it can be either the path to the private key file or the private key content
- CREATE_PROFILE_TOPIC: create profile Kafka topic, default value is 'member.action.profile.create'
- UPDATE_PROFILE_TOPIC: update profile Kafka topic, default value is 'member.action.profile.update'
- DELETE_PROFILE_TOPIC: delete profile Kafka topic, default value is 'member.action.profile.delete'
- CREATE_TRAIT_TOPIC: create trait Kafka topic, default value is 'member.action.profile.trait.create'
- UPDATE_TRAIT_TOPIC: update trait Kafka topic, default value is 'member.action.profile.trait.update'
- DELETE_TRAIT_TOPIC: delete trait Kafka topic, default value is 'member.action.profile.trait.delete'
- CREATE_PHOTO_TOPIC: create photo Kafka topic, default value is 'member.action.profile.photo.create'
- UPDATE_PHOTO_TOPIC: update photo Kafka topic, default value is 'member.action.profile.photo.update'
- esConfig: ElasticSearch config

Refer to the `esConfig` variable in `config/default.js` for ES related configuration.
Usually you need to set the ES_HOST environment variable to match your ES setup, e.g.
`export ES_HOST=localhost:9200`

Also note that there is a `/health` endpoint that checks the health of the app. It sets up an expressjs server that listens on the environment variable `PORT`. `PORT` is not part of the configuration file and must be passed as an environment variable.
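As a rough sketch, the env-variable fallback pattern described above typically looks like the following (the shape is an assumption for illustration; the real defaults and the full key list, including `esConfig`, live in `config/default.js`):

```javascript
// Sketch of config resolution with env-variable overrides (assumed shape,
// not the actual contents of config/default.js).
const config = {
  DISABLE_LOGGING: process.env.DISABLE_LOGGING === 'true',
  LOG_LEVEL: process.env.LOG_LEVEL || 'debug',
  KAFKA_URL: process.env.KAFKA_URL || 'localhost:9092',
  CREATE_PROFILE_TOPIC: process.env.CREATE_PROFILE_TOPIC || 'member.action.profile.create'
}

console.log('Effective log level:', config.LOG_LEVEL)
```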

## Local Kafka setup

- `http://kafka.apache.org/quickstart` contains details to set up and manage a Kafka server;
  the steps below set up a Kafka server on Mac/Linux, on Windows use the .bat commands in bin/windows instead
- download kafka at `https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz`
- extract the downloaded tgz file
- go to the extracted directory kafka_2.11-1.1.0
- start the ZooKeeper server:
  `bin/zookeeper-server-start.sh config/zookeeper.properties`
- in another terminal, go to the same directory and start the Kafka server:
  `bin/kafka-server-start.sh config/server.properties`
- note that the ZooKeeper server is at localhost:2181, and the Kafka server is at localhost:9092
- in another terminal, go to the same directory and create the topics:
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.create`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.update`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.delete`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.trait.create`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.trait.update`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.trait.delete`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.photo.create`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic member.action.profile.photo.update`
- verify that the topics are created:
  `bin/kafka-topics.sh --list --zookeeper localhost:2181`,
  it should list the created topics
- run the producer and then write some messages into the console to send to the `member.action.profile.create` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic member.action.profile.create`
  in the console, write messages, one message per line:
  `{ "topic": "member.action.profile.create", "originator": "member-api", "timestamp": "2018-02-16T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "email": "[email protected]", "sex": "male", "created": "2018-01-02T00:00:00", "createdBy": "admin" } }`
- optionally, in another terminal, go to the same directory and start a consumer to view the messages:
  `bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic member.action.profile.create --from-beginning`
- writing/reading messages to/from other topics is similar
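The steps above produce JSON messages of a fixed envelope shape (`topic`, `originator`, `timestamp`, `mime-type`, `payload`). The sketch below shows one way such a message could be turned into the document shape the processor indexes; the function name is hypothetical and not from the actual source, but the `resource` field and the `'profile' + userId` id follow the Notes section at the end of this README:

```javascript
// Illustrative sketch (hypothetical function name): convert a profile-create
// message into { id, doc } for Elasticsearch indexing.
function toProfileDocument (rawMessage) {
  const message = JSON.parse(rawMessage)
  if (message.topic !== 'member.action.profile.create') {
    throw new Error(`Unexpected topic: ${message.topic}`)
  }
  // The processor adds a `resource` field to the payload before indexing;
  // the document id is 'profile' + userId (see the Notes section).
  const doc = Object.assign({}, message.payload, { resource: 'profile' })
  return { id: `profile${doc.userId}`, doc }
}

// Sample envelope matching the console-producer message above.
const sample = JSON.stringify({
  topic: 'member.action.profile.create',
  originator: 'member-api',
  timestamp: '2018-02-16T00:00:00',
  'mime-type': 'application/json',
  payload: { userId: 1111, userHandle: 'handle', sex: 'male', created: '2018-01-02T00:00:00', createdBy: 'admin' }
})
console.log(toProfileDocument(sample).id) // profile1111
```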


## ElasticSearch setup

You may download ElasticSearch v6, install it and run it locally.
Or set up an ES service on AWS.
Another simple way is to use docker compose:
go to the docker-es folder and run `docker-compose up`

## Local deployment

- install dependencies: `npm i`
- run code lint check: `npm run lint`; running `npm run lint-fix` can fix some lint errors, if any
- initialize Elasticsearch (creates the configured Elasticsearch index if not present): `npm run init-es`
- or to re-create the index: `npm run init-es force`
- run tests: `npm run test`
- start the processor app: `npm start`

## Local Deployment with Docker

To run the Member ES Processor using docker, follow the steps below:

1. Navigate to the directory `docker`

2. Rename the file `sample.api.env` to `api.env`

3. Set the required AWS credentials in the file `api.env`

4. Once that is done, run the following command

```
docker-compose up
```

5. When you run the application for the first time, it will take some time to download the image and install the dependencies

## Unit tests and Integration tests

Integration tests use a different index, `member-test`, which may not be the same as the usual index.

Please ensure to create the index `member-test` (or the index specified in the environment variable `ES_INDEX_TEST`) before running the integration tests. You can re-use the existing scripts to create the index, but you need to set the environment variable below:

```
export ES_INDEX=member-test
```
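As a sketch, the test-index resolution described above can be expressed as a single fallback (an assumption about the test helpers, shown only to make the precedence explicit):

```javascript
// Assumed sketch: integration tests target the index named by ES_INDEX_TEST,
// falling back to 'member-test' when it is not set.
const testIndex = process.env.ES_INDEX_TEST || 'member-test'
console.log(`Integration tests run against index: ${testIndex}`)
```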

#### Running unit tests and coverage

To run unit tests alone

```
npm run test
```

To run unit tests with coverage report

```
npm run cov
```

#### Running integration tests and coverage

To run integration tests alone

```
npm run e2e
```

To run integration tests with coverage report

```
npm run cov-e2e
```


## Verification

- start the kafka server, start elasticsearch, initialize Elasticsearch, start the processor app
- start kafka-console-producer to write messages to the `member.action.profile.create` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic member.action.profile.create`
- write message:
  `{ "topic": "member.action.profile.create", "originator": "member-api", "timestamp": "2018-02-16T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "email": "[email protected]", "sex": "male", "created": "2018-02-16T00:00:00", "createdBy": "admin" } }`
- run command `npm run view-data profile1111` to view the created data; you will see that the data is properly created:

```bash
info: Elasticsearch data:
info: {
    "userId": 1111,
    "userHandle": "handle",

    "sex": "male",
    "created": "2018-02-16T00:00:00",
    "createdBy": "admin",
    "resource": "profile"
}
```

- you may write an invalid message like:
  `{ "topic": "member.action.profile.create", "originator": "member-api", "timestamp": "2018-02-16T00:00:00", "mime-type": "application/json", "payload": { "user-id": "1111", "userHandle": "handle", "sex": "male", "created": "2018-01-02T00:00:00", "createdBy": "admin" } }`
- then in the app console, you will see an error message

- start kafka-console-producer to write messages to the `member.action.profile.update` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic member.action.profile.update`
- write message:
  `{ "topic": "member.action.profile.update", "originator": "member-api", "timestamp": "2018-03-02T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "email": "[email protected]", "sex": "male", "created": "2018-01-02T00:00:00", "createdBy": "admin", "updated": "2018-03-02T00:00:00", "updatedBy": "admin" } }`
- run command `npm run view-data profile1111` to view the updated data; you will see that the data is properly updated:

```bash
info: Elasticsearch data:
info: {
    "userId": 1111,
    "userHandle": "handle",

    "sex": "male",
    "created": "2018-01-02T00:00:00",
    "createdBy": "admin",
    "resource": "profile",
    "updatedBy": "admin",
    "updated": "2018-03-02T00:00:00"
}
```

- start kafka-console-producer to write messages to the `member.action.profile.delete` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic member.action.profile.delete`
- write message:
  `{ "topic": "member.action.profile.delete", "originator": "member-api", "timestamp": "2018-04-16T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle" } }`
- run command `npm run view-data profile1111` to confirm the deletion; you will see that the data is properly deleted:

```bash
info: The data is not found.
```

- management of the other data types is similar; below are valid Kafka messages for the other resource types, so that you may test them easily
- create trait:
  `{ "topic": "member.action.profile.trait.create", "originator": "member-api", "timestamp": "2018-02-16T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "traitId": 123, "created": "2018-02-16T00:00:00", "createdBy": "admin" } }`
  `{ "topic": "member.action.profile.trait.create", "originator": "member-api", "timestamp": "2018-02-16T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "traitId": 456, "created": "2018-02-16T00:00:00", "createdBy": "admin" } }`
- update trait:
  `{ "topic": "member.action.profile.trait.update", "originator": "member-api", "timestamp": "2018-02-17T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "traitId": 123, "created": "2018-02-16T00:00:00", "createdBy": "admin", "updated": "2018-02-17T00:00:00", "updatedBy": "admin" } }`
- delete trait:
  `{ "topic": "member.action.profile.trait.delete", "originator": "member-api", "timestamp": "2018-02-18T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "memberProfileTraitIds": [123, 456] } }`

- create photo:
  `{ "topic": "member.action.profile.photo.create", "originator": "member-api", "timestamp": "2018-02-16T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "photoURL": "http://test.com/123.png", "created": "2018-02-16T00:00:00", "createdBy": "admin" } }`
- update photo:
  `{ "topic": "member.action.profile.photo.update", "originator": "member-api", "timestamp": "2018-02-17T00:00:00", "mime-type": "application/json", "payload": { "userId": 1111, "userHandle": "handle", "photoURL": "http://test.com/456.png", "created": "2018-02-16T00:00:00", "createdBy": "admin", "updated": "2018-02-16T00:00:00", "updatedBy": "admin" } }`

- to view photo data, run command `npm run view-data profile<userId>photo`, e.g. `npm run view-data profile1111photo`
- to view trait data, run command `npm run view-data profile<userId>trait<traitId>`, e.g. `npm run view-data profile1111trait123`


## Notes
- the processor adds a resource field (profile/photo/trait) to the message payload to be indexed in ElasticSearch;
  ('profile' + userId) is used to identify a profile,
  ('profile' + userId + 'photo') is used to identify a photo,
  ('profile' + userId + 'trait' + traitId) is used to identify a trait
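The identifier scheme above can be sketched as small helpers (function names are illustrative, not from the actual source; the concatenation matches the `view-data` examples like `profile1111trait123`):

```javascript
// Illustrative helpers for the Elasticsearch document id scheme described above.
function profileId (userId) {
  return `profile${userId}`
}
function photoId (userId) {
  return `profile${userId}photo`
}
function traitId (userId, memberProfileTraitId) {
  return `profile${userId}trait${memberProfileTraitId}`
}

console.log(traitId(1111, 123)) // profile1111trait123
```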