# Topcoder - Resources Elasticsearch Processor

## Dependencies

- Node.js v10 (https://nodejs.org/en/)
- Kafka
- Elasticsearch 6.8.4
- Docker, Docker Compose

## Configuration

Configuration for the processor is at `config/default.js`.
The following parameters can be set in config files or in env variables:

- DISABLE_LOGGING: whether to disable logging; default value is false
- LOG_LEVEL: the log level; default value: 'debug'
- KAFKA_URL: comma separated Kafka hosts; default value: 'localhost:9092'
- KAFKA_GROUP_ID: the Kafka group id; default value: 'resource-processor-es'
- KAFKA_CLIENT_CERT: Kafka connection certificate, optional; default value is undefined;
  if not provided, then SSL connection is not used, direct insecure connection is used;
  if provided, it can be either path to certificate file or certificate content
- KAFKA_CLIENT_CERT_KEY: Kafka connection private key, optional; default value is undefined;
  if not provided, then SSL connection is not used, direct insecure connection is used;
  if provided, it can be either path to private key file or private key content
- RESOURCE_CREATE_TOPIC: create resource Kafka topic, default value is 'challenge.action.resource.create'
- RESOURCE_DELETE_TOPIC: delete resource Kafka topic, default value is 'challenge.action.resource.delete'
- RESOURCE_ROLE_CREATE_TOPIC: create resource role Kafka topic, default value is 'challenge.action.resource.role.create'
- RESOURCE_ROLE_UPDATE_TOPIC: update resource role Kafka topic, default value is 'challenge.action.resource.role.update'
- ES.HOST: Elasticsearch host, default value is 'localhost:9200'
- ES.AWS_REGION: AWS region to be used if we use AWS ES, default value is 'us-east-1'
- ES.API_VERSION: Elasticsearch API version, default value is '6.8'
- ES.RESOURCE_INDEX: Elasticsearch index name for resources, default value is 'resources'
- ES.RESOURCE_TYPE: Elasticsearch index type for resources, default value is '_doc'
- ES.RESOURCE_ROLE_INDEX: Elasticsearch index name for resource roles, default value is 'resource_roles'
- ES.RESOURCE_ROLE_TYPE: Elasticsearch index type for resource roles, default value is '_doc'

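For example, a few of these can be overridden from the shell before starting the processor. This is a minimal sketch; the flat names (KAFKA_URL, LOG_LEVEL) are documented above, while the exact environment variable for the nested `ES.HOST` key depends on how `config/default.js` reads it, so treat `ES_HOST` below as an assumption:

```bash
# Minimal sketch: override selected settings via environment variables.
# KAFKA_URL and LOG_LEVEL are documented above; ES_HOST is an assumption
# about how the nested ES.HOST key is read in config/default.js.
export KAFKA_URL=localhost:9092
export LOG_LEVEL=info
export ES_HOST=localhost:9200
npm start
```
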
Also note that there is a `/health` endpoint that checks the health of the app.
It sets up an expressjs server and listens on the environment variable `PORT`.
`PORT` is not part of the configuration file and needs to be passed as an environment variable;
the health check port defaults to 3000 if not set.
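
Once the processor is running, you can probe the endpoint from the command line (the expected response body is the one shown in the Verification section below):

```bash
# Probe the health check endpoint on the default port (3000)
curl http://localhost:3000/health
# expected response: {"checksRun":1}
```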


## Local Kafka setup

- `http://kafka.apache.org/quickstart` contains details on how to set up and manage a Kafka server;
  the steps below set up a Kafka server on Mac; on Windows, use the bat commands in bin/windows instead
- download Kafka at `https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz`
- extract the downloaded tgz file
- go to the extracted directory kafka_2.11-1.1.0
- start the ZooKeeper server:
  `bin/zookeeper-server-start.sh config/zookeeper.properties`
- in another terminal, go to the same directory and start the Kafka server:
  `bin/kafka-server-start.sh config/server.properties`
- note that the ZooKeeper server is at localhost:2181, and the Kafka server is at localhost:9092
- in another terminal, go to the same directory and create the topics, either one by one as below or with the loop sketched after this list:
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.create`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.delete`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.role.create`
  `bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic challenge.action.resource.role.update`
- verify that the topics were created:
  `bin/kafka-topics.sh --list --zookeeper localhost:2181`
  should list the created topics
- run the producer and then write a message into the console to send to the `challenge.action.resource.create` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic challenge.action.resource.create`
  in the console, write messages, one message per line:
  `{ "topic": "challenge.action.resource.create", "originator": "topcoder-resources-api", "timestamp": "2019-02-16T00:00:00", "mime-type": "application/json", "payload": { "id": "173803d3-019e-4033-b1cf-d7205c7f774c", "challengeId": "123", "memberId": "456", "memberHandle": "tester", "roleId": "172803d3-019e-4033-b1cf-d7205c7f774a" } }`
- optionally, in another terminal in the same directory, start a consumer to view the messages:
  `bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic challenge.action.resource.create --from-beginning`
- writing/reading messages to/from the other topics is similar
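
Since the four topic-creation commands above differ only in the topic name, they can also be run as one loop (same flags as above):

```bash
# Create all four topics consumed by the processor in one pass
for topic in challenge.action.resource.create \
             challenge.action.resource.delete \
             challenge.action.resource.role.create \
             challenge.action.resource.role.update; do
  bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic "$topic"
done
```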


## Elasticsearch setup

Just run `docker-compose up` in the `local` folder.
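
You can confirm the cluster is reachable before initializing the indices; this assumes the compose file exposes the default port 9200 (the configured `ES.HOST`):

```bash
# Elasticsearch answers with cluster info on its root endpoint;
# the version number in the response should be 6.8.x
curl http://localhost:9200
```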


## Local deployment

- install dependencies `npm i`
- run code lint check `npm run lint`
- fix code lint errors where possible `npm run lint:fix`
- initialize Elasticsearch, creating (or recreating, if present) the configured Elasticsearch indices: `npm run init-es` (an index check is sketched after this list)
- start processor app `npm start`
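
After `npm run init-es`, you can check that the configured indices exist:

```bash
# List indices; `resources` and `resource_roles` should appear
curl 'http://localhost:9200/_cat/indices?v'
```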

## Local Deployment with Docker

To run the Resources ES Processor using docker, follow the steps below:

1. Navigate to the directory `docker`

2. Rename the file `sample.api.env` to `api.env`

3. Set the required AWS credentials in the file `api.env` (an illustrative sketch of this file follows these steps)

4. Once that is done, run the following command:

```
docker-compose up
```

5. When you run the application for the first time, it will take some time to download the image and install the dependencies

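For illustration, `api.env` might look like the following; the variable names shown are the standard AWS credential names and are an assumption here, since the authoritative list is in `sample.api.env`:

```bash
# docker/api.env - illustrative values only; see sample.api.env for the
# authoritative variable names required by the processor
AWS_ACCESS_KEY_ID=<your-aws-access-key>
AWS_SECRET_ACCESS_KEY=<your-aws-secret-key>
```
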

## Verification

- set up the Kafka server, start Elasticsearch, initialize the Elasticsearch indices, and start the processor app
- start kafka-console-producer to write messages to the `challenge.action.resource.create` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic challenge.action.resource.create`
- write message:
  `{ "topic": "challenge.action.resource.create", "originator": "topcoder-resources-api", "timestamp": "2019-02-16T00:00:00", "mime-type": "application/json", "payload": { "id": "173803d3-019e-4033-b1cf-d7205c7f774c", "challengeId": "123", "memberId": "456", "memberHandle": "tester", "roleId": "172803d3-019e-4033-b1cf-d7205c7f774a" } }`
- run command `npm run view-data resources 173803d3-019e-4033-b1cf-d7205c7f774c` to view the created data; you will see that the data was properly created:

```bash
info: Elasticsearch data:
info: {
  "id": "173803d3-019e-4033-b1cf-d7205c7f774c",
  "challengeId": "123",
  "memberId": "456",
  "memberHandle": "tester",
  "roleId": "172803d3-019e-4033-b1cf-d7205c7f774a"
}
info: Done!
```
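
The same document can also be fetched straight from Elasticsearch, using the index and type names from the configuration above:

```bash
# GET the indexed resource directly (index: resources, type: _doc)
curl http://localhost:9200/resources/_doc/173803d3-019e-4033-b1cf-d7205c7f774c
```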

- you may write invalid messages like:
  `{ "topic": "challenge.action.resource.create", "originator": "topcoder-resources-api", "timestamp": "2019-02-16T00:00:00", "mime-type": "application/json", "payload": { "challengeId": "123", "memberId": "456", "memberHandle": "tester", "roleId": "172803d3-019e-4033-b1cf-d7205c7f774a" } }`

  `{ "topic": "challenge.action.resource.create", "originator": "topcoder-resources-api", "timestamp": "abc", "mime-type": "application/json", "payload": { "id": "173803d3-019e-4033-b1cf-d7205c7f774c", "challengeId": "123", "memberId": "456", "memberHandle": "tester", "roleId": "172803d3-019e-4033-b1cf-d7205c7f774a" } }`

  `{ [ { abc`
- you will then see error messages in the app console

- start kafka-console-producer to write messages to the `challenge.action.resource.delete` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic challenge.action.resource.delete`

- write a message to delete the data:
  `{ "topic": "challenge.action.resource.delete", "originator": "topcoder-resources-api", "timestamp": "2019-02-16T00:00:00", "mime-type": "application/json", "payload": { "id": "173803d3-019e-4033-b1cf-d7205c7f774c", "challengeId": "123", "memberId": "456", "memberHandle": "tester", "roleId": "172803d3-019e-4033-b1cf-d7205c7f774a" } }`
- run command `npm run view-data resources 173803d3-019e-4033-b1cf-d7205c7f774c` to view the deleted data; you will see that the data was properly deleted:

```bash
info: The data is not found.
```
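
Checking directly against Elasticsearch gives the same result; in Elasticsearch 6.x, a GET for a missing document returns `"found": false` with HTTP 404:

```bash
# -i shows the 404 status line for the deleted document
curl -i http://localhost:9200/resources/_doc/173803d3-019e-4033-b1cf-d7205c7f774c
```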


- start kafka-console-producer to write messages to the `challenge.action.resource.role.create` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic challenge.action.resource.role.create`

- write a message to create data:
  `{ "topic": "challenge.action.resource.role.create", "originator": "topcoder-resources-api", "timestamp": "2019-02-16T00:00:00", "mime-type": "application/json", "payload": { "id": "171803d3-019e-4033-b1cf-d7215c7f123a", "name": "role1", "fullAccess": true, "isActive": true, "selfObtainable": false } }`
- run command `npm run view-data resource_roles 171803d3-019e-4033-b1cf-d7215c7f123a` to view the created data; you will see that the data was properly created:

```bash
info: Elasticsearch data:
info: {
  "id": "171803d3-019e-4033-b1cf-d7215c7f123a",
  "name": "role1",
  "fullAccess": true,
  "isActive": true,
  "selfObtainable": false
}
info: Done!
```

- start kafka-console-producer to write messages to the `challenge.action.resource.role.update` topic:
  `bin/kafka-console-producer.sh --broker-list localhost:9092 --topic challenge.action.resource.role.update`

- write a message to update the data:
  `{ "topic": "challenge.action.resource.role.update", "originator": "topcoder-resources-api", "timestamp": "2019-02-16T00:00:00", "mime-type": "application/json", "payload": { "id": "171803d3-019e-4033-b1cf-d7215c7f123a", "name": "role2", "fullAccess": false, "isActive": true, "selfObtainable": true } }`
- run command `npm run view-data resource_roles 171803d3-019e-4033-b1cf-d7215c7f123a` to view the updated data; you will see that the data was properly updated:

```bash
info: Elasticsearch data:
info: {
  "id": "171803d3-019e-4033-b1cf-d7215c7f123a",
  "name": "role2",
  "fullAccess": false,
  "isActive": true,
  "selfObtainable": true
}
info: Done!
```

- to test the health check API,
  run `export PORT=5000` (the default port is 3000 if not set),
  start the processor,
  then browse `http://localhost:5000/health` in a browser,
  and you will see the result `{"checksRun":1}`