Configuration for the application is at `config/default.js` and `config/production.js`. The following parameters can be set in config files or in env variables:
- `LOG_LEVEL`: the log level
- `PORT`: the server port
- `AUTH_SECRET`: TC Authentication secret
- `VALID_ISSUERS`: valid issuers for TC authentication
- `PAGE_SIZE`: the default pagination limit
- `MAX_PAGE_SIZE`: the maximum pagination size
- `API_VERSION`: the API version
- `DB_NAME`: the database name
- `DB_USERNAME`: the database username
- `DB_PASSWORD`: the database password
- `DB_HOST`: the database host
- `DB_PORT`: the database port
- `ES_HOST`: Elasticsearch host
- `ES_REFRESH`: whether Elasticsearch should refresh. Default is `true`. Valid values are `true`, `wait_for` and `false`
- `ELASTICCLOUD_ID`: the Elastic Cloud id, if your Elasticsearch instance is hosted on Elastic Cloud. DO NOT provide a value for `ES_HOST` if you are using this
- `ELASTICCLOUD_USERNAME`: the Elastic Cloud username for basic authentication. Provide this only if your Elasticsearch instance is hosted on Elastic Cloud
- `ELASTICCLOUD_PASSWORD`: the Elastic Cloud password for basic authentication. Provide this only if your Elasticsearch instance is hosted on Elastic Cloud
- `ES.DOCUMENTS`: Elasticsearch index, type and id mapping for resources
- `SKILL_INDEX`: the Elasticsearch index for skills. Default is `skill`
- `TAXONOMY_INDEX`: the Elasticsearch index for taxonomies. Default is `taxonomy`
- `MAX_BATCH_SIZE`: restricts the number of records held in memory during bulk insert (used by the db-to-es migration script)
- `MAX_BULK_SIZE`: the maximum bulk indexing size. Default is `100` (used by the db-to-es migration script)

- `AUTH0_URL`: Auth0 URL, used to get TC M2M token
- `AUTH0_AUDIENCE`: Auth0 audience, used to get TC M2M token
- `TOKEN_CACHE_TIME`: Auth0 token cache time, used to get TC M2M token
- `AUTH0_CLIENT_ID`: Auth0 client id, used to get TC M2M token
- `AUTH0_CLIENT_SECRET`: Auth0 client secret, used to get TC M2M token
- `AUTH0_PROXY_SERVER_URL`: proxy Auth0 URL, used to get TC M2M token

- `BUSAPI_URL`: Topcoder Bus API URL
- `KAFKA_ERROR_TOPIC`: the error topic at which the Bus API will publish any errors
- `KAFKA_MESSAGE_ORIGINATOR`: the originator value for the Kafka messages
- `SKILLS_ERROR_TOPIC`: Kafka topic for reporting operation errors
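
For example, the database and Elasticsearch parameters can be supplied as environment variables. A minimal sketch for local development, with illustrative placeholder values only:

```bash
# Illustrative placeholder values - adjust for your environment
export LOG_LEVEL=debug
export PORT=3000
export DB_NAME=skills_db
export DB_USERNAME=postgres
export DB_PASSWORD=postgres
export DB_HOST=localhost
export DB_PORT=5432
export ES_HOST=localhost:9200
```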
**NOTE** AUTH0 related configuration is normally shared on the challenge forum.

## DB and Elasticsearch In Docker

- Navigate to the `docker-pgsql-es` folder. Rename `sample.env` to `.env` and change any values if required.
- Run `docker-compose up -d` to get Docker instances of pgsql and Elasticsearch to use with the API (see the command sketch below).
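
In shell form, assuming Docker and docker-compose are installed, those two steps are:

```bash
cd docker-pgsql-es
mv sample.env .env    # then edit .env if any values need to change
docker-compose up -d  # start pgsql and elasticsearch in the background
```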

**NOTE** To completely restart the services, run `docker-compose down --volumes` and then `docker-compose up`. The `--volumes` argument is passed to the `docker-compose down` command to remove the volume that stores the DB data; without it, the DB data persists after the services are taken down.

## Local deployment

Setup your Postgresql DB and Elasticsearch instance and ensure that they are up and running.

- Run the migrations - `npm run migrations up`. This will create the tables.
- Then run `npm run insert-data` to insert mock data into the database.
- Run `npm run migrate-db-to-es` to sync data with ES.
- Start the server with `npm run start:dev` (the full sequence is sketched below).
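
Put together, the local setup steps are:

```bash
npm run migrations up     # create the tables
npm run insert-data       # load mock data into the database
npm run migrate-db-to-es  # sync the data with Elasticsearch
npm run start:dev         # start the server in development mode
```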
## Migrations

Migrations are located under the `./scripts/db/` folder. Run `npm run migrations up` and `npm run migrations down` to apply the migrations or roll back earlier ones.
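
For example, to roll back the most recently applied migrations:

```bash
npm run migrations down  # revert migrations applied from ./scripts/db/
```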
## Local Deployment with Docker

Setup your Postgresql DB and Elasticsearch instance and ensure that they are up and running.

- Configure the AUTH0 related parameters via ENV variables. Note that you normally don't need to change the other configuration.
- Create the database using `npm run create-db`.
- Run the migrations - `npm run migrations up`. This will create the tables.
- Then run `npm run insert-data` to insert mock data into the database (a sketch of these steps follows).
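
A sketch of those steps from a shell; the AUTH0 values below are placeholders (the real ones are normally shared on the challenge forum):

```bash
export AUTH0_CLIENT_ID=<client-id>          # placeholder
export AUTH0_CLIENT_SECRET=<client-secret>  # placeholder
npm run create-db      # create the database
npm run migrations up  # create the tables
npm run insert-data    # load mock data
```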

| Command | Description |
| --- | --- |
|`npm run delete-data`| Delete the data from the database |
|`npm run migrations up`| Run up migration |
|`npm run migrations down`| Run down migration |
|`npm run create-index`| Create Elasticsearch indexes. Use the `-- --force` flag to skip confirmation |
|`npm run delete-index`| Delete Elasticsearch indexes. Use the `-- --force` flag to skip confirmation |
|`npm run generate:doc:permissions`| Generate [permissions.html](docs/permissions.html) |
|`npm run generate:doc:permissions:dev`| Generate [permissions.html](docs/permissions.html) on any changes (useful during development) |
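
For example, to recreate the Elasticsearch indexes without the confirmation prompt:

```bash
npm run create-index -- --force  # --force skips the confirmation prompt
```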