Add support for M2M #15

Merged
merged 1 commit into from Jun 7, 2019

49 changes: 18 additions & 31 deletions README.md
@@ -35,7 +35,8 @@ The following parameters can be set in config files or in env variables:
- GROUPS_API_URL: TC groups API base URL
- COPILOT_RESOURCE_ROLE_IDS: copilot resource role ids allowed to upload attachment
- HEALTH_CHECK_TIMEOUT: health check timeout in milliseconds

- SCOPES: the configurable M2M token scopes; refer to `config/default.js` for more details
- M2M_AUDIT_HANDLE: the audit name used when performing create/update operations with an M2M token

Set the following environment variables so that the app can get a TC M2M token (use 'set' instead of 'export' on Windows); a sketch of fetching a token with these values follows the list:

@@ -44,46 +45,30 @@
- export AUTH0_URL=https://topcoder-dev.auth0.com/oauth/token
- export AUTH0_AUDIENCE=https://m2m.topcoder-dev.com/
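
The exact token flow is handled by the Topcoder auth library, but for reference these variables drive a standard Auth0 client-credentials request. A minimal sketch, assuming Node 18+ for the global `fetch` and that `AUTH0_CLIENT_ID`/`AUTH0_CLIENT_SECRET` are also set (their names are an assumption, not part of this diff):

```js
// Hedged illustration only: fetch an M2M token with the client-credentials grant.
// The app itself obtains tokens through the TC auth library.
async function getM2MToken () {
  const res = await fetch(process.env.AUTH0_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'client_credentials',
      client_id: process.env.AUTH0_CLIENT_ID, // assumed env var name
      client_secret: process.env.AUTH0_CLIENT_SECRET, // assumed env var name
      audience: process.env.AUTH0_AUDIENCE
    })
  })
  if (!res.ok) {
    throw new Error(`Token request failed: ${res.status}`)
  }
  const data = await res.json()
  return data.access_token // Bearer token carrying the configured scopes
}
```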

Also properly configure AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, ATTACHMENT_S3_BUCKET, IS_LOCAL_DB config parameters.

Also properly configure AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, ATTACHMENT_S3_BUCKET config parameters.

## DynamoDB Setup
We can use a DynamoDB setup on Docker for testing purposes. Just run `docker-compose up` in the `local` folder.
You can also use your own AWS DynamoDB service for testing purposes.

## DynamoDB Setup with Docker
We will use DynamoDB setup on Docker.
Note that you may need to modify regions in `local/init-dynamodb.sh` and `local/config`.
## AWS S3 Setup
Go to https://console.aws.amazon.com/ and log in. Choose S3 from the Services menu and click `Create bucket`. Follow the instructions to create an S3 bucket.

Just run `docker-compose up` in local folder
## Mock API
For Postman verification, please use the mock API under the `mock-api` folder. It provides mock endpoints to fetch challenge resources and groups.
Go to the `mock-api` folder and run `npm run start` to start the mock API listening on port 4000.
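
A quick smoke test of the mock API; the `/v5/groups` path is assumed from the default `GROUPS_API_URL` config value, so the actual mock routes may differ:

```js
// Hedged sketch: verify the mock API responds on port 4000 (Node 18+ global fetch).
async function checkMockApi () {
  const res = await fetch('http://localhost:4000/v5/groups')
  console.log('mock API status:', res.status)
}

checkMockApi().catch(console.error)
```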

If you have already installed aws-cli on your local machine, you can execute `./local/init-dynamodb.sh` to
create the tables. If not, you can still create the tables following `Create Table via awscli in Docker`.

## Create Table via awscli in Docker
## Create Tables
1. Make sure DynamoDB is running as per the instructions above.

2. Run the following commands
```
docker exec -ti dynamodb sh
```
Next
```
./init-dynamodb.sh
```

3. Now that the tables have been created, you can use the following commands to verify
```
aws dynamodb scan --table-name Challenge --endpoint-url http://localhost:7777
aws dynamodb scan --table-name ChallengeType --endpoint-url http://localhost:7777
aws dynamodb scan --table-name ChallengeSetting --endpoint-url http://localhost:7777
aws dynamodb scan --table-name AuditLog --endpoint-url http://localhost:7777
aws dynamodb scan --table-name Phase --endpoint-url http://localhost:7777
aws dynamodb scan --table-name TimelineTemplate --endpoint-url http://localhost:7777
aws dynamodb scan --table-name Attachment --endpoint-url http://localhost:7777
```
2. Make sure you have configured all config parameters. Refer to [Configuration](#configuration).
3. Run `npm run create-tables` to create tables.

## Scripts
1. Drop/delete tables: `npm run drop-tables`
2. Creating tables: `npm run create-tables`
3. Seed/Insert data to tables: `npm run seed-tables`
4. Initialize database in default environment: `npm run init-db`
5. View table data in default environment: `npm run view-data <ModelName>`, where `<ModelName>` can be `Challenge`, `ChallengeType`, `ChallengeSetting`, `AuditLog`, `Phase`, `TimelineTemplate` or `Attachment`

### Notes
- The seed data are located in `src/scripts/seed`
@@ -93,9 +78,11 @@
- Install dependencies `npm install`
- Run lint `npm run lint`
- Run lint fix `npm run lint:fix`
- Create tables `npm run create-tables`
- Clear and init db `npm run init-db`
- Start app `npm start`
- App is running at `http://localhost:3000`
- Start the mock API: go to the `mock-api` folder and run `npm start`; the mock API runs at `http://localhost:4000`

## Verification
Refer to the verification document `Verification.md`
13 changes: 1 addition & 12 deletions Verification.md
@@ -5,18 +5,7 @@
- run tests from top to bottom in order

## DynamoDB Verification
1. Open a new console and run the command `docker exec -ti dynamodb sh` to use `aws-cli`

2. On the console you opened in step 1, run the following commands to verify the data inserted into the database during the execution of the Postman tests
```
aws dynamodb scan --table-name Challenge --endpoint-url http://localhost:7777
aws dynamodb scan --table-name ChallengeType --endpoint-url http://localhost:7777
aws dynamodb scan --table-name ChallengeSetting --endpoint-url http://localhost:7777
aws dynamodb scan --table-name AuditLog --endpoint-url http://localhost:7777
aws dynamodb scan --table-name Phase --endpoint-url http://localhost:7777
aws dynamodb scan --table-name TimelineTemplate --endpoint-url http://localhost:7777
aws dynamodb scan --table-name Attachment --endpoint-url http://localhost:7777
```
Run `npm run view-data <ModelName>` to view table data; `<ModelName>` can be `Challenge`, `ChallengeType`, `ChallengeSetting`, `AuditLog`, `Phase`, `TimelineTemplate` or `Attachment`

## S3 Verification

13 changes: 11 additions & 2 deletions app-routes.js
@@ -45,7 +45,13 @@ module.exports = (app) => {

  actions.push((req, res, next) => {
    if (req.authUser.isMachine) {
      next(new errors.ForbiddenError('M2M is not supported.'))
      // M2M
      if (!req.authUser.scopes || !helper.checkIfExists(def.scopes, req.authUser.scopes)) {
        next(new errors.ForbiddenError('You are not allowed to perform this action!'))
      } else {
        req.authUser.handle = config.M2M_AUDIT_HANDLE
        next()
      }
    } else {
      req.authUser.userId = String(req.authUser.userId)
      // User roles authorization
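
The M2M branch above hinges on `helper.checkIfExists`, which is not shown in this diff. A plausible reading, assuming it performs an any-overlap check between the route's allowed scopes (`def.scopes`) and the scopes carried by the token, and that both are arrays:

```js
// Hypothetical sketch of helper.checkIfExists (the real helper is not part of this diff):
// returns true if at least one of the token's scopes appears in the route's allowed scopes.
function checkIfExists (allowedScopes, tokenScopes) {
  if (!Array.isArray(allowedScopes) || !Array.isArray(tokenScopes)) {
    return false
  }
  return tokenScopes.some(scope => allowedScopes.includes(scope))
}
```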
@@ -74,7 +80,10 @@ module.exports = (app) => {
    if (!req.authUser) {
      next()
    } else if (req.authUser.isMachine) {
      next(new errors.ForbiddenError('M2M is not supported.'))
      if (!def.scopes || !req.authUser.scopes || !helper.checkIfExists(def.scopes, req.authUser.scopes)) {
        req.authUser = undefined
      }
      next()
    } else {
      req.authUser.userId = String(req.authUser.userId)
      next()
48 changes: 46 additions & 2 deletions config/default.js
@@ -33,11 +33,55 @@ module.exports = {
  FILE_UPLOAD_SIZE_LIMIT: process.env.FILE_UPLOAD_SIZE_LIMIT
    ? Number(process.env.FILE_UPLOAD_SIZE_LIMIT) : 50 * 1024 * 1024, // 50M
  CHALLENGES_API_URL: process.env.CHALLENGES_API_URL || 'http://localhost:4000/v5/challenges',
  GROUPS_API_URL: process.env.GROUPS_API_URL || 'http://api.topcoder-dev.com/v5/groups',
  GROUPS_API_URL: process.env.GROUPS_API_URL || 'http://localhost:4000/v5/groups',
  // copilot resource role ids allowed to upload attachment
  COPILOT_RESOURCE_ROLE_IDS: process.env.COPILOT_RESOURCE_ROLE_IDS
    ? process.env.COPILOT_RESOURCE_ROLE_IDS.split(',') : ['10ba038e-48da-487b-96e8-8d3b99b6d18b'],

  // health check timeout in milliseconds
  HEALTH_CHECK_TIMEOUT: process.env.HEALTH_CHECK_TIMEOUT || 3000
  HEALTH_CHECK_TIMEOUT: process.env.HEALTH_CHECK_TIMEOUT || 3000,

  SCOPES: {
    CHALLENGES: {
      READ: process.env.SCOPE_CHALLENGES_READ || 'read:challenges',
      CREATE: process.env.SCOPE_CHALLENGES_CREATE || 'create:challenges',
      UPDATE: process.env.SCOPE_CHALLENGES_UPDATE || 'update:challenges',
      ALL: process.env.SCOPE_CHALLENGES_ALL || 'all:challenges'
    },
    CHALLENGE_TYPES: {
      CREATE: process.env.SCOPE_CHALLENGE_TYPES_CREATE || 'create:challenge_types',
      UPDATE: process.env.SCOPE_CHALLENGE_TYPES_UPDATE || 'update:challenge_types',
      ALL: process.env.SCOPE_CHALLENGE_TYPES_ALL || 'all:challenge_types'
    },
    CHALLENGE_SETTINGS: {
      READ: process.env.SCOPE_CHALLENGE_SETTINGS_READ || 'read:challenge_settings',
      CREATE: process.env.SCOPE_CHALLENGE_SETTINGS_CREATE || 'create:challenge_settings',
      UPDATE: process.env.SCOPE_CHALLENGE_SETTINGS_UPDATE || 'update:challenge_settings',
      ALL: process.env.SCOPE_CHALLENGE_SETTINGS_ALL || 'all:challenge_settings'
    },
    CHALLENGE_AUDIT_LOGS: {
      READ: process.env.SCOPE_CHALLENGE_AUDIT_LOGS_READ || 'read:challenge_audit_logs'
    },
    CHALLENGE_PHASES: {
      READ: process.env.SCOPE_CHALLENGE_PHASES_READ || 'read:challenge_phases',
      CREATE: process.env.SCOPE_CHALLENGE_PHASES_CREATE || 'create:challenge_phases',
      DELETE: process.env.SCOPE_CHALLENGE_PHASES_DELETE || 'delete:challenge_phases',
      UPDATE: process.env.SCOPE_CHALLENGE_PHASES_UPDATE || 'update:challenge_phases',
      ALL: process.env.SCOPE_CHALLENGE_PHASES_ALL || 'all:challenge_phases'
    },
    TIMELINE_TEMPLATES: {
      READ: process.env.SCOPE_TIMELINE_TEMPLATES_READ || 'read:timeline_templates',
      CREATE: process.env.SCOPE_TIMELINE_TEMPLATES_CREATE || 'create:timeline_templates',
      DELETE: process.env.SCOPE_TIMELINE_TEMPLATES_DELETE || 'delete:timeline_templates',
      UPDATE: process.env.SCOPE_TIMELINE_TEMPLATES_UPDATE || 'update:timeline_templates',
      ALL: process.env.SCOPE_TIMELINE_TEMPLATES_ALL || 'all:timeline_templates'
    },
    CHALLENGE_ATTACHMENTS: {
      READ: process.env.SCOPE_CHALLENGE_ATTACHMENTS_READ || 'read:challenge_attachments',
      CREATE: process.env.SCOPE_CHALLENGE_ATTACHMENTS_CREATE || 'create:challenge_attachments',
      ALL: process.env.SCOPE_CHALLENGE_ATTACHMENTS_ALL || 'all:challenge_attachments'
    }
  },

  M2M_AUDIT_HANDLE: process.env.M2M_AUDIT_HANDLE || 'TopcoderService'
}
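
These scope constants are presumably referenced from the route definitions whose `def.scopes` field is checked in `app-routes.js` above. A hedged sketch of what such a route entry could look like; the routes file itself is not part of this diff, so the path, controller, and method names here are purely illustrative:

```js
// Hypothetical route definition wiring SCOPES into def.scopes (illustrative only).
const config = require('config')

module.exports = {
  '/challenges': {
    get: {
      controller: 'ChallengeController', // assumed name
      method: 'searchChallenges', // assumed name
      scopes: [config.SCOPES.CHALLENGES.READ, config.SCOPES.CHALLENGES.ALL]
    }
  }
}
```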
4 changes: 4 additions & 0 deletions docs/swagger.yaml
@@ -21,12 +21,16 @@ info:

## Access levels

- M2M tokens are supported; all non-public endpoints can be accessed using an M2M token with the proper scopes.

- Only admins and copilots can create/update an entity.

- Copilots can **only** update entities they have created. (eg. copilot A
cannot update a challenge created by copilot B)

- Non-admin users can access challenges with groups only if they belong to any of the groups

- A request made with a valid M2M token (having a read challenge scope) is treated as an admin user when listing challenges or retrieving a challenge by id
host: api.topcoder.com
basePath: /v5
schemes:
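
As a usage illustration of the M2M access level described in the excerpt above, a hedged example of calling a protected endpoint with a Bearer M2M token; the host and base path are taken from this swagger file, and the token is assumed to carry a scope such as `read:challenges`:

```js
// Hedged example: list challenges using an M2M token (Node 18+ global fetch).
async function listChallenges (m2mToken) {
  const res = await fetch('https://api.topcoder.com/v5/challenges', {
    headers: { Authorization: `Bearer ${m2mToken}` }
  })
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`)
  }
  return res.json()
}
```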