Reviewer rating #52
There were many discussions on Slack about bad reviews that led to the win of an average, or even worse, a bad submission, and in most cases to re-appeals.
Many members asked for a rating system for reviewers.
Having a rating system for reviewers might not directly affect the choice of reviewers for a challenge, but it is a good incentive for reviewers to perform better reviews. Potentially, Topcoder could give some bonus $$ to reliable reviewers.
The rating of a reviewer could be a factor in the reviewer's payment formula: a good rating would increase the payment, and a bad rating would decrease the final payment.
What could be a reason to give a negative rating to a reviewer, and what could we gain with a good rating system?
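A minimal sketch of the rating-based payment idea above, assuming a 1-5 rating scale; the multiplier range and the function name are illustrative choices, not an actual Topcoder formula:

```python
def adjusted_payment(base_payment: float, rating: float) -> float:
    """Scale a reviewer's payment by their rating.

    Assumes a 1-5 rating scale; the 0.7x-1.2x multiplier range is an
    illustrative assumption, not Topcoder policy.
    """
    if not 1.0 <= rating <= 5.0:
        raise ValueError("rating must be between 1 and 5")
    # Map rating 1..5 linearly onto a 0.7..1.2 payment multiplier.
    multiplier = 0.7 + (rating - 1.0) * (1.2 - 0.7) / 4.0
    return round(base_payment * multiplier, 2)

# Example: a $100 review payment at different ratings.
for r in (1.0, 3.0, 4.5, 5.0):
    print(r, adjusted_payment(100.0, r))  # 70.0, 95.0, 113.75, 120.0
```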
Comments
We had a reviewer rating, and TC staff decided to remove it because it didn't work well.
@lsentkiewicz My idea was to provide feedback (a rating) only when there is a reason to give a negative rating: something like giving penalties to reviewers rather than requiring PMs/copilots to review reviewers.
@lsentkiewicz
If the reviewer gets "penalty points" he will try to appeal them, and it will create another copilot vs. reviewer discussion. TC staff tried to implement a system where payments were related to review quality; it was probably 6 or 8 years ago. They dropped it and decided to create a simple system, btw.
@lsentkiewicz
What about a system where such problematic cases are collected in a queue, and a selected jury of the most experienced copilots/managers/members/reviewers reviews them periodically? Their decisions wouldn't affect the outcomes of the contests involved (that would create a real mess, as the money has already been paid and the winning code has probably already moved into dev/production), but they would affect the reviewer's rating, helping to prevent similar issues in the future. This removes the overhead from specific contests (the copilot would just file the case into the system), but would (i) rank reviewers by reliability; (ii) create a reference for reviewers; (iii) reduce the appeal hell, since a group decision by reputable people is one you have to accept. What do you think?
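A rough sketch of that queue idea, with hypothetical ReviewCase/JuryQueue structures invented here for illustration; jury verdicts only adjust a reviewer's penalty tally and never reopen the contest itself:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ReviewCase:
    """A disputed review filed by a copilot (hypothetical structure)."""
    challenge_id: str
    reviewer: str
    complaint: str

@dataclass
class JuryQueue:
    """Cases wait here until the jury's next periodic session."""
    cases: deque = field(default_factory=deque)
    penalties: dict = field(default_factory=dict)

    def file_case(self, case: ReviewCase) -> None:
        # The copilot's overhead ends here; no per-contest dispute.
        self.cases.append(case)

    def run_session(self, verdicts: dict) -> None:
        # verdicts maps (challenge_id, reviewer) -> penalty points decided
        # by the jury; they feed the reviewer rating, not contest results.
        while self.cases:
            case = self.cases.popleft()
            points = verdicts.get((case.challenge_id, case.reviewer), 0)
            self.penalties[case.reviewer] = self.penalties.get(case.reviewer, 0) + points

# Usage with made-up IDs: file a case, then apply one jury session.
queue = JuryQueue()
queue.file_case(ReviewCase("30051234", "reviewerA", "ignored failing tests"))
queue.run_session({("30051234", "reviewerA"): 2})
print(queue.penalties)  # {'reviewerA': 2}
```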
They won't work for free, and TC admin doesn't want to pay anything extra.
Well, you get the CAB and hot discussions in Slack working for free; why wouldn't it work for such a jury? :)
Reviewing is a time-consuming task :)
Speaking hypothetically, what about something similar to the Lyft/Uber system? The way they operate, riders and drivers rate each other 1-5 stars to ensure good quality. If a driver gets less than 3.5 stars or so, they are banned from driving on the app for a period of time. If I rate a driver 5 stars, writing further comments is optional; if I rate lower than 5 stars, I am required to give a reason why. This would make it easier for the Topcoder team, as we wouldn't have to monitor every situation and every reviewer; the focus would instead be on the reviewers who have fallen below the mark, investigating why that is and whether it is fair.
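For comparison, that Lyft/Uber-style rule could look roughly like this; the 3.5-star cutoff comes from the comment above, while the function names and suspension mechanics are assumptions:

```python
from statistics import mean

BAN_THRESHOLD = 3.5  # cutoff from the comment above; real apps may differ

def submit_rating(history: list, stars: int, comment: str = "") -> None:
    """Record a 1-5 star rating; below 5 stars a reason is mandatory."""
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    if stars < 5 and not comment:
        raise ValueError("a reason is required for ratings below 5 stars")
    history.append(stars)

def is_suspended(history: list) -> bool:
    """Flag a reviewer (or driver) whose average falls below the cutoff."""
    return bool(history) and mean(history) < BAN_THRESHOLD

ratings = []
submit_rating(ratings, 5)
submit_rating(ratings, 2, "scorecard ignored half the requirements")
print(is_suspended(ratings))  # mean is 3.5 -> False; one more low rating flips it
```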
@hokienick that's exactly what I had in mind 😄
@hokienick We had almost the same system. Copilots didn't have time to analyze the scorecards, and in 90% of all reviews they assigned a 'Good' rating; the other 10% were re-appeals or very bad reviews.
I think it should be the submitter who is given a chance to give feedback, and the copilot can look at feedback that falls below a certain threshold. Ultimately, only if competitors think the review is fair and good will people be motivated to take part in the challenges. So if it's time vs. attracting quality competitors, I would choose the latter.
@lijulat then every submitter who doesn't place 1st/2nd (and get paid) will give negative feedback :)
@ThomasKranitsas
Putting @dmessing's comment from Slack here:
@hokienick and @dmessing In fact, I wonder what counts as 'did not produce worthwhile results'.
CAB Meeting
Cardillo leads Reviewers
Any update, or a schedule for when a new rating system will be introduced?