Analysis of Last 200 Code Scorecards Filled / Importance of Code Reviews #82
Comments
I believe your question already contains one of the answers: to provide adequate feedback on these questions, reviewers have to be really into the nuances of the technology/problem, and potentially spend more time on the review. Apparently, in many cases reviewers prefer to just skip these sections and focus on verifying that the solution works, and on catching user-facing issues. IMHO, the introduction of feedback on reviews, reviewer ranking, etc. (#52, and some other related tickets) will help to mitigate this problem; but it is surely not coming any time soon. Regarding security/performance, I believe in the majority of our day-to-day projects it is quite irrelevant. Unless one makes a huge (probably intentional) mistake that opens a big security hole or creates a big performance issue, security and performance generally stay within reasonable limits, and enhancing them further demands special effort (a dedicated separate challenge or challenges), if relevant at all. I might be wrong, but in my understanding the main reason to include 1.2.4 and 1.2.5 in the scorecard lies in the domain of marketing: so that one can show the scorecard to a customer and say, yeah, security and performance are always on the list of focus areas for Topcoder projects.
Is it true that good and bad reviewers do the same number of reviews?
I don't know how 'bad' reviewers are currently identified. I know copilots report them to support, and at least one reviewer was suspended for consistently doing poor reviews, but that was a one-off; I know of a few others who've been reported, but they continue to be on the review board. And yes, to answer your question: since review allocation is manual, I believe it's done in a manner that keeps the distribution fair by number of reviews. I know that a reviewer certification program is in progress, but I don't know when that will go live. My intent in raising this thread is more about reviewer education and the quality of code reviews as part of challenge reviews.
I think opening new reviewer positions would also help. For example, I tried applying a year ago, but entrance was closed. There are new competitors every day, and some of them turn out to be very good. A system of "trial reviewers" every few months would encourage new people to raise the quality of reviews, which, in turn, would motivate existing reviewers to be more careful.
@talesforce Another problem is that there is no feedback. Reviewers don't know whether they did a good or a bad review. I wrote about this problem two years ago, and I keep writing about the same things. I remember doing that in the CSFV contests: the copilot asked me to provide feedback about the reviewers, and it looked like this:
I tried to do the same in the drones series, but all my emails were ignored 😉 @birdofpreyru @mishacucicea
Over the last few weeks, since the introduction of the new scorecards, I have sampled data from the last 200 scorecards filled by multiple reviewers across different challenges. I noticed the following statistics:
Code Quality
1.2.1 Does the submission follow standard coding best practices?
1.2.2 Does the submission include an appropriate amount of comments?
These sections are blank in 80% of the scorecards. Having been a reviewer for the last 5+ years, I've always emphasised code reviews, and I try to do them thoroughly for the challenges I review. Among other reviewers, I know wcheung and vvvpig do so too. But I've come across several reviewers who never even look at the code, which is appalling, to say the least.
Code quality is a key metric for building a high-quality application, and it seems to be constantly ignored by several reviewers. I see this especially on iOS challenges, where non-iOS developers get picked for reviews (just to keep review allocation evenly distributed) and all they do is run the submissions on simulators, without looking at the nuances of the code/architecture.
Security & Performance
1.2.3 Has obsolete or unnecessary code been cleaned up?
1.2.4 Has reasonable consideration been given to security?
1.2.5 Has reasonable consideration been given to performance?
The scores on these questions have NOT been used for >90% of challenges. I'm not really sure what the tipping point was for including these in the scorecard, but the current usage stats are certainly not encouraging.
I'd be interested in hearing thoughts from the current CAB on this - @lsentkiewicz @ThomasKranitsas @birdofpreyru ?
Just to be clear, I'm not pointing fingers or looking to bash individuals/groups; I'm looking for constructive input that would help improve reviews and bring back the focus on code quality.
cc @dmessing @hokienick @rootelement