
Introducing Code Quality Analysis Engine - Of Comparing Apples To Apples

Two code samples. Both pass the specified test cases.

Question: How does your current assessment engine determine the better sample of the two?

Answer: It can’t.

Until now.

That hit hard, didn't it?

Ask any engineering manager about the factors that contribute to "technical debt", and "shitty code" pops out as the first, almost knee-jerk reaction. Now, shitty code has a lot of ingredients, and one of them is whether the code adheres to the best-practice conventions defined by the creators or the community of the language. Following these conventions is important because it preserves sanity when a lot of people are working together on a project, and lets them contribute constructively without a lot of hair-pulling, trauma, sleepless nights, and unintended murders. The danger underlying shitty code is that it is functionally correct; but as it piles up, new developers who want to improve a feature or add an extension find it a nerve-wracking exercise, because nothing in it is standardized.

Coming back to the question stated above: are you measuring this right now?

Because if you aren't, chances are you will hire somebody whose code passes all the intended test cases but who writes what you have been reading about all along: shit code. Popular assessment engines are guilty (and guilty of more, but that is for another day) of scoring a candidate's performance on mere code functionality. They can't tell whether Coder A wrote better code than Coder B, and that's where your quality-per-hire goes for a toss.

This won’t plague you anymore.

Starting today, all code submissions on DoSelect will be graded for their quality in addition to functionality. You will see a quality score for each code solution, and an average quality score on every test report. This gives you more actionable data about the candidate that you can use to make an informed hiring decision, and your engineering teams will love you for it. To the Moon. And back.

For instance, even though both Kevin and Ms. Granger scored 200 on the assessment, Ms. Granger's solution came out on top. How did DoSelect conjure the machination to topple the minion?

How does it work?

Note: The above graphic is just an illustration used to compare two code samples. The actual product may vary.

Whenever a solution is submitted in a programming language, the code quality analysis engine detects violations of that language's conventions. Depending on the number and severity of these violations, it assigns a score between 0 and 5, where 5 means no violations (good code) and 0 means too many violations (bad code). This update is now available in all new DoSelect test reports by default. No need to push any buttons or call our helpline! Code quality analysis is currently available for these languages: Python, Go, PHP, Bash, Haskell, and JavaScript. We are working on adding more languages soon.
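To make the idea concrete, here is a minimal sketch of a violation-based scorer. This is not DoSelect's actual formula (which isn't published); the severity weights, the penalty cap, and the `quality_score` helper are all illustrative assumptions. The core idea matches the description above: weight violations by severity, then map the total penalty onto a 0-to-5 scale.

```python
# Hypothetical sketch only: DoSelect has not published its scoring formula.
# Idea: a linter reports violations; we weight them by severity and map
# the accumulated penalty onto a 0-5 scale (5 = clean, 0 = too many).

SEVERITY_WEIGHTS = {"convention": 1, "warning": 2, "error": 4}  # assumed weights

def quality_score(violations, max_penalty=20):
    """Map a list of (severity, count) violations to a score in [0, 5].

    5 means no violations; 0 means the weighted penalty hit max_penalty.
    """
    penalty = sum(SEVERITY_WEIGHTS.get(sev, 1) * n for sev, n in violations)
    capped = min(penalty, max_penalty)
    return round(5.0 * (1 - capped / max_penalty), 1)

print(quality_score([]))                                   # clean code -> 5.0
print(quality_score([("convention", 3), ("warning", 2)]))  # a few issues
print(quality_score([("error", 10)]))                      # severe -> 0.0
```

In practice the violation list would come from a per-language linter (pylint for Python, golint for Go, and so on); the linear penalty mapping here is just one plausible design.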

It’s a brand new day, ladies and gentlemen :)

"DoSelect has been an absolute game changer for us," says Kevin Freitas, Global Head - HR, InMobi. We are out of private beta and available for progressive HR and technical teams to give us a spin. Drop your coordinates here –

Update on 25.11.2016

It's been about a week, and we've added code quality analyzer support for C, C++, Java 7, and Java 8. Shouts of joy, please!

This is now enabled for companies as well as developers. Login and see for yourself.
