The Blog

Stories from humans who build DoSelect.


8 Reasons to Dump Your Current Tech Assessment Vendors, Now


Did you notice we referred to them as “vendors” and not partners?

We have had numerous discussions with progressive HR and technical teams about the perils of working with vendors in the current assessment landscape. If technology pundits and experts are to be believed, non-linear growth is the proverbial pot of gold at the end of the rainbow for the services landscape, given that the current linear growth model amounts to shrinking margins and throwing people at the problem.

What does that imply for hiring teams?

You can’t keep relying on your current technical assessment vendors. The rapid, organized shift in skills required to ride the new growth wave will see you aligning yourself with partners. Partners who understand that Machine Learning, Artificial Intelligence and Mobility will drive much of your automation agenda. Partners who crunch data from external sources and bring you intelligence about a candidate’s record on GitHub and Stack Overflow. Partners who adapt quickly to the changing technology landscape and can help you assess candidates on new-age frameworks.

Vendors help you shortlist based on assessment scores on legacy technologies. Partners help you derive insights from those scores. For a vendor, an assessment is the end. For a partner, assessments are a means to an end.

Which begs the question — What are the ends?

a) A guaranteed increase in quality of technical hires.

b) Insights into a candidate’s prowess around core language concepts.

c) Insights around code quality. Ask any tech head whether she can accept a candidate whose code violates a language’s conventions, and she will shout “Hell, no”. A partner helps you evaluate candidates on code quality.

DoSelect’s ethos, even before the first line of its code was written, was to shape itself as a true partner for hiring teams.

Why?

Because the founding team had worked with all the popular technical assessment vendors, and found them wanting in almost every aspect that matters to technical teams when they induct new members. Granted, the aggressive sales and marketing machinery of these vendors has served them well, helping them build a decent client roster, but it was imperative for us to build a tool that truly helps hiring teams with better insights into the candidates they are considering.

In almost every discussion we have with hiring teams, one question invariably pops up — “How are you different from HackerRank/ HackerEarth/ Mettl/ Aspiring Minds/ CoCubes/ Codility/ Testdome/ Expert Rating/ (Insert your current vendor here)?”

The answers are many. Let us delve into them one by one.

a) Code Quality Analyzer — DoSelect’s reports point out the exact deficiencies in a candidate’s code. Whenever the code violates the set conventions of a particular language, our engine detects and flags it. The danger of not analyzing code quality is that faulty code might pass test cases while still ignoring convention, and unconventional code accrues “technical debt”, which tech teams abhor: code that does not follow convention cannot be scaled for new enhancements.

The image underneath is a grab of our report which highlights the code quality score.
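To make the idea concrete, here is a minimal sketch of convention checking in Python (chosen as an example language; a real analyzer, DoSelect's included, would cover far more rules than naming). It walks the syntax tree of a submission and flags function names that break PEP 8 snake_case:

```python
# Minimal convention checker: flags function names that are not
# snake_case. Illustrative only; real code-quality engines check
# many more conventions than naming.
import ast
import re

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def flag_naming_violations(source: str) -> list[str]:
    """Return a list of PEP 8 naming violations found in the source."""
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            violations.append(
                f"line {node.lineno}: function '{node.name}' is not snake_case"
            )
    return violations

sample = "def ComputeTotal(items):\n    return sum(items)\n"
print(flag_naming_violations(sample))
```

Each flagged line is exactly the kind of deficiency a quality report can surface alongside the raw test-case score.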

b) Language-Based Assessments — DoSelect’s suite of assessments enables you to assess candidates on the core concepts of a particular language. Vendors mostly assess these through algorithms and data structures, whether or not the role is big on algorithms. Simply put, a candidate you are considering for a Java role should be assessed on concepts like Inheritance, Polymorphism, Method Overloading, Strings, Classes and Methods, not mere algorithmic knowledge.

Below is an excerpt from the report that details know-how in core language concepts.
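As an illustration of how a concept-level check can work (a sketch in Python rather than Java, with a hypothetical Shape/Circle task that is not an actual DoSelect item), a grader can run the submission and verify inheritance and method overriding directly:

```python
# Concept-level grading sketch: run the candidate's code, then check
# that Circle inherits from Shape AND overrides area() rather than
# merely inheriting it. Task and class names are hypothetical.

def grade_inheritance(source: str) -> dict:
    """Execute candidate code and verify inheritance plus overriding."""
    ns: dict = {}
    exec(source, ns)
    shape, circle = ns.get("Shape"), ns.get("Circle")
    return {
        "inherits": isinstance(circle, type) and issubclass(circle, shape),
        # 'area' must appear in Circle's own dict to count as overriding
        "overrides": isinstance(circle, type) and "area" in vars(circle),
    }

candidate = """
class Shape:
    def area(self):
        return 0

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2
"""
print(grade_inheritance(candidate))
```

A submission that passes input-output tests but never overrides the method would score on functionality yet fail the concept check.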

c) “Crunch” — DoSelect’s “Crunch” pulls data from external sources like GitHub and Stack Overflow to track the technologies a particular candidate has contributed to. Wouldn’t it be immensely helpful to know that a candidate being considered for a Python role has also been contributing to Android and PHP channels on these forums?

The image underneath showcases our report outlining a candidate’s submissions to GitHub and Stack Overflow.
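A hedged sketch of this kind of signal gathering: GitHub’s public REST endpoint `/users/{login}/repos` really exists, but the summarizer below is our own illustration, not Crunch’s implementation. It counts the primary language of each of a candidate’s repositories:

```python
# "Crunch"-style sketch: fetch a candidate's public repositories from
# GitHub and summarize which languages they actually contribute in.
# The aggregation logic is illustrative, not DoSelect's implementation.
import json
import urllib.request
from collections import Counter

def fetch_repos(login: str) -> list[dict]:
    """Fetch public repos for a GitHub user (network call)."""
    url = f"https://api.github.com/users/{login}/repos"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def language_footprint(repos: list[dict]) -> Counter:
    """Count the primary language across a candidate's repositories."""
    return Counter(r["language"] for r in repos if r.get("language"))

# Offline illustration with canned data:
repos = [{"language": "Python"}, {"language": "Python"}, {"language": "PHP"}]
print(language_footprint(repos).most_common())  # [('Python', 2), ('PHP', 1)]
```

A Python-role candidate whose footprint also shows PHP and Kotlin activity is exactly the insight the paragraph above describes.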

d) Measurement of API Skills — For developers working on home-grown products, working with external APIs is one of the most crucial parts of a typical day at work. Current assessment vendors can’t assess developers on API skills because their core isn’t engineered to talk to other networks (read: the internet). DoSelect’s API assessments have candidates work with open APIs (Twitter, NASA etc.) and solve challenges by diving into them.

Following is one of the sample problems in the DoSelect API library.
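In the same spirit, here is a hypothetical API-style challenge (not an actual problem from the DoSelect library): given the JSON returned by an open API such as NASA’s APOD endpoint, extract the fields the prompt asks for. The payload shape mirrors APOD’s documented response:

```python
# Hypothetical API challenge: parse an APOD-style JSON payload and
# return a "title (date)" summary. Payload fields mirror NASA's
# documented APOD response; the task itself is our illustration.
import json

def summarize_apod(payload: str) -> str:
    """Return 'title (date)' from an APOD-style JSON document."""
    doc = json.loads(payload)
    return f"{doc['title']} ({doc['date']})"

sample = '{"date": "2017-06-01", "title": "Approaching Jupiter", "url": "..."}'
print(summarize_apod(sample))  # Approaching Jupiter (2017-06-01)
```

In a live assessment the candidate would fetch the payload from the API herself; parsing and shaping the response is the skill being measured.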

e) Automated Testing Framework® (ATF) — Current assessment engines are opaque about how they allot scores in a particular test. DoSelect’s Automated Testing Framework® removes this black box by marking candidates not merely on input-output results but also on the inner workings of the code. Simply put, whether a Java submission correctly implements Classes would be one parameter on which ATF evaluates the final score.
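The idea can be sketched as a score that blends test-case results with a structural check of the source. The 70/30 weighting and the single-class rule below are our assumptions for illustration, not ATF’s actual rubric:

```python
# Sketch of scoring on "inner workings": blend the input-output pass
# rate with a structural check of the submission's source. Weights
# and rules are illustrative assumptions, not ATF's actual rubric.
import ast

def structural_score(source: str, required_class: str) -> float:
    """1.0 if the submission defines the required class, else 0.0."""
    tree = ast.parse(source)
    found = any(isinstance(n, ast.ClassDef) and n.name == required_class
                for n in ast.walk(tree))
    return 1.0 if found else 0.0

def final_score(io_pass_rate: float, source: str, required_class: str) -> float:
    # Assumed split: 70% weight on test cases, 30% on structure.
    return 0.7 * io_pass_rate + 0.3 * structural_score(source, required_class)

code = "class Stack:\n    def __init__(self):\n        self.items = []\n"
print(round(final_score(1.0, code, "Stack"), 2))  # 1.0
```

A submission that hard-codes the expected outputs would ace the I/O component yet lose the structural component, which is precisely the opacity this approach removes.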

f) Auto-Droid — Android assessments that evaluate a candidate not merely on build but on functionality as well. Current engines evaluate candidates on build alone (“build” meaning the app is judged on a single parameter: whether the code compiles correctly), which tells you nothing about whether the app actually functions correctly.

To evaluate functionality, the hiring team has to download each application and check the test cases by hand. Imagine your pain if you have to evaluate 150 candidates: you are looking at downloading 150 applications, a draining time per hire, and a real chance of missing a great candidate if she is placed 75th in the pool of 150. By the time you evaluate her code, she has been snapped up by your rivals.

DoSelect’s Auto-Droid evaluates candidates on both build and functionality — the report saves you crucial hours by pointing out whether the application does its job well or not.

The following image is a grab from our Auto-Droid report. The GIF on the right showcases the functionality of the application, in an automated format.

g) Auto UI — An automated UI assessment framework that tells recruiters whether the candidate’s application is functionally accurate on the UI front. For instance, if the candidate builds a temperature converter, the DoSelect report can detail whether the converter actually works.

The report underneath details the UI of the application, generated from the candidate’s code.
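The temperature-converter example can be sketched as an automated functional check. Auto UI would presumably drive the rendered interface itself; this Python sketch only illustrates the “does the converter actually convert?” half, against a hypothetical candidate function:

```python
# Functional-accuracy sketch for the temperature-converter example:
# run the candidate's conversion logic against known cases, the way
# an automated UI check would, instead of downloading and clicking
# through each app by hand. Candidate function is hypothetical.

def candidate_c_to_f(celsius: float) -> float:
    """Stand-in for conversion logic extracted from a candidate's app."""
    return celsius * 9 / 5 + 32

def check_converter(convert) -> bool:
    """True if the converter handles the canonical cases correctly."""
    cases = [(0, 32.0), (100, 212.0), (-40, -40.0)]
    return all(abs(convert(c) - f) < 1e-6 for c, f in cases)

print(check_converter(candidate_c_to_f))  # True
```

Run across 150 submissions, a check like this replaces 150 manual downloads with one automated pass.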

h) Machine Learning and Data Science Assessments — Since you are on an automation drive, you will need to assess candidates on modern Machine Learning paradigms. An MCQ assessment can be faked, guessed and manipulated in more ways than one. Given that the talent pool in these technologies is limited, you can’t afford to get your next hire wrong. This team is going to drive your non-linear growth, after all.

What needs to be assessed then?

1. Application problems on real world data sets

One of the questions put forth in our assessments is:

“You have N images with one of the two tags — “Apparels” or “Electronics”. Build a model to tag any image in the following large data set as one of the two”

2. Sentiment Analysis

“Given a set of reviews of a particular product in a marketplace, determine the overall sentiment with either a “positive” or “negative” tag”

DoSelect’s suite empowers you to assess candidates on use cases like these. This is the closest your team can get to measuring actual, on-the-job proficiency in ML and AI skills.
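For the review-tagging task above, a candidate’s first cut might look like the toy bag-of-words scorer below. It is a sketch of the kind of submission such an assessment elicits, not DoSelect’s reference solution or grading rubric:

```python
# Toy bag-of-words sentiment scorer for the review-tagging task:
# count positive vs. negative cue words per review, then vote across
# reviews. Word lists are illustrative; a serious submission would
# train a model on labeled data instead.

POSITIVE = {"great", "excellent", "love", "good", "fast"}
NEGATIVE = {"bad", "poor", "broken", "slow", "hate"}

def tag_review(review: str) -> str:
    """Tag a single review as 'positive' or 'negative'."""
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"

def overall_sentiment(reviews: list[str]) -> str:
    """Majority vote over per-review tags."""
    votes = [tag_review(r) for r in reviews]
    return ("positive" if votes.count("positive") >= votes.count("negative")
            else "negative")

reviews = ["great product, love it", "delivery was slow and packaging bad"]
print([tag_review(r) for r in reviews])  # ['positive', 'negative']
```

Grading real submissions of this kind against held-out reviews, rather than multiple-choice questions, is what makes the assessment hard to fake.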

These 8 reasons should establish DoSelect as your ideal technical skills measurement partner, in the truest sense of the word.

The current assessment scenario needs an overhaul if vendors are to become partners of HR teams and ensure their success in identifying the right talent, in line with strategic imperatives. DoSelect attempts to do just that, and more, by lending crucial insights that impact your engineering team’s performance, bit by bit, literally.

What are your thoughts? We would love to pick your brains about the seismic shift that is due to hit hiring teams and the arsenal you are building for your business units.

Please drop us a note at hello@doselect or a message at +91–9711919089, and we can help you a tad (actually, more than a tad) in devising the right assessments for your skill needs.

Till next time.

Update 1: Some readers asked what the candidate interface looks like in DoSelect. The following GIF is taken from a live test happening as I type this.



Update 2: Readers also asked for a sample report for our assessments. One such report, for a Java assessment, can be accessed at this link: http://dos.lc/java-report-sample
