Mere performance scores are misleading. Here’s why.
At DoSelect, we have been focused on rethinking technical assessments. We realized that the traditional “coding tests” approach ignores the most crucial question an assessment of a codebase should answer:
How well do a candidate’s skills align with the requirements of the job role?
Today’s candidates rank the interview experience as one of their biggest motivators for joining an organization. A relevant assessment framework is at the heart of this great (or yuck!) experience.
As we speak, a host of organizations continuously struggle with the limited breadth of technologies their assessments can evaluate, and candidates notice.
Want to read some candidate verbatims that we collected over the past 12 months?
“A week before the test, I stuff my innards with algorithms and let them out on assessment day. It does the job.”
“Look - they don’t even test my program’s functionality. They look at it as a black box that matches an output to a pre-defined input.”
To break it down: functional testing of code should be the order of the day. After all, you are hiring developers for their ability to write deployment-ready code free of errors and conceptual violations. The present-day assessment mentality of “input matches output, so they must be good” is harmful at best and calamitous at worst. Couple it with an utter disdain for the quality of the questions developers are asked to solve, and the problem compounds. After all, how many variations of the same fuzzy logic algorithm can be spun in a lifetime?
What is the alternative?
An evaluation mechanism that assesses the interactions between the various elements of the code and ensures they are doing what they are intended to do. Think of an assessment that aligns itself closely with the software development life cycle: the developer writes code and unit-tests it before QA comes into the picture. Your assessment engine should serve as the unit testing and QA framework that lets the developer validate that their code fulfills all prerequisites before it goes to production.
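To make the idea concrete, here is a minimal sketch of what unit-test-style grading looks like, as opposed to matching a single hard-coded input/output pair. The submission `slugify` and the test names are hypothetical examples, not actual DoSelect internals: the point is that each test probes a distinct behavior of the code.

```python
import unittest

def slugify(title):
    """Hypothetical candidate submission: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyBehaviorTests(unittest.TestCase):
    """Grade behaviors of the code, not one hard-coded I/O pair."""

    def test_basic_conversion(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_is_case_insensitive(self):
        # Mixed-case input should normalize the same way as lowercase.
        self.assertEqual(slugify("MiXeD CaSe"), slugify("mixed case"))

    def test_collapses_whitespace(self):
        # Runs of spaces must not produce empty slug segments.
        self.assertEqual(slugify("a   b"), "a-b")

if __name__ == "__main__":
    unittest.main(argv=["grader"], exit=False)
```

A candidate whose code passes only the first test gets partial credit for the happy path; the other tests expose exactly which behaviors are missing, which is far more informative to a recruiter than a single pass/fail bit.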
At the risk of oversimplifying, let us present an everyday scenario where such mechanisms come into play.
Imagine a web page that one of your candidates is asked to develop: a search box that autocompletes suggestions based on the user’s query.
Questions like this are a remarkable departure from the limited testing dimensions of a standard input/output assessment engine, and they open up a far wider assessment horizon for recruiters and candidates alike.
Enter DoSelect
As a bunch of developers who saw what was missing in the current world order, we set out to build an assessment engine that could evaluate the many behaviors of code, not just its expected output on a few standard test cases.
With a technology-agnostic design, our engine enables you to test a developer on any technology you wish. Yes, you got that right – any technology.
From testing whether a candidate understands the most fundamental and crucial concepts of a programming language, to evaluating them against real-world scenarios like the one you’ve seen above: implementing autocomplete based on a user’s query.
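For readers curious what the autocomplete scenario involves under the hood, here is one common way a candidate might implement the core of it: binary search over a sorted vocabulary. The class name, word list, and `suggest` signature are illustrative assumptions, not a prescribed solution.

```python
import bisect

class Autocomplete:
    """Prefix autocomplete over a static vocabulary (illustrative example)."""

    def __init__(self, words):
        self.words = sorted(words)

    def suggest(self, prefix, limit=5):
        # Binary-search to the first word >= prefix, then collect
        # consecutive words that share the prefix, up to `limit`.
        i = bisect.bisect_left(self.words, prefix)
        out = []
        while i < len(self.words) and self.words[i].startswith(prefix):
            out.append(self.words[i])
            if len(out) == limit:
                break
            i += 1
        return out

ac = Autocomplete(["car", "card", "care", "cat", "dog"])
print(ac.suggest("ca"))  # ['car', 'card', 'care', 'cat']
```

Notice how many behaviors a grader could probe here beyond one I/O pair: empty prefixes, prefixes with no matches, respect for the result limit, and ordering of suggestions.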
The Coverage
Our platform’s core reason for existence is grading candidates across all major languages (C, C++, Java, Python – you name it, we’ve got it) and technologies (front end, back end, full stack, DevOps, and more) for a more accurate assessment of their programming chops.
Our product team is continuously introducing new technologies, driven by popularity and demand. Oh, and did we mention? The advanced backbone of our platform enables us to configure any new technology in under 120 minutes.
Pleasant Candidate Experience
Imagine a candidate’s delight at being graded on the overall robustness of their code, and not just its ability to pass a couple of I/O test cases! An Android developer can now be graded on how well the pieces of their application fit together into the final outcome. A UI developer can be assessed on their design’s interaction behavior across the entire page.
Remember the rants above? Not anymore.
P.S.: DoSelect is now deployed by forward-thinking organizations like Postman, Dream 11, Edgeverve, Amazon, InMobi, Tata Communications, Practo, and UpGrad. Like you, they were looking to unshackle their assessment experience. Get in touch at hello@doselect.com if this rings a bell for you.