DoSelect feature roster

The ingredients behind the world's most exhaustive technical assessment engine

Get a taste

Language-specific assessments

Imagine this - you are hiring a backend Java developer. Which language constructs should you be assessing them on?

There are many answers to this question - inheritance, polymorphism, other OOP concepts, and so on. What do the popular assessment engines assess them on, though?

Algorithms and data structures. This, irrespective of whether the role is actually heavy on algorithms. It is the source of a fundamental disconnect between what should be assessed and what is assessed, and it leads to potentially bad hires. DoSelect's suite of language-based assessments evaluates code on language constructs and a candidate's ability to navigate them. That does not mean our content bank is short on algorithms - we have plenty of algorithmic questions. Beyond algorithms, though, the bank comprises a vast suite of questions that gauge conceptual ability across 24+ programming languages.
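To make that concrete, a construct-level Java question might probe whether a candidate understands how dynamic dispatch behaves, rather than asking for yet another shortest-path routine. The snippet below is a purely illustrative sketch of that style of question, not an item from the DoSelect bank.

    // Illustrative only: a construct-level question might ask what this prints, and why.
    class Shape {
        double area() { return 0.0; }
        @Override
        public String toString() { return getClass().getSimpleName() + " with area " + area(); }
    }

    class Circle extends Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        @Override
        double area() { return Math.PI * radius * radius; }
    }

    public class DispatchDemo {
        public static void main(String[] args) {
            Shape s = new Circle(1.0);      // static type Shape, runtime type Circle
            System.out.println(s.area());   // dynamic dispatch: prints ~3.14159, not 0.0
            System.out.println(s);          // toString() also resolves against Circle
        }
    }

A candidate who is comfortable with the language knows immediately why the Circle implementation wins, even though the variable's static type is Shape.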

Code quality analysis

DoSelect’s assessment reports measure the single most important parameter when hiring a developer - their code quality. While assessment scores should be used as a pointer, insights into a developer’s code quality serve as your source of truth. DoSelect lets you dive into these truths before your technical managers get into subsequent rounds with candidates.

Our reports classify code deficiencies into two categories - Errors and Warnings. These deficiencies are further sub-classified as bug risks, security concerns, complexity issues, performance issues, duplication, styling errors and lack of clarity. They also serve as fodder for subsequent rounds of interviews, where technical hiring managers can focus on these code-level findings rather than letting subjectivity envelop the discussion.
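To make the classification concrete, here is a hypothetical Java submission annotated with the kinds of findings such a report might surface; the category labels come from the list above, while the code itself is invented for illustration.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class UserLookup {

        // Warning (styling / clarity): single-letter parameter names obscure intent.
        public ResultSet find(Connection c, String n) throws Exception {
            Statement st = c.createStatement();
            // Error (security concern): concatenating user input into SQL invites injection;
            // a PreparedStatement with a bound parameter would be the suggested fix.
            return st.executeQuery("SELECT * FROM users WHERE name = '" + n + "'");
            // Warning (bug risk): the Statement is never closed, leaking a database resource.
        }
    }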

Read more about this feature here.

Crunch

What lends credibility to assessment scores?

Social footprints that validate these assessment scores by drawing an elaborate, informative sketch of a candidate’s contributions across established technical forums. DoSelect’s Crunch pulls data from submissions and contributions across GitHub and Stack Overflow and pools it with DoSelect’s assessment scores. The result is an activity log that informs recruitment and learning teams of the secondary skills a candidate possesses. You might be hiring a candidate for Python, but it always helps to know that they have pushed commits to Angular projects as well, doesn’t it?

Crunch takes into account the frequency of activity, project contributions, and answers and their ratings on these networks, and normalizes them to portray a candidate’s proficiency in each language, across multiple technologies. Our contribution activity graph tracks the growth in a candidate’s technical prowess over time. The best bit? Developers get to keep tabs on their own progress, as data is reported historically across weeks, months and years.
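Purely as a mental model - the actual signals and weighting DoSelect uses are its own - such a normalization can be pictured as a weighted combination of activity signals squashed into a common range. Everything in this sketch, including the weights and caps, is hypothetical.

    public class ActivityScore {

        // Hypothetical weights; DoSelect's real model is not public.
        private static final double W_COMMITS = 0.5;
        private static final double W_PROJECTS = 0.3;
        private static final double W_ANSWERS = 0.2;

        // Squashes a raw count into the 0..1 range so different signals become comparable.
        private static double normalize(double raw, double typicalMax) {
            return Math.min(raw, typicalMax) / typicalMax;
        }

        // Combines commit frequency, project contributions, and rated answers into one score.
        public static double proficiency(int commitsPerMonth, int projects, int acceptedAnswers) {
            return 100 * (W_COMMITS * normalize(commitsPerMonth, 60)
                        + W_PROJECTS * normalize(projects, 20)
                        + W_ANSWERS * normalize(acceptedAnswers, 50));
        }

        public static void main(String[] args) {
            // e.g., a candidate with 25 commits/month, 4 projects, 12 accepted answers
            System.out.printf("Proficiency (illustrative): %.1f%n", proficiency(25, 4, 12));
        }
    }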

API skills

In another industry-first, DoSelect now enables teams to assess candidates and employees on their skills in deciphering and solving API-based questions.

Teams can use our internal question bank, which comprises questions built around open APIs from Twitter and NASA, to name a couple, or expose their own open APIs to create assessments.
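To give a flavour of what such a question can involve (this sketch is illustrative, not an actual DoSelect question), a candidate might be asked to call a public endpoint such as NASA's Astronomy Picture of the Day API and work with the JSON it returns:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ApodClient {
        public static void main(String[] args) throws Exception {
            // NASA's public APOD endpoint; DEMO_KEY is the rate-limited demo key.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // A real question would parse the JSON and extract specific fields;
            // printing the raw body keeps this sketch short.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }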

Mobility

Why should a talent acquisition team download 100 applications from 100 candidates just to find out whether they work? This fundamental problem is the genesis of our Mobility platform.

DoSelect’s Mobility assessment platform evaluates an application on both build and functionality (as opposed to mere build scores, the prevalent norm among popular assessment engines). Application screenshots paired with test case execution let teams filter down to only those candidates whose applications sailed through the functional test cases. There is no need to download each application to evaluate it.

However, if you do wish to download them, an APK download provision is built into the platform.
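The functional test cases mentioned above are conceptually similar to instrumented UI tests. The Espresso sketch below only illustrates the kind of check that might run against a submitted app; LoginActivity and the view IDs are invented placeholders, not part of DoSelect's platform.

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.action.ViewActions.typeText;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.withId;
    import static androidx.test.espresso.matcher.ViewMatchers.withText;

    import androidx.test.ext.junit.rules.ActivityScenarioRule;
    import androidx.test.ext.junit.runners.AndroidJUnit4;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(AndroidJUnit4.class)
    public class LoginFunctionalTest {

        // LoginActivity and the view IDs below stand in for the candidate's app.
        @Rule
        public ActivityScenarioRule<LoginActivity> rule =
                new ActivityScenarioRule<>(LoginActivity.class);

        @Test
        public void loginShowsWelcomeMessage() {
            onView(withId(R.id.username)).perform(typeText("ada"));
            onView(withId(R.id.loginButton)).perform(click());
            // Functional pass/fail: the app must actually render the expected screen.
            onView(withId(R.id.greeting)).check(matches(withText("Welcome, ada")));
        }
    }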

Auto UI

Because it all boils down to usability, we fly in to save the day. In automated UI assessments, candidates write their solutions in our cloud IDE while previewing their code live. After submission, our automated evaluation engine thoroughly checks the functionality of the front-end application and brings back a detailed report - all in a browser!
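In spirit, such automated checks resemble browser-driven functional tests. The Selenium sketch below is purely illustrative - the URL and element IDs are invented, and this is not DoSelect's actual evaluation harness:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class TodoUiCheck {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // URL and element IDs are hypothetical stand-ins for a candidate's submitted front end.
                driver.get("http://localhost:3000");
                driver.findElement(By.id("new-todo")).sendKeys("write tests");
                driver.findElement(By.id("add-button")).click();

                String firstItem = driver.findElement(By.cssSelector("#todo-list li")).getText();
                System.out.println("write tests".equals(firstItem) ? "PASS" : "FAIL");
            } finally {
                driver.quit();
            }
        }
    }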

Database

In an industry-first, DoSelect enables teams to assess candidates on both SELECT and DDL/DML queries. To elaborate:

SELECT query-based assessments test a candidate’s understanding of basic querying as well as more involved concepts (e.g., the various types of JOINs) over a reasonably large database.

DDL/DML query-based assessments enable exhaustive testing of a candidate’s ability to create and manipulate databases.

Together, these ensure an in-depth evaluation and offer assessment teams high flexibility through a broad range of questions.
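For illustration only (the schema and data are invented, and the sketch assumes the sqlite-jdbc driver is on the classpath), the two flavours map onto queries like these:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryAssessmentSketch {
        public static void main(String[] args) throws Exception {
            // In-memory database so the example is self-contained.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
                 Statement stmt = conn.createStatement()) {

                // DDL/DML flavour: create and populate tables.
                stmt.executeUpdate("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)");
                stmt.executeUpdate("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)");
                stmt.executeUpdate("INSERT INTO departments VALUES (1, 'Engineering')");
                stmt.executeUpdate("INSERT INTO employees VALUES (10, 'Ada', 1)");

                // SELECT flavour: a JOIN of the kind a querying question might test.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT e.name, d.name FROM employees e JOIN departments d ON e.dept_id = d.id")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " works in " + rs.getString(2));
                    }
                }
            }
        }
    }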

Machine learning and data science

Teams can now choose Machine Learning and Data Science problems from the problem marketplace and use them in tests. Upon submission, the developer's solution is evaluated to check whether its predictions fall within the error bounds required by the problem statement. If the solution generates any plots, they are rendered and made available in the reports.
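Conceptually, the check works like the hedged sketch below: the solution's predictions are compared against ground truth and the resulting error is tested against the bound from the problem statement. The numbers and tolerance here are invented.

    public class PredictionCheck {

        // Mean absolute error between predictions and ground-truth targets.
        static double meanAbsoluteError(double[] predicted, double[] actual) {
            double total = 0;
            for (int i = 0; i < predicted.length; i++) {
                total += Math.abs(predicted[i] - actual[i]);
            }
            return total / predicted.length;
        }

        public static void main(String[] args) {
            // Invented example values; a real problem supplies its own dataset and bound.
            double[] predicted = {2.9, 4.2, 5.1};
            double[] actual = {3.0, 4.0, 5.0};
            double errorBound = 0.25;

            double mae = meanAbsoluteError(predicted, actual);
            System.out.println(mae <= errorBound ? "Within error bounds" : "Outside error bounds");
        }
    }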

An assessment engine for developers. By developers.

Take a spin on an engine that evaluates true technical competence.

Get Started Today
Contact