(Replying to PARENT post)
For example...
>The fizzbuzz-style coding problems, however, did not perform as well. While the confidence intervals are large, the current data shows less correlation with interview results. [...] The coding problems were also harder for people to finish. We saw twice the drop off rate on the coding problems as we saw on the quiz.
I read that paragraph several times and I still don't understand what he's actually saying. If those candidates "dropped off" on the fizzbuzz, were they still kept for further evaluation in the following extended coding session? A later paragraph says...
>So we started following up with interviews where we asked people to write code. Suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks. Conversely, people who spoke about very trivial sounding projects (or communicated so poorly we had little idea what they had worked on) were among the best at actual programming.
For the fizzbuzz failures to be non-correlative and counterintuitive, he must not have rejected them for failing fizzbuzz, and they must have gone on to do spectacularly well in the larger coding sessions. If that's what happened, then yes, that is a very counterintuitive result. What were the topics of the larger coding sessions?
(Replying to PARENT post)
Why is that a problem? Maybe almost everyone is decently good (as evidenced by having a string of jobs and, presumably, references), and your interviews are creating tons of false negatives. Or heck, vice versa. You don't know.
You are presuming your conclusions. You have no basis to make conclusions yet; you just have incomplete data. It's interesting data, and I'm gleefully happy that somebody is looking at this in the context of programmers (too many studies are very broad, across many different career/job types, IMO). But I think all you have right now is data. Fishing for correlations at this point is nearly bound to lead you astray.
With that aside, I'm very interested in the eventual correlation between test performance and job performance. I'm biased - I dislike IQ tests, but I must admit there is a lot of research on them out there. Personally, I perform spectacularly on this sort of test, pretty poorly in whiteboard tests, so-so in pair-programming interviews, and generally top of the heap in actual job performance. It would definitely help me personally if these tests were valid. Yet, still, I wonder: do they measure "get things done"? Do they measure "don't piss off the CEO/customer" skills? There are a ton of things I think are important beyond pure cognitive skills.
(Replying to PARENT post)
>This does create some danger of circular reasoning (perhaps we're just carefully describing our own biases). But we have to start somewhere, and basing our evaluations on how people write actual code seems like a good place. The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results.
Absolutely. Results on the earlier screens and results on the later interview aren't exactly independent variables, and neither is the one that really seems to matter - subsequent on-the-job success. There are all sorts of biases and confounding factors likely to be shared between them, especially since there's no indication that the later interviews were even done blind w.r.t. the earlier screens. Until then, we're just measuring correlations between different interview techniques, and it should be no surprise that two different kinds of code-focused interviews show the highest correlation.
(Replying to PARENT post)
I'm not sure what kind of hackers they were looking for, but I've been directly involved with creating the infrastructure used in marketing campaigns with the likes of CNN, McDonalds, Infiniti, and more. I've turned an idea into a company with 8 full time employees and have investors seriously interested in one of my side projects. I'm currently involved with leading a project that integrates with a large bank.
I'm a full-stack Ruby dev learning Clojure in my spare time, and heavily involved with self-improvement. Anyone who watches me for a moment can see that I can solve problems very quickly. I didn't care much about being selected; I have a solid job and offers coming in.
Would anyone who got selected by Triplebyte care to list their credentials/achievements? My main motivation was to see how I compare against others at my current level.
(Replying to PARENT post)
(Replying to PARENT post)
This was a great approach for me, because it didn't particularly focus on anything outside of the present. We worked on solving real problems and contributing to the project. It's a great, low-stress method of gauging whether someone has the chops for what is typically the "day-to-day" life at the given shop.
(Replying to PARENT post)
What's the metric that shows it works well?
> The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results.
And how precisely do you measure job performance? If this is achievable, I've got a line of companies out my door that would love to pay for a service that systematically measures job performance.
(Replying to PARENT post)
I have been rejected many, many, many times because of the first screening (a CV check by a non-technical recruiter). My most recent example was at a well-known tech startup where I had to hack my way in just to get noticed and land the first interview. The funny thing is that I was the fastest candidate to get hired, plus I won a company-wide award for my work just 4 months after joining.
I haven't finished a degree because I thought it was boring and I was learning things I had already taught myself, but this fact makes my resume go down the list very fast. Because interviewers have no time to lose and thousands of candidates to check, I'm sure they will find technology very useful for getting those good prospects in front of everyone else.
Something I've seen many times at my past jobs is good technical applicants, some even referred by a team member, being turned down later because of culture fit. I don't know why, but engineers and technical people are more likely to fail at those interviews than others. The surprising thing is that companies check culture as the last step, because the people who can run that type of interview are few and can't become full-time culture keepers. This is an enormous waste of time and resources for the applicant, the interviewers, and the company itself.
(Replying to PARENT post)
This indicates to me that either the "simple programming tasks" are not well-designed, or the discussion about the past projects was not long enough. It still sounds like this interview process is only identifying candidates who are good at coding while someone is watching over their shoulder.
However, what I find to be the bigger issue with this article is that "success" is considered to be "passed the interview". Ultimately, all this article tells us is what currently correlates with qualities Triplebyte likes to see in candidates, not what correlates with good hires. To be fair, they do mention this at the end of the article.
(Replying to PARENT post)
Are they asking critical questions about what decisions and trade-offs were made? On their past projects, can the candidates explain the reasoning behind their choice of tools? Can they talk about what types of improvements they wanted to see in the build-test-deploy pipeline?
I'm just surprised that this question is singled out as "poor."
(Replying to PARENT post)
"Fizz buzz style coding problems are less predictive of ability to do well in a programming interview"
I'm sure this is 100% true, but I thought the point of fizzbuzz-type problems was to weed out people who can't program at all? It's not to identify good programmers or even competent ones; it's to identify blatantly incompetent ones, which are surprisingly common even when hiring in SV.
I've never personally asked fizzbuzz when interviewing because my company's hiring process seems to do well enough to not require it. However, based on what I read here it's also very good for filtering out narcissistic divas (i.e., the occasional HN posters who pop a monocle when they get asked fizzbuzz: "how dare someone ask a dumb question that is beneath me?!? Needless to say, I walked out of the interview immediately! Harrumph!").
Maybe Triplebyte's article is using the term "fizzbuzz-type problem" to refer to any contrived programming problem, but in common usage fizzbuzz-type problems are bozo filters that serve no higher purpose than filtering out bozos.
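For anyone who hasn't seen it, here's a minimal sketch of the classic problem (in Python, though any language works). The entire point is that any working programmer should be able to produce something like this in a couple of minutes:

    # Classic FizzBuzz: print the numbers 1..100, but print "Fizz" for
    # multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)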
(Replying to PARENT post)
I'm surprised they didn't get stronger results from fizz buzz, but among the candidates I saw, the percentage of 'non-coders' was substantial, though not a majority.
One thing missing from this investigation is a measure of solution quality. A good portion of candidates who actually finished coding questions with me ended up without thoroughly understanding how their code worked, and/or with code that would be hard to maintain. Other candidates would write top-notch code but were largely unable to explain their thought process. These are critical pieces of the interview that contribute much more 'color' than 'score', and they are important to note.
(Replying to PARENT post)
There seems to be a big assumption that "our programming questions are going to be good and predictive, even if everyone else's are bad." What if being able to describe in-depth a past (real) project correlates just as well (or better) to on-the-job performance as being able to design and code one of their artificial ones? Or what if those artificial ones just don't correlate that well with on-the-job performance in the first place?
It is definitely harder to BS-detect/grade, though.
They want to re-run the analysis against actual job performance in the future, which is nice, but then it seems like they're throwing ideas out awfully early.
(Replying to PARENT post)
hiring for that small startup? you'll want multi-hat wearing people first, brilliant programmers second.
hiring for a large enterprise team? you'll want to hire for "plays well with others" first, and brilliant programmers second.
that's not to say you should hire schleps, for sure. they should at least be competent programmers. i guess what i'm saying is (despite how it sounds), hiring someone who can program brilliantly is important, but not as important as hiring someone who can navigate your company's software-making requirements successfully.
firing the brilliant engineer who thinks he's more talented than everyone else in the small company so he keeps demanding to be put in charge? yup, that's a thing. firing the brilliant engineer who fights tooth-and-nail over some inconsequential feature the product team wants to change? that's a thing too. assigning a brilliant engineer to crap, meaningless work because no one else on the team wants to work with them? yuppers -- seen it.
in any organization, you are either the only one in charge or you're following someone else's orders -- both of which require different aspects around working well with others.
(Replying to PARENT post)
From Triplebyte's FAQ:
"When do I meet companies?
If we decided to work together after giving you feedback post our technical interviews, we'll start introducing you to the companies and guiding you through their hiring process."
So, just to be clear, first you quiz/screenshare/interview with Triplebyte, and then you still have to go through each company's search process? Or do companies partner with Triplebyte to fast-track candidates who've already been vetted?
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
From the article:
> Our process has four steps:
> 1. Online technical screen.
> 2. 15-minute phone call discussing a technical project.
> 3. 45-minute screen share interview where the candidate writes code.
> 4. 2-hour screen share where they do a larger coding project.
Then later:
> ...we can't afford to send people we're unsure about to companies
Does every applicant in this system really have to go through four rounds of screening before even talking to someone who works at the actual company? I can't imagine doing that unless I was desperate.
(Replying to PARENT post)
(Replying to PARENT post)
The reality of hiring is that you're going to make mistakes, like in every other part of running a business. Even in an extended "interview" such as dating a potential life partner, people make mistakes, so I'm not sure how the hiring process can be quantified to remove that error. The interview process is so excruciating these days that I often hate the companies I'm talking with.
While we're at it, the skill requirements listed with jobs today are astounding. My experience is that a company wants to hire a programmer with at least a journeyman's level of expertise in 6-8 skills. If you have 5 and are confident you can learn the other 3, you're dead in the water. Let's be honest: the latest JavaScript framework isn't that complicated. The latest NoSQL database isn't that hard to learn.
The truly hard parts of joining a new company are learning how projects are managed, getting the political lay of the land, finding a sherpa to answer your questions in the first couple of weeks, and learning where you fit within the organization.
(Replying to PARENT post)
I agree with the blog post author that current hiring processes mostly show that "too many companies are content to do what they've always done." And the idea of a standardized, automated quiz of programming knowledge sounds interesting. But what has to happen next is an actual validation study, to find out whether programmers hired by this process do better in actual workplaces than programmers hired by some other process.
Regular readers of HN are aware that I have a FAQ post about this topic of company hiring procedures.[1] Company hiring procedures are the research focus of industrial and organizational psychologists, who have almost a century of research to look back on with long-term, large n studies to provide data on what works and what doesn't work for hiring capable workers. It's a shame that most company human resource departments ignore basically all of that research.
(Replying to PARENT post)
I'm not sure how this gets around the circularity arguments, though, since you never get to evaluate the job performance of anyone you already screened out. Only the tiny fraction of coders who make it past the initial test get evaluated, which could serve to reinforce the potential biases rather than ameliorate them.
The one case in which this would work is if they hired a number of coders who didn't work out well, and could then add or update features as negative predictors of job success.
I'm assuming that they're not at the scale of a larger company with thousands of engineers, and that the observations going into a regression model are relatively sparse. If this is a startup with 20 hires, I'd be surprised if there was much to do to refine the model after a round or two of evaluations, but I would be excited to learn otherwise.
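To make the circularity concrete, here's a minimal simulation sketch (Python, with made-up parameters) of the range-restriction effect: even when a screen genuinely tracks skill, looking only at the people who passed it sharply attenuates the correlation you can observe afterwards.

    import random

    random.seed(0)

    def correlation(xs, ys):
        # Plain Pearson correlation, stdlib only.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Hypothetical model: screen score and later job performance share an
    # underlying "skill" factor, plus independent noise.
    candidates = []
    for _ in range(10000):
        skill = random.gauss(0, 1)
        screen = skill + random.gauss(0, 1)  # screening-test score
        job = skill + random.gauss(0, 1)     # later job performance
        candidates.append((screen, job))

    print("everyone:", correlation([s for s, _ in candidates],
                                   [j for _, j in candidates]))

    # But we only ever observe the people who passed the screen:
    hired = [(s, j) for s, j in candidates if s > 1.0]
    print("hired only:", correlation([s for s, _ in hired],
                                     [j for _, j in hired]))

The "hired only" correlation comes out far below the "everyone" correlation, even though the screen is a genuine predictor by construction.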
(Replying to PARENT post)
(Replying to PARENT post)
Why are people so afraid to pick up the phone and talk to references? I'm always happy to give out my references, and always delighted to talk about the good devs I've worked with, with specifics about what they've done.
Standardized tests don't work for schools and don't work for jobs.
(Replying to PARENT post)
Here are the important metrics in my opinion, in order of decreasing importance:
- How many people were hired? Or what percentage of positions were filled? How long does it take to fill a position? Nothing in the article mentioned how many people were actually hired.
- False positives (people making it to the most expensive stage of the interview, typically a day-long on-site interview, and being rejected there). What percentage of people who went to on-site interviews got offers? (See the toy sketch after this list.) Personally, I have always advocated processes that eliminate as many false positives as possible, even if it comes at the cost of some false negatives. Of course, you have to be careful not to filter people out too aggressively, because then you're just not going to hire anyone.
- False negatives (incorrectly rejecting good candidates early). By definition that's impossible to measure exactly. However, if you are not hiring fast enough, then maybe you have a problem with your screening process. At this point you could do an experiment and relax the screening process for half of the candidates and see what happens. But it could be just a sourcing problem, that is, you are not getting good candidates in the pipeline to begin with. It's very hard to tell whether you are being too harsh and not believing enough in people's abilities (or not willing to develop talent), or you are just not attractive to the kind of people that you want to hire.
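A toy sketch of the funnel math behind these metrics (Python; all counts are invented for illustration):

    # Hypothetical hiring funnel (invented numbers) showing the metrics
    # above: stage-to-stage pass rates and the fill rate.
    funnel = [
        ("applied", 1000),
        ("passed screen", 200),
        ("on-site interview", 60),
        ("offer", 15),
        ("hired", 10),
    ]
    positions_open = 12

    # A low "on-site interview -> offer" rate means many expensive
    # false positives reached the final stage.
    for (prev, n_prev), (stage, n) in zip(funnel, funnel[1:]):
        print(f"{prev} -> {stage}: {n / n_prev:.0%}")

    print(f"fill rate: {funnel[-1][1] / positions_open:.0%}")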
Of course, all of the above is from the employer's point of view. If you are also trying to provide job seekers with a good service, then you can devise other metrics for success and for efficiency from their perspective.
(edited for formatting)
(Replying to PARENT post)
Either way, I applaud every effort to improve the hiring process. However, I'm a tad skeptical. They should release the dataset (unless I missed it), because it's pretty convenient that the results seem to indicate that hiring can be improved by a quiz, which is something they could build and sell.
I'd be interested in the following screening filter: have a programmer at the company read through the projects the candidates supplied (as a replacement for "read CV") and then come to a yes/no conclusion. No projects = no job offer by default. You can always think of a different approach for people with no projects if you feel like you should hire from that group. Possibly have multiple programmers read the code and discuss it.
(Replying to PARENT post)
Also, not being able to speak well about a past project is highly correlated with doing well on a coding test.
Sounds like one or the other should be thrown out. (Or maybe only the small percentage who do well on both will go on to do well on the job?)
(Replying to PARENT post)
The advantage is that we're building a corpus of solutions to the same problem that we can compare against each other, which is interesting. More importantly, we're building a corpus of solutions that we can then pick from to have the candidate analyze in-person, and talk us through what they see, what they'd do differently, what they like/don't like, etc.
In short, we familiarize them with the problem via their own answer, and then ask them to analyze someone else's (anonymized) answer. Our sample set so far is too small to draw definitive conclusions from, but it feels better than our old ways of doing things.
(Replying to PARENT post)
Specifically, in order for this to be a blind study of the relationship between the screening exam and technical interview performance, the technical interviewers should not know the results of the screening exam before they make their decision. While they do not state this clearly, it seems possible that, since the same two people were conducting all steps themselves, they were not properly blinded.
Thus we cannot rule out confirmation bias in the interviewers themselves, i.e. that they were impressed by good performance on the programming quiz, not that it was an independent predictor of good performance in the technical interview.
Now, maybe one person did the screening and the other did the technical interview with no information sharing in every case, but this would need to be clarified.
(Replying to PARENT post)
IMO, this article demonstrates the need to certify software engineers, using a process similar to the interviewing process described. Then, when hiring, we could skip most of the "do you know how to code" questions and get down to cultural fit and mutual interest.
(Replying to PARENT post)
(Replying to PARENT post)
In the near future, I'll be requesting exercises that can be treated and used as a micro-library. They are free to use it, and I get to post it on GitHub.
(Replying to PARENT post)
Yes, this is when you have something real.
(Replying to PARENT post)
Unsurprising. Turns out talk really is cheap and doesn't indicate that one can actually do. I've even seen people maintain jobs for years on talk alone, without ever really doing much. Or even being able to do much.
(Replying to PARENT post)
I feel like this should be listed among the least surprising things in the world. Being a good programmer is about MAKING that hard project look easy, by approaching it in the correct way!
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
(Replying to PARENT post)
Hi, I read through the conclusions you made, and I felt as if you defined a process for hiring machines to code rather than humans. So I took a few moments to read your manifesto (https://triplebyte.com/manifesto) (the premise on which your entire conclusion rests), and here is my take on it.
1. "Whiteboard coding and algorithm questions aren't good predictors of how effective someone will be at writing real code." Whiteboard coding shows how someone really thinks. It illustrates the person's thought process, which helps the interviewer judge their rational thinking and logical approach. Algorithms add to this by illustrating problem-solving ability. A person may not actually be able to solve an algorithm problem, but the attempt on a whiteboard speaks more than an implementation on an online platform.
2. "Candidates deserve a consistent experience and consistent evaluation." The entire USP of an interview is its diversity, which allows the interviewer to judge whether someone can adapt to new situations and come out of their comfort zone. What you are suggesting is to turn the interview process into a GRE-style exam, which will in turn just create a culture among developers of preparing for that exam for two years.
3. "Hiring decisions should be made using a clear scoring system, not gut feelings." Most companies have a three- or four-round interview process, which is enough to remove the gut-feeling factor. If you want to argue that a candidate may have been selected based on the gut feeling of all four interviewers, then my counter-argument is that he is worth selecting if he could generate that gut feeling in so many people.
4. "The hiring process should be focused on discovering strengths, not uncovering weaknesses." Agreed on this point. However, the irony is that you are trying to define one particular process for hiring. I wonder if it could actually perform the "discovery" part.
5. "Candidates should be told exactly what to expect in an interview, and be allowed to prepare in advance." So basically you want to hire the guy who has studied the most over the smartest guy in the room. From my experience, I can surely say that if companies like Google and Facebook followed that practice, I wouldn't even be writing their names here.
6. "truthful feedback on how they did so they know how they can improve" Agreed. Something that should be adopted by all companies in their recruiting processes.
7. "Good programmers come from all types of background." You reinforce my point with this statement. Good programmers need not just be people who can quickly write a search over a large set of data using hash maps; they can also be people with brilliant problem-solving ability who are slow to transform it into code, or people who are amazing at thinking through software design and scalability but can't remember code syntax so well. A company needs a good blend of all these people. Only then is a good ecosystem for growth created, rather than just a team of 10 machines who can transform pseudocode into Java in 10 minutes.
8. "The software industry needs to experiment more with hiring processes and figure out what really works." I think many are already doing that with tech hackathons, online challenges, weekend projects, open-source contribution references, etc. So this is not something new that you figured out.
9. "Candidates are at a fundamental disadvantage in salary and equity negotiations" Not sure what kind of companies you have surveyed. I think most well-known companies maintain clear salary and compensation standards. Though people will surely be flattered reading this. :)
10. "Companies should not have to make recruiting a core competency" Now you are just trying to open up the market for yourself. No comments. :P
Would love to hear your counter arguments. Mail me. :-)
(Replying to PARENT post)
They mention evaluating the effectiveness of giving a candidate a project to do "in their own time." I recently had an interview that included this, and I can share the result: I accepted an offer from a different company that didn't require it. I doubt my life is that different from anyone else's, with a full-time job and a full-time life outside of work. Spending that much time to qualify for a single job is too much to ask of anyone. If it counted toward a generic proficiency certification applicable to many positions, I would consider it, but this does not scale when a candidate is applying for multiple positions.