๐Ÿ‘คuptown๐Ÿ•‘10y๐Ÿ”ผ324๐Ÿ—จ๏ธ237

(Replying to PARENT post)

It's great to see some objective research being done on this, and I am very interested in following the results.

They mention evaluating the effectiveness of giving a candidate a project to do "in their own time." I recently had an interview that included this, and I can share the result: I accepted an offer from a different company that didn't require it. I doubt my life is that different from anyone else's, with a full-time job and a full-time life outside of work. Spending that much time to qualify for a single job is too much to ask of anyone. If it earned a generic proficiency certification applicable to many positions, I would consider it, but as it stands this does not scale when a candidate is applying for multiple positions.

๐Ÿ‘คprotonfish๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

IMO, this post did not organize its data and findings into a coherent presentation.

For example...

>The fizzbuzz-style coding problems, however, did not perform as well. While the confidence intervals are large, the current data shows less correlation with interview results. [...] The coding problems were also harder for people to finish. We saw twice the drop off rate on the coding problems as we saw on the quiz.

I read that paragraph several times and I still don't understand what he's actually saying. If those candidates "dropped off" on the fizzbuzz, were they still kept for further evaluation in the extended coding session that followed? A later paragraph says...

>So we started following up with interviews where we asked people to write code. Suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks. Conversely, people who spoke about very trivial sounding projects (or communicated so poorly we had little idea what they had worked on) were among the best at actual programming.

For the fizzbuzz failures to be non-correlative and counterintuitive, it means he did not reject them for failing fizzbuzz and they later ended up doing spectacularly well in the larger coding sessions. If that's what happened, then yes, that is a very counterintuitive result. What were the topics of the larger coding sessions?

๐Ÿ‘คjasode๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

"as soon as we started doing them however, I saw a problem. Almost everyone was passing."

Why is that a problem? Maybe almost everyone is decently good (as evidenced by having a string of jobs, and presumably, references), and your interviews are creating tons of false negatives. Or heck, vice versa. You don't know.

You are presuming your conclusions. You have no basis to make conclusions yet; you just have incomplete data. It's interesting data, and I'm gleefully happy that somebody is looking at this in the context of programmers (too many studies are very broad, across many different career/job types, IMO). But I think all you have right now is data. Fishing for correlations at this point is nearly bound to lead you astray.

With that aside, I'm very interested in the eventual correlation between test performance and job performance. I'm biased - I dislike IQ tests, but I must admit there is a lot of research on them out there. For me personally, I perform spectacularly on this sort of test, pretty poorly in whiteboard tests, so-so in pair-programming to get a job, and generally top of the heap in actual job performance. It would definitely help me personally if these tests turned out to be valid. Yet, still, I wonder: do they measure "get things done"? Do they measure "don't piss off the CEO/customer" skills? There's a ton of things that I think are important beyond pure cognitive skills.

๐Ÿ‘คRogerL๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Here's the part that really seems to matter the most.

    This does create some danger of circular reasoning
    (perhaps we're just carefully describing our own biases).
    But we have to start somewhere, and basing our
    evaluations on how people write actual code seems like a
    good place. The really exciting point comes when we can
    re-run all this analysis, basing it on actual job
    performance, rather than interview results.
Absolutely. Results on the earlier screens and results on the later interview aren't exactly independent variables, and neither is the one that really seems to matter - subsequent on-the-job success. There are all sorts of biases and confounding factors likely to be shared between them, especially since there's no indication that the later interviews were even done blind w.r.t. the earlier screens. Until then, we're just measuring correlations between different interview techniques, and it should be no surprise that two different kinds of code-focused interviews show the highest correlation.
๐Ÿ‘คnotacoward๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I got a phone interview with Harj and wasn't considered for anything further.

I'm not sure what kind of hackers they were looking for, but I've been directly involved with creating the infrastructure used in marketing campaigns with the likes of CNN, McDonalds, Infiniti, and more. I've turned an idea into a company with 8 full time employees and have investors seriously interested in one of my side projects. I'm currently involved with leading a project that integrates with a large bank.

I'm a full stack ruby dev learning clojure in my spare time and heavily involved with self improvement. Anyone who watches me for a moment can see that I can solve problems very quickly. I didn't care much about being selected, I have a solid job and offers coming in.

Would anyone who got selected by Triplebyte care to list their credentials/achievements? My main motivation was to see how I compare against others at my current level.

๐Ÿ‘คNaomarik๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Very interesting methodology, but it would be very nice to correlate this data with long-term job performance. Interview decisions (of which you naturally get more than long-term results, and which are easier to quantify) are hopefully, but not necessarily, an indicator of whether an employee works out for your company. Otherwise you run the risk of optimizing the quiz/screening process around metrics that influence your interview (i.e. testing for how you personally weigh performance indicators, not for how these performance indicators actually affect performance).
๐Ÿ‘คjbangert๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

One of the most enjoyable and mutually effective interviews I had included a one-hour, language-agnostic pairing session that required me to describe how I'd implement a basic data structure. The interviewer drove and implemented. This was then followed by a day-long session of pairing on real problems with a number of different interviewers. Lunch was spent with some of the other team members, where we discussed basic things like culture, day-to-day affairs, and each of our histories.

This was a great approach, to me, because it didn't particularly focus on anything outside of the present. We worked on solving real problems and contributing to the project. It's a great, low-stress method of gauging whether someone has the chops for what is typically the "day-to-day" life at the given shop.

๐Ÿ‘คobfk๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

> In our first 30 days, we've come up with a replacement for resume screens, and shown that it works well.

What's the metric that shows it works well?

> The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results.

And how precisely do you measure job performance? If this is achievable, I've got a line of companies out my door that would love to pay for a service that systematically measures job performance.

๐Ÿ‘คmbesto๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I really think technical hiring, like any other kind of hiring, is broken. As you showed, the best indicators are questions (like your quizzes) or showing "live" what you can do, rather than a pretty CV, a certificate, or a fancy university name (I'm talking about tech, not medicine, construction engineering, or other fields that really require those).

I have been rejected many, many, many times because of the first screening (a CV check by a non-technical recruiter). My last example was at a well-known tech startup where I had to hack my way into getting noticed just to get a first interview. The funny thing is that I was the fastest candidate to get hired, and I won a company-wide award for my work at the company just 4 months after joining.

I never finished a degree because I thought it was boring and I was being taught things I had already taught myself, but this fact makes my resume drop down the list very fast. Because recruiters don't have time to lose and have thousands of candidates to check, I'm sure they will find it very useful to use technology to get the good prospects in front of everyone else.

Something I've seen many times at my past jobs is good technical applicants, some of them even referred by a team member, being turned down later because of culture fit. I don't know why, but engineers and technical people are more likely to fail at those interviews than others. The surprising thing is that culture is checked as the last step, because the people who can run that type of interview are few and can't become full-time culture keepers. This is an enormous waste of time and resources for the applicant, the interviewers, and the company itself.

๐Ÿ‘คjordigg๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

>Suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks.

This indicates to me that either the "simple programming tasks" are not well-designed, or the discussion about past projects was not long enough. It still sounds like this interview process only identifies candidates who are good at coding while someone is watching over their shoulder.

However, what I find to be the bigger issue with this article is that "success" is considered to be "passed the interview". Ultimately, all this article tells us are what currently correlates with qualities Triplebyte likes to see in candidates, not what correlates with good hires. To be fair, they do mention this at the end of the article.

๐Ÿ‘คdadrian๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

The skill of the interviewer in interviewing candidates should correlate just as strongly with everything else. How are you capturing that? I ask because our #1 determinant after an at-home code sample is the #3 thing you find is not predictive.

Are they asking critical questions about what decisions and trade-offs were made? Can they explain well the reasoning behind their choice of tools on past projects? Can they talk about what types of improvements they wanted to see in the build-test-deploy pipeline?

I'm just surprised that this question is singled out as "poor."

๐Ÿ‘คwheaties๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Preamble: this is not middlebrow dismissal; I really like what they're doing here and will pay attention to them going forward. This is just picking a nit that my hypersensitive self can't resist:

"Fizz buzz style coding problems are less predictive of ability to do well in a programming interview"

I'm sure this is 100% true, but I thought the point of fizzbuzz-type problems was to weed out people who can't program at all? It's not to identify good programmers or even competent ones, it's to identify blatantly incompetent ones, which are surprisingly common even when hiring in SV.

I've never personally asked fizzbuzz when interviewing because my company's hiring process seems to do well enough to not require it. However, based on what I read here it's also very good for filtering out narcissistic divas (i.e., the occasional HN posters who pop a monocle when they get asked fizzbuzz: "how dare someone ask a dumb question that is beneath me?!? Needless to say, I walked out of the interview immediately! Harrumph!").

Maybe Triplebyte's article is using the term "fizzbuzz-type problem" to refer to any contrived programming problem, but in common usage fizzbuzz-type problems are bozo filters that serve no higher purpose than filtering out bozos.
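
For anyone who hasn't seen it, the canonical fizzbuzz problem is just: print the numbers 1 to 100, but print "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python sketch (my own illustration of the bozo-filter bar, not something from the article):

    # Canonical fizzbuzz -- the whole point is that any working programmer can write this
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)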

๐Ÿ‘คmwfunk๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I've done over 1000 interviews and my experience agrees with their findings that talking about a project is not a good predictor of coding. I usually left detailed resume questions for the end since so many candidates would bomb the coding part of the interview.

I'm surprised they didn't get stronger results from fizz buzz, but I noticed among the candidates I saw that the percentage of 'non-coders' is substantial but not a majority.

One thing missing from this investigation is a measure of solution quality. A good portion of candidates who actually finished coding questions with me ended up without thoroughly understanding how their code worked and/or with code that would be hard to maintain. Other candidates would write top-notch code but were largely unable to explain their thought process. These are critical pieces of the interview that contribute much more 'color' than 'score' and are important to note.

๐Ÿ‘คchoppaface๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

It seems like they're only evaluating phone screen methods against their pre-designed coding interview problem? But what if there are issues with that problem?

There seems to be a big assumption that "our programming questions are going to be good and predictive, even if everyone else's are bad." What if being able to describe in-depth a past (real) project correlates just as well (or better) to on-the-job performance as being able to design and code one of their artificial ones? Or what if those artificial ones just don't correlate that well with on-the-job performance in the first place?

It is definitely harder to BS-detect/grade, though.

They want to re-run against actual job performance in the future, which is nice, but it seems like they're throwing ideas out awfully early, then.

๐Ÿ‘คmajormajor๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

while this process feels like it's hitting the sweet spot for finding out who can write brilliant code, that's half or less of the battle in hiring people. personally (and as a hiring manager), i feel like a majority of the hiring process is dependent (obviously) on the environment you're hiring into.

hiring for that small startup? you'll want multi-hat wearing people first, brilliant programmers second.

hiring for a large enterprise team? you'll want to hire for "plays well with others" first, and brilliant programmers second.

that's not to say you should hire schleps, for sure. they should at least be competent programmers. i guess what i'm saying is (despite how it sounds), hiring someone who can program brilliantly is important, but not as important as hiring someone who can navigate your company's software-making requirements successfully.

firing the brilliant engineer who thinks he's more talented than everyone else in the small company so he keeps demanding to be put in charge? yup, that's a thing. firing the brilliant engineer who fights tooth-and-nail over some inconsequential feature the product team wants to change? that's a thing too. assigning a brilliant engineer to crap, meaningless work because no one else on the team wants to work with them? yuppers -- seen it.

in any organization, you are either the only one in charge or you're following someone else's orders -- both of which require different aspects around working well with others.

๐Ÿ‘คm3mnoch๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Neat idea, especially for someone like me who's trying to get out of support and into development.

From Triplebytes' FAQ:

"When do I meet companies?

If we decided to work together after giving you feedback post our technical interviews, we'll start introducing you to the companies and guiding you through their hiring process."

So, just to be clear, first you quiz/screenshare/interview with Triplebyte, and then you still have to go through each company's search process? Or do companies partner with Triplebyte to fast-track candidates who've already been vetted?

๐Ÿ‘คdikaiosune๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I interviewed with TripleByte and it was different from a normal job interview in some ways, but fundamentally also the same. There is just no culture fit aspect to the interview. Keep in mind that they are not addressing the problem that coding interviews do not predict on-the-job success. They are basically addressing the needs of a broken system.
๐Ÿ‘คbrobdingnagian๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Correlation between testing and interviewing success tells you nothing about the correlation between either of them and actual work performance.
๐Ÿ‘คjameshart๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

It's very nice to see hiring advice based on data rather than anecdotes. But I wonder if the process described in the article is pre-selecting for people who are out of work and desperate, rather than currently employed and casually looking for something better.

From the article: > Our process has four steps:

> 1. Online technical screen.

> 2. 15-minute phone call discussing a technical project.

> 3. 45-minute screen share interview where the candidate writes code.

> 4. 2-hour screen share where they do a larger coding project.

Then later:

> ...we can't afford to send people we're unsure about to companies

Does every applicant in this system really have to go through four rounds of screening before even talking to someone who works at the actual company? I can't imagine doing that unless I was desperate.

๐Ÿ‘คbjt๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Wait, how is this telling us anything? They don't seem to be predicting what determines the best candidate by looking at how he/she performed on the job. All this is doing is predicting how well they will do in a different part of the interview. Am I wrong?
๐Ÿ‘คhueving๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I personally hate the "take home test" approach to interviewing. I've had multiple such tests that take anywhere from 10-25 hours to complete because simply answering the question isn't enough; you need to give textbook correct answers and your code must be formatted perfectly with the requisite comments and documentation. In short, it's pretty similar to an upper-level college course's final exam; however, in college, you can get a good grade with a few mistakes; in interviewing, you get rejected for a few mistakes. I'm done giving a company 15 hours of my time just to get to a first interview; this is arrogant, condescending, and completely devalues my time.

The reality of hiring is you're going to make mistakes, like every other part of running a business. Even in an extended "interview" such as dating for a potential life partner, people make mistakes so I'm not sure how the hiring process can be quantified to remove said error. The interview process is so excruciating these days I often hate the companies I'm talking with.

While we're at it, the skills requirements listed with jobs today are astounding. My experience is that a company wants to hire a programmer with at least a journeyman's level of expertise in 6-8 skills. If you have 5 and are confident you can learn the other 3, you're dead in the water. Let's be honest, the latest Javascript framework isn't that complicated. The latest NoSQL database isn't that hard to learn.

The truly hard parts of joining a new company are learning how projects are managed, getting the political lay of the land, finding a sherpa to answer your questions in the first couple of weeks, and learning where you fit within the organization.

๐Ÿ‘คseajosh๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

The blog post kindly shared here reports on a startup's experiments with offering a new kind of hiring screening service. "We launched Triplebyte one month ago, with the goal of improving the way programmers are hired. . . . Well, a little over a month has now passed. In the last 30 days, we've done 300 interviews. We've started to put our ideas into practice, to see what works and what doesn't, and to iterate on our process." They are currently validating what they are doing on the first steps just against what happens at the later steps in their process: "For now, we're evaluating all of our experiments against our final round interview decisions. This does create some danger of circular reasoning (perhaps we're just carefully describing our own biases)."

I agree with the blog post author that current hiring processes mostly show that "too many companies are content to do what they've always done." And the idea of a standardized, automated quiz of programming knowledge sounds interesting. But what has to happen next is an actual validation study, to find out whether programmers hired by this process do better as programmers in actual workplaces than programmers hired by some other process.

Regular readers of HN are aware that I have a FAQ post about this topic of company hiring procedures.[1] Company hiring procedures are the research focus of industrial and organizational psychologists, who have almost a century of research to look back on with long-term, large n studies to provide data on what works and what doesn't work for hiring capable workers. It's a shame that most company human resource departments ignore basically all of that research.

[1] https://news.ycombinator.com/item?id=4613543

๐Ÿ‘คtokenadult๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

"The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results"

I'm not sure how this gets around the circularity arguments though, since you never get to evaluate the job performance of someone you selected out already. Only the tiny fraction of coders that make it past the initial test get evaluated, which could serve to reinforce the potential biases rather than ameliorate them.

The one case in which this would work is if they hired a number of coders that didn't work out well, and could add or update a feature as a negative predictor of job success.

I'm assuming that they're not at the scale of a larger company with thousands of engineers, and that the observations going into a regression model are relatively sparse. If this is a startup with 20 hires, I'd be surprised if there was much to do to refine the model after a round or two of evaluations, but I would be excited to learn otherwise.
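
To make the selection effect concrete, here is a toy range-restriction simulation (my own sketch with made-up numbers, not Triplebyte's data). Even when a screen genuinely tracks ability, the correlation you can measure later, using only the people who passed the screen, comes out much smaller than the correlation in the full applicant pool:

    # Toy range-restriction simulation (illustrative only)
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    skill = rng.normal(size=n)           # latent ability
    screen = skill + rng.normal(size=n)  # noisy screening score
    perf = skill + rng.normal(size=n)    # noisy on-the-job performance

    full_r = np.corrcoef(screen, perf)[0, 1]

    # Only candidates above the cutoff ever get a performance measurement
    hired = screen > np.quantile(screen, 0.9)
    hired_r = np.corrcoef(screen[hired], perf[hired])[0, 1]

    print(f"correlation in the full pool: {full_r:.2f}")   # ~0.5
    print(f"correlation among hires only: {hired_r:.2f}")  # substantially lower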

๐Ÿ‘คetrautmann๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

The chart is kind of dumb IMHO. It's nice that they show the correlation between different factors and their final hiring decisions, but that doesn't indicate the actual quality of the people hired. All it does is reveal an unwritten formula behind their hiring decisions. In other words, the time to complete a programming task is being ignored in their decision process and so should be removed from the process. That's independent of how well it indicates a good candidate. Or am I missing something?
๐Ÿ‘คphkahler๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Most of the problems of coding interviews can be solved by good reference checking. Code problems in isolation will never expose how a developer will do when faced with real-world issues, users, QA, and teammates.

Why are people so afraid to pick up the phone and talk to references? I'm always happy to give out my references, and always delighted to talk about the good devs I've worked with, with specifics about what they've done.

Standardized tests don't work for schools and don't work for jobs.

๐Ÿ‘คmichaelvkpdx๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I found the article very interesting, but it seems to me that the metrics are the wrong ones. You cannot treat hire and no-hire outcomes equally. Let's not forget that the goal of the interview process is to actually hire people. A successful hire is worth millions; a correct no-hire decision just avoids further losses of time and money. On the other hand, in terms of efficiency, a bad candidate who wasn't rejected early is the most expensive error. Well, not as expensive as actually hiring the wrong person, but that usually takes months to figure out, whereas the fact that a whole man-day was spent interviewing the wrong person on site becomes obvious in a few hours.

Here are the important metrics in my opinion, in order of decreasing importance:

- How many people were hired? Or what percentage of positions were filled? How long does it take to fill a position? Nothing in the article mentioned how many people were actually hired.

- False positives (people making it to the most expensive stage of the interview, typically a day-long on site interview, and being rejected there). What percentage of people that went to on site interviews got offers? Personally, I have always advocated processes that eliminate as many false positives as possible, even if it comes at the cost of some false negatives. Of course, you have to be careful not to filter out people too aggressively, because then you're just not going to hire anyone.

- False negatives (incorrectly rejecting good candidates early). By definition that's impossible to measure exactly. However, if you are not hiring fast enough, then maybe you have a problem with your screening process. At this point you could do an experiment and relax the screening process for half of the candidates and see what happens. But it could be just a sourcing problem, that is, you are not getting good candidates in the pipeline to begin with. It's very hard to tell whether you are being too harsh and not believing enough in people's abilities (or not willing to develop talent), or you are just not attractive to the kind of people that you want to hire.

Of course, all of the above is from the employer's point of view. If you are also trying to provide job seekers with a good service, then you can devise other metrics for success and for efficiency from their perspective.
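
To make those funnel metrics concrete, here is a tiny sketch (made-up numbers, purely illustrative) of how one might track the first two:

    # Illustrative hiring-funnel metrics (made-up numbers)
    funnel = {"screened": 200, "onsite": 40, "offers": 12, "hires": 8}
    open_positions = 10

    fill_rate = funnel["hires"] / open_positions              # how many positions were filled
    onsite_offer_rate = funnel["offers"] / funnel["onsite"]   # low value => many false positives

    print(f"positions filled: {fill_rate:.0%}")
    print(f"onsite -> offer rate: {onsite_offer_rate:.0%}")
    # False negatives (good candidates rejected early) can't be read off this funnel at all;
    # as noted above, you'd have to relax the screen for a subset of candidates and compare.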

(edited for formatting)

๐Ÿ‘คblue11๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Maybe I glanced over it but who did the candidates interview for? Were those just simulated interviews or are they working with companies that actually hire people? Or are they hiring for themselves? I'm very confused by what "success" means. The last paragraph indicates that it wasn't actually used for a hiring process? I see a commenter on their site has the exact same reaction.

Either way, I applaud every effort to improve the hiring process. However, I'm a tad bit skeptical. They should release the dataset (unless I missed it), because it's pretty convenient that the results seem to indicate that hiring can be improved by a quiz which they could build and sell.

I'd be interested in the following screening filter: Have a programmer at the company read through the projects the candidates supplied (as a replacement to "read CV") and then come to a conclusion of yes/no. No projects = no job offer by default. You can always think of a different approach for people with no projects if you feel like you should hire from that group. Possibly have multiple programmers read the code and discuss it.

๐Ÿ‘คkriro๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Most candidates can speak well about interesting projects they'd worked on in the past, but a significant percentage of those can't pass a coding test.

Also, not being able to speak well about a past project is highly correlated with doing well on a coding test.

Sounds like one or the other should be thrown out. (Or maybe only the small percentage who do well on both will go on to do well on the job?)

๐Ÿ‘คlackbeard๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Where I work, we ask candidates to do a take-home task after they've passed the first in-person interview. The take-home task is trivial, something most developers with a few years experience should be able to knock out in less than an hour. All components of it are easily googleable.

The advantage is that we're building a corpus of solutions to the same problem that we can compare against each other, which is interesting. More importantly, we're building a corpus of solutions that we can then pick from to have the candidate analyze in-person, and talk us through what they see, what they'd do differently, what they like/don't like, etc.

In short, we familiarize them with the problem via their own answer, and then ask them to analyze someone else's (anonymized) answer. Our sample set so far is too small to draw definitive conclusions from, but it feels better than our old ways of doing things.

๐Ÿ‘คsenorprogrammer๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

This is a noble project but I'm concerned that it may not be a methodologically valid blind study.

Specifically, in order for this to be a blind study of the relationship between the screening exam and technical interview performance, the technical interviewers should not know the results of the screening exam before they make their decision. While they do not state this clearly, it seems possible that, since the same 2 people were conducting all the steps themselves, they were not properly blinded.

Thus we cannot rule out confirmation bias in the interviewers themselves, i.e. that they were impressed by good performance on the programming quiz, not that it was an independent predictor of good performance in the technical interview.

Now, maybe one person did the screening and the other did the technical interview with no information sharing in every case, but this would need to be clarified.

๐Ÿ‘คabalone๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Many fields require serious professional certification. You can't become a doctor unless you go through a board certification that includes simulated patient interaction. Likewise, you can't build a bridge or a dam without engineering certification.

IMO, this article demonstrates the need to certify software engineers, using a process similar to the interviewing process described. Then, when hiring, we could skip most of the "do you know how to code" and get down to cultural fit and mutual interest.

๐Ÿ‘คgwbas1c๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

This is interesting, but are they using "performance on a sample coding task" as the truth? If we knew that was a good proxy for being a good hire, we'd already be done.
๐Ÿ‘คkenferry๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I've had mixed reactions to these take-home exercises. I've done them some 4 times, and 3 of them resulted in getting called in for an onsite (Amazon). If the problem is interesting and there's something to learn, I tend to go for it. But a couple of them were lame: neither a good use of my time, nor indicative of my skills.

In the near future, I'll be requesting exercises that can be treated and used as a micro-library. They are free to use it, and I get to post it on github.

๐Ÿ‘คdj_doh๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

> The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results.

Yes, this is when you have something real.

๐Ÿ‘คavodonosov๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

"suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks."

Unsurprising. Turns out talk really is cheap and doesn't indicate one can do. I've even seen people able to maintain jobs over a period of years by talk alone without ever really doing much. Or even being able to do much.

๐Ÿ‘คjqm๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

>Suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks.

I feel like this should be listed among the least surprising things in the world. Being a good programmer is about MAKING that hard project look easy, by approaching it in the correct way!

๐Ÿ‘คxamuel๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

There are different kinds of jobs and different kinds of work environments. Grouping all of the situations together and trying to find just a single way of interviewing people flies in face of diversity in human nature. While this research is great and startups in this field are doing well, I see this huge gap that no one so far is trying to address.
๐Ÿ‘คfillskills๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I'd really like to see how references correlate with the other factors. Did they not call references?
๐Ÿ‘คccleve๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I should apologize to Triplebyte in the name of those of us, including me, who immediately created fake profiles and took the test just to see if we would do well against this new rubric.
๐Ÿ‘คBrandonY๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

OPEN LETTER ================

Hi, I read through the conclusions you made, and I felt as if you defined a process for hiring machines to code rather than humans. So I took a few moments to read your manifesto (https://triplebyte.com/manifesto) (the premise on which your entire conclusion is built), and here is my take on it.

1. "Whiteboard coding and algorithm questions aren't good predictors of how effective someone will be at writing real code." Whiteboard coding shows how someone really thinks. It illustrates the person's thought process, and that helps the interviewer judge their rational thinking and logical approach. Algorithms add to this by illustrating problem-solving ability. A person may not actually be able to solve an algorithm question, but the attempt on a whiteboard says more than an implementation on an online platform.

2. "Candidates deserve a consistent experience and consistent evaluation." The entire USP of an interview is its diversity, which allows the interviewer to judge whether someone is able to adapt to new situations and come out of their comfort zone. What you are suggesting is to change the interview process into a GRE-style exam, which will in turn develop a culture among developers of preparing for that exam for 2 years.

3. "Hiring decisions should be made using a clear scoring system, not gut feelings." Most companies have a 3- or 4-round interview process, which is usually enough to remove the gut-feeling factor. If you want to argue that a candidate might get selected based on the gut feeling of all 4 interviewers, then my counter-argument is that he is worth selecting if he could generate that gut feeling in so many people.

4. "The hiring process should be focused on discovering strengths, not uncovering weaknesses." Agreed on this point. However, the irony is that you are trying to define one particular process for hiring. I wonder if it could actually perform the "discovery" part.

5. "Candidates should be told exactly what to expect in an interview, and be allowed to prepare in advance." So basically you want to hire the guy who has studied the most over the smartest guy in the room. From my experience, I can safely say that if companies like Google and Fb followed that practice, I wouldn't even be writing their names here.

6. "Truthful feedback on how they did so they know how they can improve." Agreed. Something that should be adopted by all companies in their recruiting process.

7. "Good programmers come from all types of background." You reinforce my point with this statement. Good programmers need not just be people who can quickly write a hash-map-based search over a large set of data; they can also be people with brilliant problem-solving ability who are slow at turning it into code, or people who are great at thinking about software design and scalability but can't remember code syntax so well. A company needs a good blend of all these people. Only then is a good ecosystem for growth created, rather than just a team of 10 machines who can transform pseudocode into Java in 10 minutes.

8. "The software industry needs to experiment more with hiring processes and figure out what really works." I think many are already doing that through tech hackathons, online challenges, weekend projects, open-source contribution references, etc. So this is not something new that you figured out.

9. "Candidates are at a fundamental disadvantage in salary and equity negotiations." Not sure what kind of companies you have surveyed. I think most well-known companies maintain clear standards for salary and compensation plans. Though people will surely be flattered reading this. :)

10. "Companies should not have to make recruiting a core competency." Now you are just trying to open up the market for yourself, by yourself. No comments. :P

Would love to hear your counter arguments. Mail me. :-)

๐Ÿ‘คapexkid๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Great. So where can I do this quiz and these small programming tests? :)
๐Ÿ‘คestomagordo๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Gosh I hope they are controlling for time-of-day, day-of-week, and weather considering how closely packed together those 300 interviews were.
๐Ÿ‘คhaversine๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Without any demographic data about who the interviewees were, this is basically useless. Are they all under 30, white and Asian men? Where?
๐Ÿ‘คmichaelvkpdx๐Ÿ•‘10y๐Ÿ”ผ0๐Ÿ—จ๏ธ0
