(Replying to PARENT post)
Every task requires some minimal IQ, and some tasks require more than others. A programmer is someone who can at least do the task with the lowest IQ requirement; he will fail at some of the more difficult ones.
IQ here is just a stand-in for mental compute capacity.
(Replying to PARENT post)
(Replying to PARENT post)
And after an (inordinately long) time reading, the light-bulb moment happened and I added a single line of code.
In my view, when you are in the business of making Seven-League Boots, you don't need to sprint.
Yes, we want to deliver products quickly, but the link between good products, an effective business, and the speed of code writing is tenuous at best. Take your time, line up your shots, and be sure you are a value multiplier. (That's the real 10x programmer: 10x as valuable. That might mean using good SEO techniques to get paying customers, while using boring old SQL back ends.)
(Link to the years old article I have not written yet, comments welcome http://www.mikadosoftware.com/articles/slowcodemovement)
(Replying to PARENT post)
(Replying to PARENT post)
The main findings from this investigation of the dataset variance.data can be summarized as follows:
The interpersonal variability in working time is rather different for different types of tasks.
More robust than comparing the slowest to the fastest individual is a comparison of, for example, the slowest to the fastest quarter (precisely: the medians of the quarters) of the subjects, called SF.
The ratio of the slowest versus the fastest quarter is rarely larger than 4:1, even for task types with high variability. Typical ratios are in the range 2:1 to 3:1. The data from the Grant/Sackman experiment (with values up to 8:1) is rather unusual in comparison.
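A minimal sketch of the SF statistic just described (the function name and the sample working times are made up for illustration, not taken from variance.data):

```python
import statistics

def sf_ratio(times):
    """SF: ratio of the medians of the slowest and fastest quarters.

    An illustrative reconstruction of the statistic described above,
    not the paper's actual code.
    """
    ordered = sorted(times)
    q = max(1, len(ordered) // 4)      # size of one quarter
    fastest_quarter = ordered[:q]
    slowest_quarter = ordered[-q:]
    return statistics.median(slowest_quarter) / statistics.median(fastest_quarter)

# Hypothetical working times (minutes) for 12 subjects:
times = [20, 25, 30, 35, 40, 45, 50, 55, 60, 70, 80, 90]
print(sf_ratio(times))  # 3.2 -> within the typical 2:1 to 4:1 range
```

Note the contrast with comparing extremes: here the slowest individual is 90/20 = 4.5x the fastest, while the quarter-median ratio is only 3.2, which is why SF is the more robust measure.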
Caveat: Maybe most experiments represented in variance.data underestimate the realistic interpersonal variability somewhat, because in practical contexts the population of software engineering staff will often be more inhomogeneous than the populations (typically CS students) used in most experiments.
Still, only little is known about the shape of working-time distributions. However, variance.data exhibits a clear trend towards positive skewness for task types with large variability.
The effect size (relative difference of the work-time group means) is very different from one experiment to the next. The median is about 14%.
The oft-cited ratio of 28:1 for slowest to fastest work time in the Grant/Sackman experiment is plain wrong. The correct value is 14:1.
(Replying to PARENT post)
(Replying to PARENT post)
To «prove» the existence of a 10x programmer you would need a bimodal distribution over the percentiles of workers versus speed.
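One reason bimodality, and not just a big spread, is what's needed: a single positively skewed, unimodal distribution of working times already produces large slowest-to-fastest ratios. A sketch with made-up lognormal parameters:

```python
import random

random.seed(42)

# One unimodal, positively skewed population of working times (minutes).
# There is no built-in "ordinary vs. 10x" split here.
times = sorted(random.lognormvariate(4.0, 0.5) for _ in range(100))

q = len(times) // 4
extremes = times[-1] / times[0]                          # slowest vs fastest individual
quarters = (sum(times[-q:]) / q) / (sum(times[:q]) / q)  # slowest vs fastest quarter (means)

print(f"slowest/fastest individual: {extremes:.1f}")
print(f"slowest/fastest quarter:    {quarters:.1f}")
# The individual ratio can look "10x-ish" even though the percentile
# plot of such a distribution is smooth, with no second mode.
```

So a large extreme ratio is compatible with one skewed population; only a second mode in the worker/speed percentile plot would support a distinct "10x" subpopulation.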
The Grant/Sackman experiment, Peopleware, and The Mythical Man-Month all try to answer a tricky question: what makes someone creatively productive?
People focus on speed. But they are forgetting the most important part of the experiment.
One of the most important findings of the G/S experiment, which everybody forgets, is the lack of correlation between performance and
1) diploma
2) experience after 2 years of practice.
Having done more than one job, in other fields of work that are also creativity-based, my «feeling» is that this holds not only for coders but also for musicians, intellectual professions, journalists, project managers...
What are the implications of this lack of relation between performance and diploma/experience?
1) Diplomas are overpriced; the job market is artificially skewed in favor of those who have the money for one;
2) New devs are underpaid, old devs overpaid.
The burden of proof that a diploma or experience is relevant for a job should be in the hands of those selling the diplomas. Diplomas, especially in computer science, seem to be a SCAM.
The effects of this scam are:
1) young workers enslaved by loans, in jobs they may not be good at or may not like;
2) a rigid job market that prevents people from moving, hence artificially creating obstacles to full employment;
3) artificially exacerbated competition, resulting in cheating from both sides.
(Replying to PARENT post)
(Replying to PARENT post)
> [] Three of the twelve subjects did not use the recommended high-level language JTS for solving the task, but rather programmed in assembly language instead. Two of these three in fact required the longest working times of all subjects. One might argue that the decision for using assembly is part of the individual differences, but presumably most programmers would not agree that doing the program in assembly is the same task as doing it in a high-level language.
In my experience, making the right decisions like that is the real difference between good and not-so-good programmers. Good programmers on average make better choices, which result in less code, code that is easier to maintain and reason about, and a language and architecture that fit the problem at hand. It is usually not that good programmers write code so much faster.
(Replying to PARENT post)
(Replying to PARENT post)
Finding metrics which work well even when people try to game them is incredibly difficult (if not impossible).
(Replying to PARENT post)
Inefficient and complicated solutions build up, and the mediocre developer ends up fixing old problems.
(Slowness is just one indication of mediocrity.)
Unfortunately, the long-term effects are not visible until after a long time (duh), hiding the individual differences.
(Replying to PARENT post)
It's MUCH better to have an engineer who takes 3 days to implement a feature in such a way that it doesn't need to be revised or fixed again for at least a year than to have an engineer who takes 4 hours to build that same feature in such a way that it has to be revised 10 times by 5 different engineers over the course of the year.
The second approach actually consumes much more time in the medium and long term. By putting too much pressure on engineers to implement features quickly, you encourage them to create technical debt which another engineer will have to deal with later.
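The medium-term arithmetic behind this, using the scenario's numbers plus one added assumption (that each revision costs roughly half an engineer-day, a hypothetical figure):

```python
# Careful engineer: one 3-day implementation, stable for a year.
careful_total = 3.0                 # engineer-days

# Fast engineer: a 4-hour build, then 10 revisions by other engineers.
# The half-day-per-revision cost is assumed for illustration; it also
# ignores the context-switching cost of 5 engineers relearning the code.
fast_build = 4 / 8                  # 0.5 engineer-days
revision_cost = 10 * 0.5            # 5.0 engineer-days
fast_total = fast_build + revision_cost

print(careful_total, fast_total)    # 3.0 5.5 -> "fast" costs ~80% more over the year
```

Even with this conservative revision cost, the "fast" path consumes more total engineering time, and the overrun grows with every further assumption you add back in.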
It's basically a blame-shifting strategy.