Confidence Through Feedback, or Why Imposter Syndrome is the Wrong Metaphor

Imposter syndrome is often presented as a personal failing.  A lack of confidence, our wrong-headed beliefs not matching the reality of how competent we are, or, worst of all, a flaw of our gender.  Just tell yourself you are wrong!  Imagine everyone else is just like you!  Have confidence in all parts of yourself except the part that tells you not to have confidence!

Unsurprisingly, these interventions are not often effective.  At best, they change behavior, frequently while making people who already feel bad about themselves feel worse.  At worst, they lead people to stop trying to improve the environment they have found themselves in.

I would like to offer an alternative story: imposter syndrome is a rational response to insufficient feedback. 

Until I joined TripAdvisor I had no real way of knowing whether I was a good programmer.  I knew that often our customers were happy, but that was often three to six months after I wrote the code.  I knew that sometimes I couldn’t make something work, and I assumed that was my fault whether it was or not.  I sometimes knew I was grateful to be working in my own code months later and sometimes I wasn’t, but again that was far from where I had made those decisions and it was only my own opinion.  I was anxiously waiting to be discovered and dismissed.  I covered with appeals to good engineering principles I had heard about, largely ignored by the people around me.

Without feedback, there are three options: I can believe, without evidence, that I am an awesome programmer.  I can believe, without evidence, that I am a terrible programmer and quit to go do something else.  Or finally, I can believe, without evidence, that I am a terrible programmer somehow successfully pretending to be an awesome programmer.

The problem with believing, baselessly, in one’s own competence is that eventually we all reach the limits of our capabilities.  Suddenly confronted with evidence of their lack of perfection, overconfident people become defensive and blame anything other than their own failings for the problems, making them impossible to fix.  Any evidence of failure becomes personally devastating.  It is hard to protect oneself from any possible feedback.  Among programmers, I often see such people retreat into believing that they belong to this nebulous category known as “smart” and that “smart” is always enough to do anything.  These people appear deeply confident, but their confidence is a protective illusion, and the cost is borne by the people around them who have to protect their fragility.

The people who take the second choice and drop out simply disappear.  They never have a chance to find the feedback they were missing, but they also don’t have to confront their personal insufficiencies.  Personally, I certainly wasn’t going to drop out.  Until I became a programmer I was making $27,000 a year as a hospital manager.  I might not be good, I figured, but I was good enough to bring home my paycheck.

The genius of imposter syndrome is that it is adaptive and self-perpetuating.  We don't have to disregard when we fall short, for such failures fit our internal narrative.  Failure is a terrifying chance to be found out, but it is not an existential challenge.  When we succeed, we can believe it is part of our act.  Look how well I have fooled everyone by doing work they think is good!  We often work hard to cover for our obvious faults, and we plan for the human failures that inevitably occur; both habits can produce decent results.

However, these successful results come at significant cost.  We have trouble accepting real feedback, since any feedback is based on our facade and we “know” better.  Imposter syndrome also brings with it anxiety and shame, preventing us from feeling the thrill of accomplishment when we do succeed.  It robs us of the joy we earn.

When I joined TripAdvisor I found that all code was reviewed.  On my team we required three ShipIts to commit code, so at least three people had to read any code I wrote.  It was absolutely terrifying to post my first code review.  My first real project took a month, and I was panicking over my failure to deliver in the estimated time.  I was working in all brand-new languages, in a brand-new domain: I didn’t ask questions because I didn’t even know what questions to ask.  Instead I sweated and tried things out and Googled madly.  At last the day came when I couldn’t put it off anymore and I posted my review.

Naturally, it being the first thing I had done as a web developer beyond bug fixes, I got back pages of comments.  And then… the world didn’t end.  I addressed the comments.  I got ShipIts.  The project worked.  I moved on to the next thing.  Next time, I had learned the coding conventions and how scope worked in JavaScript.  The time after that I finally understood closures.  On and on, my stupidity was laid bare for the world to see and it turned out it didn’t matter.  No one cared, and when I took the feedback on board for next time they thought it was awesome.  It wasn’t that we were all covering for how bad we were at our job: it was that we were all plenty competent and becoming more so.  The flaws we had weren't disastrous because they were accounted for by the system.

Eventually I began learning to teach, to provide effective feedback and create a shared vision of what “good” code looked like.  I built up the people around me, as they built me up.

After a few years of that, I realized I no longer imagined I was bad at my job.  With no place to hide, I would have been found out years ago.  Also, by this point I was significantly more effective, and the people around me were too.  None of us were some binary “good coder” versus “bad coder”: we were all coders, with things we did well and places we struggled and areas where we were learning.  I realized that I was a "good coder" not because my code was always right or always worked straight away, but because I had processes I followed to ensure that my code became right and worked before being released, and because I clearly communicated my intentions to my collaborators.  I started to look for other feedback mechanisms: aesthetics and unit tests and pair programming for code, shared values and professional belonging for our team, manual testing and user experience feedback and analytics for our products.

Once I got to this state, it no longer mattered whether I was “good” or not.  I became part of an effective team, and together we made useful things that satisfied our users' needs.  They were easy to experiment with, extend and maintain.  Anyone who looked at that and thought what mattered was whether we were “smart” seemed to me to be asking the wrong question.

The cost of imposter syndrome is shutting ourselves away from the very thing that could cure it.  It becomes a trap.  It prevents us from improving where we could improve.  It keeps us from connecting with our collaborators, with whom we can accomplish greater things than even the best programmer could do alone.  These stories we tell ourselves to get by strangle our true voices, and in silence there is only fear.

I have at times offered the advice of “Be Brave”, but I now believe that is not quite right.  I try instead to be curious when confronted with fear.  When I find that I am afraid, I poke at it, prodding for information, finding what feedback I am missing.  When I am done I can still choose to do the things I was afraid of, only more safely because I know more.  Sometimes what I learn is that this is not something to do, and I set it aside.  Fear points the way to all the most interesting things, and when we are safe and secure we can use it as a tool.

I believe that the reason so many underrepresented programmers feel imposter syndrome is two-fold.  First, we seldom have the option of being baselessly overconfident while those around us protect us from the truth lest we throw a tantrum, so in the absence of feedback we can choose between imposter syndrome or quitting.  Second, we have been deprived of some of the pathways others can use to get easy feedback and reassurance.  We cannot look around and be automatically reassured of our belonging.  We get feedback that is extraneous to our actual job while too often lacking the casual feedback among peers about our work.  We can't simply follow the cultural conventions we already know and be accepted.  When we are busy responding to feedback that is about our very selves, we cannot build the professional confidence that protects us when we seek out feedback that lets us improve the craft.  When we are rendered invisible and ignored, we have stolen from us the opportunity to build true confidence.  This does not imply that this is a problem exclusive to those who are underrepresented, nor that all underrepresented developers feel this way.  It is simply understandable that this adaptation would be more common.  Until we can offer everyone who wants to program inclusive and effective forms of unavoidable feedback, there will continue to be disparities in experiences based on how people are regarded.

There is a way through imposter syndrome and out the other side, into confidence based in knowledge and not delusion.  I have personally found this a more satisfying approach than merely telling myself other people have felt the same.  I no longer fear being discovered to be a bad programmer.  I am no perfect coding machine, but no human will ever be so.  I strive to know my weaknesses, and find ways they can be valuable.  I seek or build contexts where I can be most effective and have the most impact.  The most important piece of that is getting feedback on whether what I am doing is the right thing to do, early and often and unavoidably.  There is always more work to do, and we can never be “good” enough to do it all. 

By accepting feedback we can all be constructive, and that is sufficient.

Email Template For Addressing Conference Gender Diversity

Someone I worked with had asked for recommendations when I noted the speaking lineup of a conference he was attending was exclusively men, and I figured I'd share the letter I came up with in case it is useful to others:

While attending $CONFERENCE_NAME last weekend, I was disappointed to notice the oversights in your speaker lineup that led to it being made up of nothing but men.  Perhaps your prioritization of people with their own books to sell led to inadvertent systematic discrimination, as you were reliant on the discriminatory publishing world and, more generally, on people without significant non-work-related demands on their time (who are most likely to be either single men or men in non-egalitarian marriages).  {Depending on your impression of the conference itself, something like: "Since I was also disappointed in how much of the conference devolved into the speakers plugging their own books, I am confident you could kill two birds with one stone by instead seeking out the most qualified speakers.” could fit here too.}

I wanted to convey that when trying to build a group from $COMPANY to attend this weekend, a woman who is normally excited to attend local conferences had no interest at all.  Without any women speaking, a code of conduct, or even the barest token of effort towards diversity, there was no evidence that there would be other women there or that the men involved see women as peers.  She expected that the weekend would, at best, be full of being interrupted so men could explain things she already understood, dudes hitting on her, men quizzing her about the alien experience of being "one of those", people assuming she was part of the conference organizing staff or from recruiting or some attendee’s wife, or people simply ignoring her altogether.  She also assumed that any complaints would be brushed off as disruptive to the existing exclusionary atmosphere that it appears the organizers have cultivated.

Going forward, if I see another lineup of all men speaking in Boston, I will have to assume she’s right and you are actively working to run a conference that alienates women.  Since I’m not interested in that environment, this may be my last $CONFERENCE_NAME conference; I could instead have attended $OTHER_CONFERENCE_THAT_MADE_AN_EFFORT the week before, where both I and the women I know would have felt more welcome.

Now, the reasoning behind this approach.  I like trying to turn it into a contest between conferences, since the only eventual pressure to change will come through economic pressure.  It also circumvents the argument that it's not possible or there are no qualified women, without ever having to point out just how incredibly insulting that argument is.  Other conferences have worked hard to change the makeup of their conferences: accepting speakers through blind proposals (rather than just inviting people they know of or their currently-non-diverse attendees recommend), advertising a code of conduct widely and enforcing it when it comes up, creating scholarships for women who want to attend but whose companies won’t support it, and seeking out and addressing feedback from women speaking and attending.  It’s not like this stuff is easy; it’s just possible.

I did have one more recommendation for the guy I was talking with: 

If you want to be helpful while you are there, be your usual polite and outgoing and aware self and discuss the technical work of any women you do meet, especially listening to their ideas and learning about the work they are doing.  A good interaction or two can brighten up even the most awkward conference.

The Dyslexic Programmer

I am dyslexic, and these are my experiences.  They certainly won’t be universal, especially as there isn’t just one form of dyslexia [0].  To identify my strain of dyslexia, I read quite quickly (though only somewhat accurately) through pure pattern-recognition. I can look at, say, "word" and identify that the second letter is an ‘o’, but if I want to understand it as a concept I ignore the letters involved all together [1].  Essentially, I’ve memorized how each word in the English language looks as a complete entity. (I also, thankfully, have an excellent conceptual memory.) This approach, of knowing something is made up of individual parts but not needing to worry about what specifically those parts are unless absolutely necessary, extends to how I approach math, history, social sciences and fantasy world-building as well. 

I believe that this tendency to generalization is why I am able to jump between levels of abstraction quite easily.  The concept of emergence, and the specific cases of recursion and polymorphism, are obvious to me.  Everything in the universe is made up of component parts, interacting in ways that give rise to the meta-phenomenon we observe, like “matter” and “consciousness”, and I can keep that in mind without worrying particularly about what those components are.  It is odd to me when people consider things to be discrete, isolated wholes; it can be useful to talk about them that way, but I usually don’t actually believe it.

Programming at any scale beyond scripting involves building systems of components that interact to produce the desired outcome, so this conceptualization is handy.  Even more applicably, debugging issues is the skill of diving into only those component parts that might possibly be causing the observed undesirable behavior.  I’ll see a problem and know that somewhere there is an errant IF-statement, and can accurately guess the path or two it might be down.  (The exceptions are when people break my assumptions, like yesterday’s bug caused by an untyped, string-identified property in the global context, but since people shouldn’t do those things anyway it’s usually not a problem.)

So what are the problems?

My dyslexia means that the most important thing for me about a language is the tool support, which often rules out new, hip languages.  It took me a while to figure out that my dyslexia was the reason I and the command-line-centric programmers would never agree.  I've faced prejudice against non-text-editor programmers, but often only until the first time they watch me debug something in my head.  We all have our strengths ;-)

IDEs mean I don’t have to worry about misspellings or obvious syntactic errors and syntax-highlighting is invaluable.  It’s even better when books use it too: the easiest textbooks for me to read were the computer science books with syntax-highlighting.  Unfortunately, this was not most of them.  Code can also often be represented visually: I highly recommend the HeadFirst series in particular [2].  This means that learning independently from books isn’t based on the ability to parse large blocks of prose [3], like it is in many other fields.  For obvious reasons, I strongly prefer the object-oriented paradigm for large programs, though I've also gotten into aspect-oriented JavaScript, supported by Chrome's developer tools.

The hardest thing has been learning to communicate my understandings to other people.  Things that work for me, like sitting down and reading a book of design patterns from front to back, simply don’t have the same effect for others.  Most people remember less of the code base than I do, and lack my multi-dimensional mental model of our class hierarchy.  My visual diagrams frequently contain information encoded in ways that are easy for me to understand and far less useful to others.  It is hard to explain my intuitive, aesthetic sense of good code, such as when polymorphism would simplify a method.  The heuristics I can substitute are incomplete, flawed reflections of the generative principles that motivate them and people who like rules tend to reject them when they can think of counter-examples.  I rely on metaphor and examples a lot, because they have proven more effective at translating my thoughts into something other people can understand.

On the other hand, because I could never take communication for granted I believe I have become better at code switching and figuring out why I am failing to communicate.  When feedback is available, I can debug conversations by spotting mis-matched assumptions or misunderstood statements and correcting them.  This works less well in writing, obviously [4].  I have also found that programming talks lend themselves exceptionally well to visual communicators.  I enjoy crafting a talk that grows a concept and bringing an audience along through the story.  This has led me to interesting conferences, where I can refine my ideas and meet all sorts of people.  Software development is an exceptionally social discipline and programmers have more resources to gather together and share our craft.

And the advantages?

Obviously, it is entirely possible to be a good programmer and be dyslexic. I would go further, though: I believe that in some ways dyslexia makes me a better programmer.

The greatest strength of the dyslexic programmer is that if it is possible for syntax to be confusing, I will almost certainly confuse it.  This is useful because so would almost everyone else, eventually.  Most people have used > when they meant < at least once.  Language constructs like meaningful whitespace or a lack of parentheses to delineate scope have caused major bugs, including the recent SSL vulnerability in OSX; they also make it almost impossible for me to comprehend code.  I can never ever rely on numbers (they all mostly look the same), so I use readable constants instead.  When I write Java Comparators, for example, I’ll have them return FIRST_GREATER, BOTH_EQUAL or SECOND_GREATER.  Ordering is nearly useless and flag arguments incomprehensible.  If I call the method twice it’s likely going to be faster to create a new object than do the extra work to ensure two arguments of the same type aren't flipped.
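As a sketch of what I mean (the class and constant names here are my own invention, not from any particular codebase), a length comparator written with named constants might look like:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ReadableComparator {
    // Named constants instead of bare 1 / 0 / -1, so the intent
    // survives a skim even when the digits themselves don't.
    static final int FIRST_GREATER = 1;
    static final int BOTH_EQUAL = 0;
    static final int SECOND_GREATER = -1;

    // Compares strings by length, returning the named constants.
    static final Comparator<String> BY_LENGTH = (first, second) -> {
        if (first.length() > second.length()) {
            return FIRST_GREATER;
        } else if (first.length() < second.length()) {
            return SECOND_GREATER;
        }
        return BOTH_EQUAL;
    };

    public static void main(String[] args) {
        String[] words = {"pear", "fig", "banana"};
        Arrays.sort(words, BY_LENGTH);
        // Sorted shortest to longest: [fig, pear, banana]
        System.out.println(Arrays.toString(words));
    }
}
```

To my eyes, `FIRST_GREATER` is unmistakable in a way a bare `1` never could be, and the compiler treats them identically.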

It's really about incentives: I’m less likely to tolerate ambiguities because I will mess them up constantly, instead of once in a blue moon when it may really matter and is less likely to be caught.  I support coding styles, static analysis and language choices that don’t allow anyone to make careless mistakes, reducing overall bugs [5].  They may catch errors in some near-perfect programmer’s code once every 100,000 lines instead of once every 1,000, but those are still bugs that have been prevented and were completely unnecessary.

I am also forced to design.  I don’t get to write 500 line methods, because I get lost the first time a method swaps abstraction levels.  I think secretly many people do: they just have a higher tolerance, whereas I’ll get knocked off-task and forget what I was trying to do in the first place until I extract a method or object.  In order to write the fast, hacky approach I will often get halfway to a well-designed solution just in figuring out the problem: I may then go with the hacky quick-fix, but I’ll also remember how the code should look the next time I come back.

What has programming done for me?

Programming has been great for me, and would have been useful even if I didn’t pursue a career in software development.

Thanks to our tools, programming is a training ground with instant feedback.  The easiest example isn’t programming-specific: spellcheck taught me to spell.  First, it means as I write I don’t worry about whether I’m spelling something creatively.  I can always fix it later, which keeps my cognitive load down and lets me focus on getting things done.  Second, when I misspell a word it is immediately underlined in red.  Over time I’ve even learned the pattern of keystrokes that keep the little red squiggle from appearing, much the same way I know Kilik's leg sweep in Soulcalibur.  Now I can type many words correctly most of the time, without even once having to think about letters at all.

Similar dynamics work with programming languages in a good IDE.  I am told instantly if I left off a semicolon or mismatched brackets or misspelled a function name.  All I need to have at my fingertips are the concepts, metaphors and design: the fiddly symbols come from my tools.  Even without an IDE, compilers and unit tests both helpfully tell me when I’m wrong.  The more often I can make mistakes the more likely I am to learn not to.  It is okay to be wrong about the things I am often wrong about when I program, because I have automated mitigation strategies.

Finally, programming provides positive, incremental feedback for writing and engaging with creative language.  This was especially useful when I was starting out and is why I think programming may be great for dyslexic kids.  “I want to make this box red” is a clear, direct, actionable goal in a way that “write in your journal for five minutes” isn’t.  It also didn’t matter if it took me five tries to get it right, because each time I would get useful responses that got me closer to success.  Each time I wrote something that didn’t work, I was still writing.  Compare this to composing a paper, where there is no feedback at all and editing cycles tend to be much longer.  Even with an outline, if a sentence didn’t make sense I’d just get lost in the paragraph.  In a program, you can always hit “run” and see what happens, or write it the obviously wrong way and improve from there.

Conclusion

I will probably never be as fast a programmer as I would be if I were not dyslexic, but the other great thing about programming is that a wide variety of coders are plenty good enough to be employable and speed is only one small component of our productivity.  I have been able to build on my strengths and mitigate my weaknesses to develop a successful and rewarding career.  When I tell people what I build, I get to hear “I love that site!”

And the greatest challenge?  The biggest issues [6] all involve dealing with arrogant procedurally-minded programmers who continually over-generalize their experience [7].  Also, a word to the wise: you really never want me setting up the build system.

How is Gender Studies hard?

Someone somewhere, of course, was mocking Gender Studies majors for taking an easy course because they couldn't hack liberal arts.  I made the point that the Gender Studies classes I took were significantly more difficult than my computer science classes.  Someone then asked me why I thought that was, and I came up with an answer:

In Brief: Evolution

Evolution isn't about "good" or "bad".  It is simply a word for a specific emergent process. It describes all the things that happened that led to the current state of affairs.  Sometimes they happened for reasons, under specific and identifiable pressures, but other times just by accident. It gets way over-simplified, especially by people looking for answers, since evolution is bad at providing answers, or reasons, because it's a description of an emergent system and not a driving force.

The original building blocks of evolution were these two observations:
1. Does a trait make it likely you will die early or otherwise fail to reproduce when someone else will, given current evolutionary pressures? Then the trait is likely to be expressed in very few members of a species.

2. Does it directly lead you to have more kids, given the current environmental pressures? Then the trait is likely to spread, being expressed in a larger percentage of the following generation of the population. "Being able to digest milk when food is scarce" is a good and recent example. Note that even this doesn't imply a value judgement unless you think humans' value is based on reproduction (which some evolutionists do, because they're wrong). People who can't digest milk aren't defective. Indeed, environmental pressures can change, and which traits are adaptive will change with them: now that we have better nutrition, being able to process lactose may no longer be an evolutionary advantage.

It turns out that in addition to the two more obvious dynamics, there are a bunch of other cases too:

3. Is a trait situationally useful, sometimes helpful and sometimes not?  It's likely to show up in some of the population, but not most. (There is an interesting cluster of traits that occur with 8-15% prevalence in humans, including male pattern baldness and ADHD.)  This is similar to a mixed equilibrium in game theory.

4. Does the trait allow for on-the-fly adaptation?  As programmers, we know how powerful reconfiguration can be.  The human brain, for example, is highly plastic and can adapt to changing circumstances, and our muscles grow better at performing exactly the tasks we perform with them.  Specialization is "expensive", in that it leaves the organism vulnerable to changes in the environment; allowing for cultural, technological or physical adaptation during a lifetime is an easier way to get a similar effect.

5. Is it a trait that was once useful, or is useful for some people even if not for you, and is not actively harmful? It may stick around! Dimorphism is complicated to evolve and thus usually only occurs under pressure.  This is why women have a prostate and men have nipples. Once something has evolved, it takes pressure to make it go away entirely, which is why we go through a phase in utero when we develop proto-gills.

6. Is it fun/attractive/entertaining/not actively annoying? Then it may not contribute to inherent fitness, but it is likely to be selected for anyway, because evolution isn't a passive thing done to us. It is a dialectic process: the process shaped us, and we get to shape the process. Cultural tastes or norms can lead to evolutionary pressure just as surely as any other environmental factor (which is how the Hapsburgs lasted as long as they did: cultural power was more influential than any pressures against genetic disorders.)  This is similar to mechanism design in game theory: if we don't like the outcome of the game, change the game.

7. Is it genetically linked with something that is subject to any of the other positive dynamics?  Even if a trait itself is not useful or desirable or advantageous, it may share a common cause with something that is.  

8. Finally, does a trait have no reliable impact on reproductive success? Then it might happen anyway! This is called "genetic drift". Sometimes the answer to "why?" is "eh, why not?"

Assuming that something is one of the first two may seem really cool when trying to impress your friends and intimidate your enemies, but always remember: a trait might just not be bad enough to be worth getting rid of.

Reading List Referenced at Usenix Talk

My Usenix talk this year uses various books I've drawn on for inspiration as backgrounds for my slides.  The goal of this was to share some of the broader world beyond what we usually look to as computer scientists.  Some of these books are accessible, while others are extremely dense.  I recommend picking things up and putting them down if they don't speak to you.  It's all about what is useful, helpful and challenging to you wherever you are right now.

When Buying Computer Components

I'm doing a workshop on putting together computers in two hours, and rather than do handouts I figured I'd toss the links up on my blog. When buying components for a computer, I usually read:

Ars Technica

If you are buying the components for a machine, they do system guides that are a useful report on the state of the art each December-ish.  The one for 2012 is available here:

http://arstechnica.com/gadgets/2012/12/ars-technica-system-guide-december-2012/


Video Card Benchmarks

They have performance statistics and a useful ranking of video cards by performance per price that I find particularly useful.  It is helpful to remember that many of these are relatively arbitrary, so if you have some specific game or application in mind it is useful to find reviews specifically for that application.

http://www.videocardbenchmark.net/gpu_value.html


Corsair

This is the easy way to find the RAM that goes with your motherboard.

http://www.corsair.com

 

Tom’s Hardware 

Has comparative reviews of various components, like hard drives, though I find their comparisons less easy-to-read than the video card benchmarks site.

http://www.tomshardware.com/

 

Fractal Design Patterns

The difference between Architecture and Code blurs quickly when refactoring becomes sufficiently common, so the distinction made between various pattern languages never seemed especially helpful to me. Between Architecture and Service the line is firmer: this code is mine, that code is yours, here is the interface. At the same time, I've found that the design patterns that work when I'm writing methods and classes still apply when I'm working with services. The goal is still to increase cohesion and decrease coupling, even if often I have no control over half of the code.

Thus, the idea of Fractal Design Patterns. Instead of the usual pattern description, which describes the pattern at a specific level of abstraction, a Fractal Pattern would illustrate it at multiple levels and try to get at the underlying principle.

For example, I'll take the algorithm-swapping-based-on-state that is described by the Strategy pattern.
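At the class level, a minimal sketch of that pattern might look like the following (all the names here are invented for illustration, not from any real codebase). The point is that the calling code never changes when the algorithm does:

```java
// Strategy pattern: swap the algorithm at runtime based on state.
interface ShippingStrategy {
    double cost(double weightKg);
}

class GroundShipping implements ShippingStrategy {
    public double cost(double weightKg) { return 1.0 * weightKg; }
}

class AirShipping implements ShippingStrategy {
    public double cost(double weightKg) { return 4.5 * weightKg; }
}

class Shipment {
    private ShippingStrategy strategy;

    Shipment(ShippingStrategy strategy) { this.strategy = strategy; }

    // Callers depend only on the interface, never the concrete algorithm.
    double quote(double weightKg) { return strategy.cost(weightKg); }

    // A state change swaps the algorithm on the fly.
    void expedite() { this.strategy = new AirShipping(); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Shipment s = new Shipment(new GroundShipping());
        System.out.println(s.quote(10.0)); // ground rate
        s.expedite();
        System.out.println(s.quote(10.0)); // air rate, same caller
    }
}
```

The fractal claim is that Shipment's relationship to ShippingStrategy stays the same whether the strategy is a local object, a configured module, or a remote service behind a network interface.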

The Internet is Convincing Women Not To Study Computer Science

A summary from YodasEvilTwin on Slashdot:

"The internet is dominated by sexist men, which discourages women from getting involved in related fields."  

I add a bunch more caveats, references and empirical data, but that is a good summary of how I interpret the evidence.


Introduction

There is currently a responsibility-dodging contest between industry and academia over who is to blame for the declining enrollment of women in Computer Science and declining employment of women in software development. I hear people in industry bemoan the "empty pipeline", while academics maintain that women aren't entering their programs because of perceptions of the industry.  I have compiled some data that may help resolve the question by highlighting a third factor common to both: access to an Internet-based culture of computing.

Assumptions Make Programming Possible

Scott McCloud, in Understanding Comics, uses a simple image to explain how people employ assumptions when reading comics:

I may have drawn an axe being raised in this example, but I'm not the one who let it drop or decided how hard the blow or who screamed or why.  That, dear reader, was your special crime, each of you committing it in your own style.

I argue that the same is true when reading code.  The difference, however, is that with executables we can check those assumptions against our invented reality.