I have been finding it challenging to put my thoughts about code reviews into a linear format. I've realized that it is because code reviews are a part of the broader and context-specific work of engineering a productive software development process.
Rather than talk about code reviews on their own, today I'm going to try something different: I'm going to talk about a few specific heuristics you can look at. These heuristics aren't metrics you can optimize, but they can offer insight into the health of your software development process and suggest tools you might try to address them.
The first thing I look for: how often does a PR need to be substantially reworked, or thrown out entirely, before it merges?
The answer should be more often than "never" and less often than "usually".
In cases where most approaches to changing the system don't work the first time, the team is either missing norms, skills, communication or an architecture that supports their current work.
Code reviews can be a tool to address all those issues, but if you see lots of PR churn they aren't doing enough on their own.
If information about a change is surfacing for the first time during code review, it is often a signal that there aren't enough conversations happening.
I have never seen this problem occur when the team is pairing or doing ensemble sessions with any regularity, but those techniques aren't the only ways to boost communication. The first thing to try is resolving whatever is keeping people from talking. The technical review process may be too heavyweight for people to be willing to go through it. Maybe tickets are handed down so tightly scoped that developers are simply executing pseudo-code, or the team isn't talking through user stories together to identify technical requirements. Potentially a scarcity mindset has taken root and people are concerned about taking the time for casual discussions, or worse, being seen to take the time for casual discussions. Cutting scope, adding slack and creating specific spaces to talk can all speed up development by reducing rework.
Whatever the reason, when the only venue for architectural input and technical requirements to come to light is code reviews, it is not surprising that most PRs won't satisfy them purely by luck. The solution is to seek out times and spaces for wide-spread conversations, in whatever formats might work for your team.
The other root cause I've seen is a lack of convergence.
If the team doesn't agree on what properties a good architecture has, it creates churn. It is also frustrating if different reviewers provide dramatically different feedback: developers give up on finding a good solution and just try to satisfy whoever is keeping them from merging today. If reviewers expect people to write code in a specific way, or demand changes without explaining what purpose those changes serve, it creates a sense of learned helplessness, where good code and actual agreement feel impossible.
Even when everyone is trying to be collaborative, you can still end up in a situation where there is no common agreement. There may be too many private discussions, or discussions within separate cliques who come to very different conclusions. If a team isn't confident in its ability to engage in productive conflict, people may avoid public discussion specifically to avoid stepping on an undetected third rail.
Regardless of the reason, techniques like event storming, ensemble programming, or question-only reviews can help a team converge on a shared culture of practice. If those would be new to your organization, even just doing a quick five-whys review of whatever was discovered in the code review processes that week at the team meeting can be a good start. It gives the team a facilitated place to practice conflict resolution, and spreads whatever was learned to the whole of the team.
The opposite failure state, where PRs are never reworked at all, is worse. When reviews don't include architectural feedback, they aren't able to build quality into your code. There are always things that only become obvious when you take a step back from the painting and look at the bigger picture. If those either aren't being remarked on or aren't being fixed when they are noticed, the code that ships will be worse than it could be.
There are a lot of reasons those opportunities for improvement may be being missed, but I'm going to talk about three here: pressure to merge, a lack of safety, and missing aesthetic feedback.
If developers either don't want to take the time to make code better, or don't feel like they have permission to make the code better, the software will suck.
This is distinct from the situation where there are real reasons for delivering shitty software: in those cases the whole team should be aligned around that vision and the trade-offs made in the particular PR, as well as a plan for how those problems will be addressed once the short-term value opportunity has been realized. If you are going from short-term value opportunity to short-term value opportunity that is still a problem, but those situations do sometimes come up.
But usually the reasons businesses push developers to write shitty software are fake. Invented deadlines, arbitrary OKR review cycles, misuse of velocity metrics, low-trust relationships between product and development, promotion systems that incentivize only short-term feature delivery, and so on and so forth. The list of ways businesses manage to shoot themselves in the foot when it comes to their investment in tech is long and varied, each one making future development more expensive than it needed to be, increasing the fixed cost of the system, and hurting developers' health and well-being.
As a business leader, if you discover that people feel pressured to merge PRs as soon as possible, that is within your sphere of control. Dig in on why, and figure out how you can set up the incentives so developers believe in their soul that you want them to build software in the way that will be most valuable to the business. Enough companies prefer legibility to business value that it's a non-trivial task, but if you get it right the competitive advantage is enormous.
Sometimes we are missing a community where it is safe to express ignorance or confusion. If people feel like they have to come to the table with declarative statements and concrete answers, we miss the opportunities to clarify the semantics of our system and create graceful architectures.
There is no one right way to write a piece of software, and a good-enough-to-function way may still be awkward enough that it's a missed opportunity. Building a culture where it is safe enough to ask questions or express opinions, even when they are unlikely to change the code under review, is ironically what opens the door to discovering the long tail of major changes that have the greatest payoff for the business.
We spend a lot of time and money explaining code, reading code and communicating about code. Any change we want to make requires starting by understanding the system as it is now. We make that work visible and correctly incentivize creating easy-to-explain architectures by having people ask questions in code reviews rather than struggling alone to find the answers. When people believe it is possible to have the software make sense, and feel entitled to architectures that can be explained to them, we end up with software that is reliable, extensible and maintainable.
Unfortunately, this useful work is socially risky.
To build the most business value, we are asking developers to set aside some of our most basic social instincts. There are some tricks that make that easier, like responding to any question with "thank you for asking!" (and meaning it). Engineering leaders can model being ignorant whenever they spot an opportunity. We can also make an effort to respond to questions we don't know the answer to with enthusiasm and curiosity and collaborative learning, so that we aren't putting each other on the spot by asking a question.
It can help to just explain these dynamics. "If you have a question about the code, it is just as useful for the author of the code to know that you had that question as it is for you to get it answered" is one of my go-to phrases. I also like telling a story about how as a first-time manager I had a report with 30 years experience and in his first week he asked me more questions than any intern has ever been able to come up with. I tell that story and challenge new interns to prove me wrong. So far none of them have, and in the process their code reviews have been amazing feedback on where my code can be more accessible and exemplary.
Sometimes those social fears are also well-founded. Facilitated ensemble sessions are particularly helpful as a place to debug any interpersonal dynamics that are contributing to our lack of useful ignorance, as well as opportunities to improve the situation by providing public modeling and corrections.
I have a separate post exploring why aesthetic feedback is valuable. Here it is simply enough to say that without taking into account aesthetic feedback, your architecture is unlikely to randomly evolve into something suited to this program's particular context. We are trying to optimize subjective properties of the system: no linear measure can provide more than the roughest of estimates.
There are a couple of common reasons for aesthetic feedback to be missing: programmers might not have learned to articulate their aesthetic reactions, they might fear the social consequences of sharing their aesthetic reactions, or they might have been explicitly instructed to keep their aesthetic reactions to themselves.
Some time back Microsoft published a series of blog posts confidently declaring incredibly misguided advice about how to provide "useful" code reviews. These blog posts were based on surveys that weren't looking at any of the outcomes we would actually care about, like code quality, learning, value of the software produced, etc. Instead, the stated goal was minimizing time-to-commit of each PR. You could speed that up to infinity by simply eliminating code reviews, but instead they wanted to keep the reviews while discarding most of the value they can provide. Thus their advice was to never say anything positive, subjective or indeed comment on anything other than a specific error in the code.
This advice does successfully minimize conflict. It just also minimizes the usefulness of code reviews. It's basically like economists who write editorials telling you to give your family cash for the holidays: it makes sense only if you completely misunderstand the point and don't believe human relationships have value.
Unsurprisingly, when you forbid people from talking about architecture in code reviews, code reviews don't help you build good architecture. The up-front design process that approach substitutes, in addition to involving significant duplicate work and relying on people guessing right, can't do the whole job. Up-front design decisions are made before the system itself has a chance to provide feedback on how its architecture supports (or doesn't support) a particular change.
This advice has spread far enough that I believe it is usually worth addressing directly. You can't assume that the developers you hire will know all the roles code reviews play in building quality. Make sure your code review philosophy is highlighted when on-boarding new members of the team, and incorporate it explicitly into your leveling guide. Have people in positions of power and leadership state frequently that the goal of code reviews is to cultivate practices that build good software, not to find bugs. And then provide feedback and support to engineers who are treating it as a bug hunt instead.
If you are working with someone who believes in this advice, you can try pointing them to these blog posts. Alternatively, you can arrange to do some synchronous code reviews. This allows you to observe their non-verbal communication as they read the code, and by asking questions of them you can get at the information they were taught to withhold. It also helps build a relationship of trust: when you embody curiosity about their opinion and gratitude when they share it, it tells them that you understand you are all in this software project together.
Unlike most professions that involve design, the majority of computer science programs don't incorporate aesthetic critique as something students are taught. Software development management is often focused on what is measurable, like velocity, rather than engineering a context where it is possible to write quality software, and managers often aren't reading enough code on a day to day basis to have their own honed aesthetic sense. Our profession has grown so fast that even in communities where aesthetics are prioritized, we haven't aligned on vocabulary and practices.
The way we get better about talking about aesthetics is to do it, so the solution to this is creating opportunities for people to practice. One technique is to read a book like Implementation Patterns as a group: it highlights the choices we make as programmers and design principles we can use to guide them.
Another fun exercise is reading pieces of open source code and talking about them like they are poetry: what was the author trying to achieve? What techniques did they use to communicate that to the reader? How successful were those techniques? How else could they have gone about it, and how would that have changed the code?
The thing about subjective feedback is that it is subjective. It is entirely possible for someone else to disagree with our aesthetic judgement and neither one of us to be wrong.
That is hard for some people to deal with, especially people used to being able to get an A+ by finding the right answer. Ideally in those situations we become curious about what our differences are. What context do I have that you don't? What do you value that I wasn't taking into account? But we have probably all seen situations where the resulting engagement wasn't anything like that productive. One or two bad experiences can scare people off from sharing their judgement.
Leaving positive comments on reviews can help break through that impasse and start building trust. Realizing that positive reactions are just as much feedback on our architecture as squick reactions makes talking about aesthetics less socially risky. Once people are in the habit, they can start sharing times when they notice an opportunity for more symmetry or following up when the semantics of an object aren't clear to them.
It is also useful to note that receiving critique is a skill to be practiced, just like giving it is. If people on your team struggle with being enthusiastic about feedback, consider highlighting the advantages of developing these skills. To progress in technical leadership, being able to write accessible, inclusive code that makes junior developers productive is vital. It is a lot easier to develop that skill if you can incorporate vague feedback delivered by someone new to offering it.
If the answer falls somewhere in between, it might mean everything is great! Then again, we might be in a mix of dynamics from both failure states at once.
This is why these are heuristics, and not rules or laws or metrics to be measured. There isn't a specific number of PRs I aim for my team to end up completely throwing out. Instead, the most informative signals are qualitative. I keep an eye out for situations where someone would like to rework something but doesn't feel like they can, to see if I can address the blockers and pressures they are feeling. I also keep an eye out for people reworking things purely out of people-pleasing instincts, without personally agreeing with the changes, and encourage deeper engagement on those teams.
Don't give up on stepping back from the painting and taking the whole picture in.
We can spend all the time in the world trying to guess the right architecture before we start coding, but until bytes meet silicon it is all just a guess. Unless a product is so stable and slowly-evolving that you reliably guess correctly, it is better for both the software and the company to make it cheaper to make architecture changes once you know what changes will be asked of the code, rather than cutting off optionality from the start.
Process can make it impossible to write quality code. We can use tools like code reviews in our pursuit of the goal. But in the end only developers can build reliable, maintainable, valuable software. The company benefits when we advocate for the conditions that let us do our job well.
Thinking about why aesthetic feedback on code is so valuable led me to the question of why the feedback techniques we have for procedural code don't work well for Architecture. There have been many attempts to lint architecture decisions, or write unit tests for them, or create frameworks people can plug together like libraries, or create prescriptivist languages, and I have yet to see any of those techniques produce the desired payoff of graceful, cheaply-modified code.
It led me to this realization: architecture is largely semantic work, rather than syntactical.
A well-architected system could potentially end up running the same byte code as a poorly-architected system, and yet be more valuable to the business because it can be more-cheaply understood and modified by people.
Good architecture lets developers make accurate guesses about how the system will behave and how parts of the system we have not read are structured. These programs exhibit the property of a "well-constructed plot", as described by Lee Devin and Robert Austin in their book The Soul of Design. Quality software embodies a coherent set of patterns that work together to fulfill the expectations of the viewer.
In a system where our expectations are fulfilled, future changes are less likely to have unanticipated consequences, directly preventing bugs. When there is unintended behavior, it is cheaper to debug and correct. The coherent understanding makes it easier to observe the system, and that information can support impactful improvements to system properties like performance, security and resiliency. When we use Domain-Driven Design techniques to align our semantic concepts with our users', we can even use that well-constructed plot to provide a predictable experience to the user.
Unfortunately, we can't have a computer measure that quality. Not for lack of trying; people have come up with a huge variety of heuristics attempting to turn architecture quality into something syntactical we can statically analyze. Reams of papers have been published attempting to flatten the question of suitability into an automatic, linear operation that didn't need to involve humans as subjective participants.
It turns out to be mathematically infeasible to judge the quality of an architecture in isolation. The properties of coupling and cohesion are only defined relative to a change being made to the system. Referential systems and the semantics of language only carry meaning when read in the cultural and linguistic context a reader brings to the work. Software architecture is inherently postmodern.
The value of software is created in the relationship between object and subjects across time and space.
Linear models are incapable of capturing the multi-sided, participatory, evolutionary nature of software quality, just like conventional management approaches are incapable of capturing the value of good design. And just as companies that buck conventional management strategies to build extraordinary products can be inexplicably successful, software development teams that buck legibility-obsessed software development conventions to prioritize subjective quality are able to succeed where other approaches fail.
Luckily there is an easy pattern for collecting distributed, contextual information: ask each node in the system what it thinks. This is why code reviews, collective code ownership and pair programming can be effective tools for quality: they function like MapReduce. Each developer participating in the system contributes information about the quality of our architecture relative to the variety of changes we are actually making.
This is also why having an on-team customer is so much more effective at creating quality than the more-linear approaches of product or project management.
Distributed approaches would still be impractical if each node had to do a lot of work to come up with the answer. Luckily, humans have evolved the capacity for aesthetic pattern matching. Our reactions of pleasure and disgust give us information about whether an object gracefully fulfills the purpose we have in mind. As Devin and Austin explore, it is particularly informative about how cohesive and self-consistent a system is.
Valuable architectures exhibit low coupling and high cohesion. As Kent Beck has described, to accurately judge coupling we have to consider the system in its entirety, whereas cohesion is a local property that is often easy to increase. Similarly, our aesthetic sense is mostly tuned to judge cohesion. The sense of satisfaction we feel in reaction to a coherent, self-referential whole lets us determine at a glance how coherent our code will be after a change.
Rather than struggling to find ways to computationally judge contextual value in isolation, it is more productive to optimize the cyborg system of software+collaborators. We don't need to erase human judgement to build extraordinary software: we need to embrace it.
In the primaries, it is strategic to vote our hearts even when our candidate won't win, because that is how we get our priorities adopted as party priorities. And then we go knock on doors when the general comes around to make sure our coalition has the power to get them done.
A rebellion is brewing. Ideas like post-commit reviews or even a return to cowboy coding are gaining traction over the unpleasant & unproductive experience that is the bug-hunt code review.
This is unfortunate, because code reviews are one of the delightful parts of our profession. They let us shape and revel in the things we build together. They let us be confident in our work, and demolish imposter syndrome. They are a powerful tool for building livable code with raptor numbers greater than one. While they aren’t the only way to achieve those benefits, unlike ensemble or pair programming they work across time zones and give people extra space.
The problem isn’t that code reviews are bad; it is that they are too often done badly.
Many software developers were introduced to code reviews via impersonal tools or corporate policies that require them. Those unfortunate programmers have never experienced a delightful code review and have no idea how to perform one.
While I can’t give every reader the experience of receiving a delightful code review, I can share with you the tools I use to perform them. Some of those tools require a supportive context or established relationships to work, but there is one that no matter where you work you can start using today:
As you read the code you are reviewing, pay attention to how it makes you feel. Any time it inspires a spark of joy, any time you feel yourself smile, leave a comment.
If you don’t know why you felt joy, that’s okay: your comment can be simply “this delights me”, “:-D” or “Nice!” Your coworker gets to know you appreciate their work, and you get to notice which bits of our work you enjoy.
If you want to take it further, level 2 is figuring out what about that line made you smile. Maybe a name makes sense, or an API is elegant, or you recognize a design pattern used appropriately. By leaving a more-specific compliment, you give your coworker the opportunity to delight you more in the future.
Level 3 is identifying what doing that good thing accomplished for you as a reader. This not only gives your coworker the chance to delight you; it lets them know the context where doing it again will be similarly helpful. It gives them information they otherwise have no way to learn.
A level 3 positive comment might be something like, “Great job naming this Fire Break! `summonCredentialsFromTheDeep` accurately communicates the monstrosities that lie in those depths. If something goes wrong with credentials, I will definitely know where to look, and it leaves a clear marker that I might want to Tidy First if I need to modify that code.”
For this to pay off, you can’t fake it: you have to actually figure out what code you like. It is important that you actually enjoy the code you are complimenting. This isn’t some shit sandwich technique: if you don’t have something nice to say, for goodness sake don’t make something up.
It is also important to remember that joy is subjective. It is impossible to be wrong about what you enjoy because it is impossible to be right about what you enjoy. Your joy is your own.
The great thing about compliments is that they ask nothing of your coworker. You aren’t trying to get them to change anything, or telling them they are Wrong[tm]. If they take the compliment personally, all it can do is make them feel good about themselves. And it is a lot more satisfying to receive than a bland, impersonal “LGTM”.
That doesn’t mean it won’t ever change the code. It may turn out that your coworker wanted to accomplish something different. If how you read it wasn’t what they meant you to read at all, they now have the chance to more accurately communicate their intention! But even then, you still genuinely enjoyed the thing they did. Even if the code ends up changing later, nothing changes your experience of delight.
Compliments are thus a safe way to move code reviews beyond bug hunting. It shows people that aesthetics are relevant to code quality. It establishes that our subjective opinions of our coworkers’ code are a relevant topic, and it establishes that without needing to ask them to do anything to accommodate those preferences. It lets other developers think about whether they agree with your compliment, and it invites them to leave subjective comments of their own.
But even if no one else got anything out of these comments, I would still leave them. Our trade is fun, and it is worth taking the time to remind myself of that. Not every piece of code we write will gracefully communicate the problem and its solution, but when one does it is a wonder worth celebrating.
Enjoying those moments of grace is my privilege as a programmer.
Wired magazine published an article about why Musk’s Plan to Reveal The Twitter Algorithm Won’t Solve Anything.
Several of my non-programmer friends were interested in this, and we started chatting. Because the idea itself is so shockingly out of left field, I discovered this was a perfect opportunity to explain the properties of Coupling & Cohesion and why they matter.
Coupling & cohesion are defined in terms of a change you want to make to a system. In this case, Elon Musk would like to open source “the algorithm”, which he defines as all the bits of code that “make any changes to people's tweets, if they're emphasized or de-emphasized”.
I want to be clear that nothing here is based on my experience with the Twitter code base. I wouldn't speak to any private information, and my experience was nearly a decade ago. Things have most certainly changed.
The information from current developers in the article is plenty for us to speculate about how coupled and cohesive the system is with regards to this particular change.
To make the change easier, Twitter would need to rearchitect their system. This would involve moving all the related behavior together in one place. It would also involve separating any behavior in those components that isn’t about promoting or hiding a tweet. A service or a group of services that only handled promoting or hiding tweets would be high cohesion and possible to open-source.
There are two sources of coupling: code that the code being changed relies on, and code that relies on the code being changed. (Anyone know better words to distinguish those two? Let me know, because that is a mouthful.)
Luckily for Twitter, from Wired’s description it sounds like they are mostly dealing with only one of those two kinds of coupling. If not much else depends on which tweets are promoted or hidden, it makes the change a lot easier.
Wired reports that the scattered pieces of code “perform a complex dance atop mountains of data and a multitude of human actions. Results are also tailored to each user based on their personal information and behavior.” That is to say, the code that promotes or hides tweets is highly coupled to many different parts of the current system.
This coupling could prevent Twitter from extracting the behavior into a cohesive unit. Even if the code was centralized, it would still require understanding code that had nothing to do with promoting or hiding tweets in order to understand what is happening. If it is particularly tightly coupled, it might even be impossible to separate without an intermediate step.
Reducing coupling is less straightforward than increasing cohesion. Twitter would need to consider why those dependencies were needed and what purpose the data served. They would then turn that understanding into an interface of some kind, with names that reflect that understanding. Twitter’s current data could then be swapped out for some other source of data that satisfied the same purpose. That would let the system be loosely coupled with respect to this change.
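To make that concrete, here is a purely hypothetical sketch (none of these names come from Twitter's code base; they are invented for illustration). The ranking code depends only on an interface named for the purpose the data serves, so the mountains of data behind it can be swapped out without touching the part you might open-source:

# Hypothetical interface: everything the ranking code needs from the rest of
# the system, named for what it is for rather than where it lives today.
class EngagementSignals
  def affinity_for(tweet)
    raise NotImplementedError, "subclasses supply real or stubbed data"
  end
end

# The cohesive, potentially open-sourceable piece: it is coupled only to the
# interface above, not to the data stores and user models behind it.
class TweetRanker
  def initialize(signals)
    @signals = signals
  end

  def rank(tweets)
    tweets.sort_by { |tweet| -@signals.affinity_for(tweet) }
  end
end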
As many colleges move online, I realize I have a somewhat-unique experience: I shamelessly ripped off the pedagogy from my small liberal arts professors and have spent the last decade+ applying it at distributed tech companies. I've facilitated video conversations with anywhere from three to fifty participants: in the course of my work, as part of reading groups for specific texts, on social science topics like "gender and racial bias in tech", and as part of consciousness raising groups to help foment cultural change.
You all have the advantage that the students have already been interacting with one another and with you; that pre-existing trust makes it much easier. And you are all going to be doing this for the first time, so you can figure it out together. The best advice I can give is saving five minutes at the end to talk about how you all feel the discussion just went, and if it isn't working in the middle of the class, just stop and have a conversation about what isn't working.
The greatest challenge moving to video is that it is easier for people to check out from behind a screen and not have it be obvious. The advantage is that if they do, it isn't as disruptive. I always treat any video meeting as opt-in, and then work to make it easy for people to do that opting.
Basic advice:
Facilitating:
Accessibility:
Beyond all of that, know that it isn't as different as it feels at first and it is absolutely possible.
When using Rails routing I came across an odd bug: a URL query parameter was breaking the route. A URL query parameter without a period? Everything works fine. A URL query parameter with a period? 404.
I eventually found the answer in an off-hand comment in a random blog post, and traced it back to the code. So that next time I remember what is going on, I figured I'd throw the explanation up here. By default, Rails assumes anything after the period represents the format (see the Mapping class defined in rails/actionpack/lib/action_dispatch/routing/mapper.rb). Which if, for example, you are using the format to determine whether a request should be served by a frontend app can then break the route.
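For illustration (this is a hypothetical catch-all route, not the exact one from my app), the default behavior looks like this:

# config/routes.rb
get "*path", to: "react_frontend#show"

# GET /dashboard        => params[:path] == "dashboard"
# GET /dashboard/v1.2   => params[:path] == "dashboard/v1", params[:format] == "2"
#
# Rails split the path at the last period and treated "2" as the requested
# format, so the frontend app never sees the full path it expected.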
To address this, you have two options. First, you can follow the suggestion I've seen elsewhere and define your own constraint:
get "*path", to: "react_frontend#show", constraints: { path: /.*/ }
get "*path", to: "react_frontend#show", :format => false
Two and a half years ago I joined LTSE, with the goal of changing the incentives companies face to prioritize short-term profits over everything else. In May, the SEC approved the creation of the Long Term Stock Exchange, making us one of only a handful of venues authorized to list publicly traded companies.
When this milestone happened I discovered that I still have an internalized voice that says, "if you prioritize hiring underrepresented developers, it means you are de-prioritizing success". I found that some part of me holds an insidious belief that places where I felt comfortable couldn't be "the best" companies. By demanding representation, this voice said, I was asking a sacrifice of the company I was working for. I had gotten as far as believing that sacrifice was justified, even necessary for the sake of justice, but it was still something I was being granted.
That voice is wrong.
We've built an engineering team here that is racially diverse and gender-balanced. We say out loud that we aren't trying to hire "smart" developers: we are hiring skilled developers who believe in practicing their skills in order to improve. We don't believe in a "founder gene": our tools set out to make explicit the implicit knowledge those folks hoard, so that more people with valuable ideas can successfully found companies. My experience here is so different than what I had experienced elsewhere. I no longer fantasize about quitting the industry on a regular basis. I feel like I can recruit without worrying that I am selling harmful snake oil, and I feel empowered to support people the way they want to be supported instead of the way the industry says we should want to be supported. But some part of me distrusts this ease. Part of me still believed that feeling comfortable must mean something is wrong, and that it is unreasonable to want this comfort "at the expense" of the things that "really matter".
That part is also wrong.
I don't believe that our success here vindicates "diverse" teams any more than not succeeding at this ridiculously ambitious mission would mean "diverse" teams are a failure. This is not a magical Utopia, and I still react to things that happen here with the weight of all those other experiences I have had. But this weekend I found myself crying as some deep-seated clenching, this sense that my basic existence was an impediment to success, loosened a little. It is not unreasonable to want a community or company that takes me into account. We can succeed with a diverse team, where being a feminist is part of the bar, where we expect "D&I" efforts to be effective, where people take parental leave and no one yells and work is expected to be sustainable. It may even turn out that all of those things make it easier, not harder, to do useful, productive, successful work, rather than just being what it takes for me to not quit.
The part that always told me things could be different? That part was right.
We are going to be hiring a bunch over the next stage of this project. Many of the people reaching out and proactively raising their hands are people who take for granted that every company in the world has a place for them. Some of them will turn out to be great, but my goal in this next phase is to make sure that other people, candidates who wouldn't think to jump in just because the project had some success, feel invited to join as well. I want them to know this company is for them, in a way it is not actually for all these people who get to assume that every company is.
When building transformative experiences for our users, we begin by identifying the emotion that motivates their engagement. We then imagine how we want them to feel when we have provided for their need. Finally, we are left to build something that we think can successfully transform the first into the second.
The only way to actually know if such a design works for a sufficient audience to support the product is to experiment and see, but there are some patterns of UX that can suggest things we might want to try. None of these is a product all on its own: we also have to actually address a need people have in a way that provides some substantive value. But since we can offer what we think of as value and still not have people walk away feeling better than when they walked in, this is a toolbox we can come back to to ensure that the actual value we provide is also giving people something they want.
There are many more of these possible: I look forward to hearing about the patterns you have discovered! If you are interested in reading more about the use of visuals and interaction in creating experiences, I highly recommend Understanding Comics and Reinventing Comics by Scott McCloud: they are an accessible entry point into the world of visual and interactive impact. The Design Of Everyday Things and Emotional Design, by Don Norman, are also great starting points, as well as Theater Of The Oppressed, by Augusto Boal and Impro by Keith Johnstone.
Note that many of these are different things than I look for in a production language. I want students to make mistakes that help them learn, so protecting them from those mistakes isn't useful or helpful. They aren't going to be working on large code bases, so libraries, package management and scalability aren't important. No language is perfect on all of these dimensions, but some are definitely better than others.
Imposter syndrome is often presented as a personal failing. A lack of confidence, our wrong-headed beliefs not matching the reality of how competent we are, or, worst of all, a flaw of our gender. Just tell yourself you are wrong! Imagine everyone else is just like you! Have confidence in all parts of yourself except that part that tells you not to have confidence!
Unsurprisingly, these interventions are not often effective. At best, they change behavior, frequently while making people who already feel bad about themselves feel worse. At worst, they lead people to stop trying to improve the environment they have found themselves in.
I would like to offer an alternative story: imposter syndrome is a rational response to insufficient feedback.
Until I joined TripAdvisor I had no real way of knowing whether I was a good programmer. I knew that often our customers were happy, but that was often three to six months after I wrote the code. I knew that sometimes I couldn’t make something work, and I assumed that was my fault whether it was or not. I sometimes knew I was grateful to be working in my own code months later and sometimes I wasn’t, but again that was far from where I had made those decisions and it was only my own opinion. I was anxiously waiting to be discovered and dismissed. I covered with appeals to good engineering principles I had heard about, largely ignored by the people around me.
Without feedback, there are three options: I can believe, without evidence, that I am an awesome programmer. I can believe, without evidence, that I am a terrible programmer and quit to go do something else. Or finally, I can believe, without evidence, that I am a terrible programmer somehow successfully pretending to be an awesome programmer.
The problem with believing, baselessly, in one’s own competence is that eventually we all reach the limits of our capabilities. Suddenly confronted with evidence of their lack of perfection, overconfident people become defensive and blame anything other than their own failings for the problems, making them impossible to fix. Any evidence of failure becomes personally devastating. It is hard to protect one’s self from any possible feedback. Among programmers, I often see such people retreat into believing that they are this nebulous category known as “smart” and that “smart” is always enough to do anything. These people appear deeply confident, but their confidence is a protective illusion and the cost is borne by the people around them who have to protect their fragility.
The people who take the second choice and drop out simply disappear. They never have a chance to find the feedback they were missing, but they also don’t have to confront their personal insufficiencies. Personally, I certainly wasn’t going to drop out. Until I became a programmer I was making $27,000 a year as a hospital manager. I might not be good, I figured, but I was good enough to bring home my paycheck.
The genius of imposter syndrome is that it is adaptive and self-perpetuating. We don't have to disregard when we fall short, for such failures fit our internal narrative. Failure is a terrifying chance to be found out, but it is not an existential challenge. When we succeed, we can believe it is part of our act. Look how well I have fooled everyone by doing work they think is good! We often work hard to cover for our obvious faults, and we plan for the human failures that inevitably occur, both of which can produce decent results.
However, these successful results come at significant cost. We have trouble accepting real feedback, since any feedback is based on our facade and we “know” better. Imposter syndrome also brings with it anxiety and shame, preventing us from feeling the thrill of accomplishment when we do succeed. It robs us of the joy we earn.
When I joined TripAdvisor I found that all code was code reviewed. On my team we required three ShipIts to commit code, so at least three people had to read any code I wrote. It was absolutely terrifying to post my first code review. My first real project took a month, and I was panicking over my failure to deliver in the estimated time. I was working in all brand-new languages, in a brand-new domain: I didn’t ask questions because I didn’t even know what questions to ask. Instead I sweated and tried things out and Googled madly. At last the day came when I couldn’t put it off anymore and I posted my review.
Naturally, it being the first thing I had done as a web developer beyond bug fixes, I got back pages of comments. And then… the world didn’t end. I addressed the comments. I got ShipIts. The project worked. I moved on to the next thing. Next time, I had learned the coding conventions and how scope worked in JavaScript. The time after that I finally understood closures. On and on, my stupidity was laid bare for the world to see and it turned out it didn’t matter. No one cared, and when I took the feedback on board for next time they thought it was awesome. It wasn’t that we were all covering for how bad we were at our job: it is that we were all plenty competent and becoming more so. The flaws we had weren't disastrous because they were accounted for by the system.
Eventually I began learning to teach, to provide effective feedback and create a shared vision of what “good” code looked like. I built up the people around me, as they built me up.
After a few years of that, I realized I no longer imagined I was bad at my job. With no place to hide, I would have been found out years ago. Also, by this point I was significantly more effective, and the people around me were too. None of us were some binary “good coder” versus “bad coder”: we were all coders, with things we did well and places we struggled and areas where we were learning. I realized that I was a "good coder" not because my code was always right or always worked straight away, but because I had processes I followed to ensure that my code became right and worked before being released, and I clearly communicated my intentions to my collaborators. I started to look for other feedback mechanisms: aesthetics and unit tests and pair programming for code, shared values and professional belonging for our team, manual testing and user experience feedback and analytics for our products.
Once I got to this state, it no longer mattered whether I was “good” or not. I became part of an effective team, and together we made useful things that satisfied our users' needs. They were easy to experiment with, extend and maintain. Anyone who looked at that and thought what mattered was whether we were “smart” seemed to me to be asking the wrong question.
The cost of imposter syndrome is shutting ourselves away from the very thing that could cure it. It becomes a trap. It prevents us from improving where we could improve. It keeps us from connecting with our collaborators, with whom we can accomplish greater things than even the best programmer could do alone. These stories we tell ourselves to get by strangle our true voices, and in silence there is only fear.
I have at times offered the advice of “Be Brave”, but I now believe that is not quite right. I try instead to be curious when confronted with fear. When I find that I am afraid, I poke at it, prodding for information, finding what feedback I am missing. When I am done I can still choose to do the things I was afraid of, only more safely because I know more. Sometimes what I learn is that this is not something to do, and I set it aside. Fear points the way to all the most interesting things, and when we are safe and secure we can use it as that tool.
I believe that the reason so many underrepresented programmers feel imposter syndrome is two-fold. First, we seldom have the option of being baselessly overconfident while those around us protect us from the truth lest we throw a tantrum, so in the absence of feedback we can choose between imposter syndrome or quitting. Second, we have been deprived of some of the pathways others can use to get easy feedback and reassurance. We can not look around and be automatically reassured of our belonging. We get feedback that is extraneous to our actual job while too often lacking the casual feedback among peers about our work. We can't simply follow the cultural conventions we already know and be accepted. When we are busy responding to feedback that is about our very selves, we cannot build the professional confidence that protects us when we seek out feedback that lets us improve the craft. When we are rendered invisible and ignored, we have stolen from us the opportunity to build true confidence. This does not imply that it is a problem exclusive to those who are underrepresented, nor that all underrepresented developers feel this way. It is simply understandable that this adaptation would be more common. Until we can offer everyone who wants to program inclusive and effective forms of unavoidable feedback, there will continue to be disparities in experiences based on how people are regarded.
There is a way through imposter syndrome and out the other side, into confidence based in knowledge and not delusion. I have personally found this a more satisfying approach than merely telling myself other people have felt the same. I no longer fear being discovered to be a bad programmer. I am no perfect coding machine, but no human will ever be so. I strive to know my weaknesses, and find ways they can be valuable. I seek or build contexts where I can be most effective and have the most impact. The most important piece of that is getting feedback on whether what I am doing is the right thing to do, early and often and unavoidably. There is always more work to do, and we can never be “good” enough to do it all.
By accepting feedback we can all be constructive, and that is sufficient.
Someone I worked with had asked for recommendations when I noted the speaking lineup of a conference he was attending was exclusively men, and I figured I'd share the letter I came up with in case it is useful to others:
While attending $CONFERENCE_NAME last weekend, I was disappointed to notice the oversights in your speaker lineup that led to it being made up of nothing but men. Perhaps your prioritization of people with their own books to sell led to inadvertent systematic discrimination, as you were reliant on the discriminatory publishing world and more generally on people without significant non-work-related demands on their time (who are most likely to be either single men or men in non-egalitarian marriages). {Depending on your impression of the conference itself, something like: "Since I was also disappointed in how much of the conference devolved into the speakers plugging their own books, I am confident you could kill two birds with one stone by instead seeking out the most qualified speakers.” could fit here too.}
I wanted to convey that when trying to build a group from $COMPANY to attend this weekend, a woman who is normally excited to attend local conferences had no interest at all. Without any women speaking, a code of conduct or even the barest token of effort towards diversity, there was no evidence that there would be other women there or that the men involved see women as peers. She expected that the weekend would, at best, be full of getting interrupted so men could explain things she already understood, dudes hitting on her, men quizzing her about the alien experience of being "one of those", people assuming she was part of the conference organizing staff or from recruiting or some attendee’s wife, or simply ignoring her altogether. She also assumed that any complaints would be brushed off as disruptive to the existing exclusionary atmosphere that it appears the organizers have cultivated.
Going forward, if I see another line up of all men speaking in Boston, I will have to assume she’s right and you are actively working to run a conference to alienate women. Since I’m not interested in that environment, this may be my last $CONFERENCE_NAME conference; I could instead have attended $OTHER_CONFERENCE_THAT_MADE_AN_EFFORT the week before where both I and the women I know would both have felt more welcome.
Now, the reasoning behind this approach. I like trying to turn it into a contest between conferences, since the only eventual pressure to change will come through economic pressure. It also circumvents the argument that it's not possible or there are no qualified women, without ever having to point out just how incredibly insulting that argument is. Other conferences have worked hard to change the makeup of their conventions: accepting speakers through blind proposals (rather than just inviting people they know of or their currently-non-diverse attendees recommend), advertising a code of conduct widely and enforcing it when it comes up, creating scholarships for women who want to attend but whose companies won’t support it, and seeking out and addressing feedback from women speaking and attending. It’s not like this stuff is easy; it’s just possible.
I did have one more recommendation for the guy I was talking with:
If you want to be helpful while you are there, be your usual polite and outgoing and aware self and discuss the technical work of any women you do meet, especially listening to their ideas and learning about the work they are doing. A good interaction or two can brighten up even the most awkward conference.
I believe that this tendency to generalization is why I am able to jump between levels of abstraction quite easily. The concept of emergence, and the specific cases of recursion and polymorphism, are obvious to me. Everything in the universe is made up of component parts, interacting in ways that give rise to the meta-phenomenon we observe, like “matter” and “consciousness”, and I can keep that in mind without worrying particularly about what those components are. It is odd to me when people consider things to be discrete, isolated wholes; it can be useful to talk about them that way, but I usually don’t actually believe it.
Someone somewhere, of course, was mocking Gender Studies majors for taking an easy course because they couldn't hack liberal arts. I made the point that the Gender Studies classes I took were significantly more difficult than my computer science classes. Someone then asked me why I thought that was, and I came up with an answer:
Evolution isn't about "good" or "bad". It is simply a word for a specific emergent process. It describes all the things that happened that led to the current state of affairs. Sometimes they happened for reasons, under specific and identifiable pressures, but other times just by accident. It gets way over-simplified, especially by people looking for answers, since evolution is bad at providing answers, or reasons, because it's a description of an emergent system and not a driving force.
The original building blocks of evolution were these two observations:

My Usenix talk this year uses various books I've drawn on for inspiration as backgrounds for my slides. The goal of this was to share some of the broader world beyond what we usually look to as computer scientists. Some of these books are accessible, while others are extremely dense. I recommend picking things up and putting them down if they don't speak to you. It's all about what is useful, helpful and challenging to you wherever you are right now.
I'm doing a workshop on putting together computers in two hours, and rather than do handouts I figured I'd toss the links up on my blog. When buying components for a computer, I usually read:
If you are buying the components for a machine, Ars Technica does system guides that are a useful report on the state of the art each December-ish. The one for 2012 is available here:
http://arstechnica.com/gadgets/2012/12/ars-technica-system-guide-december-2012/
They have performance statistics and a ranking of video cards by performance per price that I find particularly useful. It is helpful to remember that many of these numbers are relatively arbitrary, so if you have some specific game or application in mind it is useful to find reviews specifically for that application.
http://www.videocardbenchmark.net/gpu_value.html
This is the easy way to find the RAM that goes with your motherboard.
Has comparative reviews of various components, like hard drives, though I find their comparisons less easy-to-read than the video card benchmarks site.
The difference between Architecture and Code blurs quickly when refactoring becomes sufficiently common, so the distinction made between various pattern languages never seemed especially helpful to me. Between Architecture and Service the line is firmer: this code is mine, that code is yours, here is the interface. At the same time, I've found that the design patterns that work when I'm writing methods and classes still apply when I'm working with services. The goal is still to increase cohesion and decrease coupling, even if often I have no control over half of the code.
Thus, the idea of Fractal Design Patterns. Instead of the usual pattern description, which describes the pattern at a specific level of abstraction, a Fractal Pattern would illustrate it at multiple levels and try to get at the underlying principle.
For example, I'll take the algorithm-swapping-based-on-state behavior that is described by the Strategy pattern.
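As a minimal sketch of that first level (the names here are invented for illustration, not from any real code base), this is the Strategy pattern as usually described, at the level of classes and methods:

# Each strategy implements the same cost(order) interface.
Order = Struct.new(:subtotal, :weight_kg)

class FlatRateShipping
  def cost(_order)
    5.00
  end
end

class WeightBasedShipping
  def cost(order)
    order.weight_kg * 1.25
  end
end

# Checkout swaps algorithms without changing its own code.
class Checkout
  def initialize(shipping_strategy)
    @shipping = shipping_strategy
  end

  def total(order)
    order.subtotal + @shipping.cost(order)
  end
end

order = Order.new(20.00, 3.0)
# Choosing the strategy based on the order's state:
strategy = order.weight_kg > 2.0 ? WeightBasedShipping.new : FlatRateShipping.new
Checkout.new(strategy).total(order) # => 23.75

The fractal observation is that the same shape holds a level up: each strategy could just as easily be a client for a different carrier's API behind the same cost(order) interface, and Checkout would not change at all.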
A summary from YodasEvilTwin on Slashdot:
"The internet is dominated by sexist men, which discourages women from getting involved in related fields."
I add a bunch more caveats, references and empirical data, but that is a good summary of how I interpret the evidence.
There is currently a responsibility-dodging contest between industry and academia over who is to blame for the declining enrollment of women in Computer Science and declining employment of women in software development. I hear people in industry bemoan the "empty pipeline", while academics maintain that women aren't entering their programs because of perceptions of the industry. I have compiled some data that may help resolve the question by highlighting a third factor common to both: access to an Internet-based culture of computing.
Scott McCloud, in Understanding Comics, uses a simple image to explain how people employ assumptions when reading comics:
I may have drawn an axe being raised in this example, but I'm not the one who let it drop or decided how hard the blow or who screamed or why. That, dear reader, was your special crime, each of you committing it in your own style.
I argue that the same is true when reading code. The difference, however, is that with executables we can check those assumptions against our invented reality.
Now that Google has broken Google Reader by integrating it with Google+, I was looking for a replacement that would let me use my daily reading of feeds the way I always had: as a way to share long-form content with other folks who specifically wanted to read the long-form content I shared this way (opt-in broadcast). Google+ defeats the purpose: I like my RSS feed and my friends' shared items specifically because of the high signal-to-noise ratio and the lack of dilution with other content.
My search led me to Tiny Tiny RSS. It offers a similar feature to Google's shared items, except instead of a specific social area it “publishes” items to your own RSS feed. It does not replace the comment or discussion capabilities of Google Reader, but it has the advantage of being open source and something I can host myself; if I ever have some free time I can address any flaws that continue to bother me.
“Economics is basically about incentives and interaction — or, as Schelling put it, micromotives and macrobehavior. You try to think about what people will do in certain circumstances, and you try to understand how individual behavior adds up to an overall result.” – Paul Krugman
The economics of open source software has generally been approached from the perspective of “why would people do this thing?” This makes some sense; classical economic models leave non-monetary considerations to the realm of game theory and sociology, and the question of micromotives initially looks exceptionally opaque. The result, however, has been a skeptical approach to applying an economic lens to open source and a general failure to explain the macrobehavior involved. Most papers I’ve found attempt to explain away open source as human irrationality, rather than demonstrate the way it fits with, and indeed validates, our existing models. I'm one of those people who think that if reality clashes with a model, the problem is probably not reality.
In a recent CMake project I was setting up, I wanted users to be able to choose one of several possible libraries at project generation, to make performance comparisons easy on multiple platforms. This is easy enough to do with a configuration parameter, but since the available libraries were a limited set, offering them as an explicit list of options seemed better. I discovered that in the CMake GUI it is possible to have a drop-down menu of options for a given property, and it's actually quite easy. The only thing to keep in mind is that this approach doesn't enforce anything; the user could still enter other options. Since this is only used by developers to generate projects, I didn't particularly care. They break it, they bought it, as it were.
First, we use a cache variable and enumerate the options for our drop-down list:
SET(LIBRARY_TO_USE "Option1" CACHE STRING "library selected at CMake configure time")
SET_PROPERTY(CACHE LIBRARY_TO_USE PROPERTY STRINGS Option1 Option2 Option3)
After that it's just a matter of changing the things that should change when this option changes. There are a few possible approaches here, though none of them completely satisfies my aesthetic sense.
1. The first is simply to call everything on every invocation, but that defeats the purpose of the caching in the first place.
2. I can make sure this .cmake file is included at the top of the project CMakeLists.txt, so it is called before anything else that might include this library. In that case I can check the LIBRARY_FOUND variable, which is set the first time any of these libraries is loaded during a build (a rough sketch of this approach appears after this list). The upside is that if multiple files include this .cmake file it will only reload everything once per project generation. The downside is that it relies on no one loading the library before this file is included, and that was a deal breaker; I don't want to rely on implicit assumptions. Also, it still reloads the cache once per build. On the upside, if I want to vary non-cache values this approach lets me group all the change logic in one place.
3. The final option is explicitly checking to see if the variable has changed by caching the last value inside of the has-changed if statement. This requires using a second cached variable to hold state and initializing it if it is undefined. Additionally, this variable should never be changed by a user, so I use MARK_AS_ADVANCED to hide it from the GUI.
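For contrast, here is a rough sketch of the second approach; this is my guess at its shape rather than code from the project, and it only works if this file really is included before anything else touches the library:
# Hypothetical sketch of approach 2: LIBRARY_FOUND is not yet set on the first
# pass of a project generation, so clear the cached find results only then.
IF(NOT LIBRARY_FOUND)
    UNSET(LIBRARY_INCLUDES CACHE)
ENDIF()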
I used option three, which looks like:
# Initialize the state-holding variable the first time through, and hide it from the GUI
IF(NOT (DEFINED LIBRARY_LAST))
    SET(LIBRARY_LAST "NotAnOption" CACHE STRING "last library loaded")
    MARK_AS_ADVANCED(FORCE LIBRARY_LAST)
ENDIF()

# If the selection changed, clear the cached find results and remember the new choice
IF(NOT (${LIBRARY_TO_USE} MATCHES ${LIBRARY_LAST}))
    UNSET(LIBRARY_INCLUDES CACHE)
    SET(LIBRARY_LAST ${LIBRARY_TO_USE} CACHE STRING "Updating Library Project Configuration Option" FORCE)
ENDIF()
The important part of this is “UNSET”. Any cached variables that are set in the Find<Package>.cmake file will need to be explicitly cleared in order for them to actually be updated. The rest of it is simply determining whether or not the parameter changed.
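For instance, if the find module caches more than one result, each would need its own UNSET call; the extra variable name here is hypothetical:
# Clear every cached result the find module may have stored
UNSET(LIBRARY_INCLUDES CACHE)
UNSET(LIBRARY_LIBRARIES CACHE)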
Finally, we need to change parameters on the basis of which option is selected. If only cache variables changed, we could include this in the “if changed” block above, but I was using non-cached variables read by the Find<Package>.cmake files, so I set these each time. It would be cleaner to separate these into their own CMake files with a regular naming scheme, but since I was only setting one parameter I didn't bother. This looked like:
IF(${LIBRARY_TO_USE} MATCHES "Option1")
    SET(LIBRARY_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/Vendor/Option1")
ENDIF()
IF(${LIBRARY_TO_USE} MATCHES "Option2")
    SET(LIBRARY_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/Vendor/Option2")
ENDIF()
Etc. The path naming conventions were not actually that regular either, or I wouldn't have needed the switch statement at all.
All of these went into a SetLibraryOptions.cmake file, and I added INCLUDE(SetLibraryOptions.cmake) to the root-level CMakeLists.txt file. When I included this library in a later target, I used the regular package syntax with ${LIBRARY_TO_USE} as the package name. This is why it was so useful to have a drop-down menu here: each package name must exactly match the name of its Find<Package>.cmake file.
Now, when I use the library in another package the include will look something like:
IF(NOT LIBRARY_FOUND)
    FIND_PACKAGE(${LIBRARY_TO_USE} REQUIRED)
    IF(NOT LIBRARY_FOUND)
        MESSAGE(FATAL_ERROR "failed to find " ${LIBRARY_TO_USE})
    ENDIF()
ENDIF()
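For this to work, each option needs a matching find module somewhere CMake can see it (e.g. on CMAKE_MODULE_PATH), and that module has to set the variables the snippets above rely on. A minimal, hypothetical FindOption1.cmake might look roughly like this; it is a sketch under those assumptions, not the file I actually used:
# FindOption1.cmake -- hypothetical sketch of a matching find module
# Locate the headers and the library under the vendor root chosen above
FIND_PATH(LIBRARY_INCLUDES NAMES option1.h PATHS "${LIBRARY_ROOT}/include")
FIND_LIBRARY(LIBRARY_LIB NAMES option1 PATHS "${LIBRARY_ROOT}/lib")
# Report success only if both were found; FIND_PATH and FIND_LIBRARY cache
# their results, which is why the change-detection code has to UNSET them
IF(LIBRARY_INCLUDES AND LIBRARY_LIB)
    SET(LIBRARY_FOUND TRUE)
ENDIF()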
And that's it; when the user selects a different library, all of the projects will be regenerated with the new option.
For those of you that write code, what term do you prefer? Programmer? Engineer? Developer? Something else?
Dharmesh Shah asked this question on Twitter yesterday, and I did a quick compilation of the public responses. For the answers with more than one vote I include the total votes and also a score. About a third of the responses used some form of "I like X, but sometimes I use Y", and instead of throwing that information away I awarded 3 points for a first choice, 2 for a second choice and 1 point for a third choice.
The answers that appeared only once were:
It definitely looks like “Developer” is the standard, but what immediately jumped out at me was the way some people embrace the same aspects of the job that others try to avoid. Some people reported that “Programmer” sounded too much like someone who just wrote code and didn't think about it, whereas someone else described their job as “Code Monkey”, which revels in exactly that role. Some of the creative responses, like “Chief Ideas Officer”, didn't imply any contact with code at all, whereas others, like “Byte Surgeon”, implied a visceral, low-level involvement.
It seems like some of the trade-off is between “code” and “prestige”, which is always disappointing for me to discover. Several people suggested they would use different words if talking to a fellow coder rather than someone outside the profession, usually preferring “Engineer” when talking to people who don't write code themselves. This is perhaps why “Developer” wins out in the end: it seems to suggest a job that involves typing things that get executed, one way or another, without also suggesting that someone handed you pseudocode to implement. Which may be to say, it is uniformly bland and uninformative, conveying as little information about the tasks performed and the role played as humanly possible.
It is clear that there are multiple jobs that would fall in this category, though, even if we don't yet have the language to articulate the differences. Certainly independence vs. subordination is a common theme, but I also noticed there were no terms proposed that specifically called out “team member” or “collaborator”. I would personally prefer such a term to either the independence of “Hacker” or the subordination of “Code Monkey”. Unfortunately, any such word runs the risk of stepping too far from the technical roots, and implying that the code writer is no longer elbow-deep in bloody code.
Metaphor has a pernicious effect. It encourages people to take anecdotes as proof and effective rhetoric as useful advice, and to accept only ideas which fit their preconceptions. Metaphors are better at conveying values than specific, practical advice. They can obscure the areas of ignorance and uncertainty where evidence should be collected and lead us to believe we understand things we don't.
Despite all that, I love metaphor. So far it is the most effective tool I've encountered for sharing values, the motivations for process and the assumptions we bring to discussions of how best to build software. Besides, I am the sort of person who sees parallels in everything I do. For me, writing software is like writing plays, post-modern lit crit, economics, psychology, art, poetry, baking, animal husbandry, gardening, physics, music and more. Some of these metaphors have communicated usefully to others (“code reviews as writer's workshops”), others … not so much (“the computer as an economy”). Most often they are helpful going the other direction: I can describe coding to a poet by drawing analogies with their field, while most programmers probably wouldn't find those parallels useful because they are better versed in the language of software development than they are in the language of poetry.
Even though it may be where they are least needed, metaphors about code have always been employed when programmers talk to each other. Rather than invent a wholly new language, we compare our profession to everything from animation to farming. Sometimes these ideas feel more like thesis statements, a way to make otherwise context-less books flow and hold a reader's attention. Others are so widely embedded in our expectations that tracking down their origins is difficult.
Part of my background is in performance theory: the idea that people act in part to conform to or rebel against the stories they tell or hear about themselves. Psychology has more complete theories that describe individual manifestations of this, but I am interested, instead, in the interpersonal consequences of stories. We relate to other people based on what roles we see them play in our stories, and what role we see ourselves playing in theirs. If the senior person on my project describes themself as a “software architect” I will assume that my role as a programmer is different than if they describe themself as “technical lead”. They might actually perform the same tasks in either role, but I will assume that their expectations of my behavior are different, and so my behavior probably will be different whether or not I am conscious of it.
Metaphors are the stories programmers tell about ourselves. We are makers and builders, or we are scientists and engineers. We are crafters or servers. We are artists, assemblers, professionals, lovers of our profession. We are passionate seekers, humble students or skilled masters (or incompetent, frustrated, under-appreciated or under-performing geniuses). We are cowboys and ninjas and rockstars (oh, the assumptions in those...). Our stories about ourselves and our work change how we interact with each other, with our customers, with our code. I am never so intrigued by any specific badge as by the groups of people who choose to wear it.
For example, I believe that part of why “software is like building” became popular, rather than the more generic “software is like engineering”, is that more programmers want to be architects than engineers. We want to imagine ourselves as Frank Lloyd Wright, creating beautiful, useful, functional objects that people inhabit and own. As useful as electricity is, I admit that being Nikola Tesla is less appealing to me. “Architecture” as a metaphor lets us believe that we are practical artists and artistic engineers. It makes us a part of the tradition of architects, stretching back thousands of years and putting our not-yet-a-century of conversations to shame. Architects also have excellent PR, of course, and software isn't the only field to co-opt the word. The job titles "Interior Architect" and "Landscape Architect" are both attempts to borrow gravitas without giving up all of the art suggested by their original "Designer". Like "Agile" as a label, who wouldn't want to be An Architect?
I've started researching different metaphors, mapping their rises and falls in popularity. I have fifty years of past writings to dig through before I'll feel prepared to jump into the fray myself, but in the meantime I plan to share some of what I am discovering here. I've been intrigued in particular by some analogies that have been abandoned, and by the ways that our analogies begin to fall apart as the fields we compare ourselves to integrate technology. Over the past thirty-five years, building a house has become more like writing a program than writing a program has become like building a house. Though they produce fewer good stories, collaborating with such hybrid professions may provide more practical improvements in the creation of software.
Quantum Computers are transistor computers, except all at once.
All things are true, false and unknown, until they are observed. Answers are only opinions, but infinite opinions approach truth arbitrarily closely. A race against entropy; how much can you calculate in the blink of an eye? Even that is too long to be certain. Useful quantum computation is a compromise between reality and everything else. The more practical an approach is, the less likely it will be right. Miracles are correct, but impossible. Luckily, perfection is over-rated. Each thing contributes to and is shaped by an influential whole consisting of connections spatial, temporal and quantum. Neither individuals nor the whole can be described without describing the other. Grasp this, along with linear algebra, and you begin to understand.
The sorts of things that are likely to pop up here. This list is as much a reminder to me as it is a teaser for things to come:
1. Discussion of and updates on my independent projects
1.a. A GUI Matlab xUnit unit testing tool
1.b. Educational webapp games that don't reload the entire page any time you click a letter
2. Metaphors of software development: an on-going series
2.a. Interviews with experts in fields often used as analogies to software development, exploring how they actually work
3. Culture of software development
4. Early history of computer programming and how it influences current practices
5. The intersections of software development and society at large
6. The current state of internet activism
7. Interesting articles from around the web on a huge variety of topics
7.a. A link to my Google Reader, which is less about software and more about everything else in the world
8. Books I read and enjoy
9. Goings-on I attend in the Boston area
10. Intro-level how-tos of various sorts
10.a. First up, building your own computer