tag:blog.bethcodes.com,2013:/posts Beth Andres-Beck 2022-11-10T18:58:59Z Beth Andres-Beck tag:blog.bethcodes.com,2013:Post/1896679 2022-10-29T18:59:21Z 2022-10-29T18:59:21Z A Two-Party System Makes The Primaries Important
Student loan forgiveness demonstrates why participating in primary elections is fantastic.
This wasn't Biden's priority and it didn't come from his base. If it hadn't been for the primary, this wouldn't have happened yesterday.
It was one of Warren's priorities. The bill that made student loan debt unable to be discharged in bankruptcy is what got her into politics in the first place, and mitigating that harm was high on her list.
I voted for Warren. That vote became part of the committed 15% support she got across the primaries, even when it was clear she wasn't going to win. When she dropped out and endorsed Biden, it was in exchange for him taking on some of her priorities: student loan forgiveness was one of them.
This means my vote wasn't wasted: people who had less support in the primaries couldn't negotiate for as much. Support in the primary translated into representation in the Democratic platform. Biden needed Warren's 15% to win in the general, and in exchange we got student loan forgiveness and Janet Yellen in Treasury.
Several things went into how effective that was. For one thing, Biden was confident that Warren could deliver her 15%. She hit the road for him with the same enthusiasm she had campaigned for herself, because it wasn't about her: it was about her priorities, and campaigning for Biden was campaigning for her priorities. We voted for him with almost the same enthusiasm we would have voted for her, because it wasn't just about her, it was about her priorities. And we knew that he knew he would need us again, both this fall and two years from now, so we could trust that he would follow through. And he did.
If you ever get frustrated with the "two-party system", I highly recommend getting involved earlier in the process. In American politics, we form the coalitions first, and then we select which of those two coalitions we would like to be the government. Forming the coalition is messier, but it is also a lot more satisfying than showing up at the end & picking between coalitions someone else designed.

In the primaries, it is strategic to vote our hearts even when our candidate won't win, because that is how we get our priorities adopted as party priorities. And then we go knock on doors when the general comes around to make sure our coalition has the power to get them done.

tag:blog.bethcodes.com,2013:Post/1887010 2022-10-04T23:38:11Z 2022-10-19T22:22:49Z For Delightful Code Reviews, Say Nice Things

A rebellion is brewing.  Ideas like post-commit reviews or even a return to cowboy coding are gaining traction over the unpleasant & unproductive experience that is the bug-hunt code review.

This is unfortunate, because code reviews are one of the delightful parts of our profession. They let us shape and revel in the things we build together. They let us be confident in our work, and demolish imposter syndrome. They are a powerful tool for building livable code with raptor numbers greater than one. While they aren’t the only way to achieve those benefits, unlike ensemble or pair programming they work across time zones and give people extra space.

The problem isn’t that code reviews are bad; it is that they are too often done badly.

Many software developers were introduced to code reviews via impersonal tools or corporate policies that require them. Those unfortunate programmers have never experienced a delightful code review and have no idea how to perform one.

While I can’t give every reader the experience of receiving a delightful code review, I can share with you the tools I use to perform them. Some of those tools require a supportive context or established relationships to work, but there is one you can start using today, no matter where you work:

Say nice things.

As you read the code you are reviewing, pay attention to how it makes you feel. Any time it inspires a spark of joy, any time you feel yourself smile, leave a comment.

If you don’t know why you felt joy, that’s okay: your comment can be simply “this delights me”, “:-D” or “Nice!” Your coworker gets to know you appreciate their work, and you get to notice which bits of our work you enjoy.

If you want to take it further, level 2 is figuring out what about that line made you smile. Maybe a name makes sense, or an API is elegant, or you recognize a design pattern used appropriately. By leaving a more-specific compliment, you give your coworker the opportunity to delight you more in the future. 

Level 3 is identifying what doing that good thing accomplished for you as a reader. This not only gives your coworker the chance to delight you; it lets them know the context where doing it again will be similarly helpful. It gives them information they otherwise have no way to learn.

A level 3 positive comment might be something like, “Great job naming this Fire Break! `summonCredentialsFromTheDeep` accurately communicates the monstrosities that lie in those depths. If something goes wrong with credentials, I will definitely know where to look, and it leaves a clear marker that I might want to Tidy First if I need to modify that code.” 

For this to pay off, you can’t fake it: you have to actually figure out what code you like and actually enjoy the code you are complimenting. This isn’t some shit-sandwich technique: if you don’t have something nice to say, for goodness’ sake don’t make something up.

It is also important to remember that joy is subjective. It is impossible to be wrong about what you enjoy because it is impossible to be right about what you enjoy. Your joy is your own.

The great thing about compliments is that they ask nothing of your coworker. You aren’t trying to get them to change anything, or telling them they are Wrong[tm]. If they take the compliment personally, they get to feel good about themselves. And it is a lot more satisfying to receive than a bland, impersonal “LGTM”.

That doesn’t mean it won’t ever change the code. It may turn out that your coworker wanted to accomplish something different. If how you read it wasn’t what they meant you to read at all, they now have the chance to more accurately communicate their intention! But even then, you still genuinely enjoyed the thing they did. Even if the code ends up changing later, nothing changes your experience of delight.

Compliments are thus a safe way to move code reviews beyond bug hunting. They show people that aesthetics are relevant to code quality. They establish that our subjective opinions of our coworkers’ code are a relevant topic, and they do so without asking anyone to do anything to accommodate those preferences. They let other developers think about whether they agree with your compliment, and they invite them to leave subjective comments of their own.

But even if no one else got anything out of these comments, I would still leave them. Our trade is fun, and it is worth taking the time to remind myself of that. Not every piece of code we write will gracefully communicate the problem and its solution, but when one does it is a wonder worth celebrating.


Enjoying those moments of grace is my privilege as a programmer.

tag:blog.bethcodes.com,2013:Post/1824448 2022-04-28T20:33:29Z 2022-06-03T08:09:23Z Coupling & Cohesion: How Musk Would Need To Rearchitect Twitter

Wired magazine published an article about why Musk’s Plan to Reveal The Twitter Algorithm Won’t Solve Anything.

Several of my non-programmer friends were interested in this, and we started chatting. Because the idea itself is so shockingly out of left field, I discovered it was a perfect opportunity to explain the properties of Coupling & Cohesion and why they matter.

Coupling & cohesion are defined in terms of a change you want to make to a system. In this case, Elon Musk would like to open source “the algorithm”, which he defines as all the bits of code that “make any changes to people's tweets, if they're emphasized or de-emphasized”.

Disclaimer

I want to be clear that nothing here is based on my experience with the Twitter code base. I wouldn't speak to any private information, and my experience was nearly a decade ago. Things have most certainly changed.

The information from current developers in the article is plenty for us to speculate about how coupled and cohesive the system is with regards to this particular change.

Cohesion

Wired magazine reports that there is “no single algorithm that guides the way Twitter decides to elevate or bury content”. Like many high-traffic platforms, Twitter uses a microservice architecture. According to the Wired article, multiple places in the code may promote or hide content, scattered across a multitude of services.

This is an example of low cohesion. We want to change all the code related to promoting or suppressing tweets at once. That code is scattered in many locations & mixed in with totally unrelated code. It isn't cohesive.

In order to understand all the ways a tweet might be promoted or hidden, every piece of the system involved in any of those would need to be open sourced and the reader would need to understand how those components interacted. This makes the change Musk wants expensive & error-prone.

To make the change easier, Twitter would need to rearchitect their system. This would involve moving all the related behavior together in one place. It would also involve separating any behavior in those components that isn’t about promoting or hiding a tweet. A service or a group of services that only handled promoting or hiding tweets would be high cohesion and possible to open-source.
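To make that concrete, here is a toy sketch in Ruby. Every name in it is invented for illustration; none of it comes from Twitter's code. In the low-cohesion version, the visibility rules are scattered across services whose main job is something else; in the high-cohesion version, every promote-or-hide rule lives in one module, which would be the only thing that needed to be open sourced.

Tweet = Struct.new(:text, :spam_score, :likes)

# Low cohesion (invented example): each service owns a fragment of the
# promote-or-hide behavior, mixed in with its unrelated responsibilities.
class TimelineService
  def build(tweets)
    tweets.reject { |t| t.spam_score > 0.8 }   # one hiding rule lives here...
  end
end

class SearchService
  def results(tweets)
    tweets.sort_by { |t| -t.likes }            # ...and one promotion rule lives here
  end
end

# High cohesion (invented example): all the promote-or-hide rules in one place.
module TweetVisibility
  def self.adjust(tweets)
    tweets.reject { |t| t.spam_score > 0.8 }
          .sort_by { |t| -t.likes }
  end
end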

Coupling

There are two sources of coupling: code that the code being changed relies on, and code that relies on the code being changed. (Anyone know better words to distinguish those two? Let me know, because that is a mouthful.)

Luckily for Twitter, from Wired’s description it sounds like they are mostly dealing with only one of those two kinds of coupling. If not much else depends on which tweets are promoted or hidden, it makes the change a lot easier.

Wired reports that the scattered pieces of code “perform a complex dance atop mountains of data and a multitude of human actions. Results are also tailored to each user based on their personal information and behavior.” That is to say, the code that promotes or hides tweets is highly coupled to many different parts of the current system.

This coupling could prevent Twitter from extracting the behavior into a cohesive unit. Even if the code was centralized, it would still require understanding code that had nothing to do with promoting or hiding tweets in order to understand what is happening. If it is particularly tightly coupled, it might even be impossible to separate without an intermediate step.

Reducing coupling is less straightforward than increasing cohesion. Twitter would need to consider why those dependencies were needed & what purpose the data served. They would then turn that understanding into an interface of some kind, with names that reflect that understanding. Twitter’s current data could then be swapped out for some other source of data that satisfied the same purpose. That would let the system be loosely coupled with respect to this change.
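As a sketch of what that interface step might look like, again with invented names rather than anything from Twitter's code: the ranking logic ends up depending on a small object whose method names say what purpose the data serves, not which internal system happens to hold it today.

# A narrow interface (invented names): the methods describe the purpose the
# data serves; where it comes from is hidden behind them.
class EngagementSignals
  def engagement_for(tweet)
    tweet.likes + tweet.retweets
  end

  def affinity_between(user, author)
    user.follows.include?(author) ? 1.0 : 0.0
  end
end

# The ranking code depends only on that interface, so any other data source
# that can answer the same two questions could be swapped in behind it.
class Ranker
  def initialize(signals)
    @signals = signals
  end

  def score(tweet, user)
    @signals.engagement_for(tweet) + @signals.affinity_between(user, tweet.author)
  end
end

# require "ostruct"
# tweet = OpenStruct.new(author: "ada", likes: 3, retweets: 1)
# user  = OpenStruct.new(follows: ["ada"])
# Ranker.new(EngagementSignals.new).score(tweet, user)   # => 5.0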

"Quality"

I want to be clear that nothing I describe here is the result of Twitter’s code being “bad”. Twitter’s code is built in a way that would make this particular change hard, but all code makes some kinds of changes hard.

With respect to different changes, Twitter’s system is already highly cohesive and loosely coupled. Twitter grew revenue 16% yoy last quarter. It has recently made obvious strides in reducing harassment & abuse on the platform. All of that involved hundreds of engineers safely evolving running software.

Writing "good" code isn't enough. Even the best code in the universe isn’t loosely coupled and highly cohesive with regards to every possible change.

Conclusion

I don’t think this goal is useful or plausible or something that will happen even if the sale does go through. I’m just using it here as a concrete example of a sweeping change that someone might want to make to an existing system.

Its very absurdity is useful. It clearly demonstrates why You Aren’t Going To Need It is a powerful approach to managing coupling & cohesion.

It is impossible to predict which billionaire in a midlife crisis will get angry that your company didn’t let him post Hitler memes in the wake of his breakup & decide to buy your company to make one specific change. Attempting to anticipate that eventuality would have been a massive waste of time & money.

But when something like that happens, regardless of what the system looks like today we can adapt it. And as long as the system solved the previous goals as gracefully as possible, supporting all the existing features plus this one new change will be as easy as we could make it.

Imagine that Twitter had guessed that a billionaire would get mad about an ad they showed him. They might have spent a similar amount of time & effort as this project will take today making the ad targeting logic cohesive & decoupled. The code still wouldn't be any more cohesive or loosely coupled with regards to the change that a billionaire actually wants. It would have cost a bunch of money to do, making all the other work over the years harder, and it still wouldn't make this change any easier.

Attempting to anticipate the future doesn't help us build systems that can adapt to it.

If Musk follows through on wanting to publish all the code that contributes to promoting or hiding a tweet it will cost Twitter a great deal of money. The result is unlikely to be particularly useful, especially when the greatest factor in whether a tweet is "promoted" is whether other human beings hit the “retweet” button.

But by employing these principles, by first increasing cohesion and reducing coupling with regards to the specific change, it would be possible.
tag:blog.bethcodes.com,2013:Post/1518926 2020-03-11T20:23:46Z 2020-03-11T20:23:46Z Moderating Discussions Over Video

As many colleges move online, I realize I have a somewhat-unique experience: I shamelessly ripped off the pedagogy from my small liberal arts college professors and have spent the last decade+ applying it at distributed tech companies.  I've facilitated video conversations with anywhere from three to fifty participants: in the course of my work, as part of reading groups for specific texts, on social science topics like "gender and racial bias in tech", and as part of consciousness-raising groups to help foment cultural change.

You all have the advantage that the students have already been interacting with one another and with you; that pre-existing trust makes it much easier.  And you are all going to be doing this for the first time, so you can figure it out together. The best advice I can give is to save five minutes at the end to talk about how you all feel the discussion went, and if it isn't working in the middle of class, to just stop and have a conversation about what isn't working.

The greatest challenge moving to video is that it is easier for people to check out from behind a screen and not have it be obvious.  The advantage is that if they do, it isn't as disruptive.  I always treat any video meeting as opt-in, and then work to make it easy for people to do that opting.

Basic advice:

  • Switch to Speaker View: when someone starts speaking their video will pop up, similar to you glancing at someone who takes a breath
  • Have the other videos in gallery view, so you can keep an eye on whether people are engaged
  • For up to 10 people I don't bother with raised hands.  For more than that, some video tools have a built-in mechanism; for the rest you can use chat.
  • Have one person, ideally not a participant, take collaborative notes so everyone else can pay attention.  If they are in the same room as you (ideal) they can also manage the queue of people who want to speak on a piece of paper so you can glance down and know who is next.
  • Encourage people to take paper notes, so their screens can just be the video.  Similarly, have them use an e-reader/tablet or print out any readings.  The other advantage of this is that you will be able to see when they glance down.

Facilitating:

  • Have a way for people to submit things they'd like on the agenda before the call starts: that gives you more time to plan. I typically use Slack for this, but wherever this group of people is chatting works.
  • Establish your facilitation plan up-front and communicate it, even if it is the same as last week. Remember, we've just taken out all the turn-taking mechanisms people are used to using for in-person discussions, so replacing them with something explicit helps.
  • Cover the goal of the specific discussion as well, and frame any expectations ("if you haven't read X you are welcome to observe but not participate" is a common one for our architectural reviews, for example.)
  • Since you can't go "around the room", for smaller groups where you want to go round-robin, write down the names of the participants on the call so you can call on them confidently.  I like using the same list to keep track of people's contributions later.
  • Plan the emotional arc of the conversation: this is the key to keeping people engaged.  
  • As much as possible, prompt for specific kinds of comments, rather than using open-ended questions.  The tendency for confident voices to dominate is amplified over video, and being more specific draws out people who aren't confident responding to general prompts.
  • People aren't getting the usual signals to stop talking, so don't be shy about interrupting. I like interrupting with clarifying questions: it draws people into the conversation, rather than driving them out, without letting them run away with the time.  Facilitation has to be pretty active to be effective.
  • Provide positive feedback on the process of how people are participating, as well as the comments themselves

Accessibility:

  • Use the best camera you can find, and try to ensure your internet connection is high-bandwidth
  • Use a microphone (I like https://www.amazon.com/Blue-Snowball-Condenser-Microphone-Cardioid/dp/B014PYGTUQ) or your phone headset if your computer's built-in microphone isn't great
  • Having someone share notes makes the information more accessible, because it is harder to take notes in the video format with a computer in front of you
  • If you have students with hearing loss, CART services can often integrate with video meetings
  • Using techniques like going around the room can help people who are thrown off by the loss of implicit turn-taking
  • If your students' equipment isn't as good, don't be shy about repeating questions

Beyond all of that, know that it isn't as different as it feels at first and it is absolutely possible.

tag:blog.bethcodes.com,2013:Post/1516202 2020-03-03T22:59:18Z 2022-11-10T18:58:59Z Rails Quirk: A Period in the URL Can Break The Route

When using Rails routing I came across an odd bug: a URL query parameter was breaking the route.  A URL query parameter without a period? Everything works fine. A URL query parameter with a period? 404.

I eventually found the answer in an off-hand comment in a random blog post, and traced it back to the code.  So that next time I remember what is going on, I figured I'd throw the explanation up here. By default, Rails assumes anything after the period represents the format (see the Mapping class defined in rails/actionpack/lib/action_dispatch/routing/mapper.rb). If, for example, you are using the format to decide whether a request should be served by a frontend app, that assumption can break the route.

To address this, you have two options.  First, you can follow the suggestion I've seen elsewhere and define your own constraint:

get "*path", to: "react_frontend#show", constraints: { path: /.*/ }
But to address the root of the issue, rather than papering over it with a regex, you can also just tell Rails not to look for a format at all:
get "*path", to: "react_frontend#show", :format => false






tag:blog.bethcodes.com,2013:Post/1427478 2019-08-16T01:07:05Z 2019-12-12T15:55:18Z Reflecting on Diversity, Inclusion and My Self-Alienation

Two and a half years ago I joined LTSE, with the goal of changing the incentives companies face to prioritize short-term profits over everything else.  In May, the SEC approved the creation of the Long Term Stock Exchange, making us one of only a handful of venues authorized to list publicly traded companies.

When this milestone happened I discovered that I still have an internalized voice that says, "if you prioritize hiring underrepresented developers, it means you are de-prioritizing success". I found that some part of me holds an insidious belief that places where I felt comfortable couldn't be "the best" companies.  By demanding representation, this voice said, I was asking a sacrifice of the company I was working for. I had gotten as far as believing that sacrifice was justified, even necessary for the sake of justice, but it was still something I was being granted.

That voice is wrong.

We've built an engineering team here that is racially diverse and gender-balanced. We say out loud that we aren't trying to hire "smart" developers: we are hiring skilled developers who believe in practicing their skills in order to improve. We don't believe in a "founder gene": our tools set out to make explicit the implicit knowledge those folks hoard, so that more people with valuable ideas can successfully found companies. My experience here is so different from what I had experienced elsewhere.  I no longer fantasize about quitting the industry on a regular basis. I feel like I can recruit without worrying that I am selling harmful snake oil, and I feel empowered to support people the way they want to be supported instead of the way the industry says we should want to be supported. But some part of me distrusts this ease. Part of me still believed that feeling comfortable must mean something is wrong, and that it is unreasonable to want this comfort "at the expense" of the things that "really matter".

That part is also wrong.

I don't believe that our success here vindicates "diverse" teams any more than not succeeding at this ridiculously ambitious mission would mean "diverse" teams are a failure. This is not a magical Utopia, and I still react to things that happen here with the weight of all those other experiences I have had. But this weekend I found myself crying as some deep-seated clenching, this sense that my basic existence was an impediment to success, loosened a little.  It is not unreasonable to want a community or company that takes me into account. We can succeed with a diverse team, where being a feminist is part of the bar, where we expect "D&I" efforts to be effective, where people take parental leave and no one yells and work is expected to be sustainable. It may even turn out that all of those things make it easier, not harder, to do useful, productive, successful work, rather than just being what it takes for me to not quit.

The part that always told me things could be different?  That part was right.

We are going to be hiring a bunch over the next stage of this project. Many of the people reaching out and proactively raising their hands are people who take for granted that every company in the world has a place for them. Some of them will turn out to be great, but my goal in this next phase is to make sure that other people, candidates who wouldn't think to jump in just because the project had some success, feel invited to join as well. I want them to know this company is for them, in a way it is not actually for all these people who get to assume that every company is.

tag:blog.bethcodes.com,2013:Post/1344239 2018-11-15T17:41:04Z 2022-02-28T19:52:12Z UX Patterns of Emotional Journey

When building transformative experiences for our users, we begin by identifying the emotion that motivates their engagement.  We then imagine how we want them to feel when we have provided for their need.  Finally, we are left to build something that we think can successfully transform the first into the second.

The only way to actually know if such a design works for a sufficient audience to support the product is to experiment and see, but there are some patterns of UX that can suggest things we might want to try.  None of these is a product all on its own: we also have to actually address a need people have in a way that provides some substantive value.  But since we can offer what we think of as value and still not have people walk away feeling better than when they walked in, this is a toolbox we can come back to to ensure that the actual value we provide is also giving people something they want.

There are many more of these possible: I look forward to hearing about the patterns you have discovered! If you are interested in reading more about the use of visuals and interaction in creating experiences, I highly recommend Understanding Comics and Reinventing Comics by Scott McCloud: they are an accessible entry point into the world of visual and interactive impact.  The Design Of Everyday Things and Emotional Design, by Don Norman, are also great starting points, as well as Theater Of The Oppressed, by Augusto Boal, and Impro by Keith Johnstone.

tag:blog.bethcodes.com,2013:Post/1189539 2017-09-07T18:45:07Z 2018-07-19T05:04:16Z Introductory Language Values

I was having a conversation with some people about languages to use to teach programmers.  I am not a teacher and it has been my impression that language choice matters a lot less than pedagogy when creating good programmers comfortable with all of the possible tools of software development.  That said, I still have opinions and here are the things I look for in an intro language:

* Some things should be SUPER easy, hard things should be possible, nothing has to be fast or maintainable
* Fast feedback cycles between writing a thing and seeing if it works, with easily-visible results.  We want to introduce students to the phenomenal magical power of coding, because that will provide intrinsic motivation to keep going when things are hard.
* Transparent: if you dig down, you can figure out why something does what it does (LISP, Smalltalk and JavaScript all have this property)
* Supports encapsulation and recursion: these are the hardest concepts for most students to grasp, so introducing them early and insisting students use them is valuable.
* Good code should be pretty, ugly code should be ugly: I don't care if it's possible to write terrible code, I care if readable code is obviously readable. Students need to be able to start developing an aesthetic sense of code quality right away, but shouldn't have to write clean code in order to get something working.
* No meaningful whitespace or magical numbers of characters: these are often confusing to people not used to working with computers, because in the other places we use language those things don't matter.
* Good IDE support: most students aren't going to be used to working on the command line, and introducing version control through an IDE is a gentler introduction. This helps with explorability and getting students over the hurdle of understanding that they aren't writing prose: they are building a world with its own internal rules system.
* Easy unit testing: it's hard enough to teach unit testing even at its easiest, but it is incredibly valuable to start with it because it introduces the idea of interfaces and encapsulation in a particularly tangible way. It can also help students learn to evolve software in small, safe steps.
* Publicly-visible well-written code bases: reading code is just as important as writing code when learning.
* An active, supportive, anti-sexist community: I want students to be able to feel like they belong when they go looking online for information about what they are doing.
* Doesn't try to be clever or optimize for experienced user productivity: ideally, I should be able to tell a story about the language in a single sentence. "Everything is an object" or "everything inside the parentheses gets run together" or "we send objects messages" are ways to bootstrap a mental model of the language (consider, for example, how Bootstrap uses LISP to teach students algebra: http://www.bootstrapworld.org/materials/spring2017/courses/algebra/units/unit1/index.html#lesson_Brainstorming5496)

Note that many of these are different things than I look for in a production language. I want students to make mistakes that help them learn, so protecting them from those mistakes isn't useful or helpful.  They aren't going to be working on large code bases, so libraries, package management and scalability aren't important. No language is perfect on all of these dimensions, but some are definitely better than others.

tag:blog.bethcodes.com,2013:Post/935198 2015-11-17T21:28:17Z 2022-04-15T20:53:35Z Confidence Through Feedback, or Why Imposter Syndrome is the Wrong Metaphor

Imposter syndrome is often presented as a personal failing.  A lack of confidence, our wrong-headed beliefs not matching the reality of how competent we are, or, worst of all, a flaw of our gender.  Just tell yourself you are wrong!  Imagine everyone else is just like you!  Have confidence in all parts of yourself except that part that tells you not to have confidence!

Unsurprisingly, these interventions are not often effective.  At best, they change behavior, frequently while making people who already feel bad about themselves feel worse.  At worst, they lead people to stop trying to improve the environment they have found themselves in.

I would like to offer an alternative story: imposter syndrome is a rational response to insufficient feedback. 

tag:blog.bethcodes.com,2013:Post/750082 2014-10-02T14:17:33Z 2014-10-02T14:18:02Z Email Template For Addressing Conference Gender Diversity

Someone I worked with had asked for recommendations when I noted the speaking lineup of a conference he was attending was exclusively men, and I figured I'd share the letter I came up with in case it is useful to others:

While I attended $CONFERENCE_NAME last weekend, I was disappointed to notice the oversights in your speaker lineup that led to it being made up of nothing but men.  Perhaps your prioritization of people with their own books to sell led to inadvertent systematic discrimination, as you were reliant on the discriminatory publishing world and more generally on people without significant non-work-related demands on their time (who are most likely to be either single men or men in non-egalitarian marriages.)  {Depending on your impression of the conference itself, something like: "Since I was also disappointed in how much of the conference devolved into the speakers plugging their own books, I am confident you could kill two birds with one stone by instead seeking out the most qualified speakers." could fit here too.}

I wanted to convey that when trying to build a group from $COMPANY to attend this weekend, a woman who is normally excited to attend local conferences had no interest at all.  Without any women speaking, a code of conduct or even the barest token of effort towards diversity, there was no evidence that there would be other women there or that the men involved see women as peers.  She expected that the weekend would, at best, be full of getting interrupted so men could explain things she already understood, dudes hitting on her, men quizzing her about the alien experience of being "one of those", people assuming she was part of the conference organizing staff or from recruiting or some attendee's wife, or people simply ignoring her altogether.  She also assumed that any complaints would be brushed off as disruptive to the existing exclusionary atmosphere that it appears the organizers have cultivated.

Going forward, if I see another lineup of all men speaking in Boston, I will have to assume she's right and you are actively working to run a conference that alienates women.  Since I'm not interested in that environment, this may be my last $CONFERENCE_NAME conference; I could instead have attended $OTHER_CONFERENCE_THAT_MADE_AN_EFFORT the week before, where both I and the women I know would have felt more welcome.

Now, the reasoning behind this approach.  I like trying to turn it into a contest between conferences, since the only pressure to change will eventually be economic.  It also circumvents the argument that it's not possible or there are no qualified women, without ever having to point out just how incredibly insulting that argument is.  Other conferences have worked hard to change the makeup of their conventions: accepting speakers through blind proposals (rather than just inviting people they know of or their currently-non-diverse attendees recommend), advertising a code of conduct widely and enforcing it when it comes up, creating scholarships for women who want to attend but whose companies won't support it, and seeking out and addressing feedback from women speaking and attending.  It's not like this stuff is easy; it's just possible.

I did have one more recommendation for the guy I was talking with: 

If you want to be helpful while you are there, be your usual polite and outgoing and aware self and discuss the technical work of any women you do meet, especially listening to their ideas and learning about the work they are doing.  A good interaction or two can brighten up even the most awkward conference.

tag:blog.bethcodes.com,2013:Post/702039 2014-06-09T20:54:45Z 2020-09-01T16:02:07Z The Dyslexic Programmer
I am dyslexic, and these are my experiences.  They certainly won’t be universal, especially as there isn’t just one form of dyslexia [0].  To identify my strain of dyslexia, I read quite quickly (though only somewhat accurately) through pure pattern-recognition. I can look at, say, "word" and identify that the second letter is an ‘o’, but if I want to understand it as a concept I ignore the letters involved all together [1].  Essentially, I’ve memorized how each word in the English language looks as a complete entity. (I also, thankfully, have an excellent conceptual memory.) This approach, of knowing something is made up of individual parts but not needing to worry about what specifically those parts are unless absolutely necessary, extends to how I approach math, history, social sciences and fantasy world-building as well. 

I believe that this tendency to generalization is why I am able to jump between levels of abstraction quite easily.  The concept of emergence, and the specific cases of recursion and polymorphism, are obvious to me.  Everything in the universe is made up of component parts, interacting in ways that give rise to the meta-phenomenon we observe, like “matter” and “consciousness”, and I can keep that in mind without worrying particularly about what those components are.  It is odd to me when people consider things to be discrete, isolated wholes; it can be useful to talk about them that way, but I usually don’t actually believe it.

tag:blog.bethcodes.com,2013:Post/651443 2014-02-07T02:02:39Z 2015-11-20T22:24:40Z How is Gender Studies hard?

Someone somewhere, of course, was mocking Gender Studies majors for taking an easy course because they couldn't hack liberal arts.  I made the point that the Gender Studies classes I took were significantly more difficult than my computer science classes.  Someone then asked me why I thought that was, and I came up with an answer:

tag:blog.bethcodes.com,2013:Post/616034 2013-11-05T04:56:14Z 2014-06-27T02:05:53Z In Brief: Evolution

Evolution isn't about "good" or "bad".  It is simply a word for a specific emergent process. It describes all the things that happened that led to the current state of affairs.  Sometimes they happened for reasons, under specific and identifiable pressures, but other times just by accident. It gets way over-simplified, especially by people looking for answers, since evolution is bad at providing answers, or reasons, because it's a description of an emergent system and not a driving force.

The original building blocks of evolution were these two observations:
1. Does a trait make it likely you will die early or otherwise won't reproduce when someone else will, given current evolutionary pressures? Then the trait is likely to be expressed in very few members of a species.

2. Does it directly lead you to have more kids, given the current environmental pressures? Then the trait is likely to spread, being expressed in a larger percentage of the following generation of the population. "Being able to digest milk when food is scarce" is a good and recent example. Note that even this doesn't imply a value judgement unless you think humans' value is based on reproduction (which some evolutionists do because they're wrong.) People who can't digest milk aren't defective. Indeed, environmental pressures can change and which traits are adaptive will change with them: now that we have better nutrition, being able to process lactose may no longer be an evolutionary advantage.

It turns out that in addition to the two more obvious dynamics, there are a bunch of other cases too:

3. Is a trait situationally useful, sometimes helpful and sometimes not?  It's likely to show up in some of the population, but not most. (There is an interesting cluster of traits that occur with 8-15% prevalence in humans, including male pattern baldness and ADHD.)  This is similar to a mixed equilibrium in game theory.

4. Does the trait allow for on-the-fly adaptation?  As programmers, we know how powerful reconfiguration can be.  The human brain, for example, is highly plastic and can adapt to changing circumstances, and our muscles grow better at performing exactly the tasks we perform with them.  Specialization is "expensive", in that it leaves the organism vulnerable to changes in the environment; allowing for cultural, technological or physical adaptation during a lifetime is an easier way to get a similar effect.

5. Is it a trait that was once useful, or is useful for some people even if not for you, and is not actively harmful? It may stick around! Dimorphism is complicated to evolve and thus usually only occurs under pressure.  This is why women have a prostate and men have nipples. Once something has evolved, it takes pressure to make it go away entirely, which is why we go through a phase in utero when we develop proto-gills.

6. Is it fun/attractive/entertaining/not actively annoying? Then it may not contribute to inherent fitness, but it is likely to be selected for anyway, because evolution isn't a passive thing done to us. It is a dialectic process: the process shaped us, and we get to shape the process. Cultural tastes or norms can lead to evolutionary pressure just as surely as any other environmental factor (which is how the Habsburgs lasted as long as they did: cultural power was more influential than any pressures against genetic disorders.)  This is similar to mechanism design in game theory: if we don't like the outcome of the game, change the game.

7. Is it genetically linked with something that is subject to any of the other positive dynamics?  Even if a trait itself is not useful or desirable or advantageous, it may share a common cause with something that is.  

8. Finally, does a trait have no reliable impact on reproductive success? Then it might happen anyway! This is called "genetic drift". Sometimes the answer to "why?" is "eh, why not?"

Assuming that something is one of the first two may seem really cool, but when trying to impress your friends and intimidate your enemies always remember: a trait might just not be bad enough to be worth getting rid of.
tag:blog.bethcodes.com,2013:Post/585942 2013-06-26T17:36:01Z 2013-10-08T17:26:46Z Reading List Referenced at Usenix Talk

My Usenix talk this year uses various books I've drawn on for inspiration as backgrounds for my slides.  The goal of this was to share some of the broader world beyond what we usually look to as computer scientists.  Some of these books are accessible, while others are extremely dense.  I recommend picking things up and putting them down if they don't speak to you.  It's all about what is useful, helpful and challenging to you where ever you are right now.

tag:blog.bethcodes.com,2013:Post/555997 2013-03-31T14:11:00Z 2013-10-08T17:20:36Z When Buying Computer Components

I'm doing a workshop on putting together computers in two hours, and rather than do handouts I figured I'd toss the links up on my blog. When buying components for a computer, I usually read:

Ars Technica

If you are buying the components for a machine, they do system guides that are a useful report on the state of the art each December-ish.  The one for 2012 is available here:

http://arstechnica.com/gadgets/2012/12/ars-technica-system-guide-december-2012/


Video Card Benchmarks

They have performance statistics and a ranking of video cards by performance per price that I find particularly useful.  It is helpful to remember that many of these benchmarks are relatively arbitrary, so if you have some specific game or application in mind it is worth finding reviews specifically for that application.

http://www.videocardbenchmark.net/gpu_value.html


Corsair

This is the easy way to find the RAM that goes with your motherboard.

http://www.corsair.com

 

Tom’s Hardware 

Has comparative reviews of various components, like hard drives, though I find their comparisons less easy-to-read than the video card benchmarks site.

http://www.tomshardware.com/

 

tag:blog.bethcodes.com,2013:Post/555998 2012-09-03T05:57:15Z 2013-10-08T17:20:36Z Fractal Design Patterns

The difference between Architecture and Code blurs quickly when refactoring becomes sufficiently common, so the distinction made between various pattern languages never seemed especially helpful to me. Between Architecture and Service the line is firmer: this code is mine, that code is yours, here is the interface. At the same time, I've found that the design patterns that work when I'm writing methods and classes still apply when I'm working with services. The goal is still to increase cohesion and decrease coupling, even if often I have no control over half of the code.

Thus, the idea of Fractal Design Patterns. Instead of the usual pattern description, which describes the pattern at a specific level of abstraction, a Fractal Pattern would illustrate it at multiple levels and try to get at the underlying principle.

For example, I'll take the algorithm-swapping-based-on-state that is described by the Strategy pattern.
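A minimal sketch of the class-level version, using a made-up shipping example rather than anything from a real code base: the caller holds whichever pricing object it was handed and just sends it a message, so the algorithm can be swapped without the caller changing.

Item = Struct.new(:cost)

# Strategy at the class level: the algorithm is an object we can swap.
class FlatRate
  def price(items)
    items.sum(&:cost) + 5
  end
end

class FreeShippingOverFifty
  def price(items)
    subtotal = items.sum(&:cost)
    subtotal >= 50 ? subtotal : subtotal + 5
  end
end

class Order
  def initialize(pricing)
    @pricing = pricing   # chosen at run time, invisible to callers of #total
  end

  def total(items)
    @pricing.price(items)
  end
end

# Order.new(FlatRate.new).total([Item.new(20)])              # => 25
# Order.new(FreeShippingOverFifty.new).total([Item.new(60)]) # => 60

The same shape holds a level up: if the pricing object were a whole service chosen by which URL we bind to at run time, the caller would still just send a message, and the algorithm behind it could be swapped without the caller changing. That is the sense in which the pattern is fractal.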

tag:blog.bethcodes.com,2013:Post/555999 2012-06-19T00:45:00Z 2018-04-25T19:42:53Z The Internet is Convincing Women Not To Study Computer Science

A summary from YodasEvilTwin on Slashdot:

"The internet is dominated by sexist men, which discourages women from getting involved in related fields."  

I add a bunch more caveats, references and empirical data, but that is a good summary of how I interpret the evidence.


Introduction

There is currently a responsibility-dodging contest between industry and academia over who is to blame for the declining enrollment of women in Computer Science and declining employment of women in software development. I hear people in industry bemoan the "empty pipeline", while academics maintain that women aren't entering their programs because of perceptions of the industry.  I have compiled some data that may help resolve the question by highlighting a third factor common to both: access to an Internet-based culture of computing.

tag:blog.bethcodes.com,2013:Post/556000 2012-06-14T19:33:00Z 2013-10-08T17:20:36Z Assumptions Make Programming Possible

Scott McCloud, in Understanding Comics, uses a simple image to explain how people employ assumptions when reading comics:

I may have drawn an axe being raised in this example, but I'm not the one who let it drop or decided how hard the blow or who screamed or why.  That, dear reader, was your special crime, each of you committing it in your own style.

I argue that the same is true when reading code.  The difference, however, is that with executables we can check those assumptions against our invented reality.

tag:blog.bethcodes.com,2013:Post/556001 2011-12-10T00:12:00Z 2013-10-08T17:20:36Z An Introduction to Services From This Coder's Perspective
First, there is the question of “what is a Service?”  

W3C defines a web service as:
“a software system designed to support interoperable machine-to-machine interaction over a network.”[1]

They go on to specify some specific technologies mandated by this particular standard and the role a service plays:
“The purpose of a Web service is to provide some functionality on behalf of its owner -- a person or organization."
 
This definition describes the infrastructure, focusing on how we deploy services and what purpose they serve right now to MBAs.  It is as though they are defining a program as "a sequence of machine instructions that perform mathematical or logical operations on behalf of a computer operator”.  I want a definition that tells me how to address the technical questions that emerge and provides guidance on how we will be writing and employing services going forward.  My current personal definition is:
 
A Service is an Object bound at run-time by a networking protocol.

Note that I mean “object” here under Alan Kay’s definition [2] and not in the particular sense that the word is used in any specific language like Java or C++.  Despite varying dynamics, all service frameworks encapsulate some combination of data and behavior (not necessarily *well*, of course, but when they fail it is usually clear from the pain that follows.)  They hide implementation details while fulfilling either implicit or explicit contracts in response to messages sent by other services.[3]

The interesting change from current Object-Oriented programming paradigms is that they also encapsulate the location where they are executing and the provider of the object.  Because we bind to the specific implementation at runtime using a networking protocol of some kind, the object can be running on any machine, virtual or physical, anywhere reachable by that protocol.  I don’t think this fundamentally changes the paradigm any more than dynamically linked libraries did, but it is really, really cool.
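A toy sketch of that definition, using only the Ruby standard library (the URL and endpoint here are invented for illustration): the caller sends this object a message, and the fact that the answer comes from some machine reachable over HTTP is an encapsulated detail.

require "net/http"
require "json"
require "uri"

# An "object bound at run time by a networking protocol": data and behavior
# are hidden behind messages, and which machine answers is decided when the
# object is constructed, not when the code is written.
class ProfileService
  def initialize(base_url)
    @base_url = base_url
  end

  def profile_for(username)
    body = Net::HTTP.get(URI.join(@base_url, "/profiles/#{username}"))
    JSON.parse(body)
  end
end

# profiles = ProfileService.new("https://profiles.example.internal")
# profiles.profile_for("beth")   # the caller neither knows nor cares where this ran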
tag:blog.bethcodes.com,2013:Post/556003 2011-11-01T03:52:40Z 2013-10-08T17:20:36Z Swapping from Google Reader to Tiny Tiny RSS

Now that Google has broken Google Reader by integrating it with Google+, I was looking for a replacement that would let me use my daily reading of feeds the way I always had: as a way to share long-form content with other folks who specifically wanted to read the long-form content I shared this way (opt-in broadcast). Google+ defeats the purpose: I like my RSS feed and my friends' shared items specifically because of the high signal-to-noise ratio and the lack of dilution with other content.

My search led me to Tiny Tiny RSS. It offers a similar feature to Google's shared items, except instead of a specific social area it “publishes” items to your own RSS feed. It does not replace the comment or discussion capabilities of Google Reader, but it has the advantage of being open source and something I can host myself; if I ever have some free time I can address any flaws that continue to bother me.

tag:blog.bethcodes.com,2013:Post/556005 2011-04-11T16:09:00Z 2013-10-08T17:20:36Z A Preface to My Future Work on the Economics of Open Source

“Economics is basically about incentives and interaction — or, as Schelling put it, micromotives and macrobehavior. You try to think about what people will do in certain circumstances, and you try to understand how individual behavior adds up to an overall result.” – Paul Krugman

The economics of open source software has generally been approached from the perspective of “why would people do this thing?”  This makes some sense; classical economic models leave non-monetary considerations to the realm of game theory and sociology and the question of micromotives initially looks exceptionally opaque.  The result, however, has been a skeptical approach to applying an economic lens to open source and a general failure to explain the macrobehavior involved.  Most papers I’ve found attempt to explain away open source as human irrationality, rather than demonstrate the way it fits with, and indeed validates, our existing models.  I'm one of those people who think that if reality clashes with a model, the problem is probably not reality.

 

tag:blog.bethcodes.com,2013:Post/556006 2011-03-08T20:27:37Z 2018-02-25T12:35:42Z CMake Tips & Tricks: Drop Down List

In a recent CMake project I was setting up, I wanted users to be able to choose one of several possible libraries at project generation, to make performance comparisons easy on multiple platforms.  This is easy enough to do with a configuration parameter, but since the available libraries were a limited set, offering them as a list of options seemed better.  I discovered that in the CMake GUI it is possible to have a drop down menu of options for a given property, and it’s actually quite easy.  The only thing to keep in mind is that this approach doesn’t enforce anything; the user could still enter other options.  Since this is only used by developers to generate projects, I didn’t particularly care.  They break it, they bought it, as it were.

First, we use a cache variable and enumerate the options for our drop-down list:

SET(LIBRARY_TO_USE "Option1" CACHE STRING "library selected at CMake configure time")

SET_PROPERTY(CACHE LIBRARY_TO_USE PROPERTY STRINGS Option1 Option2 Option3) 

After that it’s just a matter of changing the things that should change when this option changes.  There are a couple possible approaches here, though none of them completely satisfy my aesthetic sense.

1.       The first is simply to call everything on every invocation, but that defeats the purpose of the caching in the first place. 

2.      I can make sure this .cmake file is included at the top of the project CMakeLists.txt, so it is called before anything else that might include this library.  In that case I can check the LIBRARY_FOUND variable, which is set the first time any of these libraries are loaded during a build.  The upside of this is that if multiple files include this .cmake file it will only reload everything once per project generation.  The downside is that it relies on not having someone load the library before this file is included, and that was a deal breaker; I don’t want to rely on implicit assumptions.  Also, it still reloads the cache once per build. On the up side, if I want to vary non-cache values this allows me to group all the change logic in one place.

3.       The final option is explicitly checking to see if the variable has changed by caching the last value inside of the has-changed if statement. This requires using a second cached variable to hold state and initializing it if it is undefined. Additionally, this variable should never be changed by a user, so I use MARK_AS_ADVANCED to hide it from the GUI.

I used option three, which looks like:

IF(NOT(DEFINED LIBRARY_LAST))

     SET(LIBRARY_LAST "NotAnOption" CACHE STRING "last library loaded")

     MARK_AS_ADVANCED (FORCE LIBRARY_LAST)

ENDIF()

IF(NOT (${LIBRARY_TO_USE} MATCHES ${LIBRARY_LAST}))

     UNSET(LIBRARY_INCLUDES CACHE)

     SET(LIBRARY_LAST ${LIBRARY_TO_USE} CACHE STRING "Updating Library Project Configuration Option" FORCE)

ENDIF()

The important part of this is “UNSET”.  Any cached variables that are set in the Find.cmake file will need to be explicitly cleared in order for them to be actually updated.  The rest of it is simply determining whether or not the parameter changed.

Finally, we need to change parameters on the basis of what option is selected.  If only cache variables change, we can include this in the “if changed” loop, but I was using non-cached variables accessed by the Find.cmake files, so I set these each time.  It would be cleaner to separate these into their own CMake files with a regular naming scheme, but since I was only setting one parameter I didn’t bother.  This looked like:

IF(${LIBRARY_TO_USE} MATCHES "Option1")

     SET(LIBRARY_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/Vendor/Option1”)

ENDIF()

    

IF(${LIBRARY_TO_USE} MATCHES "Option2")

     SET(LIBRARY_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/Vendor/Option2")

ENDIF()

Etc.  The path naming conventions were not actually that regular either, or I wouldn't have needed the switch statement.

All of these went into a SetLibraryOptions.cmake file, and I added INCLUDE(SetLibraryOptions.cmake) to the root-level CMakeLists.txt file.  When I included this library in a future target, I used the regular package syntax with ${LIBRARY_TO_USE} as the package name.  This is why it was so useful to have a drop down menu here: each package name must exactly match the format of the Find.cmake file.

Now, when I use the library in another package the include will look something like:

IF (NOT LIBRARY_FOUND)

     FIND_PACKAGE(${LIBRARY_TO_USE} REQUIRED)

     IF(NOT LIBRARY_FOUND)

           MESSAGE(FATAL_ERROR "failed to find " ${LIBRARY_TO_USE})

     ENDIF()

ENDIF()

And that’s it; when the user selects a different library all of the projects will be regenerated with the new option.  The final result looks like:

 

tag:blog.bethcodes.com,2013:Post/556009 2011-03-08T19:43:14Z 2013-10-08T17:20:36Z @dharmesh Polls Twitter: What Do We Call People Who Code?

For those of you that write code, what term do you prefer? Programmer? Engineer? Developer? Something else?

Dharmesh Shah asked this question on Twitter yesterday, and I did a quick compilation of the public responses.  For the answers with more than one vote I include the total votes and also a score. About a third of responses used some form of, “I like X, but sometimes I use Y”, and instead of throwing that information away I awarded 3 points for a first choice, 2 for a second choice and 1 point for a third choice.

  • Developer – 17 votes/score 52
  • Engineer – 11 votes/score 29
  • Programmer – 4 votes/score 11
  • Hacker – 4 votes/score 11
  • Coder – 3 votes/score 8

The answers that appeared only once were:

  • Byte Surgeon
  • Architect
  • Tinkerer
  • Code Monkey
  • Codewright
  • Professional Geek
  • Someone who types on a keyboard all day in air conditioning
  • Chief Ideas Officer

 

It definitely looks like “Developer” is the standard, but what immediately jumped out at me was the way some people embrace the same aspects of the job others try to avoid. Some people reported that “Programmer” sounded too much like someone who just wrote code and didn’t think about it, whereas someone else described their job as “Code Monkey”, which revels in that role. Some of the creative responses, like “Chief Ideas Officer”, didn’t imply any contact with code at all, whereas others, like “Byte Surgeon”, implied a visceral, low-level involvement.

It seems like some of the trade-off is between “code” and “prestige”, which is always disappointing for me to discover.  Several people suggested they would use different words if talking to a fellow coder rather than someone outside the profession, usually preferring "Engineer" when talking to people who don't write code themselves. This is perhaps why “Developer” wins out in the end: it seems to suggest a job that involves typing things that get executed, one way or another, without also suggesting that someone handed you pseudocode to implement. Which may be to say, it is uniformly bland and uninformative, conveying as little information about the tasks performed and the role played as humanly possible.

It is clear that there are multiple jobs that would fall in this category, though, even if we don't yet have the language to articulate the differences.  Certainly independence vs. subordination is a common theme, but I also noticed there were no terms proposed that specifically called out "team member" or "collaborator". I would personally prefer such a term to either the independence of "Hacker" or the subordination of "Code Monkey".  Unfortunately, any such word runs the risk of stepping too far from the technical roots, and implying that the code writer is no longer elbow-deep in bloody code.

]]>
tag:blog.bethcodes.com,2013:Post/556011 2011-02-21T02:38:00Z 2013-10-08T17:20:36Z Programming is...: Why Metaphors Interest Me

Metaphor has a pernicious effect. It encourages people to take anecdotes as proof and effective rhetoric as useful advice, and to accept only ideas that fit their preconceptions. Metaphors are better at conveying values than at conveying specific, practical advice. They can obscure the areas of ignorance and uncertainty where evidence should be collected, and lead us to believe we understand things we don’t.

Despite all that, I love metaphor.  So far it is the most effective tool I’ve encountered for sharing values, the motivations behind process, and the assumptions we bring to discussions of how best to build software. Besides, I am the sort of person who sees parallels in everything I do. For me, writing software is like writing plays, post-modern lit crit, economics, psychology, art, poetry, baking, animal husbandry, gardening, physics, music and more. Some of these metaphors have communicated usefully to others (“code reviews as writers’ workshops”), others … not so much (“the computer as an economy”). Most often they are helpful going the other direction: I can describe coding to a poet by drawing analogies with their field, while most programmers probably wouldn’t find those parallels useful, because they are better versed in the language of software development than in the language of poetry.

Even though it may be where they are least needed, metaphors about code have always been employed when programmers talk to each other. Rather than invent a wholly new language, we compare our profession to everything from animation to farming. Sometimes these ideas feel more like thesis statements, a way to make otherwise context-less books flow and hold a reader's attention. Others are so widely embedded in our expectations that tracking down their origins is difficult.

Part of my background is in performance theory: the idea that people act, in part, to conform to or rebel against the stories they tell or hear about themselves. Psychology has more complete theories that describe individual manifestations of this, but I am interested, instead, in the interpersonal consequences of stories. We relate to other people based on what roles we see them play in our stories, and what role we see ourselves playing in theirs. If the senior person on my project describes themself as a “software architect”, I will assume that my role as a programmer is different than if they describe themself as “technical lead”. They might actually perform the same tasks in either role, but I will assume that their expectations of my behavior are different, and so my behavior probably will be different, whether or not I am conscious of it.

Metaphors are the stories programmers tell about ourselves. We are makers and builders, or we are scientists and engineers. We are crafters or servers. We are artists, assemblers, professionals, lovers of our profession. We are passionate seekers, humble students or skilled masters (or incompetent, frustrated, under-appreciated or under-performing geniuses).  We are cowboys and ninjas and rockstars (oh, the assumptions in those...)  Our stories about ourselves and our work change how we interact with each other, with our customers, with our code. I am never so intrigued by any specific badge as by the groups of people who choose to wear them.

For example, I believe that part of why “software is like building” became popular, rather than the more generic “software is like engineering”, is that more programmers want to be like architects than like engineers. We want to imagine ourselves as Frank Lloyd Wright, creating beautiful, useful, functional objects that people inhabit and own. As useful as electricity is, I admit that being Nikola Tesla is less appealing to me. “Architecture” as a metaphor lets us believe that we are practical artists and artistic engineers.  It makes us part of the tradition of architects, stretching back thousands of years and putting our not-yet-a-century of conversations to shame.  Architects also have excellent PR, of course, and software isn't the only field to co-opt the word.  The job titles "Interior Architect" and "Landscape Architect" are both attempts to borrow gravitas without giving up all of the art suggested by the original "Designer".  Like "Agile" as a label, who wouldn't want to be An Architect?

I've started researching different metaphors, mapping their rises and falls in popularity. I have fifty years of past writings to dig through before I'll feel prepared to jump into the fray myself, but in the meantime I plan to share some of what I am discovering here. I've been intrigued in particular by some analogies that have been abandoned, and the ways that our analogies begin to fall apart as the fields we compare ourselves to integrate technology. Over the past thirty-five years building a house has become more like writing a program than writing a program has become more like building a house.  Though they produce fewer good stories, collaborating with such hybrid professions may provide more practical improvements in the creation of software.

]]>
tag:blog.bethcodes.com,2013:Post/556012 2011-02-17T16:34:00Z 2013-10-08T17:20:36Z In koans: Quantum Computing

Quantum Computers are transistor computers, except all at once.

All things are true, false and unknown, until they are observed.

Answers are only opinions, but infinite opinions approach truth arbitrarily closely.

A race against entropy; how much can you calculate in the blink of an eye? Even that is too long to be certain.

Useful quantum computation is a compromise between reality and everything else. The more practical an approach is, the less likely it will be right. Miracles are correct, but impossible. Luckily, perfection is over-rated.

Each thing contributes to and is shaped by an influential whole consisting of connections spatial, temporal and quantum. Neither individuals nor the whole can be described without describing the other. Grasp this, along with linear algebra, and you begin to understand.]]>
tag:blog.bethcodes.com,2013:Post/556014 2011-02-14T16:41:00Z 2013-10-08T17:20:36Z Future topics of discussion

The sorts of things that are likely to pop up here.  This list is as much a reminder to me as it is a teaser for things to come:


1. Discussion of and updates on my independent projects
1.a. A GUI Matlab xUnit unit testing tool
1.b. Educational webapp games that don't reload the entire page any time you click a letter
2. Metaphors of software development: an on-going series
2.a. Interviews with experts in fields often used as analogies to software development, exploring how those fields actually work
3. Culture of software development
4. Early history of computer programming and how it influences current practices
5. The intersections of software development and society at large
6. The current state of internet activism
7. Interesting articles from around the web on a huge variety of topics
7.a. A link to my Google Reader, which is less about software and more about everything else in the world
8. Books I read and enjoy
9. Goings-on I attend in the Boston area
10. Intro-level how-tos of various sorts
10.a. First up, building your own computer

]]>