Ben Scofield

me. still on a blog.

On Taking Time to Think

I’ve gotten into a really bad habit when it comes to my work – every job in software development I’ve ever had has essentially been an (extremely fortunate) accident:

  1. My first gig (nearly 14 years ago) was just something to occupy my time after my first year of graduate school in philosophy.
  2. From there, one of my volleyball buddies needed a web developer for his team at Nextel, and I thought it sounded fun.
  3. Once I decided to leave Nextel, I found my next job at Viget through Craigslist. I had a competing offer for that move, though, so I did make at least one decision about what I wanted to be doing.
  4. As I prepared to leave Viget, I started looking for my next move under the radar. I talked to a few companies, but only got into real depth with two, and once I visited Heroku in San Francisco I was convinced – they seemed like the best match for the direction my career was moving.
  5. After some time at Heroku, I realized that I’d misunderstood my ideal mix of evangelism and building-things; I chatted with some friends at conferences, and ended up responsible for email at LivingSocial.

I should be clear: every one of these jobs was wonderful. I made great friends, learned tons, and solved difficult problems at each one. Beyond the actual work I did, however, I also learned more about myself at each position – the things I like and dislike in companies, how I’m most productive, how to work with other people, etc. Each move was largely an accident, but they all resulted in fantastic experiences that I wouldn’t trade for anything.

So, for my next trick – as of April 5th, I’m leaving LivingSocial. I’ve been there for nearly a year and a half, worked with brilliant people, written systems that operated on a scale few people see, and gained a pretty substantial range of startup experience.

Unlike my previous moves, however, I’m leaving LivingSocial without my next job lined up. Instead, I’m joining the ranks of the funemployed for a bit, specifically to think about what I want to work on next. I’m really looking forward to talking to people and finding out what amazing things are happening in the world that I would otherwise never have known about – so, if you’re doing something awesome and looking for a senior developer with a wide range of experience, let me know!

On Remote Work

I’ve worked remotely to a greater or lesser extent for three or four years now – ranging from working from home a few days a week to being 100% remote and on the other side of the country from the home office. In my experience:

For any given person, they’re going to be less effective working remotely than they would be working face-to-face with their team.

ZOMG, let’s all be Yahoo! and eliminate remote work!

OK, calm down. The story is a little more complicated than it might appear.

For one, remote work opens up your recruiting pool to more talented people who happen to live somewhere else and don’t want to move. If the best developers are in fact 10x more productive than average developers, then you’re much better off with them at 90% effectiveness than you are with a 1x or 2x developer at 100%.

For another, unless you have an on-site customer there’s a better-than-average chance that you’re already abandoning the “face-to-face with their team” part of the statement above, and you’re already paying the reduced-effectiveness cost of a partially-distributed team. Recognizing that and adjusting in response is the best way to reduce that cost.

And finally, there are benefits intrinsic to remote work. Separating the team makes groupthink less likely, makes intentional action more likely than reaction, helps people think twice about interrupting, and lets people work when and how they choose, which can make them happier.

In the Twitter conversation that spawned this post, Joe O’Brien put it succinctly:

There are costs and benefits to both requiring everyone to be in one place and allowing people to work remotely. In some cases, balancing those costs and benefits may end up with everyone together; in others, they might result in a partially- or fully-distributed team. There is no silver bullet, other than committing to making informed decisions.

 

On “People” Problems

My friend Justin Gehtland tweeted this a bit ago:

That spawned a quick response from me, Jim Van Fleet, and Ben Vandgrift. I want to refine what I said, however – and to a certain extent take it back.

First, a definition: a problem is a mismatch between the way the world is (facts and circumstances, in Justin’s formulation) and the way that some person wants the world to be. Not all such mismatches are problems (the non-existence of unicorns isn’t a problem, despite the fervor with which my daughter wishes they existed), but all problems are mismatches of this sort.

For example, here’s a problem: my website is vulnerable to a security exploit. This is a problem because the way the world is (my site is vulnerable) does not match my desires (to not have my site be vulnerable). 

Here’s another problem: a space probe sent to Mars stopped functioning the day after it landed. This is a problem because the way the world is (the probe stopped working) does not match a widely-held desire (for the probe to work properly over a long period of time).

OK, so if that’s what a problem is, what makes a problem a particular type of problem? What makes something a “people” problem?

I’d argue that we usually classify problems by their solutions. If the way to resolve a problem is by rebooting the server, then it makes sense to call it a server problem. If you have to replace a power source, then it’s a power problem. If you have to change code, then it’s a programming problem. 

So: problems are mismatches between the world and desire, and we classify problems by their solution. It follows from these two points that Justin was correct – all problems are people problems, because we can in fact resolve all problems by changing people. Specifically, we can eliminate any problem by changing the desire side of the mismatch. If I don’t care that my site is vulnerable to a security exploit, then the fact of its vulnerability is not a problem. If NASA doesn’t care that the Mars probe isn’t working, then its failure to work is not a problem.

Of course, this makes calling something a “people” problem a triviality. In the vast majority of cases, we want to resolve the problem by changing the world, not our desires. So, let’s refine our classification of problems so that it prefers solutions on the state-of-the-world side, falling back to the desire side only if no other option is available. For my insecure website, the best solution isn’t to stop caring that my site is vulnerable – it’s to fix the code to be more secure (so it’s a “security” or “programming” problem). For the Mars probe, on the other hand, we may not actually have a way to change the world – we can’t zip up to Mars and unstick the wheels, though we might be able to push code changes if the issue is one of programming. In that case, we’d either have to live with the problem unresolved or eliminate it by changing how we felt about the situation.

 

On Self-knowledge

Picking up on that “how do I know what I think” quote – let’s start to tie a little bit of this into topics that might be more interesting to my usual crowd. One of the most powerful things you can do in your life is journal. I don’t necessarily mean writing down your thoughts and feelings each day (though that can be extremely powerful) – I mean recording anything, reliably. Keeping a food journal is a proven strategy for improving eating habits and losing weight. Keeping a training log helps runners break through mental and physical barriers. Following GTD and keeping track of next actions helps millions of people feel – and be – more productive. This practice has become known (though usually in its more technology-enhanced forms) as the Quantified Self.

Here’s the thing: we, as human beings, are spectacularly bad at understanding what we do. Check the list of cognitive biases on Wikipedia (check it even if you’ve read it before, because you might be succumbing to a bias on the list) and you’ll see a litany of ways that our brains fool us. Read any of Dan Ariely’s books to learn about experiments showing that we’re more susceptible to external influence than we think and that we can rationalize away enormously surprising behaviors – and to get really depressed about our rationality, or lack thereof.

Journaling (or, more broadly, measurement) actively confronts some of these problems by allowing us to get a more objective view onto ourselves and our behavior than we’d otherwise have. Even journaling, however, is imperfect. If I wait until after work to write down what I had for lunch (I left my food journal at home!), then I might forget the second serving of the bread basket. One option is to make the journal harder to forget; making it an iPhone app ensures that it’ll be as close as my phone. The ubiquitous journal still requires effort, however, and allows me to inject subjectivity and error into the process. If I eat out after a hard workout, I might fudge the log on the grounds that “I earned a milkshake.” 

The real goal – and as far as I can tell this is true for every Quantified Self domain – is transparent, automatic tracking, with no manual intervention. Imagine how this might work for, say, running:

  • First, no tracking of any sort. You go out and run for fun.
  • Next, we add a pen-and-paper training log. After each run, you note how long it took, how far you went, etc.
  • Now, we add a stopwatch and a GPS, to get precise measurements and to help keep you from unconsciously rounding your times and distances in your favor.
  • Then, we switch over to Runkeeper or Nike+ so that the recording itself is automated.
  • Finally, we slap on a Fitbit or Nike+ Fuelband and track movement constantly, whether running or not.

With this level of automated, objective measurement, we finally have the ability to see exactly what we’re doing without the soft focus filter that clouds our eyes while we’re doing it. The challenge then becomes analysis and response – what does the data mean, and how should we change our future behavior to better meet our goals?

On Radical Honesty

OK, who’s heard of Radical Honesty? My introduction to it was in AJ Jacobs’ My Life as an Experiment. The premise is that you don’t lie, and (according to the founder) don’t filter at all. If you’re mad at someone, you let them have it with the full force of your anger; if you’re attracted to someone who is not your partner, you tell them that you’re thinking about what it would be like to have sex with them. This would seem to be the ultimate expression of the thought I expressed in On Integrity, but in practice I think it’s much too facile.

Here’s the thing: I fully believe that you have the opportunity to choose who to be. My two favorite quotes of all time are both related to this. Paraphrased, they’re:

How can I know what I think until I see what I say?

I started acting like the person I wanted to be, and I gradually became him.

The first one points to the extreme limits of our self-knowledge. We don’t control our internal thoughts and feelings, and at times we’re not even aware of what they are. That’s why flipping a coin to make a decision sometimes works so well – when it comes up on one side and we get disappointed, it’s evidence that internally we had a preference that we weren’t consciously aware of.

The second one points to the fact that our instantaneous lack of control of our minds doesn’t mean that we can’t train them to do what we want. If I decide I want to be more civil, then I can force myself to act more civil – and eventually, being civil becomes a reflex or a habit and I don’t have to force myself anymore. Per the discussion of integrity, of course, this only really works if I’m fully consistent. I can’t be civil in my non-anonymous interactions and still flame people at will on Twitter and expect to train myself away from the troll-tastic instinct.

The problem with Radical Honesty, then, is that it completely abandons the second principle. It’s all about advertising who you are now, as opposed to who you want to (and can) be.

 

On Integrity

Some years ago, I remember hearing about a novel (short story, maybe?) by a futurist-minded science fiction author. The premise is that in the future, surveillance is cheap, easily accessible, and (most importantly) universal. I imagine the introduction of such technology would be a boon to law enforcement and amateur porn enthusiasts alike, but that wasn’t the point of the story, which took place after society had somewhat adjusted to the new status quo.

The author’s view of that adjustment had society fragmenting into essentially two camps. One group of people decided to live in complete darkness – the total absence of light allowed them to retain their privacy, at obvious costs. The other group chose to live as if someone was always watching – abandoning all attempts to maintain privacy. 

(I imagine that an actual future of this sort would reveal a much larger third contingent who continue to go about their business as usual, under the assumption that “sure, someone could be looking, but at any given point they probably aren’t.”)

I never actually read this story, and I haven’t been able to find any trace of it online despite my casual searches for it, so it’s possible that I imagined it. Regardless, the underlying idea has stuck with me all this time – it’s an exploration of the old adage that “morals/character is what you do when no one’s looking.”

Now, I can’t claim to know which group I’d end up in were this hypothetical reality to come to pass – the people who live with complete integrity and transparency in the light, or the people who cling to the possibility of privacy and separation between how they are and how they wish to be perceived, at the cost of living in the world – but I fervently hope that I’d have the courage to join with the former.

All of that is to say: if you catch yourself doing something that you’d be, say, embarrassed by if the rest of your community discovered it, then you’ve got an incredibly strong signal that you need to reflect on what you’re doing. That’s not to say you absolutely shouldn’t do it – maybe your community and its standards are wrong – but it’s something that you shouldn’t just accept and do without thought.

I think the key is this: integrity is being the same person in public and in private. If you wouldn’t be proud that your private actions came to light, then Something is Wrong*.

To kickstart some arguments, here are things you might want to bring up in comments: dancing like no one is watching, using different voices with different listeners, aliases, privacy as a natural right.

* Note: this is something that I (and I imagine many others) struggle with, as you might gather from my post on self-deception. If there’s any good news in this thought, it’s that integrity of this sort is and will always be a journey.

Disclaimer: this is incomplete. There’s a ton more to say, and much of it ties into projects and questions that are more in keeping with things I’ve written and spoken about in the past (mastery, intentionality, etc.). I’m publishing this early to get this first step off my mind so that I can move on to the next bits.

On Self-Deception

I was listening to an interview with Dan Ariely on NPR earlier today – he was talking about topics from his latest book (The Honest Truth About Dishonesty) and he mentioned an experiment that I hadn’t heard of before.

The experimenters took two strangers, put them in a room and let them introduce themselves and chat for ten minutes. Afterwards, the experimenters asked the subjects if they’d told any lies during the conversation, which both subjects denied. Then, they were presented with the audio – and on average, people told 2-3 lies during that ten-minute conversation with a stranger. It’s a stark illustration of Ariely’s thesis in the book: we cheat (or in this case, lie) quite a bit, and we’re able to do so in part because we’re excellent at rationalizing it away.

So here’s an exercise that I’m trying. Every time I notice that I’m lying to myself – rationalizing an action that goes against what I’ve said I want, or against my principles, or whatever – I’m making a note of it. It’s … sobering, to say the least. I’m not exactly sure what I’m going to take away from it yet, but I can already see changes in my behavior as a result.

On Levels of Description

I’ve been fascinated by levels of description for as long as I can remember:

  • physics, chemistry, and biology as a kid – with both expected and emergent properties at each level
  • neurology to consciousness as an undergrad
  • different levels of selection in (and beyond) evolutionary theory in grad school
  • abstraction and realism in comics during my brief tenure as a web designer
  • low- to high-fidelity with user interface design (wireframes, mockups, etc.)
  • methods, classes, libraries, applications, and ecosystems – and how similar design patterns can be applied at each – over the past couple of years
  • requirements, implementation plans, and code in my daily work

I think the impact of choosing the right level to discuss a particular problem can’t be overstated – to paraphrase Einstein: discuss things as generally as possible, but no more generally. Check out any work on wireframing or paper prototyping and you’ll hear exactly why this is important: discussion tends to happen at the finest grain you provide.

If you give someone a wireframe with actual user interface elements, they’ll argue over whether checkboxes or radio buttons are appropriate – not whether there’s more or less information on the page than there should be. If you give someone a mockup in color, they’ll debate over the shade of blue you used – not over whether the elements of the page are in the correct relationship to each other. If you present a new project by showing code, they’ll argue over database schemas and method names instead of whether you’ve actually solved the problem at hand. 

So: be aware and intentional about how you communicate, and make sure you’re talking about things at the right level.

On Interface Segregation

As promised, here’s the first installment of my Fractal Design series. I’d like to kick things off with what might seem to be an odd choice for Rubyists: the interface segregation principle.

A Brief Introduction to the ISP

Interface Segregation is one of the SOLID design principles, and states:

Clients should not be forced to depend upon interfaces that they do not use.

The idea is that when a class implements an interface, it should only get methods that it needs. When followed, you get higher cohesion and less coupling in your code.

In Ruby, we don’t have explicit interface constructs – but we do have plenty of violations of this principle.

Utility Modules

At RailsConf a few months ago, DHH justified the asset pipeline by referring to the javascripts and stylesheets folders in Rails as junk drawers, full of bric-a-brac and this-and-that. Modules make a much better target for that description, though.

I’ve seen (and/or written) any number of medium- or larger-sized Ruby applications where there’s a utility module, filled with all sorts of methods used here or there in the application. Such modules are created with the best of intentions – people just want to stay DRY, after all, so they extract methods that appear in multiple classes or views, but they end up dropping those into an ever-expanding, unfocused module that then has to be included all over the place even when only a fraction of it is used in any particular context.
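To make that concrete, here’s a hypothetical sketch of such a module – the Toolbox name and its methods are invented for illustration, not taken from any real codebase:

    # A hypothetical junk-drawer module: unrelated helpers, all in one place,
    # included wholesale wherever any one of them happens to be needed.
    module Toolbox
      def log(message)
        $stderr.puts("[#{Time.now}] #{message}")
      end

      def format_credit_card(number)
        number.to_s.gsub(/\D/, "").scan(/\d{4}/).join("-")
      end

      def random_element(array)
        array.sample
      end
    end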

Sure, many classes that include Toolbox might use the log method – but how many of them need to format credit card numbers, or select a random element from an array? Here’s a better approach:
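(The refactoring from the original post isn’t reproduced here; the sketch below just continues the hypothetical Toolbox from above, split into small, focused modules.)

    # Each concern lives in its own module; classes include only what they use.
    module Toolbox
      module Logging
        def log(message)
          $stderr.puts("[#{Time.now}] #{message}")
        end
      end

      module CreditCards
        def format_credit_card(number)
          number.to_s.gsub(/\D/, "").scan(/\d{4}/).join("-")
        end
      end

      module Sampling
        def random_element(array)
          array.sample
        end
      end
    end

    class OrderProcessor
      include Toolbox::Logging      # needs logging...
      include Toolbox::CreditCards  # ...and card formatting, but nothing else
    end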

Granted, there’s more code now, but this ISP-following refactoring allows us to include just the pieces of Toolbox that we need where we need them. The result is a cleaner, more comprehensible, and more testable system.

ActiveSupport

For a very long while, the biggest ISP offender in Ruby was ActiveSupport. It was the utility module pattern writ large – including it anywhere brought in a huge amount of code, the vast majority of which was completely irrelevant to the reason you included it somewhere.

In Rails 3, ActiveSupport was refactored and modularized so that you could include just the pieces you need as you needed them:
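For instance, instead of requiring all of ActiveSupport, you can pull in individual core extensions – the two requires below are just representative examples:

    # Load only the core extensions you actually use.
    require "active_support/core_ext/object/blank"        # Object#blank?, #present?
    require "active_support/core_ext/string/inflections"  # String#underscore, #camelize, ...

    "".blank?                   # => true
    "FractalDesign".underscore  # => "fractal_design"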

And millions of gems, non-Rails applications, and more breathed a sigh of relief.

Rails Helpers

Sticking with Rails, there’s one violation of ISP still present in the current releases by default: automatic controller-based helper generation:
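(A sketch, with hypothetical names: generating a controller also generates a matching helper module, and everything in that module is mixed into every view the controller renders.)

    # app/helpers/posts_helper.rb – created automatically alongside PostsController
    module PostsHelper
      # This might only be used by the show view...
      def fancy_post_title(post)
        post.title.titleize
      end

      # ...but the whole module is mixed into index, new, edit, and every other
      # view the controller renders, whether or not they use any of it.
    end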

I can count on one hand the number of controllers I’ve seen that use the same helper methods across even a majority of their actions, much less all of them. It’s a clear violation – controller-scoped helpers are automatically included in all of that controller’s views, regardless of whether they are needed or not.

Luckily, this is easy to fix. In your config/application.rb, add a config.generators block:
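A minimal sketch (the application name here is hypothetical):

    # config/application.rb
    module MyApp
      class Application < Rails::Application
        config.generators do |g|
          g.helper false  # stop generating per-controller helper modules
        end
      end
    end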

And voila, the offending helpers are no longer created.

Of course, many of you have probably been ignoring the auto-generated helpers for years, creating smaller, more focused, more ISP-compliant ones the whole time. To you, I say: Good show!

JavaScript and CSS

In my introductory post, I promised that we’d be looking at these principles at various levels of code. In the above examples, we’ve mainly stuck to the class and library levels (or to reasonable facsimiles thereof) of complexity – but there’s another spectrum on which we can look at code. In addition to the server-side examples we’ve already seen, we can get significant violations of the ISP on the client-side, with JavaScript and CSS.

Go to your app and take a look at the JavaScript and CSS you’re serving up on each page. Chances are, there’s a ton of stuff in there that isn’t relevant to any given page – even if you’re using the new Rails asset pipeline, asset_packager, or something similar, it takes a tremendous and constant effort to make sure you’re only sending what you need down the pipe at any given time. I’ve seen thousands of lines of JavaScript and CSS come down on pages that don’t use any of it, and I’m sure you have, too.

Following the ISP is inherently harder when you get code from a third party. As much work as has been done to keep JavaScript libraries like jQuery slim, there’s still always going to be code in there that you don’t need. jQuery UI does a nice job of providing a custom script generator, allowing you to choose only the effects that you want – but that probably isn’t a universal solution.

With any design principle, there’s a tradeoff. With the ISP, it’s between convenience and precision. It may be absolutely more correct to have a unique, customized JavaScript and CSS load for each page on your site, but the organizational work to reach that goal – and the cognitive overhead of figuring out which features are shared between multiple pages and unique to individual ones, so that you can find the right file to edit – could easily become overwhelming.

Libraries and Applications

When you move from classes and modules to the larger scales of libraries and applications, the ISP doesn’t seem to map that closely to the code – we move from discussing implementing interfaces to providing and consuming them. Nevertheless, I think there’s still some good to come out of thinking this way. 

An ISP-friendly gem, for instance, should provide a single chunk of related functionality, without a ton of extraneous stuff thrown in (hey, old ActiveSupport – there you are again!). An ISP-friendly system of applications would pass around APIs where each application did something relatively specific… but at a certain point this approach transitions from being an example of interface segregation to illustrating the Single Responsibility Principle (which we’ll be getting to soon enough).

On Fractal Design

I’m on my way back home after a great Ruby Hoedown in Nashville, where I gave a talk called “Fractal Design.” This is the second(ish) time I’ve given this talk, having previewed it in a very conversational format at the Charlotte Ruby Users’ Group a month or so ago – but I’m starting to think it’s better suited for a blog post (or series of, at least), so: this.

The idea behind the talk is that, just as fractals are the immensely complex, impressive, and emergent results of the repeated application of a simple set of rules at different levels of magnification, so too might we produce complex and impressive software by repeatedly applying a simple set of principles and practices at different levels of code.

The Sierpinski Triangle

So most people have probably seen the Sierpinski Triangle at some point. Heck, I bet more of them have been idly doodled in middle school math notebooks than were ever generated by practicing mathematicians.

The basic idea is that you start with an equilateral triangle. 


You then remove the triangle formed by connecting the midpoints of each side, leaving you with three new triangles:


Next, you do the same thing to each of these three triangles – remove smaller triangles from the center of each:


And on and on, as many times as you like.


What you end up with is impossibly intricate, and can be manipulated in surprising ways. Carry the same idea into three dimensions and things get stranger still: the Sierpinski tetrahedron has no volume at all, and its cube-based cousin, the Menger sponge, manages to combine zero volume with an infinite surface area!
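If it helps to see the construction in code, here’s a rough, purely illustrative Ruby sketch that collects the triangles surviving after a given number of iterations:

    # A triangle is three [x, y] points. Each iteration keeps the three corner
    # triangles and discards the middle one.
    def midpoint(p, q)
      [(p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0]
    end

    def sierpinski(triangle, depth)
      return [triangle] if depth.zero?

      a, b, c = triangle
      ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)

      sierpinski([a, ab, ca], depth - 1) +
        sierpinski([ab, b, bc], depth - 1) +
        sierpinski([ca, bc, c], depth - 1)
    end

    triangles = sierpinski([[0, 0], [1, 0], [0.5, Math.sqrt(3) / 2]], 4)
    triangles.size  # => 81 (3 ** 4)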

Software

OK, so software isn’t exactly analogous to pure geometric space. Nevertheless, we do have distinct levels of magnification:

methods → classes → libraries → applications → ecosystems

We also have a large array of software design principles and practices, each of which is usually couched in the language of one of these levels (or at most, two adjacent levels). The SOLID principles, XP’s practices, the law of Demeter, design patterns – what would happen if we tried applying them across our entire body of software, instead of just when designing a library’s API, a user interface, or a single method?

So that’s the question. In future posts I’ll tackle this question with specific principles, and hopefully we’ll find something interesting.