Holy mango! Talk about unexpected. When I wrote my last entry on Feynman and engineering, I was aiming for my 5-strong subscriber base. After one-time deductions of friends and family, that’s a negative number of readers. Not in a million years could I have guessed it would end up on Slashdot. But now a decent respect for my newfound readership compels me to explain myself a bit better (or try, anyway).
The biggest controversy was around the "bottom-up" idea. A number of people, including NASA engineers, wrote me about the need for top-down balance. I agree with this view. Feynman’s "bottom-up" is not a dismissal of top-down analysis. When he talks about the lack of a "preliminary study of materials and components" in relation to the engine, it’s clear that such a study would be guided by a plan and exploratory design. After all, engineers can’t randomly test materials until a space shuttle engine crystallizes in front of them. The problem Feynman points out is the lack of essential information about reality in the design. Analysis is important, but it must not overrule or disregard reality. And reality is best exposed by the ultimate bottom-up affair: experimentation. Feynman’s bottom-up is empiricism plus the "attitude of highest quality".
I’m not going to dwell on philosophy lest this degenerate into postmodern blabber. For those interested, I think Feynman’s flavor of science is best shown in the last chapter in The Character of Physical Law and in the electromagnetism and quantum mechanics bits of The Feynman Lectures on Physics. The brilliant empirical mind behind Appendix F is laid bare in these wonderful, fun books. But how does this apply to software? Empiricism in a project context is described well in the business literature. Here’s what In Search Of Excellence has to say in the chapter "A Bias For Action":
The problem we’re addressing (…) is the all-too-reasonable and rational response to complexity in big companies: coordinate things, study them, form committees, ask for more data (…). Indeed, when the world is complex, as it is in big companies, a complex system often does seem in order. But this process is usually greatly overdone. Complexity causes the lethargy and inertia that make too many companies unresponsive.
The important lesson from the excellent companies is that life doesn’t have to be that way. Their mechanism comprises a wide range of action devices especially in the area of management systems, organizational fluidity, and experiments. (…)
There is no more important trait among excellent companies than an action orientation. (…) They don’t indulge in long reports. Nor do they install formal matrixes. They live in accord with the basic human limitations we described earlier: people can only handle a little bit of information at one time.
Finally, and most important, is the user connection. The customer, especially the sophisticated customer, is a key participant in most successful experimenting processes.
Action and experimentation are the cornerstones of empiricism. No attempt is made to subdue reality by extensive analysis and copious documentation. Reality is invited in via experiments. Instead of agonizing over market research, an empirical company hires interns and develops a product in one summer. A non-empirical company has 43 people planning an off-button design for one year. Empirical companies still rely on analysis. P&G has memos; they’re just limited to one page. But software projects are not after "empirical reality"; we just want working products. Built to Last deftly relates experiments to process in a chapter entitled "Try a Lot of Stuff and Keep What Works":
What looks in hindsight like a brilliant strategy was often the residual result of opportunistic experimentation and "purposeful accidents".
Bill Hewlett told us that HP "never planned more than two or three years out". (…) We could go on with examples from Citicorp, Philip Morris, GE, Sony, and others. (…) We were surprised to find so many examples of key moves by the visionary companies that came about by some process other than planning. Nor do these examples merely represent random luck. No, we found something else at work (…): evolutionary progress. Evolutionary progress begins with small incremental steps.
After dubbing 3M the "Mutation Machine From Minnesota" the authors say:
If we had to bet our lives on the continued success and adaptability of any single company (…), we would place that bet on 3M. Using 3M as a blueprint for evolutionary progress at its best, here are five basic lessons (…).
- Give it a try – and quick!
- Accept that mistakes will be made.
- Take small steps.
- Give people the room they need.
- Mechanisms – build that ticking clock.
Built to Last makes the inescapable link to biological evolution, the epitome of bottom-up experimental development. Top companies experiment vigorously with products and processes, driven by the market and organizational metrics. Nature experiments with genetic variation, driven by natural selection. The common theme is that successful systems are driven by reality through experimentation. That’s dandy, but how about software? The best discussion I know of software-as-evolution is the famous LKML thread where Linus shuns top-down design in favor of experimentation. I think of it this way:
A good software development process should optimize experimentation and improve feedback from reality. This is what I mean by reality-driven development. And in software the most important realities are user experience and technical quality, while the primary experiments are working software and code. This isn’t a formal model (heh), it’s simply my favorite analogy for software development. I like the name "reality-driven" because when you mention reality people think of users. And I like the model because it helps me focus on important stuff and on effective ideas, like Paul Graham’s advice to release early and let the market design the product. It also has good explanatory power. Firefox is such a great browser due to intense experimentation in the form of add-ons. Waterfall is so awful because reality is ignored: when the time for feedback comes, the project is over.
There is no specific reality-driven methodology. The Agile principles have a lot in common with these ideas (and certainly influenced them), but the devil is in the details. I prefer to think of software engineering in terms of a toolbox, full of techniques we pick and choose for the right situation. Process tools for optimizing experimentation include iterative development, executable architecture, continuous integration, and unit testing.
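Of those process tools, unit testing is the cheapest experiment to run: each test probes a small slice of reality and reports back in seconds. A minimal sketch in Python (the `parse_price` function and its cases are hypothetical, just to show the shape):

```python
import unittest


def parse_price(text):
    """Parse a user-entered price string like '$1,299.99' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)


class ParsePriceTest(unittest.TestCase):
    # Each test is a tiny, repeatable experiment: feed the code a slice
    # of reality and check the outcome immediately.
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_dollar_sign_and_commas(self):
        self.assertEqual(parse_price("$1,299.99"), 1299.99)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_price("  7.50 "), 7.5)
```

Run with `python -m unittest` on every change; the value isn’t the example itself but the fast feedback loop it closes.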
Based on this model, the two realities we care about are user experience (including the software’s utility) and technical quality. User experience is often neglected in agile and waterfall alike. The measurement tools come from the usability people and from plain old business sense. Techniques include usability testing, observing users, spending time with users (preferably in their habitat), talking to users, and hugging users. Technical quality revolves around the code base and third-party tools. Here we’re looking for the ol’ bit of ultraviolence plus generality, clarity, simplicity, security, etc. Tools include code inspections, code reviews, and metric reports as part of the build. The elusive hiring of good programmers is crucial, but it’s not measurement, so it falls within the "software project" box.
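A "metric report as part of the build" can be as humble as a script that flags oversized source files on every run, a crude but cheap proxy for complexity. A sketch in Python (the threshold, the `.py` extension, and the function name are all made up for illustration):

```python
import os

# Hypothetical quality gate: flag source files whose line count
# exceeds a threshold, as a rough proxy for complexity.
MAX_LINES = 400


def report_long_files(root, max_lines=MAX_LINES):
    """Return (path, line_count) pairs for files exceeding max_lines,
    longest first."""
    offenders = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                count = sum(1 for _ in f)
            if count > max_lines:
                offenders.append((path, count))
    return sorted(offenders, key=lambda pair: -pair[1])
```

Wire something like this into the build and the report becomes a standing measurement of the code base, not a one-off audit.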
When I think about prerequisites (requirements and top-down design) I do so in the context of this reality-driven model. Prerequisites can optimize experimentation by minimizing cost and risk. I have seen how well-written requirements can quickly take a team from zero to working software that’s close to users’ wishes. Likewise, good top-down design can help achieve technical quality faster. But I think of prerequisites as sketches, not blueprints. I prefer minimal specs that produce working software to be molded by the users. And rigid upfront design is a sure way to a crappy code base or engineering disasters. Alistair Cockburn put it best: "With design I can think very fast, but my thinking is full of little holes."
In the end, feedback from reality helps you avoid Ivory Tower Development and pass the Ultimate Unit Test: you make your users happy. A reality-driven process with management buy-in purges faulty O-rings and gets the right materials in a shuttle engine. It avoids abominable applications. It brings money and fame and huge obelisks in your honor. So now you know my idea of bottom-up:
- Have a bias for experiment over analysis, though both have their place.
- Optimize experiments: make them as early, fast, cheap, and broad as you can. Analysis can help here.
- Experiment vigorously.
- Be smart and proactive about measuring reality: user experience and technical quality.
- React to feedback. Let reality drive.
Of course, you can turn the empirical machine towards the process itself, and try to improve the way you build rather than what you build ("It’s fractal, dude!"). That’s the whole point of Built to Last. Also, I’ve found that Built to Last and In Search Of Excellence work well for explaining evolutionary/agile ideas to senior management.
I hope I didn’t kill the aforementioned newfound readership by boredom. Thanks for reading and see you next time. The new server arrives on Friday.