I learned touch typing in October 2006, first on a regular keyboard layout and then on Dvorak. I’m posting the results for those who wonder about switching.
My situation was a common one for people who grew up with computers. I typed at a decent speed (~50 WPM) but in a wholly chaotic way. While I didn’t fish for letters, I sometimes had to glance at the keyboard to "get my bearings". Some words were burned into finger memory: "printf", "protected virtual", "SELECT * FROM", "site:". You know, words that people commonly use. But I wanted to improve my typing for efficiency and ergonomics (I wanted my monitors at the right eye level, without having to look down at the keyboard).
Here are the results. Speed is in net words per minute (total words minus mistakes). The starting point was ~50 WPM, which was without touch typing and on a regular layout. Before touch typing my accuracy was much lower, so the net speed was well below the total speed. Touch typing is more accurate. That’s not shown in the numbers, but it does make you happier (in a control-your-environment sort of way). Also, my original ~50 WPM would drop lower depending on the typing task – being able to type without looking is a boon sometimes.
| | Regular Touch Typing | Dvorak Touch Typing |
|---|---|---|
| Hand comfort | Much worse than starting point | Almost as good as starting point |
| Speed after 2 days | 20-25 WPM | 15-20 WPM |
| Speed after 2 weeks | ~30 WPM | ~30 WPM |
| Speed after 2 months | Stopped. Decided to try Dvorak. | ~50 WPM |
| Current speed | – | 75-80 WPM |
I did some research about Dvorak at first but decided against it because the speed difference compared to the regular layout didn’t seem worth the hassle. So I bought Typing Master and learned regular touch typing. Using the software you can learn touch typing very fast, in a matter of hours. By "learn" I mean you’ll be able to type without looking, but you’ll be slower than the Filthy Critic’s cousin. It feels like you’re handicapped: you want to type but the fingers just won’t move. Avoid heated online discussions for the first two weeks. After 2 weeks I was back up to about 30 WPM, but touch typing on the regular layout felt uncomfortable. I have decent finger mobility from playing the guitar, but the movements still felt ungodly. I decided to switch to Dvorak instead.
Support for Dvorak is excellent in Windows and Linux. I changed the system settings to Dvorak and kept the same keyboard. I actually like the fact that the keys have the wrong labels: it makes you stop looking in no time. Typing Master is geared towards the regular layout, but it handles Dvorak well enough; still, I would look for Dvorak-specific typing software. The move was pretty painless; the major issue was typing passwords. Sometimes I had to use a conversion diagram (link below) to type a password, since I couldn’t see what I was typing nor look at the keyboard to know which key to press. Also, people have a hard time using my computers now. That can be a bug or a feature. I don’t think you should expect a speed boost from Dvorak, but it feels a lot easier on the hands than regular touch typing. I was surprised to find out that, at least for my hands, "chaotic" typing is the most comfortable method. I was also surprised that typing on Dvorak did not help me impress women.
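The conversion diagram boils down to a simple lookup: find where a character lives in the Dvorak layout, then read off the QWERTY label printed on that physical key. Here's a minimal sketch of that idea in Python; the layout strings cover only letters and a bit of punctuation, so it's an illustration, not a full keymap.

```python
# Which physical key (by its QWERTY label) do I press, with Dvorak
# active, to produce a given character? The two strings below list the
# same key positions in each layout, main three rows only.
QWERTY = "qwertyuiopasdfghjkl;zxcvbnm,./"
DVORAK = "',.pyfgcrlaoeuidhtns;qjkxbmwvz"

def key_to_press(char):
    """Return the QWERTY label of the key that yields `char` on Dvorak."""
    return QWERTY[DVORAK.index(char)]

# To type "hello" on Dvorak, press the keys labeled j, d, p, p, s.
print("".join(key_to_press(c) for c in "hello"))  # jdpps
```

Going the other way (what appears on screen when you press a QWERTY-labeled key) is the same lookup with the strings swapped.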
Based on my experience, I suggest this:
- If you don’t touch type and want a speed boost, move to Dvorak
- If you touch type on regular and have hand problems, move to Dvorak
Otherwise, don’t bother. Links:
- This diagram (prints better than it looks) and this one helped a lot. They are crucial for the first few days, and for passwords in the first couple of weeks.
- I really liked Typing Master (no affiliation), but there are free alternatives out there
Holy mango! Talk about unexpected. When I wrote my last entry on Feynman and engineering, I was aiming for my 5-strong subscriber base. After one-time deductions of friends and family, that’s a negative number of readers. Not in a million years could I have guessed it would end up on Slashdot. But now a decent respect for my newfound readership compels me to explain myself a bit better (or try, anyway).
The biggest controversy was around the "bottom-up" idea. A number of people, including NASA engineers, wrote me about the need for top/down balance. I agree with this view. Feynman’s "bottom-up" is not a dismissal of top-down analysis. As he talks about the lack of a "preliminary study of materials and components" in relation to the engine, it’s clear that such a study would be guided by a plan and exploratory design. After all, engineers can’t randomly test materials until a space shuttle engine crystallizes in front of them. The problem Feynman points out is the lack of essential information about reality in the design. Analysis is important, but it must not overrule or disregard reality. And reality is best exposed by the utmost bottom-up affair: experimentation. Feynman’s bottom-up is empiricism plus the "attitude of highest quality".
He came from the same island as Martin Fowler
I’m not going to dwell on philosophy lest this degenerate into postmodern blabber. For those interested, I think Feynman’s flavor of science is best shown in the last chapter in The Character of Physical Law and in the electromagnetism and quantum mechanics bits of The Feynman Lectures on Physics. The brilliant empirical mind behind Appendix F is laid bare in these wonderful, fun books. But how does this apply to software? Empiricism in a project context is described well in the business literature. Here’s what In Search Of Excellence has to say in the chapter "A Bias For Action":
The problem we’re addressing (…) is the all-too-reasonable and rational response to complexity in big companies: coordinate things, study them, form committees, ask for more data(…). Indeed, when the world is complex, as it is in big companies, a complex system often does seem in order. But this process is usually greatly overdone. Complexity causes the lethargy and inertia that make too many companies unresponsive.
The important lesson from the excellent companies is that life doesn’t have to be that way. Their mechanism comprises a wide range of action devices especially in the area of management systems, organizational fluidity, and experiments. (…)
There is no more important trait among excellent companies than an action orientation. (…) They don’t indulge in long reports. Nor do they install formal matrixes. They live in accord with the basic human limitations we described earlier: people can only handle a little bit of information at one time.
Finally, and most important, is the user connection. The customer, especially the sophisticated customer, is a key participant in most successful experimenting processes.
Action and experimentation are the cornerstones of empiricism. No attempt is made to subdue reality by extensive analysis and copious documentation. Reality is invited in via experiments. Instead of agonizing over market research, an empirical company hires interns and develops a product in one summer. A non-empirical company has 43 people planning an off-button design for one year. Empirical companies still rely on analysis. P&G has memos, they’re just limited to one page. But software projects are not after "empirical reality", we just want working products. Built to Last deftly relates experiments to process in a chapter entitled "Try a Lot of Stuff and Keep What Works":
What looks in hindsight like a brilliant strategy was often the residual result of opportunistic experimentation and "purposeful accidents".
Bill Hewlett told us that HP "never planned more than two or three years out". (…) We could go on with examples from Citicorp, Philip Morris, GE, Sony, and others. (…) We were surprised to find so many examples of key moves by the visionary companies that came about by some process other than planning. Nor do these examples merely represent random luck. No, we found something else at work (…): evolutionary progress. Evolutionary progress begins with small incremental steps.
After dubbing 3M the "Mutation Machine From Minnesota" the authors say:
If we had to bet our lives on the continued success and adaptability of any single company (…), we would place that bet on 3M. Using 3M as a blueprint for evolutionary progress at its best, here are five basic lessons (…).
- Give it a try – and quick!
- Accept that mistakes will be made.
- Take small steps.
- Give people the room they need.
- Mechanisms – build that ticking clock.
Built to Last makes the inescapable link to biological evolution, the epitome of bottom-up experimental development. Top companies experiment vigorously with products and processes, driven by the market and organizational metrics. Nature experiments with genetic variation, driven by natural selection. The common theme is that successful systems are driven by reality through experimentation. That’s dandy, but how about software? The best discussion I know of software-as-evolution is the famous LKML thread where Linus shuns top-down design in favor of experimentation. I think of it this way:
A good software development process should optimize experimentation and improve feedback from reality. This is what I mean by reality-driven development. And in software the most important realities are user experience and technical quality, while the primary experiments are working software and code. This isn’t a formal model (heh), it’s simply my favorite analogy for software development. I like the name "reality-driven" because when you mention reality people think of users. And I like the model because it helps me focus on important stuff and on effective ideas, like Paul Graham’s advice to release early and let the market design the product. It also has good explanatory power. Firefox is such a great browser due to intense experimentation in the form of add-ons. Waterfall is so awful because reality is ignored: when the time for feedback comes, the project is over.
There is no specific reality-driven methodology. The Agile principles have a lot in common with these ideas (and certainly influenced them), but the devil is in the details. I prefer to think of software engineering in terms of a toolbox, full of techniques we pick and choose for the right situation. Process tools for optimizing experimentation include iterative development, executable architecture, continuous integration, and unit testing.
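Of those process tools, unit testing is the cheapest experiment of all: each test pokes reality (the code) and reports back immediately, in the step-by-step, bottom-up fashion Feynman praised in the avionics group. A minimal sketch using Python's built-in unittest, with a made-up `word_count` function standing in for real code under test:

```python
import unittest

# Hypothetical function under test; an illustration, not code from
# any real project.
def word_count(text):
    """Count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each small test is an experiment: a claim about reality that the
    # code either confirms or refutes right now, not at project's end.
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_simple_sentence(self):
        self.assertEqual(word_count("release early and often"), 4)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
    unittest.TextTestRunner(verbosity=0).run(suite)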
Based on this model, the two realities we care about are user experience (including the software’s utility) and technical quality. User experience is often neglected in agile and waterfall alike. The measurement tools come from the usability people and from plain old business sense. Techniques include usability testing, observing users, spending time with users (preferably in their habitat), talking to users, and hugging users. Technical quality revolves around the code base and third party tools. Here we’re looking for the ol’ bit of ultraviolence plus generality, clarity, simplicity, security, etc. Tools include code inspections, code reviews, and metric reports as part of the build. The elusive hiring of good programmers is crucial, but it’s not measurement, so it falls within the "software project" box.
When I think about prerequisites (requirements and top-down design) I do so in the context of this reality-driven model. Prerequisites can optimize experimentation by minimizing cost and risk. I have seen well-written requirements quickly take a team from zero to working software that’s close to users’ wishes. Likewise, good top-down design can help achieve technical quality faster. But I think of prerequisites as sketches, not blueprints. I prefer minimal specs that produce working software to be molded by the users. Rigid upfront design is a sure way to a crappy code base or an engineering disaster. Alistair Cockburn put it best: "With design I can think very fast, but my thinking is full of little holes."
In the end, feedback from reality helps you avoid Ivory Tower Development and pass the Ultimate Unit Test. You make your users happy. A reality-driven process with management buy-in purges faulty o-rings and gets the right materials in a shuttle engine. It avoids abominable applications. It brings money and fame and huge obelisks in your honor. So now you know my idea of bottom-up:
- Have a bias for experiment over analysis, though both have their place.
- Optimize experiments: make them as early, fast, cheap, and broad as you can. Analysis can help here.
- Experiment vigorously.
- Be smart and proactive about measuring reality: user experience and technical quality.
- React to feedback. Let reality drive.
Of course, you can turn the empirical machine towards the process itself, and try to improve the way you build rather than what you build ("It’s fractal, dude!"). That’s the whole point of Built to Last. Also, I’ve found that Built to Last and In Search Of Excellence work well for explaining evolutionary/agile ideas to senior management.
I hope I didn’t kill the aforementioned newfound readership by boredom. Thanks for reading and see you next time. The new server arrives on Friday.
On January 28th, 1986, Space Shuttle Challenger was launched at 11:38 am on the six-day STS-51-L mission. During the first seconds of liftoff, the o-rings (rubber gaskets that seal the joints between booster segments) in the shuttle’s right-hand solid rocket booster (SRB) failed. As a result, hot gases at temperatures above 5,000 °F leaked past the o-rings and damaged the SRB’s joints. The shuttle continued its ascent, but seventy-two seconds later the compromised SRB pulled away from the Challenger, causing sudden lateral acceleration. Pilot Michael J. Smith uttered "Uh oh" just before the shuttle broke up. Torn apart by excessive aerodynamic forces, it disintegrated rapidly. Within seconds the severed but nearly intact crew cabin began to free fall, and seven astronauts plunged to their deaths. I was a child then and remember watching in horror as Brazilian TV showed the footage.
At the time I didn’t know that SRB engineers had previously warned about problems in the o-rings, only to be dismissed by NASA management. I also didn’t know who Richard Feynman or Ronald Reagan were. It turns out that President Reagan created the Rogers Commission to investigate the disaster. Physicist Feynman was invited as a member, but his independent intellect and direct methods were at odds with the commission’s formal approach. Chairman Rogers, a politician, remarked that Feynman was "becoming a real pain." In the end the commission produced a report, but Feynman’s rebellious opinions were kept out of it. When he threatened to take his name off the report altogether, the commission agreed to include his thoughts as Appendix F – Personal Observations on the Reliability of the Shuttle.
It is a good thing it was included, because the 10-page document is a work of brilliance. It has deep insights into the nature of engineering and into how reliable systems are built. And you see, I didn’t put ‘software’ in the title just to trick you. Feynman’s conclusions are general and very much relevant for software development. After all, as Steve McConnell tirelessly points out, there is much in common between software and other engineering disciplines. But don’t take my word for it. Take Feynman’s:
The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes.
So software is not the only discipline where the longer a defect stays in the process, the more expensive it is to fix. Nor is it the only discipline where a "top down" design, made in ignorance of detailed bottom-up knowledge, leads to problems. There is, however, a difference here between design and requirements. The requirements for the engine were clear and well defined. You know, go to space and back, preferably without blowing up. Feynman is arguing not so much against Joel’s functional specs as against top-down design of the kind advocated by the UML-as-blueprint crowd. On goes Feynman:
The Space Shuttle Main Engine is a very remarkable machine. It has a greater ratio of thrust to weight than any previous engine. It is built at the edge of, or outside of, previous engineering experience. Therefore, as expected, many different kinds of flaws and difficulties have turned up. Because, unfortunately, it was built in the top-down manner, they are difficult to find and fix. The design aim of a lifetime of 55 missions equivalent firings (27,000 seconds of operation, either in a mission of 500 seconds, or on a test stand) has not been obtained. The engine now requires very frequent maintenance and replacement of important parts, such as turbopumps, bearings, sheet metal housings, etc.
Unfortunate top down manner, difficult to find and fix, failure to meet design requirements, frequent maintenance. Sound familiar? Is software engineering really a world apart, removed from its sister disciplines? Feynman elaborates on the difficulty in achieving correctness due to the ‘top down’ approach:
Many of these solved problems are the early difficulties of a new design. Naturally, one can never be sure that all the bugs are out, and, for some, the fix may not have addressed the true cause.
Whether it’s the Linux kernel or shuttle engines, there are fundamental cross-discipline issues in design. One of them is the folly of a top-down approach, which ignores the reality that detailed knowledge about the bottom parts is a necessity, not something that can be abstracted away. He then talks about the avionics system, which was done by a different group at NASA:
The software is checked very carefully in a bottom-up fashion. First, each new line of code is checked, then sections of code or modules with special functions are verified. The scope is increased step by step until the new changes are incorporated into a complete system and checked. This complete output is considered the final product, newly released. But completely independently there is an independent verification group, that takes an adversary attitude to the software development group, and tests and verifies the software as if it were a customer of the delivered product.
Yes, go ahead and pinch yourself: this is unit testing described in 1986 by the Feynman we know and love. Not only unit testing, but ‘step by step increase’ in scope and ‘adversarial testing attitude’. It’s common to hear we suck at software because it’s a "young discipline", as if the knowledge to do right has not yet been attained. Bollocks! We suck because we constantly ignore well-established, well-known, empirically proven practices. In this regard management is also to blame, especially when it comes to dysfunctional schedules, wrong incentives, poor hiring, and demoralizing policies. Management/engineering tensions and the effects of bad management are keenly discussed by Feynman in his report. Here is one short example:
To summarize then, the computer software checking system and attitude is of the highest quality. There appears to be no process of gradually fooling oneself while degrading standards so characteristic of the Solid Rocket Booster or Space Shuttle Main Engine safety systems. To be sure, there have been recent suggestions by management to curtail such elaborate and expensive tests as being unnecessary at this late date in Shuttle history.
This is one of many such passages. I picked it because it touches on other points, such as the ‘attitude of highest quality’ and the ‘process of gradually fooling oneself’. I encourage you to read the whole report, unblemished by yours truly. With respect to software, I take away four main points:
- Engineering can only be as good as its relationship with management
- Big design up front is foolish
- Software has much in common with other engineering disciplines
- Reliable systems are built by rigorously tested, incremental bottom-up engineering with an ‘attitude of highest quality’
There are other interesting themes in there, and Feynman’s insight can’t be captured in a few bullet points, much less by me. What do you get out of it?
This weekend I created an ASP.NET Runtime Cheat Sheet to be used as a quick reference for HttpRequest, HttpRuntime and AppDomain/Process/Identity stuff.
It shows several members of those classes, each with its live value from my site, a link to MSDN, and some explanations. I included links to useful tools (like Process Monitor) and good posts (like anything K. Scott Allen writes). The idea is to be brief and have the highest possible information-per-word ratio.
I wrote all of the information retrieval code as a user control, so it is easy to embed in an application for debugging. The code is in AspNetRuntimeDiagnostics.ascx, MIT-licensed as usual. The output should be restricted to trusted users, though, since it exposes a lot of information to potential attackers.
I hope this is useful to others. There are a couple of bits that could use more description, which I plan to add. The cheat sheet is a live document, your suggestions and corrections are very welcome.
Hands-on SQL Injection is the first article in my Hands-on Web Security series (a ‘series’ of one, so far). It teaches you what SQL injection is and how to protect against it. But the fun part is that I have two live holes open for exploitation, hosted at http://victim.duartes.org (which is really just an alias, since I run the site off an old creaky computer that would blow up with VMware).
If you know what SQL injection is but aren’t exactly sure how it works, here’s your chance to have at it without breaking federal law.
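The core of the flaw fits in a few lines. Here's a minimal sketch using Python's built-in sqlite3 as a stand-in for a real database server, with a made-up `users` table; the article's live holes run against MSSQL, but the mechanics are the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DANGER: attacker-controlled strings are pasted straight into SQL.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: the driver treats inputs as data, never as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

# The classic injection: the crafted "password" turns the WHERE clause
# into (... AND password = '') OR '1'='1', which is always true.
attack = "' OR '1'='1"
print(login_vulnerable("alice", attack))  # True - authentication bypassed
print(login_safe("alice", attack))        # False - treated as a literal
```

The fix is the same in ASP.NET: use SqlParameter objects instead of concatenating user input into command text.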
The hard part about offering live exploitable holes is to block privilege escalation for malicious users. In my case I needed a locked down database where arbitrary SQL can't do any damage. And thus was born Lock Down SQL Server 2005, which teaches you a variety of techniques for hardening your whole MSSQL installation plus specific databases. It also has a world-accessible SQL Shell where you can experiment with a hardened database and attempt white-hat escalation.
Interestingly, we face a similar challenge in real-world production systems (as opposed to websites with kids' pictures), where we must run bastard applications guaranteed to have security problems. Hardening our systems against them to achieve a secure whole despite the weak links is what Defense in Depth is all about.
I have posted some code along with the article, I hope the stuff is useful to other people. Try it out!
When it comes to application security, you can usually tell people who have actually exploited vulnerabilities from those who have only a conceptual grasp of the subject. Seeing exploits in action goes a long way towards making security issues real. So I've started a series of short tutorials called Hands-on Web Security using live, exploitable vulnerabilities to illustrate the concepts behind them. I hope to cover the major types of flaws afflicting web apps, one per article. Each flaw will have an explanation of the problem, live holes to be exploited, and recommendations on how to avoid the problem.
By the way, the fine people at OWASP have an application along those lines. It's called the Web Goat Project. The main difference is that you have to download it and run it on your computer, and it's geared towards security people and more advanced exploitation, whereas my stuff is geared towards developers and architects.
So, I just heard about this 'blogging' thing. I doubt it'll ever become popular, but as an early adopter I figured I'd give it a shot.
I plan to write weekly entries, mostly about software. Some of them will be announcements for more in-depth articles.
Here we go.