These are all the posts archived for the ‘Journal’ category

A Capability Maturity Model for HCI Research

From time to time I’ve had opportunities to advise organizations on establishing and staffing an HCI research programme.  This has often been a challenge, both for them and for me, because HCI is usually a poorly understood subject within organizations that want to form an HCI research group.  This is a problem whether their existing competences lie in computer science, psychology or, as in one of my less successful career moves, IT consultancy.

It was during one of these advisory projects that I reflected on the experiences we’d had at Xerox trying to apply the SEI’s Capability Maturity Model to our research.  I realised that the problems we had with CMM were mostly due to the nature of our research, and that a “CMM for HCI” might be no bad thing.

One measure of your capability in HCI is the level of research results you can produce. My 1993 studies of the products of HCI (see my December 11th posting) had brought out what form these contributions take: novel designs, empirical findings, predictive models, etc.  And my attempts to make these contributions had taught me, often painfully, that some were more challenging than others.  Indeed the skill in building novel applications that I initially brought to bear on my HCI research turned out to be quite commonplace.

The HCI Capability Maturity Model that I offered to my client went like this:

  1. Inventing: Designing and building innovative applications, and applying standard usability testing methods to them;
  2. Exploring: Conducting controlled user studies of applications and techniques, or conducting field studies of application domains, and in either case reporting findings;
  3. Device-level-modelling: Building models of device-level interaction, and using them to explain and/or enhance the performance of existing devices and techniques; or Application-enhancing: Applying study findings to the design of applications, repeating the studies and reporting fresh findings;
  4. Application-modelling: Building models of application domains, and using them to explain and/or enhance the performance of existing systems;
  5. Method-enhancing: Developing new methods for design or evaluation, conducting comparisons of them with existing methods, and reporting findings.

The main point we can take away from this model is that the lower levels (1 and 2) of HCI research aren’t very different from what practitioners do.  Software engineers invent, build and test novel systems in the course of their work, and human factors professionals conduct controlled experiments and field studies.  It’s only at level 3 that HCI research moves into areas for which practitioners have neither the time nor the expertise.  These days, my advice to those wanting to get into HCI research is to start by recruiting people who have worked successfully at level 3, if not higher.
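
As a footnote to the model, here is a minimal sketch in Python of how the levels might be written down and put to use: given the levels at which a group has already produced results, report the highest one it has demonstrated. The level descriptions are condensed from the list above; the function name and the example group are my own invented illustrations, not part of what I offered my client.

```python
# A rough encoding of the HCI Capability Maturity Model described above.
# The wording is condensed from the post; the code itself is illustrative.

HCI_CMM_LEVELS = {
    1: "Inventing: building innovative applications and usability-testing them",
    2: "Exploring: controlled user studies or field studies, with reported findings",
    3: "Device-level modelling, or applying study findings to designs and re-studying them",
    4: "Application modelling: models of application domains that explain or enhance systems",
    5: "Method-enhancing: new design/evaluation methods compared against existing ones",
}

def highest_demonstrated_level(levels_achieved):
    """Return the highest maturity level a group has demonstrated, or 0 if none."""
    return max((lvl for lvl in levels_achieved if lvl in HCI_CMM_LEVELS), default=0)

# A hypothetical group that builds and tests systems (level 1) and runs
# field studies (level 2), but has not yet produced device-level models.
group = {1, 2}
level = highest_demonstrated_level(group)
print(f"Level {level}: {HCI_CMM_LEVELS[level]}")
```

Nothing in the sketch enforces a progression through the levels; the point is simply that the higher levels are the ones practitioners rarely reach, and the ones worth recruiting for.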

Posted January 22nd, 2006
What does HCI research tell us?

I write this in Atlanta, a city I’m sharing with some 15,000 haematologists attending their annual meeting. They would all have been in New Orleans had it not been for Hurricane Katrina. Next year’s CHI conference, whose organizing committee meeting I’m attending, will be a lot smaller than this. But it will probably be a good deal broader in scope; and thereby hangs a tale.

Coming late to HCI, I missed the first five years of CHI conferences. It was only around 1991 that I started delving into the proceedings of past conferences, usually to look for the answers to questions I’d encountered in my research. And it was then that I first experienced a problem.

Each of my forays into the CHI proceedings would run much the same course. When I had a question, I’d first ask my Xerox colleagues, and if that failed I’d head for the shelf of CHI proceedings in the library. I’d look through the table of contents of each volume. I’d see interesting-looking papers, and would often get side-tracked into reading them, but I never found the answer to my question.

In retrospect, even then the coverage of HCI was so broad that questions were far outnumbering answers. It was a matter of too wide a range of technology, and too broad a range of users, performing too many different tasks. And not enough researchers. If it was bad then, it’s much worse now. At this weekend’s meetings I heard two people, quite unprompted by me, describe the same experience as mine.

Back in 1991, I was thinking about the case histories in Vincenti’s What Engineers Know and How They Know It. Compared with 1990s CHI authors, his 1930s aeronautics researchers seemed much more tightly focused on providing answers to designers’ questions. Was this simply because aircraft design then was different from interactive system design now? Or was there a more fundamental reason? Would I find a similar difference between HCI and other branches of recent engineering research?

Starting in 1992, I began reading through engineering proceedings and journals, and through the CHI proceedings, classifying the results of papers. I didn’t read every paper in detail, and in any case most of the engineering papers were beyond me, but I could understand what they were offering the reader. And in most cases they offered an analytical model that provided a more accurate way for the designer to predict the performance of some kind of engineering artifact: an integrated circuit, a heat exchanger, a nuclear reactor component, an earth-filled dam. Ninety percent of the papers described an improved predictive model, or an improved tool for applying a model. These papers were trying to help designers achieve performance targets.

When I looked at what CHI papers were offering, I found a big difference. There were indeed papers on improved models, but they represented less than 10 percent. Instead, over two thirds of the papers presented results of a kind I hadn’t found in any engineering papers:

  • radically new designs for interactive systems and interaction techniques
  • findings from empirical studies that had not yet been reduced to a predictive model

I reported this discovery, explained my ‘pro-forma based’ method of classification, and offered my initial thoughts on implications for HCI — in a CHI paper! More on this next week.

Posted December 11th, 2005
Do we learn from performance failures?

In the November issue of IEEE Computer, Bob Colwell’s column on Books Engineers Should Read caught my eye and led me to add several titles to my reading list. It also made me think about the contrasting messages of Vincenti’s What Engineers Know and some of the better-known books, such as Henry Petroski’s To Engineer Is Human and Charles Perrow’s Normal Accidents.

The main theme of Colwell’s column is that we can learn from design failures: from the collapse of the Tacoma Narrows Bridge, from the Challenger shuttle disaster, from Three Mile Island. I can’t disagree, for I’ve been strongly influenced by Petroski’s and Perrow’s books, as well as by Peter Neumann’s Risks Digest. But I’ve been influenced even more by Vincenti’s book, which is much less concerned with design failures and much more with the research that enables engineers to set and meet design requirements.

One of Vincenti’s case studies, for example, describes how the NACA’s researchers in the 1930s studied the stability of aircraft in level flight. This led them to discover how stability could be achieved by designing the plane’s controls to require a certain stick force per g of acceleration. From then onwards it became relatively easy for engineers to design planes with the stability that their customers wanted.

Are engineers worried only about avoiding catastrophic failures? I think not. More often, I believe, they’re worried about improving on last year’s product, about outdoing their competitors, about achieving the performance and cost targets they’ve taken on. And that, I believe, is why the researchers in What Engineers Know focused on discovering what performance criteria were important and how to meet them – how to avoid performance failures.

There is little doubt that software systems often fail to deliver the performance their users need. The most celebrated case is the so-called Project Ernestine, in which the redesign of a phone operator’s workstation was found to increase the time operators took to handle calls. It’s celebrated not because of the size of the failure (the increase was only 3.4 percent) but because it was made public, and because the failure was confirmed by an elegant cognitive model. The project showed not only that performance failures were occurring, but also that HCI could help prevent them.
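
The cognitive model behind Project Ernestine was a CPM-GOMS model, which accounts for operators doing several things in parallel. To give a flavour of what “predicting task time” means in practice, here is a much simpler sketch using the Keystroke-Level Model, with the commonly quoted operator times from Card, Moran and Newell. The two call-handling sequences are invented for the sake of the example and bear no relation to the data from the real study.

```python
# A minimal Keystroke-Level Model (KLM) sketch: predict task time by summing
# standard operator times. The operator values are the commonly quoted
# textbook figures; the task sequences below are invented illustrations.

KLM_OPERATORS = {
    "K": 0.20,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predicted_time(sequence, system_response=0.0):
    """Sum operator times for a sequence such as 'MKKKK', plus any system waits."""
    return sum(KLM_OPERATORS[op] for op in sequence) + system_response

# Two hypothetical workstation designs for the same call-handling sub-task.
old_design = "MKKKKKK"   # one mental step, six keystrokes
new_design = "MHPKMKK"   # adds mouse pointing and an extra mental step
t_old = predicted_time(old_design)
t_new = predicted_time(new_design)
print(f"old: {t_old:.2f} s  new: {t_new:.2f} s  ({100 * (t_new - t_old) / t_old:+.1f}%)")
```

Even this crude additive model makes the point: if you can write down the sequence of operations, you can estimate whether a redesign will be slower before anyone builds it.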

There haven’t been any more cases quite like Project Ernestine, and I believe the reason is simple: few of us know how to measure the performance of the systems we use. As a result, we can’t tell whether they’re getting better or getting worse. Or at least we can’t tell unless the difference is enormous, as in the case of a Windows-based system that British Telecom installed for its customer service staff; it was so slow that the staff would switch back to the previous DOS-based system to deal with any complex request. When systems get slightly worse, nobody can really tell.

After reading What Engineers Know and hearing about Project Ernestine, I began to see a connection. Engineers know what to measure; designers of interactive systems don’t. It’s as simple as that.

Posted December 6th, 2005
Newman and Lamming

In 1991 Mik Lamming and I embarked on a book that was to be published in 1995 by Addison Wesley under the title Interactive System Design. At the time, neither of us really considered ourselves experts in HCI, but we had spent a number of years designing and building interactive systems, and we reckoned we had some experiences to share. We field-tested our material, first in a short course we taught in Ispra, Italy at the invitation of Rob Witty, and then in lectures to computer science undergraduates at Cambridge University, at the invitation of Peter Robinson.

This project began at around the time I was first reading Walter Vincenti’s What Engineers Know and How They Know It. Indeed the main reason I was instantly attracted by this title was that it hinted at solving an ongoing problem with our book, namely how to write a chapter about Requirements. What could we say about defining requirements for interactive systems, other than that the desired functionality should be specified, and that the system should be required to be “usable”?

What Engineers Know presents a set of case studies of engineering research, drawn from the golden period of aeronautics between 1920 and 1940. During this period the aircraft industry gradually figured out, with the help of researchers, how to design the planes that their customers wanted. As the book tells it, this was achieved partly by focusing on steady improvement, partly by learning how to measure what improvements had been made, and partly by developing analytical models and tools that helped predict these outcomes. Each of the case studies, comprehensively researched and beautifully written, opens a different door onto a back room that I and probably many other aircraft enthusiasts have wanted to explore.

In the end, the influence of What Engineers Know on our book was restrained by the urgent need to get it into print. Our chapter on Requirements got written; in fact it got completely written and dumped in the wastebasket seven times! But there was no way we could refocus the book around the engineering concepts of measurable improvement and predictable outcomes.

So when Newman and Lamming came out in 1995, its messages were relatively mundane:

  • Learn about the domain of application
  • State the design problem in a sentence
  • Evaluate analytically when you can
  • Consider the user’s conceptual model when you design the UI
  • Learn how to make good use of UI guidelines

It was only when the book was done that I could devote some of my time to exploring more fully the lessons to be learned from What Engineers Know.

Posted November 27th, 2005