I want to take a slight detour from usable privacy and security and discuss issues of design. I was recently at the Microsoft Faculty Summit, an annual event where Microsoft discusses some of the big issues it faces and the directions it is heading.
In one of the talks, a designer at Microsoft mentioned two data points I've informally heard before but had never confirmed. First, the ratio of developers to user interface designers at Microsoft was 50:1. Second, this ratio was better than any other company out there.
As someone who teaches human-computer interaction, I wanted to push on this point, so I asked the hard question: "On a relative and absolute scale, Microsoft has more designers than Apple or Google. However, I think most people would argue that Apple and Google have much better and easier-to-use products. Is it an organizational issue, a process issue, a skills issue? What do I tell the students I teach?"
I want to make clear that my question wasn't meant to be negative toward Microsoft specifically. It actually hits on a pervasive issue facing the software industry today: how do we effectively incorporate great design into products? Because, to be blunt, a lot of user interfaces today are just plain bad. Painfully bad.
Companies like Nintendo, Apple, Google, and Amazon have proven that good design matters. Why haven't other companies followed suit? Or perhaps the real question is, why haven't other companies been able to follow suit?
I discussed this issue with a lot of people at the faculty summit and in Silicon Valley. I used Microsoft as my example, but I think the same problems apply to pretty much every software company today.
People offered five explanations. The first is the "geek culture" where engineers rule. At most software companies, there has never been a strong mandate at the top for great design. Instead, engineers tend to have wide latitude for features and implementation.
Second, there is often no one in charge of the overall interaction and user experience. As such, there tends to be no consistency and unified flow for how a system looks and feels.
Third, designers tend to be brought in too late in the process, to do superficial work on a system that has already been built. One person basically said it was "lipstick on a pig". Design is more than just the surface layer of the visual interface. It's also about understanding how people work and play, how things fit into their lives, how people see and understand things, what problems they have, and how to craft a product that fits these constraints.
Fourth, designers have no real power in most organizations. They are typically brought in as consultants, but cannot force (sometimes necessary) changes to happen.
Fifth, a lot of people in technical professions just don't "get" design. They think it's just putting on a pretty interface, or worse, the equivalent of putting pink flamingos in one's lawn. A lot of this is due to education in computer science programs today, which focuses heavily on algorithms and engineering, but leaves little room for behavioral science, social science, the humanities, and any form of design, whether it be industrial design, graphic design, or interaction design.
Overall, this issue of design is a fundamental problem that industry doesn't quite know how to get their heads around. Just hiring an interaction designer isn't enough. Improving the ratio of developers to designers isn't enough. With the advent of trends like "design thinking", the increasing number of human-computer interaction degree programs, and visibly great products, people are starting to realize that design really matters, but just don't know how to go about making it a fundamental way for organizations to make software, rather than just "lipstick on a pig". The problem becomes even harder when the system combines hardware, software, and services, which describes pretty much every system we will have in the future.
I want to make clear that we as researchers and educators don't know the answer either. But, in my next blog entry, I'll write about my discussion with someone who used to work at Apple, with some insights about how their process works. I'll also be visiting Google next week, so I'll be sure to ask them these same questions.
P.S. I want to get your comments and feedback, but please don't use the word "intuitive", talk about "dumbing down interfaces", or say that users are "dumb, stupid, and naive". These are signals that you really don't know what you're talking about. And yes, not all interfaces have to be walk-up-and-use.
Nice article . . .
You write, "Overall, this issue of design is a fundamental problem that industry doesn't quite know how to get their heads around."
I would add that the same applies to academia.
A design-oriented process seems to be Apple's way.
However, it can lead to big issues. Just consider the recent launch of the iPhone 4 and its antenna problems...
This suggests that designers should not have all the power, either. As with most things in life, it's about balance.
And the iPhone 4 antenna issue should be a strong lesson for the whole industry.
Great post; it hits the nail on the head.
On one side we have the "geek culture" people, oblivious to what happens to the user, doing self-referential design.
On the other side we have the designers with their MacBooks, reluctant to delve into technical issues.
Between them there is a void that supports the "nobody is in charge" stance.
Actually, the void was created before everything happened, before the software was written and way before the designers cringed.
The void is in fact the lack of a usable-UI definition in the requirements and (software) design stages.
If it's not specified, then the outcome might be anything. For example, it might be a good UI every now and then, as happens nowadays, making you say "a lot of user interfaces today are just plain bad".
Functional analysts and software architects have to raise the flag, not (graphics) designers.
See for example some writings by Larry Constantine. He got "it" many years ago, and he is a geek like Alan Cooper and Jakob Nielsen. Not a designer.
See http://www.foruse.com/articles/whatusers.htm or other articles in http://www.foruse.com/publications/index.htm
Those who specify the software to be built are responsible for making it "usable by design", as I like to say. See my slides http://www.ixda.com.ar/cms/wp-content/uploads/2009/15octubreUP/juan-lanus-roi/ (mostly in Spanish).
Otherwise it's too late.
The UI should be testable "in white", i.e., before applying any style: white background, default font. It should work before the frills are added.
The design activity that determines the usability of a (non-trivial) UI is the one that happens before the first line of code is written, and even before envisioning a disposable prototype.
Thanks for the insightful discussion, Jason. I've been thinking a lot about this too, especially with some of the consulting I've been doing with various Seattle companies. Geek culture is definitely part of it; I was at the Mozilla Summit a few weeks ago and wrote up some thoughts about what exactly geek culture is (http://andyjko.com/2010/07/13/mozilla-summit-2010-and-dev-culture/).
I think there are also significant organizational and market forces at work. For example, many of the UX designers I know at Microsoft feel highly constrained by the sheer size of the user population and the habits these users have formed around the UI (even if the UI is bad). I've seen this requirement of user experience backwards compatibility kill even the most incremental of design improvements in existing products.
Even when companies like Microsoft do something bold, like with the Ribbon, there are significant prices to pay. Perhaps a company as large as Microsoft can afford the bad press and the push back from its existing users because they have the resources to play the long game--small startups don't always have that luxury.
There's also a different kind of conservatism at play with respect to competition. There's a lot to gain from being first to market, but doing so often runs counter to the unpredictability of iterative design methods. In fact, I think one of the reasons that Apple's designs are so great is that they've positioned themselves as a company that learns from other companies' failures, rather than investing in failures themselves. There were a lot of mobile and tablet-sized devices before the iPhone and iPad, and a lot of good and bad decisions. Not to discount the talent at Apple, but the designers there benefited greatly from seeing precisely how and why early tablet PCs and stylus-based handhelds failed, both from an interaction design perspective and a market perspective.
I think a lot of this boils down to risk. Good design needs a healthy combination of risk-taking and risk mitigation. At least at this point in history, it's less risky to build something with no design foresight because it will probably still sell, and more so if it's first to market. We may be approaching a shift, with the advent of app stores and the increased ease of engineering software, where competition is stiff enough that software developers will have to start competing on design merit instead of time to market.
One of the mistakes people make in designing a UI (and I am also guilty of this from time to time) is thinking that a problem is simple or easy, and coding or designing it without any thought for the big picture. Oftentimes individuals will make widgets that complete a specific task, but as the project grows and they attempt to reuse code, instead of redesigning an interface from scratch they will simply do a makeover of an existing product. Reusing code is great. Reusing interfaces is not. My best UI designs are ones that have been included from the start as part of the solution rather than as an afterthought.
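The commenter's point, that code reuse should not force interface reuse, is essentially a separation-of-concerns argument. A minimal sketch (all names here are hypothetical, in Python for illustration) might look like:

```python
# Hypothetical sketch: keep task logic separate from its presentation,
# so the logic can be reused while the interface is redesigned freely.

class TaskList:
    """Reusable logic: knows nothing about how it is displayed."""

    def __init__(self):
        self._tasks = []

    def add(self, title):
        self._tasks.append({"title": title, "done": False})

    def complete(self, title):
        for task in self._tasks:
            if task["title"] == title:
                task["done"] = True

    def pending(self):
        return [t["title"] for t in self._tasks if not t["done"]]


# Two independent "views" over the same reused logic. A redesign means
# writing a new render function, not making over the old widget.
def render_plain(tasks):
    return "\n".join(f"[ ] {t}" for t in tasks.pending())


def render_numbered(tasks):
    return "\n".join(f"{i}. {t}" for i, t in enumerate(tasks.pending(), 1))
```

Under this split, the `TaskList` logic ships unchanged from project to project, while each product gets an interface designed from scratch for its own users.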
Susan, I agree that academia doesn't quite know how to think about and do design research yet. Though, I think it's a different (but related) problem, given that design doesn't have a history of publishing in the same way that scientific communities do. Maybe I'll write up my thoughts as another blog entry.
@juan, lots of great points, thanks!
One of the goals of emerging human-computer interaction programs is to cross-train "Renaissance teams" that can combine the best of design, computer science, and behavioral science.
(I take the term "Renaissance team" from Randy Pausch, who noted that given the scope of knowledge today, a "Renaissance man" is a near impossibility. Plus, people tend to listen to me more when I quote him. :)
On the computer science side, I've been advocating that all computer science undergrads should be required to take at least (1) a machine learning course and (2) a human-computer interaction course. My rationale is that these two themes are fundamental to the future of computer science. Taking a machine learning course is an easy sell in CS departments and is starting to happen, but it's less so with HCI.
I'm not sure yet what to advocate on the design and behavioral science side of things, though.
Many of these observations are spot on, even uncomfortably accurate. However, perhaps a missing observation is that these five issues are not independent; many share a similar root cause. Much of it comes down to the leadership in the company and the resulting decision-making processes. Projects, resources, and timelines are typically dictated at a very high level, often by those who focus exclusively on multi-year competitive business strategy. That calculus is generally of the form "To improve our performance in market X, we need to create product Y, with features Z, by Q2 of next year, because our competitors A, B, and C are likely releasing E, F, and G."
The notion of "good design" ends up becoming a product "feature" that (just like any other feature) has to be scoped and completed by a particular deadline, and preferably not revisited. Good design is typically not at the essence of a product's reason for existing. More often than not, projects are green-lit based solely on whether there is a pressing business need, rather than on the belief that something genuinely great could (or should) be created. This core issue, in my opinion, sets into place a series of processes, priorities, and corporate culture that accounts for nearly all of your observations. Prioritizing good design requires a culture whereby "creating a great experience" is a slightly higher priority than "maintaining a strategically competitive market share". It's a bit of a "build it and they will come" mentality, which, understandably, is not a pill many executives readily swallow, especially if their compensation is tied to the stock price.
Occasionally, business needs and good hw/sw/ux design do happen to align. But, it is fairly rare. A leadership team that can see past the battle field of business needs with enough vision and taste to identify the seedlings of a great (and profitable) experience… is rarer still.
Your points about computer-human interfaces are very valid and interesting. But you CHI folks can't claim the term "design" for yourselves.
Every developer *IS* a designer. The problem you identify is that those developers are ignoring parts of the problem.
The terminology highlights this problem. "[A] lot of people in technical professions just don't 'get' design. They think it's just putting on a pretty interface. . . "
Your use of the term "design" to mean only "human-computer interaction design" aggravates the problem. The process of product creation has to answer all of the constraints and somehow resolve all of the forces tugging on it. When anyone restricts the term "design" to the visual elements, they create the lipstick-on-a-pig tendency.
"We need some design in that word processor! We'll call it a 'ribbon'. Never mind we can't do paragraph numbering properly."
The visual aspects are deceptive; people don't like Apple's or Sony's products solely because they're pretty or clean or symmetrical or feel good in your hand. The product is designed so it only needs one button to get things done, instead of five. Or a turning dial actually is the most natural way to choose from a range of options (Sony).
I'd change the question to "Why is great design so infrequent?" Because we have solved much harder problems. It requires a leader who enforces the principle of profitability, i.e., "we are going to design a product that will outsell our competitors' because customers will want to buy it", and the infrequency of great design is a result of the infrequency of these leaders. I suspect they are often a single person, one who comes to the project with a picture of what is needed (e.g., no thicker than..., no more than an average of 1.2 user inputs per result...) and a willingness to bet that certain hardware (screen resolution, etc.) will be available when the product goes to production. So you don't need 50 design engineers; rather, what's needed is one responsible "profit" engineer, one who is willing to delay each step of development until trial users give their feedback. You can't predict what users will want, but you can put prototypes in their hands and have them tell you what they want.
I have lots more to say on this topic if you are interested.