I’m becoming increasingly uncomfortable with how online data collection is driving product decisions. If a product’s sole source of revenue is advertising, then the design is going to reflect that. The product is going to be optimized for data collection so that it can offer advertisers better targeting. And if a product’s direction is driven by anything other than user needs, that product becomes worse for end users. That is inevitable. Nothing you can do about it.
This is why the “Well, what’s wrong with better ads?” argument doesn’t hold water. It’s not that I want to see less relevant ads (or no ads at all). It’s that I don’t want a company’s design decisions to be driven by a need to get as much data out of people as possible (as opposed to how to meet their core needs better).
I couldn’t help but notice similarities between this argument and the one I use to explain why I don’t like games that have consumable in-app purchases. It’s not the cost that’s the problem — I’m happy to pay as much as $50 or $60 up front for a great game — rather, it’s the way game design is influenced by the need to incentivize spending money. “This slot machine has some really compelling gameplay,” said no one ever.
Products, like anything else that takes part in an ecosystem, evolve to optimize for whatever sustains them, and over time they shed the remainder like dead skin. Websites that rely on pageviews to survive become linkbait crapfarms. Ad-supported social networks sell off your attention in the precise quantity you’ll tolerate — until you get used to that, and then they sell off a little more. And games become shallow, joyless chores in fun’s clothing, because there’s a 0.15% chance you’re a “whale.”
If you’re working on a tech product right now, here’s what I propose. Before you type another line of code or push another pixel, stop and think: What do I want this to become? Now, is that vision the basis of your business model? Not something that exists alongside it, or despite it, or in carefully balanced tension with it, but the basis of it? If it isn’t, then you’re building the wrong thing.
“Maybe everybody else knows this, but what is the difference between the pager and the email?”
Roberts isn’t asking about the difference between e-mail and a pager. He’s asking about the differences in how police department policy treated e-mails sent from a computer and texts sent from a department-issued pager. He’s actually making a rather sophisticated distinction, not betraying his ignorance. The exchange preceding Roberts’ question features Quon’s lawyer Dieter Dammeier explaining the policy, “The city will periodically monitor e-mail, Internet use and computer usage,” and Justice Ginsburg asking if it wouldn’t be reasonable for an employee to assume the same would apply to texts sent via pager.…
What Roberts is trying to tease out is whether there are differences in reasonable expectations of privacy and the police department’s conduct depending on where e-mails are stored (on a government server) vs. where text messages are stored (by a private company).
Now, the Aereo case does have some great examples of the justices being confounded by gimcracks and befuddled by geegaws, but that doesn’t bother me much. Their job is to interpret and reconcile the decisions of lower courts, not to draft policy. They are experts in the law, and novices in every other field. Do you also expect them to have encyclopedic knowledge of human biology and reproductive medicine when hearing an abortion case? No; it’s the duty of the arguing attorneys to provide the background information. If one side leaves out a key detail, and the omission would harm the other side, then the other side fills it in. And outside parties file amicus briefs, and the justices do their own research in the three or four months it takes them to draft a ruling following oral argument. That’s the system. It’s not perfect, but it’s pretty good.
It does seem shocking when a justice doesn’t know how SMS works, because we—the Technopedants of the Internet—do, and because it’s hard to imagine not knowing something that you know. But I guarantee you they ask questions that ring as dumb or dumber in the ears of subject-matter experts every time they hear a case. I’d be terrified if they didn’t.
This is a sentiment I’ve heard repeatedly over the past week or so, most recently in a quote from Justin Rhoades:
It’s like a pendulum swinging from obvious visual affordances to engaging kinetic ones. The parallax effect, the physics of the messages bubbles and I’m sure many other ‘kinetic’ behaviors are new to devs in iOS7. Apple wants apps to use more motion and less visual design.
Let’s talk about what an affordance actually is. Here are some examples:
The moment you see this object, you have a sense not just of how to use it, but of what it would feel like. You can feel your palm on the lever, your knuckles firm on the grip, separated slightly by those bumps. You’re anticipating having to choke down somewhat for leverage, clued in by the ridges toward the end of the handle. You may already be planning to pop off the cap by thumbing its little tab, and you’re aware you may need to work the plastic retainer a bit to counter its natural bend and keep it from springing back into the line of fire — or, as a last resort, perhaps sacrifice some grip strength by looping your index finger around it. You might not be certain what the metal knob is for, but you know from the knurled edge that you can turn it and that there will be some resistance. Shape, material, and texture combine with your experience to yield intuition, which lets you capture all of these details instantly given nothing but a glance at a photograph.
That’s what affordances do. They operate on the boundary between sight and touch. You see a thing, often from a distance, and its affordances give you enough information to simulate, in your mind, the sensation of manipulating it. Unconsciously, you configure your fine motor system in advance, so that by the time you get to the door handle, your hand is already forming the right shape to grasp it and pull the door open.
When affordances are misused, it’s more than a little frustrating:
And when they’re entirely absent, it can even be dangerous:
(Trapped in a burning building? Hope you can read English.)
iOS 7 may be “trading” affordances for kinetics, but only in the sense that it’s losing the former and arbitrarily gaining the latter. They are not interchangeable. Kinetics, or UI Dynamics in Apple’s parlance, are visual effects that occur while you interact with an object, or afterward. (You pull up on the camera icon and let go, and the lock screen falls back down with a realistic bounce; you scroll quickly in Messages and the word bubbles act like they’re mounted on springs.) But affordances can only help if they appear before you interact. You need to see the handle to mentally feel how to open the door, or even to know that it’s a door in the first place, regardless of how smoothly it’s going to swing open. In user interfaces we call this trait “discoverability.” (“Intuitiveness” is another good word for it. So is “joy.”) In the real world we don’t call it anything because it’s a basic operating principle that keeps us from walking into walls.
Affordances are the baby to skeuomorphism’s bathwater. When they engage our instincts just right, they create an emotional bond, and the unfamiliar becomes inviting. Without them, it’s just pictures under glass. It makes no difference how flat, how deep, how minimal, or how ornate the look-and-feel is if it can’t show us, when we look, how to feel.