I’m becoming increasingly uncomfortable with how online data collection is driving product decisions. If a product’s sole source of revenue is advertising, then the design is going to reflect that. The product is going to be optimized for data collection so that it can provide better accuracy for advertisers. And if a product’s direction is driven by anything other than user needs, that product becomes worse for end users. That is inevitable. Nothing you can do about it.
This is why the “Well, what’s wrong with better ads?” argument doesn’t hold water. It’s not that I want to see less-relevant ads (or no ads at all). It’s that I don’t want a company’s design decisions to be driven by a need to get as much data out of people as possible (as opposed to how to meet their core needs better).
I couldn’t help but notice similarities between this argument and the one I use to explain why I don’t like games that have consumable in-app purchases. It’s not the cost that’s the problem — I’m happy to pay as much as $50 or $60 up front for a great game — rather, it’s the way game design is influenced by the need to incentivize spending money. “This slot machine has some really compelling gameplay,” said no one ever.
Products, like anything else that takes part in an ecosystem, evolve to optimize whatever sustains them, and over time they shed the remainder like dead skin. Websites that rely on pageviews to survive become linkbait crapfarms. Ad-supported social networks sell off your attention in the precise quantity you’ll tolerate — until you get used to that, and then they sell off a little more. And games become shallow, joyless chores in fun’s clothing, because there’s a 0.15% chance you’re a “whale.”
If you’re working on a tech product right now, here’s what I propose. Before you type another line of code or click another pixel, stop and think: What do I want this to become? Now, is that vision the basis of your business model? Not something that exists alongside it, or despite it, or in carefully balanced tension with it, but the basis of it? If it isn’t, then you’re building the wrong thing.
Not to minimize the many legitimate occasions on which Supreme Court justices have shown a grasp of technology that would embarrass Grampa Simpson, but this particular quote isn’t one of them:
Roberts isn’t asking about the difference between e-mail and a pager. He’s asking about the differences in how police department policy treated e-mails sent from a computer and texts sent from a department-issued pager. He’s actually making a rather sophisticated distinction, not betraying his ignorance. The exchange preceding Roberts’ question features Quon’s lawyer Dieter Dammeier explaining the policy, “The city will periodically monitor e-mail, Internet use and computer usage,” and Justice Ginsburg asking if it wouldn’t be reasonable for an employee to assume the same would apply to texts sent via pager.…
What Roberts is trying to tease out is whether there are differences in reasonable expectations of privacy and the police department’s conduct depending on where e-mails are stored (on a government server) vs. where text messages are stored (by a private company).
Now, the Aereo case does have some great examples of the justices being confounded by gimcracks and befuddled by geegaws, but that doesn’t bother me much. Their job is to interpret and reconcile the decisions of lower courts, not to draft policy. They are experts in the law, and novices in every other field. Do you also expect them to have encyclopedic knowledge of human biology and reproductive medicine when hearing an abortion case? No; it’s the duty of the arguing attorneys to provide the background information. If one side leaves out a key detail, and the omission would harm the other side, then the other side fills it in. And outside parties file amicus briefs, and the justices do their own research in the three or four months it takes them to draft a ruling following oral argument. That’s the system. It’s not perfect, but it’s pretty good.
It’s like a pendulum swinging from obvious visual affordances to engaging kinetic ones. The parallax effect, the physics of the messages bubbles and I’m sure many other ‘kinetic’ behaviors are new to devs in iOS7. Apple wants apps to use more motion and less visual design.
Let’s talk about what an affordance actually is. Here are some examples:
The moment you see this object, you have a sense not just of how to use it, but of what it would feel like. You can feel your palm on the lever, your knuckles firm on the grip, separated slightly by those bumps. You’re anticipating having to choke down somewhat for leverage, clued in by the ridges toward the end of the handle. You may already be planning to pop off the cap by thumbing its little tab, and you’re aware you may need to work the plastic retainer a bit to counter its natural bend and keep it from springing back into the line of fire — or, as a last resort, perhaps sacrifice some grip strength by looping your index finger around it. You might not be certain what the metal knob is for, but you know from the knurled edge that you can turn it and that there will be some resistance. Shape, material, and texture combine with your experience to yield intuition, which lets you capture all of these details instantly given nothing but a glance at a photograph.
That’s what affordances do. They operate on the boundary between sight and touch. You see a thing, often from a distance, and its affordances give you enough information to simulate, in your mind, the sensation of manipulating it. Unconsciously, you configure your fine motor system in advance, so that by the time you get to the door handle, your hand is already forming the right shape to grasp it and pull the door open.
When affordances are misused, it’s more than a little frustrating:
And when they’re entirely absent, it can even be dangerous:
(Trapped in a burning building? Hope you can read English.)
iOS 7 may be “trading” affordances for kinetics, but only in the sense that it’s losing the former and arbitrarily gaining the latter. They are not interchangeable. Kinetics, or UI Dynamics in Apple’s parlance, are visual effects that occur while you interact with an object, or afterward. (You pull up on the camera icon and let go, and the lock screen falls back down with a realistic bounce; you scroll quickly in Messages and the word bubbles act like they’re mounted on springs.) But affordances can only help if they appear before you interact. You need to see the handle to mentally feel how to open the door, or even to know that it’s a door in the first place, regardless of how smoothly it’s going to swing open. In user interfaces we call this trait “discoverability.” (“Intuitiveness” is another good word for it. So is “joy.”) In the real world we don’t call it anything because it’s a basic operating principle that keeps us from walking into walls.
Affordances are the baby to skeuomorphism’s bathwater. When they engage our instincts just right, they create an emotional bond, and the unfamiliar becomes inviting. Without them, it’s just pictures under glass. It makes no difference how flat, how deep, how minimal, or how ornate the look-and-feel is if it can’t show us, when we look, how to feel.
“Plotnitsky goes on, however, to agree with Sokal and Bricmont that the ‘square root of –1’ which Lacan discusses (and for which Plotnitsky introduces the symbol (L)√-1) is not, in spite of its identical name, ‘identical, directly linked, or even metaphorized via the mathematical square root of –1,’ and that the latter ‘is not the erectile organ.’”—The best sentence in Wikipedia
Lucky for us, there are some struggles we can get rid of, or at the very least imagine a world without, by lessening the loads of our brothers and sisters whose experiences differ from our own. There are things that come easily to me that don’t come easily to you, and I want to help.
Toward the end of my last post, I mentioned that I’d like to see App.net move toward a federated architecture. Broadly, what that means is that instead of being a central service that each client connects to directly, it would become a loosely organized mesh of independently controlled nodes. Users and devices would connect to whatever node they liked best — you can run your own if you want — and the nodes would talk to each other in some clever way to collectively maintain the appearance of a single unified social network.
The advantages are numerous and comparable to those of the web itself: no single point of failure, no concentration of power, no risk that the entire network will be sold to Facebook.
But does this work for a service like Twitter? Can the behavior we’ve come to expect from social networks be reproduced in this model?
Let’s find out. Since every good blog post needs a list of three things, here’s a list of three constraints we’ve come to expect of our social timelines:
Immediacy: if a post has been made by someone I follow, I can see it in my timeline right away (or close enough that I don’t notice the difference).
Chronology: posts always appear in order by time posted.
Monotonicity: timelines grow only from the top; older posts are never retroactively inserted.
The problem appears to be that no federated architecture can simultaneously satisfy all three of these conditions. You can have any two: for example, if you let go of immediacy, your node can just wait until it’s received the latest content from every other node before displaying anything. But that’s not very scalable, and it makes real-time conversation impossible, so let’s keep immediacy. Now we have to decide what to do when content from a far-away node arrives late: if we’ve already displayed newer posts, we have to violate either chronology (by posting the older content above the newer) or monotonicity (by inserting it chronologically into the timeline).
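The tradeoff can be seen in a toy sketch. Suppose a client renders its timeline newest-first, and a post from a far-away node arrives after newer posts have already been displayed. The two rules below are the only options once immediacy is kept, and each one breaks a different guarantee (the timestamps and names here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    ts: int      # time posted
    text: str

# Timeline so far, rendered newest-first, as a client would show it.
timeline = [Post("bob", 30, "newer post"), Post("alice", 10, "older post")]

def arrive_late(timeline, post):
    """A post from a far-away node arrives after newer posts were shown.
    We must pick a rule; each choice breaks one guarantee."""
    # Option A: keep monotonicity. Append at the top regardless of ts.
    # The timeline only grows from the top, but it's now out of order.
    top_insert = [post] + timeline

    # Option B: keep chronology. Splice into place by timestamp.
    # Order is preserved, but an old item appeared below things
    # you've already read, so "read to the top" no longer suffices.
    chrono = sorted(timeline + [post], key=lambda p: p.ts, reverse=True)
    return top_insert, chrono

late = Post("carol", 20, "late from another node")
top_insert, chrono = arrive_late(timeline, late)

# Option A: timestamps out of order, so chronology is violated.
assert [p.ts for p in top_insert] == [20, 30, 10]
# Option B: an item landed mid-timeline, so monotonicity is violated.
assert [p.ts for p in chrono] == [30, 20, 10]
```

Dropping immediacy instead would mean holding the whole timeline until every node has reported in, which is the non-scalable option dismissed above.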
Violating chronology is bad because it turns conversations into nonsense, but violating monotonicity means you can’t assume you’ve seen everything once you’ve read to the top of your timeline. Your client will have to maintain read/unread status for every item, and you’ll have to keep winding back in time to pick up things you missed. Which might be fine, but now we’re talking about something less like Twitter and more like email or RSS.
OK, so all of those options suck for conversations. But chronology is really only important within a conversation. So what if instead of replicating Twitter exactly, we shoot for a hierarchical, threaded model? The timeline would be a list of threads, and chronological order is preserved within each thread, but the threads themselves show up in arbitrary order. Oh, and you see a thread if you’re following the person who started it, I guess? Never mind, at least we’re getting somewhere! We’ve invented Usenet.
The moral of the story is that the qualities that make Twitter interesting — its mix of conversation, discovery, and one-to-many communication — are direct consequences of its centralized architecture. Without the centralization you can still have something interesting, but it’s a different thing.
Back in the early 1990s, when you went online, you either dialed up a local BBS or you used a national service like Prodigy or America Online. These services each had their own user interfaces and content and jargon and there was no easy way to communicate between them. If you were on one and your friends were on another, you had to get different friends.
You could connect to the Internet if you knew how (I found a university library with a public VAX terminal I could dial into), but there was no World Wide Web yet so there wasn’t much to do. When the web finally arrived, the walls around the Prodigy and AOL gardens crumbled. They just couldn’t keep up. They became irrelevant. CompuServe, GEnie, Prodigy, and the rest all disappeared and AOL became a dumb pipe. But before that happened, I remember trying to explain the web to my parents, who loved their AOL, and getting blank stares. Why couldn’t they see how incredible, how game-changing this new thing was?
Today, App.net is getting the same blank stares, and worse. Anil Dash echoed Tess Rinearson in calling it a “country club”; others have alluded to its slightly-less-than-diverse demographics. Most of this criticism stems from a perception of the service as a Twitter clone that costs money. Which is totally fair because right now, that’s all it is. But it’s also a bit like calling the web in 1993 an AOL clone for rich white college students. Fair, but entirely missing the point.
Let’s back up for a sec and consider the main components of a typical social architecture:
1. The social graph (who follows whom), which determines the audience for anything a user publishes.
2. Publishing: a user posts something (tweet, status update, blog post, checkin, baby bunny).
3. Aggregation: each user receives a stream (timeline, dashboard, news feed) of the things their friends have published.
Twitter and Facebook have happily provided #1, subject to various restrictions, rate limits, and arbitrary shutoffs. Any photo sharing site, blogging engine, or even RSS feed provides the second. But if I want to start a new social app, even if I piggyback on existing social graphs and publishing platforms, I still have to come up with #3 on my own — and that’s where it gets tricky. That’s what Instagram did: they built #2 and #3 while bootstrapping #1 off of Facebook and Twitter, and it was such a monumental achievement that they sold the company a year later for the cost of the first two Mars rovers. Scaling is that hard.
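At toy scale, the three components fit in a few lines; everything hard about #3 is in making this work for hundreds of millions of users rather than in the logic itself. A minimal sketch, with invented names:

```python
from collections import defaultdict

# 1: the social graph (who follows whom)
follows = {"ann": {"ben"}, "cat": {"ben"}}

# 2: publishing -- each author's posts, in order
published = defaultdict(list)

def publish(author, item):
    published[author].append(item)

# 3: aggregation -- gather everything a user's friends have published
def timeline(user):
    return [item for a in follows.get(user, ())
                 for item in published[a]]

publish("ben", "hello")
assert timeline("ann") == ["hello"]
assert timeline("cat") == ["hello"]
```

This pull-based version recomputes the timeline on every read; the fan-out-on-write alternative (copying each post into every follower’s stored timeline) trades storage for read speed, and choosing between them at scale is exactly the problem Instagram had to solve.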
How many great ideas for socially-aware apps or services haven’t been built because there’s no common, open infrastructure to build them on?
Twitter could have become that infrastructure if the advertising people hadn’t won. Imagine if 140 characters of flat text were only one of the things that a tweet could be. What if when you added a photo to Flickr, say, your Flickr account “tweeted” (on your behalf) a block of data, tagged as a Flickr photo? People reading your stream from a Twitter client would never see this, because Twitter clients only know how to display text-based tweets. But a Flickr client? It would see just the Flickr data, allowing it to build an aggregated photo stream using Twitter as the plumbing. Now we have the equivalent of Instagram, and we didn’t have to build or scale or maintain any social networking infrastructure.
But Twitter has made it abundantly clear — or at least firmly vague — that they have no interest in being anyone’s plumbing. Twitter is for tweets, and tweets are one thing only. But with App.net as the back end, anything is possible — and not just social publishing and aggregation. Devices in your home like your security system or your TiVo or your sprinkler timer could publish their own feeds, and then you could have a single app that monitored all of them. You could turn iTunes Store release data into App.net feeds and follow your favorite bands to hear when new albums come out — and this extra data wouldn’t pollute your main social timeline, because it would all be tagged by data type for clients to filter to their liking.
Of course, none of this is likely to come true as long as App.net costs $50 per year per account and another $50 for developer access. I can’t imagine it will keep this revenue model forever, though. Maybe users would pay per data source they publish from, or developers would pay per user and recoup that cost in app sales. Ideally App.net would adopt a federated architecture, so I can run my own node if I have the interest and resources. But I don’t want it ever to be free, because as we’ve seen with Twitter, free pipes tend to make the pipe owners get possessive about the stuff that’s in the pipes.
So let’s take back our stuff. I love Twitter’s product, but I believe it’s on the path of Prodigy and CompuServe: so desperate not to become a dumb pipe like AOL that it will soon become nothing.
The web democratized publishing. Ad-funded social networks are locking up distribution. I think App.net just might be the way to unlock it. If you agree, you can join here and follow me here.
You’d think paying $80 for a piece of software would earn you the right not to be treated with contempt by its publisher, wouldn’t you? Well, Parallels now has “in-product notifications” that can’t be disabled. Ads, in other words.
The justification is that the notifications are used for important things, like bug fix updates, therefore they can’t be turned off. Which, of course, is complete nonsense. That story is how you sell ads to sponsors, not how you sell a product to users. What’s actually happening is that Parallels is abusing a critical information channel by stuffing paid content into it, and then pretending it’s not their fault. It’s like running ads over the Emergency Broadcast System and claiming you have no choice because it’s for emergencies.
[W]e occasionally share special offers from Parallels or other third party companies who provide special deals for our customers.… However, because customers need to receive important product information, there is not a mechanism for customers to completely disable notifications.
"Need"? Hmm, I think I read something about "needs" once, in a psych textbook or somewhere.
Currently, reply scope on Twitter works like this:
If Alice replies to Bob’s tweet, everyone who follows both Alice and Bob will see the reply.
What if, instead, it worked like this:
If Alice replies to Bob’s tweet, everyone who follows Alice and saw Bob’s tweet in their timeline will see the reply.
The difference is subtle but significant, because following Bob isn’t the only way I might see Bob’s original tweet. Someone else I follow could have retweeted it into my timeline. Alice herself may have done so, in fact. So one effect of this change would be to eliminate that awful “dot-reply” cheat: instead, just retweet the original before replying and all of your followers will see both.
Another effect is that it gives us a way to meaningfully interact with promoted tweets. Today, replying to a promoted tweet is all but pointless. The promoter won’t hear me — they’ll have thousands of replies to sift through, if they even bother to look — and beyond that, only those who already follow both me and the promoter will see my reply. That’s not likely to be very many people, since the whole idea of promoting a tweet is to put it in front of people who aren’t following the promoter. But under this new rule, all of my followers who saw the promoted tweet would also see my reply. Now we’re getting somewhere.
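The difference between the two rules reduces to a pair of set intersections. In this toy sketch (all names invented), Dave follows Alice but not Bob, yet saw Bob’s tweet via a retweet; only the proposed rule shows him Alice’s reply:

```python
# Who follows whom.
follows = {
    "carol": {"alice", "bob"},   # follows both
    "dave":  {"alice"},          # follows only Alice...
}
# ...but Dave saw Bob's tweet because someone retweeted it to him.
saw_original = {"carol", "dave"}

def current_rule(followers_of_alice, followers_of_bob):
    """Reply visible only to those who follow both Alice and Bob."""
    return followers_of_alice & followers_of_bob

def proposed_rule(followers_of_alice, saw_original):
    """Reply visible to Alice's followers who saw the original tweet."""
    return followers_of_alice & saw_original

followers_of_alice = {u for u, f in follows.items() if "alice" in f}
followers_of_bob   = {u for u, f in follows.items() if "bob" in f}

assert current_rule(followers_of_alice, followers_of_bob) == {"carol"}
assert proposed_rule(followers_of_alice, saw_original) == {"carol", "dave"}
```

For a promoted tweet, `saw_original` is the entire promoted audience, which is why the proposed rule lets a reply reach people the current rule never would.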
That’s step one. Step two is where this dream gets crazy:
Using an algorithm similar to that of Top Tweets, when a reply to a promoted tweet receives a large number of retweets and/or favorites, the reply should also be promoted to the same audience.
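As a crude stand-in for whatever Top-Tweets-style scoring Twitter actually uses, the rule could be as simple as an engagement threshold. The threshold and the flat sum are invented for illustration:

```python
# Promote a reply to the promoted tweet's audience once its engagement
# crosses a threshold. Real ranking would weight recency, velocity, etc.
def should_promote(reply, threshold=1000):
    return reply["retweets"] + reply["favorites"] >= threshold

assert should_promote({"retweets": 800, "favorites": 400})
assert not should_promote({"retweets": 10, "favorites": 5})
```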
It might look like this:
(Original tweets.) Completely nuts, right? Why would any advertiser ever go for this? It’s not fair, you may be thinking: Microsoft paid good money to have their tweet promoted, so why should any random nobody get to ride that train for free?
Here’s why: like it or not, by introducing promoted tweets, Twitter has declared that popularity and money are equivalent currencies. Your tweet can gain exposure organically, by virtue of your own following and the viral nature of retweets, or you can just pay up and they’ll stick it in everybody’s stream (or some large subset of everybody). What’s unfair is that due to the way replies currently work, the paying voice speaks alone. It’s paying to disrupt conversations, not to participate in them.
Which, of course, is what advertisers want, so this will probably never change. But to me, that’s an unfortunate failure of imagination. Banksy, or possibly Sean Tejaratchi, writes:
Any advert in a public space that gives you no choice whether you see it or not is yours. It’s yours to take, re-arrange and re-use. You can do whatever you like with it. Asking for permission is like asking to keep a rock someone just threw at your head.
What if this idea of freedom to rearrange and reuse were baked into the concept of what ads are? What if online ads became less like discarded flyers blowing down a busy street and more like living, breathing fragments of human conversation? What if users were treated like thinking beings and not like credit cards with eyestalks? Wouldn’t everybody win?
Naturally, not all replies to promoted tweets will be favorable to the promoter, and that’s OK: when you barge into a crowded room and start shouting through a megaphone, people don’t always say nice things. That’s part of the deal. And the longer ads continue to live on a weird plane of their own that barely intersects reality, the less effective they’ll continue to be, and the sooner the things we love that depend on ads will stop being able to exist.
I’d rather Twitter didn’t have ads at all, but if we must have them, let’s at least try to do something better than throwing rocks at heads.
I doubt any author has influenced me more than Douglas Adams did when I was a teenager. I’ve read all of his books half a dozen times. Last Chance to See, a nonfiction travelogue on endangered species, may be the most painfully beautiful book I’ve ever encountered.
He was the kind of writer who possessed such vast knowledge and such monstrous insight that he could completely change the way you thought about things just by telling a funny story. Take, for example, this brilliant speech he gave in 1998 about the ages of sand:
I can imagine Newton sitting down and working out his laws of motion and figuring out the way the Universe works and with him, a cat wandering around. The reason we had no idea how cats worked was because, since Newton, we had proceeded by the very simple principle that essentially, to see how things work, we took them apart. If you try and take a cat apart to see how it works, the first thing you have in your hands is a non-working cat. Life is a level of complexity that almost lies outside our vision; is so far beyond anything we have any means of understanding that we just think of it as a different class of object, a different class of matter; ‘life’, something that had a mysterious essence about it, was god given — and that’s the only explanation we had. The bombshell comes in 1859 when Darwin publishes On the Origin of Species. It takes a long time before we really get to grips with this and begin to understand it, because not only does it seem incredible and thoroughly demeaning to us, but it’s yet another shock to our system to discover that not only are we not the centre of the Universe and we’re not made of anything, but we started out as some kind of slime and got to where we are via being a monkey. …
I can remember the first time I ever read a programming manual, many many years ago. I’d first started to encounter computers about 1983 and I wanted to know a little bit more about them, so I decided to learn something about programming. I bought a C manual and I read through the first two or three chapters, which took me about a week. At the end it said ‘Congratulations, you have now written the letter A on the screen!’ I thought, ‘Well, I must have misunderstood something here, because it was a huge, huge amount of work to do that, so what if I now want to write a B?’ The process of programming, the speed and the means by which enormous simplicity gives rise to enormously complex results, was not part of my mental grammar at that point. It is now — and it is increasingly part of all our mental grammars, because we are used to the way computers work.
So, suddenly, evolution ceases to be such a real problem to get hold of. It’s rather like this: imagine, if you will, the following scenario. One Tuesday, a person is spotted in a street in London, doing something criminal. Two detectives are investigating, trying to work out what happened. One of them is a 20th Century detective and the other, by the marvels of science fiction, is a 19th Century detective. The problem is this: the person who was clearly seen and identified on the street in London on Tuesday was seen by someone else, an equally reliable witness, on the street in Santa Fe on the same Tuesday — how could that possibly be? The 19th Century detective could only think it was by some sort of magical intervention. Now the 20th Century detective may not be able to say, “He took BA flight this and then United flight that” — he may not be able to figure out exactly which way he did it, or by which route he travelled, but it’s not a problem. It doesn’t bother him; he just says, ‘He got there by plane. I don’t know which plane and it may be a little tricky to find out, but there’s no essential mystery.’ We’re used to the idea of jet travel. We don’t know whether the criminal flew BA 178, or UA270, or whatever, but we know roughly how it was done. I suspect that as we become more and more conversant with the role a computer plays and the way in which the computer models the process of enormously simple elements giving rise to enormously complex results, then the idea of life being an emergent phenomenon will become easier and easier to swallow. We may never know precisely what steps life took in the very early stages of this planet, but it’s not a mystery.
Stephen Fry wrote a touching remembrance some years back, and it’s collected here with quite a few others:
He was a huge man: when he was in a house it rattled and you always knew he was there. He did the same to the earth. It doesn’t rattle any more now that he’s gone.
I haven’t seen it announced anywhere, but Tumblr recently made a couple of changes to the way reblogging works. You can now reblog any post, including your own — “Reblog” buttons appear on everything — but you can no longer reblog a post to the same blog where it was originally posted. I don’t really understand why they’d actively prevent this, since it’s handy in some cases and anyone who abuses it can be easily unfollowed. But Tumblr is Tumblr, and who but mere Users are we.
Back when the solar system was still forming, I wrote a bookmarklet that gave you a universal reblog button, expressly for the purpose of self-reblogging. With these changes, the bookmarklet isn’t useful anymore. You may therefore now gently delete it, dallying for a moment to reflect on simpler days, when the wind blew cool and sweet and the reblogs flowed like summer wine.
When you view an item in the Amazon app and tap the button to add it to your wish list, it comes back with this:
alert, n. an alarm or warning, esp. a siren warning of an air raid.
It’s really not that big a deal that I added an item to my wish list. There’s no need to lock me into a modal dialog. Just add the item and move on.
Neven’s right, of course, but as I clumsily observed on Twitter, this alert abuse is in stark contrast to Amazon’s web design, where they’re usually great at not bothering the user with needlessly shrill error messages. For example, when you add an item to your Wish List from the website, this is one possible outcome:
I would change the icon to be more btw and less omg, but otherwise, see how polite that is? Instead of interrupting me to point out that I asked for something dumb, Amazon helpfully did something else that better matched what I probably wanted in the first place. It’s like mistakenly asking for an extra fork with your ice cream and having the waiter just go ahead and bring an extra spoon, rather than needlessly correct you.
We expect that sort of intelligent interpretation in human/human interaction, but in human/computer interaction it’s so vanishingly rare that when it actually happens, nerds write blog posts about it. Ta-da.
Every day at your airport checkpoints, you screen thousands of passengers for objects that could conceivably be used as weapons. If you find one, you confiscate it, and the unfortunate traveler continues on her way, cupcakeless but no longer a threat to national security.
You’re also looking for explosives, which is understandable. If you found a live bomb — I mean, not that you ever have — but if you did, well, that would clearly be one terrorist caught and many lives saved, right? That is, assuming you actually remembered to do something about it, of course. But everybody makes mistakes and I won’t blame you for that. I’m sure someday you’ll stop being a complete waste of money. Really, we’re all pulling for you.
But here’s my question. Suppose I, a normal taxpaying non-terrorist type guy, were to bring through a checkpoint something relatively harmless but still against the rules: not a bomb but, say, a pocketknife. You’re going to take that away from me, right? But why? If I’m not a terrorist, how is it dangerous for me to have a four-inch folding knife in my trousers? It’s staying there until well after we land, unless Amazon Prime really improves. Or do you think I might suddenly decide to abort my vacation, abandon my family, and throw my life away in a fit of deranged violence when the captain interrupts the in-flight Mad About You for the seventh time to announce that one of the shittier Great Lakes is on the other side of the plane? Right when Murray the dog is about to make Paul Reiser get a little bit annoyed?
Of course not, because you are an organization of highly intelligent cupcake confiscators. The only logical reason for you to take my knife from me is that you think I’m a terrorist. You’ll smile and shake your head at the dopey terrorist, and you’ll go tsk tsk, and then you’ll let me through to board my flight.
So, TSA, answer me this: why are you allowing suspected terrorists onto planes?
I’m willing to bet cold hard cash that Apple has no intention to and will never try to stop a publisher or author from taking content written in iBooks Author and publishing it elsewhere in another format. No one will ever hear from Apple after exporting from iBooks Author to text or PDF.
But I think the license is crummily written, because it’s not precisely clear what Apple is saying. If Apple wants to make bold and far-reaching licensing restrictions, they should express them clearly and succinctly. Whereas I think, much like with the App Store, their lawyers seek to express the legal restrictions in terms far broader than what they actually seek to enforce. I’m willing to make the above bet based on my understanding of the company and the way Apple thinks, not the language of the EULA.
I agree; I don’t think Apple plans to restrict anything but its own .ibooks format. But that doesn’t matter because, as Mike Ash puts it, “Unless we’re friends, your intentions don’t matter to me at all, only your actions.” Apple isn’t anyone’s friend but Apple’s, and its actions so far are to reserve a broad swath of rights pertaining to everything iBooks Author is capable of “generating” (whatever that means).
Even if we’re right and Apple doesn’t care about PDFs or plain text files, that’s still the Apple of today. The Apple of 20 years from now might turn out to be a completely different company, and this EULA has no expiration date. That’s a dangerous situation for authors and publishers who care about long-term distribution rights. It would be best for Apple to clarify the terms now — and, I hope, loosen them — rather than prolong the uncertainty.
Common Misconceptions about What I Wrote Yesterday
In a probably-futile attempt to stem the tide of redundant comments, I’ll address some of the more frequent reactions to my last post:
If you don’t like it, don’t use it! Duh.
You’re missing the point. The issue is that this is a software EULA which for the first time attempts to restrict what I can do with the output of the app, rather than with the app itself. No consumer EULA I’ve ever seen goes this far. Would you be happy if Garage Band required you to sell your music through the iTunes Store, or if iPhoto had license terms that kept you from posting your own photos online? It’s a step backward for computing freedom and we should resist it.
Plenty of EULAs restrict what you can do with software. That’s the whole point.
Yes, restricting use is what EULAs have traditionally done. This one does something different: it restricts what you can do with the output of the software after the software is closed and put away. If you make a document using iBooks Author, you aren’t allowed to sell that document except through Apple, ever, for the rest of your life.
Interestingly, as the author of the document and presumed signatory to the iBooks Author EULA, you’re the only person to whom that restriction applies. If you gave your iBook to a friend, Apple would have no control over what your friend did with it. And you could sell your friend’s iBooks too, because you aren’t the one who used iBooks Author to generate them.
Yeah, but that only applies to .ibooks files. You can also export .pdf and .txt and those are unrestricted.
Not true. The license defines “Work” as “any book or other work you generate using this software.” That definitely includes PDF and plain text, and it could be construed to include the very words you type in. So if you use iBooks Author to write your novel, you might be legally barred from ever selling that novel in any format, not just as an iBook.
UPDATE (3 Feb 2012): Apple has clarified the license to indicate that only the .ibooks format is covered.
Wait, so Apple’s taking my copyrights away?
No no no, just your right to sell the output of iBooks Author on your own or through any other store.
But why would you want to sell iBooks anywhere but in the iBookstore?
It doesn’t matter why, since I made the iBook myself and should be free to do as I please with it. But if you must have a reason, here are five: because Apple’s cut is too high; because I already have an arrangement with another publisher or online store; because I want to sell my work in a country the iBookstore doesn’t serve; because the iBookstore doesn’t let me offer academic pricing, bulk rates, or loyalty discounts; because I tried selling through Apple and they refused.
iBooks Author is free, so Apple deserves a cut.
How on earth does that follow? Xcode is free, and software companies have been using it and the tools that preceded it for decades to build Mac software that they’ve distributed without Apple’s help, and without paying Apple for the privilege. We buy hardware from Apple, and Apple provides the tools to enable us to make that hardware more useful so that more people will buy it. The same virtuous circle could exist for the iPad and iBooks if Apple hadn’t overreached with this ridiculous license.
That’s what the EULA says, so quit whining!
Do you really want EULAs to be able to say anything they please? Do you want copyright holders to have unlimited control over what you can do with legally obtained copies of their work? It’s not even clear that EULAs can be enforced at all. So even if you’re on Apple’s side in this argument, wouldn’t you rather they based their money grab on a sound legal theory?
It’s the same as having to sell iOS apps through the App Store.
No, it’s not. The license terms for Xcode (PDF link) don’t contain any language restricting the use of files generated by Xcode. And when you join the iOS Developer Program, there’s a separate contract you’re required to consciously agree to, once a year and each time it’s updated, before you can download your development certificate. But if you don’t join the program, nothing stops you from continuing to use Xcode’s output however you like.
So Apple’s “audacity” is that they’ve created a snazzy creation tool that, from all appearances, only works with their viewers. Wineman is correct in that it’s the license, not the technology, that prevents you from taking a .ibooks file and selling it somewhere other than Apple’s store. But you don’t have much reason to sell something this thing creates outside Apple’s store, ’cause it ain’t gonna be creating those snazzy multimedia books for your Kindle Fire.
You wouldn’t be selling iBooks for Kindle Fire, you’d be selling them for iBooks on the iPad, which last I checked wasn’t just a vending machine for Apple content. And there are plenty of reasons to want to do so, chief among them being that 30% is a lot. If I’m capable of doing all the marketing and payment processing and hosting of my .ibooks documents, why shouldn’t I get to keep all the profits?
Before anyone else points out that Apple deserves its cut because iBooks Author is free, remember that this argument applies equally well to app distribution outside of the App Stores. (At least on the Mac, where that’s still a viable way of doing business.) No one contends that Apple should get a cut of non-App Store app sales simply because Xcode is free.
There’s nothing wrong with selling tools that help people make money. I’m sure there’d be a market for a non-free iBooks Author, just as there is for Aperture, Final Cut, Logic Pro, and the rest of Apple’s professional content-creation tools. But giving the tools away for free and then using semi-hidden legal terms to wedge yourself into an exclusive middleman position? That’s shameful.
The Unprecedented Audacity of the iBooks Author EULA
Apple just released iBooks Author, a free Mac app for creating digital books for the new version of iBooks. I haven’t played with it much, but so far it looks like a very good tool. However, a curious thing happens when you go to export your work in iBooks format:
This restriction — that iBooks can be sold only in the iBookstore — isn’t enforced on a technical level. You can save the document, move it to your iPad in any of the usual ways (including just emailing it to yourself), and it happily opens in the iBooks app.
But if you look at the end-user license agreement (EULA) for iBooks Author, accessible via the app’s About box, the following bold note appears at the top:
If you charge a fee for any book or other work you generate using this software (a “Work”), you may only sell or distribute such Work through Apple (e.g., through the iBookstore) and such distribution will be subject to a separate agreement with Apple.
And in section 2:
B. Distribution of your Work. As a condition of this License and provided you are in compliance with its terms, your Work may be distributed as follows:
(i) if your Work is provided for free (at no charge), you may distribute the Work by any available means;
(ii) if your Work is provided for a fee (including as part of any subscription-based product or service), you may only distribute the Work through Apple and such distribution is subject to the following limitations and conditions: (a) you will be required to enter into a separate written agreement with Apple (or an Apple affiliate or subsidiary) before any commercial distribution of your Work may take place; and (b) Apple may determine for any reason and in its sole discretion not to select your Work for distribution.
In other words: Apple is trying to establish a rule that if I sell whatever I create with this application, I have to give them a cut. And iBooks Author is free, so this arrangement sounds pretty reasonable.
Here’s the problem: I didn’t agree to it. Apple wants me to believe I did, of course, just by using the software:
PLEASE READ THIS SOFTWARE LICENSE AGREEMENT (“LICENSE”) CAREFULLY BEFORE USING THE APPLE SOFTWARE. BY USING THE APPLE SOFTWARE, YOU ARE AGREEING TO BE BOUND BY THE TERMS OF THIS LICENSE. IF YOU DO NOT AGREE TO THE TERMS OF THIS LICENSE, DO NOT INSTALL AND/OR USE THE SOFTWARE.
But that language is in the EULA itself, a contract of adhesion which I was not required to sign (or even indicate my agreement to by clicking) before installing the software. So, to paraphrase: By using this software, you agree that anything you make with it is in part ours. But if it can say that and have legal force, can’t it say anything? Isn’t this the equivalent of a car dealer trying to bind you to additional terms by sticking a contract in the glove compartment? By driving this car, you agree to get all your oil changes from Honda of Cupertino?
Apple, in this EULA, is claiming a right not just to its software, but to its software’s output. It’s akin to Microsoft trying to restrict what people can do with Word documents, or Adobe declaring that if you use Photoshop to export a JPEG, you can’t freely sell it to Getty. As far as I know, in the consumer software industry, this practice is unprecedented. I’m sure it’s commonplace with enterprise software, but the difference is that those contracts are negotiated by corporate legal departments and signed the old-fashioned way, with pen and ink and penalties and termination clauses. A by-using-you-agree-to license that oh by the way asserts rights over a file format? Unheard of, in my experience.
When I make something myself, no matter what software I use to make it, then — assuming it doesn’t infringe any copyrights — it’s my right to distribute it however I want, in whatever format I choose, for free or not. I don’t lose the right to publish my novel if Microsoft determines that I wrote it using a pirated copy of Word. Would I lose that right if I tried to sell my iBook outside of the iBookstore and Apple got wind of it? I don’t know; we’re in uncharted waters here. Or how about this: for a moment I’ll stipulate that Apple’s EULA is valid and I’ve agreed to it implicitly by using the software. Now suppose I create an iBook and give it to someone else who has never downloaded iBooks Author and is not party to the EULA, and that person sells it on their own website. What happens now?
In ensuring that the App Store remains the only legitimate market for iOS apps, Apple doesn’t claim any legal rights to the content I create using its Xcode toolset. Instead, they enforce technical restrictions; apps must be cryptographically signed by Apple in order to run on unaltered iOS devices. Is this a good situation? For Apple and for novice users, maybe, but for developers it sucks and causes massive headaches. But in a way it’s better than a world in which software can assert whatever rights it wants over your stuff just by hiding a few paragraphs in its glove compartment.
This act enshrines the practice of indefinitely detaining people who are suspected by a Presidential Administration of terrorist connections without trial. It overwhelmingly passed the Senate. Here are the few senators who still believe that the Constitution prevents you from being locked up indefinitely on a mere suspicion.