Toward the end of my last post, I mentioned that I’d like to see App.net move toward a federated architecture. Broadly, what that means is that instead of being a central service that each client connects to directly, it would become a loosely organized mesh of independently controlled nodes. Users and devices would connect to whatever node they liked best — you could run your own if you wanted — and the nodes would talk to each other in some clever way to collectively maintain the appearance of a single unified social network.
The advantages are numerous and comparable to those of the web itself: no single point of failure, no concentration of power, no risk that the entire network will be sold to Facebook.
But does this work for a service like Twitter? Can the behavior we’ve come to expect from social networks be reproduced in this model?
Let’s find out. Since every good blog post needs a list of three things, here’s a list of three constraints we’ve come to expect of our social timelines:
- Immediacy: if a post has been made by someone I follow, I can see it in my timeline right away (or close enough that I don’t notice the difference).
- Chronology: posts always appear in the order they were posted.
- Monotonicity: timelines grow only from the top; older posts are never retroactively inserted.
The problem appears to be that no federated architecture can simultaneously satisfy all three of these conditions. You can have any two: for example, if you let go of immediacy, your node can just wait until it’s received the latest content from every other node before displaying anything. But that’s not very scalable, and it makes real-time conversation impossible, so let’s keep immediacy. Now we have to decide what to do when content from a far-away node arrives late: if we’ve already displayed newer posts, we have to violate either chronology (by posting the older content above the newer) or monotonicity (by inserting it chronologically into the timeline).
Violating chronology is bad because it turns conversations into nonsense, but violating monotonicity means you can’t assume you’ve seen everything once you’ve read to the top of your timeline. Your client will have to maintain read/unread status for every item, and you’ll have to keep winding back in time to pick up things you missed. Which might be fine, but now we’re talking about something less like Twitter and more like email or RSS.
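The tradeoff is easy to see in miniature. Here’s a toy sketch in Python (the posts and timestamps are invented) of a node that keeps immediacy and then has to choose which of the other two properties to sacrifice when a post arrives late:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    created: int   # time the post was written
    arrived: int   # time our node received it

posts = [
    Post("alice", created=1, arrived=1),
    Post("carol", created=3, arrived=3),
    Post("bob",   created=2, arrived=5),  # late arrival from a far-away node
]

# Option A: keep monotonicity -- display in arrival order, growing only
# from the top. Bob's post now appears after Carol's even though it was
# written first, so chronology is violated.
arrival_order = sorted(posts, key=lambda p: p.arrived)
print([p.author for p in arrival_order])   # ['alice', 'carol', 'bob']

# Option B: keep chronology -- sort by creation time. Bob's post is
# retroactively inserted between Alice's and Carol's, so monotonicity is
# violated: reading to the top no longer means you're caught up.
chronological = sorted(posts, key=lambda p: p.created)
print([p.author for p in chronological])   # ['alice', 'bob', 'carol']
```

Either way, once Bob’s post shows up at time 5, the timeline you already rendered at time 3 is wrong in one of the two senses.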
OK, so all of those options suck for conversations. But chronology is really only important within a conversation. So what if, instead of replicating Twitter exactly, we shoot for a hierarchical, threaded model? The timeline would be a list of threads; chronological order would be preserved within each thread, but the threads themselves would show up in arbitrary order. Oh, and you’d see a thread if you’re following the person who started it, I guess? Hey, at least we’re getting somewhere! We’ve invented Usenet.
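The threaded compromise is simple to sketch. In this toy version (thread ids and posts invented), posts can arrive in any order; we only promise chronology inside each thread, and we list the threads themselves in whatever order we happened to see them:

```python
from collections import defaultdict

# Each post carries a thread id; ordering only matters within a thread.
posts = [
    {"thread": "t1", "author": "alice", "created": 1, "text": "saw the rover land!"},
    {"thread": "t2", "author": "carol", "created": 2, "text": "new album is out"},
    {"thread": "t1", "author": "bob",   "created": 3, "text": "@alice amazing, right?"},
]

threads = defaultdict(list)
for p in posts:                  # posts may arrive in any order
    threads[p["thread"]].append(p)

# Chronology holds inside each thread; the threads themselves appear
# in arbitrary order (here: whichever thread we encountered first).
for tid, msgs in threads.items():
    msgs.sort(key=lambda p: p["created"])
    print(tid, [m["author"] for m in msgs])
```

A late-arriving post only perturbs its own thread, so the top-level timeline can stay monotonic.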
The moral of the story is that the qualities that make Twitter interesting — its mix of conversation, discovery, and one-to-many communication — are direct consequences of its centralized architecture. Without the centralization you can still have something interesting, but it’s a different thing.
I’d love to be proven wrong.
Back in the early 1990s, when you went online, you either dialed up a local BBS or you used a national service like Prodigy or America Online. These services each had their own user interfaces and content and jargon and there was no easy way to communicate between them. If you were on one and your friends were on another, you had to get different friends.
You could connect to the Internet if you knew how (I found a university library with a public VAX terminal I could dial into), but there was no World Wide Web yet so there wasn’t much to do. When the web finally arrived, the walls around the Prodigy and AOL gardens crumbled. They just couldn’t keep up. They became irrelevant. CompuServe, GEnie, Prodigy, and the rest all disappeared and AOL became a dumb pipe. But before that happened, I remember trying to explain the web to my parents, who loved their AOL, and getting blank stares. Why couldn’t they see how incredible, how game-changing this new thing was?
Today, App.net is getting the same blank stares, and worse. Anil Dash echoed Tess Rinearson in calling it a “country club”; others have alluded to its slightly-less-than-diverse demographics. Most of this criticism stems from a perception of the service as a Twitter clone that costs money. Which is totally fair because right now, that’s all it is. But it’s also a bit like calling the web in 1993 an AOL clone for rich white college students. Fair, but entirely missing the point.
Let’s back up for a sec and consider the main components of a typical social architecture:
1. The social graph: who follows whom, which determines the audience when a user publishes something.
2. Publishing: a user puts something out into the world (tweet, status update, blog post, checkin, baby bunny).
3. Aggregation: each user receives a stream (timeline, dashboard, news feed) of the things their friends have published.
Twitter and Facebook have happily provided #1, subject to various restrictions, rate limits, and arbitrary shutoffs. Any photo sharing site, blogging engine, or even RSS feed provides #2. But if I want to start a new social app, even if I piggyback on existing social graphs and publishing platforms, I still have to come up with #3 on my own — and that’s where it gets tricky. That’s what Instagram did: they built #2 and #3 while bootstrapping #1 off of Facebook and Twitter, and it was such a monumental achievement that they sold the company a year later for the cost of the first two Mars rovers. Scaling is that hard.
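Those three components fit in a few lines of Python if you ignore scale entirely. This is a toy sketch (all names invented) of the naive “fan-out on write” design, where #3 is implemented by copying each new post into every follower’s stream — the part that stops being a few lines the moment an author has millions of followers:

```python
from collections import defaultdict

# 1. The social graph: who follows whom.
follows = defaultdict(set)      # follower -> set of followees
# 2. Publishing: each user's own posts.
published = defaultdict(list)   # author -> list of items
# 3. Aggregation: the per-user stream, filled in at publish time.
streams = defaultdict(list)     # reader -> list of (author, item)

def follow(follower, followee):
    follows[follower].add(followee)

def publish(author, item):
    published[author].append(item)
    # Fan-out on write: deliver to every follower's stream. Trivial
    # here; brutally hard once followers number in the millions.
    for reader, followees in follows.items():
        if author in followees:
            streams[reader].append((author, item))

follow("bob", "alice")
publish("alice", "baby bunny")
print(streams["bob"])   # [('alice', 'baby bunny')]
```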
How many great ideas for socially-aware apps or services haven’t been built because there’s no common, open infrastructure to build them on?
Twitter could have become that infrastructure if the advertising people hadn’t won. Imagine if 140 characters of flat text were only one of the things that a tweet could be. What if when you added a photo to Flickr, say, your Flickr account “tweeted” (on your behalf) a block of data, tagged as a Flickr photo? People reading your stream from a Twitter client would never see this, because Twitter clients only know how to display text-based tweets. But a Flickr client? It would see just the Flickr data, allowing it to build an aggregated photo stream using Twitter as the plumbing. Now we have the equivalent of Instagram, and we didn’t have to build or scale or maintain any social networking infrastructure.
But Twitter has made it abundantly clear — or at least firmly vague — that they have no interest in being anyone’s plumbing. Twitter is for tweets, and tweets are one thing only. But with App.net as the back end, anything is possible — and not just social publishing and aggregation. Devices in your home like your security system or your TiVo or your sprinkler timer could publish their own feeds, and then you could have a single app that monitored all of them. You could turn iTunes Store release data into App.net feeds and follow your favorite bands to hear when new albums come out — and this extra data wouldn’t pollute your main social timeline, because it would all be tagged by data type for clients to filter to their liking.
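Here’s roughly what that tagging-and-filtering could look like. This is a hypothetical sketch — the reverse-DNS type names and the feed shape are my invention, not App.net’s actual data format — but it shows why typed items wouldn’t pollute anyone’s timeline: each client simply filters for the types it knows how to display.

```python
# Hypothetical typed feed: every item is tagged with a data type.
feed = [
    {"type": "net.app.text",       "body": "lunchtime!"},
    {"type": "com.flickr.photo",   "body": {"url": "https://example.com/p.jpg"}},
    {"type": "com.tivo.recording", "body": {"show": "Nova"}},
]

def client_view(feed, wanted_types):
    """A client's timeline: only the item types it understands."""
    return [item for item in feed if item["type"] in wanted_types]

# A text-only client never sees the photo or TiVo data; a photo client
# sees only the photos, using the same feed as plumbing.
print(client_view(feed, {"net.app.text"}))
print(client_view(feed, {"com.flickr.photo"}))
```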
Of course, none of this is likely to come true as long as App.net costs $50 per year per account and another $50 for developer access. I can’t imagine it will keep this revenue model forever, though. Maybe users would pay per data source they publish from, or developers would pay per user and recoup that cost in app sales. Ideally App.net would adopt a federated architecture, so I can run my own node if I have the interest and resources. But I don’t want it ever to be free, because as we’ve seen with Twitter, free pipes tend to make the pipe owners get possessive about the stuff that’s in the pipes.
So let’s take back our stuff. I love Twitter’s product, but I believe it’s on the path of Prodigy and CompuServe: so desperate not to become a dumb pipe like AOL that it will soon become nothing.
Daughter, 3: Daddy, the rover landed!
Me: It did! Lots of smart people made that happen by using their brains and working really hard for a long time. It’s called science.
Me: When you get bigger, you can learn science and do amazing things too.
Daughter: And chew gum.
Me: Yes, and chew gum.
You’d think paying $80 for a piece of software would earn you the right not to be treated with contempt by its publisher, wouldn’t you? Well, Parallels now has “in-product notifications” that can’t be disabled. Ads, in other words.
The justification is that the notifications are used for important things, like bug fix updates, therefore they can’t be turned off. Which, of course, is complete nonsense. That story is how you sell ads to sponsors, not how you sell a product to users. What’s actually happening is that Parallels is abusing a critical information channel by stuffing paid content into it, and then pretending it’s not their fault. It’s like running ads over the Emergency Broadcast System and claiming you have no choice because it’s for emergencies.
> [W]e occasionally share special offers from Parallels or other third party companies who provide special deals for our customers.… However, because customers need to receive important product information, there is not a mechanism for customers to completely disable notifications.
“Need”? Hmm, I think I read something about “needs” once, in a psych textbook or somewhere.
Yup, there it is. No problem then.