The Broken Permissions Model in Android Apps as Illustrated by Facebook

A couple days ago I was informed that the Facebook app on my Samsung Galaxy S3 wanted to update. But it needed me to agree to some additional permissions for the app. I clicked the button to see what they were and was greeted with this:


I was more than a little surprised by the things Facebook expected me to agree to let it access on my phone. Let’s be honest, that’s a pretty invasive list of permissions to grant while simply trusting that Facebook will do no harm.

So I decided not to upgrade.

Here’s the thing, though. While I was originally angry with Facebook (I still am to some degree), I realized that Google is to blame here as well. They’ve developed this “all or nothing” permissions model. It’d clearly be more friendly to the user if every one of those permissions had an associated checkbox. That would allow me to choose the things which are reasonable and uncheck those that are not. The price, of course, is that I wouldn’t get the application’s full feature set. But maybe I don’t need or want all those features anyway.

I just want to post cat pictures and stuff. Let’s leave my SMS messages and wireless network connections out of it, OK?

This “take it or leave it” system really doesn’t allow for that use case.

I’d remove the app entirely, but I do use the 2-factor authentication codes that it generates. So I’d need to find an alternative way of getting those.
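For what it’s worth, the codes the app generates appear to be standard TOTP (RFC 6238), so any implementation fed the same shared secret would produce identical codes. A minimal sketch (the hard part, which I’m assuming away here, is getting at the shared secret in the first place):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, digits=6, step=30):
    """Generate a TOTP code (RFC 6238) from a shared secret."""
    if t is None:
        t = int(time.time())
    counter = t // step
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at time 59 yields "94287082"
print(totp(b"12345678901234567890", t=59, digits=8))  # -> 94287082
```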

If this was a desktop app, I could at least run it inside a Virtual Machine and manage what it has access to. Maybe we should expect next generation phones, as they’re going to be more and more powerful, to offer similar virtualization? Seems like the wrong solution to me, but I wouldn’t be at all surprised to see it.

Posted in Uncategorized | 9 Comments

Additions to a Fresh Windows 7 Installation

I recently found myself shuffling computers around a bit.  And since it’s useful to have a functional Windows box on hand, I installed Windows 7 on an older desktop in my office.  (Installing from an original Win7 DVD was entertaining–the number of updates required to bring it current was impressive.)  It occurred to me that I’ve installed Windows 7 more than a few times since it came out and I should jot down a list of all those little (and some big) things I end up installing during the first few days of breaking in a new Windows box.

So without further delay, here’s my annotated list of what gets installed:

  • Google Chrome: cross-machine browser sync rocks my world, and since extensions sync too that means I get LastPass, Ghostery, and AdBlock Plus automatically
  • Microsoft Security Essentials: basic free virus and malware protection
  • Mozilla Firefox: because it’s the next best thing to Chrome, and occasionally a site requires it
  • Dropbox: great for cross-machine file sync
  • VirtuaWin: a simple but very effective virtual desktop add-on
  • Ctrl2Cap: because the caps lock key is stupid
  • IZArc: the best free archive tool around
  • VLC: free media player that groks nearly every file format
  • PuTTY: because you need to SSH to a Linux box for Real Work anyway
  • ImgBurn: free easy CD/DVD burner
  • WinSCP: to copy files to/from non-Windows boxes

I’ll try to update this list as I come across more.  But that’s it for now.

Are there essential tools that you install on a new build?

Posted in Uncategorized | 15 Comments

C-130 Taking a run at the Rim Fire

Shot from the Pine Mountain Lake Marina earlier today (August 22nd).

C-130 Making a Drop Run

The next photo is a picture of a plume that blew up east of Pine Mountain Lake Airport.

Plume East of PML Airport

Posted in Uncategorized | Leave a comment

Aircraft Fighting the Rim Fire, seen from Pine Mountain Lake

Long time no blog.  With the Rim Fire raging up here, I’ve been active on Facebook and Twitter, though.

We shot some pictures of the fire fighting aircraft this evening from the Pine Mountain Lake Marina before dinner.

[Photo gallery: DC-10 Fire Bomber (two shots), Closer View of Smoke Clouds, C-130 Against Smoke Clouds, C-130 Fire Bomber, IMG_7502, Helicopter with Water, Smoke Clouds]

Posted in flying | 1 Comment

Seeking Sucks: Spoiled by SSDs

I’m in the process of rebuilding full-text indexes for a good-sized document collection that lives in a sharded MongoDB cluster. The funny thing is that I don’t really use MongoDB that much. I mean, we put data into it day after day, but I don’t personally have to interact with it that often. For this particular use case it “just works” the vast majority of the time, so I don’t have to think about it.

I like that.

But this particular task involves slurping ALL the data out of that cluster and onto a cluster of sharded Sphinx servers so I can re-index the roughly 3 billion documents. That’s all well and good, but since our MongoDB cluster isn’t terribly performance sensitive, it is built on old-fashioned (am I allowed to use that phrase?) spinning disks. And you know what that means, right?

Yeah, seek time matters. A lot.

If this was hitting our production MySQL clusters, I wouldn’t care nearly as much. Those all use one flavor or another of flash storage. In fact, we’ve been using SSDs long enough and in enough places that I’m spoiled at this point. I sort of cringe every time I have to deal with disk seeks. That’s so five years ago.
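Some back-of-envelope math shows why. The IOPS figures below are rough, typical numbers (my assumption, not a benchmark), and they ignore sharding parallelism, but the gap is the point:

```python
# Why random seeks dominate at 3 billion documents: at spinning-disk
# random-read rates, a full pass takes months; at SSD rates, hours.
DOCS = 3_000_000_000

def days(iops):
    """Days to touch every document at a given random-read rate."""
    return DOCS / iops / 86_400  # 86,400 seconds per day

print(f"spinning disk (~150 random IOPS): {days(150):,.0f} days")
print(f"SSD (~50,000 random IOPS):        {days(50_000):,.1f} days")
```

Which is exactly why converting random fetches into mostly sequential reads (below) was worth the trouble.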

Anyway, I knew this would be an issue so I tried to be clever. I dumped all the document IDs from Mongo in advance, doing so in a way that gives them to me in “disk order” so that when I later had to fetch them for indexing, I’d be able to minimize the seeking and hopefully maximize the throughput.

Well, that plan kind of half worked. You see, I had made the assumption that “disk order” on one member of a replica set would be the same as “disk order” on another member of the set. That appears not to be the case. So I had to work around it by telling the indexer processes not to use the mongos routing server and instead talk directly to the mongod on the specific server(s) from which I originally fetched the IDs.
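The pattern looks roughly like this. The helper names are mine, and the pymongo calls in the comments are illustrative rather than lifted from the real indexer; the demo at the bottom uses an in-memory stand-in for the mongod:

```python
from itertools import islice

# Phase 1 would run against the specific mongod whose disk order we trust:
#   ids = [d["_id"] for d in coll.find({}, {"_id": 1}).sort("$natural", 1)]
# Phase 2 then fetches documents in that same order, in chunks, from the
# SAME mongod (not through mongos), so reads stay mostly sequential.

def chunked(iterable, size):
    """Yield lists of up to `size` items, preserving order."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def fetch_in_disk_order(ids, fetch_batch, batch_size=1000):
    """Fetch documents chunk by chunk in the dumped (disk) order.
    `fetch_batch` stands in for coll.find({"_id": {"$in": batch}})."""
    for batch in chunked(ids, batch_size):
        for doc in fetch_batch(batch):
            yield doc

# Tiny in-memory stand-in for the mongod:
store = {i: {"_id": i, "body": f"doc {i}"} for i in range(10)}
docs = list(fetch_in_disk_order(range(10),
                                lambda b: [store[i] for i in b],
                                batch_size=4))
print([d["_id"] for d in docs])  # IDs come back in dump order
```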

I look forward to a few more years from now, when we really do view spinning disks as “the new tape” and use them mainly for archival tasks.

Posted in craigslist, mongodb, nosql, sphinx, tech | 10 Comments

When Google Voice Transcription Goes Wrong…

Hilarity ensues…

Hey there, I decided to try to catch up with you and I just got back from him. So I got it yet. My My Sweet diarrhea certainly help. I can tell you a little more about it later and got a couple of things to touch base with you on. I know you’re kind of busy concert in the bay area today, but if you get. If we could chat. So anyways. Things are fine here. Just leave me. Cats and what they Hi Sweetie. So, yeah. On the process coming along so I guess. Later on out. I’ll straining. But anyways, I will hopefully catch up with you sometime soon. Okay bye.

I guess the lesson here is that adding “My Sweet diarrhea” to just about anything would make it funny.

(This was all prompted by Facebook posting about a friend’s “best voicemail transcript ever”, though hers was from Vonage.)

Posted in fun, wtf | 25 Comments

Handling Database Failover at Craigslist

There has been some interesting discussion on-line recently about how to handle database (meaning MySQL, but really it applies to other systems too) failover. The discussion that I’ve followed so far includes Baron’s posting and Peter’s follow-up, both discussed below.

As Rick James (from Yahoo) notes in the comments on Baron’s posting, they take the same approach that I still advocate and which we use at Craigslist: no automated failover. Get a human involved. But try to make it as easy for that human to do two very important things:

  1. Get a clear picture of the state of things
  2. Put things in motion once a choice has been made

It’s that simple.

Peter’s posting gets at the heart of the matter for me. While it’d be fun (and scary) to try and build a great automated system to detect failures and Do The Right Thing, it’s also a really hard problem to solve. There are lots of little gotchas and if you get it wrong, the amount of pain you can bring is potentially enormous.

At Craigslist, we share some similarities with Yahoo. We own our own hardware and it is installed in space that we manage. We try to select good hardware and take good care of it. And things still fail (of course). But the failures are not so frequent that we’re constantly worried about the next MySQL master that’s going to die in the middle of the night.

Rick pointed at MHA in his comment. I need to have a look at it and/or point some of my coworkers at it. I didn’t realize it existed and spent a couple weeks creating a custom tool to help with #1 above. In the event of a master failure, it looks at all available slaves, finds the most suitable candidates, presents a list, and allows the operator to choose a new master. Once selected, the script then tries to automate as much of the switching as possible.
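The heart of that candidate-selection step can be sketched like this. The field names mirror MySQL’s SHOW SLAVE STATUS output, but the hosts and positions are made-up examples, not our real config, and the real tool would of course query each slave live:

```python
# Rank slaves by replication position so the operator sees the most
# caught-up candidates first. This is step 1 ("get a clear picture");
# the human still makes the final call.

def binlog_position(slave):
    """Sort key: binlog file sequence number, then position in that file."""
    file_seq = int(slave["Master_Log_File"].rsplit(".", 1)[1])  # 'mysql-bin.000042' -> 42
    return (file_seq, slave["Read_Master_Log_Pos"])

def rank_candidates(slaves):
    """Most caught-up slaves first."""
    return sorted(slaves, key=binlog_position, reverse=True)

slaves = [
    {"host": "db2", "Master_Log_File": "mysql-bin.000042", "Read_Master_Log_Pos": 1024},
    {"host": "db3", "Master_Log_File": "mysql-bin.000042", "Read_Master_Log_Pos": 9001},
    {"host": "db4", "Master_Log_File": "mysql-bin.000041", "Read_Master_Log_Pos": 5000},
]
print([s["host"] for s in rank_candidates(slaves)])  # ['db3', 'db2', 'db4']
```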

Though I’ve stared at the code quite a bit, tried to reason about the ways it might fail, and feel pretty good about it, we’ve never actually used it. And that’s OK, really. We have a nicely documented playbook for that sort of situation already, and it has served us well. And, as I said, it doesn’t happen that often. All the script does is try to automate existing practice so that we can turn 10-20 minutes of “read-only” time into less than 5 minutes.

There’s a point at which you start to wonder whether that savings is worth the risk of a tricky-to-spot bug finding its way in and turning 20 minutes into many hours of late-night pain. I’m not sure where I stand on that in this particular case. Something like Galera Cluster for MySQL is interesting too, but I kinda feel like it pays not to be an early adopter there either. If we had a lot of problems with master failures, I’d surely feel differently.

Posted in craigslist, mysql, tech, yahoo | 12 Comments

Coursera on-line classes and the future of learning…

Ten years from now a “college education” is going to look radically different from when I went to school. And I think that’s a good thing, especially when you consider the skyrocketing costs of “higher education” and the miserable job market that recent graduates have faced.

This all started for me when I first saw MIT’s Open Courseware and then when Stanford offered a few Computer Science courses on-line. I had actually signed up for Andrew Ng’s Machine Learning class but never made time in my schedule to participate. Since then, Andrew and Daphne Koller have kicked things up a notch by starting Coursera. They’ve built a platform that allows instructors to distribute their courses to many, many people on-line at a very low cost.

If you haven’t seen it, take a minute and browse the list of courses. There are 124 at the time of this writing, and that’s up from just a few weeks ago. I’ve already signed up for several (check my Coursera Profile), one of which starts tomorrow.

By figuring out how to make great instruction available to literally millions of people worldwide every year, and by solving some of the harder problems associated with class sizes a factor of 100 larger than what most instructors are used to handling (even with Teaching Assistants), Coursera is on to something–something potentially quite big.

I think the institution of college is about to undergo some very interesting changes. Few of us are able to predict the final outcome, but it’s going to be very interesting to watch–and maybe even more interesting to actually participate! Both Kathleen and I have signed up for some classes. I’m really looking forward to expanding my Computer Science and Programming horizons a bit and trying out a new style of learning and participation.

It’s worth watching Daphne’s TED Talk: what we’re learning from on-line education. I found some very surprising (and inspiring) ideas in there. Coursera is a very promising experiment in education.

Posted in Uncategorized | 6 Comments

iPad Annoyance: File Uploads in Safari

A few weeks ago I got a new iPad and have generally been quite happy with it. When paired with a wireless keyboard, I can even do some basic (lightweight) “work” on it as a remote terminal. But there’s a rather surprising issue that both Kathleen and I have encountered: file uploads from the browser (Safari) are simply not supported.

So if you shoot a picture with the camera, you can’t hop on to your WordPress dashboard, upload the image, and blog about it. At first I thought it was just me (or us) but a bit of searching reveals that this is, in fact, a “feature” of sorts.

One suggestion I’ve seen is to use a third-party browser, such as iCab. While I’m not opposed to paying a few bucks for something that actually works, the comments on that app make me think it’s not quite the solution either.

I guess you could argue that I “just” need an app that uploads to whatever site I’m using at the time, but that’s pretty unrealistic. I’m a bit puzzled why Apple doesn’t let you browse your media libraries (photos and video, at least) to upload to web sites from mobile Safari. It strikes me as a very, very common need.

Not every web site is “big” enough to be able to afford developing its own app for the iPhone and iPad.

So this makes me wonder if I’m just missing something. Is there a reasonably well known workflow that accomplishes what I want? If you’re using an iPhone or iPad, how do you handle this?

Posted in other, wtf | 28 Comments

iPad and Wireless Keyboard

In the last year or so, I’ve been working to reduce my computer inventory so that I spend less time administering things and more time actually using them. As part of that, Kathleen and I recently got new iPads (the retina display is amazing).

After a bit of tinkering around, it occurred to me to try pairing it with the Apple Wireless Keyboard. It’s clearly not the same as a laptop but it’s definitely a replacement for a netbook or low-end notebook. Kathleen took to it right away, so now we each have one.

It’s going to be interesting to see how much I’m able to do with an iPad, wireless keyboard, and decent Internet access.

Posted in Uncategorized | 19 Comments