Nov 10, 2015

A while ago, Google created a two-factor authentication scheme called U2F. The general idea is as follows. You have a little USB dongle that you can register with sites. When you enable U2F on a site, the dongle gives that site a public key and a key handle, and that dongle is the only device that can use that key handle to sign things with the corresponding private key. So from then on, websites can hand over the key handle and a challenge, and if they get back a valid signature they know that whoever’s logging in has the dongle you used to sign up.
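To make the flow concrete, here is a toy Python sketch of the logic just described. This is my own illustration, not the real FIDO wire protocol, and ToyDongle is a stand-in for the hardware (a real device wraps its keys rather than keeping a table like this):

    import os
    from ecdsa import SigningKey, NIST256p

    class ToyDongle:
        """Toy model of the registration/login flow; not the real U2F protocol."""
        def __init__(self):
            self.keys = {}                               # real devices wrap keys instead

        def register(self, app_id):
            sk = SigningKey.generate(curve=NIST256p)     # fresh key pair per site
            handle = os.urandom(16)
            self.keys[(app_id, handle)] = sk
            return sk.get_verifying_key(), handle        # the site stores both

        def sign(self, app_id, handle, challenge):
            return self.keys[(app_id, handle)].sign(challenge)

    # the website's side
    dongle = ToyDongle()
    pub, handle = dongle.register(b"example.com")        # at signup
    challenge = os.urandom(32)                           # at login
    assert pub.verify(dongle.sign(b"example.com", handle, challenge), challenge)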

A few days ago, someone released a U2F implementation based on a Teensy microcontroller and it made Hackaday – and they added a disclaimer warning that doing anything related to crypto by yourself is a bad idea and that using XOR encryption for the key handle is insecure. There was a bit of pushback against this in the comments. After all, so reasoned the Hackaday commenters, security isn’t that hard and XOR isn’t a big deal, right? Surely this warning is just insulting, gratuitous negativity from the Hackaday editors.

Spoiler alert: XOR is in fact horribly insecure in this application, and this code is broken and should not be used by anyone.

So, there are a couple of obvious issues here. Firstly, XOR is not a secure form of encryption. If you know the plaintext, you can obtain the key simply by XORing the plaintext and ciphertext together. You also know that the XOR of two ciphertexts is the XOR of the corresponding plaintexts, which is exploitable in some circumstances. But the author of this code doesn’t see that as a big deal. After all, so he argues, the private key for each client is random, so all you can get is the XORs of some random data that you don’t know – not very useful. I haven’t looked into this aspect too much because it involves deep crypto knowledge.
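As a toy illustration of the known-plaintext problem (Python, with made-up byte values):

    # Known-plaintext key recovery: XORing plaintext with ciphertext yields the
    # key, which then decrypts every other key handle encrypted under it.
    xor_key   = bytes.fromhex("a1b2c3d4")    # the device's secret
    plaintext = bytes.fromhex("00112233")    # a private key you somehow learned
    cipher    = bytes(p ^ k for p, k in zip(plaintext, xor_key))
    recovered = bytes(c ^ p for c, p in zip(cipher, plaintext))
    assert recovered == xor_key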

The second issue is that XOR is malleable and there’s no integrity check. That means that if you flip a bit in the ciphertext, it flips the corresponding bit in the plaintext, and the code has no way of detecting this modification and will quite happily sign things with the modified key. Still not a big deal though, right? After all, surely just being able to flip bits in the key is not going to let us deduce its actual value using – say – a puny 256 signing requests, one per bit, with a trivial amount of work?
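Concretely, the malleability looks like this (again a toy Python illustration with made-up values):

    # One flipped ciphertext bit becomes one flipped plaintext bit, and nothing
    # in the decryption can detect that it happened.
    xor_key = bytes.fromhex("a1b2c3d4")
    secret  = bytes.fromhex("00112233")
    cipher  = bytearray(s ^ k for s, k in zip(secret, xor_key))
    cipher[0] ^= 0x01                                  # tamper in transit
    tampered = bytes(c ^ k for c, k in zip(cipher, xor_key))
    assert tampered[0] == secret[0] ^ 0x01             # the flip survived decryption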

Turns out it does let us do exactly that. Every time we flip a bit, we’re either adding 2^n to the private key if the bit was cleared before, or subtracting 2^n if it was set before – and we can tell which. We can calculate a compensating factor and try both adding and subtracting it from the signature. One of these will cause the signature to validate correctly, and by looking at which one worked we can recover that bit. By repeating this with each bit, we can recover the private key in 256 signing requests to the device and 512 local signature verifications, then recover the XOR key, decrypt key handles from any other services the user has signed up with and authenticate to them using U2F.
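Here is a rough, self-contained simulation of the idea in Python using the python-ecdsa package. fake_device() stands in for USB signing requests to the Teensy, the XOR scheme mimics the firmware’s, and all the names are mine rather than the PoC’s. (In rare cases d ± 2^n falls outside the valid scalar range and signing would fail; a real attack would simply have to tolerate that, which I gloss over here.)

    import os
    from ecdsa import SigningKey, VerifyingKey, NIST256p, BadSignatureError

    curve, G, order = NIST256p, NIST256p.generator, NIST256p.order
    xor_key = int.from_bytes(os.urandom(32), "big")    # device secret
    d = SigningKey.generate(curve=curve).privkey.secret_multiplier
    pub_point = G * d                                  # known from registration
    handle = d ^ xor_key                               # the "encrypted" key handle

    def fake_device(handle, challenge):
        # the dongle: XOR-"decrypt" the handle and blindly sign with the result
        return SigningKey.from_secret_exponent(handle ^ xor_key,
                                               curve=curve).sign(challenge)

    recovered = 0
    for n in range(256):
        sig = fake_device(handle ^ (1 << n), b"challenge")
        # Flipping bit n turned d into d + 2^n (bit was 0) or d - 2^n (bit was
        # 1), so the signature verifies under pub_point + 2^n*G or - 2^n*G.
        plus = VerifyingKey.from_public_point(pub_point + G * (1 << n),
                                              curve=curve)
        try:
            plus.verify(sig, b"challenge")             # bit n of d was 0
        except BadSignatureError:
            minus = VerifyingKey.from_public_point(
                pub_point + G * (order - (1 << n)), curve=curve)
            minus.verify(sig, b"challenge")            # bit n of d was 1
            recovered |= 1 << n

    assert recovered == d    # and recovered ^ handle now yields the XOR key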

I’ve ported his U2F implementation to run as a library and written a proof-of-concept that demonstrates this attack against it. You can find it here. It should be possible to extend this to actual Teensy devices and probably even to carry out the attack remotely from websites the user visits; this is left as an exercise for the reader. (I don’t think malicious websites will even need to request a key handle of their own – in this implementation the part of the key handle tying it to a particular website is malleable too, so an attacker can take a key handle from any other website the user has signed up with and convert it into one for their website.)

Posted at 6:44 pm

  13 Responses to “Breaking a Teensy U2F implementation: why you shouldn’t write your own crypto”

  1. Nicely done. I’m surprised you dismissed the XOR-key attack as requiring ‘deep crypto knowledge’ when your own attack demonstrates arguably more knowledge of the internals of ECC. 🙂

    Based on reading the implementation code, each ECC private key is a series of randomly generated integers between 0 and n, where the n for each integer is less than the maximum for a word, and determined by the curve being used. Knowing that, you can simply try out candidates for each word of the secret key, XOR it with each ciphertext you have, and reject a candidate if any of them decode to a number outside the allowable range. Each word in each ciphertext thus eliminates a fraction of the keys corresponding to the proportion of the range of values that are invalid. I think this would require fewer keys than your attack, but it’s hard to say without doing the math.
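    Something like this sketch (Python; WORD_MAX here is a made-up stand-in for the curve-dependent per-word bound):

        def surviving_candidates(ciphertext_words, candidates, word_max):
            # keep only key-word guesses under which every ciphertext word
            # decrypts to a value inside the allowable range [0, word_max)
            return [k for k in candidates
                    if all((c ^ k) < word_max for c in ciphertext_words)]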

    • Thanks! The trouble is that if I’m understanding this particular form of ECC correctly, the private key is a single integer between 1 and n, and n is large enough that only something like 1 in 2^32 possible 256-bit values is not a possible private key. That doesn’t seem to leave much scope for recovering the XOR key in a reasonable number of ciphertexts unless there’s some trick I’m missing.

      (And yeah, I know my SSL certificate expired a while back. Sorry about that. I had some trouble getting a new one issued and have been too busy to look into it much.)

  2. By the way, your SSL certificate has expired.

  3. Thank you for this work. I have updated my page to link to this. I should have learned more about the signing and verification part.

    • To be honest, the most important lesson here probably isn’t the exact details of how signing and verification work, but simply this: if you really, absolutely, unavoidably need to do symmetric encryption yourself, use someone else’s well-tested and reviewed implementation of authenticated encryption, like the one in NaCl, or expect to get burned. Just using a secure cipher isn’t enough – if you’d used Salsa20, which is the cipher in NaCl, my attack would still work – and implementing authenticated encryption yourself is dangerous. It doesn’t even matter much what you’re encrypting or why: if you’re not using authenticated encryption, there’s probably some subtle attack that only cryptographers can spot, and they’re not going to bother looking too hard, because it’s not surprising or interesting to them.

      If you’d used authenticated encryption with a secure key, then you could be sure that the key handle would either decrypt to something you’d encrypted or fail to decrypt at all, and the exact, subtle details of how flipping bits let me extract the private key wouldn’t matter, because I wouldn’t have been able to do that in the first place.
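      For instance, with PyNaCl’s SecretBox (just one way to do it; nothing here is specific to U2F):

          import nacl.secret, nacl.utils

          key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
          box = nacl.secret.SecretBox(key)
          handle = box.encrypt(b"the private key")    # XSalsa20-Poly1305

          tampered = bytearray(handle)
          tampered[-1] ^= 0x01                        # flip one ciphertext bit
          box.decrypt(bytes(tampered))                # raises nacl.exceptions.CryptoError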

      This isn’t the only nasty trap waiting to snare you when dealing with crypto, of course, but it’s certainly one that’s worth avoiding.

      • Thanks for your input.

        Have you looked at how Yubico implements it:

        https://www.yubico.com/2014/11/yubicos-u2f-key-wrapping/

        Their approach is that they never let the key leave the device at all. Do you foresee anything I might mess up when implementing that method?

        • Oh, probably plenty of things. For example, your current RNG looks fishy and even a small RNG bias is enough to break ECDSA signing, making it possible to calculate the ECDSA private key from the public key and a few signatures. Subtle bugs like not actually checking the MAC could pass all tests but still cause sneaky security issues. Also, you may need to try more than one nonce to get a valid private key and I’m not sure what happens if you get that wrong.

          Now, if you implement Yubico’s scheme exactly right with no modifications or shortcuts, a securely-generated device secret, and a secure, robust, unbiased RNG (which is much easier said than done) I can’t see any obvious avenues to break it. That doesn’t mean they don’t exist – I’m no crypto expert and Yubico are doing some unusual things there which aren’t obviously secure – just that I can’t spot them. And again, this is not easy to do. Random number generation alone is a complex, fiddly area full of pitfalls and traps, and writing code of any kind that’s free of bugs is almost impossible.
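          To sketch what I understand their scheme to look like (Python; the exact HMAC inputs and lengths here are my guesses, not Yubico’s spec):

              import hmac, hashlib, os

              DEVICE_SECRET = os.urandom(32)   # generated once, never leaves the device

              def make_handle(app_id: bytes) -> bytes:
                  nonce = os.urandom(32)
                  mac = hmac.new(DEVICE_SECRET, app_id + nonce, hashlib.sha256).digest()
                  return nonce + mac           # key handle; contains no key material

              def unwrap(app_id: bytes, handle: bytes) -> bytes:
                  nonce, mac = handle[:32], handle[32:]
                  expected = hmac.new(DEVICE_SECRET, app_id + nonce,
                                      hashlib.sha256).digest()
                  if not hmac.compare_digest(mac, expected):
                      raise ValueError("bad key handle or wrong app id")
                  # derive the private key on demand; a real implementation must
                  # check the result is a valid scalar (1 <= k < curve order) and,
                  # if not, pick a fresh nonce at registration - the retry issue above
                  return hmac.new(DEVICE_SECRET, b"key" + app_id + nonce,
                                  hashlib.sha256).digest()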

          • After thinking about it: the encryption can go wrong in so many ways, and the button on the key is the most important security aspect of a U2F key.

            If someone gets access to your hardware: game over. It’s not a biometric device; they can just plug it in and use it anywhere.

            If someone tries to break things automatically using desktop/web software without pressing the hardware button, the hardware won’t respond, so it will be almost impossible to send the multiple requests needed to get meaningful data to break it.

            I think the first change I must make is to either implement a button approach, or implement a buttonless approach with a requirement to replug the USB key for every login request.

          • Come to think of it: the buttonless approach with a replug for every request may not be secure if someone can reset the USB bus power and turn it back on.

        • For the button approach, pins 15-19 on the Teensy LC can support touch sensing. You shouldn’t need any additional hardware; just touch the plated vias.

