A while ago, Google created a two-factor authentication scheme called U2F. The general idea is as follows. You have a little USB dongle that you can register with sites. When you enable U2F on a site, the dongle gives that site a public key and a key handle, and that dongle is the only device that can use that key handle to produce signatures that verify against that public key. So from then on, websites can hand over the key handle and a challenge, and if they get back a valid signature they know that whoever’s logging in has the dongle you used to sign up.
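That back-and-forth can be sketched as a toy implementation – here using Schnorr signatures over plain modular arithmetic as a stand-in for the ECDSA-over-P-256 that real U2F specifies, and with the dongle simply remembering its keys rather than wrapping them into the handle. The names and structure are mine, not the U2F wire format:

```python
# Toy sketch of the U2F registration/sign-in flow. Schnorr signatures over
# Z_p^* stand in for ECDSA over P-256; all names are illustrative.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; fine for a toy group
G = 3            # generator for the demo

def H(*parts):
    """Hash to an exponent (Fiat-Shamir challenge)."""
    h = hashlib.sha256()
    for x in parts:
        h.update(str(x).encode())
    return int(h.hexdigest(), 16) % (P - 1)

class Dongle:
    def __init__(self):
        self.keys = {}   # key_handle -> private key (toy: kept in RAM)

    def register(self):
        d = secrets.randbits(120)
        handle = secrets.token_hex(8)
        self.keys[handle] = d
        return handle, pow(G, d, P)   # (key handle, public key) for the site

    def sign(self, handle, challenge):
        d = self.keys[handle]
        k = secrets.randbits(120)
        r = pow(G, k, P)
        return r, (k + H(r, challenge) * d) % (P - 1)  # Schnorr signature (r, s)

def site_verify(pub, challenge, sig):
    r, s = sig
    return pow(G, s, P) == (r * pow(pub, H(r, challenge), P)) % P

dongle = Dongle()
handle, pub = dongle.register()        # the site stores (handle, pub)
challenge = secrets.token_hex(16)      # later, at login time
assert site_verify(pub, challenge, dongle.sign(handle, challenge))
```

Only the dongle can produce a valid signature for that handle, so a valid response to a fresh challenge demonstrates the dongle is present.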
A few days ago, someone released a U2F implementation based on a Teensy microcontroller and it made Hackaday – and the editors added a disclaimer warning that doing anything related to crypto by yourself is a bad idea and that using XOR encryption for the key handle is insecure. There was a bit of pushback against this in the comments. After all, so reasoned the Hackaday commenters, security isn’t that hard and XOR isn’t a big deal, right? Surely this warning is just insulting, gratuitous negativity from the Hackaday editors.
Spoiler alert: XOR is in fact horribly insecure in this application, and this code is broken and should not be used by anyone.
So, there are a couple of obvious issues here. Firstly, XOR is not a secure form of encryption. If you know the plaintext, you can obtain the key simply by XORing the plaintext and ciphertext together. You also know that the XOR of two ciphertexts equals the XOR of the two plaintexts, and that’s exploitable in some circumstances. But the author of this code doesn’t see that as a big deal. After all, so he argues, the private key for each client is random, so all you can get is the XORs of some random data that you don’t know – not very useful. I haven’t looked into this aspect too deeply, because it involves serious crypto knowledge.
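The known-plaintext weakness takes two lines to demonstrate (toy values, not the Teensy firmware’s actual key):

```python
# If you know any plaintext/ciphertext pair, XOR "encryption" gives up its key.
plaintext  = b"example-key-handle-contents!"
key        = bytes([0x5A] * len(plaintext))            # toy repeating key
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# An attacker who knows (or guesses) the plaintext recovers the key directly:
recovered = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
assert recovered == key

# And XORing two ciphertexts cancels the key, leaving plaintext1 ^ plaintext2:
pt2 = b"another-handle-same-length!!"
ct2 = bytes(p ^ k for p, k in zip(pt2, key))
ct_xor = bytes(a ^ b for a, b in zip(ciphertext, ct2))
assert ct_xor == bytes(a ^ b for a, b in zip(plaintext, pt2))
```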
The second issue is that XOR is malleable and there’s no integrity check. That means that if you flip a bit in the ciphertext, it flips the corresponding bit in the plaintext, and the code has no way of detecting this modification and will quite happily sign things with the modified key. Still not a big deal though, right? After all, surely just being able to flip bits in the key is not going to let us deduce its actual value using – say – a puny 256 signing requests, one per bit, with a trivial amount of work?
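Here’s the malleability in miniature, treating the key material as one big integer (a toy sketch, not the actual firmware):

```python
# XOR is malleable: flip a bit of the ciphertext and the same bit of the
# decrypted key flips, with nothing to tell the device it happened.
import secrets

key_stream  = secrets.randbits(256)   # toy stand-in for the firmware's XOR key
private_key = secrets.randbits(256)
ciphertext  = private_key ^ key_stream   # the "encrypted" key handle

n = 42
tampered  = ciphertext ^ (1 << n)     # attacker flips one ciphertext bit
decrypted = tampered ^ key_stream     # device decrypts; no MAC, no error

assert decrypted != private_key
assert decrypted ^ private_key == 1 << n   # exactly bit n changed
```

The device will go on to sign with `decrypted` as if nothing were wrong.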
Turns out it does. Every time we flip a bit, we’re either adding 2^n to the private key (if bit n was cleared before) or subtracting 2^n (if it was set before) – and we can tell which. We can calculate a compensating factor (2^n times the curve’s base point) and try both adding it to and subtracting it from the public key we check the signature against. One of these will cause the signature to validate correctly, and by looking at which one worked we can recover that bit. By repeating this with each bit, we can recover the private key in 256 signing requests to the device and 512 local signature verifications, then recover the XOR key, decrypt key handles from any other services the user has signed up with and authenticate to them using U2F.
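The whole bit-recovery loop can be sketched end to end. This is a toy: Schnorr signatures over plain modular arithmetic stand in for ECDSA over P-256, and the key is 64 bits to keep it fast – but the homomorphism the attack leans on is the same, since flipping bit n turns the public key pub = g^d into g^(d ± 2^n) = pub · g^(±2^n):

```python
# Toy end-to-end sketch of the bit-recovery attack. Schnorr over Z_p^* stands
# in for ECDSA over P-256; a 64-bit key stands in for a 256-bit one.
import hashlib
import secrets

P = 2**127 - 1
G = 3
BITS = 64

def H(*parts):
    h = hashlib.sha256()
    for x in parts:
        h.update(str(x).encode())
    return int(h.hexdigest(), 16) % (P - 1)

def sign(d, msg):
    k = secrets.randbits(120)
    r = pow(G, k, P)
    return r, (k + H(r, msg) * d) % (P - 1)

def verify(pub, msg, sig):
    r, s = sig
    return pow(G, s, P) == (r * pow(pub, H(r, msg), P)) % P

d = secrets.randbits(BITS)    # the device's secret; the attacker never sees it
pub = pow(G, d, P)            # the attacker learned this at registration

def oracle(n, msg):
    """The device signs with whatever key the tampered handle decrypts to:
    flipping ciphertext bit n flips key bit n (XOR, no integrity check)."""
    return sign(d ^ (1 << n), msg)

recovered = 0
for n in range(BITS):
    sig = oracle(n, "challenge")
    g2n = pow(G, 1 << n, P)                      # the compensating factor
    if verify((pub * g2n) % P, "challenge", sig):
        bit = 0                                  # the flip added 2^n: bit was 0
    else:
        # The flip subtracted 2^n: check against pub * g^(-2^n) to confirm.
        assert verify((pub * pow(g2n, P - 2, P)) % P, "challenge", sig)
        bit = 1
    recovered |= bit << n

assert recovered == d   # full private key recovered, one oracle query per bit
```

One signing request per bit, at most two local verifications per bit – which is where the 256 requests and 512 verifications for a real 256-bit key come from.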
I’ve ported his U2F implementation to run as a library and written a proof-of-concept that demonstrates this attack against it. You can find it here. It should be possible to extend this to actual Teensy devices and probably even to carry out the attack remotely from websites the user visits; this is left as an exercise for the reader. (I don’t think malicious websites will even need to request a key handle of their own – in this implementation the part of the key handle tying it to a particular website is malleable too, so an attacker can take a key handle from any other website the user has signed up with and convert it into one for their website.)