Eric Rafaloff

Biased Nonce Generation in PuTTY for NIST P-521 Keys

Fabian Bäumer and Marcus Brinkmann of Ruhr University Bochum disclosed an interesting vulnerability to the oss-security mailing list today. It is being tracked as CVE-2024-31497.

The PuTTY SSH client was designed to produce deterministic secret nonces (k values) under DSA, due to the author's concern about low-quality entropy on Windows in 2001. Deterministic signature generation isn't particularly unusual; in fact, RFC6979 standardized this general idea in 2013. It's a great way to generate signatures when your computing environment doesn't have access to a good source of entropy.

To generate a deterministic k value, PuTTY first computed a 512-bit digest binding the private key x to the message hash H(m), roughly:

    digest512 = SHA512(SHA512(x) || H(m))
Nothing suspicious so far. It's thought to be computationally very hard to invert this function, so our secret key x is safe, and the output looks random for every unique message signed under x. This checks all the boxes for nonce generation.

However, the next step is crucial, and it is where the vulnerability existed. PuTTY would then coerce the 512-bit output of this expression into a multi-precision (MP) integer sized to the modulus. In elliptic curve cryptography, each curve has its own prime modulus. Most of the time programs implement this correctly and it's fine. For the NIST-recommended Weierstrass curves, the moduli are going to be:

    P-192: 192 bits
    P-224: 224 bits
    P-256: 256 bits
    P-384: 384 bits
    P-521: 521 bits
You may have gone through the list and missed the odd one out, and I wouldn't blame you. The last curve's modulus is 521 bits wide, not 512. And therein lies the problem. If you try to coerce a 512-bit value into a 521-bit value, you're always going to get at least nine leading zero bits. It turns out these nonces weren't so randomly distributed after all.

We can see this problematic MP arithmetic play out toward the end of the vulnerable dsa_gen_k function definition. Note the use of digest512, which is 512 bits long. modulus is a function parameter, and its value depends on the curve the caller wants to use. In our case the value is 2^521 - 1.

    /*
     * Now convert the result into a bignum, and coerce it to the
     * range [2,q), which we do by reducing it mod q-2 and adding 2.
     */
    mp_int *modminus2 = mp_copy(modulus);
    mp_sub_integer_into(modminus2, modminus2, 2);
    mp_int *proto_k = mp_from_bytes_be(make_ptrlen(digest512, 64));
    mp_int *k = mp_mod(proto_k, modminus2);
    mp_add_integer_into(k, k, 2);

The problematic operation is the call to mp_mod, which instantiates a new MP integer sized to match modminus2 (521 bits). It then calls mp_divmod_into with proto_k (derived from digest512, so at most 512 bits) and modminus2, storing the remainder in the new MP integer. This 521-bit MP integer now holds a value of at most 512 bits, which is where our leading zeros come from.

mp_int *mp_mod(mp_int *n, mp_int *d)
{
    mp_int *r = mp_make_sized(d->nw);
    mp_divmod_into(n, d, NULL, r);
    return r;
}

After this is all said and done, you're going to end up with a biased distribution due to the leading zeros of k. Things fail catastrophically when k isn't uniformly distributed and an attacker can observe enough tainted signatures. Private key recovery then reduces to the hidden number problem, with plenty of leaked bits to work from. Lattice reduction algorithms have been attacking hidden number problems for a long time, and optimizations and novel attacks are still being discovered. So this sits much closer to the real-risk side of cryptographic attacks than to the merely theoretical side.

As a final note, I thought it was interesting that RFC6979 actually stresses this exact pitfall up front in its introduction:

One characteristic of DSA and ECDSA is that they need to produce, for each signature generation, a fresh random value (hereafter designated as k). For effective security, k must be chosen randomly and uniformly from a set of modular integers, using a cryptographically secure process. Even slight biases in that process may be turned into attacks on the signature schemes.

#cryptography #news