Single Points of Failure in Cryptography #3: Weak or Low Entropy

Entropy is a measure of disorder. Imagine a system as a handful of coins. The more ordered the system, the lower its entropy. Another way to look at entropy is as the number of permissible configurations of a system. A highly ordered system can exist in only a few configurations (all coins must be tails up), so its entropy is low. A completely random system, where many configurations are possible (every coin can land either way), has very high entropy. Why does entropy matter to cryptography?
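
To put numbers on the coin picture: a system with n equally likely configurations carries log2(n) bits of entropy. A minimal sketch in Python:

```python
import math

def entropy_bits(n_configurations: int) -> float:
    """Shannon entropy, in bits, of n equally likely configurations."""
    return math.log2(n_configurations)

# Fully ordered: 8 coins that must all be tails -> exactly 1 configuration.
print(entropy_bits(1))     # 0.0 bits: no uncertainty at all

# Fully random: 8 coins, each heads or tails -> 2**8 configurations.
print(entropy_bits(2**8))  # 8.0 bits: one bit of uncertainty per coin
```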

When you use a key to encrypt your data, the best possible scenario is that an eavesdropper has no way to guess that key. If only a few valid keys were possible, an eavesdropper would need only a few guesses to find the one you used. Naturally, you want all possible keys to be equally likely, so that the eavesdropper has no better than pure chance of guessing correctly. In that sense, entropy is a measure of a key's unpredictability. We want a key drawn from a source with very high entropy: many possible values, all equally likely, and therefore completely unpredictable.
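
To see how cheap that guessing becomes, here is a toy brute-force sketch; the key format and the 1,000-value key space are invented for illustration:

```python
import hashlib

# Hypothetical scenario: the defender's key is known to be one of
# only 1,000 possible values.
secret_key = b"key-0742"
target = hashlib.sha256(secret_key).digest()  # something derived from the key

# The eavesdropper simply tries every candidate.
for i in range(1000):
    guess = f"key-{i:04d}".encode()
    if hashlib.sha256(guess).digest() == target:
        print("recovered:", guess)  # found after at most 1,000 tries
        break
```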

Computers aren’t good at doing unpredictable things. Their predictability is what makes them so well suited to most of the tasks that have driven their success. To create encryption keys that are genuinely random and hard to guess, several different schemes were invented. Many of them are pseudorandom, relying on an algorithm that, given a seed number, produces a sequence of random-looking numbers. But the output is only pseudorandom: the same seed always produces the same sequence, and that is a hint of guessability.
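
That guessability is easy to demonstrate with Python's built-in (non-cryptographic) PRNG: two generators given the same seed produce exactly the same sequence.

```python
import random

# Two generators with the same seed: the "random" sequence is fully
# determined by the seed, which is the crack in pure pseudorandomness.
a = random.Random(1234)
b = random.Random(1234)

print([a.randint(0, 99) for _ in range(5)])
print([b.randint(0, 99) for _ in range(5)])  # identical to the line above
```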

Other schemes rely on measuring physical properties of the system, such as the temperature at certain points inside the CPU, or the memory layout of the system, which differs depending on which applications are open and what they are doing. These things are hard to predict and are likely to give different readings each time they are measured.
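
CPU temperature and memory layout aren't directly readable from ordinary user code, so as a crude stand-in, here is the same idea using timing jitter, which wobbles with scheduling, cache state, and other machine activity:

```python
import time

# Crude sketch: the lowest bit of how long a small task takes jitters with
# hard-to-predict machine activity. (Real systems harvest many such
# measurements, far more carefully than this.)
bits = []
for _ in range(64):
    start = time.perf_counter_ns()
    sum(range(1000))                                   # small, variable-duration task
    bits.append((time.perf_counter_ns() - start) & 1)  # keep only the lowest bit

print("".join(str(b) for b in bits))  # 64 noisy bits
```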

A step further are quantum random number generators (QRNGs), which exploit quantum effects to generate randomness. A typical design combines a small photon source (a weak laser beam), a half-silvered or polarizing mirror, and a photon detector: whether each photon is transmitted or reflected is decided at the quantum level and is completely unpredictable.
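
Real QRNG hardware obviously can't run in software, but a toy model of the beam-splitter design looks like this (the software coin flip below merely stands in for the quantum event):

```python
import random

# Toy model of a beam-splitter QRNG. In real hardware, each bit comes from
# an independent quantum event (photon transmitted vs. reflected); here a
# software coin flip stands in for that event.
def simulated_qrng_bits(n_photons: int) -> str:
    return "".join(random.choice("01") for _ in range(n_photons))

print(simulated_qrng_bits(32))  # e.g. '01101001...' (32 raw bits)
```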

But there’s a rub: a real problem that forms a single point of failure in cryptography.

It is generally agreed that each of these sources of randomness has better moments and worse moments. For instance, right after a machine boots, the CPU temperature is more guessable, and so is the memory layout. And photonic random generators can have manufacturing defects, some of which only become apparent over time and with heat cycling, that make their output more predictable.

Unfortunately, these random sources are generally passed through a hashing algorithm of some kind, which gives the appearance of complete randomness (look at an MD5 sum, for instance: it looks completely random). Any guessability that existed is completely masked by the hashing. Put another way, there is often no realistic way to judge whether the key the system generates for you is truly random or guessable, or whether there are only a few million possible keys instead of trillions of billions (keep in mind that testing a few million keys is trivial for most modern computers).
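
The masking effect is easy to demonstrate (a small sketch; the five-value "key space" is invented for illustration):

```python
import hashlib

# Five digests derived from a key space of only five possible inputs.
# Each output looks perfectly random, yet an attacker who suspects the
# input space is tiny can enumerate all of it in microseconds.
for seed in range(5):
    print(hashlib.md5(str(seed).encode()).hexdigest())
```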

We can conclude with the following observations:

  • Computer-based random number generators (whether physical or quantum) may be random most of the time, but they go through periods when their entropy is low.
  • It is nearly impossible to determine exactly how random a given random number is, or more specifically how much entropy is present in the underlying random source at a given moment.
  • Both problems can be mitigated by taking a massive sample, but that is not practical on day-to-day computing devices (a sketch of why follows below).
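
A rough sketch of why sample size is the bottleneck: a plug-in Shannon-entropy estimate of a byte source (the biased source below is invented for illustration) only approaches the true value as the sample becomes enormous.

```python
import math
import random
from collections import Counter

def estimated_entropy_bits(samples):
    """Plug-in Shannon entropy estimate (bits per symbol) from observed frequencies."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def biased_byte():
    # An invented, slightly biased source: true entropy is about 7.6 bits/byte.
    return 0 if random.random() < 0.1 else random.randrange(256)

for n in (100, 10_000, 1_000_000):
    est = estimated_entropy_bits([biased_byte() for _ in range(n)])
    print(f"{n:>9} samples -> {est:.3f} bits/byte estimated")
# Small samples badly underestimate the entropy; only enormous samples get
# close to the truth, which is impractical at key-generation time.
```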

How, then, does one remove low entropy as a single point of failure in cryptography?

The answer is diversification. Any computing system has several redundant sources of randomness. Some are good at times and some are worse at times, but by combining sources (specifically, by concatenating their outputs), the risk of falling prey to a single bad moment shrinks considerably.
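
A minimal sketch of that combination (the particular sources below are illustrative, not Quantum Xchange's actual mix):

```python
import hashlib
import os
import random
import time

# Concatenate the outputs of several independent sources, then hash the
# combined pool down to a fixed-size key. Even if one source is having a
# "bad moment", the others still contribute unpredictability.
pool = b"".join([
    os.urandom(32),                               # the OS entropy pool
    time.perf_counter_ns().to_bytes(8, "big"),    # a timing measurement
    random.getrandbits(256).to_bytes(32, "big"),  # an algorithmic PRNG
])

key = hashlib.sha256(pool).digest()  # 256-bit key from the concatenated pool
print(key.hex())
```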

At Quantum Xchange we combine at least three different sources of entropy, each evaluated for its randomness before its output is passed to the hashes, so that the resulting cryptographic keys can be guaranteed to be truly unguessable. This approach combines physical techniques with QRNG and even algorithmic randomness to produce the cleanest, most uniform random distribution. It is as good as random gets.

Don’t miss our other posts in the Single Points of Failure series examining the shortcomings of asymmetric encryption and public handshake & key derivation.
