אליהו א. סולטניק, Ph.D.

Evan's First Name @ Sultanik .com

Computer Security Researcher
Trail of Bits

Recent Content:

Lenticrypt: a Provably Plausibly Deniable Cryptosystem

or, This Picture of Cats is Also a Picture of Dogs

Back in 2009, I wrote about a thought experiment on how to subvert copyright law via plausible deniability. A couple years ago I expanded on that thought experiment by proposing a seedling idea on how to accomplish it via cryptography. Since then, I’ve slowly been developing that seedling into a functioning proof-of-concept, which has culminated in the creation of the Lenticrypt project:

Lenticrypt can generate a single ciphertext file such that different plaintexts are generated depending on which key is used for decryption:

$ python lenticrypt.py -e key1 plaintext1 -e key2 plaintext2 -o output.enc

$ python lenticrypt.py -d key1 output.enc | diff - plaintext1 -s
Files - and plaintext1 are identical

$ python lenticrypt.py -d key2 output.enc | diff - plaintext2 -s
Files - and plaintext2 are identical

Unlike alternative plausibly deniable cryptosystems like the recently discontinued TrueCrypt—whose ciphertext size grows in proportion to the number of plaintexts (i.e., hidden volumes) it encrypts—Lenticrypt’s ciphertext size is proportional to the largest plaintext it encrypts. This is because Lenticrypt shares bytes in the ciphertext among all of the plaintexts it encrypts; they are not stored in separate regions of the ciphertext. Therefore, there is no straightforward way to estimate the number of plaintexts that are “hidden” inside a single ciphertext.
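To make the byte-sharing idea concrete, here is a toy sketch (this is not Lenticrypt’s actual algorithm or file format, just an illustration; the hash-based keybyte function is an assumption of the sketch). Each ciphertext element is an index whose derived byte simultaneously matches the next byte of every plaintext, so the ciphertext length tracks the longest plaintext rather than the number of plaintexts:

import hashlib

def keybyte(key, counter):
    # One pseudorandom byte per (key, counter) pair; a real system would use
    # a proper stream cipher rather than a bare hash.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()[0]

def encrypt(pairs):
    # pairs: list of (key, plaintext) tuples; shorter plaintexts are padded
    # with zero bytes in this toy version.
    length = max(len(p) for _, p in pairs)
    ciphertext = []
    for i in range(length):
        counter = 0
        while not all(keybyte(k, counter) == (p[i] if i < len(p) else 0)
                      for k, p in pairs):
            counter += 1
        ciphertext.append(counter)
    return ciphertext

def decrypt(key, ciphertext):
    return bytes(keybyte(key, c) for c in ciphertext)

c = encrypt([(b"key1", b"cat"), (b"key2", b"dog")])
assert decrypt(b"key1", c) == b"cat"
assert decrypt(b"key2", c) == b"dog"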

In fact, Lenticrypt has the theoretical property that, under reasonable assumptions, there is always a near-100% probability that there exists a key in the public domain that will decrypt a given ciphertext to any desired plaintext, even if that key is not known. Therefore, even if an incriminating plaintext is revealed, the author of the ciphertext can plausibly deny having created it, because there is a non-zero probability that the plaintext was legitimately decrypted by random chance.

More technical details on the cryptosystem as well as additional use-cases are described in Issue 0x04 of The International Journal of PoC||GTFO.

Note that Issue 0x04 of PoC||GTFO is a polyglot: among other things, you can also treat the .pdf file as if it were a .zip. If you extract it, there are some neat Lenticrypt-related Easter eggs inside the feelies.

PoC‖GTFO Issue 0x04

TRACT de la SOCIÉTÉ SECRÈTE de POC ‖ GTFO sur L'ÉVANGILE DES MACHINES ÉTRANGES et autres SUJETS TECHNIQUES par le prédicateur PASTEUR MANUL LAPHROAIG


PoC‖GTFO Issue 0x03

AN ADDRESS to the SECRET SOCIETY of POC ‖ GTFO concerning THE GOSPEL OF THE WEIRD MACHINES and also THE SMASHING OF IDOLS TO BITS AND BYTES by the Rt. Revd. Dr. PASTOR MANUL LAPHROAIG


Random Solutions are Often Good Enough

Sometimes taking the easy way out isn't nearly as bad as it might seem!

Hard Problems

Computer Scientists and Software Engineers deal with computationally hard problems all the time. But those challenges don’t apply solely to the people creating the software; they are equally relevant to the people analyzing it. For cybersecurity researchers, finding a hash collision… determining the user inputs that can assign a certain value to a tainted EIP… deciding whether a black-box binary is malicious… they’re all really hard! As our work becomes more and more automated with the help of program analysis techniques, these computationally hard problems become more and more prevalent. But what makes one problem harder than another? In the remainder of this post I’ll answer that question and introduce some surprising new results from my research that demonstrate how good solutions to inherently very hard problems can be generated quickly. Read on for more.

In order to get there, we’re first going to have to review the concepts of computational complexity and approximation algorithms. If you are already familiar with the former, you can skip to the third section. If you are already familiar with both, you can skip to the fourth section.

It’s clear that certain computational tasks are harder than others. For example, finding the smallest value in an $n$-element array can be completed in about $n$ operations (i.e., simply iterate over the array once, keeping track of the smallest). Sorting that array, on the other hand, will always take on the order of $n \log(n)$ comparisons in the worst case (for an explanation of why that is, here is a great article). Sure enough, if we look at the common sorting algorithms (quicksort, heapsort, mergesort, bubblesort, etc.), it’s always pretty easy to construct $n$-element arrays that will require those algorithms to execute at least $n \log(n)$ operations. In fact, when a naïve quicksort is presented with an array that is already nearly in sorted order, it will require about $n^2$ operations, which is much slower. Therefore, we can say that the problem of sorting is inherently harder than finding the minimum (smallest element) of an array.

There are in fact many different classes of problems that have known “hardness.” Some classes are easier to solve than others. Certain classes also have other interesting properties; for example, a group of problems that are called “P-complete” are all very likely to be difficult to parallelize effectively. The common compression algorithm LZW (used in GIF) solves one such P-complete problem, so I’m sorry to report that it’s very unlikely that you’re going to get a speedup by running your automated animated GIF meme generator on a GPU.

In the remainder of this post we’re going to be focusing on what are called NP-hard problems. These are a class of problems for which it is very likely that there is no way to solve them efficiently. The interesting thing is that if someone were to find a single efficient way to solve one of them, then we’d immediately have a solution for all of them.

Really Hard Problems

One of the most famous examples of an NP-hard problem is the traveling salesman problem:

What is the shortest possible route to visit each city in a map, without visiting any city more than once, and returning to the originating city at the end?

Variants of this problem crop up in the real world all the time, particularly in logistical networks (e.g., how companies like UPS and FedEx decide routes for their delivery vehicles and locations for their distribution centers). Another famous NP-hard problem is the Steiner network problem:

What is the shortest set of roads that connects a given set of cities to each other?

That problem is very important in efficient networking protocol design. The games Sudoku, Minesweeper, and Battleship, as well as many others, are all also NP-hard. And, as you may have guessed, many of the cybersecurity tasks and program analysis problems we encounter on a daily basis are also NP-hard. That’s why things like static taint analysis, constraint solving, and concolic execution that many people use in vulnerability research are so slow.

The important part here is that there is no known polynomial time algorithm that solves any NP-hard problem. I’ll get to what “polynomial time” means in a minute; for now you can just assume that it means “solvable in a reasonable amount of computation time.” Discovering a polynomial time algorithm that solves an NP-hard problem or, conversely, proving that no such algorithm can exist, is one of the most important open problems in theoretical computer science. If you can devise such an algorithm, xor prove that none can exist, then you’ll win the million dollar Millennium Prize. More importantly: if such an algorithm does exist, then it has very many significant consequences. For example, such an algorithm would break the assumptions underlying many cryptographic schemes, meaning you could efficiently break most cryptography. Most computer scientists and mathematicians believe that it is impossible to create a polynomial time algorithm that solves an NP-hard problem, but so far no one has been able to prove it.

So what does “polynomial time” mean, and why do we want to achieve it? Computer scientists classify the computational complexity of an algorithm based on the order of magnitude of the number of operations the algorithm would have to perform in the worst case. Let’s say we create an algorithm to find the optimal solution to the traveling salesman problem simply by brute-force enumerating all possible routes through the map. In the worst case, the input map is going to have a road between each pair of cities. That means our algorithm, in the worst case, is going to need to test every possible route (these are the Hamiltonian cycles of the map), of which there are $\frac{(n - 1)!}{2}$, where $n$ is the total number of cities. That means our algorithm does not run in worst-case polynomial time, because $\frac{(n - 1)!}{2}$ is not a polynomial. For the algorithm to run in polynomial time, its worst-case runtime would have to grow asymptotically like $n^c$ for some constant $c$.

I’ll get to why such a polynomial expression is so desirable in a minute, but first let’s look at an example of why slower-than-polynomial-time algorithms, like our traveling salesman one, are very bad. As we can see, our algorithm quickly becomes very slow as the number of cities, $n$, increases:

$n$ (number of cities)    Number of Paths to Test
3     $\frac{1}{2}(3 - 1)! = 1$
4     $\frac{1}{2}(4 - 1)! = 3$
5     $\frac{1}{2}(5 - 1)! = 12$
6     $\frac{1}{2}(6 - 1)! = 60$
7     $\frac{1}{2}(7 - 1)! = 360$
8     $\frac{1}{2}(8 - 1)! = 2520$
9     $\frac{1}{2}(9 - 1)! = 20160$
10    $\frac{1}{2}(10 - 1)! = 181440$
11    $\frac{1}{2}(11 - 1)! = 1814400$
12    $\frac{1}{2}(12 - 1)! = 19958400$
13    $\frac{1}{2}(13 - 1)! = 239500800$
14    $\frac{1}{2}(14 - 1)! = 3113510400$
15    $\frac{1}{2}(15 - 1)! = 43589145600$
16    $\frac{1}{2}(16 - 1)! = 653837184000$
17    $\frac{1}{2}(17 - 1)! = 10461394944000$
18    $\frac{1}{2}(18 - 1)! = 177843714048000$
19    $\frac{1}{2}(19 - 1)! = 3201186852864000$
20    $\frac{1}{2}(20 - 1)! = 60822550204416000$
30    $\frac{1}{2}(30 - 1)! = 4420880996869850977271808000000$
61    $\frac{1}{2}(61 - 1)! = 4160493556370695072138170591611682190377086303180622976224638848204800000000000000$

So, with as few as 61 cities on the map, our algorithm would have to consider over four sexvigintillion possible paths! That’s about 42 times the number of atoms in the visible universe! With only nine cities, It’s Over 9000!!!1 And that’s not even taking into account the number of operations required to enumerate each of those paths, which likely increases the overall figure by at least a factor of $n$.
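If you want to check these numbers yourself, a couple of lines of Python reproduce the table (the count of distinct routes, i.e., Hamiltonian cycles, through $n$ fully connected cities is $\frac{(n-1)!}{2}$):

import math

def num_routes(n):
    # Number of distinct routes (Hamiltonian cycles) through n fully
    # connected cities, ignoring direction and starting point.
    return math.factorial(n - 1) // 2

for n in (3, 10, 20, 30, 61):
    print(n, num_routes(n))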

But what makes polynomial time algorithms tractable while asymptotically slower algorithms are not? Why does that phase transition occur specifically at the point when the worst-case number of operations grows polynomially? It all has to do with Moore’s law, which states that processor speed doubles every two years or so. (More accurately, Moore’s law speaks of the number of components per integrated circuit.) Now, let’s say we have an algorithm for solving the traveling salesman problem that can quickly solve problem instances of up to $k$ cities on current hardware. How many years will we have to wait before hardware is fast enough to solve a problem of size $k + 1$? If our algorithm runs in roughly $n^c$ operations (i.e., it runs in polynomial time), then, by Moore’s law, we will only have to wait about $2c\log_2\left(1 + k^{-1}\right)$ years before we can solve a problem of size $k+1$. For most values of $k$ and $c$, that is a very short time! If our algorithm’s runtime were instead exponential (e.g., $c^n$ operations), then every additional city would cost a fixed $2\log_2(c)$ years no matter how fast hardware gets, and the wait to double the size of the problems we can solve would grow linearly in $k$.
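As a back-of-the-envelope check of that argument (assuming, as above, that speed doubles every two years), the wait times work out as follows:

import math

def years_to_wait_polynomial(k, c):
    # n^c algorithm: going from instances of size k to size k + 1 requires
    # hardware to get (1 + 1/k)^c times faster.
    return 2 * c * math.log2(1 + 1 / k)

def years_to_wait_exponential(c):
    # c^n algorithm: each additional city requires hardware to get c times
    # faster, no matter how large k already is.
    return 2 * math.log2(c)

print(years_to_wait_polynomial(100, 3))  # about a month (~0.086 years)
print(years_to_wait_exponential(2))      # 2 years per extra city, forever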

If we’re stuck solving a problem that we know is NP-hard—which means that there is no known polynomial time algorithm, and there will likely never be one—what are we to do? Luckily, there is a subfield of computer science known as approximation algorithms whose goal is to find algorithms that can quickly find solutions that are not necessarily optimal, but are within some known bound of optimal. That’s what we are going to talk about in the next section.

The “Bingo Problem”: Approximate Solutions to Really Hard Problems

We’re all likely familiar with the game of BINGO: balls labeled with letters and numbers are randomly drawn until one of any given subset of balls (as specified by the BINGO cards) is collected.

Let’s imagine a variant in which all letters of the alphabet exist on the balls and each ball is assigned an arbitrary floating point value.

The object of this new type of BINGO game is: given $n$ balls chosen randomly from the infinite set of all possible letter/number combinations, find some subset of the balls that spells out an English word. The goal is to maximize the sum of the values of the balls used for that word. It’s a bit like Scrabble. If the word length is unbounded, this game is in fact NP-hard (it is a variant of the subset sum problem). As an example, say the following thirteen balls are randomly chosen:

Ball:   P    R     N     O    O     P     A     M     I     T     A     I     X
Value:  1    48.7  79.8  26   83.8  23.6  22.7  92.1  34.2  36.7  64.5  84.5  49.1

One solution would be to choose the word “MOTION” with a value of 402.9:


M (92.1) + O (83.8) + T (36.7) + I (84.5) + O (26) + N (79.8) = 402.9

That isn’t the best we can do, though. A better solution would be to choose the word “APPROXIMATION”, which has a value of 646.7:


A (22.7) + P (1) + P (23.6) + R (48.7) + O (26) + X (49.1) + I (34.2) + M (92.1) + A (64.5) + T (36.7) + I (84.5) + O (83.8) + N (79.8) = 646.7

This is in fact the optimal solution, since it uses all of the possible letters. If we had originally chosen MOTION, then, in approximation algorithms parlance, we say that the “constant of approximation” is 646.7 ÷ 402.9 = 1.6051. In other words, MOTION is 1.6051 times “worse” than optimal.

In the case of the traveling salesman problem (on maps where distances obey the triangle inequality), there are well known polynomial time approximation algorithms that can return a result with a 1.5 constant of approximation. In other words, these algorithms can return a result that is guaranteed to be no worse than 1.5 times the value of the optimal result, even though we don’t actually know the value of the optimal solution! For most NP-hard problems, any constant of approximation that is 2 or lower is considered good.

Random Solutions to Really Hard Problems

I like to ask stupid questions. So I asked myself: “What if our algorithm just chooses a solution randomly?” It’s usually pretty easy to write a fast, dumb algorithm to return random solutions. In the Bingo problem, it simply amounts to ignoring the ball values and returning a random word whose letters match the balls. That can certainly be implemented in polynomial time if we have an indexed dictionary of words. For problems like minimum Steiner network, we can just randomly generate any spanning tree, which will always be a feasible (valid) solution, and which we can compute in linear time. On average, how bad are those random solutions likely to be?
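Here is roughly what such a dumb algorithm might look like for the Bingo game above (a sketch only; the dictionary path and word list are assumptions):

import random
from collections import Counter

def spellable(word, balls):
    # A word is feasible if every letter it needs is available on the balls.
    return not (Counter(word) - balls)

def random_feasible_word(balls, dictionary):
    candidates = [w for w in dictionary if spellable(w, balls)]
    return random.choice(candidates) if candidates else None

balls = Counter("prnoopamitaix")  # the thirteen example balls, values ignored
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if w.strip().isalpha()]
print(random_feasible_word(balls, words))  # e.g., "motion", "pair", or "tax"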

First, let’s sort the balls in order of increasing value:

P
1
A
22.7
P
23.6
O
26
I
34.2
T
36.7
R
48.7
X
49.1
A
64.5
N
79.8
O
83.8
I
84.5
M
92.1


If we know that the shortest word in our dictionary is $\ell$ letters long and the longest word in our dictionary is $m$ letters long, then the expected value of the constant of approximation of a randomly selected word is at most the expected value of the sum of the $m$ highest-value balls divided by the expected value of the sum of the $\ell$ lowest-value balls. This glosses over a lot of details (e.g., things get a lot more complex if the values can be negative), and working it out requires a bit of advanced math, but luckily we’ve already done so. You can read about it in our recently published paper here.
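Written out, with $v_{(1)} \le v_{(2)} \le \cdots \le v_{(n)}$ denoting the ball values in sorted order and $\rho$ the constant of approximation of a randomly selected word, the bound sketched above is:

$$\mathbb{E}[\rho] \;\le\; \frac{\mathbb{E}\left[\sum_{i=n-m+1}^{n} v_{(i)}\right]}{\mathbb{E}\left[\sum_{i=1}^{\ell} v_{(i)}\right]}$$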

If we distill the crufty mathematical incantations we needed to handle all of the edge cases for the formal proofs, our results can be summarized as follows:

Under very reasonable assumptions, the expected value for the constant of approximation of a randomly selected feasible solution is almost always going to be at most two!
That’s really surprising, especially considering that it applies to any NP-hard problem, if formulated correctly. So simply choosing a random solution is often effectively as good as running the best approximation algorithms currently known. In fact, in the paper linked above we present some empirical evidence suggesting that random solutions are often even closer to optimal than the ones produced by state-of-the-art approximation algorithms.

Conclusions

Certain computational problems are harder than others, and a class of them, called NP-hard, are very likely intractable to solve optimally. A great number of cybersecurity and vulnerability research tasks fall into that class. Polynomial time algorithms are desirable because if a problem instance is too large to be solved by current hardware, we don’t have to wait very long until hardware becomes fast enough to solve it using the polynomial time algorithm. Polynomial time approximation algorithms can be used on intractable NP-hard problems to quickly find solutions that are within a known distance from the optimal solution (even if we don’t actually know what the optimal solution is). For most NP-hard problems, a polynomial time approximation algorithm that achieves a solution no worse than two times the cost of the optimal solution is usually considered good. The surprising result of our new research is that randomly selected feasible solutions are, in expectation, often as good as the solutions produced by those complex approximation algorithms!

So the moral of the story is: Sometimes quickly and mindlessly choosing a random solution isn’t half bad!

Physical Security Followup

These Locks are Everywhere!

First of all, thanks for all of your positive feedback on our recent post on physical security. One comment we’ve received multiple times is that these types of locks, and the practice of using mnemonics for their codes, are primarily limited to government facilities. It turns out that’s not really the case. Like a muted post horn, now that these locks have piqued our curiosity we’re starting to see them everywhere we look. Check out this find we made today:

Kaba Lock on an ATM

Don’t bother checking, we scrubbed the Exif. Bank of America: Feel free to contact us directly if you’d like to know the location ;-)

Another thing we neglected to emphasize in the original post is that our time estimates are upper bounds based on the assumption that the lock imposes a four minute timeout after successive failed attempts. For locks that do not have such a timeout (e.g., basically all mechanical locks, and presumably the one on this ATM), you can divide the brute-force cracking time by at least a factor of four.

Exploiting Password Weaknesses in Physical Security

In which I channel the spirit of my eighth academic cousin thrice removed, Richard Feynman.

Digital spin locks like the Kaba Mas X-09 and X-10 are very common for high security applications like vault doors. US General Services Administration approval means that they are nearly ubiquitous in securing government filing cabinets that contain documents that are classified secret, as well as securing the doors to laboratories in which secret code is written and stored. Strolling through the hallways of any Department of Defense facility or government contractor’s office, one is likely to see many of these on the doors:

Kaba Mas X-09 Lock
Kaba Mas X-09

These types of locks are very sophisticated: they are self-powered by rotating the dial, and they have a digital readout on top that displays the code as it is being entered. Security features to thwart automated brute force attacks include random jumps in the numbers as the dial is being turned, algorithms for detecting “non-human” rotations of the dial, and long timeouts for successive failed attempts. These types of locks are usually configured to accept a six-digit code that is input as a sequence of three numbers from 0 to 99. That means there are $10^6 = 100^3 = 1{,}000{,}000$ possible combinations. With a four minute timeout, brute forcing all of the combinations would take almost eight years!

Humans are generally bad at remembering long sequences of numbers, and in most facilities the combinations to these locks are changed frequently. A single employee might have to memorize dozens of different combinations (one for each safe, door, etc.). Therefore, as a mnemonic, the security officers who set the codes often use six-letter words which are translated into codes using the mapping from a phone keypad. In fact, one will almost always see a magnet or sticker like this right next to the lock:



For example, the word “HACKER” would equate to the code 42-25-37.

1          2 abc      3 def
4 ghi      5 jkl      6 mno
7 pqrs     8 tuv      9 wxyz
           0


And therein lies a vulnerability. First of all, using a phone dial pad for the mapping from letters to numbers means that the numbers zero and one will never be used! This means the number of potential codes to test has been decreased from one million to $8^6 = 262{,}144$. That’s a decrease of almost 75%! But we can do even better.

Next, let’s assume that the code word is a six-letter English word that occurs in the dictionary. Looking at /usr/share/dict/words, there are only 17,706 six-letter English words that have direct mappings using a phone keypad. That’s much lower than the 262,144 possible codes without zero and one, which means that we have reduced the number of possible codes to brute force by over 98% from the original! For example, the code 55-57-59 doesn’t appear to correspond to any words in the dictionary. That’s likely because almost all words need at least one vowel, and the phone keys 5, 7, and 9 don’t provide any vowels. Furthermore, many of those 17,706 words actually map to the same code. For example, the words “DOSAGE”, “ENRAGE”, “FORAGE”, and “FORBID” all map to the code 36-72-43. If we just count the number of distinct codes generated by those six-letter English words, we get only 14,684, a reduction of almost 99%! With a four minute failure timeout, brute forcing those 14,684 codes would only take about forty days.
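The counting above is easy to reproduce (a sketch; exact counts will vary a bit depending on your system’s word list):

KEYPAD = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl", "6": "mno",
    "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

def keypad_code(word):
    # Map a word to its phone-keypad digits, e.g., "hacker" -> "422537".
    return "".join(KEYPAD[c] for c in word.lower())

with open("/usr/share/dict/words") as f:
    words = {w.strip().lower() for w in f}
six_letter = {w for w in words if len(w) == 6 and set(w) <= set(KEYPAD)}
codes = {keypad_code(w) for w in six_letter}
print(len(six_letter), "six-letter words map to", len(codes), "distinct codes")
print(keypad_code("hacker"))  # 422537, i.e., 42-25-37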

But wait, there’s more!

Not all letters occur in English words with the same frequency; you’re much more likely to find the letter ‘e’ in a randomly chosen word than the letter ‘x’. For example, almost 10% of the six-letter English words in the dictionary end in the code “37”, just like our example word “HACKER”. Therefore, if we cleverly order our brute force attempts to try the codes that correspond to the most words first, we can increase our chances of cracking the code sooner.

I sorted all of the 14,684 distinct codes in decreasing order of the number of words that map to each code. If the code word is chosen uniformly from the set of all six-letter English words, then this ordering maximizes our chance of cracking the code as early as possible. In fact, after fewer than 250 attempts, we will have cracked the code with about 5% probability. In other words, with only about half a day of access to the lock, a brute force attempt using this ordering has a 5% chance of success. In contrast, if the lock’s code were instead set using a randomly generated six-digit number, it would take about 50,000 attempts (~5 months) to reach a 5% chance of success. On the other hand, if the code was set from a six-letter English word using a phone keypad mapping, the lock can be cracked with 50% probability after about 16 days.

Codes are not usually changed more than once a month, so an adversary with intermittent access to the lock (e.g., after hours) could quite conceivably crack it before the code is changed.

If there exists an additional vulnerability that allows the attacker to detect whether each successive two-digit subcode is correct before entering the next two-digit subcode, then the expected value for the length of time required to crack the code is on the order of minutes.

So what can we do about it? The most obvious solution is to simply choose codes randomly from the set of all possible one million 6-digit numbers. If we really do want to have a mnemonic, though, we could do a lot better by using a letter-to-number mapping that makes use of zeros and ones, and also produces a relatively even distribution of codes from English words. Here is one such example of a mapping:


1 a        2 e        3 br
4 co       5 dnq      6 ip
7 lmjz     8 hkt      9 su
           0 fgvwxy

It’s not necessarily the best one, though. Finding the optimal mapping—the one that ensures a uniform distribution across all six-digit codes for six-letter English dictionary words—is actually an instance of the multiple knapsack problem (a variant known as the multiple subset sum problem) and is therefore NP-complete, and likely intractable even for problems as “small” as this one. Let $L = \{a, b, \ldots, z\}$ be the set of letters in the alphabet and let $f : L \rightarrow \mathbb{N}$ be a function mapping each letter to its frequency among six-letter words in the dictionary. Then we can relax the problem to minimizing the mean squared error of the summed frequencies of the letters mapped to each number: find a function $t : L \rightarrow \{0, \ldots, 9\}$ that maps letters to numbers such that $$\frac{1}{10}\sum_{i=0}^{9} \left(\sum_{\ell \in t^{-1}(i)} f(\ell) - \frac{\sum_{\ell \in L} f(\ell)}{10}\right)^2$$ is minimized. The total number of legal mappings, $t$, is $10^{26}$ (100 million billion billion). With that said, the example mapping above will be at least a million billion billion times more secure than using a phone keypad mapping!
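As a rough illustration of that relaxed objective (a sketch only; the word list path is an assumption, and the mapping scored is the example one proposed above), one could evaluate candidate mappings like this:

from collections import Counter

def letter_frequencies(words):
    # f(l): how often each letter occurs among the six-letter words.
    return Counter(c for w in words for c in w)

def mean_squared_error(mapping, freqs):
    # How far each digit's summed letter frequency is from a perfectly
    # uniform share of the total, averaged over the ten digits.
    total = sum(freqs.values())
    buckets = [0.0] * 10
    for letter, digit in mapping.items():
        buckets[digit] += freqs.get(letter, 0)
    return sum((b - total / 10) ** 2 for b in buckets) / 10

# The example mapping proposed above (digit -> letters).
EXAMPLE = {c: d for d, letters in enumerate(
    ["fgvwxy", "a", "e", "br", "co", "dnq", "ip", "lmjz", "hkt", "su"])
    for c in letters}

with open("/usr/share/dict/words") as f:
    six_letter = [w.strip().lower() for w in f
                  if len(w.strip()) == 6 and w.strip().isalpha()
                  and w.strip().lower().isascii()]
print(mean_squared_error(EXAMPLE, letter_frequencies(six_letter)))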

The moral of the story here isn’t that these locks are inherently vulnerable; if used properly, they are in fact incredibly secure. The takeaway is that these locks are only as secure as the codes humans choose to assign to them. Using a phone keypad mapping on six-letter English dictionary words is the physical security equivalent of a website arbitrarily limiting passwords to eight characters.

Unambiguous Encapsulation

Defending Against "Packet in Packet" Attacks

A couple of days ago, Dominic Spill and Michael Ossmann presented an interesting talk at Shmoocon on using specially crafted error correcting codes to achieve unambiguous encapsulation, preventing attacks like “Packet in Packet.” This appears to be the culmination of some work that was first proposed on the LANGSEC mailing list back in March. The underlying problem they are trying to address is that the physical layer is almost always error-prone: bits often get flipped. The data link layer addresses this by using error-correcting codes: methods of encoding the bits to be transmitted such that certain types of errors can be caught and corrected, at the expense of some additional bandwidth overhead.

Packet in Packet Attacks

If a sufficient number of bit-errors occur, however, error correcting codes can still fail to correct or even detect an error. Protocols usually address this by adding a checksum to the packet header/preamble. To illustrate, let’s consider a made-up link layer protocol in which there is a header that starts with the magic number 0x1337, followed by a checksum and the message body. If a detectable but uncorrectable error occurs, the packet can be dropped. If an undetected bit-error occurs in the body, the checksum will fail and the packet will be dropped. If an undetected error occurs in the header, then either the magic number will be corrupted and the frame will not be recognized, or the checksum will fail; in both cases the packet will be dropped. The one exception to this latter case is the basis for Packet in Packet attacks: if we start the message body with 0x1337 followed by a checksum and our desired payload, then a receiver that misses the corrupted real header will think the frame actually starts at the beginning of our hand-crafted body! In effect, we have embedded a complete packet inside the body of another. It turns out that if we transmit this carefully crafted packet-in-a-packet enough times, then eventually (in what turns out to be a reasonable amount of time) a bit-error will occur that causes our payload to be interpreted as the real link layer frame. Attacks like this have been demonstrated on real data link layer protocols like IEEE 802.15.4.

Isolated Complementary Binary Linear Block Codes

Spill and Ossmann propose using a special error correcting code that has a property they call “isolation”: the code can be split into multiple subcodes that have additional error-correcting properties between them. One of the subcodes can be used strictly for the bits of the header/preamble, while another is used strictly for the body. This provides additional error correcting capability specifically for cases like the one the attack above exploits, without increasing the bandwidth overhead of the code. They call this family of codes “Isolated Complementary Binary Linear Block Codes” (ICBLBC).

As an example, consider the following eight code words:

00000  00011  00101  01001
01110  10110  11010  11100

Note that each code word differs from every other code word by at least two bits. This means that we can detect single-bit errors. For example, if 01110 is corrupted in the physical layer to 01010, then we can immediately know that an error occurred, because that code does not exist in our set of eight. If two or more bit-errors occur, then an error may go undetected, which is the case that Packet in Packet attacks exploit. This is exactly like the types of error correcting codes used in most data link layer protocols. The difference, however, is that this particular set of eight code words has an additional property: it is an isolated complementary binary linear block code. Let’s call the first row of codes “Subcode 1” and the second row of codes “Subcode 2.” Now, note that every code in Subcode 1 differs from every code in Subcode 2 by at least three bits. That means that we can not only detect but also correct single-bit errors between the two subcodes, giving them the properties of a Hamming Code. If we encode the header/preamble of the packet using one subcode and the body using the other, then we can circumvent Packet in Packet attacks with little to no extra bandwidth overhead!
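The claimed distances are easy to check mechanically (a small sanity-check script for the eight code words above):

from itertools import combinations, product

SUBCODE_1 = ["00000", "00011", "00101", "01001"]
SUBCODE_2 = ["01110", "10110", "11010", "11100"]

def hamming(a, b):
    # Number of bit positions in which the two code words differ.
    return sum(x != y for x, y in zip(a, b))

# Within each subcode, every pair differs by at least two bits...
assert all(hamming(a, b) >= 2
           for sub in (SUBCODE_1, SUBCODE_2)
           for a, b in combinations(sub, 2))
# ...and across the two subcodes, every pair differs by at least three bits.
assert all(hamming(a, b) >= 3
           for a, b in product(SUBCODE_1, SUBCODE_2))
print("isolation property holds")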

Generating the Codes

Spill and Ossmann have provided some code that can enumerate these codes. So far, they have reported the discovery of 6-bit ICBLBCs with up to twenty-two codes in subcode 1 and two codes in subcode 2, or up to ten codes in each if they are divided evenly. While their implementation does work, it is limited by the fact that it essentially performs a brute-force search, which has led them to investigate a hardware-based approach using FPGAs.

Out of curiosity, I wondered whether this problem could be solved more efficiently using constraint reasoning. Therefore, we implemented a generator that encodes the ICBLBC constraints as a Satisfiability Modulo Theories problem and used DOCO (Digital Operatives’ proprietary constraint optimizer) to enumerate all feasible solution codes. Sure enough, our solver was able to almost instantaneously reproduce Spill and Ossmann’s results, as well as enumerate codes of larger bit length that would likely be intractable to discover using brute force search. For example, in less than a second it was able to discover this 7-bit ICBLBC consisting of fifty codes in subcode 1 and two codes in subcode 2:

Subcode 1: 37, 118, 44, 21, 35, 28, 107, 61, 41, 25, 94, 13, 56, 84, 124, 31, 110, 117, 50, 76, 62, 49, 47, 81, 7, 19, 109, 87, 103, 22, 93, 82, 70, 26, 127, 115, 55, 52, 59, 73, 11, 91, 88, 67, 69, 42, 121, 14, 38, 122
Subcode 2: 0, 96

as well as this one with nineteen codes in each subcode:

Subcode 1: 23, 89, 17, 27, 9, 65, 51, 5, 83, 85, 0, 18, 80, 54, 36, 33, 113, 3, 116
Subcode 2: 76, 122, 61, 127, 103, 14, 107, 104, 79, 56, 109, 98, 28, 70, 94, 47, 110, 74, 42

This 8-bit ICBLBC was discovered in a matter of seconds and consists of eighty codes in one subcode and two in the other:

Subcode 1: 1, 4, 7, 8, 13, 16, 19, 25, 30, 32, 35, 37, 41, 42, 44, 47, 49, 54, 56, 59, 61, 69, 76, 81, 84, 87, 88, 97, 100, 104, 109, 110, 112, 121, 122, 124, 127, 128, 131, 133, 137, 142, 145, 146, 148, 152, 155, 161, 162, 164, 168, 171, 173, 176, 179, 181, 185, 190, 193, 196, 199, 200, 205, 208, 211, 217, 218, 220, 223, 224, 227, 229, 233, 234, 236, 239, 241, 246, 248, 253
Subcode 2: 66, 75

I have reimplemented this approach using Microsoft’s “free” (as in beer) Z3 constraint solver and pushed it to a fork of Spill and Ossmann’s project, available here. As always, feedback on this new approach is greatly appreciated.
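For the curious, here is a minimal sketch of how such a search can be phrased for Z3’s Python bindings (this is not the DOCO encoding or the code in the fork, just an illustration; the bit width and subcode sizes are parameters you would vary):

from z3 import BitVec, BitVecVal, Extract, ZeroExt, Solver, UGE, sat

BITS = 5            # code word width
N1, N2 = 4, 4       # desired sizes of subcode 1 and subcode 2

def popcount(x):
    # Sum the individual bits of a BITS-wide bit-vector.
    bits = [ZeroExt(3, Extract(i, i, x)) for i in range(BITS)]
    total = bits[0]
    for b in bits[1:]:
        total = total + b
    return total

sub1 = [BitVec("a_%d" % i, BITS) for i in range(N1)]
sub2 = [BitVec("b_%d" % i, BITS) for i in range(N2)]

s = Solver()
# Within each subcode, every pair of code words must differ in >= 2 bits.
for group in (sub1, sub2):
    for i in range(len(group)):
        for j in range(i + 1, len(group)):
            s.add(UGE(popcount(group[i] ^ group[j]), BitVecVal(2, 4)))
# Across the subcodes, every pair must differ in >= 3 bits ("isolation").
for a in sub1:
    for b in sub2:
        s.add(UGE(popcount(a ^ b), BitVecVal(3, 4)))

if s.check() == sat:
    m = s.model()
    print("Subcode 1:", sorted(m[v].as_long() for v in sub1))
    print("Subcode 2:", sorted(m[v].as_long() for v in sub2))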

PoC‖GTFO Issue 0x02

Children's Bible Coloring Book of PoC ‖ GTFO Issue 0x02, an Epistle to the 30th CCC Congress in Hamburg Composed by the Rt. Revd. Pastor Manul Laphroaig to put pwnage before politics.


PoC‖GTFO Issue 0x01

Proceedings of the Society of PoC ‖ GTFO Issue 0x01, an Epistle to the 10th H2HC in São Paulo From the writing desk, not the raven, of Rt. Revd. Preacherman Pastor Manul Laphroaig


Defending Your E-Mails from Surveillance … Conveniently

via Magiic!

With the recent and ongoing disclosures of what appear to be widespread Internet surveillance programs, the public is becoming increasingly aware of the privacy risks in sending plaintext E-mail. Even connecting to one’s E-mail service provider using a cryptographically secure protocol like HTTPS provides a false sense of security, because one cannot ensure the trust or privacy of any intermediary servers/connections used to route the message to its recipient. As such, there are many excellent tutorials—and even entire web campaigns—that empower average users to protect their online communications via free tools like OpenPGP.

I have been personally encrypting my E-mail for well over a decade, and, since day one, Digital Operatives has employed strong cryptography to protect all of its internal E-mail communications. This works extremely well, and, for all intents and purposes, is currently very secure. There are some downsides, however. The number one complaint about using public key cryptography to secure all E-mail communications is that there really isn’t a good way to search through the bodies of the E-mails in your inbox (since the message bodies are encrypted, a simple search for a term like “cat” or “meeting” won’t match any of the E-mails it otherwise should have). In fact, the second bug ever reported for the popular EnigMail GPG plugin for the Thunderbird mail client was a feature request asking for the ability to search through encrypted E-mail bodies. That bug was opened in 2003 … and it is still open today.

The trouble is that the decryption step is too computationally expensive to decrypt all of the message bodies on the fly during a search. The alternative is to temporarily decrypt the bodies of new E-mails as they arrive and add them to a search index. The problem with that approach is that it invites a security vulnerability: sensitive message data would then be stored in the search index.

Given that over 90% of the E-mail in my inbox is encrypted, I decided to scratch my own itch and develop a solution to this problem. I took the second approach mentioned above: I incrementally build a search index over the decrypted message bodies. To mitigate the security risk mentioned above, I encrypt the entire search index using the same key used for one’s E-mails. Therefore, the only risk would be if an adversary got access to one’s private key, but that would have even worse security implications, since he or she could then read all of the original E-mails anyway.
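The core of that approach fits in a few lines of Python (a simplified sketch of the idea, not Magiic’s actual code; it assumes the python-gnupg and Whoosh packages, skips multipart PGP/MIME handling, and leaves out the step of encrypting the index itself):

import email, imaplib, os
import gnupg
from whoosh.fields import ID, TEXT, Schema
from whoosh.index import create_in, open_dir

INDEX_DIR = "mail_index"   # in Magiic, this index is itself GPG-encrypted
schema = Schema(uid=ID(stored=True, unique=True), body=TEXT)

def open_index():
    if not os.path.exists(INDEX_DIR):
        os.mkdir(INDEX_DIR)
        return create_in(INDEX_DIR, schema)
    return open_dir(INDEX_DIR)

def index_inbox(host, user, password, passphrase):
    gpg = gnupg.GPG()
    ix = open_index()
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    imap.select("INBOX")
    _, data = imap.search(None, "ALL")
    writer = ix.writer()
    for uid in data[0].split():
        _, msg_data = imap.fetch(uid, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        payload = msg.get_payload(decode=True) or b""
        text = payload.decode("utf-8", "replace")
        if "BEGIN PGP MESSAGE" in text:
            # Decrypt the body so that its plaintext becomes searchable.
            text = str(gpg.decrypt(text, passphrase=passphrase))
        writer.update_document(uid=uid.decode(), body=text)
    writer.commit()
    imap.logout()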

My proof-of-concept solution is a tool called Magiic: Magiic Allows for GPG Indexing of IMAP on the Command-line. It is a Python script that uses GnuPG for encryption/decryption and Whoosh for full-text indexing. It acts as a standalone mail application, connecting directly to an IMAP server and creating a local index of the message contents. It has a simple ncurses interface, so all interaction can take place on the command line. Digital Operatives is releasing the code under a version of the Creative Commons BY-NC-SA 3.0 license that has been modified slightly to be more applicable to software licensing. It is free for non-commercial use. The code is available here.