Chosen Ciphertext Security
Short recap
Let's start by reviewing what we have learned so far:
We can mathematically define security for encryption schemes. A natural definition is perfect secrecy: no matter what Eve does, she can't learn anything about the plaintext that she didn't know before. Unfortunately this requires the key to be as long as the message, thus placing a severe limitation on its usability.
To get around this, we need to consider computational considerations. A basic object is a pseudorandom generator, and we considered the PRG Conjecture, which stipulates the existence of an efficiently computable function \(G:\{0,1\}^n \rightarrow \{0,1\}^{n+1}\) such that \(G(U_n) \approx U_{n+1}\) (where \(U_m\) denotes the uniform distribution on \(\{0,1\}^m\) and \(\approx\) denotes computational indistinguishability).1
We showed that the PRG Conjecture implies a pseudorandom generator of any polynomial output length, which in particular via the stream cipher construction implies a computationally secure encryption with plaintext arbitrarily larger than the key. (The only restriction is that the plaintext is of polynomial size, which is needed anyway if we want to actually be able to read and write it.)
We then showed that the PRG Conjecture actually implies a stronger object known as a pseudorandom function (PRF) collection: this is a collection \(\{f_s\}\) of functions such that if we choose \(s\) at random and fix it, and give an adversary a black box computing \(i \mapsto f_s(i)\), then she can't tell the difference between this and a black box computing a random function.2
Pseudorandom functions turn out to be useful for identification protocols, message authentication codes, and the strong notion of security of encryption known as chosen plaintext attack (CPA) security, where we are allowed to encrypt many messages of Eve's choice and still require that the next message hides all information except for what Eve already knew before.
Going beyond CPA
It may seem that we have finally nailed down the security definition for encryption. After all, what could be stronger than allowing Eve unfettered access to the encryption function? Clearly an encryption satisfying this property will hide the contents of the message in all practical circumstances. Or will it?
Feel free to press play on an ominous soundtrack at this point.
Example: The Wired Equivalent Privacy (WEP)
The Wired Equivalent Privacy (WEP) protocol is possibly one of the most inaccurately named protocols of all time. It was invented in 1999 for the purpose of securing WiFi networks so that they would have about the same level of security as wired networks, but already early on several security flaws were pointed out. In particular in 2001, Fluhrer, Mantin, and Shamir showed how the RC4 flaws we mentioned in a prior lecture can be used to completely break WEP in less than one minute. Yet the protocol lingered on, and for many years afterwards it was still the most widely used WiFi encryption protocol, as many routers had it as the default choice. In 2007 WEP was blamed for a hack stealing 45 million credit card numbers from T.J. Maxx. In 2012 (after 11 years of attacks!) it was estimated that it was still used in about a quarter of encrypted wireless networks, and in 2014 it was still the default on many Verizon home routers. It is still (!) used in some routers; see Figure 6.1. This is a great example of how hard it is to remove insecure protocols from practical use (and so how important it is to get these protocols right).
6.1: WEP usage over time according to Wigle.net. Despite having documented security issues since 2001 and being officially deprecated since 2004, WEP continued to be the most popular WiFi encryption protocol up to 2012, and at the time of writing, it is still used by a nontrivial number of devices (though see this stackoverflow answer for more).
Here we will talk about a different flaw of WEP that is in fact shared by many other protocols, including the first versions of the secure socket layer (SSL) protocol that is used to protect all encrypted web traffic.
To avoid superfluous details we will consider a highly abstract (and somewhat inaccurate) version of WEP that still demonstrates our main point. In this protocol Alice (the user) sends to Bob (the access point) an IP packet that she wants routed somewhere on the internet.
Thus we can think of the message Alice sends to Bob as a string \(m\in\{0,1\}^\ell\) of the form \(m = m_1 \| m_2\) where \(m_1\) is the IP address this packet needs to be routed to and \(m_2\) is the actual message that needs to be delivered. In the WEP protocol, the message that Alice sends to Bob has the form \(E_k(m \| \mathit{CRC}(m))\) (where \(\|\) denotes concatenation and \(\mathit{CRC}(m)\) is some cyclic redundancy check). A \(\mathit{CRC}\) is some function mapping \(\{0,1\}^n\) to \(\{0,1\}^t\) which is meant to enable detection of errors in typing or communication. The idea is that if a message \(m\) is mistyped into \(m'\), then it is very likely that \(\mathit{CRC}(m) \neq \mathit{CRC}(m')\). It is similar to the checksum digits used in credit card numbers and many other cases. Unlike a message authentication code, a CRC does not have a secret key and is not secure against adversarial perturbations.
The actual encryption WEP used was RC4, but for us it doesn't really matter. What does matter is that the encryption has the form \(E_k(m') = \mathit{pad} \oplus m'\) where \(\mathit{pad}\) is computed as some function of the key. In particular the attack we will describe works even if we use our stronger CPA secure PRF-based scheme where \(\mathit{pad} = f_k(r)\) for some random (or counter) \(r\) that is sent out separately.
Now the security of the encryption means that an adversary seeing the ciphertext \(c = E_k(m \| \mathit{CRC}(m))\) will not be able to know \(m\), but since this is traveling over the air, the adversary could "spoof" the signal and send a different ciphertext \(c'\) to Bob. In particular, if the adversary knows the IP address \(m_1\) that Alice was using (for example, the adversary can guess that Alice is probably one of the billions of people that visit the website boazbarak.org on a regular basis), then she can XOR the ciphertext with a string of her choosing, and hence convert the ciphertext \(c = \mathit{pad} \oplus (m_1 \| m_2 \| \mathit{CRC}(m_1, m_2))\) into the ciphertext \(c' = c \oplus x\) where \(x = x_1 \| x_2 \| x_3\) is computed so that \(x_1 \oplus m_1\) is equal to the adversary's own IP address!
So, the adversary doesn't need to decrypt the message: by spoofing the ciphertext she can ensure that Bob (who is an access point, and whose job is to decrypt and then deliver packets) simply delivers it unencrypted straight into her hands. One issue is that if Eve modifies \(m_1\) then it is unlikely that the CRC code will still check out, and hence Bob would reject the packet. However, CRC-32 (the CRC algorithm used by WEP) is linear modulo \(2\), that is, \(\mathit{CRC}(x \oplus x') = \mathit{CRC}(x) \oplus \mathit{CRC}(x')\). This means that if the original ciphertext \(c\) was an encryption of the message \(m = m_1 \| m_2 \| \mathit{CRC}(m_1, m_2)\) then \(c' = c \oplus (x_1 \| 0^t \| \mathit{CRC}(x_1 \| 0^t))\) will be an encryption of the message \(m' = (m_1 \oplus x_1) \| m_2 \| \mathit{CRC}((x_1 \oplus m_1) \| m_2)\) (where \(0^t\) denotes a string of zeroes of the same length \(t\) as \(m_2\), and hence \(m_2 \oplus 0^t = m_2\)). Therefore by XOR'ing \(c\) with \(x_1 \| 0^t \| \mathit{CRC}(x_1 \| 0^t)\), the adversary Mallory can ensure that Bob will deliver the message \(m_2\) to the IP address \(m_1 \oplus x_1\) of her choice (see Figure 6.2).
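To make the attack concrete, here is a short Python sketch of it (our own illustration, not WEP's actual packet format). One wrinkle: Python's `zlib.crc32` adds initial and final XOR constants, so it is affine rather than strictly linear over GF(2); for equal-length inputs the identity \(\mathit{CRC}(a \oplus b) = \mathit{CRC}(a) \oplus \mathit{CRC}(b) \oplus \mathit{CRC}(0^{|a|})\) holds, and the attack goes through with that one correction term:

```python
import os
import zlib

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def crc(m):
    return zlib.crc32(m).to_bytes(4, "big")

# Alice encrypts m = ip || payload under a stream cipher: c = pad XOR (m || CRC(m))
ip, payload = b"10.0.0.42       ", b"attack at dawn!!"
m = ip + payload
pad = os.urandom(len(m) + 4)          # stands in for f_k(r); Mallory never sees it
c = xor(pad, m + crc(m))

# Mallory redirects the packet without knowing pad. zlib's CRC-32 is affine
# over GF(2): crc(a ^ b) = crc(a) ^ crc(b) ^ crc(zeros), so she XORs in the
# correction term along with her chosen address difference x_1.
x1 = xor(ip, b"192.168.7.7     ")     # x_1 = m_1 XOR (Mallory's IP)
delta = x1 + bytes(len(payload))      # x_1 || 0^t
crc_delta = xor(crc(delta), crc(bytes(len(m))))
c_forged = xor(c, delta + crc_delta)

# Bob decrypts the forged ciphertext: the CRC passes, and the destination
# address is now Mallory's.
m_forged = xor(pad, c_forged)
assert m_forged[-4:] == crc(m_forged[:-4])      # CRC checks out
assert m_forged[:16] == b"192.168.7.7     "     # redirected to Mallory
assert m_forged[16:-4] == payload               # payload intact
```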
6.2: The attack on the WEP protocol allowing the adversary Mallory to read encrypted messages even when Alice uses a CPA secure encryption.
Chosen ciphertext security
This is not an isolated case but in fact an instance of a general pattern of many breaks in practical protocols. Some examples of protocols broken through similar means include XML encryption, IPsec (see also here) as well as JavaServer Faces, Ruby on Rails, ASP.NET, and the Steam gaming client (see the Wikipedia page on Padding Oracle Attacks).
The point is that often our adversaries can be active and modify the communication between sender and receiver, which in effect gives them access not just to choose plaintexts of their choice to encrypt but even to have some impact on the ciphertexts that are decrypted. This motivates the following notion of security (see also Figure 6.3):
An encryption scheme \((E,D)\) is chosen ciphertext attack (CCA) secure if every efficient adversary Mallory wins in the following game with probability at most \(1/2 + negl(n)\):
Mallory gets \(1^n\) where \(n\) is the length of the key.
For \(poly(n)\) rounds, Mallory gets access to the functions \(m \mapsto E_k(m)\) and \(c \mapsto D_k(c)\).
Mallory chooses a pair of messages \(\{m_0, m_1\}\), a secret \(b\) is chosen at random in \(\{0,1\}\), and Mallory gets \(c^* = E_k(m_b)\).
Mallory now gets another \(poly(n)\) rounds of access to the functions \(m \mapsto E_k(m)\) and \(c \mapsto D_k(c)\), except that she is not allowed to query \(c^*\) to her decryption oracle.
Mallory outputs \(b'\) and wins if \(b' = b\).
6.3: The CCA security game.
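To make the game concrete, here is a minimal Python sketch of it (all names and simplifications are ours, and HMAC-SHA256 stands in for the PRF \(f_k\)). It also shows a simple bit-flipping adversary that wins the game against the canonical PRF-based CPA-secure scheme \(E_k(m) = (r, f_k(r) \oplus m)\), foreshadowing why that scheme is not CCA secure:

```python
import hashlib
import hmac
import os
import secrets

def prf(k, r):
    # HMAC-SHA256 as a stand-in for the PRF f_k
    return hmac.new(k, r, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class CanonicalCPA:
    """The canonical scheme E_k(m) = (r, f_k(r) XOR m), for 16-byte messages."""
    def __init__(self):
        self.k = os.urandom(32)
    def enc(self, m):
        r = os.urandom(16)
        return (r, xor(prf(self.k, r)[:len(m)], m))
    def dec(self, c):
        r, body = c
        return xor(prf(self.k, r)[:len(body)], body)

def cca_game(scheme, adversary):
    m0, m1 = adversary.choose()
    b = secrets.randbits(1)
    c_star = scheme.enc((m0, m1)[b])
    # decryption oracle: answers anything except the challenge c* itself
    def dec_oracle(c):
        return None if c == c_star else scheme.dec(c)
    return adversary.guess(c_star, dec_oracle) == b

class BitFlipper:
    """Wins by flipping one bit of c*, decrypting, and flipping it back."""
    def choose(self):
        self.m0, self.m1 = b"attack at dawn!!", b"retreat at noon!"
        return self.m0, self.m1
    def guess(self, c_star, dec_oracle):
        r, body = c_star
        mask = b"\x01" + bytes(15)
        m = xor(dec_oracle((r, xor(body, mask))), mask)
        return 0 if m == self.m0 else 1

# The bit-flipper wins every round, far above the 1/2 + negl(n) bound.
assert all(cca_game(CanonicalCPA(), BitFlipper()) for _ in range(100))
```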
This might seem like quite a strange definition, so let's try to digest it slowly. Most people, once they understand what the definition says, don't like it that much. There are two natural objections to it:
 This definition seems to be too strong: There is no way we would let Mallory play with a decryption box – that basically amounts to letting her break the encryption scheme. Sure, she could have some impact on the ciphertexts that Bob decrypts and observe some resulting side effects, but there is a long way from that to giving her oracle access to the decryption algorithm.
The response to this is that it is very hard to model what "realistic" information Mallory might get about the ciphertexts she might cause Bob to decrypt. The goal of a security definition is not to capture precisely the attack scenarios that occur in real life but rather to be sufficiently conservative so that these real life attacks can be modeled in our game. Therefore, having a too strong definition is not a bad thing (as long as it can be achieved!). The WEP example shows that the definition does capture a practical issue in security, and similar attacks on practical protocols have been shown time and again (see for example the discussion of "padding attacks" in Section 3.7.2 of the Katz-Lindell book).
This definition seems to be too weak: What justification do we have for not allowing Mallory to make the query \(c^*\) to the decryption box? After all she is an adversary so she could do whatever she wants. The answer is that the definition would be clearly impossible to achieve if Mallory could simply get the decryption of \(c^*\) and learn whether it was an encryption of \(m_0\) or \(m_1\). So this restriction is the absolutely minimal one we could make without causing the notion to be obviously impossible. Perhaps surprisingly, it turns out that once we make this minimal restriction, we can in fact construct CCA-secure encryptions.
What does CCA have to do with WEP? The CCA security game is somewhat strange, and it might not be immediately clear whether it has anything to do with the attack we described on the WEP protocol. However, it turns out that using a CCA secure encryption would have prevented that attack. The key is the following claim:
Lemma 6.2: Suppose that \((E,D)\) is a CCA secure encryption. Then there is no efficient algorithm that, given an encryption \(c\) of the plaintext \((m_1, m_2)\), outputs a ciphertext \(c'\) that decrypts to \((m'_1, m_2)\) where \(m'_1 \neq m_1\).
In particular Lemma 6.2 rules out the attack of transforming \(c\) that encrypts a message starting with some address \(\mathit{IP}\) into a ciphertext that starts with a different address \(\mathit{IP}'\). Let us now sketch its proof.
We'll show that if we had an adversary \(M'\) that violates the conclusion of the claim, then there is an adversary \(M\) that can win in the CCA game.
The proof is simple and relies on the crucial fact that the CCA game allows \(M\) to query the decryption box on any ciphertext of her choice, as long as it's not exactly identical to the challenge ciphertext \(c^*\). In particular, if \(M'\) is able to morph an encryption \(c\) of \(m\) into some encryption \(c'\) of some different \(m'\) that agrees with \(m\) on some set of bits, then \(M\) can do the following: in the security game, use \(m_0\) to be some random message and \(m_1\) to be this plaintext \(m\). Then, when receiving \(c^*\), apply \(M'\) to it to obtain a ciphertext \(c'\) (note that if the plaintext differs then the ciphertext must differ too; can you see why?), ask the decryption box to decrypt it, and output \(1\) if the resulting message agrees with \(m\) in the corresponding set of bits (otherwise output a random bit). If \(M'\) was successful with probability \(\epsilon\), then \(M\) would win in the CCA game with probability at least \(1/2 + \epsilon/10\) or so.
The proof above is rather sketchy. However it is not very difficult, and proving Lemma 6.2 on your own is an excellent way to ensure familiarity with the definition of CCA security.
Constructing CCA secure encryption
The definition of CCA seems extremely strong, so perhaps it is not surprising that it is useful, but can we actually construct it? The WEP attack shows that the CPA secure encryption we saw before (i.e., \(E_k(m) = (r, f_k(r) \oplus m)\)) is not CCA secure. We will see other examples of non-CCA secure encryptions in the exercises. So, how do we construct such a scheme? The WEP attack actually already hints at the crux of CCA security. We want to ensure that Mallory is not able to modify the challenge ciphertext \(c^*\) into some related \(c'\). Another way to say this is that if we want to handle active adversaries that might modify messages on the channel, then we need to ensure the integrity of messages to achieve their confidentiality. Since in a great many practical scenarios an adversary might be able to do so, this is an important message that deserves to be repeated:
To ensure confidentiality, you need integrity.
This is a lesson that has been shown time and again, and many protocols have been broken due to the mistaken belief that if we only care about secrecy, it is enough to use only encryption (and one that is merely CPA secure) and there is no need for authentication. Matthew Green writes this more provocatively as
Nearly all of the symmetric encryption modes you learned about in school, textbooks, and Wikipedia are (potentially) insecure.3
precisely because these basic modes only ensure security for passive eavesdropping adversaries and do not ensure chosen ciphertext security, which is the "gold standard" for online applications. (For symmetric encryption people often use the name "authenticated encryption" in practice rather than CCA security; those are not identical but are highly related notions.)
All of this suggests that message authentication codes might help us get CCA security. This turns out to be the case. But one needs to take some care exactly how to use MACs to get CCA security. At this point, you might want to stop and think how you would do this…
You should stop here and try to think how you would implement a CCA secure encryption by combining MACs with a CPA secure encryption.
If you didn't stop before, then you should really stop and think now.
Read more: A Few Thoughts on Cryptographic Engineering
OK, so now that you've had a chance to think about this on your own, we will describe one way that works to achieve CCA security from MACs. We will explore other approaches that may or may not work in the exercises.
Theorem 6.3: Let \((E,D)\) be a CPA-secure encryption scheme and \((S,V)\) be a CMA-secure MAC with \(n\) bit keys and a canonical verification algorithm.4 Then the following encryption \((E',D')\) with keys of \(2n\) bits is CCA secure:
\(E'_{k_1,k_2}(m)\) is obtained by computing \(c = E_{k_1}(m)\), \(\sigma = S_{k_2}(c)\) and outputting \((c, \sigma)\).
\(D'_{k_1,k_2}(c, \sigma)\) outputs nothing (e.g., an error message) if \(V_{k_2}(c, \sigma) \neq 1\), and otherwise outputs \(D_{k_1}(c)\).
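Here is a minimal Python sketch of this encrypt-then-MAC construction, with HMAC-SHA256 standing in for the CMA-secure MAC \((S,V)\) and the PRF-based scheme standing in for \((E,D)\). The function names are ours, and this is an illustration rather than production code:

```python
import hashlib
import hmac
import os

def prf(k, r):
    return hmac.new(k, r, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# CPA-secure layer: E_{k1}(m) = (r, f_{k1}(r) XOR m), messages up to 32 bytes
def E(k1, m):
    r = os.urandom(16)
    return r + xor(prf(k1, r)[:len(m)], m)

def D(k1, c):
    return xor(prf(k1, c[:16])[:len(c) - 16], c[16:])

# MAC layer: S_{k2}(c) = HMAC-SHA256(k2, c), with canonical verification
def S(k2, c):
    return hmac.new(k2, c, hashlib.sha256).digest()

def enc(k1, k2, m):            # E'_{k1,k2}(m) = (c, sigma) with sigma = S_{k2}(c)
    c = E(k1, m)
    return c + S(k2, c)

def dec(k1, k2, ct):           # D'_{k1,k2}: verify the tag first, then decrypt
    c, sigma = ct[:-32], ct[-32:]
    if not hmac.compare_digest(S(k2, c), sigma):
        return None            # reject: failed verification
    return D(k1, c)

k1, k2 = os.urandom(32), os.urandom(32)
ct = enc(k1, k2, b"attack at dawn")
assert dec(k1, k2, ct) == b"attack at dawn"
# The WEP-style bit flip now fails: a modified ciphertext is rejected.
tampered = bytes([ct[0] ^ 1]) + ct[1:]
assert dec(k1, k2, tampered) is None
```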
Proof: Suppose, towards a contradiction, that there exists an adversary \(M'\) that wins the CCA game for the scheme \((E',D')\) with probability at least \(1/2 + \epsilon\). We consider the following two cases:
Case I: With probability at least \(\epsilon/10\), at some point during the CCA game, \(M'\) sends to its decryption box a ciphertext \((c, \sigma)\) that is not identical to one of the ciphertexts it previously obtained from its encryption box, and obtains from it a non-error response.
Case II: The event above happens with probability smaller than \(\epsilon/10\).
We will derive a contradiction in either case. In the first case, we will use \(M'\) to obtain an adversary that breaks the MAC \((S,V)\), while in the second case, we will use \(M'\) to obtain an adversary that breaks the CPA security of \((E,D)\).
Let's start with Case I: When this case holds, we will build an adversary \(F\) (for "forger") for the MAC \((S,V)\). We can assume the adversary \(F\) has access to both the signing and verification algorithms as black boxes for some unknown key \(k_2\) that is chosen at random and fixed.5 \(F\) will choose \(k_1\) on its own, and will also choose at random a number \(i_0\) from \(1\) to \(T\), where \(T\) is the total number of queries that \(M'\) makes to the decryption box. \(F\) will run the entire CCA game with \(M'\), using \(k_1\) and its access to the black boxes to execute the encryption and decryption boxes, all the way until just before \(M'\) makes the \(i_0^{th}\) query \((c, \sigma)\) to its decryption box. At that point, \(F\) will output \((c, \sigma)\). We claim that with probability at least \(\epsilon/(10T)\), our forger will succeed in the CMA game in the sense that (i) the query \((c, \sigma)\) will pass verification, and (ii) the message \(c\) was not previously queried to the signing oracle.
Indeed, because we are in Case I, with probability \(\epsilon/10\) some query that \(M'\) makes will be one that was not asked before, and hence was not queried by \(F\) to its signing oracle; furthermore, the returned message is not an error message, and hence the signature passes verification. Since \(i_0\) is random, with probability \(\epsilon/(10T)\) this query will be at the \(i_0^{th}\) round. Let us assume that this event \(\mathit{good}\) happened, in which the \(i_0^{th}\) query to the decryption box is a pair \((c, \sigma)\) that both passes verification and was not returned before by the encryption oracle. Since we pass (canonical) verification, we know that \(\sigma = S_{k_2}(c)\), and because all encryption queries return pairs of the form \((c', S_{k_2}(c'))\), this means that no such query returned \(c\) as its first element either. In other words, when the event \(\mathit{good}\) happens, the \(i_0^{th}\) query contains a pair \((c, \sigma)\) such that \(c\) was not queried before to the signing box, but \((c, \sigma)\) passes verification. This is the definition of breaking \((S,V)\) in a chosen message attack, and hence we obtain a contradiction to the CMA security of \((S,V)\).
Now for Case II: In this case, we will build an adversary \(Eve\) for the CPA game in the original scheme \((E,D)\). As you might expect, the adversary \(Eve\) will choose by herself the key \(k_2\) for the MAC scheme, and attempt to play the CCA security game with \(M'\). When \(M'\) makes encryption queries this is not a problem: \(Eve\) can forward the plaintext \(m\) to her encryption oracle to get \(c = E_{k_1}(m)\) and then compute \(\sigma = S_{k_2}(c)\) since she knows the signing key \(k_2\).
However, what does \(Eve\) do when \(M'\) makes decryption queries? That is, suppose that \(M'\) sends a query of the form \((c, \sigma)\) to its decryption box. To simulate the algorithm \(D'\), \(Eve\) would need access to a decryption box for \(D\), but she doesn't get such a box in the CPA game. (This is a subtle point; please pause here and reflect on it until you are sure you understand it!)
To handle this issue \(Eve\) will follow the common approach of "winging it and hoping for the best". When \(M'\) sends a query of the form \((c, \sigma)\), \(Eve\) will first check whether it happens to be the case that \((c, \sigma)\) was returned before as an answer to an encryption query \(m\). In that case \(Eve\) will breathe a sigh of relief and simply return \(m\) to \(M'\) as the answer. (This is obviously correct: if \((c, \sigma)\) is the encryption of \(m\) then \(m\) is the decryption of \((c, \sigma)\).) However, if the query \((c, \sigma)\) has not been returned before as an answer, then \(Eve\) is in a bit of a pickle. The way out of it is for her to simply return "error" and hope that everything will work out. The crucial observation is that because we are in Case II things will work out. After all, the only way \(Eve\) makes a mistake is if she returns an error message where the original decryption box would not have done so, but this happens with probability at most \(\epsilon/10\). Hence, if \(M'\) has success \(1/2 + \epsilon\) in the CCA game, then even if \(M'\) always outputs the wrong answer when \(Eve\) makes this mistake, we will still get success at least \(1/2 + 0.9\epsilon\). Since \(\epsilon\) is non-negligible, this contradicts the CPA security of \((E,D)\), thereby concluding the proof of the theorem.
This proof is emblematic of a general principle for proving CCA security. The idea is to show that the decryption box is completely "useless" for the adversary, since the only way to get a non-error response from it is to feed it a ciphertext that was received from the encryption box.
(Simplified) GCM encryption
The construction above works as a generic construction, but it is somewhat costly in the sense that we need to evaluate both the block cipher and the MAC. In particular, if messages have \(t\) blocks, then we would need to invoke two cryptographic operations (a block cipher encryption and a MAC computation) per block. The GCM (Galois Counter Mode) is a way around this. We are going to describe a simplified version of this mode. For simplicity, assume that the number of blocks \(t\) is fixed and known (though many of the annoying but important details in block cipher modes of operation involve dealing with padding to a multiple of blocks and dealing with variable block size).
A universal hash function collection is a family of functions \(\{h : \{0,1\}^\ell \rightarrow \{0,1\}^n\}\) such that for every \(x \neq x' \in \{0,1\}^\ell\), the random variables \(h(x)\) and \(h(x')\) (taken over the choice of a random \(h\) from this family) are pairwise independent in \(\{0,1\}^{2n}\). That is, for every two possible outputs \(y, y' \in \{0,1\}^n\),
\[ \Pr_h[ h(x)=y \;\wedge\; h(x')=y' ] = 2^{-2n} \;\;(6.1) \]
Universal hash functions have rather efficient constructions, and in particular if we relax the definition to allow almost universal hash functions (where we replace the \(2^{-2n}\) factor on the right-hand side of Equation 6.1 by a slightly bigger, though still negligible, quantity) then the constructions become extremely efficient, and the size of the description of \(h\) is only related to \(n\), no matter how big \(\ell\) is.6
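As a sanity check, the standard family \(h_{a,b}(x) = ax + b \bmod p\) over a prime field (stated here over \(\mathbb{Z}_p\) rather than \(\{0,1\}^n\), but the idea is the same) is pairwise independent, and we can verify the analogue of Equation 6.1 exhaustively for a toy prime:

```python
# Exhaustively verify pairwise independence of h_{a,b}(x) = a*x + b mod p
# over the toy prime field Z_7 (a "hash key" here is the pair (a, b)).
p = 7
x, xp = 1, 2                  # any fixed pair with x != x'
for y in range(p):
    for yp in range(p):
        hits = sum(1 for a in range(p) for b in range(p)
                   if (a * x + b) % p == y and (a * xp + b) % p == yp)
        # exactly one key out of p**2 maps (x, x') to (y, y'),
        # so Pr[h(x)=y and h(x')=y'] = p**-2, matching Equation 6.1
        assert hits == 1
```

The reason is linear algebra: for \(x \neq x'\) the system \(ax + b = y\), \(ax' + b = y'\) has exactly one solution \((a,b)\) in \(\mathbb{Z}_p^2\).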
Our encryption scheme is defined as follows. The key is \((k, h)\) where \(k\) is an index to a pseudorandom permutation \(\{p_k\}\) and \(h\) is the key for a universal hash function.7 To encrypt a message \(m = (m_1, \ldots, m_t) \in \{0,1\}^{nt}\) do the following:
Choose \(\mathit{IV}\) at random in \([2^n]\).
Let \(z_i = E_k(\mathit{IV} + i)\) for \(i = 1, \ldots, t+1\).
Let \(c_i = z_i \oplus m_i\).
Let \(c_{t+1} = h(c_1, \ldots, c_t) \oplus z_{t+1}\).
Output \((\mathit{IV}, c_1, \ldots, c_{t+1})\).
The communication overhead includes one extra output block plus the IV (whose transmission can often be avoided or reduced, depending on the setting; see the notion of "nonce based encryption"). This is fairly minimal. The extra computational cost on top of the \(t\) block-cipher evaluations is the application of \(h(\cdot)\). For the particular choice of \(h\) used in Galois Counter Mode, this function can be evaluated very efficiently, at a cost of a single multiplication in the Galois field of size \(2^{128}\) per block (one can think of this as some very particular operation that maps two \(128\) bit strings to a single one, and can be carried out quite efficiently). We leave it as an (excellent!) exercise to prove that the resulting scheme is CCA secure.
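Here is a toy Python sketch of the simplified mode above, with two deliberate simplifications (so this is an illustration of the structure, not real GCM): HMAC-SHA256 stands in for the block cipher \(E_k\), and the universal hash is a polynomial in the key \(h\) evaluated modulo a prime near \(2^{128}\) rather than a multiplication in the Galois field. All function names are ours:

```python
import hashlib
import hmac
import os
import secrets

N = 16                                   # block size in bytes (n = 128 bits)
P = 2**127 - 1                           # toy prime standing in for GF(2^128)

def F(k, ctr):                           # HMAC-SHA256 as a stand-in for E_k
    return hmac.new(k, ctr.to_bytes(N, "big"), hashlib.sha256).digest()[:N]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def uhash(h, blocks):                    # polynomial hash, Horner-evaluated mod P
    acc = 0
    for c in blocks:
        acc = (acc * h + int.from_bytes(c, "big")) % P
    return ((acc * h) % P).to_bytes(N, "big")

def encrypt(k, h, m_blocks):
    iv = secrets.randbelow(2**64)                            # step 1: random IV
    z = [F(k, iv + i) for i in range(1, len(m_blocks) + 2)]  # step 2: z_i
    c = [xor(zi, mi) for zi, mi in zip(z, m_blocks)]         # step 3: c_i
    tag = xor(uhash(h, c), z[-1])                            # step 4: c_{t+1}
    return iv, c, tag

def decrypt(k, h, iv, c, tag):
    z = [F(k, iv + i) for i in range(1, len(c) + 2)]
    if xor(uhash(h, c), z[-1]) != tag:   # reject if the tag does not check out
        return None
    return [xor(zi, ci) for zi, ci in zip(z, c)]

k, h = os.urandom(32), secrets.randbelow(P)
m = [b"sixteen byte blk", b"another block!!!"]
iv, c, tag = encrypt(k, h, m)
assert decrypt(k, h, iv, c, tag) == m
# Flipping a ciphertext bit invalidates the tag (except with tiny probability).
c_bad = [xor(c[0], b"\x01" + bytes(15))] + c[1:]
assert decrypt(k, h, iv, c_bad, tag) is None
```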
Padding, chopping, and their pitfalls: the “buffer overflow” of cryptography
In this course we typically focus on the simplest case where messages have a fixed size. But in fact, in real life we often need to chop long messages into blocks, or pad messages so that their length becomes an integral multiple of the block size. Moreover, there are several subtle ways to get this wrong, and these have been used in several practical attacks.
Chopping into blocks: A block cipher a priori provides a way to encrypt a message of length \(n\), but we often have much longer messages and need to "chop" them into blocks. This is where the block cipher modes discussed in the previous lecture come in. However, the basic popular modes such as CBC and OFB do not provide security against chosen ciphertext attacks, and in fact typically make it easy to extend a ciphertext with an additional block or to remove the last block from a ciphertext, both being operations which should not be feasible in a CCA secure encryption.
Padding: Oftentimes messages are not an integer multiple of the block size and hence need to be padded. The padding is typically a map that takes the last partial block of the message (i.e., a string \(m\) of length in \(\{0, \ldots, n-1\}\)) and maps it into a full block (i.e., a string \(m \in \{0,1\}^n\)). The map needs to be invertible, which in particular means that if the message is already an integer multiple of the block size we will need to add an extra block. (This is because we have to map all the \(1 + 2 + \ldots + 2^{n-1}\) messages of length \(0, \ldots, n-1\) into the \(2^n\) messages of length \(n\) in a one-to-one fashion.) One approach for doing so is to pad a partial block of length \(n' < n\) with the string \(1 0^{n-n'-1}\).
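One standard invertible approach is to always append a single 1 bit followed by zeros up to the block boundary. A byte-level sketch of this idea (append the marker byte `0x80`, then zero bytes; this minimal code is our own illustration, not a production padding routine):

```python
def pad(m: bytes, n: int = 16) -> bytes:
    # append the marker byte 0x80, then zero bytes up to a block boundary;
    # a message that is already a multiple of n gains a whole extra block
    fill = n - (len(m) % n)
    return m + b"\x80" + bytes(fill - 1)

def unpad(p: bytes) -> bytes:
    stripped = p.rstrip(b"\x00")         # drop trailing zeros
    assert stripped.endswith(b"\x80"), "invalid padding"
    return stripped[:-1]                 # drop the marker byte

for msg in [b"", b"short", b"exactly 16 bytes", b"a bit more than one block"]:
    padded = pad(msg)
    assert len(padded) % 16 == 0 and unpad(padded) == msg
```

Note that because the padded length is always strictly larger than the message length, full-block messages pay for one extra block, exactly as the counting argument above requires.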
Chosen ciphertext attack as implementing metaphors
The classical "metaphor" for an encryption is a sealed envelope, but as we have seen in the WEP example, this metaphor can lead you astray. If you placed a message \(m\) in a sealed envelope, you should not be able to modify it to the message \(m \oplus m'\) without opening the envelope, and yet this is exactly what happens in the canonical CPA secure encryption \(E_k(m)=(r,f_k(r) \oplus m)\). CCA security comes much closer to realizing the metaphor, and hence is considered the "gold standard" of secure encryption. This is important even if you do not intend to write poetry about encryption. Formal verification of computer programs is an area that is growing in importance given that computer programs become both more complex and more mission critical. Cryptographic protocols can fail in subtle ways, and even published proofs of security can turn out to have bugs in them. Hence there is a line of research dedicated to finding ways to automatically prove security of cryptographic protocols. Much of this line of research is based on simple models to describe protocols that are known as Dolev-Yao models, after the first paper that proposed such models. These models define an algebraic form of security, where rather than thinking of messages, keys, and ciphertexts as binary strings, we think of them as abstract entities. There are certain rules for manipulating these symbols. For example, given a key \(k\) and a message \(m\) you can create the ciphertext \(\{ m \}_k\), which you can decrypt back to \(m\) using the same key. However, the assumption is that any information that cannot be obtained by such manipulation is unknown.
Translating a proof of security in this algebra to a proof for real-world adversaries is highly nontrivial. However, to have even a fighting chance, the encryption scheme needs to be as strong as possible, and in particular it turns out that security notions such as CCA play a crucial role.
6.4: The Dolev-Yao algebra of what an adversary or "intruder" knows. Figure taken from here.
Reading comprehension exercises
I recommend students do the following exercises after reading the lecture. They do not cover all material, but can be a good way to check your understanding.
Let \((E,D)\) be the “canonical” PRFbased CPA secure encryption, where \(E_k(m)= (r,f_k(r)\oplus m)\) and \(\{ f_k \}\) is a PRF collection and \(r\) is chosen at random. Is this scheme CCA secure?
No, it is never CCA secure.
It is always CCA secure.
It is sometimes CCA secure and sometimes not, depending on the properties of the PRF \(\{ f_k \}\).
Suppose that we allow a key to be as long as the message, and so we can use the one-time pad. Would the one-time pad be:
CPA secure
CCA secure
Neither CPA nor CCA secure.
Which of the following statements is true about the proof of Theorem 6.3:
Case I corresponds to breaking the MAC and Case II corresponds to breaking the CPA security of the underlying encryption scheme.
Case I corresponds to breaking the CPA security of the underlying encryption scheme and Case II corresponds to breaking the MAC.
Both cases correspond to both breaking the MAC and encryption scheme.
If neither Case I nor Case II happens then we obtain an adversary breaking the security of the underlying encryption scheme.
Compiled on 11/17/2021 22:36:15
Copyright 2021, Boaz Barak.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Produced using pandoc and panflute with templates derived from gitbook and bookdown.