## How POODLE Happened

Bodo Möller, Thai Duong, and Krzysztof Kotowicz have just broken the internet again with POODLE[20], a new and devastating attack against SSL. POODLE, an acronym for Padding Oracle On Downgraded Legacy Encryption, permits a man-in-the-middle attacker to rapidly decrypt any browser session which uses SSL v3.0 — or, as is generally the case, any session which can be coerced into using it. POODLE is a death blow to this version of the protocol; it can only reasonably be fixed by disabling SSL v3.0 altogether.

This post is meant to be a “simple as possible, but no simpler” explanation of POODLE. I’ve tried to make it accessible to as many readers as possible and yet still go into full and accurate technical detail and provide complete citations. However, as the title implies, I have a second goal, which is to explain not merely how POODLE works, but the historical mistakes which allow it to work: mistakes that are still with us even though we’ve known better for over a decade.

### A brief history of SSL

POODLE is the latest in a long line of similar attacks against known weaknesses in SSL’s use of cipher block chaining (CBC). This explanation of it, therefore, will begin with a bit of a history lesson. I’ve focused this timeline on vulnerabilities that are relevant to CBC; it is nowhere near being a complete list of past problems with SSL. I’ve left off the Bleichenbacher attacks on RSA[6, 14], compression ratio attacks like CRIME[27] and BREACH[24], and implementation bugs like Heartbleed[16].

• 1994: A team including Taher Elgamal at Netscape Communications designs SSL (Secure Socket Layer) v1.0; this version is never released publicly[22].

• 1995: SSL v2.0 is released as a feature of Netscape Navigator[22].

• 1996: Paul Kocher et al., also at Netscape, develop SSL v3.0[15], a nearly-complete redesign of the protocol, addressing several serious vulnerabilities in SSL v2.0. SSL v3.0 is the first version of the protocol to authenticate handshake messages, thus in theory preventing attackers from triggering downgrades to earlier protocol versions.

• 1999: The IETF publishes RFC 2246[9], standardizing TLS (Transport Layer Security) v1.0, closely based on SSL v3.0.

• 2001: Serge Vaudenay notes a vulnerability in the padding schemes used for cipher block chaining in SSL v3.0 and TLS v1.0, but his attack is at the time believed to be impractical under normal circumstances[30].

• 2002: Wei Dai publishes an attack against SSH’s practice of using predictable initialization vectors for cipher block chaining[8]; Bodo Möller observes that SSL v3.0 and TLS v1.0 have the same flaw but does not leverage it into an attack[18].

• 2003: Brice Canvel, Alain Hiltgen, Serge Vaudenay, and Martin Vuagnoux combine Vaudenay’s padding attack with a timing attack to produce practical results against OpenSSL[7].

• 2006: Gregory Bard publishes a “challenging but feasible” way of exploiting the predictable-IV flaw observed by Dai and Möller to extract small amounts of information from an encrypted SSL session[3].

• 2006: With the release of Internet Explorer 7, all major browsers ship with TLS v1.0 enabled by default and SSL v2.0 disabled by default[31].

• 2006: RFC 4346[10] defines TLS v1.1, fixing the predictable-IV flaw and providing implementation advice for mitigating the Vaudenay attack. It is slow to be adopted.

• 2008: RFC 5246[11] defines TLS v1.2, with support for AEAD cipher modes. Again, adoption is slow.

• 2011: Thai Duong and Juliano Rizzo successfully use the Vaudenay attack to exploit ASP.NET[12].

• 2011: Thai Duong and Juliano Rizzo publish the BEAST attack[13], building on Gregory Bard’s work to provide a much more powerful attack against TLS v1.0 and SSL v3.0. Some servers switch to using RC4, which, being a stream cipher, is not vulnerable to BEAST or any other variation of the Vaudenay attack.

• 2012–2014: Motivated in part by BEAST, browser support for TLS v1.1 and v1.2 finally improves[31].

• 2013: Nadhem AlFardan and Kenny Paterson publish Lucky13[1], a new twist on the Canvel-Hiltgen-Vaudenay-Vuagnoux timing attack of 2003. It impacts CBC mode in all versions of SSL.

• 2013: Nadhem AlFardan, Daniel Bernstein, Kenny Paterson, Bertram Poettering and Jacob Schuldt publish a practical attack against long-known weaknesses in RC4[2]. As a cure for BEAST, RC4 can now be considered worse than the disease.

• 2014: Bodo Möller, Thai Duong, and Krzysztof Kotowicz publish POODLE[20].

### The downgrade dance, or why the ’90s just won’t stop calling

You might gather from reading this timeline that nobody who has been paying attention is really shocked to hear about another attack against SSL’s handling of cipher block chaining. It’s been a thorn in our side for at least 13 years and has jabbed us again and again.

Some readers are probably wondering why we still care about SSL v3.0 vulnerabilities. TLS v1.0 was published 15 years ago, and support for it has been practically universal since the demise of Internet Explorer 6 (which supports TLS v1.0 but shipped with it disabled by default). The handshake authentication introduced in SSL v3.0 is supposed to prevent downgrade attacks, and in theory it should. In the overwhelming majority of cases where both client and server support TLS v1.0 or greater, we shouldn’t have to care about anything that’s wrong with SSL v3.0 — right?

Unfortunately, no — that’s wrong. The problem stems from browser vendors’ desire to cope with buggy servers and middleboxes which advertise a protocol version that they can’t actually support. To work around such broken behavior, when an SSL handshake fails, most browsers (all but Opera[5]) will fall back to an earlier protocol version and retry. This browser behavior, called the “downgrade dance”, makes browsers trivially vulnerable to downgrade attacks: all an attacker has to do is temporarily cut the client’s connection a few times until the browser’s fallback behavior is triggered.

### A primer on SSL bulk encryption

SSL is a big, complex protocol, and stepping through the whole thing would make a much longer post than this one. Fortunately, most of that complexity lives in the initial steps of the protocol, the handshake phase. The handshake phase is the part of the protocol which enables two previously-unfamiliar parties (a client and a server) to agree on a shared secret key for communication, in addition to some non-secret, auxiliary parameters such as which cipher algorithms they’ll be using. The attacks we’re concerned with in this article don’t involve the handshake: they attack the bulk encryption phase, where the communicating parties are already exchanging application-level messages such as HTTP requests. So, for the remainder of this discussion, you can assume that the client and server already have a key shared between the two of them and nobody else.

That shared key is actually several shared keys. Each side has one key that it uses for encryption (preventing attackers from reading the contents of the communication), and one that it uses for authentication (preventing attackers from modifying the contents of the communication without being immediately detected). Client-to-server communication uses one pair of keys and server-to-client communication uses another, bringing the total to four.

The modern consensus (e.g. [4, 17, 23, 28]) among cryptographers is that the right way to produce an authenticated, encrypted block of data is either to use a dedicated AEAD construct such as AES-GCM, or to encrypt-then-MAC: first encrypt the plaintext, producing ciphertext, then compute a message authentication code (MAC) over the ciphertext. However, when SSL was designed in the mid-’90s, this consensus was not yet established, and SSL does things in a different order: MAC-then-encrypt. It first computes a MAC over the plaintext, then encrypts the plaintext together with the MAC. While this construction isn’t inherently broken per se, it’s fraught with danger, and it directly enabled Vaudenay’s attack and POODLE.
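To make the difference between the two orderings concrete, here is a sketch in Python. HMAC-SHA-1 plays the role of the MAC, as in SSL, while a toy XOR-keystream cipher stands in for real encryption; everything here is illustrative, not production cryptography.

```python
import hashlib
import hmac

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream cipher (a stand-in for a real cipher; NOT secure).
    XOR is its own inverse, so this function also decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def mac_then_encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # SSL's order: MAC the plaintext, then encrypt plaintext || MAC together.
    tag = hmac.new(mac_key, plaintext, hashlib.sha1).digest()
    return toy_encrypt(enc_key, plaintext + tag)

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # The modern order: encrypt first, then MAC the ciphertext.
    ciphertext = toy_encrypt(enc_key, plaintext)
    tag = hmac.new(mac_key, ciphertext, hashlib.sha1).digest()
    return ciphertext + tag

def verify_then_decrypt(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    # With encrypt-then-MAC, the receiver can reject a forgery *before*
    # doing any decryption or padding handling, so nothing leaks.
    ciphertext, tag = msg[:-20], msg[-20:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return toy_encrypt(enc_key, ciphertext)
```

The danger of MAC-then-encrypt is visible by contrast with `verify_then_decrypt`: in SSL’s ordering, the receiver must decrypt, and then parse padding, before it can even locate the MAC to check.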

Most algorithms supported by SSL v3.0 through TLS v1.1 (in practice, all but RC4) are based on block ciphers. A block cipher is a cryptographic primitive which takes as input a key of a fixed size (typically 128–256 bits, but much shorter for some legacy ciphers), and a plaintext of fixed size (typically 64–128 bits), and produces a ciphertext of equal length to the plaintext. A corresponding decryption primitive, given the ciphertext and the same key, returns the original plaintext.

Because raw block ciphers can only operate on fixed-sized inputs, a bit of additional machinery, called a block cipher mode of operation, is needed for encrypting longer messages. The mode of operation used in SSL is called cipher block chaining, or CBC, and it works as follows:

1. Take the first block of plaintext and XOR it with an extra parameter called the initialization vector, or IV.

2. Encrypt the result of step 1, producing a ciphertext. Output it.

3. Take the next block of input and XOR it with the previous ciphertext.

4. Encrypt the result of step 3, producing another ciphertext. Output it.

5. Repeat steps 3–4 until the input is exhausted.
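The steps above can be sketched directly in Python. The block cipher here is a toy four-round Feistel construction standing in for AES; it is illustrative only and not secure.

```python
import hashlib

BLOCK = 16  # block size in bytes, as with AES

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """Toy 16-byte Feistel cipher, a stand-in for AES (NOT secure)."""
    left, right = block[:8], block[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + right).digest()[:8]
        left, right = right, xor(left, f)
    return left + right

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0, "input must already be padded"
    out, prev = [], iv  # step 1: the first block is XORed with the IV
    for i in range(0, len(plaintext), BLOCK):
        # steps 1-4: XOR with the previous ciphertext (or IV), then encrypt
        prev = toy_block_encrypt(key, xor(plaintext[i:i + BLOCK], prev))
        out.append(prev)  # output the block; it chains into the next one
    return b"".join(out)
```

Note that the chaining makes identical plaintext blocks encrypt to different ciphertext blocks, which is the whole point of not using the raw cipher directly.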

CBC allows you to use a block cipher to encrypt a message of effectively-unbounded length, but the length of the input still needs to be a multiple of the block size. If this isn’t already the case, the input needs to be padded with some extra bytes in order to make it so.

The whole structure used as input for encryption looks like this ([15] §5.2.3.2):
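Quoting the GenericBlockCipher definition from [15] §5.2.3.2:

```
block-ciphered struct {
    opaque content[SSLCompressed.length];
    opaque MAC[CipherSpec.hash_size];
    uint8 padding[GenericBlockCipher.padding_length];
    uint8 padding_length;
} GenericBlockCipher;
```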

To put that into English, this means the structure consists of four fields concatenated together:

1. The plaintext.

2. A MAC computed over the plaintext, also covering some auxiliary data such as the sequence number of the message.

3. As many padding bytes as are needed to make the message fit the block size.

4. A single final byte giving the length of the padding.

SSL v3.0 doesn’t specify what goes into the padding bytes; they’re often random. TLS is more strict about them: each byte of padding must take on the same value as the length byte at the end. So, if you have three bytes of padding plus the one length byte, then the last four bytes of your message will be 0x03 0x03 0x03 0x03. This detail turns out to thwart some attacks, including POODLE.
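A sketch of the two validation rules makes the difference plain (the block size and function names here are illustrative):

```python
def tls_padding_ok(plaintext: bytes) -> bool:
    """TLS rule: every padding byte must equal the final length byte."""
    pad_len = plaintext[-1]
    if pad_len + 1 > len(plaintext):
        return False
    return all(b == pad_len for b in plaintext[-(pad_len + 1):-1])

def sslv3_padding_ok(plaintext: bytes, block_size: int = 16) -> bool:
    """SSL v3.0 rule: the padding bytes themselves can be anything;
    only the final length byte is interpreted, and it must not claim
    more than one block's worth of padding."""
    return plaintext[-1] <= block_size - 1
```

Under the TLS rule a tampered padding byte is detectable; under the SSL v3.0 rule it is not, which is exactly the gap POODLE exploits.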

In both SSL v3.0 and all versions of TLS, the initialization vector used to encrypt the first message of the bulk encryption phase is determined during the handshake. TLS v1.1 changes how the initialization vectors for subsequent messages are chosen. In v1.1 and beyond, a fresh initialization vector is transmitted explicitly with every message. In v1.0 and prior, the last block of ciphertext of each message is used as the initialization vector for the next. The old approach is insecure and is what enabled BEAST, but the difference has no relevance to POODLE.

### What’s an oracle?

The notion of an oracle is a central concept of cryptology. The term is borrowed from theoretical computer science. In CS theory, an oracle is a “black box” which, given a string, returns an instantaneous yes-or-no answer as to whether that string satisfies some particular logical predicate. CS theorists commonly study how some particular oracle would augment the capability of a Turing machine, by enabling it to solve new problems or to solve old ones more efficiently.

The sorts of oracles that interest CS theorists are usually physically unrealizable, but the sorts that interest cryptologists are everywhere: they’re often built by accident. A cryptologic oracle is usually manifested as a network service which has access to some secret piece of information, such as a cryptographic key. When it interacts with a (perhaps hostile) network client, it somehow leaks some bit of information which could not otherwise be obtained without direct access to the secret.

The general idea behind a padding oracle is that when an SSL server (or client) receives and decrypts some ciphertext, its behavior afterward reveals information about the padding bytes of the plaintext. If the padding is valid, it continues with the protocol; if it is invalid, it aborts with an error.

### The Vaudenay attack

Suppose an SSL client sends the following message to a server, using AES-128 (block size = 16 bytes) for encryption and HMAC-SHA-1 (tag size = 20 bytes) for authentication:

Before encryption, a hex dump of the message would look like this:

I’ve broken the lines at 16-byte intervals, so every line corresponds to one AES block. The 20 bytes following the body of the message, starting with bc and ending with 5b, are the MAC tag. After the end of the MAC tag, we’re 15 bytes into a 16-byte block, so we append no padding, and then a padding-length byte of 00.

So that you can follow along and try this yourself, I’m using all-null keys and IVs. In practice, of course, these would be replaced by real keys, and the attacker wouldn’t know them.

Here’s the above message after encryption:

Let’s see what happens if we start flipping bits in this ciphertext. See that 47 byte at the end of the first block? Let’s change it to 46 and then see what happens after decryption:

The first block of plaintext has been entirely corrupted, just like you’d probably expect. But what happened to the second block? The “V” at the end has changed to a “W”, yet the rest of it is intact. To understand why, recall how CBC works. Before encryption, each plaintext block is XORed with the previous ciphertext block; correspondingly, CBC decryption works by first decrypting each ciphertext block and then XORing the result with the previous ciphertext block. As a result, flipping a bit in a ciphertext block will cause the corresponding bit in the next plaintext block to get flipped, without impacting the rest of that block or any succeeding block.
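This behavior is easy to reproduce. The following self-contained sketch (with a toy Feistel cipher standing in for AES, redefined here so the snippet runs on its own) encrypts three blocks, flips one ciphertext bit, and checks exactly the pattern described above:

```python
import hashlib

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(key: bytes, block: bytes, decrypt: bool = False) -> bytes:
    """Toy 16-byte Feistel cipher, a stand-in for AES (NOT secure)."""
    left, right = block[:8], block[8:]
    for rnd in (reversed(range(4)) if decrypt else range(4)):
        f = hashlib.sha256(key + bytes([rnd]) + (left if decrypt else right)).digest()[:8]
        if decrypt:
            left, right = xor(right, f), left
        else:
            left, right = right, xor(left, f)
    return left + right

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        prev = feistel(key, xor(pt[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(ct), BLOCK):
        out.append(xor(feistel(key, ct[i:i + BLOCK], decrypt=True), prev))
        prev = ct[i:i + BLOCK]
    return b"".join(out)

key, iv = b"k" * 16, b"\x00" * 16
pt = bytes(range(48))                        # three 16-byte blocks
ct = cbc_encrypt(key, iv, pt)

# Flip the low bit of the last byte of ciphertext block 1 (the "47 -> 46" move).
tampered = ct[:15] + bytes([ct[15] ^ 0x01]) + ct[16:]
dec = cbc_decrypt(key, iv, tampered)

assert dec[:16] != pt[:16]                            # block 1: garbled
assert dec[16:32] == pt[16:31] + bytes([pt[31] ^ 1])  # block 2: one bit flipped
assert dec[32:] == pt[32:]                            # block 3: untouched
```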

Now let’s try another experiment. I’ll take the second ciphertext block, the one that contains the password, and copy it to the end of the message, where the padding belongs. So my ciphertext now looks like this:

If you decrypt this you get:

The last byte, where the padding-length belongs, is now 82. Why 82? The plaintext byte from that block that I copied to the end was originally 56, or “V”, the last character of the password. But now it’s preceded by a ciphertext block whose final byte is 93 rather than 47 like it was originally. So the plaintext ends up as 56 $$\oplus$$ 93 $$\oplus$$ 47 $$=$$ 82.

Have you ever seen the Hollywood trope where the hero is trying to break the door code to get into the evil overlord’s lair, so he plugs a gizmo into the control panel which slowly locks in one digit at a time until the whole code is broken? That’s a bit like how the Vaudenay attack actually works.

Let’s suppose that I as an attacker make a lucky guess that the user’s password ends in “V”. Then I could form the following ciphertext:

This ciphertext is the same as the last two lines of the one above, except that I’ve replaced the 93 at the end of the first block with 11, which is 93 $$\oplus$$ 82. Now if I decrypt this, I get plaintext ending in 00, which is the correct padding-length byte! If I can get the server to reveal this fact to me, then I can confirm that my “V” guess was correct.
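The XOR bookkeeping in this and the previous example can be checked directly:

```python
# Decrypting the copied block yields Dec(C)_last = 0x56 ^ 0x47: the password
# byte XORed with the ciphertext byte that originally preceded it.
dec_last = 0x56 ^ 0x47
assert dec_last == 0x11

# With 0x93 as the preceding ciphertext byte, the plaintext byte becomes 0x82.
assert dec_last ^ 0x93 == 0x82

# Replacing 0x93 with 0x93 ^ 0x82 == 0x11 cancels everything out, leaving the
# valid padding-length byte 0x00 -- exactly when the "V" (0x56) guess is right.
assert 0x93 ^ 0x82 == 0x11
assert dec_last ^ 0x11 == 0x00
```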

This is the basis of the Vaudenay padding-oracle attack. An attacker who can get the server to reveal whether a ciphertext decrypts to something with valid padding or not, can then guess the contents of any block of plaintext one character at a time, and get confirmation when the guess is correct.
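Here is a self-contained simulation of that guessing loop, again with a toy Feistel cipher standing in for AES (redefined so the snippet runs on its own). The oracle is deliberately simplified: it only reports whether the padding-length byte decrypts to 0x00, which is the one bit of information the attack needs.

```python
import hashlib
import os

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(key: bytes, block: bytes, decrypt: bool = False) -> bytes:
    """Toy 16-byte Feistel cipher, a stand-in for AES (NOT secure)."""
    left, right = block[:8], block[8:]
    for rnd in (reversed(range(4)) if decrypt else range(4)):
        f = hashlib.sha256(key + bytes([rnd]) + (left if decrypt else right)).digest()[:8]
        if decrypt:
            left, right = xor(right, f), left
        else:
            left, right = right, xor(left, f)
    return left + right

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        prev = feistel(key, xor(pt[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

KEY, IV = os.urandom(16), os.urandom(16)   # unknown to the attacker
pt = b"A" * 16 + b"password=ninjaVX"       # secret: last byte of block 2
ct = cbc_encrypt(KEY, IV, pt)

def padding_oracle(iv: bytes, ct: bytes) -> bool:
    """Leaks a single bit: does the padding-length byte decrypt to 0x00?"""
    prev, out = iv, b""
    for i in range(0, len(ct), BLOCK):
        out += xor(feistel(KEY, ct[i:i + BLOCK], decrypt=True), prev)
        prev = ct[i:i + BLOCK]
    return out[-1] == 0x00

# Guess the last byte of plaintext block 2 by doctoring the block before it:
# when the guess is right, everything cancels and the oracle sees 0x00.
c1, c2 = ct[:16], ct[16:32]
recovered = None
for guess in range(256):
    doctored = c1[:-1] + bytes([c1[-1] ^ guess ^ 0x00])
    if padding_oracle(doctored, c2):
        recovered = guess
        break
```

At most 256 oracle queries recover the byte; `recovered` comes back as 0x58, the “X” at the end of the secret block.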

Vaudenay discovered this attack in 2001, but didn’t immediately recognize the full extent of its implications. He wasn’t convinced that it was actually possible for an attacker to tell valid from invalid padding based on the server’s behavior. Even if the attacker successfully generates valid padding in a tampered message, the tampering will still be detected because MAC verification will still fail (except, not necessarily — we’ll reconsider this when we talk about POODLE). TLS v1.0 generates a different error message for bad padding than it does for a bad MAC, but that error message is encrypted so it’s not obvious how an attacker can tell the difference. Vaudenay suggested getting the error message out of an unprotected log file, but this isn’t very plausible. Later, in 2003, Vaudenay co-authored a paper which uses a timing attack to tell the difference between the two errors, and in 2013 the Lucky13 attack resurrected this idea.

Vaudenay also originally believed that the fact that TLS treats all padding errors as fatal, shutting the connection and discarding the session key, meant that the full attack wasn’t possible: that the attacker got to take one guess at one byte and nothing more. POODLE, using ideas already foreshadowed by BEAST, shows that in the browser context, this isn’t necessarily so.

### POODLE

Recall that SSL v3.0 treats its padding differently than TLS does. In TLS, every padding byte is determined: it must take the same value as the padding-length byte. In SSL v3.0, the padding is random: it can be anything. Therefore, in SSL v3.0, there’s no such thing as an “invalid” padding byte, and it may be impossible to determine whether the padding has been tampered with. This is especially true when the padding fills an entire block. I’ll add an extra space to the plaintext from my previous example, so that it pushes into the next block and results in an entire block of padding:

Here’s the ciphertext:

I’ve padded this message the way that TLS v1.0 would, but if we’re speaking SSL v3.0 then all but the last of those bytes are ignored: you could fill in anything at all there and the message would still be accepted. That means I can modify the final ciphertext block any way I want, and since the final byte is the only one that matters, there’s a 1-in-256 chance that the message will be accepted: even the MAC will still be valid! That means, in particular, that I can do this:

I took the block containing the password and copied it into the final block, much like before. 255 times out of 256, this will result in an error and the session will abort. One time in 256, though, it will just continue along as normal — and then I can do the simple XOR math which tells me what the last character of the password must have been.
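The whole POODLE loop can be simulated end-to-end. The sketch below uses a toy Feistel cipher in place of a real one, an HMAC-SHA-1 MAC, and fresh random keys for each “session” to model renegotiation; everything else follows the record layout described above (the content string and 20000-session cap are illustrative).

```python
import hashlib
import hmac
import os

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(key: bytes, block: bytes, decrypt: bool = False) -> bytes:
    """Toy 16-byte Feistel cipher, a stand-in for AES (NOT secure)."""
    left, right = block[:8], block[8:]
    for rnd in (reversed(range(4)) if decrypt else range(4)):
        f = hashlib.sha256(key + bytes([rnd]) + (left if decrypt else right)).digest()[:8]
        if decrypt:
            left, right = xor(right, f), left
        else:
            left, right = right, xor(left, f)
    return left + right

def cbc(key: bytes, iv: bytes, data: bytes, decrypt: bool = False) -> bytes:
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        if decrypt:
            out.append(xor(feistel(key, block, True), prev))
            prev = block
        else:
            prev = feistel(key, xor(block, prev))
            out.append(prev)
    return b"".join(out)

# 28 bytes of content: content + 20-byte MAC fills exactly three blocks,
# so the record ends with a full block of padding (15 bytes + length 15).
CONTENT = b"secret-cookie=ABCDEFGHIJKLMN"

def victim_record(enc_key: bytes, mac_key: bytes, iv: bytes) -> bytes:
    mac = hmac.new(mac_key, CONTENT, hashlib.sha1).digest()
    padded = CONTENT + mac + os.urandom(15) + bytes([15])  # SSLv3: padding is arbitrary
    return cbc(enc_key, iv, padded)

def server_accepts(enc_key: bytes, mac_key: bytes, iv: bytes, ct: bytes) -> bool:
    """SSLv3-style receive: strip padding by the length byte, then check the MAC."""
    pt = cbc(enc_key, iv, ct, decrypt=True)
    pad_len = pt[-1]
    if pad_len > BLOCK - 1:
        return False
    body = pt[:len(pt) - pad_len - 1]
    content, mac = body[:-20], body[-20:]
    return hmac.compare_digest(mac, hmac.new(mac_key, content, hashlib.sha1).digest())

# The attack: copy the ciphertext block whose last byte we want (block 1)
# over the final, all-padding block. Each session uses fresh keys, so each
# attempt is an independent 1-in-256 shot at the length byte decrypting to 15.
recovered = None
for _ in range(20000):
    enc_key, mac_key, iv = os.urandom(16), os.urandom(16), os.urandom(16)
    ct = victim_record(enc_key, mac_key, iv)
    tampered = ct[:48] + ct[:16]               # final block := block 1
    if server_accepts(enc_key, mac_key, iv, tampered):
        # Accepted means Dec(C1)[-1] ^ C3[-1] == 15, and Dec(C1)[-1] is
        # P1[-1] ^ IV[-1], so the secret byte falls out by XOR:
        recovered = 0x0F ^ iv[-1] ^ ct[47]
        break
```

On average the loop succeeds after about 256 sessions and recovers the last byte of the first block, here the “B” in the cookie.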

In the context of a web browser, if I have a man-in-the-middle position on the victim’s network and the secret I’m trying to steal is inside an HTTPS-only cookie, it’s easy for me to force the client to keep resending the same message until this attack succeeds. All I have to do is wait for the victim to visit any plain-HTTP site, and insert an invisible iframe into it which runs some Javascript. The Javascript will keep making requests to the site whose cookie I’m trying to steal, and I’ll keep tampering with each request as it occurs. Each failed attempt will result in the connection dropping and then being renegotiated with new key material, so each attempt has an independent 1-in-256 chance of succeeding. Once I’ve successfully determined one byte of the secret cookie, I then increase the length of the URL being requested by one, so that the next unknown byte is now positioned at the end of a block. I also adjust the length of something after the cookie, such as the POST body, so that there is still a full block of padding at the end. I repeat my attack in this fashion until I’ve decrypted the entire cookie.

### The workaround

Within the confines of SSL v3.0, POODLE cannot be fixed. However, the downgrade dance which enables it can be. For this purpose, Bodo Möller and Adam Langley, of Google, have introduced a proposal called TLS_FALLBACK_SCSV[19].

SCSV stands for “signaling cipher suite value”, and it’s essentially a hack which allows TLS clients to indicate to servers that they support some extension to TLS, while ensuring that servers that don’t understand the extension will simply ignore it. TLS already provides an extension mechanism which is supposed to satisfy this purpose, but a lot of TLS servers choke if they receive an extension they don’t recognize. SCSV works by appending a special, bogus value to the list of ciphersuites that the client advertises support for; it turns out that TLS servers more reliably ignore unrecognized ciphersuites than unrecognized extensions. The SCSV mechanism has previously and successfully been used to advertise support for the workaround developed for Marsh Ray’s renegotiation attack[25].

The TLS_FALLBACK_SCSV proposal is that when a client is retrying a TLS connection with an earlier protocol version as part of the downgrade dance, it will signal that it is doing so by including TLS_FALLBACK_SCSV in the cipher list. Legacy servers won’t recognize what it means, so they’ll proceed as normal and allow the downgrade to occur. However, newly-patched servers, which ought to handle recent protocol versions properly, should recognize TLS_FALLBACK_SCSV and refuse the connection. The idea is that a well-behaved server should never trigger the downgrade dance; if it occurs anyway, it must be the result of adversarial interference, and the right thing to do is kill the connection rather than allow the attack to proceed.
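In code, the proposed server-side rule amounts to something like the following sketch. The SCSV code point (0x5600) and the inappropriate_fallback error follow the draft [19]; the function shape and everything around the check are illustrative, with the rest of the handshake machinery omitted.

```python
TLS_FALLBACK_SCSV = 0x5600  # signaling ciphersuite value from [19]
SSL30, TLS10, TLS11, TLS12 = 0x0300, 0x0301, 0x0302, 0x0303

def accept_client_hello(server_max_version: int,
                        client_version: int,
                        client_ciphers: set) -> int:
    """Sketch of the server-side rule: if the client signals that it is
    retrying at a lower version, and this server could have done better,
    refuse the handshake instead of silently downgrading."""
    if TLS_FALLBACK_SCSV in client_ciphers and client_version < server_max_version:
        raise ConnectionError("inappropriate_fallback")
    return min(server_max_version, client_version)
```

A legacy server simply never runs this check, so it ignores the unrecognized ciphersuite and accepts the downgraded connection, which is exactly the graceful-degradation property the SCSV trick is designed to preserve.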

I admit that this workaround leaves me a bit puzzled. Sooner rather than later, there are going to be buggy servers out there which understand TLS_FALLBACK_SCSV, yet choke when the client offers TLS v1.2 (or, soon, v1.3). The result will be that clients which support TLS_FALLBACK_SCSV and TLS v1.2 will be unable to connect to them. This is the same outcome as if the browser simply avoided the downgrade dance altogether. If this outcome is acceptable — and it should be — wouldn’t disabling the downgrade dance be a cleaner way to go about it?

Anyway, the Googlers behind the TLS_FALLBACK_SCSV proposal have a lot more collective experience interfacing with broken SSL implementations than I do; they probably have their reasons for doing it this way. It’s not a standard yet, and I’m sure that at some point in the IETF process this argument will be raised and they’ll answer it.

### The fix

The only correct way to fix POODLE is to disable SSL v3.0 altogether.

I think that last sentence will be mostly uncontroversial. Now, though, I am going to step onto my soapbox and say: disabling SSL v3.0 does not go far enough. It is time to aggressively deprecate as many old versions of TLS as possible. POODLE is not a one-off. It exploits a known mistake that has bitten us before. Many more similar mistakes still exist in TLS v1.0, and some time very soon one of them is going to bite us again.

Every revision of TLS contains fixes for dangerous errors committed by earlier versions. TLS v1.0 dictates the format of padding, preventing POODLE. v1.1 gets rid of IV-chaining, preventing BEAST. v1.2 introduces support for AEAD ciphersuites, providing an alternative to the dangerous MAC-then-encrypt construct. TLS v1.3 will eliminate the RSA handshake protocol[29], which lacks forward secrecy.

Currently, browser support for TLS versions beyond v1.0 is not deployed widely enough to make it practical for most servers to disable v1.0 or anything after it. This must be fixed. For browser vendors, this means backporting TLS v1.2 support to older branches. For website operators, it means displaying nag screens encouraging users to upgrade their browser. For enterprise IT administrators, it means updating your desktop image and fixing or replacing legacy applications that rely on old browser versions.

It’s time to put the cryptographic mistakes of the ’90s behind us.

### References

[1]: Nadhem J. AlFardan and Kenneth G. Paterson, 2013. “Lucky Thirteen: Breaking the TLS and DTLS Record Protocols”. <http://www.isg.rhul.ac.uk/tls/TLStiming.pdf>

[2]: Nadhem J. AlFardan, Daniel J. Bernstein, Kenneth G. Paterson, Bertram Poettering, and Jacob C.N. Schuldt, 2013. “On the security of RC4 in TLS and WPA”. <http://www.isg.rhul.ac.uk/tls/RC4biases.pdf>

[3]: Gregory V. Bard, 2006. “A Challenging but Feasible Blockwise-Adaptive Chosen-Plaintext Attack on SSL”. <https://eprint.iacr.org/2006/136.pdf>

[4]: Mihir Bellare and Chanathip Namprempre, 2000. “Authenticated Encryption: Relations among notions and analysis of the generic composition paradigm”. <ftp://ftp.iks-jena.de/pub/mitarb/lutz/crypt/symmetric/Bellare-Namprempre%3AAuthenticated_Encryption%3AAnalysis_of_Composition_Paradigm.pdf>

[6]: Daniel Bleichenbacher, 1998. “Chosen Ciphertext Attacks Against Protocols Based on the RSA Encryption Standard PKCS #1”. <http://archiv.infsec.ethz.ch/education/fs08/secsem/Bleichenbacher98.pdf>

[7]: Brice Canvel, Alain Hiltgen, Serge Vaudenay, and Martin Vuagnoux, 2003. “Password Interception in a SSL/TLS Channel”. <http://canvel.free.fr/crypto/pdf/CHVV03.pdf>

[8]: Wei Dai, 2002. “An attack against SSH2 protocol”. <http://www.weidai.com/ssh2-attack.txt>

[9]: Tim Dierks and Philip Karlton, 1999. “RFC 2246: The TLS Protocol Version 1.0”. <https://tools.ietf.org/html/rfc2246>

[10]: Tim Dierks and Eric Rescorla, 2006. “RFC 4346: The Transport Layer Security (TLS) Protocol Version 1.1”. <https://tools.ietf.org/html/rfc4346>

[11]: Tim Dierks and Eric Rescorla, 2008. “RFC 5246: The Transport Layer Security (TLS) Protocol Version 1.2”. <https://tools.ietf.org/html/rfc5246>

[12]: Thai Duong and Juliano Rizzo, 2011. “Cryptography in the Web: The Case of Cryptographic Design Flaws in ASP.NET”. <http://www.ieee-security.org/TC/SP2011/PAPERS/2011/paper030.pdf>

[13]: Thai Duong and Juliano Rizzo, 2011. “Here Come the $$\oplus$$ Ninjas”. <http://www.hpcc.ecs.soton.ac.uk/~dan/talks/bullrun/Beast.pdf>

[14]: Hal Finney, 2006. “Bleichenbacher’s RSA signature forgery based on implementation error”. <http://www.imc.org/ietf-openpgp/mail-archive/msg06063.html>

[15]: Alan Freier, Philip Karlton, and Paul Kocher, 2011. “RFC 6101: The Secure Sockets Layer (SSL) Protocol Version 3.0”. <https://tools.ietf.org/html/rfc6101>

[16]: Marko Laakso, 2014. “Heartbleed Bug”. <http://heartbleed.com>

[17]: Moxie Marlinspike, 2011. “The Cryptographic Doom Principle”. <http://www.thoughtcrime.org/blog/the-cryptographic-doom-principle/>

[18]: Bodo Möller, 2002. “Security of CBC Ciphersuites in SSL/TLS: Problems and Countermeasures”. <https://www.openssl.org/~bodo/tls-cbc.txt>

[19]: Bodo Möller and Adam Langley, 2014. “TLS Fallback Signaling Cipher Suite Value (SCSV) for Preventing Protocol Downgrade Attacks (work in progress)”. <https://tools.ietf.org/html/draft-bmoeller-tls-downgrade-scsv-02>

[20]: Bodo Möller, Thai Duong, and Krzysztof Kotowicz, 2014. “This POODLE Bites: Exploiting the SSL 3.0 Fallback”. <https://www.openssl.org/~bodo/ssl-poodle.pdf>

[22]: Rolf Oppliger, 2009. “SSL and TLS: Theory and Practice”. <http://books.google.com/books?id=dR2G0oPufe0C&pg=PA68>

[23]: Colin Percival, 2009. “Encrypt then MAC”. <http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac.html>

[24]: Angelo Prado, Neal Harris, and Yoel Gluck, 2013. “BREACH ATTACK”. <http://breachattack.com>

[25]: Marsh Ray and Steve Dispensa, 2009. “Renegotiating TLS”. <https://kryptera.se/Renegotiating%20TLS.pdf>

[26]: Eric Rescorla, Marsh Ray, and Steve Dispensa, 2010. “Transport Layer Security (TLS) Renegotiation Indication Extension”. <https://tools.ietf.org/html/rfc5746>

[27]: Juliano Rizzo and Thai Duong, 2012. “The CRIME Attack”. <https://docs.google.com/presentation/d/11eBmGiHbYcHR9gL5nDyZChu_-lCa2GizeuOfaLU2HOU>

[28]: Philip Rogaway, 2002. “Authenticated-Encryption with Associated-Data”. <http://seclab.cs.ucdavis.edu/papers/ad.pdf>

[29]: Joseph Salowey, 2014. “Re: [TLS] Confirming Consensus on removing RSA key Transport from TLS 1.3”. <https://www.ietf.org/mail-archive/web/tls/current/msg12266.html>

[30]: Serge Vaudenay, 2002. “Security Flaws Induced by CBC Padding; Applications to SSL, IPSEC, WTLS…”. <https://www.iacr.org/archive/eurocrypt2002/23320530/cbc02_e02d.pdf>

[31]: Wikimedia Foundation, 2014. “Transport Layer Security: Web browsers”. <https://en.wikipedia.org/w/index.php?title=Transport_Layer_Security&oldid=629101662#Web_browsers>