Beyond Eavesdropping: Adversarial Models for Secure Messaging

* The content of this paper was presented at 2017 AppSecUSA

In an age of ever more sophisticated cybercrime and mass surveillance, secure communication is an increasingly rare and premium commodity. As we develop better methods and models for protecting communication streams, it is essential to examine how the threat model for secure messaging applications has evolved beyond the traditional man-in-the-middle attack.

This plays a crucial role in guiding the development of new secure communication tools to better address proven dangers to our digital security.

Since the early days of modern cryptography, marked by the Diffie-Hellman key exchange, the RSA encryption scheme and the Schnorr signature scheme, secure messaging has come a long way. As our technology improves and becomes widely deployed, keeping track of the lessons we can learn from its past and present failures is key to advancing our security models.

Indeed, this exercise is pivotal in our quest to build both more secure and more usable secure communication tools. Just as considering the eavesdropping model led us to design various forms of end-to-end encryption and authentication, the same benefits can arise from developing newer adversarial models inspired by attacks launched on our current systems.

In this paper, I will go over a range of attacks on secure messaging and collaboration platforms that we have witnessed in the wild in the last few years. I’ll look at what lessons we can draw from those incidents to help us build more realistic adversarial models beyond the classic eavesdropping scenario. The paper will also cover some defensive techniques that can provide meaningful security in these models, including the steps Wickr has taken to defend against these types of attacks.

1. Long Term Security Attacks

One of the most important lessons (and a theme we will touch on repeatedly) is that regardless of how much effort we may put into our security technology, some type of compromise will likely occur eventually during the lifetime of the system. While this holds in a much more general sense across the information security industry, it is also true for secure communication tools, where endpoints, infrastructure, services and user accounts may all live for many years at a time. Of course, avoiding any and all compromise over this time span is a laudable goal, but it would be unrealistic (and somewhat irresponsible) to simply assume that no compromise will ever occur. Instead, we must plan for such eventualities before they happen and prepare as much as we can by implementing mitigations early on in the design process.

To see what can go wrong, consider the cases of PGP and S/MIME, the most widely deployed systems for encrypting and authenticating email. Users generally have just a few (usually one) key pairs for encryption and for signatures, and these keys can remain in use for years. In both systems, messages are encrypted (and signed) directly with those keys. Although simple to implement, this has some severe downsides in the context of long term security. Crucially, an adversary which gains access to the decryption keys (say via device compromise) years after a key pair first comes into use can then use the key to decrypt all past emails that were encrypted to that key pair. The same holds for some more recent secure messaging schemes such as Threema and (at least one recent version of) Viber.

1.1. Forward Secrecy

An important lesson here is that we should be considering what are sometimes called attacks on forward secrecy. That is, an adversarial model which makes explicit that the adversary operates in two distinct stages.

In the first “eavesdropping” phase, the adversary sits on the wire between participants, recording packets. Here, it is common to distinguish between weak forward secrecy where the attacker only records packets and strong forward secrecy where the attacker may also modify, insert and delete packets in transmission.

Then, in the second “intrusion” phase, the attacker gains access to the long term keys (or even the entire internal state) of honest participating parties. This could model a device compromise via spear phishing or malware at one or all of the end points involved in the secure communication from the first phase. In this scenario, the security guarantee we should be asking for is that the attacker is not able to use any of the information gained in the second phase of the attack to learn anything new about the encrypted traffic it witnessed (and even interacted with) during the first phase.

For several modern secure messaging systems some form of forward secrecy has already been adopted as an explicit security goal. In particular, Wickr’s protocol provides strong forward secrecy for all messages (and VoIP traffic) exchanged between users.

The standard approach to achieving PFS (perfect forward secrecy) can be summarized as follows. Instead of directly encrypting messages with long term keys, a sender uses her long term keys to authenticate freshly generated ephemeral key material. The ephemeral keys are then used to run a (non-interactive) authenticated key exchange protocol with the recipient so as to generate ephemeral session key material. It is this session key that is used to actually encrypt the message being sent. As soon as the resulting ciphertext is sent out on the wire, the sender deletes all of their ephemeral key material and the session key. Similarly, as soon as the recipient has received and decrypted a message, they too delete all ephemeral and session key material used for that message. This way, regardless of which devices are compromised after the delivery of any given message, no device retains in memory the key material necessary to decrypt that message.

For example, to send a message using Wickr’s protocol, first, the sender generates a fresh signed Elliptic Curve Diffie-Hellman key pair. Next, it obtains a similar public ECDH key from the recipient and runs the ECDH protocol to produce a message-specific ephemeral session key which it uses to encrypt and authenticate the message. For more details, see our technical white paper.
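To make the recipe above concrete, here is a minimal sketch in Python of the general ephemeral-key pattern, assuming the pyca/cryptography package and using X25519 with HKDF and AES-GCM. It illustrates the pattern only; it is not Wickr’s implementation, and in particular it omits the signing of the ephemeral keys with long term keys.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Recipient pre-publishes an ephemeral ECDH public key (in a real protocol it
    # would be signed with the recipient's long term key so it can be authenticated).
    recipient_eph_priv = X25519PrivateKey.generate()
    recipient_eph_pub = recipient_eph_priv.public_key()

    # Sender: generate a fresh ephemeral key pair and run a non-interactive ECDH.
    sender_eph_priv = X25519PrivateKey.generate()
    shared_secret = sender_eph_priv.exchange(recipient_eph_pub)

    # Derive a message-specific session key and use it to encrypt the message.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"per-message key").derive(shared_secret)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"hello Bob", None)

    # Once the ciphertext is on the wire, discard all ephemeral material so a
    # later device compromise cannot recover this message.
    del sender_eph_priv, shared_secret, session_key

Note that in a garbage-collected language such as Python, del only drops references; a production implementation would explicitly zero key material in memory as part of deleting it.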

1.2. Backwards Secrecy

A closely related security notion, motivated by similar considerations about recovery from compromise in long term systems, is that of Backwards Secrecy. Somewhat confusingly, this notion is sometimes referred to as “Future Secrecy” or more clearly “Post-Compromise Security.”

The main difference to Forward Secrecy is that the two phases of the adversary are switched. In other words, the adversary could already know the long term secret keys of one (or both) participants while observing the targeted message exchange on the wire during the eavesdropping phase. Consequently, one asymmetry with forward secrecy is that in the Backwards case the adversary is restricted to being passive during this phase. (Though not ideal, the restriction is necessary, as an active adversary with full knowledge of Alice’s long term keys could effectively perform all operations Alice could, including taking part in a key exchange and decrypting any messages sent using the resulting session key.)

The Wickr protocol provides backwards secrecy for much the same reason it enjoys strong forward secrecy. Each new message is encrypted with a key derived by combining short-lived ephemeral keys of the sender and receiver. Before use, the recipient’s ephemeral key is authenticated (i.e. signed) using the recipient’s long term key. So backwards secrecy is provided by Wickr for all messages sent using an honestly generated recipient ephemeral key.

1.3. Ephemerality

Of course, when a device is compromised, some past sent and received messages could still be found, say in the application’s conversation history. This motivates a different defense mechanism, namely providing users with tools to manage the life cycle of the data they exchange using the system. A great example of what can go wrong in practice when such a mechanism is not provided (and instead the decision of when to delete data is left up to each user) is seen in the recent compromise of John Podesta’s emails during the 2016 US election cycle. Much of the most damaging data released by the attackers came from emails exchanged years earlier. As is often the case, there was actually little to no reason to continue storing those emails other than that it was the default and simplest option given the way modern email clients and services are designed.

With that in mind, one of the key security features provided by Wickr’s apps is an explicit system for limiting the lifetime of data sent and received, by default. While these techniques are by no means a bulletproof solution, they do hugely impact the lifespan of the content delivered through Wickr, limiting the window of opportunity for attackers. Specifically, Wickr allows users to:

I.      Set the burn-on-read time for messages (i.e. the amount of time a message lives after first being opened for reading)

II.     Set the expiration or time-to-live for any given message (i.e. the amount of time a message lives after it was sent, regardless of whether it was ever viewed); a sketch of this expiry logic appears after this list.

III.   Securely delete local messages on a device.

IV.   Remotely delete or recall messages already sent out to other devices (including those of other users).

V.    By default, Wickr apps only store received content (e.g. messages and files) in a local encrypted sandbox. Users can, of course, explicitly opt to export data from that sandbox to the OS for use by other apps after being shown a warning (on Android at the moment, iOS and others are coming soon) about the risks involved in such an action.

VI.   Wickr also provides users with a secure image viewer which allows viewing received pictures without ever having to export them outside of the sandbox. This helps prevent long term data leakage of sensitive content by other image viewers in the form of thumbnails and other temporary files.
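As a rough illustration of the burn-on-read and time-to-live rules in items I and II above, the following Python sketch (with hypothetical field names; it is not Wickr’s implementation) decides when a locally stored message should be securely deleted.

    import time
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StoredMessage:
        sent_at: float                 # epoch seconds at which the message was sent
        ttl_seconds: float             # expiration window, counted from sending time
        burn_seconds: Optional[float]  # burn-on-read window, if one was configured
        first_read_at: Optional[float] = None  # set the first time the message is opened

    def should_delete(msg: StoredMessage, now: Optional[float] = None) -> bool:
        """Return True once either the TTL or the burn-on-read timer has elapsed."""
        now = time.time() if now is None else now
        if now >= msg.sent_at + msg.ttl_seconds:
            return True  # expired regardless of whether it was ever viewed
        if msg.burn_seconds is not None and msg.first_read_at is not None:
            return now >= msg.first_read_at + msg.burn_seconds
        return False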

1.4. Post Quantum Cryptography

A topic of growing importance and attention in the domain of secure communication is the increasing likelihood that at some point in the not-so-distant future we will start facing attackers armed with quantum computers. In brief, the concern here is that powerful enough quantum computers can be used to efficiently solve both the integer factorization problem underlying RSA and the discrete log problem over arbitrary groups (e.g. the multiplicative groups of modular integers used by classic Diffie-Hellman, as well as the groups of rational points on an elliptic curve used by ECDH, ECDSA and EdDSA). In effect, this means that almost all currently employed public key cryptography (e.g. signature schemes and key exchange algorithms) would be vulnerable to such an attacker. Of course, a mitigating factor here is that such powerful quantum computers are likely still quite a few years off from becoming a reality.

However, that observation misses a key point when it comes to long term security of today’s communication. As remarked above, most secure messaging protocols (especially those with end-to-end encryption) rely on some form of public key cryptography, in particular for determining the key used to encrypt the message (or at least to encrypt that key). The problem with this is that if we use vulnerable algorithms today, then an attacker that records our encrypted traffic and eventually gains access to a sufficiently powerful quantum computer in the future can then go back and decrypt all the recorded traffic. In other words, the reality is that we are no better off now than in the days of PGP and S/MIME when it comes to quantum attacks. Put differently, the forward security of even the most advanced of today’s secure messaging protocols still relies on the hope that no quantum computer will ever be built in the future.

Crucially in this scenario though, if we want long term security for today’s communication, then the algorithms for agreeing on encryption keys today must already resist tomorrow’s quantum attacks. In other words, it is not enough to make sure that we have quantum-secure algorithms in time for the advent of quantum computers if protecting today’s communication in the future is important for one’s risk model. The bottom line is that for long term security we must already be introducing such cryptography today, and that is something we at Wickr are actively working to address. We will be sharing our progress on this soon.

2. Attacks on Login Credentials

Switching gears a little, let’s focus on what is perhaps the most effective and wide-spread attack on user accounts. Most systems that authenticate users via a login-password combination end up storing user credentials for later authentication in some sort of login database (DB). Naturally, such a DB is a high-value target for attackers. Compromising it can allow the attacker to perform offline dictionary attacks, an extremely effective method of gaining access to large numbers of accounts.*

In practice, these DBs are constantly leaking. For example, Slack’s DB was leaked in 2015, large parts of Yahoo’s DB were leaked at least three times in 2013 and 2014, and the messaging service HipChat’s DB was compromised as recently as this year.

A natural first reaction to these kinds of events is that we should be doing as much as possible to secure these DBs in the hopes of making such a compromise an impossibility. But if history tells us anything, it is that these attacks can still happen. Thus, an important lesson here is that we should also consider an adversarial model where the DB is actually compromised and its contents handed over to an adversary.

Considering this case motivates the use of several important tools and mechanisms for a more in-depth defensive posture. Here are several easy yet surprisingly effective steps that can help mitigate the risks.

I.      Whenever possible, other means of authenticating users beyond passwords (i.e. multi-factor authentication) should be used;

II.     Use password hashing algorithms specifically designed to be brute-force resistant such as scrypt or Argon2, and tune the security parameters of those algorithms to consume as much CPU and memory as you can afford on your login system (see the sketch after this list). In practice, using such functions correctly in place of say SHA256 (or worse, MD5) can slow down dictionary attacks by many orders of magnitude, greatly reducing the attack’s effectiveness. Even compared to bcrypt or PBKDF2, scrypt and Argon2 are a significant strengthening of your system as they force the attacker to use not only larger amounts of computation but also memory and memory bandwidth, which are comparatively expensive and slower than pure computation. In situations where side-channels are a concern (e.g. if passwords are being hashed by a virtual host in a cloud), you may want to consider Argon2id, a mode of operation hardened against side-channel attacks. At Wickr, we are currently using scrypt for password hashing purposes.

III.   Needless to say, all password hashes should also be salted to prevent an attacker from using precomputed rainbow tables. Moreover, a great technique for preventing offline dictionary attacks is to use “pepper,” namely a large random value unique to the system that is included in each of the hashes. Crucially, the pepper value should be stored somewhere outside of the DB (e.g. in a separate config file, environment variable, etc.).

IV.   Another simple and very elegant counter measure is the use of Honeywords. The idea is to embed false positives (i.e. fake passwords) into the password hashes in the DB. That way when an attacker runs the offline dictionary attack, they can’t tell if they have found the real or a fake password. If the attacker later tries to log in using a fake password, the authentication system can easily detect this and react accordingly.

V.    Mechanisms should be deployed to help users keep track of activity on their account so they can detect compromises. For example, the system should always notify users in real time when their account is accessed from a new device. All Wickr apps immediately generate a pop-up notifying users that a new device has been associated with their account.

VI.   Users should be able to remotely and immediately end any and all open sessions associated with their account. This is a crucial part of taking back control of a compromised account.

VII.  Account recovery and password resetting should be very carefully planned. At the end of the day, regardless of all other security mechanisms in place, a user’s account can only ever be as secure as the mechanism for resetting their password. In fact, for Wickr Messenger, we intentionally opted not to have any such mechanism in place as we did not want to rely on the security of, say, an email account or a phone number. Effectively, doing so would outsource a major part of account security to an external system outside our control. I’ll talk about a real world attack arising from just such a vulnerability in the next section on phone-based attacks. For the remaining Wickr products, account recovery is possible with the assistance of the network administrator. However, even then, new long term key material needs to be regenerated and re-authenticated by all contacts. Otherwise nothing would prevent a rogue network administrator from quietly impersonating users on their network.
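As referenced in item II, here is a minimal sketch of salted and peppered password hashing using Python’s built-in hashlib.scrypt. The parameters, the pepper handling and the names are illustrative assumptions, not a description of Wickr’s login system; the cost parameters should be tuned as high as the login infrastructure can afford.

    import hashlib
    import hmac
    import os
    from typing import Optional, Tuple

    # The pepper is a large random secret stored outside the login DB (e.g. in a
    # config file or environment variable). The variable name here is hypothetical.
    PEPPER = bytes.fromhex(os.environ.get("LOGIN_PEPPER_HEX", "00" * 32))

    def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
        """Return (salt, digest); n, r and p should be tuned to the login system."""
        salt = os.urandom(16) if salt is None else salt
        # Mix in the pepper before applying the memory-hard hash.
        peppered = hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()
        digest = hashlib.scrypt(peppered, salt=salt, n=2**14, r=8, p=1, dklen=32)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        _, digest = hash_password(password, salt)
        # Constant-time comparison avoids leaking how many bytes matched.
        return hmac.compare_digest(digest, expected)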

* I’m assuming that passwords are at least not being stored in clear text in the DB, but instead are salted and hashed as has long been industry best practice.

3. Phone-Based Attacks

As with all security tools, the designers of secure messaging systems are often faced with the conflicting goals of usability and security. When it comes to contact discovery and account recovery, many contemporary messaging platforms have opted to make heavy use of phone numbers, both for identifying accounts and as an authenticated private channel for use in account recovery. However, as demonstrated by the following real world attacks, in practice, these choices can have problematic consequences.

In 2015, reports emerged of an attack on the Telegram messaging service perpetrated by an Iranian threat actor dubbed Rocket Kitten. The attack consisted of abusing Telegram’s open API to enumerate all Iranian phone numbers associated with Telegram accounts. As acquiring a phone number almost always requires registering a real world identity, this effectively constitutes a complete de-anonymization attack on the entire Iranian user base.

A year later, the same threat actor hijacked several Iranian Telegram accounts by intercepting the SMS used in the account recovery process. In fact, such SMS interception attacks have become more common, being used, for example, to compromise Coinbase accounts and e-banking accounts, as well as to perpetrate attacks on political activists in Iran, Russia and the U.S.

I see at least two important consequences to draw from these events in terms of our threat models.

I. In any threat model for anonymous messaging, a phone number should be treated the same as a real world identity. This takes on special importance when designing communication tools for, say, political activists, journalists with sensitive contacts and sources, and whistleblowers trying to shield themselves from government reprisal. Forcing users to associate their accounts with a phone number can be very detrimental to users’ security in these cases. That is why in Wickr Messenger we have opted to identify accounts with nothing more than a fresh username selected by the user during account creation. To help contact discovery among users, Wickr Me supports associating accounts with a phone number, but strictly as an opt-in capability.

II. Phone numbers make for increasingly poor authenticated channels in practice. The reason stems from the fact that SS7, the phone networks’ backbone protocol, was designed with a similar threat model in mind as, say, TCP/IP or email, which is to say, basically none at all. This means that by directly colluding with, or simply compromising, any one of the world’s numerous telecom providers connected to the backbone, it becomes relatively easy to track phones and intercept SMS. This says nothing of attacks in the form of adversary-controlled base stations such as those used by law enforcement to perform phone intercepts (i.e. lawful intercept capability). For this reason, Wickr has chosen not to support account recovery via SMS or phone numbers in general.

4. Traffic Analysis

Traffic analysis is an often ignored but potentially devastating attack. We tend to think of encrypted traffic as providing us with strong privacy of the encrypted content. However, in practice, things can be very different (even assuming that a decent block cipher and mode of operation are being used). An early classic example of how eavesdropping on encrypted traffic can be used to recover sensitive information is the timing analysis of keystrokes in interactive SSH sessions. More recent examples include attacks allowing an eavesdropper to determine the type of traffic being encrypted (e.g., browsing, VoIP, file sharing, etc.). Other attacks focus specifically on encrypted text and VoIP protocols.

Take for example the work of Wright, Ballard, Monrose and Masson who developed a model for automatically determining which human language is being spoken in an encrypted VoIP call. This was then taken a step further to reconstruct the content of an encrypted VoIP call. Similarly, Coull and Dyer showed how to distinguish between a collection of common human languages in encrypted iMessage text chats.

There is a common theme to these and similar attacks. Understanding it can help improve our adversarial models to more accurately reflect real world threats and, in particular, to explain why these attacks are possible. Although the encryption of a message hides its content, in practice (often for reasons of conserving bandwidth) ciphertexts have little to no padding, and so they potentially reveal the length of the encrypted plaintext. Moreover, as common networks tend to perform relatively quickly and consistently (especially for real-time and high-priority channels), the attacker can potentially also learn quite fine-grained information about the exact time when a ciphertext was sent. Therefore, it is crucial that we do not oversimplify the eavesdropping adversarial model.

With this more rigorous eavesdropping model in mind, Wickr implements several defensive mechanisms aimed at eliminating information leakage via such side-channels.

Message Padding:

As demonstrated by the iMessage analysis discussed above, the lengths of text messages can already be used over time to violate privacy goals for encrypted communication. A key property leveraged there is the variation in the bit-lengths of the different character sets used for encoding different human languages. At Wickr, we use a single encoding (UTF-8) for all languages.

More importantly, we also make use of a step-function-based padding scheme to hide the length of plaintexts. That is, before encryption, we pad messages so that the length of the data actually being encrypted is the smallest positive multiple of a step size that is greater than or equal to the plaintext length. By adjusting the step size we can smoothly trade added bandwidth for greater privacy about message length.
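As an illustration, here is a small Python sketch of one possible step-function padding scheme, framing the plaintext with a 4-byte length prefix and then zero-padding up to the next multiple of the step size. The framing details and step size are assumptions for the example, not Wickr’s actual wire format.

    import struct

    STEP = 256  # step size in bytes; larger steps leak less but cost more bandwidth

    def pad(plaintext: bytes, step: int = STEP) -> bytes:
        # Frame with an explicit length, then pad to the smallest positive multiple
        # of `step` that fits the framed data, so only the padded size is observable.
        framed = struct.pack(">I", len(plaintext)) + plaintext
        target = -(-len(framed) // step) * step  # ceiling division, then scale up
        return framed + b"\x00" * (target - len(framed))

    def unpad(padded: bytes) -> bytes:
        (length,) = struct.unpack(">I", padded[:4])
        return padded[4:4 + length]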

Hardened VoIP:

For our VoIP protocol we make use of a constant bit rate (CBR) audio encoding scheme, as opposed to the far more common variable bit rate (VBR) codecs. The problem with VBR codecs, as observed in the VoIP attacks discussed above, is that when packets encrypting the encoded audio signal are generated and sent out many times per second, the ciphertext lengths start to leak a lot of information about how much data the VBR codec has produced to store individual sounds in any given word. Because some sounds used in human language require much less data than others, this can effectively leak a truly impressive amount of information about what is actually being said.

Constant Time Cryptography:

One of the more insidious ways that the timing information about ciphertexts can be abused is related to sensitive cryptographic operations. Most cryptographic operations run on sensitive data (such as key material or plaintexts). Moreover, the completion of such operations can often result in the (automatic) sending of a message. For example, if the operation fails, the message could consist of (an encryption of) an error message. Alternatively, successful completion of the cryptographic operation could result in the next message in a handshake being sent out.

Either way, to avoid information leakage in case traffic is being observed by an eavesdropper, we must ensure that the running time (e.g., the number of CPU cycles and delays due to memory access) of the cryptographic operation is independent of any sensitive data being computed on. For this reason, all of Wickr’s sensitive cryptographic operations are implemented using constant-time code.
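One small and common example of the principle, using Python for illustration, is the comparison of authentication tags: a naive equality check may return as soon as it hits the first mismatching byte, leaking the position of the mismatch through timing, whereas a constant-time comparison does not.

    import hmac

    def tag_is_valid(expected_tag: bytes, received_tag: bytes) -> bool:
        # hmac.compare_digest runs in time independent of where the inputs differ,
        # unlike `expected_tag == received_tag`, which may exit early on a mismatch.
        return hmac.compare_digest(expected_tag, received_tag)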

5. Multi-Party Messaging

In response to the recent proliferation of secure messaging platforms, cryptographers have now turned their attention to the security of group messaging protocols. So far, the results have been somewhat sobering, indicating that much work needs to be done to translate the security of 1:1 conversations to larger groups.

Part of the problem is that group messaging is significantly more complex than 1:1 messaging in at least two senses:

I.      Group conversations potentially require more complex cryptographic techniques and

II.     The multi-party nature of conversations introduces completely new security requirements with no analogue in the two-party case.

For example, the natural approach of applying the two-party key agreement used in forward-secure 1:1 messaging directly to the multi-party setting is simply not practical (see the next section for more details). We need new techniques for using two-party key agreement to build multi-party forward security. More importantly, securing group conversations requires some new security properties which simply have no analogue in the two-party case, and which are only recently even being defined and formally analyzed.

5.1. More is Less

In their recent work “More is Less: How Group Chats Weaken the Security of Instant Messengers Signal, WhatsApp, and Threema”, Rösler, Mainka and Schwenk set out to define some of the security goals and investigate three common secure messaging protocols. Roughly speaking, the goals applicable to both two-party and multi-party conversations laid out in that work can be summarized as follows:

A.    End-to-end secrecy of messages;

B.    Strong forward and backwards secrecy;

C.    Message authenticity (integrity);

D.    Traceable message delivery: senders only accept messages as being delivered after all recipients have received the message;

E.    No replay attacks.

In the multi-party setting, a conversation is equipped with an implicit group of participants taking part in the group chat. The security goals considered for the multi-party case can be summarized as follows:

F.     Only group members can send messages to the conversation;

G.    Only authorized users may alter the set of group members;

H.    FIFO ordering: all messages sent by a user are delivered to other participants in the order they were sent;

I.      Total ordering: all recipients of a pair of messages agree on their ordering.

Upon analyzing the protocols of Signal, WhatsApp and Threema with respect to the above notions, Rösler, Mainka and Schwenk discovered several weaknesses in the current versions of those protocols, which can be summarized as follows:

•   In Signal, under various conditions, properties G and E can be violated. As a consequence of applying a 1:1 scheme to group chats, properties A and B can be violated, since being added to a group (illegally) means all future messages to that group can be read.

•   A powerful network attacker or one that has compromised the Signal server can also violate property D;

•   Signal provides neither properties H nor I;

•   In WhatsApp, various conditions allow an attacker to add an arbitrary user to a group chat. This violates property G which then allows for violating A and B;

•   WhatsApp also allows for violating D;

•   For group chats, WhatsApp has no backwards secrecy;

•   A malicious server in WhatsApp can violate property H;

•   Replay attacks are possible for Threema;

•   Threema has no forward or backwards secrecy;

•   Neither property H nor I holds for Threema;

•   Members can be tricked into holding an inconsistent view of the group membership in a room.

5.2. New Security Goals for Groups

Given all of the above issues, the bottom line is that we need a better understanding both of what our goals are when designing secure group communication tools and, of course, of how to actually build protocols that achieve those goals. Rösler, Mainka and Schwenk already provided some pointers for what our security goals could look like with their list of (somewhat informal) security goals.

Another recent work on this subject, by Katriel Cohn-Gordon, Cas Cremers, Luke Garratt, Jon Millican and Kevin Milner, is “On Ends-to-Ends Encryption: Asynchronous Group Messaging with Strong Security Guarantees”. It lays out several security goals for the group setting, constructs a new type of key agreement protocol and shows that it satisfies their security goals.

At Wickr, we will soon be announcing the results of our own investigations into the concrete assumptions made and the formal security properties achieved by our two-party and multi-party protocols. The initial focus has been on the two-party case, but the security goals we target for the soon-to-be released analysis are specifically designed to allow for cleanly abstracting out what is provided by our two-party communication protocol. Specifically, in contrast to listing independent security properties as done in previous works, our approach relies on defining an idealized secure communication channel which is effectively being emulated by the Wickr protocol.

That way, any further application built on top of the two-party Wickr protocol (such as Wickr’s group chat protocol) can be analyzed directly in the setting where the application uses the idealized secure channel. This approach has at least two benefits:

I.   On the one hand, it can significantly facilitate the analysis of an application’s security.

II.  On the other hand, it also helps us build a thorough understanding of what security is being provided in the two-party case, as what the idealized secure channel captures goes a long way towards completely characterizing the protocol’s properties. For example, any particular property we may try to define (such as FIFO ordering, total ordering, resistance to replay attacks, etc.) is already captured by the description of the idealized channel. Thus, a single definition (and security proof) can potentially capture a whole collection of useful properties, including ones we may not have even articulated explicitly yet.

The most important application we will consider is, of course, group communication. We believe that taking the same real/ideal definitional approach to Wickr’s group protocols will also lead to a more thorough understanding of their strengths and weaknesses.

6. Exploiting Insecure Design Choices

Stronger security is often inherently in conflict with better usability. This goes for secure messaging too. As such, it is important to carefully weigh the design choices of our desired feature sets and application behavior against the potential vulnerabilities the behavior can create.

Recent events demonstrate what problems can arise when making such choices. Recall the attack described earlier in which an attacker was able to enumerate all Iranian phone numbers associated with a Telegram account. The attack was perpetrated using nothing more than access to Telegram’s public API. In other words, it exploited a functionality that was intentionally provided to the public.

More recently, at the beginning of this year, a vigorous public debate emerged when it was revealed that WhatsApp allowed for silent remote rekeying of a contact’s key material by default. That is, if Alice’s app received a message announcing new public key material for her friend Bob, her app would (by default) automatically accept the material and allow the continued communication between Alice and Bob (or really, whoever the holder of the new keys is) with no further interaction or confirmation needed from Alice.

In terms of usability, this makes events like Bob setting up his WhatsApp account on a new phone (or even updating the OS on an existing phone) a much less intrusive process for Alice. However, the downside in terms of security is immense: it effectively destroys any guarantees of authenticity, integrity or really any other meaningful security if an attacker can quietly replace any contact’s public key material with their own. Hence, the public uproar. It is important to point out that as a consequence of the public reaction, WhatsApp has since altered its default behavior to now display a notification to Alice when one of her contacts’ key material is changed.

Another relatively recent criticism of weakness-by-design was raised by Matthew Green about Apple’s iMessage system. The main concern here is that Apple takes control of the distribution of all long-term key material used to establish secure connections between iMessage users. When there is no attacker involved, this makes for a very efficient and effective method of contact discovery and key distribution. However, the iMessage apps do not give users any means to distribute keys through other (authenticated) channels, nor even to verify that the keys they were given by Apple’s key server actually match the keys in use by their supposed owners (e.g. by comparing a key fingerprint). In effect, this means that the security of the system from the perspective of users depends completely on the essentially unverifiable assumption that the Apple key server is behaving honestly. (In other words, the assumption is that the Apple key server is in fact giving you the correct keys for your contacts and that it is not quietly distributing additional keys to other users requesting your keys. The latter could theoretically be used by Apple to build a transparent eavesdropping mechanism.)

What these scenarios indicate is that it is useful to design the features and functionality of secure messaging products with adversarial users and infrastructure in mind. Adversarial users can abuse APIs with no rate limiting to extract vast volumes of information. Adversarial infrastructure (e.g., compromised key servers or message delivery servers) is particularly important to consider as it presents a huge central target to an attacker, since any capabilities gained by its compromise can be applied to the entire network of users.

6.1. Security by Design at Wickr

At Wickr, we are mindful of these types of attacks. In fact, we have often opted for a rather conservative and extremely cautious approach when it comes to these types of trade-offs, erring on the side of security. For example, as touched on earlier, Wickr Messenger accounts by default are not associated with any external information such as an email or phone number, despite the obvious benefits to user growth via automated contact discovery. Similar considerations about privacy vs. contact discovery led us to not upload users’ local address books to our servers upon registration of a new device unless explicitly requested to do so by the user.

However, the much more profound impact such careful consideration of these tradeoffs has had on our approach at Wickr is the (somewhat unique) decision to provide organizations with their own Wickr networks. Crucially, this way the administrators of each network can make the appropriate usability vs. security choices as fits the policies, goals and needs of their organization. Wickr’s goal is to provide those administrators with a range of reasonable options in the spectrum of trade-offs.

Of course, the usefulness of any messaging service (secure or otherwise) also depends on how wide of a user base one can communicate with through it. This is why our Wickr networks can federate with each other, if allowed by each network’s administrators, to form a global interoperable network of Wickr users.

7. Legal Threats

For a holistic approach to security, it is important to consider the legal environment in which the platform operates. This is true for the legal environments governing both the operations of service providers (building and running the software and infrastructure) and the use of these platforms by the end users. As we develop secure messaging platforms, considering existing and potentially forthcoming legal limitations is key to building more resilient systems.

Legal threats may come in various forms:

•   Particularly harmful ones may come in the shape of government-mandated backdoors such as Russia’s recent law requiring such backdoors in all encryption products;

•   Others come in the form of more absolute laws outright banning encrypted communication tools. In practice, such laws are actually surprisingly widespread. Examples of countries with past or present bans on encrypted platforms include Brazil, China, Saudi Arabia and Pakistan, to name just a few.

•   A third type of legal threat with severe consequences for messaging security consists of laws that mandate the collection of metadata: who is talking to whom, from which IPs, at what time, for how long, and how often. These laws are already widespread within the telecom industry and, to a slightly lesser extent, for ISPs. Recently, several countries have started discussing extending metadata collection mandates to cover messaging and social media platforms, including some in the EU.

7.1. Prepare and Communicate

In contrast to other threats discussed above, legal threats are not technical in nature. As such, the defenses are not all technical in nature either. The most important step is to prepare for these eventualities in advance by developing a well-considered policy for how your organization will respond when faced with each type of legal threat to the security of your platform. Once adopted, it is crucial to communicate your policy to users well in advance so as to ensure they are aware of how the platform works today and what changes may be coming tomorrow that could impact their use of the tool.

Transparency helps to avoid building a false sense of security in users. It also allows them to adjust their usage of the secure messaging platform accordingly. At Wickr, we have made it a priority to provide as much information and transparency as possible about how our platform works, what information we do and do not have, and how specifically we respond to government requests for user content. We believe that to provide reliable security for our users it is important to be predictable with regards to law enforcement, our privacy protection practices and the security promises we make to our users.

As an example of how a lack of predictability and communication can lead to exposing users to risks (without their knowledge), consider the following recent case. In 2016 the Egyptian government banned the use of Signal. The platform was soon updated to add circumvention technology so it could continue working transparently (using domain fronting in collaboration with Google). Although well intended, this change was rolled out without an explicit explanation to the affected users. The upshot: overnight, many users in Egypt were unaware that their previously legal application had become illegal to use. Given the political and security climate in Egypt at the time, it is hard to overstate the severity of the danger of being caught with illegal encrypted communication tools.

To be clear, the lesson here is not that circumvention technology must be avoided, but rather that it is absolutely crucial to vigorously communicate what is provided by a security tool including, where possible, the legal risks and consequences of using such a tool.

In addition to formulating strong policies and over-communicating them to users, there are also technical defenses that might prove useful under certain circumstances.

In the case of mandated metadata collection, it is common for the proposed laws to remain opportunistic, in the sense that they do not require providers to fundamentally alter the underlying technology in order to enable metadata collection. For secure messaging platforms, this means a private-by-design approach can help limit exposure: metadata that is simply never recorded or available to the platform cannot be handed over. An even stronger guarantee is provided if the messaging protocol is designed to route communication directly between users, bypassing the platform infrastructure whenever possible and enabling true peer-to-peer connections between users.

At Wickr, we offer customers the opportunity to choose the deployment model suitable for their security and compliance needs, including an option to run the infrastructure for their Wickr network on their own premises. With this type of deployment, Wickr has no visibility into metadata or other information about the traffic on such self-hosted networks. Critically, this is true even when clients are not able to form peer-to-peer connections with each other.


Bibliography

Chris Howell, Tom Leavy & Joël Alwen: Wickr White Paper on Wickr’s Messaging Protocol.

Greg Kumparak: Slack Got Hacked. Retrieved Oct. 17, 2017.

Jonathan Stempel, Jim Finkle: Yahoo says all three billion accounts hacked in 2013 data theft.

Thomas Fox-Brewster: Yahoo Admits Staff Discovered 500M Hack In 2014, Two Years Before Disclosure.

Shaun Nichols: HipChat SlipChat lets hackers RipChat.

Udi Manber. A simple scheme to make passwords based on one-way functions much harder to crack. Computers & Security, 15(2):171–176, 1996.

Honeywords Project: https://people.csail.mit.edu/rivest/honeywords/

Joseph Menn, Yeganeh Torbati: Hackers accessed Telegram messaging accounts in Iran.

Claudio Guarnieri, Collin Anderson: Iran and the Soft War for Internet Dominance. Appearing at Black Hat USA 2017.

Russell Brandom, This is why you shouldn’t use texts for two-factor authentication: Researchers show how to hijack a text message.

Dan Goodin, Thieves drain 2fa-protected bank accounts by abusing SS7 routing protocol: The same weakness could be used to eavesdrop on calls and track users’ locations.

Andy Greenberg, So hey you should stop using texts for two-factor authentication.

Charles V. Wright, Lucas Ballard, Fabian Monrose, Gerald M. Masson: Language Identification of Encrypted VoIP Traffic: Alejandra y Roberto or Alice and Bob? USENIX Security Symposium 2007

Charles V. Wright, Lucas Ballard, Scott E. Coull, Fabian Monrose, Gerald M. Masson: Uncovering Spoken Phrases in Encrypted Voice over IP Conversations. ACM Trans. Inf. Syst. Secur. 13(4): 35:1-35:30 (2010)

Scott E. Coull, Kevin P. Dyer: Privacy Failures in Encrypted Messaging Services: Apple iMessage and Beyond. IACR Cryptology ePrint Archive 2014: 168 (2014)

Threema Cryptography Whitepaper. https://threema.ch/press-files/2_documentation/cryptography_whitepaper.pdf

Nadim Kobeissi: Remarks on Viber Message Encryption. https://nadim.computer/2016/05/04/viber-encryption.html

Paul Rösler and Christian Mainka and Jörg Schwenk. More is Less: How Group Chats Weaken the Security of Instant Messengers Signal, WhatsApp, and Threema. IACR Cryptology ePrint Archive 2017: 713 (2017)

Katriel Cohn-Gordon and Cas Cremers and Luke Garratt and Jon Millican and Kevin Milner: On Ends-to-Ends Encryption: Asynchronous Group Messaging with Strong Security Guarantees. IACR Cryptology ePrint Archive 2017: 666 (2017)

Matthew Green: Let’s talk about iMessage (again). https://blog.cryptographyengineering.com/2015/09/09/lets-talk-about-imessage-again/

Dawn Xiaodong Song, David A. Wagner, Xuqing Tian: Timing Analysis of Keystrokes and Timing Attacks on SSH. USENIX Security Symposium 2001

Putin gives federal security agents two weeks to produce ‘encryption keys’ for the Internet. Posted Jul. 7, 2016.

Kate Conger: WhatsApp blocked in Brazil again. Posted Jul. 19, 2016.

Keith Bradsher: China Blocks WhatsApp, Broadening Online Censorship. Posted on Sept. 25, 2017.

Sebastian Usher: Saudi Arabia blocks Viber messaging service. Posted June 6, 2013.

Mehreen Zahra-Malik: Pakistan province orders halt to Skype over security concerns. Posted Oct. 7, 2013.

Erich Möchel: Vorratsspeicherung für Facebook-Daten. Posted Oct. 30, 2012.

Mariella Moon: Egypt has blocked encrypted messaging app Signal. Posted Dec. 20, 2016.

Andy Greenberg: Encryption App ‘Signal’ Fights Censorship With a Clever Workaround. Posted Dec. 21, 2016.

Wickr’s privacy statement and policies: https://wickr.com/privacy

Wickr’s Transparency Reports: https://wickr.com/information-request-reporting-report