Metadata-Private Communication for the 99%

Sketching the underlying system needed to facilitate metadata-private communication for several applications with a large user base.

Online privacy is a prominent topic in the news and receives growing attention from the public. This has motivated messaging services such as WhatsApp, Signal, and Telegram to deploy end-to-end encryption, which hides the content of messages from anyone listening in on the communication. While encryption is widely deployed, it does not hide metadata: anyone capable of tapping the network links can learn who is communicating with whom and at what times, and study their traffic volumes. Metadata reveals a lot about the underlying content. Public announcements by ex-government officials, as well as the leaked Snowden documents, have made it clear that intelligence organizations have a substantial interest in metadata even for encrypted communication, since it often obviates the need for the actual content.13,24,28

Key Insights

  • The size of a user base is crucial for online privacy. If a metadata-private system has only a handful of users, then installing the system’s client might raise suspicion. Therefore, to support a large user base and attract many users, metadata-private systems should be designed to be scalable and performant.
  • Perfect security and privacy guarantees make metadata-private systems challenging to scale. It is possible to make trade-offs, providing weaker guarantees with a more scalable and performant system.
  • The recent progress in metadata-private communication systems is impressive, yet many challenges still exist, including supporting mobile devices and alleviating CPU bottlenecks.

The most popular system for hiding metadata—named Tor7—routes user traffic through a series of relay servers, as illustrated in Figure 1. In this fashion, the first relay only sees traffic to and from Alice, but it does not observe the other end of the conversation. Similarly, the last relay sees Bob’s traffic, but does not observe the user at the other end. So even if just one of the relays is honest, that is, keeps secret which incoming message maps to which outgoing message, the connection between Alice and Bob remains hidden. One of the key reasons for Tor’s popularity is its performance, which can support mobile and desktop users and a variety of applications, such as messaging, Web surfing, and VoIP calls. However, Tor is vulnerable to attackers that can observe traffic going in and out of the honest relays. By tapping relays, attackers can correlate messages that a relay receives to those that it sends, and follow a message from its source to its destination. In fact, it is sufficient to tap the first and last Tor relays to correlate traffic and break its privacy guarantees, as shown in Figure 2. Indeed, the Snowden documents revealed that the U.S.’s National Security Agency (NSA) and the U.K.’s Government Communications Headquarters (GCHQ) attempted to break the privacy provided by Tor in this manner (as well as by “contributing” their relays to the system).24

f1.jpg
Figure 1. Alice sends a message to Bob through Tor. The message routes through three relays to hide its metadata.
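
To make the layering concrete, consider the following minimal onion-encryption sketch in Python (an illustration of the principle using the PyNaCl library; the three-relay structure mirrors Figure 1, but real Tor also embeds next-hop addresses in each layer and builds circuits interactively):

```python
# Minimal onion-encryption sketch (illustrative; not Tor's real protocol,
# which also embeds next-hop addresses inside each layer).
# Requires PyNaCl: pip install pynacl
from nacl.public import PrivateKey, SealedBox

relays = [PrivateKey.generate() for _ in range(3)]  # one keypair per relay

def wrap(message: bytes, hops) -> bytes:
    """Encrypt in layers, last hop innermost, so each relay can remove
    exactly one layer and learns nothing beyond the next hop."""
    onion = message
    for hop in reversed(hops):
        onion = SealedBox(hop.public_key).encrypt(onion)
    return onion

onion = wrap(b"hello Bob", relays)
for relay in relays:                      # each relay peels its own layer
    onion = SealedBox(relay).decrypt(onion)
print(onion)                              # b'hello Bob', only after hop 3
```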

f2.jpg
Figure 2. Eavesdropping on the communication links allows identifying which pairs of users communicate through Tor by correlating their traffic patterns.

Recent research on systems that hide metadata focuses on dealing with a global adversary with the ability to monitor and inject network traffic on any link, comparable to nation-state organizations like the NSA or GCHQ. For example, various recent systems hide metadata for point-to-point messaging,1,19,27,35,36 and others facilitate anonymous postings on public bulletin boards3,18 in the presence of such an adversary.

Hiding metadata is complicated and comes at a price. Metadata includes crucial information for functionality. As one notable example, the source and destination addresses, which identify the communication endpoints, are included in every IP packet and are fundamental for establishing communication over the Internet. Beyond explicit information recorded in network packets, a metadata-private communication system also needs to obscure traffic correlations, such as the relation between the time at which messages were sent and the time they were delivered. If the attacker sees that Bob starts receiving messages when Alice starts sending messages and stops receiving messages when she stops sending them, then he can deduce that they are communicating even if the source and destination addresses in messages are hidden. Worse yet, the attacker might control Alice’s link to the Internet, which allows him to perturb her connection to trigger such events. For example, the attacker may drop some of Alice’s messages and observe correlated reductions in throughput at Bob’s end.11,21
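
As a toy illustration of such correlations, even a plain correlation coefficient over per-interval message counts suffices to link endpoints (hypothetical traffic counts; a demonstration of the principle, not an actual attack tool):

```python
# Toy traffic-correlation sketch: per-second message counts observed on
# network links are enough to link endpoints (hypothetical numbers).
from statistics import correlation  # Python 3.10+

alice_sends    = [4, 0, 7, 3, 0, 9, 2, 0, 5, 1]  # seen at Alice's uplink
bob_receives   = [4, 0, 6, 3, 0, 9, 2, 1, 5, 1]  # closely tracks Alice
carol_receives = [3, 2, 4, 1, 5, 3, 2, 4, 1, 5]  # unrelated user

for name, series in [("Bob", bob_receives), ("Carol", carol_receives)]:
    print(name, round(correlation(alice_sends, series), 2))
# Bob correlates strongly with Alice (~0.99); Carol only weakly (~-0.21).
```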

As a result, many of the connections through Tor are vulnerable to nation-state adversaries,26 and the substantial interest of the intelligence community in metadata13,28 means these attacks are a real problem in practice.

To hide metadata, the communication system needs to change the fundamentals that facilitate efficient communication over the Internet, such as packet routing and timely message delivery. The system might also need to constantly send messages, to hide when the user is actually communicating. These changes lead to significant challenges in designing practical metadata-hiding communication systems, and result in substantial overhead over the non-metadata-hiding “vanilla” counterparts. This overhead renders some applications, such as private voice and video chats, and some less-powerful devices, like mobile phones, unusable with the current state of the art.

This article explores the challenges ahead in hiding metadata to facilitate private communication through common types of applications; that is, allowing two users who know one another to communicate without exposing that they are doing so. Hiding communication metadata can also facilitate anonymous communication, where users access a network service without revealing information about their identity. Some services refuse or throttle connections from anonymous users (for example, those connected through Tor),17,29 which discourages adoption. The discussion here focuses on the simpler problem of facilitating private communication at large scale between users who want to communicate with one another. Informally, we ask: is it possible to hide metadata for popular types of applications for user-to-user communication? Some compromises in security and privacy must be made to support large-scale latency-sensitive applications. We motivate why, despite these compromises, supporting such applications with substantial privacy and security guarantees may be possible even with a large user base.

Dealing with Nation-State Adversaries

Designing systems that hide metadata in the face of resourceful organizations like nation-states requires dealing with several attack vectors. A nation-state might monitor traffic on network links, not just within its territory but also at a global scale. It might also corrupt a substantial fraction of the system’s servers in targeted attacks, to read messages encrypted for these servers and find correlations between the messages they relay. Moreover, attacks are not confined to technical means. Nation-states might compel service providers to cooperate with them. For example, the NSA’s PRISM program coerced public companies to disclose data that was not otherwise observable by the NSA.12

To understand where the performance penalties in dealing with these challenges stem from, we explain in more detail the attacks that metadata-private communication systems need to resist and the defenses that these systems typically deploy.

Aside from the attacks listed here, which aim to break privacy, nation-states may also target the availability of a metadata-hiding service. They may block the service to stop users from communicating privately, or even detain users who install the system’s clients. We discuss why the key to dealing with such problems may lie in achieving good performance.

Traffic monitoring. IP addresses, service ports, message sizes, and their transmission/receipt times are all directly observable by an attacker monitoring the traffic links. These observations reveal information about the underlying conversation, as discussed earlier. Traffic-monitoring attacks are also often tough to detect since the attacker can remain passive, that is, avoid interfering with the actual communication.

Internet routing increases the attack surface. Messages travel on the Internet through many hops and via several autonomous systems (independently administered organizations). It is sufficient for an attacker to tap any of these links, or coerce any of the transit autonomous systems, to observe communication metadata. Internet routing gives end users very little control over the routes their messages take. They can choose their Internet service provider, but that provider selects the next hop (and that hop selects the next one), so forcing messages to avoid routes through particular organizations is difficult. Worse yet, Internet routing is asymmetric, so messages traveling from Alice to Bob are likely to go over different links than those from Bob to Alice. Most protocols rely on bi-directional communication, which essentially doubles the attack surface,31 as illustrated in Figure 3. In particular, any protocol that relies on the Internet’s transmission control protocol (TCP) implicitly requires recipients to acknowledge every message they get, which makes 40% of connections through Tor vulnerable to nation-states that passively tap the network links in their territory.26

f3.jpg
Figure 3. The attacker can only observe traffic going through autonomous system C, which allows him to monitor traffic from Bob to Alice and learn they are communicating.

The underlying protocols that facilitate communication over the Internet were not designed to be secure. As a result, even if traffic would not typically route through an attacker-controlled organization, changing the traffic’s path is usually feasible with minimal effort. Mounting a prefix hijack only requires administering one autonomous system and announcing someone else’s address space. The false announcement manipulates the Internet’s routing protocol into delivering to the attacker messages intended for the (hijacked) address space. In fact, Internet routing is so fragile that it is difficult to distinguish between common configuration mistakes and deliberate attacks, providing attackers a reasonable excuse even if they are detected. In 2013, an ISP in Belarus hijacked Internet traffic intended for Iceland.4 In a similar, but believed to be accidental, incident in 2014, Indosat hijacked traffic for 44 Tor relays (as well as other destinations).40

Dealing with traffic monitoring. To mitigate traffic-monitoring attacks, it is crucial to ensure traffic patterns appear the same no matter which users communicate and regardless of the time, length, and content of their conversation. To this end, many systems that hide metadata route fixed-size messages through a relay server, which unlinks input messages from output messages.

The relay server shuffles the output messages’ transmission order. Since the shuffle permutation is secret, the adversary cannot map a message output from the relay back to the corresponding input. (Hop-to-hop encryption between the users and the relay unlinks the contents of the relay’s inputs from its outputs.) In this manner, directly observable fields in the message, like source and destination addresses, do not reveal information about the user-to-user communication metadata. To prevent timing correlations, many systems are synchronous: they operate in rounds, where at the beginning of the round each user submits a message, and at the end of the round the system delivers the message to its destination. Synchronicity mitigates timing attacks since all message exchanges happen on round boundaries.
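
A minimal sketch of one such relay round appears below (illustrative only; the hop-decryption step is passed in as a function, and real systems add authentication and, often, zero-knowledge correctness proofs):

```python
# One synchronous mix round: batch fixed-size inputs, strip this relay's
# encryption layer, and emit them in a secretly shuffled order.
import secrets

def mix_round(inputs, strip_layer):
    """inputs: equal-size ciphertexts received this round.
    strip_layer: removes this relay's hop encryption (for example,
    SealedBox(relay_key).decrypt). Returns outputs whose order is
    unlinkable to the input order."""
    assert all(len(m) == len(inputs[0]) for m in inputs)
    outputs = [strip_layer(m) for m in inputs]
    # Fisher-Yates shuffle driven by a cryptographic RNG; the permutation
    # is the relay's secret and is never stored.
    for i in range(len(outputs) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        outputs[i], outputs[j] = outputs[j], outputs[i]
    return outputs

# Demo with an identity "decryption" for brevity:
print(mix_round([b"aaaaaaaa", b"bbbbbbbb", b"cccccccc"], lambda m: m))
```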

Routing through a relay often forces messages through unduly long routes, and synchronicity implies that a relay cannot forward even a single message before it is done processing the entire batch of messages for a particular round. Another unfortunate consequence of synchronous designs is that they require all users to keep submitting messages to the system at every round; otherwise, it becomes apparent when users are involved in a conversation. Short rounds would facilitate low-latency communication, but require everyone to send messages all of the time. Having clients constantly send messages increases the load on the system and the cost for the clients. It implies that operating a client has high network and energy costs that make existing synchronous solutions prohibitively expensive to run on mobile devices, which are limited by data plans and battery life. Support for mobile devices in a synchronous system remains a largely unsolved challenge, which we discuss later.
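
To see where the client-side cost comes from, here is a sketch of such a constant-rate client loop (illustrative; submit_to_mixnet is a hypothetical placeholder for encrypting and sending to the first relay):

```python
# Synchronous client sketch: exactly one fixed-size submission per round,
# real or dummy, so observers cannot tell when the user is talking.
import os
import queue
import time

ROUND_SECONDS = 7          # illustrative round length
MSG_SIZE = 256             # all submissions are padded to this size
outbox: "queue.Queue[bytes]" = queue.Queue()

def submit_to_mixnet(payload: bytes) -> None:
    pass  # placeholder: would encrypt and send to the first relay

def next_submission() -> bytes:
    try:
        return outbox.get_nowait().ljust(MSG_SIZE, b"\0")  # real message
    except queue.Empty:
        # Random bytes stand in for a dummy; after hop encryption, real
        # and dummy submissions look the same on the wire.
        return os.urandom(MSG_SIZE)

while True:                # runs every round even when the user is idle,
    submit_to_mixnet(next_submission())   # hence the battery and data cost
    time.sleep(ROUND_SECONDS)
```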

Indeed, asynchronous systems7,27 are more performant and better accommodate latency-sensitive applications and mobile clients, but may suffer from statistical attacks that correlate users’ traffic patterns.

Corrupt servers and colluding operators. Attackers may compromise the system’s servers. Moreover, nation-states might be in a position to coerce a server operator to cooperate with them.12 Such attacks would expose the server’s secrets to the attacker. In the typical relay-based operation sketched here, the server’s secrets would allow the attacker to learn the message-shuffle permutation and map messages sent from the relay server back to its inputs, thereby unveiling the source and destination of each message. One standard approach for resisting malicious servers is to route each message through several relays, each administered by a different organization, as shown for Tor in Figure 1.

A typical design goal is to guarantee that if any of these servers are honest, metadata remains hidden. Of course, processing a message by multiple servers induces latency, adding to the challenge of supporting latency-sensitive applications in practice. Notably, it is possible to avoid trusting any of the system’s servers using sophisticated cryptographic constructs such as private information retrieval, which obviates routing messages through multiple relays.

Service blocking. An attacker might give up on breaking the system’s privacy guarantees, and instead block connections to the system altogether or detain its users. In particular, there is evidence that some governments fingerprint and block connections to Tor.2,38 One approach for protecting against fingerprinting is to disguise communication as other popular services, which are already perceived as legitimate.23,33,37 However, disguising the service requires keeping its server addresses hidden to avoid blacklisting (see the discussion in Moghaddam et al.23). The approach taken by the Tor project to protect against blacklisting is to secretly distribute ephemeral addresses of “bridge” nodes, relay servers to which users directly connect to access Tor. Bridges ultimately get discovered and blocked, creating a cat-and-mouse game between Tor’s operators and some governments.

A perhaps more concerning problem is that attackers might consider running the system’s client to be suspicious in its own right. It seems that the most effective way to combat this concern is to make the metadata-hiding service so popular that using it does not raise suspicion (an argument originally made by Tor’s designers6). Making a privacy-preserving service appeal to a broad audience of users, who often do not worry much about their privacy, requires achieving performance comparable to the available non-metadata-hiding alternatives. Achieving good performance under the design constraints imposed by the privacy and security requirements is difficult.

Online Privacy at Scale

A large user base is crucial for privacy, so metadata-hiding communication systems must be designed to scale. One standard approach to scaling a system, in general, is to design it such that the more servers are available, the more users it can support. In the context of metadata-hiding systems, which typically rely on volunteer organizations to contribute and operate servers, we would like the performance to improve as the number of those organizations grows. This property is known as horizontal scaling (in contrast to vertical scaling, which requires every provider to contribute beefier machines and more bandwidth). Recent research shows a fundamental trade-off:5 to achieve low-latency communication without compromising on privacy, the system must increase its bandwidth proportionally to the number of users. Horizontal scaling should therefore be treated as a first-class design principle, as it keeps the load on each contributing organization moderate even when the user base grows large. Through horizontal scaling, Tor can serve millions of concurrent users. It distributes the client load across more than 6,000 servers and reaches acceptable latency to support a variety of Internet applications.

The research community’s efforts to build metadata-private systems that can resist nation-state adversaries have so far resulted in systems that provide medium to high latency, ranging from several seconds to hours. Three recent examples, Atom, Stadium, and Karaoke,18,19,35 have shown how to design such communication systems that scale horizontally. Notably, with 100 organizations contributing servers, Karaoke can exchange messages between two million users every seven seconds—over 20 billion messages per day.19 Exchanging this many messages per day falls in the ballpark of popular (encrypted, but not metadata-hiding) messaging applications such as WhatsApp and Telegram, which reported delivering 55 billion and 15 billion messages per day, respectively.32,34 However, relatively high latency has limited the applications for metadata-private systems. For example, it may be acceptable for an email message to reach the recipient’s mailbox with a latency of a few minutes, but messaging applications are expected to deliver within seconds, and voice and video chats require sub-second latency for an adequate user experience, which is key to broad adoption. This limitation is unfortunate since latency-sensitive communication mediums, such as VoIP calls and video conferencing, are popular. For example, in 2013, Microsoft reported two billion minutes of Skype calls every day.22

The primary challenge in designing communication systems that hide metadata is achieving a combination of three crucial goals: providing strong privacy guarantees, resisting attacks from nation-state adversaries, and attaining sufficient performance to support a target higher-level application (such as email, text messaging, or voice chats) with an adequate user experience.

The cost of perfect guarantees. Targeting the best privacy guarantees under the most challenging trust assumptions leads to performance problems; specifically, it challenges the design’s ability to scale to a large user base.

Privacy. It is possible to design systems that avoid leaking any information at the cost of excessive communication or computation. For example, a system where each user sends every message to all others can resist passive and active attacks.3 Cryptographic techniques like DC-nets39 and private information retrieval1 keep the client’s communication cost to a small constant, but introduce computational overheads that are quadratic in the number of users. These designs1,3,39 provide the strongest form of privacy that users could hope for, but due to the overhead of these schemes, performance suffers severely as the user base grows.
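
To see where the quadratic cost comes from, consider a minimal DC-net round (a toy version of Chaum’s dining-cryptographers protocol, on which Dissent39 builds; real systems add slot scheduling and disruption resistance):

```python
# Toy DC-net round: every pair of participants consumes a shared pad,
# so N users need N*(N-1)/2 fresh pairwise secrets per round.
import secrets

N, MSG_SIZE = 4, 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Pairwise shared pads: pads[i][j] == pads[j][i].
pads = [[b""] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        pads[i][j] = pads[j][i] = secrets.token_bytes(MSG_SIZE)

def broadcast(i, message=None):
    """Each participant publishes the XOR of its pads, folding in its
    message if it is this round's (single) sender."""
    out = message or bytes(MSG_SIZE)
    for j in range(N):
        if j != i:
            out = xor(out, pads[i][j])
    return out

# Participant 2 speaks; every pad appears twice, so they all cancel.
slots = [broadcast(i, b"hi from nobody!!" if i == 2 else None)
         for i in range(N)]
result = bytes(MSG_SIZE)
for s in slots:
    result = xor(result, s)
print(result)  # b'hi from nobody!!' -- with no way to tell who sent it
```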

However, even if the system facilitates communication without leaking any information about metadata, some information might leak just by the limitations of a human user. For example, if Alice is currently on a voice call with Bob, she is probably incapable of simultaneously talking with another user. Therefore, if the attacker tries to call Alice and Bob at the same time and the two do not respond, then he learns some statistical information about the possibility that they are communicating. By repeating this experiment, the attacker might find that Alice and Bob’s availability is correlated, and reach an informed conclusion about whether they communicate. A fundamental question in designing metadata-hiding systems is therefore what kind of information leakage is acceptable for the particular application, and whether we can trade some leakage to get better performance.

Strength of trust assumptions. Systems must have clear underlying trust assumptions to provide meaningful privacy guarantees. Weaker assumptions mean the system’s privacy guarantees are more likely to hold in practice. The weakest form of trust assumption—referred to as zero-trust—is not trusting the system at all. It means the system’s servers facilitate communication, but even if they all turn out to be malicious, the system maintains its privacy guarantee—to keep communication metadata hidden (although malicious servers might still prevent users from communicating).

The zero-trust assumption inevitably comes with a significant computational cost. To understand why, consider an idealized scenario where users deposit messages at a server, and each user can poll that server for messages intended for them; fetching a message without leaking information about which one is being fetched can be done using cryptographic protocols for private information retrieval. Fundamentally, however, for the server to remain untrusted, it must process all deposited messages for each user query. Otherwise, the server would know that some user could not have sent a message to some other user, so some information about the metadata would have been exposed to it. This implicit computational requirement leads to significant challenges in supporting many users. As one concrete example, Pung1 uses zero-trust as its underlying security assumption; with 2M users, its communication latency grows to 18 minutes.
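
The classic two-server PIR construction makes this linear cost concrete (a textbook scheme shown for intuition; Pung itself uses single-server computational PIR, but the every-record-touched property is the same):

```python
# Two-server information-theoretic PIR sketch. Each server XORs the
# records named by its query subset; each subset looks uniformly random
# on its own, so neither server learns the wanted index -- and neither
# can skip records without learning something about the query.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

db = [f"mailbox {i}".encode().ljust(16) for i in range(8)]  # toy mailboxes

def answer(database, subset):
    acc = bytes(16)
    for i in subset:
        acc = xor(acc, database[i])
    return acc

want = 5                                                  # desired index
s1 = {i for i in range(len(db)) if secrets.randbelow(2)}  # random subset
s2 = s1 ^ {want}                                          # differs only at `want`
# All records except `want` cancel in the XOR of the two answers:
print(xor(answer(db, s1), answer(db, s2)))                # b'mailbox 5 ...'
```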


The 3-way trade-off. To get around the performance challenges discussed earlier, it seems necessary to compromise on weaker privacy guarantees and stronger trust assumptions. Here, we examine possible compromises and what they enable in terms of performance. The accompanying table summarizes the trade-off points of several recent proposals.

t1.jpg
Table 1. The 3-way trade-off in state-of-the-art systems. More performant systems provide weaker privacy and security guarantees. (Karaoke and Pung are horizontally scalable and evaluated with 100 servers.)

Privacy vs. performance. Hiding all communication metadata, no matter what actions the attacker takes, results in significant overhead. However, as a recent design shows, it is possible to efficiently prevent any leakage of information about a conversation’s metadata when the attacker is passive.19 This is important because, as we noted earlier, passive attacks are tough to detect.

It is also possible to avoid severe performance penalties when the attacker is active by relaxing the privacy goal to allow leaking a small amount of statistical information on every message exchange. Systematic mechanisms for quantifying and limiting information leakage to an adversary were studied through differential privacy.8 Differentially private systems protect an individual’s data by stochastically noising the system’s outputs that the attacker can observe. These systems provide a weaker notion of privacy than those that rely on message broadcast or private information retrieval, and have seen significant adoption in practice (for example, by Google9 and Apple14).

In the context of communication systems, differential privacy limits information leakage by adding dummy messages as cover traffic.19,20,35,36 The system’s servers generate these dummies, and their amount is decided at random by each server in every communication round according to a fixed distribution (set by the system’s designers). The stochastic process of adding dummy messages to user messages limits what the attacker can learn about the actual user-to-user communication from whatever attack he executes. The more dummy messages the system processes, the less information might leak on each message exchange. The performance benefit of differential privacy stems from the fact that the quantity of dummies is independent of the number of users and the messages they exchange.
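
A sketch of how a server might draw its per-round dummy count (in the style of Vuvuzela and Karaoke, which shift a Laplace distribution away from zero; the constants below are illustrative, not any system’s deployed parameters):

```python
# Per-round dummy-count sampling sketch (illustrative constants).
import random

MU = 300_000  # mean dummy count; shifts the distribution away from zero
B = 13_800    # Laplace scale; a larger B means less leakage per round

def dummies_this_round() -> int:
    """Sample Laplace(MU, B) as the difference of two exponentials,
    clipped to a nonnegative integer."""
    sample = MU + random.expovariate(1 / B) - random.expovariate(1 / B)
    return max(0, round(sample))

# The dummy volume depends only on MU and B, not on the number of users --
# the source of the performance benefit described above.
print(dummies_this_round())
```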

Strength of trust assumptions vs. performance. Zero trust induces high computational costs, which limits the ability to support a large user base. It is therefore important to identify weaker trust assumptions that are likely to hold in practice. Two other forms of trust assumptions are common.

Any-trust. One common alternative is to deploy the communication service over several servers administered by independent organizations, and to trust that at least one of these organizations is honest (without requiring users to trust a specific one). Distributing trust across many servers in the context of metadata-private communication systems dates back to Chaum’s mixnets in the 1980s. By relaxing the security goal from zero-trust to any-trust, we can avoid the overhead of scanning all senders’ inputs for each output that the system provides to a recipient. In practice, Vuvuzela,36 the most performant any-trust system, can exchange messages between 2M users in about a minute (over an order of magnitude of improvement in latency and throughput over Pung).

A minute of latency is, however, far from being on par with vanilla messaging applications. A key reason for this performance handicap is that the any-trust assumption precludes horizontal scaling. Since under this assumption there may be just one honest server in the entire system, each server must process all messages (otherwise some messages might have skipped processing by the honest server, leaving their metadata exposed). In contrast, vanilla applications distribute the load over many servers.

A fraction of the system’s servers are honest. It seems inevitable to require a stronger trust assumption than zero- or any-trust to be able to scale the system to support a large user base. A much more performance-friendly assumption is that some fraction of the system’s servers are honest. It allows sharding the load of user messages among servers, enabling horizontal scaling.18,19,35 Under this assumption, a user’s message may be processed by only a small subset of the servers, where one server in the subset is assumed to be honest (that is, each subgroup of servers is any-trust). Karaoke, which operates under this assumption, improved on Vuvuzela’s performance by about an order of magnitude in latency and throughput.19
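
The following sketch conveys the flavor of this assumption: servers are partitioned into small any-trust groups, and each message is handled by just one group (illustrative only; not the actual group-formation or routing logic of Atom, Stadium, or Karaoke):

```python
# Horizontal scaling via small any-trust groups (illustrative sketch).
import hashlib
import random

SERVERS = [f"org-{i}.example" for i in range(100)]  # hypothetical operators
GROUP_SIZE = 3

def form_groups(servers, seed=2024):
    """Randomly partition servers into groups of GROUP_SIZE. If a fraction
    f of all servers is honest, a random group contains at least one honest
    member with probability about 1 - (1 - f) ** GROUP_SIZE."""
    shuffled = servers[:]
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i:i + GROUP_SIZE]
            for i in range(0, len(shuffled), GROUP_SIZE)]

def group_for(message_id: bytes, groups):
    """Each message is processed by one small group, not by every server,
    so adding groups adds capacity."""
    digest = int.from_bytes(hashlib.sha256(message_id).digest(), "big")
    return groups[digest % len(groups)]

print(group_for(b"msg-42", form_groups(SERVERS)))
```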

Discussion

We have so far discussed the difficulties in building popular means of communication that hide metadata. However, recent progress in research on metadata-hiding systems is promising. We next motivate why building such systems, ones that support low-latency applications like voice and video chats, may be within reach, and discuss some of the remaining challenges the research community would need to tackle to make it happen.

Metadata-private low-latency communication at scale is within reach. The latency of communication through today’s systems is high, but there is room for optimism about supporting low-latency applications in the future. This section explores why providing meaningful metadata-privacy guarantees for important types of applications may be feasible.

Focusing on human communication. The volume of machine-to-machine communication is growing rapidly, and constant improvements in network infrastructure support this growth. Latency-sensitive human-to-human communication, in contrast, typically involves lively interactions between people. As such, users are typically not part of simultaneous conversations, and the duration of conversations is often limited. For many types of applications, such as voice calls, these properties imply that the number of conversations and the amount of metadata we need to protect do not grow as rapidly as machine-to-machine communication (as one example, the average monthly mobile voice minutes per person in the U.K. increased by only 10% between 2008 and 2013).30 The difference in growth rates suggests that systems’ capabilities may catch up with the volume of human-generated communication whose metadata we might wish to hide.

Leaking some information may be acceptable. As discussed previously, even a perfect metadata-hiding communication system might not prevent the attacker from learning some information about the communication (for example, since a user might only be able to hold one conversation at a time). The challenge, therefore, lies in leaking as little information as possible while providing solid performance. Differential privacy seems to be a promising direction for consolidating this trade-off. It makes it possible to limit and quantify the amount of information the system leaks to an attacker, while circumventing the computation and communication overheads of approaches in which the system leaks no additional information about its users’ traffic.

Challenges ahead. Recent works show that metadata-private systems can provide meaningful privacy guarantees and support a large user base. In the last eight years, the throughput of metadata-private systems has increased from a few hundred messages per second39 to over 570K messages per second.19 This improvement is credited to theoretical advances in cryptographic constructs and differential privacy, horizontally scalable designs, and engineering efforts. However, despite these advances, the current state-of-the-art metadata-private communication systems still induce significant performance penalties. Here are three remaining challenges that current systems face, which, if alleviated, could dramatically improve performance and drive user adoption:

Supporting mobile devices. A common approach to avoid exposing correlations between the times that Alice and Bob are online and might be communicating is to have the clients regularly send messages (sending dummies when the users are not involved in a conversation). This approach, which is taken by state-of-the-art systems, imposes significant costs on battery life and data plans when running on mobile devices. To support mobile clients, used by a massive portion of the Internet’s users, it seems necessary to rethink this strategy.

Reducing computations. The recent horizontally scalable designs distribute the communication overhead over many contributing organizations. The state-of-the-art systems, however, make extensive use of public-key cryptography and rely on hefty cryptographic protocols like zero-knowledge proofs of shuffle and correct decryption. In particular, the performance of the horizontally scalable systems in the table—Karaoke and Pung—is bounded by computation (see the experimental evaluations1,19). Finding a way to minimize the use of these cryptographic constructs, such as by establishing persistent private sessions and using symmetric-key cryptography within these sessions (as is often done when hiding content), would alleviate the computational bottleneck and reduce communication latency significantly.
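
A sketch of that direction, using the generic hybrid-encryption pattern with the PyNaCl library (the general idea only, not the protocol of any system discussed above):

```python
# One public-key operation per session, then cheap symmetric AEAD per
# message (hypothetical session setup; requires PyNaCl).
from nacl.public import Box, PrivateKey
from nacl.secret import SecretBox

client, relay = PrivateKey.generate(), PrivateKey.generate()

# One Diffie-Hellman per session: the expensive public-key step...
session_key = Box(client, relay.public_key).shared_key()

# ...after which every round's message uses symmetric crypto only.
ct = SecretBox(session_key).encrypt(b"round 1 payload")
relay_side = SecretBox(Box(relay, client.public_key).shared_key())
assert relay_side.decrypt(ct) == b"round 1 payload"
```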

Improving the topology. It is common to route messages through servers operated by organizations in different political and geographic regions, to reduce the chance that the organizations administering these servers would collude or be coerced into exposing secrets. Topology studies were performed mostly on Tor, to avoid routing through specific autonomous systems16,25 (combating the attacks mentioned previously) and to avoid overloading specific relays.15

A largely open challenge is to optimize the route that messages take through different geographic regions, so as to avoid sending messages over overly long distances. A route with many distant, randomly selected hops18,19,35,36 means that even if each server only relays a small portion of the messages, and does not perform any computationally heavy processing, the aggregate of the inter-server latencies might be too high to support some applications. A significant challenge in supporting latency-sensitive applications is therefore to identify better routing topologies, which mix all messages, for privacy, yet do not require messages to traverse many hops, for performance.
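
Back-of-the-envelope arithmetic illustrates the stakes (all hop latencies below are made up for illustration):

```python
# A route's one-way latency is the sum of its inter-server hops, so many
# random wide-area hops quickly exhaust a latency-sensitive budget.
random_route = [85, 140, 95, 170, 110]   # ms per hop, random global relays
regional_route = [25, 30, 20, 35, 25]    # ms per hop, topology-aware picks

for name, hops in [("random", random_route), ("regional", regional_route)]:
    print(f"{name}: {sum(hops)} ms one-way over {len(hops)} hops")
# random: 600 ms; regional: 135 ms. Interactive voice tolerates roughly
# 150 ms one-way (ITU-T G.114), so only the second route fits the budget.
```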

Acknowledgments

The author thanks Sharon Goldberg, David Lazar, Michael Schapira, Adam Suhl, Moshe Vardi, Nickolai Zeldovich, and the anonymous reviewers for their helpful comments on earlier versions of this article.

References

    1. Angel, S. and Setty, S.T.V. Unobservable communication over fully untrusted infrastructure. In Proceedings of OSDI, 2016. K. Keeton and T. Roscoe (Eds.). USENIX Assoc., 551–569.

    2. Aryan, S., Aryan, H. and Halderman, J.A. Internet censorship in Iran: A first look. In Proceedings of FOCI, 2013. J.R. Crandall and J. Wright (Eds.). USENIX Assoc.

    3. Corrigan-Gibbs, H., Boneh, D. and Mazières, D. Riposte: An anonymous messaging system handling millions of users. In Proceedings of IEEE Symposium on Security and Privacy, 2015. IEEE Computer Society, 321–338; http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7160813

    4. Cowie, J. New Threat: Targeted Internet Traffic Misdirection; http://www.renesys.com/2013/11/mitm-internet-hijacking.

    5. Das, D., Meiser, S., Mohammadi, E. and Kate, A. Anonymity trilemma: Strong anonymity, low bandwidth, low latency—Choose two. In Proceedings of Security and Privacy. IEEE, 2018.

    6. Dingledine, R. and Mathewson, N. Anonymity loves company: Usability and the network effect. In Proceedings of Workshop on the Economics of Information Security, 2006.

    7. Dingledine, R., Mathewson, N. and Syverson, P.F. Tor: The second-generation onion router. In Proceedings of USENIX Security Symposium, 2004. M. Blaze (Ed.). USENIX, 303–320.

    8. Dwork, C., McSherry, F., Nissim, K. and Smith, A.D. Calibrating noise to sensitivity in private data analysis. TCC 3876 (2006). Springer, 265–284.

    9. Erlingsson, U., Pihur, V., and Korolova, A. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of ACM Conf. Computer and Communications Security, 2014. G.-J. Ahn, M. Yung, and N. Li (Eds.). ACM, 1054–1067; http://dl.acm.org/citation.cfm?id=2660267

    10. Finley, K. Half of the Internet is now encrypted. This makes everyone safer; https://www.wired.com/2017/01/half-web-now-encrypted-makeseveryone-safer/.

    11. Gilad, Y. and Herzberg, A. Spying in the dark: TCP and Tor traffic analysis. Privacy Enhancing Technologies, LNCS 7384 (2012). S. Fischer-Hübner and M.K. Wright (Eds.). Springer, 100–119.

    12. Greenwald, G. and MacAskill, E. NSA Prism program taps in to user data of Apple, Google and others; https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data.

    13. Hayden, M. The price of privacy: Re-evaluating the NSA. Proceedings of the Johns Hopkins Foreign Affairs Symposium. (Apr. 2014); https://www.youtube.com/watch?v=kV2HDM86XgI&t=17m50s.

    14. Apple, Inc. Differential Privacy, 2016; https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf.

    15. Johnson, A., Jansen, R., Hopper, N., Segal, A. and Syverson, P. PeerFlow: Secure load balancing in Tor. Proceedings of PoPETs 2 (2017), 74–94.

    16. Johnson, A., Wacek, C., Jansen, R., Sherr, M. and Syverson, P. Users get routed: Traffic correlation on Tor by realistic adversaries. In Proceedings of ACM Conf. Computer and Communications Security, 2013 A-R Sadeghi, V.D. Gligor and M. Yung (Eds.). ACM, 337–348; http://dl.acm.org/citation.cfm?id=2541806

    17. Khattak, S. et al. Do you see what I see? Differential treatment of anonymous users. In Proceedings of NDSS, 2016. The Internet Society; https://bit.ly/2XbOnZ6

    18. Kwon, A., Corrigan-Gibbs, H., Devadas, S. and Ford, B. Atom: Horizontally scaling strong anonymity. In Proceedings of SOSP, 2017. ACM, 406–422; http://dl.acm.org/citation.cfm?id=3132747

    19. Lazar, D., Gilad, Y. and Zeldovich, N. Karaoke: Fast and strong metadata privacy with low noise. In Proceedings of OSDI, 2018. USENIX Assoc.

    20. Lazar, D. and Zeldovich, N. Alpenhorn: Bootstrapping secure communication without leaking metadata. In Proceedings of OSDI, 2016. K. Keeton and T. Roscoe (Eds.). USENIX Assoc., 571–586.

    21. Levine, B.N., Reiter, M.K., Wang, C. and Wright, M.K. 2004. Timing attacks in low-latency mix systems (extended abstract). Financial Cryptography LNCS, 3110. A. Juels (Ed.). Springer, 251–265.

    22. Microsoft. 2 Billion Minutes a Day! Skype blog; https://bit.ly/2EGMYm2

    23. Moghaddam, H.M., Li, B., Derakhshani, M., and Goldberg, I. SkypeMorph: Protocol obfuscation for Tor bridges. In Proceedings of ACM Conf. Computer and Communications Security. T. Yu, G. Danezis, and V.D. Gligor (Eds.). ACM, 97–108; http://dl.acm.org/citation.cfm?id=2382196

    24. National Security Agency. Tor stinks. The Guardian. (Oct. 2013); https://bit.ly/2Qzntb7.

    25. Nithyanand, R., Singh, R., Cho, S. and Gill, P. Holding all the ASes: Identifying and circumventing the pitfalls of AS-aware Tor client design. CoRR, 2016; http://arxiv.org/abs/1605.03596

    26. Nithyanand, R., Starov, O., Gill, P., Zair, A. and Schapira, M. Measuring and mitigating AS-level adversaries against Tor. In Proceedings of NDSS, 2016. The Internet Society; https://bit.ly/2wqK54o

    27. Piotrowska, A.M., Hayes, J., Elahi, T., Meiser, S. and Danezis, G. The Loopix anonymity system. In Proceedings of USENIX Security Symposium, 2017. E. Kirda and T. Ristenpart (Eds.). USENIX Assoc., 1199–1216.

    28. Rusbridger, A. The Snowden leaks and the public; https://www.nybooks.com/articles/2013/11/21/snowden-leaks-and-public/

    29. Singh, R. et al. Characterizing the nature and dynamics of Tor exit blocking. In Proceedings of USENIX Security Symposium, 2017. E. Kirda and T. Ristenpart, (Eds.). USENIX Assoc., 325–341.

    30. Statista, the statistics portal. Average monthly outbound mobile voice minutes per person in the United Kingdom (UK) from 2008 to 2013 (in minutes). (2013).

    31. Sun, Y. et al. RAPTOR: Routing attacks on privacy in Tor. In Proceedings of USENIX Security Symposium, 2015. J. Jung and T. Holz (Eds.). USENIX Assoc., 271–286.

    32. Telegram. 15 Billion Telegrams Delivered Daily. Telegram announcement, 2017; https://telegram.org/blog/15-billion.

    33. The Tor project. Pluggable Transports, 2017; https://www.torproject.org/docs/pluggable-transports.

    34. Tung, L. WhatsApp: Now one billion people send 55 billion messages per day, 2017; http://www.zdnet.com/article/whatsapp-now-one-billion-people-send-55-billion-messages-per-day/.

    35. Tyagi, N., Gilad, Y., Leung, E., Zaharia, M. and Zeldovich, N. Stadium: A distributed metadata-private messaging system. In Proceedings of SOSP, 2017. ACM, 423–440; http://dl.acm.org/citation.cfm?id=3132747

    36. van den Hooff, J., Lazar, D., Zaharia, M. and Zeldovich, N. Vuvuzela: Scalable private messaging resistant to traffic analysis. In Proceedings of SOSP, 2015. E.L. Miller and S. Hand (Eds.). ACM, 137–152; http://dl.acm.org/citation.cfm?id=2815400

    37. Weinberg, Z. et al. StegoTorus: A camouflage proxy for the Tor anonymity system. In Proceedings of ACM Conf. on Computer and Communications Security, 2012. T. Yu, G. Danezis and V.D. Gligor (Eds.). ACM, 109–120; http://dl.acm.org/citation.cfm?id=2382196

    38. Winter, P. and Lindskog, S. How the great firewall of China is blocking Tor. In Proceedings of FOCI, 2012. R. Dingledine and J. Wright (Eds.). USENIX Assoc.

    39. Wolinsky, D.I., Corrigan-Gibbs, H., Ford, B. and Johnson, A. Dissent in numbers: Making strong anonymity scale. In Proceedings of OSDI, 2012. C. Thekkath and A. Vahdat (Eds.). USENIX Assoc., 179–182.

    40. Zmijewski, E. Indonesia Hijacks the World, 2013; https://dyn.com/blog/indonesia-hijacks-world/.
