Threat Models for Quantum Computers
Developing and understanding threat models for quantum computers will be a crucial step in designing defenses for these emerging computing platforms. A threat model clearly specifies what should be protected and what capabilities potential attackers have. Only once the valuable assets are identified and the potential attacks understood can defenses be developed. This newsletter article gives brief background on threat models and outlines some threat models that can be useful in thinking about the security of quantum computers. By establishing a common set of threat models, different security solutions can also be compared more fairly; this article therefore advocates for a common set of threats and threat models that researchers and industry can aim to prevent.
What is a Threat Model?
According to Wikipedia, “threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified and enumerated, and countermeasures prioritized.” Threat modeling helps system designers answer questions such as “what is the sensitive information that should be protected?”, “what attacks is the system vulnerable to?”, “which attacks are most likely?”, and, importantly, “what threats and attacks are out of scope?”.
Threat modeling is a subjective process. Experts in the research field define the threat model based on their intuition and knowledge of security threats that have emerged in the past. An important aspect of a computer system that helps determine the threat model is how the system is used. For example, a smartphone typically has one user who runs his or her applications on the device. The applications may be malicious, if the user downloads an application from an unknown source, for example, but threats from other users running concurrently on the smartphone are unlikely, since the smartphone typically has only one user. On the other hand, in a public cloud-computing data center, many users run concurrently on the same hardware, and in this setting one of the users could be malicious and try to attack a victim user who happens to run on the same cloud platform. Among cloud-computing data centers, threats to public clouds will differ from threats to private clouds. Private clouds may have more trusted users, simply because they all belong to the same organization, and remote software attacks among users may be unlikely. On the other hand, a private cloud belonging to a government or military organization may need to worry about malicious insiders or spies within the data center trying to steal highly sensitive information.
Defining Sensitive Information and Capabilities of the Attackers
Sensitive information or data is the asset that needs to be protected. From a hardware security perspective, the focus is on computation and data storage, less on networking. Due to this focus, communication is usually assumed to be secure and encrypted, so that sensitive information is not leaked during transmission. This may not always be true; with poor encryption, for example, data could be captured in transit. But it is usually assumed that networking is not the problem. The sensitive information or data could, however, be vulnerable during storage or computation. What constitutes the sensitive information depends on the context.
Cryptographic keys are the most straightforward type of information that can be defined as sensitive. If an attacker has access to the cryptographic keys, he or she can decrypt any data secured with those keys. When considering attacks on cryptographic algorithms, it is almost always assumed that the encryption algorithm is known. Kerckhoffs's principle states that the algorithms should not be secret, only the encryption keys. The principle stands in contrast to security through obscurity.
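As a minimal sketch of Kerckhoffs's principle (a toy example written for illustration; the cipher choice, variable names, and message are invented here, not from any real system), consider a one-time-pad cipher. The XOR algorithm is fully public; security rests entirely on the secrecy, randomness, and single use of the key:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # The algorithm (XOR with a one-time key) is public knowledge;
    # per Kerckhoffs's principle, only the key needs to stay secret.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"sensitive data"
key = secrets.token_bytes(len(message))   # the only secret, never reused

ciphertext = xor_cipher(message, key)     # anyone may know how this was made
recovered = xor_cipher(ciphertext, key)   # the same public algorithm decrypts
assert recovered == message
```

Publishing `xor_cipher` costs the defender nothing; leaking `key` costs everything, which is exactly the asymmetry the principle describes.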
Other types of information can also be sensitive, such as medical data, hardware design files, etc. When the target is some sort of data, such as medical data, it is likewise almost always assumed that the attacker knows the program. From a hardware security perspective, attackers who collect side-channel information need knowledge of the software to correlate that information back to the data.
Most recently, machine learning model parameters have become another type of sensitive information. Attackers may want to learn the secret or proprietary parameters. The assumption here is that the attacker knows the type of machine learning algorithm, for example that it is a convolutional neural network, but does not know the kernel parameters or number of layers. This is the information he or she may want to steal.
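As a toy illustration of such parameter theft (the linear model and its weights below are invented for this sketch; real model-extraction attacks on neural networks are far more involved), black-box query access alone can suffice: a secret linear model gives up its weights after one query per input dimension.

```python
# Toy model-extraction sketch: the "proprietary" model is a secret
# linear function the attacker can only query as a black box.
secret_weights = [0.5, -1.25, 3.0]

def query_model(x):
    # Black-box access, e.g. a cloud prediction API.
    return sum(w * xi for w, xi in zip(secret_weights, x))

# Querying the standard basis vectors recovers each weight exactly.
dim = len(secret_weights)
basis = [[1.0 if j == i else 0.0 for j in range(dim)] for i in range(dim)]
stolen_weights = [query_model(e) for e in basis]
assert stolen_weights == secret_weights
```

The point is that "read access to the parameters" is not required for the parameters to leak; the threat model must also account for what can be inferred through legitimate queries.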
The attackers may also want to learn not just data or parameters, but the programs or code themselves. In this case the attacker may have some information, for example when the victim will execute his or her code, and can then collect side-channel information to learn which instructions the victim executed. This could let the attacker copy or learn a proprietary program executed by the victim.
From the above examples it can be seen that “sensitive information” can be almost anything, from data, to parameters, to program instructions. Consequently, the threat model needs to clearly define which one (or more) of these is sensitive and needs protection.
The above examples also illustrate different capabilities of attackers. These capabilities can be divided into the attacker’s knowledge and his or her ability to collect information. The knowledge may be, for example, which encryption algorithm or which machine learning algorithm is used. The ability to collect information concerns whether he or she can run software concurrently with the victim on a CPU, or run hardware designs concurrently on an FPGA, and collect side-channel information that way; or whether he or she has physical access and can collect power, thermal, acoustic, EM, or other emanations.
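As a hedged sketch of how such side-channel collection can work (a toy with invented names, using a step counter as a stand-in for real timing measurements), an early-exit comparison leaks how many leading bytes of a guess match a secret:

```python
def leaky_compare(secret: bytes, guess: bytes) -> tuple[bool, int]:
    # Early-exit comparison; the step count stands in for execution time.
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

secret = b"KEY42"
_, t_wrong = leaky_compare(secret, b"AAAAA")   # fails on the first byte
_, t_close = leaky_compare(secret, b"KEYAA")   # three leading bytes match

# t_close > t_wrong: the observed "time" reveals progress toward the
# secret, letting an attacker recover it byte by byte. A constant-time
# comparison (e.g. hmac.compare_digest) removes this dependence.
```

The attacker here needs both kinds of capability described above: knowledge (that the victim performs a byte-by-byte comparison) and a way to collect information (observing execution time).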
In the context of quantum computers, similar types of sensitive information are possible. A notable difference is that, due to the no-cloning theorem, the state of qubits cannot simply be copied. Thus in quantum computers the quantum data itself may not be a target, but the classical measurement of that data could be. Recovering the instructions (gates) could also be a target, since the no-cloning theorem says nothing about the ability to spy on quantum operations. Regarding the attackers’ abilities, there are likewise parallels to classical computers. Most notably, attackers could use software, i.e. quantum circuits, to spy on other users.
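The distinction between quantum state and measurement results can be sketched with a toy single-qubit model (written for this article; this is not a real quantum computing API, and the variable names are invented): the amplitudes themselves cannot be copied or read out directly, but a measurement outcome is an ordinary classical bit that copies trivially.

```python
import random

def measure(alpha: float, beta: float) -> int:
    # Toy qubit: state (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
    # Measurement collapses the state and yields a classical bit;
    # outcome 0 occurs with probability |alpha|^2.
    return 0 if random.random() < alpha * alpha else 1

# |+> state: equal superposition. No-cloning forbids copying (alpha, beta)...
alpha, beta = 2 ** -0.5, 2 ** -0.5
outcome = measure(alpha, beta)

# ...but the classical measurement result can be copied or exfiltrated freely.
stolen = outcome
assert stolen == outcome and outcome in (0, 1)
```

This is why, in a quantum threat model, the measurement results (and the gate sequence that produced them) are the natural assets to protect.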
Possible Threats and Threat Models for Quantum Computers
Modeled on threat models found in classical computer security, a few different threats and resulting threat models for quantum computers can be proposed. The discussion below assumes secure networking, i.e. that there are no threats in the communication between users and the quantum computers.
Untrusted remote users. With many of the current quantum computers being cloud-based, it is natural to assume that there could be malicious users who run their code on the remote quantum computers. Under single-tenancy, only one user runs on a machine at a time, so malicious users could either try to leak information left behind by users who ran on the quantum computer before them, or affect the computation of subsequent users. Users could also try to disrupt the operation of the quantum computers, possibly to inflict damage on the equipment or on the reputation of the quantum computer provider. Lastly, users could try to run code that reverse engineers the infrastructure or the quantum computer design. Under multi-tenancy, there are even more threats, such as users spying on concurrently executing users, or trying to affect their operations. With qubits, it is not possible to obtain the quantum state, due to the no-cloning theorem; however, it may be possible to spy on the operations, i.e. the quantum gates, being performed, or to steal the measurement results, which are classical data.
Honest-but-curious cloud provider. Another possible threat, again because many of the current quantum computers are cloud-based, is an honest-but-curious cloud provider. The provider may not maliciously alter the computation, but could try to learn what the users are doing. In particular, there is a tension between independent developers and startups designing quantum algorithms, and the quantum computer providers who own the hardware. The startups need to execute their code on quantum computers they do not own, and their valuable intellectual property is the design of the algorithms, which the cloud provider could try to steal.
Untrusted cloud provider or malicious insiders. A more extreme example of threats from the cloud provider side is an outright malicious provider, or malicious insiders within the data center. The latter is more probable, and represents someone, e.g. a technician, with access to the physical infrastructure, but perhaps not to the software. As quantum computers become smaller, and perhaps eventually fit into a standard server-rack chassis, the threat of physical attacks becomes even greater, as the machines will no longer reside only in secured data centers.
About the author:
Jakub Szefer is an Associate Professor of Electrical Engineering at Yale University where he leads the Computer Architecture and Security Laboratory (CASLAB). His research interests broadly encompass computer architecture and hardware security of computing systems, including security of quantum computers and post-quantum cryptography.