Current Optics and Photonics
Curr. Opt. Photon. 2025; 9(1): 1-18
Published online February 25, 2025 https://doi.org/10.3807/COPP.2025.9.1.1
Copyright © Optical Society of Korea.
Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
Corresponding author: *changhun0218@kaist.ac.kr, ORCID 0000-0003-2002-1928
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Boson sampling is a restricted model of quantum computation, designed to achieve quantum advantage using nonuniversal quantum systems. By harnessing the quantum interference of indistinguishable bosons (typically photons), it becomes possible to sample from a probability distribution, which is intractable for classical computers. This paper reviews the theoretical foundations of boson sampling and its variations, including Fock-state, scattershot, and Gaussian boson sampling, along with significant experimental progress, from early small-scale demonstrations to large-scale quantum supremacy claims. We further explore classical algorithms for simulating boson sampling, which are crucial for benchmarking the performance of experimental results. Finally we examine potential applications of boson sampling in various fields, including simulation of molecular vibronic spectra in quantum chemistry, and solution of graph-based problems in optimization. These applications demonstrate the wide-ranging impact that boson sampling could have on industries that rely on complex computational models, making it a promising quantum technology for near-term applications.
Keywords: Boson sampling, Quantum advantage, Quantum computer, Quantum optics
OCIS codes: (270.0270) Quantum optics; (270.5585) Quantum information and processing; (270.6570) Squeezed states
Quantum computers are widely believed to hold the potential to solve problems that are intractable for classical computers, offering unprecedented computational power in areas such as cryptography, optimization, and materials science [1–3]. However, one of the significant obstacles hindering us from realizing this potential is the presence of inevitable noise in quantum systems. Quantum noise, caused by unwanted interactions with the environment and imperfections in quantum operations, can significantly disturb the quantum states during computation, which causes errors in the results. While quantum error correction has been proposed as a solution to this problem [1], enabling reliable and large-scale quantum computation for arbitrarily long run times, and while there has been rapid experimental progress in realizing it in various experimental platforms [4–9], its practical implementation for larger systems and eventually for large-scale quantum computation requires a vast overhead of additional qubits and resources, which are beyond the capabilities of current technology.
Hence, the development of large-scale, fault-tolerant universal quantum computers capable of outperforming classical systems on a broad range of problems remains a daunting challenge. This has led to increased interest in employing near-term quantum devices: Quantum systems with limited capabilities that may still provide computational advantages over classical methods for specific tasks, even though they suffer from noise. More specifically, these are called noisy intermediate-scale quantum (NISQ) devices [10], since they are still prone to noise because we do not apply quantum error correction, and the number of qubits is still limited but large enough to prevent efficient brute-force classical simulation. One of the most promising avenues within this framework is the study of sampling problems [11], where quantum devices are designed to generate samples from probability distributions that are hard to mimic using classical computers. The importance of such sampling problems lies in the complexity-theoretic hardness of simulating the sampling using classical computers, i.e., quantum advantage. The most representative sampling problems are random circuit sampling [12–14], boson sampling [15–17], and instantaneous quantum polynomial (IQP) sampling [18–20].
First of all, in random circuit sampling we simply apply a randomly chosen quantum circuit of sufficient depth to many qubits, and then measure the qubits in the computational basis. The measurement provides a random bit string, the length of which is the same as the number of qubits, that follows the output distribution determined by the circuit configuration. A crucial feature of the distribution that makes random circuit sampling attractive is that no classical algorithm can sample from it efficiently (unless some plausible conjectures turn out to be false). Random circuit sampling gained more prominence as Google’s Sycamore quantum processor performed a random-circuit-sampling experiment with 53 qubits, marking a milestone in the quantum supremacy race by claiming that their output could not be simulated by classical supercomputers within a reasonable time [13]. While there has been much progress in classically simulating such circuits, mostly by using tensor-network algorithms [21–23], random circuit sampling of better quality and quantity has been implemented by several groups [24, 25] and has not been simulated using classical computers so far.
IQP sampling involves much simpler quantum circuits, consisting only of gates that are diagonal in the computational basis, sandwiched between two layers of Hadamard gates (equivalently, gates diagonal in the Pauli-X basis). Although IQP circuits are not universal for quantum computation because of their simple structure, their output distributions are still conjectured to be hard to sample from classically, just as for random circuit sampling. There have been recent experiments implementing IQP sampling using atoms, along with error-detection schemes [7].
Finally, boson sampling, the main focus of this review, was first proposed by Aaronson and Arkhipov [15] in 2011. Unlike other sampling problems that use qubits, boson sampling explores the computational power of linear-optical networks. In this proposed setup, indistinguishable bosons (typically photons) are passed through a large-scale interferometer, forming complicated interference patterns and ultimately creating a probability distribution over output states that is hard to simulate classically when the output photons are measured in the Fock basis. Despite its nonuniversality, boson sampling, just like IQP circuits, is considered a promising candidate for demonstrating quantum advantage, due to both the complexity-theoretic hardness results showing that any classical algorithm would take exponential time to simulate the system, and its experimental feasibility, as the only dynamics it requires are linear-optical circuits. The original boson sampling based on Fock-state input has been further extended to more general input states, such as Gaussian boson sampling [16, 17], to make scaling up experiments and eventually achieving quantum advantage more feasible.
Since the theoretical proposals, there have been several successful experimental demonstrations of boson sampling [26–29], showing the feasibility of the approach with increasing numbers of photons and optical modes. However, these experiments also highlight the technical challenges that must be overcome to scale boson sampling further. Issues such as photon loss, imperfect mode matching, and the need for high-quality photon sources pose significant obstacles to achieving scalable, high-fidelity boson-sampling systems. Noise and imperfections can degrade the quality of the output quantum states, making it harder to distinguish genuine quantum advantage from classical noise effects. This has led to ongoing efforts to improve the scalability and reliability of these systems, to make them outperform classical computers.
The study of boson sampling has broader implications for our understanding of quantum computational complexity and the limits of classical computation. Boson sampling and other quantum sampling problems challenge the extended Church-Turing thesis [30], which posits that any physically realizable computation can be efficiently simulated by a classical computer. The inability of classical systems to efficiently simulate boson sampling, if rigorously proven, would provide a strong counterexample to this thesis and highlight the unique power of quantum systems in solving specific tasks. In this context, boson sampling offers not just an experimental platform for quantum advantage, but also theoretical insights into the computational boundaries between classical and quantum systems. Furthermore, it has been shown that boson sampling has many potential applications to practical problems, such as quantum chemistry, optimization problems, and machine learning [31].
This review provides a detailed exploration of boson sampling, beginning with its theoretical foundations and computational complexity, proceeding to its various versions, and then moving to experimental implementations and challenges. We will discuss key milestones in experimental work and examine the practical obstacles to scaling. We will then provide various classical algorithms to benchmark the boson sampling experiments and the quantum advantage from them. Finally, we will discuss boson sampling’s potential practical applications for solving real-world problems, beyond mere demonstration of quantum advantage. Through this review, we aim to offer a comprehensive understanding of the significance of boson sampling as both a computational challenge and an experimental tool for exploring the capabilities of quantum systems.
Boson sampling is a computational problem that can be implemented using linear-optical quantum circuits but is hard to classically simulate, making it a promising candidate for demonstrating quantum advantage. In this model, indistinguishable bosons (typically photons) are passed through a linear-optical network composed of beam splitters and phase shifters. Due to quantum interference formed through the linear-optical network, the bosons create a highly complex quantum state with a large degree of entanglement across output modes, and the task is to sample from the probability distribution obtained by measuring the output photon numbers of each output mode. The computational difficulty of simulating such quantum interference on classical computers increases exponentially with the number of bosons and the number of optical modes in the system. This characteristic positions boson sampling as a key computational problem for early quantum devices, specifically in the NISQ era when fully fault-tolerant quantum computers are not yet available.
Several variations of boson sampling have been proposed, either to extend the model or improve its experimental feasibility and scalability. These include the original Fock-state boson sampling [15], scattershot boson sampling [32], and Gaussian boson sampling [16]. In this section we review these variations in detail, focusing on their theoretical structures and experimental implementations. While there are other variants, such as bipartite boson sampling [33] and nonlinear boson sampling [34], we focus on the three boson-sampling models above because of their experimental realizations.
Fock-state boson sampling is the canonical model for boson sampling and the original problem proposed by Aaronson and Arkhipov [15]. In this setup, illustrated in Fig. 1, a number N of bosons are initialized in a specific input configuration of the linear-optical network composed of M modes, where each boson occupies a distinct mode in a definite number state, known as a Fock state. Without loss of generality, the input state can be written as $|\mathbf{n}\rangle = |1, \ldots, 1, 0, \ldots, 0\rangle$, where the first N ones mean that the first N modes contain single photons, while the remaining M − N zeros mean that the corresponding modes are in the vacuum state. The input bosons are sent through a large interferometer, resulting in a quantum interference pattern that determines the probability distribution of the bosons across the output modes. After the interferometer we measure the number of bosons in each output mode, which produces a random outcome $\mathbf{m} = (m_1, \ldots, m_M)$, where $m_i$ is the number of photons detected in the $i$th mode and $\sum_{i=1}^{M} m_i = N$.
Mathematically, the output probability distribution is described by the so-called permanent of a submatrix of the interferometer’s unitary matrix U [15, 36]:
$$\Pr[\mathbf{m}] = \frac{|\mathrm{Per}(U_{\mathbf{n},\mathbf{m}})|^2}{m_1! \cdots m_M!\, n_1! \cdots n_M!},$$
where $\mathbf{n}$ is the input photon configuration and $U_{\mathbf{n},\mathbf{m}}$ is the $N \times N$ submatrix of U associated with the input and output photon configurations. Computing the permanent of large matrices is known to be #P-hard even in approximation [15, 37], which means that computing the output probability of boson-sampling circuits for a large number of bosons and modes is also #P-hard. More technically, for a Haar-random unitary U the corresponding output probabilities are expressed as the squares of the permanents of Gaussian random matrices, which is conjectured to be #P-hard on average [15]. Such #P-hardness of computing the output probability leads to the hardness of classical simulation of boson sampling, because otherwise the Stockmeyer reduction technique would yield $\mathrm{P}^{\#\mathrm{P}} \subseteq \mathrm{BPP}^{\mathrm{NP}}$ and thus, combining this with Toda’s theorem, the polynomial hierarchy would collapse to the third level (see [15] for more details). This is what makes Fock-state boson sampling an excellent candidate for demonstrating quantum advantage.
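To make this formula concrete, the following Python (NumPy) sketch, which is purely illustrative and not taken from the cited works, computes $\mathrm{Per}(U_{\mathbf{n},\mathbf{m}})$ with Ryser’s formula and evaluates the corresponding output probability; the interferometer size and the photon configurations are arbitrary examples.

import numpy as np
from math import factorial

def haar_random_unitary(m, rng):
    # Sample an m x m Haar-random unitary via QR decomposition of a complex Gaussian matrix.
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def permanent_ryser(a):
    # Permanent of an n x n matrix via Ryser's formula, O(n 2^n) time.
    n = a.shape[0]
    total = 0.0 + 0.0j
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if (mask >> j) & 1]
        total += (-1) ** len(cols) * np.prod(a[:, cols].sum(axis=1))
    return (-1) ** n * total

def output_probability(u, n, m):
    # Pr[m] = |Per(U_{n,m})|^2 / (prod_i m_i! prod_i n_i!), where U_{n,m} is built by
    # repeating column j of U n_j times and row i m_i times.
    rows = np.repeat(np.arange(u.shape[0]), m)
    cols = np.repeat(np.arange(u.shape[1]), n)
    sub = u[np.ix_(rows, cols)]
    norm = np.prod([factorial(k) for k in n]) * np.prod([factorial(k) for k in m])
    return abs(permanent_ryser(sub)) ** 2 / norm

rng = np.random.default_rng(1)
u = haar_random_unitary(5, rng)      # M = 5 modes (illustrative)
n = np.array([1, 1, 1, 0, 0])        # N = 3 single photons in the first three modes
m = np.array([2, 0, 1, 0, 0])        # one possible output configuration
print(output_probability(u, n, m))

The exponential cost of Ryser’s formula in the matrix size is exactly the scaling that makes large instances intractable for classical computation.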
Despite its theoretical significance, implementing Fock-state boson sampling in practice is technically challenging. In particular, aside from technical difficulties shared with other boson-sampling models, such as the need for high-precision optical networks with low loss and near-perfect mode matching, Fock-state boson sampling requires highly efficient, deterministic single-photon sources that can reliably generate multiple indistinguishable photons simultaneously, which at present is highly demanding. Still, experimental realizations of Fock-state boson sampling have made progress over the years, with demonstrations of systems using up to 20 photons [38] (see Section 3.1). However, scaling up these experiments further to achieve quantum advantage remains difficult, due to the abovementioned technical challenges. This difficulty has motivated researchers to explore alternative boson-sampling models that reduce the dependence on high-quality, deterministic single-photon sources, leading to scattershot and Gaussian boson sampling. We note that there has been a recent experiment with a large number of bosons, using atoms [39].
Scattershot boson sampling is an extension of the Fock-state boson-sampling model that aims to overcome the practical limitations of simultaneously generating many photons, as shown in Fig. 2 [32]. In the standard Fock-state boson-sampling model, generating a fixed number of photons and injecting them into specific input modes with perfect timing is a major technical hurdle in experiments. Typically a single photon is generated by first producing a photon pair probabilistically using spontaneous parametric down-conversion, and then postselecting on the detection of a single photon on one side, which heralds a single photon on the other side. Because of the postselection needed to guarantee that N photons are generated in the desired configuration, e.g., $|\mathbf{n}\rangle = |1, \ldots, 1, 0, \ldots, 0\rangle$, there is an exponentially large overhead as we scale up the system size.
Scattershot boson sampling alleviates this issue by accepting all of the events with N photons, without requiring that the N photons be generated in one specific configuration. In this way the postselection on generating N photons anywhere entails only polynomial overhead, instead of exponential overhead. Interestingly, the hardness of classical simulation is maintained just as in Fock-state boson sampling.
More specifically, multiple probabilistic photon sources are placed at various input modes of the interferometer, replacing single-photon sources. Such probabilistic photon sources generate so-called two-mode squeezed vacuum states, in which photons appear in pairs across two modes. Thus a two-mode squeezed vacuum state may contain more than a single pair of photons:
$$|\mathrm{TMSV}\rangle = \frac{1}{\cosh r} \sum_{n=0}^{\infty} (\tanh r)^n |n, n\rangle,$$
where r is the squeezing parameter. Rather than requiring a fixed configuration of N input photons, scattershot boson sampling allows any subset of these sources to generate photons at a given time, as long as the total photon number is N. The number of photons entering the interferometer can be inferred, for the purpose of postselection, from the photons detected in the heralding modes of the sources. This “scattershot” approach increases the likelihood of successfully producing the desired number of photons, as the experiment benefits from a large number of potential input modes.
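As a small illustration of why this helps (not from the original proposal; the squeezing value and source numbers below are arbitrary), the following Python snippet compares the probability that N specific sources each emit exactly one pair with the probability that any N out of M sources do so, using the pair distribution of the two-mode squeezed vacuum above.

from math import comb, tanh, cosh

def p_pairs(k, r):
    # Probability that a two-mode squeezed vacuum with squeezing r contains exactly k photon pairs.
    return tanh(r) ** (2 * k) / cosh(r) ** 2

r, M, N = 0.3, 20, 4                    # illustrative squeezing, source, and photon numbers
p0, p1 = p_pairs(0, r), p_pairs(1, r)
p_fixed = p1 ** N * p0 ** (M - N)       # N designated sources fire once; the rest stay in vacuum
p_any = comb(M, N) * p_fixed            # any N of the M sources fire once
print(p_fixed, p_any, p_any / p_fixed)

The ratio is exactly the binomial coefficient $\binom{M}{N}$, which is the combinatorial boost that scattershot boson sampling exploits.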
The key advantage of scattershot boson sampling is that it reduces the experimental burden of producing a specific configuration of input photons, while still maintaining the quantum interference that makes the sampling task computationally difficult for classical systems. Furthermore, it allows for experiments to use photon-pair sources like spontaneous parametric down-conversion, which are more readily available and easier to use than deterministic single-photon sources.
Inspired by scattershot boson sampling, another extension of the Fock-state boson-sampling model has appeared, called Gaussian boson sampling [16, 17, 40], which uses Gaussian states (quantum states of light that can be described by Gaussian functions in phase space) as inputs, instead of Fock states. The most common Gaussian states used in this model are single-mode squeezed vacuum states, which are generated by squeezing the quantum vacuum using nonlinear optical processes such as parametric down-conversion. Unlike Fock-state boson sampling, where the input is a fixed number of single photons, Gaussian boson sampling allows for continuous-variable quantum states as inputs, as scattershot boson sampling does, making it a more experimentally accessible model. While scattershot boson sampling also employs squeezed states, in Gaussian boson sampling all input bosons pass through the linear-optical circuit, as described in Fig. 3; thus there is no postselection procedure at all.
More specifically, in Gaussian boson sampling single-mode squeezed vacuum states are injected into the input modes of a linear-optical network. After these states pass through the network, which introduces complex quantum interference, the number of bosons in each output mode is measured, which again produces a random outcome $(m_1, \ldots, m_M)$, where each $m_i$ is the number of photons detected in the $i$th mode. The setup is therefore identical to Fock-state boson sampling, except for the input states. The computational complexity of Gaussian boson sampling is also related to matrix permanents, but with an important distinction due to the difference in the input states: The photon statistics in Gaussian boson sampling are governed by hafnians, a generalization of permanents that captures correlations in Gaussian states. Using a similar Stockmeyer reduction, based on the #P-hardness of computing hafnians, one can also prove the hardness of simulating large-scale Gaussian boson sampling, under some plausible assumptions [16, 17].
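For readers unfamiliar with the hafnian, it sums the products of matrix entries over all perfect matchings of the indices; the permanent is recovered as a special case, since the permanent of A equals the hafnian of the block matrix with A and its transpose in the off-diagonal blocks and zeros on the diagonal. The following minimal Python routine (illustrative only, exponential time) makes the definition explicit.

import numpy as np

def hafnian(a):
    # Hafnian of an even-dimensional symmetric matrix, summing over all perfect
    # matchings of the indices: pair index 0 with every j and recurse on the rest.
    n = a.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(1, n) if k != j]
        total = total + a[0, j] * hafnian(a[np.ix_(keep, keep)])
    return total

# The 4 x 4 all-ones matrix corresponds to the complete graph K4, which has 3 perfect matchings.
print(hafnian(np.ones((4, 4))))   # prints 3.0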
Just like scattershot boson sampling, one of the main advantages of Gaussian boson sampling is that it leverages well-developed technologies from quantum optics, particularly squeezed light sources, which are easier to produce than deterministic single-photon sources. Squeezed vacuum states are routinely generated in laboratories with relatively high efficiency, and the use of such states avoids the need for post-selecting specific photon numbers at the input, as in the Fock-state or scattershot models.
The practical challenges of Gaussian boson sampling are similar to those of the other models. Photon loss, mode mismatch, and detector inefficiency all introduce errors that affect the output distribution. However, due to the nature of Gaussian states, Gaussian boson sampling may be more resilient to certain types of noise compared to Fock-state boson sampling. As a result, Gaussian boson sampling has emerged as a promising candidate for large-scale quantum sampling experiments, and several successful demonstrations of Gaussian boson sampling have been conducted, with increasing numbers of photons and modes.
Since the introduction of boson sampling in 2011, considerable progress has been made in experimentally realizing this computational task. These experiments, ranging from small-scale implementations with just a few photons to large-scale demonstrations of quantum supremacy, have shown the potential of boson sampling to push the boundaries of classical computational capabilities. Alongside the development of more sophisticated experimental techniques, various platforms—including integrated photonics, free-space optics, and fiber optics—have been used to demonstrate boson sampling, each overcoming significant technical challenges in photon generation, interference, and detection.
In this section we review the key experimental milestones in boson sampling, focusing first on early small-scale experiments that laid the groundwork for scalable implementations, and then on large-scale experiments that have made claims of quantum supremacy.
The earliest boson-sampling experiments focused on small-scale implementations, involving 3 to 6 photons and relatively simple optical networks. These demonstrations were crucial for validating the theoretical foundations of boson sampling, though they did not yet reach the regime where classical computers cannot simulate the process in a reasonable time. Despite their modest scale, these early experiments were essential proof-of-concept studies.
One of the first experimental realizations of boson sampling was reported in 2013 by Broome et al. [41], shown in Fig. 4. In their experiment, a 3-photon, 6-mode Fock-state boson-sampling setup was constructed using integrated photonic circuits on a silicon chip, which contained a network of beam splitters and phase shifters. This experiment demonstrated quantum interference between photons and verified that the resulting output distribution was consistent with the predicted boson-sampling distribution, i.e., the permanents of the submatrices of the interferometer’s unitary. That same year, Spring et al. [42] and Tillmann et al. [43] independently reported similar small-scale experiments using 3 and 4 photons respectively. These experiments used different optical platforms, including integrated waveguides and free-space optics, but all demonstrated interference patterns that could not easily be simulated by classical computers if scaled up. Many further experiments demonstrating and scaling up boson sampling have since been conducted [44–47].
After the initial small-scale demonstrations of boson sampling, research efforts shifted towards scaling up the number of photons to push the boundaries of classical simulation capabilities. A major milestone was reached in 2019, when a team at the University of Science and Technology of China (USTC) conducted a boson sampling experiment with 20 photons, using Fock states, which is shown in Fig. 5. This experiment represented the largest Fock-state boson sampling to date in an optical setup, involving 20 photons passing through a 60-mode interferometer [38].
In this experiment single-photon sources were carefully prepared, and high-precision optical circuits were used to ensure stable interference. The technical achievements required to produce and maintain 20 indistinguishable photons are considerable, as larger photon numbers increase susceptibility to noise, photon loss, and mode-matching errors. Despite these challenges, the USTC team successfully performed the experiment, demonstrating that the sampling complexity increased significantly compared to previous small-scale experiments.
Even at this scale, though, the 20-photon Fock-state boson-sampling experiment did not achieve quantum advantage. Classical algorithms can still simulate this setup without requiring substantial computational effort. The result highlighted the difficulty of scaling up Fock-state boson-sampling experiments to the point where classical simulation becomes infeasible. Achieving quantum advantage with Fock-state boson sampling would likely require even larger photon numbers and improved experimental efficiency, to reduce photon loss and detector inefficiency.
This 20-photon experiment remains the largest Fock-state boson-sampling demonstration (except for the latest Fock-state boson-sampling experiment using atoms [39]) and serves as a benchmark for further experimental advancements. It illustrates both the progress made and the remaining challenges in reaching a regime where classical simulation is no longer feasible, underscoring the importance of continued innovation in photon-source quality, optical-circuit integration, and detection technologies.
In recent years experimental groups have made significant advances in scaling up boson sampling, moving from small-scale implementations to large-scale experiments involving dozens of photons and hundreds of optical modes, by focusing on Gaussian boson sampling instead of Fock-state boson sampling. These large-scale experiments aim to demonstrate quantum supremacy, where the output distribution of the boson-sampling system becomes too complex for classical computers to simulate in any reasonable time frame. The primary challenge of these experiments is maintaining high photon-generation rates, low-loss optical networks, and efficient detection, all of which are required for scaling up boson sampling.
One of the most significant breakthroughs in this area came in 2020, when the USTC group reported the first large-scale Gaussian boson-sampling experiment, with 50 squeezed states distributed across 100 output modes [26], which is shown in Fig. 6. This experiment marked a transition from standard Fock-state boson sampling to Gaussian boson sampling using squeezed states of light. By leveraging squeezed vacuum states, this approach allowed the experimentalists to generate more photons and achieve larger mode counts, pushing classical simulation methods to their limits. They successfully detected up to 76 photon clicks at the same time using threshold detectors, and claimed a quantum advantage from this large number of coincident photon detections for the first time using boson sampling.
To make this quantum-advantage claim stronger and more rigorous, in 2021 the USTC group further scaled up their experiment, reporting a Gaussian boson sampling system involving 50 squeezed states, 144 output modes, and up to 113 photon-coincidence detections [27]. An important improvement over the previous experiment was that by using more optical modes, from 100 to 144, they could register more photon clicks, since they used threshold detectors rather than photon-number-resolving detectors. This increased number of photon clicks was the basis of a stronger quantum advantage claim. Another improvement, beyond the increased number of photons, was the ability to freely change the phase of the input squeezed states, making the setup more suitable for many practical applications. Also, by comparing many benchmarking methods, such as comparison of high-order correlations to the ideal case [48], as shown in Fig. 7, they supported their claim of quantum advantage.
Again, to further strengthen the quantum advantage from boson sampling, in 2023 the USTC group replaced their threshold detectors with pseudo-photon-number-resolving detectors by multiplexing each output mode into eight modes, each followed by a threshold detector [29]. By doing this, they observed up to 255 photon-click counts, far more than the previous 113 photon-coincidence detections, and claimed a substantially stronger quantum advantage.
In parallel, other groups have made important strides. In 2022, a team at Xanadu Quantum Technologies in Canada performed a Gaussian-boson-sampling experiment using integrated photonics and squeezed light sources, focusing on achieving scalability using time-bin modes [28]. More specifically, a distinction from the USTC group’s setup was that by employing time-bin modes instead of spatial modes, they could more easily scale up the experiment, using just a single source generating squeezed states and photon-number-resolving detectors (in principle a single photon-number-resolving detector would have been sufficient, but the experiment used 16 detectors to avoid crosstalk). A large-scale interferometer was configured by using variable beam splitters with different time delays, as illustrated in Fig. 8. An advantage of this setup over the USTC experiments was the flexibility to change the interferometer in real time, so that the circuit could easily be tuned for many applications. In addition, they used photon-number-resolving detectors to count the number of photons, unlike the threshold detectors used in the USTC experiments. Thanks to this setup, they could use as many squeezed vacuum states as modes (up to 288 squeezed states), and could detect up to 242 photons. As shown in Fig. 9, the experimental data are consistent with the ideal probability distributions to high accuracy. For the larger-scale experiments aimed at quantum advantage, they used cross-entropy benchmarking, a Bayesian test, and high-order correlation benchmarks to argue that their experimental data cannot be mimicked by classical algorithms.
These large-scale experiments have highlighted both the promise and the remaining challenges of boson sampling. Certainly these experiments represent a crucial step toward building scalable quantum systems that can tackle problems beyond classical reach. Nevertheless, the practicality of such systems for real-world applications is still limited by technical issues such as photon loss, noise, and detection inefficiency. More specifically, these large-scale boson-sampling experiments suffer from severe photon loss: more than half of the input photons are lost inside the circuit, due to various processes. In the following section we review the efforts to benchmark those experiments by developing classical algorithms and identifying the boundary between quantum and classical algorithms for boson sampling.
Given that boson sampling involves calculating matrix permanents, which is a #P-hard problem, and given hardness results for noiseless boson sampling, simulating large-scale boson sampling (even approximately) is believed to be computationally intractable on classical computers as the number of photons grows. Nevertheless, there have been extensive studies of classical algorithms for simulating boson sampling, because of its importance for benchmarking quantum experiments and understanding their computational complexity under realistic circumstances, such as when noise occurs.
In this section we first review the classical algorithms that aim to simulate noiseless boson sampling in an exact manner, which inevitably takes exponential time but is important to identify the boundary of a classical computer’s computational power for simulating boson sampling. We then review other classical algorithms that aim to simulate noisy boson sampling by taking advantage of the noise, which is crucial for practical benchmarking because the current experiments suffer from a large amount of noise, such as photon loss.
First of all, the main difficulty of simulating Fock-state boson sampling is computing marginal probabilities, which is necessary to use the chain-rule-based sampler, i.e.,
$$p(m_1, \ldots, m_M) = p(m_1)\, p(m_2 \,|\, m_1) \cdots p(m_M \,|\, m_1, \ldots, m_{M-1}),$$
but the complexity of computing marginal probabilities of Fock-state boson sampling is too great [15]. For example, a brute-force algorithm that obtains the marginal probabilities by computing all of the output probabilities must evaluate $\binom{M+N-1}{N}$ permanents, one for each possible output configuration, which quickly becomes intractable.
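To illustrate the chain-rule scheme (a generic sketch, not the algorithm of [51]; the toy joint distribution below is arbitrary), the following Python snippet samples a multi-mode outcome one mode at a time from a distribution stored explicitly as an array. In boson sampling the expensive step is precisely the marginalization performed inside the loop.

import numpy as np

rng = np.random.default_rng(0)

# Toy joint distribution over three modes, each containing 0, 1, or 2 photons (arbitrary numbers).
p = rng.random((3, 3, 3))
p /= p.sum()

def chain_rule_sample(joint, rng):
    # Draw one sample mode by mode: at step k, marginalize over the remaining modes to
    # obtain p(m_k | m_1, ..., m_{k-1}), then condition on the value just drawn.
    sample = []
    sub = joint
    for _ in range(joint.ndim):
        marginal = sub.sum(axis=tuple(range(1, sub.ndim)))
        marginal = marginal / marginal.sum()
        k = rng.choice(len(marginal), p=marginal)
        sample.append(int(k))
        sub = sub[k]
    return tuple(sample)

print(chain_rule_sample(p, rng))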
However, Clifford and Clifford [51] developed a classical algorithm for simulating Fock-state boson sampling without directly computing the output marginal probabilities. The main idea behind this algorithm is to decompose the output probability distribution as a probabilistic mixture of many probability distributions, the marginal probabilities of each of which are much easier to compute than the original output probabilities. Ultimately the time complexity of the Clifford-Clifford algorithm is $O(N 2^N + \mathrm{poly}(M, N))$, which is much smaller than that of the brute-force computation of all permanents. They later developed another classical algorithm, called the faster Clifford-Clifford algorithm, which is faster when the number of photons is comparable to the number of modes, i.e., N ≈ M [52]. To the best of our knowledge, the Clifford-Clifford algorithm remains the best-known classical algorithm for exact Fock-state boson sampling.
On the other hand, for Gaussian boson sampling the first classical algorithm has time complexity $O(M N^3 2^N)$ and space complexity polynomial in M and N [53]. The main idea of this algorithm is to compute marginal probabilities, which are written as hafnians, and to use the chain rule of conditional probabilities. The main bottleneck of this algorithm is, again, computing the marginal probabilities. The reason, however, differs from the Fock-state case: The marginal state of a Gaussian state is easily described by the covariance matrix of the output state, which can be obtained efficiently, but the marginal probabilities correspond to a reduced density matrix rather than a pure quantum state, and computing output probabilities of such a mixed state typically carries a quadratically higher cost. This is why the time complexity scales as $2^N$ instead of $2^{N/2}$, the latter being the cost of computing a hafnian for a pure state with N photons [54].
This bottleneck was then resolved by using an idea similar, at a high level, to that of the Clifford-Clifford algorithm, in the sense that the output probability distribution is decomposed into a probabilistic mixture of probability distributions corresponding to pure Gaussian states (see Fig. 10) [55]. More specifically, this classical algorithm computes the output probability of photon-number detections together with heterodyne detections, i.e., projections onto coherent states, at the modes that would otherwise be marginalized; in this way, one always works with a pure state’s output probabilities, because no marginalization is performed. With the bottleneck removed, the time complexity becomes $O(M N^3 2^{N/2})$ as desired, which is consistent with the complexity of computing hafnians [55]. This algorithm has since been further optimized, especially when collisions (more than one photon per mode) occur [56], just like the faster Clifford-Clifford algorithm.
Meanwhile, there have been efforts to develop classical algorithms that exploit the structure of the interferometer [57, 58]. The aforementioned classical algorithms do not exploit the structure of a given interferometer, so their time complexity is always exponential, even if the interferometer is relatively simple. A classical algorithm developed by Oh et al. [57] for both Fock-state and Gaussian boson sampling takes advantage of this structure: When the underlying graph formed by the interferometer has limited treewidth, the complexity of the algorithm scales with the treewidth rather than with the number of photons. From a physical perspective, this means that when the circuit depth of the interferometer is limited, the classical algorithm becomes efficient even if the number of photons is large.
In practice, boson-sampling experiments inevitably involve noise, such as photon loss and partial distinguishability of photons. Classical simulations must account for this noise to provide accurate benchmarks, and these simulations are crucial for verifying quantum advantage in noisy, real-world experiments, as they provide a benchmark to distinguish between classical noise effects and genuine quantum behavior. Hence, there have been extensive studies of classical algorithms for simulating noisy boson sampling.
Rahimi-Keshari et al. [59] developed methods for efficiently simulating boson sampling under realistic experimental conditions, by incorporating noise models into classical simulations. The main idea behind this algorithm is to use a quasiprobability representation [60, 61] and to find a regime where the entire process can be described by positive probability distributions, which can be efficiently simulated by classical computers [62]. Here the authors observed that when the system is subject to noise or photon loss, the quasiprobability distribution tends to converge to a positive distribution, so their classical algorithm can exploit this when noise or photon loss is severe.
Many other classical algorithms have been developed inspired by this algorithm. Garcia-Patron et al. [35] invented a classical algorithm that takes advantage of photon loss using the quasiprobability-distribution method. The main idea is to approximate a lossy single-photon state by a thermal state, which has a positive P-representation and can be efficiently simulated with a small approximation error when the loss rate is high. A similar observation was made in Gaussian boson sampling and led to another classical algorithm [63], where again a squeezed thermal state, the output of a squeezed vacuum state after photon loss, can be approximated by another squeezed thermal state of a positive quasiprobability distribution when the loss rate is sufficiently high. This algorithm has been used as a benchmark for recent Gaussian-boson-sampling experiments [27–29, 64]. A caveat of this approach is that it requires a high loss rate for the algorithms to work with high accuracy.
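The intuition behind these loss-based algorithms can be checked with a few lines of Python (a qualitative illustration only, not the actual algorithm of [35]): a single photon transmitted with probability η becomes the diagonal state η|1⟩⟨1| + (1 − η)|0⟩⟨0|, and its trace distance to a thermal state with the same mean photon number shrinks as the loss increases.

import numpy as np

def lossy_single_photon(eta, nmax):
    # Photon-number distribution of a single photon after a loss channel with transmission eta.
    p = np.zeros(nmax + 1)
    p[0], p[1] = 1.0 - eta, eta
    return p

def thermal(nbar, nmax):
    # Photon-number distribution of a thermal state with mean photon number nbar.
    n = np.arange(nmax + 1)
    return nbar ** n / (1.0 + nbar) ** (n + 1)

def trace_distance_diagonal(p, q):
    # For states diagonal in the same basis, the trace distance is half the L1 distance.
    return 0.5 * np.abs(p - q).sum()

for eta in (0.8, 0.5, 0.2, 0.05):
    d = trace_distance_diagonal(lossy_single_photon(eta, 30), thermal(eta, 30))
    print(f"transmission {eta}: distance to thermal state {d:.4f}")

The smaller this distance, the smaller the error incurred by replacing the lossy input with a state that has a positive quasiprobability representation and is therefore classically simulable.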
Meanwhile, tensor-network-based classical algorithms have also been proposed to simulate lossy boson sampling [65–67]. The main observation is that loss intuitively destroys the entanglement of the system, so matrix-product states or matrix-product operators can be good candidates for simulating lossy boson sampling, even for a large system [68–70]. Numerical results confirm that the entanglement does not grow much as the system size increases, and the method has a potential advantage over the aforementioned algorithms in that its accuracy can be improved by increasing the bond dimension used in the simulation. However, the matrix-product-operator method itself entails a large computational cost; thus, it has not been scaled up to sufficiently large system sizes.
However, such studies have inspired another classical algorithm that can ultimately simulate the state-of-the-art Gaussian-boson-sampling experiments by taking advantage of a high loss rate [71]. While this classical algorithm is also based on matrix-product states [65], it employs an additional important structural property of lossy Gaussian states, namely that the output quantum state can be decomposed into a Gaussian-boson-sampling circuit followed by an additional random displacement channel, as illustrated in Fig. 11. Using this decomposition, the matrix-product-state method becomes highly efficient, especially when the loss rate is high; ultimately it succeeds in simulating the largest Gaussian-boson-sampling experiments at a reasonable computational cost, calling into question the quantum-advantage claims of the state-of-the-art experiments and highlighting the importance of reducing photon loss and other experimental noise for a rigorous quantum advantage. The algorithm thus sets the practical boundary for Gaussian boson sampling and is currently the best-known classical algorithm, as shown in Fig. 12. For example, assuming that the storage capacity of Frontier sets the classical computer’s boundary, we need at least 80 squeezed states with squeezing parameter r = 1.5 as input for the worst-case circuit, which corresponds to an average photon number of around 362.7. Note that this boundary can shift as better classical algorithms are developed. The best-known classical algorithm exploits the structure of Gaussian states, and its generalization to lossy Fock-state boson sampling is still open.
While many classical algorithms have focused on simulating lossy boson sampling because photon loss is the most dominant noise model in the current experiments, photon loss is not the only noise source in experiments, and many other classical algorithms have also been developed to take advantage of the other noise sources, such as partial distinguishability [72–74] or beam-splitter noise [75–77].
The competition between classical algorithms and boson-sampling experiments is still ongoing. Nevertheless, given the substantial experimental progress toward quantum advantage with boson sampling, and since it is likely that a rigorous quantum advantage can be achieved in the near future, it is natural and important to seek specialized applications that exploit this computational advantage, particularly where classical computational methods struggle. Such applications include quantum chemistry, combinatorial optimization, and machine learning [31]. The inherent complexity of boson sampling provides computational power in areas where solving specific, structured problems is intractable for classical algorithms, offering advantages across a variety of scientific domains. While we will focus on two promising proposals for applications (the molecular-vibronic-spectra problem and graph-related problems), there are other interesting proposals for boson sampling’s applications, such as point processes [78] and quantum cryptography [79–81].
One of the most compelling applications of boson sampling is the study of molecular vibronic spectra [82], which are critical for understanding molecular behavior during chemical reactions. Vibronic spectra arise from the interplay of electronic and vibrational transitions within molecules, and calculating these spectra for large molecules is highly computationally demanding for classical methods [83]. The computational difficulty arises because the number of transition configurations is exponentially large and, in addition, computing a transition probability corresponds to computing the hafnian of an associated matrix, which is #P-hard. Interestingly, Gaussian boson sampling leverages quantum-optical processes to model the vibrational modes of molecules using Gaussian states of light, offering a potential quantum advantage by computing these complex spectra more efficiently than classical methods, as illustrated in Fig. 13. This application has direct implications for quantum chemistry, drug discovery, and materials science, where understanding molecular structures and behaviors is essential.
Due to its potential utility, many experiments have been conducted to demonstrate the ability of Gaussian boson sampling to compute molecular vibronic spectra of small molecules [84–88] using photonic devices and superconducting bosonic processors, although the system size is not sufficiently large to enjoy the potential quantum advantage yet. An example of the experiments is shown in Fig. 14.
Meanwhile, the quantum advantage from Gaussian boson sampling for solving the molecular-vibronic-spectra problem has not been rigorously proven. In fact, a quantum-inspired classical algorithm has recently been developed [89]. The main observation behind this classical algorithm is that the Gaussian-boson-sampling-based quantum algorithm entails a sampling error, due to the finite number of samples. Based on this observation, the classical algorithm also aims to approximate the spectra rather than compute them exactly, and to do so it computes the Fourier transform of the spectra instead of the spectra directly. Ultimately, the proposed classical algorithm can achieve the same accuracy as Gaussian boson sampling. The algorithm has been generalized to the problem corresponding to Fock-state boson sampling by using Gurvits’s algorithm [90, 91] to compute the Fourier coefficients efficiently up to a certain additive error. While the algorithm can achieve the same accuracy and efficiency as the associated boson samplers, i.e., Fock-state and Gaussian boson sampling, its accuracy is not sufficient if one generalizes the problem to displaced squeezed Fock-state boson sampling; thus, the latter may provide a potential quantum advantage.
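For reference, Gurvits’s randomized algorithm mentioned above estimates the permanent of a matrix with operator norm at most one to additive error, by averaging Glynn-type estimators whose expectation value is exactly the permanent. A minimal Python sketch is given below; it is illustrative only, and the test matrix is an arbitrary example.

import numpy as np

def gurvits_permanent_estimate(a, num_samples, rng):
    # Glynn/Gurvits estimator: for x drawn uniformly from {-1, +1}^n,
    # E[ prod_i x_i * prod_i (a @ x)_i ] equals Per(a), and each term is bounded by
    # ||a||^n, so averaging gives an additive-error estimate of the permanent.
    n = a.shape[0]
    total = 0.0 + 0.0j
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)
        total += np.prod(x) * np.prod(a @ x)
    return total / num_samples

rng = np.random.default_rng(0)
a = np.ones((3, 3)) / 3.0   # operator norm 1; exact permanent is 3!/27 = 2/9 ≈ 0.2222
print(gurvits_permanent_estimate(a, 20000, rng))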
Another promising area for applications of boson sampling is in solving graph-based problems, due to an interesting correspondence between Gaussian boson sampling and graphs, especially in combinatorial optimization and graph theory. Many complex computational problems, such as graph isomorphism [92, 93], finding the densest subgraph [94], and finding the maximum clique [95], can be mapped onto the Gaussian-boson-sampling framework [96]. The problem of graph isomorphism involves determining whether two graphs are identical in structure, which is computationally difficult for classical algorithms as the number of vertices grows. Gaussian boson sampling can encode graph structures into the photon-mode configurations of the sampling device, allowing the quantum interference of photons to reveal patterns that correspond to graph properties. By analyzing the output distribution of the photons, one can infer whether two graphs are isomorphic, providing a potential speedup over classical methods.
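A common way to program a graph into a Gaussian-boson-sampling device, used in several of the cited proposals, is to take the Takagi decomposition of the rescaled adjacency matrix and map its singular values to squeezing parameters and its unitary factor to the interferometer; output patterns then occur with probabilities governed by hafnians of adjacency submatrices. The following Python sketch is illustrative only, with a made-up four-vertex graph and an arbitrary rescaling factor.

import numpy as np

def takagi_real_symmetric(a):
    # Takagi decomposition a = U diag(lam) U^T with lam >= 0, obtained for a real symmetric
    # matrix from its eigendecomposition by giving columns of negative eigenvalues a factor of i.
    d, v = np.linalg.eigh(a)
    lam = np.abs(d)
    u = v.astype(complex) * np.where(d >= 0, 1.0 + 0.0j, 1.0j)
    return lam, u

# Adjacency matrix of a small illustrative graph on four vertices.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)

c = 0.9 / np.abs(np.linalg.eigvalsh(adj)).max()   # rescale so every Takagi value is below 1
lam, u = takagi_real_symmetric(c * adj)
r = np.arctanh(lam)                               # squeezing parameter for each input mode
print(np.allclose((u * lam) @ u.T, c * adj))      # verify the decomposition
print(r)                                          # parameters programmed into the device

With the interferometer set to u and the ith input mode squeezed by r_i, subgraphs with many perfect matchings, and hence large hafnians, are sampled with enhanced probability, which is the mechanism exploited by the graph applications discussed here.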
In addition to graph isomorphism, Gaussian boson sampling has applications in other graph-related problems, such as finding maximum cliques (complete subgraphs) in large networks, an NP-hard problem relevant to analyzing social networks, biological systems, and optimization, and finding the densest subgraph [87, 97]. Gaussian boson sampling may be able to find the solution, i.e., the densest subgraph or the maximum weighted clique, because the device can be designed to increase the probability of obtaining such solutions, potentially enabling more efficient searches for solutions to problems that would otherwise require exhaustive classical search.
Many experiments have been carried out to demonstrate these proposals (e.g., Figs. 15 and 16), showing that a Gaussian boson sampler outperforms certain kinds of classical algorithms, such as the uniform sampler [98–100]. However, as with the molecular-vibronic-spectra problem, the quantum advantage here has not been rigorously proven, and some classical algorithms claim that the efficiency of Gaussian boson sampling can be attained classically [96, 101]. Therefore, a more rigorous complexity-theoretic proof of the quantum advantage of Gaussian boson sampling in solving graph-related problems is needed.
Boson sampling has proven to play a vital role toward demonstrating quantum computational advantage. By focusing on the sampling problem, this model, together with random circuit sampling, highlights how even restricted quantum systems can outperform classical supercomputers on specific tasks. Over the past decade, boson-sampling experiments have evolved from small-scale proof-of-concept demonstrations involving a few photons to large-scale systems that claim quantum supremacy. These experimental achievements illustrate the practicality of quantum sampling tasks for proving quantum advantage, even prior to the advent of fault-tolerant universal quantum computers.
However, significant challenges remain in scaling these systems further and addressing issues related to noise, photon loss, and detection inefficiency. Currently the most dominant obstacle to achieving rigorous quantum advantage is photon loss. Thus, reducing the photon-loss rate is the first crucial step to achieving quantum advantage. Nonetheless, advancements in experimental techniques, such as the use of more efficient photon sources, integrated photonics, and better noise-tolerant designs, suggest that boson sampling will continue to evolve as a practical quantum technology.
Beyond its role in demonstrating quantum supremacy, boson sampling has the potential for practical applications. Many interesting applications have been proposed, such as problems involving molecular vibronic spectra and graph theory, but further theoretical studies on the complexity of those applications must be conducted for a rigorous quantum advantage from boson sampling. Also, more interesting applications are expected to be found in the near future.
In addition, boson sampling is an important stepping stone toward accomplishing the ultimate goal of building fault-tolerant universal quantum computers, and many efforts in realizing large-scale boson sampling experiments align with the ultimate goal. Therefore, as research continues, boson sampling is likely to become an essential tool in the quantum computing landscape. Whether through the exploration of new applications or further scaling in experimental capabilities, boson sampling demonstrates how specific quantum tasks can be accelerated beyond the reach of classical computation, offering exciting possibilities for both theoretical and practical advancements in quantum science.
C. O. was supported by the Quantum Technology R&D Leading Program (Quantum Computing) (Grant no. RS-2024-00431768) through the National Research Foundation of Korea (NRF), funded by the Korean government (Ministry of Science and ICT (MSIT)).
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
Curr. Opt. Photon. 2025; 9(1): 1-18
Published online February 25, 2025 https://doi.org/10.3807/COPP.2025.9.1.1
Copyright © Optical Society of Korea.
Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
Correspondence to:*changhun0218@kaist.ac.kr, ORCID 0000-0003-2002-1928
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Boson sampling is a restricted model of quantum computation, designed to achieve quantum advantage using nonuniversal quantum systems. By harnessing the quantum interference of indistinguishable bosons (typically photons), it becomes possible to sample from a probability distribution, which is intractable for classical computers. This paper reviews the theoretical foundations of boson sampling and its variations, including Fock-state, scattershot, and Gaussian boson sampling, along with significant experimental progress, from early small-scale demonstrations to large-scale quantum supremacy claims. We further explore classical algorithms for simulating boson sampling, which are crucial for benchmarking the performance of experimental results. Finally we examine potential applications of boson sampling in various fields, including simulation of molecular vibronic spectra in quantum chemistry, and solution of graph-based problems in optimization. These applications demonstrate the wide-ranging impact that boson sampling could have on industries that rely on complex computational models, making it a promising quantum technology for near-term applications.
Keywords: Boson sampling, Quantum advantage, Quantum computer, Quantum optics
Quantum computers are widely believed to hold the potential to solve problems that are intractable for classical computers, offering unprecedented computational power in areas such as cryptography, optimization, and materials science [1–3]. However, one of the significant obstacles hindering us from realizing this potential is the presence of inevitable noise in quantum systems. Quantum noise, caused by unwanted interactions with the environment and imperfections in quantum operations, can significantly disturb the quantum states during computation, which causes errors in the results. While quantum error correction has been proposed as a solution to this problem [1], enabling reliable and large-scale quantum computation for arbitrarily long run times, and while there has been rapid experimental progress in realizing it in various experimental platforms [4–9], its practical implementation for larger systems and eventually for large-scale quantum computation requires a vast overhead of additional qubits and resources, which are beyond the capabilities of current technology.
Hence, the development of large-scale, fault-tolerant universal quantum computers capable of outperforming classical systems on a broad range of problems remains a daunting challenge. This has led to increased interest in employing near-term quantum devices: Quantum systems with limited capabilities that may still provide computational advantages over classical methods for specific tasks, even though they suffer from noise. More specifically, these are called noisy intermediate-scale quantum (NISQ) devices [10], since they are still prone to noise because we do not apply quantum error correction, and the number of qubits is still limited but large enough to prevent efficient brute-force classical simulation. One of the most promising avenues within this framework is the study of sampling problems [11], where quantum devices are designed to generate samples from probability distributions that are hard to mimic using classical computers. The importance of such sampling problems lies in the complexity-theoretic hardness of simulating the sampling using classical computers, i.e., quantum advantage. The most representative sampling problems are random circuit sampling [12–14], boson sampling [15–17], and instantaneous quantum polynomial (IQP) sampling [18–20].
First of all, in random circuit sampling we simply apply a randomly chosen quantum circuit of sufficient depth to many qubits, and then measure the qubits on a computational basis. The measurement provides a random bit string, the length of which is the same as the number of qubits, that follows the output distribution determined by the circuit configuration. A crucial feature of the distribution that makes random circuit sampling attractive is that no classical algorithm can sample from it efficiently (unless some plausible conjectures turn out to be false). Random circuit sampling gained more prominence as Google’s Sycamore quantum processor performed a random-circuit-sampling experiment with 53 qubits, marking a milestone in the quantum supremacy race by claiming that their output could not be simulated by classical supercomputers within a reasonable time [13]. While there has been much progress in classically simulating such circuits, mostly by using tensor-network algorithms [21–23], random circuit sampling of better quality and quantity has been implemented by several groups [24, 25] and has not been simulated using classical computers so far.
IQP sampling involves much simpler quantum circuits that only contain diagonal gates on the Pauli-X basis, after and before a single layer of Hadamard gates. Although IQP circuits are not universal for quantum computation because of their simple structure, their output distributions are still conjectured to be hard to sample from classically via random circuit sampling. There have been recent experiments implementing IQP sampling using atoms, along with error-detection schemes [7].
Finally, boson sampling, the main focus of this review, was first proposed by Aaronson and Arkhipov [15] in 2011. Unlike other sampling problems that use qubits, boson sampling explores the computational power of linear-optical networks. In this proposed setup, indistinguishable bosons (typically photons) are passed through a large-scale interferometer, forming complicated interference patterns and ultimately creating a probability distribution from output states that is hard to simulate classically, when output photons are measured on the Fock basis. Despite its nonuniversality, just like IQP circuits boson sampling is considered a promising candidate for demonstrating quantum advantage, due to both the complexity-theoretic hardness results showing that any classical algorithm would take exponential time to simulate the systems, and its experimental feasibility, as its requirement for dynamics is simply linear-optical circuits. The original boson sampling based on Fock-state input has been further extended to more general input states, such as Gaussian boson sampling [16, 17], to make scaling up experiments and eventually achieving quantum advantage more feasible.
Since the theoretical proposals, there have been several successful experimental demonstrations of boson sampling [26–29], showing the feasibility of the approach with increasing numbers of photons and optical modes. However, these experiments also highlight the technical challenges that must be overcome to scale boson sampling further. Issues such as photon loss, imperfect mode matching, and the need for high-quality photon sources pose significant obstacles to achieving scalable, high-fidelity boson-sampling systems. Noise and imperfections can degrade the quality of the output quantum states, making it harder to distinguish genuine quantum advantage from classical noise effects. This has led to ongoing efforts to improve the scalability and reliability of these systems, to make them outperform classical computers.
The study of boson sampling has broader implications for our understanding of quantum computational complexity and the limits of classical computation. Boson sampling and other quantum sampling problems challenge the extended Church-Turing thesis [30], which posits that any physically realizable computation can be efficiently simulated by a classical computer. The inability of classical systems to efficiently simulate boson sampling, if rigorously proven, would provide a strong counterexample to this thesis and highlight the unique power of quantum systems in solving specific tasks. In this context, boson sampling offers not just an experimental platform for quantum advantage, but also theoretical insights into the computational boundaries between classical and quantum systems. Furthermore, it has been shown that boson sampling has many potential applications to practical problems, such as quantum chemistry, optimization problems, and machine learning [31].
This review provides a detailed exploration of boson sampling, beginning with its theoretical foundations and computational complexity, proceeding to its various versions, and then moving to experimental implementations and challenges. We will discuss key milestones in experimental work and examine the practical obstacles to scaling. We will then review various classical algorithms used to benchmark boson-sampling experiments and the quantum-advantage claims based on them. Finally, we will discuss boson sampling’s potential practical applications for solving real-world problems, beyond mere demonstration of quantum advantage. Through this review, we aim to offer a comprehensive understanding of the significance of boson sampling as both a computational challenge and an experimental tool for exploring the capabilities of quantum systems.
Boson sampling is a computational problem that can be implemented using linear-optical quantum circuits but is hard to classically simulate, making it a promising candidate for demonstrating quantum advantage. In this model, indistinguishable bosons (typically photons) are passed through a linear-optical network composed of beam splitters and phase shifters. Due to quantum interference formed through the linear-optical network, the bosons create a highly complex quantum state with a large degree of entanglement across output modes, and the task is to sample from the probability distribution obtained by measuring the output photon numbers of each output mode. The computational difficulty of simulating such quantum interference on classical computers increases exponentially with the number of bosons and the number of optical modes in the system. This characteristic positions boson sampling as a key computational problem for early quantum devices, specifically in the NISQ era when fully fault-tolerant quantum computers are not yet available.
Several variations of boson sampling have been proposed, either to extend the model or improve its experimental feasibility and scalability. These include the original Fock-state boson sampling [15], scattershot boson sampling [32], and Gaussian boson sampling [16]. In this section we review these variations in detail, focusing on their theoretical structures and experimental implementations. While there are other variants, such as bipartite boson sampling [33] and nonlinear boson sampling [34], we focus on the three boson-sampling models above because of their experimental realizations.
Fock-state boson sampling is the canonical model for boson sampling and the original problem proposed by Aaronson and Arkhipov [15]. In this setup, illustrated in Fig. 1, N bosons are initialized in a specific input configuration of a linear-optical network composed of M modes, where each boson occupies a distinct mode in a definite number state, known as a Fock state. Without loss of generality, the input state can be written as $|\mathbf{n}\rangle = |1, \dots, 1, 0, \dots, 0\rangle$, where the first N ones mean that the first N modes each contain a single photon, while the remaining M − N zeros mean that the corresponding modes are in the vacuum state. The input bosons are sent through a large interferometer, resulting in a quantum interference pattern that determines the probability distribution of the bosons across the output modes. After the interferometer we measure the number of bosons in each output mode, which produces a random M-tuple $\mathbf{m} = (m_1, \dots, m_M)$, where the total photon number is conserved, i.e., $\sum_{i=1}^{M} m_i = N$.
Mathematically, the output probability distribution is described by the so-called permanent of a submatrix of the interferometer’s unitary matrix U [15, 36]:

$$p(\mathbf{m}) = \frac{\left| \mathrm{Per}\, U_{\mathbf{n},\mathbf{m}} \right|^2}{n_1! \cdots n_M! \, m_1! \cdots m_M!},$$

where $\mathbf{n}$ is the input photon configuration, $U_{\mathbf{n},\mathbf{m}}$ is the submatrix of U associated with the input and output photon configurations (obtained by repeating the columns and rows of U according to $\mathbf{n}$ and $\mathbf{m}$), and Per denotes the matrix permanent. Computing the permanent of large matrices is known to be #P-hard even in approximation [15, 37], which means that computing the output probability of boson-sampling circuits for a large number of bosons and modes is also #P-hard. More technically, for a Haar-random unitary U the corresponding output probabilities are given by squared permanents of Gaussian random matrices, whose approximation is conjectured to be #P-hard on average [15]. Such #P-hardness of computing the output probability leads to the hardness of classical simulation of boson sampling, because otherwise the Stockmeyer reduction technique would yield $\mathrm{PP} \subseteq \mathrm{BPP}^{\mathrm{NP}}$ and thus, combining this with Toda’s theorem, the polynomial hierarchy would collapse to a finite level (see [15] for more details). This is what makes Fock-state boson sampling an excellent candidate for demonstrating quantum advantage.
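To make the permanent formula above concrete, the following minimal Python sketch (assuming NumPy and SciPy; all function names are hypothetical) evaluates a collision-free output probability by computing the permanent of the relevant submatrix with Ryser's inclusion-exclusion formula, whose exponential cost is exactly the classical bottleneck discussed above.

```python
import numpy as np
from itertools import combinations
from scipy.stats import unitary_group  # Haar-random unitaries for testing

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula
    (exponential time in n)."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

def output_probability(U, in_modes, out_modes):
    """Probability of detecting one photon in each of out_modes, given single
    photons injected into in_modes (collision-free case, so all factorials are 1)."""
    sub = U[np.ix_(out_modes, in_modes)]
    return abs(permanent_ryser(sub)) ** 2

# Example: 3 photons in the first 3 modes of a Haar-random 6-mode interferometer.
U = unitary_group.rvs(6)
print(output_probability(U, in_modes=[0, 1, 2], out_modes=[1, 3, 5]))
```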
Despite its theoretical significance, implementing Fock-state boson sampling in practice is technically challenging. In particular, aside from technical difficulties shared with other boson-sampling models, such as high-precision optical networks with low loss and perfect mode matching, Fock-state boson sampling requires highly efficient, deterministic single-photon sources that can reliably generate multiple indistinguishable photons simultaneously, which at present is highly demanding. Still, experimental realizations of Fock-state boson sampling have made progress over the years, with demonstrations of systems using up to 20 photons [38] (see Section 3.1). However, scaling up these experiments further to achieve quantum advantage remains difficult, due to the abovementioned technical challenges. This difficulty has motivated researchers to explore alternative boson-sampling models that reduce the dependence on high-quality, deterministic single-photon sources, leading to scattershot and Gaussian boson sampling. We note that there has been a recent experiment with a large number of bosons, using atoms [39].
Scattershot boson sampling is an extension of the Fock-state boson-sampling model that aims to overcome the practical limitations of the simultaneous generation of many photons, as shown in Fig. 2 [32]. In the standard Fock-state boson-sampling model, generating a fixed number of photons and injecting them into specific input modes with perfect timing is a major technical hurdle in experiments. Typically a single photon is generated by first producing a photon pair probabilistically using spontaneous parametric down-conversion, and then postselecting the detection event of a single photon on one side, which heralds a single photon on the other side. Because of the postselection procedure required to guarantee that N photons are generated in the desired configuration, e.g., $|\mathbf{n}\rangle = |1, \dots, 1, 0, \dots, 0\rangle$, there is an exponentially large overhead as we scale up the system size.
Scattershot boson sampling alleviates this issue by accepting all events with N photons in total, without requiring the N photons to be generated in one specific configuration. In this way the postselection on generating N photons anywhere entails only polynomial overhead, instead of exponential overhead. Interestingly, the hardness of classical simulation is maintained just as in Fock-state boson sampling.
More specifically, multiple probabilistic photon sources are placed at various input modes of the interferometer, replacing single-photon sources. Such probabilistic photon sources generate so-called two-mode squeezed vacuum states, in which photons occur in pairs across two modes. Thus a two-mode squeezed vacuum state may have more than a single pair of photons:

$$|\mathrm{TMSV}\rangle = \frac{1}{\cosh r} \sum_{n=0}^{\infty} (\tanh r)^n |n\rangle |n\rangle,$$
where r is the squeezing parameter. Rather than requiring a fixed configuration of N input photons, scattershot boson sampling allows any subset of these sources to generate photons at a given time, as long as the total photon number is N. The configuration of photons entering the interferometer can be inferred from the photons detected in the partner modes of the two-mode squeezed vacuum sources, which serve as heralds for the postselection. This “scattershot” approach increases the likelihood of successfully producing the desired number of photons, as the experiment benefits from a large number of potential input modes.
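For intuition, the pair-number statistics implied by this state are easy to tabulate. The short sketch below (NumPy only; the function name is hypothetical) shows that, for moderate squeezing, a heralding click still leaves a non-negligible chance of multi-pair emission, which is exactly the overhead that scattershot boson sampling is designed to manage.

```python
import numpy as np

def tmsv_pair_distribution(r, n_max):
    """P(n pairs) = tanh(r)^(2n) / cosh(r)^2 for a two-mode squeezed vacuum
    with squeezing parameter r, truncated at n_max pairs."""
    n = np.arange(n_max + 1)
    return np.tanh(r) ** (2 * n) / np.cosh(r) ** 2

p = tmsv_pair_distribution(r=0.5, n_max=10)
print(p[0])            # probability of emitting no pair
print(p[1])            # probability of exactly one pair (the desired heralded event)
print(1 - p[0] - p[1]) # probability of multi-pair emission (a heralding error source)
```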
The key advantage of scattershot boson sampling is that it reduces the experimental burden of producing a specific configuration of input photons, while still maintaining the quantum interference that makes the sampling task computationally difficult for classical systems. Furthermore, it allows for experiments to use photon-pair sources like spontaneous parametric down-conversion, which are more readily available and easier to use than deterministic single-photon sources.
Inspired by scattershot boson sampling, another extension of the Fock-state boson-sampling model has appeared, called Gaussian boson sampling [16, 17, 40], which uses Gaussian states (quantum states of light that can be described by Gaussian functions in phase space) as inputs, instead of Fock states. The most common Gaussian states used in this model are single-mode squeezed vacuum states, which are generated by squeezing the quantum vacuum using nonlinear optical processes such as parametric down-conversion. Unlike Fock-state boson sampling, where the input is a fixed number of single photons, Gaussian boson sampling allows for continuous-variable quantum states as inputs, as scattershot boson sampling does, making it a more experimentally accessible model. While scattershot boson sampling also employs squeezed states, in Gaussian boson sampling all input bosons pass through the linear-optical circuit, as described in Fig. 3; thus there is no postselection procedure at all.
More specifically, in Gaussian boson sampling single-mode squeezed vacuum states are injected into the input modes of a linear-optical network. After these states pass through the network, which introduces complex quantum interference, the number of bosons in each output mode is measured, which again produces random numbers $(m_1, \dots, m_M)$, where each $m_i$ represents the number of photons detected in the ith mode. The setup is therefore identical to Fock-state boson sampling, except for the input states. The computational complexity of Gaussian boson sampling is also related to matrix permanents, but with an important distinction due to the difference of the input states: The photon statistics in Gaussian boson sampling are connected to hafnians, a generalization of permanents that describes correlations in Gaussian states. Using a similar Stockmeyer reduction together with the #P-hardness of computing hafnians, we can also prove the hardness of simulating large-scale Gaussian-boson-sampling systems, under some plausible assumptions [16, 17].
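As a concrete (if exponentially slow) reference point, the hafnian of a small symmetric matrix can be evaluated by summing over all perfect matchings of its index set. The sketch below (NumPy; the function name is hypothetical) does exactly this and is only meant to make the quantity behind Gaussian-boson-sampling probabilities tangible.

```python
import numpy as np

def hafnian_naive(A):
    """Hafnian of a symmetric matrix of even dimension, computed by recursively
    pairing index 0 with every remaining index (sum over perfect matchings).
    Exponentially slow; for illustration only."""
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    rest = list(range(1, m))
    for k, j in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        total += A[0, j] * hafnian_naive(A[np.ix_(remaining, remaining)])
    return total

# Sanity check: haf([[0, a], [a, 0]]) = a.
print(hafnian_naive(np.array([[0.0, 2.0], [2.0, 0.0]])))  # 2.0
```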
Just like scattershot boson sampling, one of the main advantages of Gaussian boson sampling is that it leverages well-developed technologies from quantum optics, particularly squeezed light sources, which are easier to produce than deterministic single-photon sources. Squeezed vacuum states are routinely generated in laboratories with relatively high efficiency, and the use of such states avoids the need for post-selecting specific photon numbers at the input, as in the Fock-state or scattershot models.
The practical challenges of Gaussian boson sampling are similar to those of the other models. Photon loss, mode mismatch, and detector inefficiency all introduce errors that affect the output distribution. However, due to the nature of Gaussian states, Gaussian boson sampling may be more resilient to certain types of noise compared to Fock-state boson sampling. As a result, Gaussian boson sampling has emerged as a promising candidate for large-scale quantum sampling experiments, and several successful demonstrations of Gaussian boson sampling have been conducted, with increasing numbers of photons and modes.
Since the introduction of boson sampling in 2011, considerable progress has been made in experimentally realizing this computational task. These experiments, ranging from small-scale implementations with just a few photons to large-scale demonstrations of quantum supremacy, have shown the potential of boson sampling to push the boundaries of classical computational capabilities. Alongside the development of more sophisticated experimental techniques, various platforms—including integrated photonics, free-space optics, and fiber optics—have been used to demonstrate boson sampling, each overcoming significant technical challenges in photon generation, interference, and detection.
In this section we review the key experimental milestones in boson sampling, focusing first on early small-scale experiments that laid the groundwork for scalable implementations, and then on large-scale experiments that have made claims of quantum supremacy.
The earliest boson-sampling experiments focused on small-scale implementations, involving 3 to 6 photons and relatively simple optical networks. These demonstrations were crucial for validating the theoretical foundations of boson sampling, though they did not yet reach the regime where classical computers cannot simulate the process in a reasonable time. Despite their modest scale, these early experiments were essential proof-of-concept studies.
One of the first experimental realizations of boson sampling was reported in 2013 by Broome et al. [41], shown in Fig. 4. In their experiment, a 3-photon, 6-mode Fock-state boson-sampling setup was constructed using integrated photonic circuits on a silicon chip, which contained a network of beam splitters and phase shifters. This experiment demonstrated quantum interference between photons and verified that the resulting output distribution was consistent with the predicted boson-sampling distribution, i.e., the permanents of the submatrices of the interferometer’s unitary. That same year, Spring et al. [42] and Tillmann et al. [43] independently reported similar small-scale experiments using 3 and 4 photons, respectively. These experiments used different optical platforms, including integrated waveguides and free-space optics, but all demonstrated interference patterns that could not easily be simulated by classical computers if scaled up. Many further experiments demonstrating and scaling up boson sampling have since been conducted [44–47].
After the initial small-scale demonstrations of boson sampling, research efforts shifted towards scaling up the number of photons to push the boundaries of classical simulation capabilities. A major milestone was reached in 2019, when a team at the University of Science and Technology of China (USTC) conducted a boson sampling experiment with 20 photons, using Fock states, which is shown in Fig. 5. This experiment represented the largest Fock-state boson sampling to date in an optical setup, involving 20 photons passing through a 60-mode interferometer [38].
In this experiment single-photon sources were carefully prepared, and high-precision optical circuits were used to ensure stable interference. The technical achievements required to produce and maintain 20 indistinguishable photons are considerable, as larger photon numbers increase susceptibility to noise, photon loss, and mode-matching errors. Despite these challenges, the USTC team successfully performed the experiment, demonstrating that the sampling complexity increased significantly compared to previous small-scale experiments.
Even at this scale, though, the 20-photon Fock-state boson-sampling experiment did not yet achieve quantum advantage. Classical algorithms are still able to simulate this setup, even without substantial computational effort. The result highlighted the difficulty of scaling up Fock-state boson-sampling experiments to the point where classical simulation becomes infeasible. Achieving quantum advantage with Fock-state boson sampling would likely require even larger photon numbers and improved experimental efficiency, to reduce photon loss and detector inefficiency.
This 20-photon experiment remains the largest Fock-state boson-sampling demonstration (except for the latest Fock-state boson-sampling experiment using atoms [39]) and serves as a benchmark for further experimental advancements. It illustrates both the progress made and the remaining challenges in reaching a regime where classical simulation is not yet feasible, underscoring the importance of continued innovation in photon-source quality, optical-circuit integration, and detection technologies.
In recent years experimental groups have made significant advances in scaling up boson sampling, moving from small-scale implementations to large-scale experiments involving dozens of photons and hundreds of optical modes, by focusing on Gaussian boson sampling instead of Fock-state boson sampling. These large-scale experiments aim to demonstrate quantum supremacy, where the output distribution of the boson-sampling system becomes too complex for classical computers to simulate in any reasonable time frame. The primary challenge of these experiments is maintaining high photon-generation rates, low-loss optical networks, and efficient detection, all of which are required for scaling up boson sampling.
One of the most significant breakthroughs in this area came in 2020, when the USTC group reported the first large-scale Gaussian boson-sampling experiment, with 50 squeezed states distributed across 100 output modes [26], which is shown in Fig. 6. This experiment marked a transition from standard Fock-state boson sampling to Gaussian boson sampling using squeezed states of light. By leveraging squeezed vacuum states, this approach allowed the experimentalists to generate more photons and achieve larger mode counts, pushing classical simulation methods to their limits. They successfully detected up to 76 photon clicks at the same time using threshold detectors, and claimed a quantum advantage from this large number of coincident photon detections for the first time using boson sampling.
To make this quantum-advantage claim stronger and more rigorous, in 2021 the USTC group further scaled up their experiment, reporting a Gaussian-boson-sampling system involving 50 squeezed states, 144 output modes, and up to 113 photon-coincidence detections [27]. An important improvement over the previous experiment was that by using more optical modes, from 100 to 144, they could detect more photon clicks, because they used threshold detectors instead of photon-number-resolving detectors. Such an increased number of photon clicks was the basis of a stronger quantum-advantage claim. In addition to the increased number of photons, the ability to freely change the phases of the input squeezed states was another improvement, making the setup more suitable for many practical applications. Also, by applying several benchmarking methods, such as comparing high-order correlations to those of the ideal case [48], as shown in Fig. 7, they supported their claim of quantum advantage.
Again, to further boost the quantum advantage from boson sampling, in 2023 the USTC group replaced their threshold detectors with pseudo-photon-number-resolving detectors, by multiplexing each output mode into eight modes followed by threshold detectors [29]. By doing this, they observed up to 255 photon-click counts, which is much larger than the previous 113 photon-coincidence detections, and claimed an even greater degree of quantum advantage.
In parallel, other groups have made important strides. In 2022, a team at Xanadu Quantum Technologies in Canada performed a Gaussian-boson-sampling experiment using integrated photonics and squeezed-light sources, focusing on achieving scalability using time-bin modes [28]. More specifically, a distinction from the USTC group’s experimental setup was that by employing time-bin modes instead of spatial modes, they could more easily scale up the experiment, using just a single source generating squeezed states and photon-number-resolving detection (in principle a single photon-number-resolving detector would have been sufficient, but the experiment used 16 detectors to avoid crosstalk). A large-scale interferometer was configured by using variable beam splitters with different amounts of time delay, as illustrated in Fig. 8. An advantage of this setup over the USTC experiments was the flexibility of changing the interferometer in real time, so that the circuit could easily be tuned for many applications. In addition, they used photon-number-resolving detection to count the number of photons, unlike the threshold detectors of the earlier experiments. Thanks to this setup, they could use as many squeezed vacuum states as modes (up to 288 squeezed states), and could detect up to 242 photons. As shown in Fig. 9, the experimental data are consistent with the ideal probability distributions to high accuracy. For the larger-scale experiments targeting quantum advantage, they used cross-entropy benchmarking, a Bayesian test, and high-order correlation benchmarks to claim that their experimental data cannot be mimicked by classical algorithms.
These large-scale experiments have highlighted both the promise and the remaining challenges of boson sampling. Certainly these experiments represent a crucial step toward building scalable quantum systems that can tackle problems beyond classical reach. Nevertheless, the practicality of such systems for real-world applications is still limited by technical issues such as photon loss, noise, and detection inefficiency. More specifically, these large-scale boson-sampling experiments suffer from a large amount of photon loss: more than half of the input photons are typically lost within the circuit, due to various processes. In the following section we review the efforts to benchmark those experiments by developing classical algorithms and identifying the boundary between quantum and classical algorithms for boson sampling.
Given that boson sampling involves calculating matrix permanents, which is a #P-hard problem, and given hardness results for noiseless boson sampling, simulating large-scale boson sampling (even approximately) is believed to be computationally intractable on classical computers as the number of photons grows. Nevertheless, there have been extensive studies of classical algorithms for simulating boson sampling, because of its importance for benchmarking quantum experiments and understanding their computational complexity under realistic circumstances, such as when noise occurs.
In this section we first review the classical algorithms that aim to simulate noiseless boson sampling in an exact manner, which inevitably takes exponential time but is important to identify the boundary of a classical computer’s computational power for simulating boson sampling. We then review other classical algorithms that aim to simulate noisy boson sampling by taking advantage of the noise, which is crucial for practical benchmarking because the current experiments suffer from a large amount of noise, such as photon loss.
First of all, the main difficulty of simulating Fock-state boson sampling is computing marginal probabilities, which are necessary for the chain-rule-based sampler, i.e.,

$$p(m_1, \dots, m_M) = p(m_1)\, p(m_2 | m_1) \cdots p(m_M | m_1, \dots, m_{M-1}),$$
but the complexity of computing the marginal probabilities of Fock-state boson sampling is too great [15]. For example, the brute-force algorithm that computes all of the output probabilities in order to obtain marginal probabilities must compute $\binom{M+N-1}{N}$ output probabilities, i.e., one permanent per output configuration, which quickly becomes infeasible as the system grows.
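For tiny instances, this brute-force enumeration can be written down directly; the sketch below (reusing the hypothetical permanent_ryser helper from the earlier sketch) tabulates all $\binom{M+N-1}{N}$ output probabilities, which illustrates why the approach is hopeless beyond a handful of photons.

```python
import numpy as np
from itertools import combinations_with_replacement
from collections import Counter
from math import factorial

def brute_force_distribution(U, input_modes):
    """Full Fock-state boson-sampling output distribution, obtained by evaluating
    one permanent per output configuration. Feasible only for very small N and M."""
    M = U.shape[0]
    N = len(input_modes)
    probs = {}
    for out in combinations_with_replacement(range(M), N):
        rows = list(out)  # bunched photons repeat the corresponding row of U
        sub = U[np.ix_(rows, list(input_modes))]
        norm = np.prod([factorial(c) for c in Counter(out).values()])
        probs[out] = abs(permanent_ryser(sub)) ** 2 / norm
    return probs  # keys: sorted output-mode tuples; values sum to 1 for a lossless U
```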
However, Clifford and Clifford [51] developed a classical algorithm for simulating Fock-state boson sampling without directly computing the output marginal probabilities. The main idea behind this algorithm is to decompose the output probability as a probabilistic mixture of many probability distributions, the marginal probabilities of each of which are much easier to compute than the original output probabilities. Ultimately the time complexity of the Clifford-Clifford algorithm is $O(N 2^N + \mathrm{poly}(M, N))$, which is much smaller than that of the brute-force computation of all permanents. They then developed another classical algorithm, called the faster Clifford-Clifford algorithm, which is faster when the number of photons is comparable to the number of modes, i.e., N ≈ M [52]. To the best of our knowledge, the Clifford-Clifford algorithm remains the best-known classical algorithm for exact Fock-state boson sampling.
On the other hand, for Gaussian boson sampling the first classical algorithm’s time complexity is $O(M N^3 2^N)$, and its space complexity is polynomial in M and N [53]. The main idea of this algorithm is to compute marginal probabilities, which are written as hafnians, and use the chain rule of conditional probabilities. The main bottleneck of this algorithm is, again, computing the marginal probabilities. The reason is different from the Fock-state boson-sampling case, because a Gaussian state’s marginal probability can easily be written using the covariance matrix of the output state, which can also be obtained easily. However, computing the marginal probabilities is still a bottleneck, because it requires the output probabilities of a reduced density matrix instead of a pure quantum state, which typically carries a quadratically higher computational cost. This is why the time complexity is exponential in N rather than in N/2, the latter being the complexity of computing a hafnian corresponding to a pure state with N photons [54].
This bottleneck was then resolved by using an idea similar, at a high level, to that of the Clifford-Clifford algorithm, in the sense that the output probability distribution is decomposed into a probabilistic mixture of probability distributions corresponding to pure Gaussian states (see Fig. 10) [55]. More specifically, this classical algorithm computes the output probability of photon-number detections together with heterodyne detections, i.e., projections onto coherent states, at the places that were supposed to be marginalized; in this way, we always work with a pure state’s output probabilities because we never marginalize. By resolving the bottleneck, the time complexity becomes $O(M N^3 2^{N/2})$ as desired, which is consistent with the complexity of computing hafnians [55]. This algorithm has since been further optimized, especially when collisions occur [56], just like the faster Clifford-Clifford algorithm.
Meanwhile, there have been efforts to develop classical algorithms that exploit the structure of the interferometer [57, 58]. The aforementioned classical algorithms do not exploit the structure of a given interferometer, so their time complexity is always exponential, even if the interferometer is relatively simple. A classical algorithm developed by Oh et al. [57] for both Fock-state and Gaussian boson sampling takes advantage of this structure: When the underlying graph formed by the interferometer has limited treewidth, the complexity of the algorithm scales not with the number of photons but with the treewidth. From a physical perspective, this indicates that when the circuit depth of the interferometer is limited, the classical algorithm becomes efficient even if the number of photons is large.
In practice, boson-sampling experiments inevitably involve noise, such as photon loss and partial distinguishability of photons. Classical simulations must account for this noise to provide accurate benchmarks, and these simulations are crucial for verifying quantum advantage in noisy, real-world experiments, as they provide a benchmark to distinguish between classical noise effects and genuine quantum behavior. Hence, there have been extensive studies of classical algorithms for simulating noisy boson sampling.
Rahimi-Keshari et al. [59] developed methods for efficiently simulating boson sampling under realistic experimental conditions, by incorporating noise models into classical simulations. The main idea behind this algorithm is to use a quasiprobability representation [60, 61] and to find a regime where the entire process can be described by positive probability distributions, which can be efficiently simulated by classical computers [62]. Here the authors observed that when the system is subject to noise or photon loss, the quasiprobability distribution tends to converge to a positive distribution, so their classical algorithm can exploit this when noise or photon loss is severe.
Many other classical algorithms have been developed inspired by this algorithm. Garcia-Patron et al. [35] invented a classical algorithm that takes advantage of photon loss using the quasiprobability-distribution method. The main idea is to approximate a lossy single-photon state by a thermal state, which has a positive P-representation and can be efficiently simulated, with a small approximation error when the loss rate is high. A similar observation was made for Gaussian boson sampling and led to another classical algorithm [63], where again a squeezed thermal state, the output of a squeezed vacuum state after photon loss, can be approximated by another squeezed thermal state with a positive quasiprobability distribution when the loss rate is sufficiently high. This algorithm has been used as a benchmark for recent Gaussian-boson-sampling experiments [27–29, 64]. A caveat of this approach is that it requires a high loss rate for the algorithms to work with high accuracy.
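As a rough numerical illustration of why high loss helps such algorithms (this is not the exact construction of [35]; the helper names are hypothetical), one can compare the photon-number distribution of a lossy single photon with that of a thermal state of the same mean photon number. Since both states are diagonal in the Fock basis, the trace distance reduces to half the l1 distance between the two distributions.

```python
import numpy as np

def lossy_single_photon(eta, cutoff):
    """Photon-number distribution of a single photon after loss with
    transmissivity eta: vacuum with probability 1 - eta, one photon with eta."""
    p = np.zeros(cutoff)
    p[0], p[1] = 1.0 - eta, eta
    return p

def thermal(nbar, cutoff):
    """Photon-number distribution of a thermal state with mean photon number nbar."""
    n = np.arange(cutoff)
    return nbar ** n / (1.0 + nbar) ** (n + 1)

for eta in (0.9, 0.5, 0.1):
    dist = 0.5 * np.abs(lossy_single_photon(eta, 20) - thermal(eta, 20)).sum()
    print(eta, dist)  # the distance shrinks as the transmissivity eta decreases
```

The shrinking distance at low transmissivity reflects the qualitative behavior described above: the higher the loss, the better a positive-quasiprobability surrogate state can stand in for the true lossy input.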
Meanwhile, tensor-network-based classical algorithms have also been proposed to simulate lossy boson sampling [65–67]. The main observation is that loss intuitively destroys the entanglement of the system, so matrix-product states or matrix-product operators can be good candidates for simulating lossy boson sampling, even for a large system [68–70]. Numerical results confirm that the entanglement indeed does not increase much as the system size scales, and this approach has a potential advantage over the aforementioned algorithms in that its accuracy can be improved by increasing the bond dimension used for the simulation. However, the matrix-product-operator method itself entails a large computational cost; thus it has not been scaled up to a sufficiently large system size.
However, such studies have inspired another classical algorithm that can ultimately simulate the state-of-the-art Gaussian-boson-sampling experiments by taking advantage of the high loss rate [71]. While this classical algorithm is also based on matrix-product states [65], it employs an additional important structure of lossy Gaussian states, namely that the output quantum state can be decomposed into a Gaussian-boson-sampling circuit with an additional random displacement channel, as illustrated in Fig. 11. Using this decomposition, the matrix-product-state simulation becomes highly efficient, especially when the loss rate is high; ultimately it succeeds in simulating the largest Gaussian-boson-sampling experiments at a reasonable computational cost, questioning the quantum-advantage claims of the state-of-the-art experiments and highlighting the importance of reducing photon loss and other experimental noise for rigorous quantum advantage. Thus the algorithm, currently the best-known classical algorithm, provides a practical boundary for Gaussian boson sampling, as shown in Fig. 12. For example, assuming that the storage capacity of Frontier sets a classical computer’s boundary, we need at least 80 squeezed states of squeezing parameter r = 1.5 as input for the worst-case circuit, which corresponds to an average photon number of around 362.7. Note that this boundary can change as better classical algorithms are developed. The best-known classical algorithm exploits the structure of Gaussian states, and its generalization to lossy Fock-state boson sampling is still open.
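The quoted average photon number follows directly from the fact that a single-mode squeezed vacuum with squeezing parameter r carries a mean photon number of $\sinh^2(r)$; a one-line check:

```python
import numpy as np

# 80 squeezed vacuum states, each with squeezing parameter r = 1.5:
print(80 * np.sinh(1.5) ** 2)  # ~362.7 photons on average, matching the estimate quoted above
```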
While many classical algorithms have focused on simulating lossy boson sampling because photon loss is the most dominant noise model in the current experiments, photon loss is not the only noise source in experiments, and many other classical algorithms have also been developed to take advantage of the other noise sources, such as partial distinguishability [72–74] or beam-splitter noise [75–77].
Even though the competition between classical algorithms and boson-sampling experiments is still ongoing, there has been substantial experimental progress toward quantum advantage using boson sampling, and a rigorous quantum advantage may well be achieved in the near future. It is therefore natural and important to seek specialized applications that utilize this computational advantage, particularly where classical computational methods struggle. Such applications include quantum chemistry, combinatorial optimization, and machine learning [31]. The inherent complexity of boson sampling provides computational power in areas where solving specific, structured problems is intractable for classical algorithms, offering advantages across a variety of scientific domains. While we will focus on two promising proposals for applications (the molecular-vibronic-spectra problem and graph-related problems), there are other interesting proposals for boson sampling’s applications, such as permanental point processes [78] and quantum cryptography [79–81].
One of the most compelling applications of boson sampling is in the study of molecular vibronic spectra [82], which are critical for understanding molecular behavior during chemical reactions. Vibronic spectra arise from the interplay of electronic and vibrational transitions within molecules, and calculating these spectra for large molecules is highly computationally demanding for classical methods [83]. The computational difficulty arises because the number of transition configurations is exponentially large and, in addition, computing each transition probability corresponds to computing the hafnian of an associated matrix, which is #P-hard. Interestingly, Gaussian boson sampling leverages quantum optical processes to model the vibrational modes of molecules using Gaussian states of light, offering a potential quantum advantage by computing these complex spectra more efficiently than classical methods, as illustrated in Fig. 13. This application has direct implications for quantum chemistry, drug discovery, and materials science, where understanding molecular structures and behaviors is essential.
Due to its potential utility, many experiments have been conducted to demonstrate the ability of Gaussian boson sampling to compute the molecular vibronic spectra of small molecules [84–88], using photonic devices and superconducting bosonic processors, although the system sizes are not yet large enough to realize the potential quantum advantage. An example of such an experiment is shown in Fig. 14.
Meanwhile, the quantum advantage of Gaussian boson sampling for solving the molecular-vibronic-spectra problem has not been rigorously proven. In fact, a quantum-inspired classical algorithm has recently been developed [89]. The main observation behind this classical algorithm is that the Gaussian-boson-sampling-based quantum algorithm entails a sampling error, due to the finite number of samples. Based on this observation, the classical algorithm also aims to approximate the spectra instead of computing them exactly, and to do so it computes the Fourier transform of the spectra rather than the spectra directly. Ultimately, the proposed classical algorithm can achieve the same accuracy as Gaussian boson sampling. The algorithm has been generalized to the problem corresponding to Fock-state boson sampling by using Gurvits’s algorithm [90, 91] to compute the Fourier coefficients efficiently with a certain additive error. While the algorithm can achieve the same accuracy and efficiency as the associated boson samplers, i.e., Fock-state and Gaussian boson sampling, its accuracy is not sufficient if one generalizes the problem to displaced squeezed Fock-state boson sampling; thus, the latter may provide a potential quantum advantage.
Another promising area for applications of boson sampling is in solving graph-based problems, due to an interesting correspondence between Gaussian boson sampling and graphs, especially in combinatorial optimization and graph theory. Many complex computational problems, such as graph isomorphism [92, 93], finding the densest subgraph [94], and finding the maximum clique [95], can be mapped onto the Gaussian-boson-sampling framework [96]. The problem of graph isomorphism involves determining whether two graphs are identical in structure, which is computationally difficult for classical algorithms as the number of vertices grows. Gaussian boson sampling can encode graph structures into the photon-mode configurations of the sampling device, allowing the quantum interference of photons to reveal patterns that correspond to graph properties. By analyzing the output distribution of the photons, one can infer whether two graphs are isomorphic, providing a potential speedup over classical methods.
In addition to graph isomorphism, Gaussian boson sampling has applications in other graph-related problems, such as finding maximum cliques (complete subgraphs) in large networks, which is an NP-hard problem relevant for analyzing social networks, biological systems, and optimization problems, and finding the densest subgraph [87, 97]. Gaussian boson sampling may be able to find the solution, i.e., the densest subgraph or the maximum weighted clique, because the device can be programmed to increase the probability of obtaining such solutions, potentially enabling more efficient searches than exhaustive classical search methods.
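A toy illustration of why denser graphs are favored (reusing the hypothetical hafnian_naive helper sketched earlier): for a 0/1 adjacency matrix the hafnian counts perfect matchings, so a denser graph on the same vertices has a larger hafnian, which is, roughly, the quantity a suitably programmed Gaussian boson sampler weights its samples by.

```python
import numpy as np

# Two graphs on four vertices: the complete graph K4 and the 4-cycle C4.
K4 = np.ones((4, 4)) - np.eye(4)
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)

# For 0/1 adjacency matrices the hafnian counts perfect matchings,
# so the denser graph has the larger hafnian.
print(hafnian_naive(K4))  # 3 perfect matchings
print(hafnian_naive(C4))  # 2 perfect matchings
```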
Many experiments have been implemented to demonstrate such proposals (e.g., Figs. 15 and 16), showing that the Gaussian boson sampler outperforms certain kinds of classical algorithms, such as the uniform sampler [98–100]. However, as with the molecular-vibronic-spectra problem, this quantum advantage has not been rigorously proven, and some classical algorithms claim that the efficiency of Gaussian boson sampling can be attained classically [96, 101]. Therefore, a more rigorous complexity-theoretic proof of the quantum advantage of Gaussian boson sampling in solving graph-related problems is needed.
Boson sampling has played a vital role in the effort to demonstrate quantum computational advantage. By focusing on the sampling problem, this model, together with random circuit sampling, highlights how even restricted quantum systems can outperform classical supercomputers on specific tasks. Over the past decade, boson-sampling experiments have evolved from small-scale proof-of-concept demonstrations involving a few photons to large-scale systems that claim quantum supremacy. These experimental achievements illustrate the practicality of quantum sampling tasks for proving quantum advantage, even prior to the advent of fault-tolerant universal quantum computers.
However, significant challenges remain in scaling these systems further and addressing issues related to noise, photon loss, and detection inefficiency. Currently the most dominant obstacle to achieving rigorous quantum advantage is photon loss. Thus, reducing the photon-loss rate is the first crucial step to achieving quantum advantage. Nonetheless, advancements in experimental techniques, such as the use of more efficient photon sources, integrated photonics, and better noise-tolerant designs, suggest that boson sampling will continue to evolve as a practical quantum technology.
Beyond its role in demonstrating quantum supremacy, boson sampling has the potential for practical applications. Many interesting applications have been proposed, such as problems involving molecular vibronic spectra and graph theory, but further theoretical studies on the complexity of those applications must be conducted for a rigorous quantum advantage from boson sampling. Also, more interesting applications are expected to be found in the near future.
In addition, boson sampling is an important stepping stone toward the ultimate goal of building fault-tolerant universal quantum computers, and many of the efforts to realize large-scale boson-sampling experiments align with that goal. Therefore, as research continues, boson sampling is likely to become an essential tool in the quantum-computing landscape. Whether through the exploration of new applications or further scaling of experimental capabilities, boson sampling demonstrates how specific quantum tasks can be accelerated beyond the reach of classical computation, offering exciting possibilities for both theoretical and practical advancements in quantum science.
C. O. was supported by the Quantum Technology R&D Leading Program (Quantum Computing) (Grant no. RS-2024-00431768) through the National Research Foundation of Korea (NRF), funded by the Korean government (Ministry of Science and ICT (MSIT)).
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.