Amplitude Amplification¶
In this tutorial we will review the Workbench Algorithms Qubricks for amplitude amplification - one of the most important subroutines in fault-tolerant quantum algorithms.
Problem overview¶
The general idea behind the amplitude amplification routine is as follows.
Let's say we want to prepare a target state $|\psi_{\text{good}}\rangle$ with high fidelity. However, we don't have access to explicit information about this state, the way we do in a state preparation problem. Instead, we know how to prepare some other state $|\psi_0\rangle$ that has a non-zero (but potentially quite small) squared overlap $p_0$ with $|\psi_{\text{good}}\rangle$.
To prepare $|\psi_0\rangle$, we use a unitary $U_{\psi_0}$ for which $U_{\psi_0}|0^n\rangle = |\psi_0\rangle$. We assume that we know how to implement this unitary. This also allows us to implement a reflection about the initial state $R_{\psi_0} := 1 - 2|\psi_{0}\rangle\langle\psi_{0}| = U_{\psi_0}(1 - 2|0^n\rangle\langle 0^n|)U_{\psi_0}^{\dagger}$.
We also assume that we have access to an oracle that implements a reflection about the target state $R_{\text{good}} := 1 - 2|\psi_{\text{good}}\rangle\langle\psi_{\text{good}}|$.
If we take these two reflections and apply them to the initial state $k$ times, we get the following state:
$$ (R_{\psi_0}R_{\text{good}})^k |\psi_0\rangle $$
In this case, for $k=\mathcal{O}(1/\sqrt{p_0})$, the final state will have $\Omega(1)$ overlap with the target state $|\psi_{\text{good}}\rangle$.
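As a concrete illustration of this boosting, consider a Grover-style instance: $N=8$ basis states, a uniform starting state, and a single marked target, so $p_0 = 1/8$. The following pure-NumPy sketch (independent of Workbench) applies the two reflections repeatedly:

```python
import numpy as np

N = 8
psi_0 = np.full(N, 1 / np.sqrt(N))        # uniform starting state |psi_0>
target = np.zeros(N)
target[0] = 1.0                           # marked target state |psi_good>

R_good = np.eye(N) - 2 * np.outer(target, target)  # reflection about the target
R_psi0 = np.eye(N) - 2 * np.outer(psi_0, psi_0)    # reflection about the start

state = psi_0.copy()
for k in range(1, 3):
    state = R_psi0 @ R_good @ state       # one (R_psi0 R_good) iteration
    print(f"k = {k}: overlap^2 = {abs(state[0])**2:.6f}")
```

After two iterations the success probability is already $121/128 \approx 0.945$, matching $\sin^2((2k+1)\theta)$ with $\theta = \arcsin\sqrt{p_0}$.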
Depending on our particular problem, we can do much better than this by using generalized reflections, $1-(1-e^{i\phi})|\psi\rangle\langle\psi|$. These allow us to more finely control how the states are transformed in the amplification process. We can recover the original reflection by setting $\phi=\pi$.
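A quick NumPy check (illustrative, not library code) that the generalized reflection is unitary for any $\phi$ and reduces to the ordinary reflection at $\phi=\pi$:

```python
import numpy as np

def generalized_reflection(psi, phi):
    """1 - (1 - e^{i*phi}) |psi><psi| for a normalized state vector psi."""
    return np.eye(len(psi)) - (1 - np.exp(1j * phi)) * np.outer(psi, psi.conj())

psi = np.array([1.0, 1.0j]) / np.sqrt(2)
R_pi = generalized_reflection(psi, np.pi)
R_std = np.eye(2) - 2 * np.outer(psi, psi.conj())   # ordinary reflection
assert np.allclose(R_pi, R_std)                     # phi = pi recovers 1 - 2|psi><psi|
```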
Amplitude amplification frameworks¶
There exist different frameworks for performing amplitude amplification in this way. Workbench Algorithms implements two of the most common methods that cover two separate cases:
- If you know the precise overlap $p_{\text{succ}}$ between your target state and your starting state (for example, because it comes from an analytical expression), then we can determine the exact number of iterations $m$ needed to boost the amplitude of the target state to 1 using the equation

  $$m = \frac{\pi}{4 \arcsin(\sqrt{p_{\text{succ}}})} - \frac{1}{2}$$

  In general, $m$ will not be an integer, so rounding to the closest integer will over- or undershoot, leading to poor overlap. Instead, we can perform $\tilde{m} = \lfloor m\rfloor$ full iterations and then make the final reflection a partial reflection that exactly realizes the target state. In Workbench Algorithms, the angle of this partial reflection is determined by the `get_final_reflection_angle` function, which solves Eq. 12 in "Quantum Amplitude Amplification and Estimation" (arXiv:quant-ph/0005055).

- If you don't know the precise success probability $p_{\text{succ}}$, we can instead use fixed point amplitude amplification from "Fixed-point quantum search with an optimal number of queries" (arXiv:1409.3305). This is typically less efficient than the vanilla version, but has a key advantage: you can't overshoot. As long as you have a lower bound on the success probability, you are guaranteed to boost the success probability to some value $1-\epsilon$. This is done by changing all reflections to generalized reflections and choosing the angles as

  $$\alpha_j = \beta_{m-j+1} = 2\cot^{-1}\left(\tan(2\pi j/(2m+1))\sqrt{1-\gamma^2}\right)$$

  Here $m$ is the number of iterations, $\gamma^{-1} = T_{1/(2m+1)}(1/\sqrt{\epsilon})$, and $T_k(x)$ is the Chebyshev polynomial of the first kind (of fractional order). $\alpha_j$ sets the angle for the $j$-th reflection about the starting state and $\beta_j$ the angle for the corresponding reflection about the target state.
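Both the vanilla iteration count and the fixed point angle schedule can be computed directly from these formulas. Below is a small NumPy sketch (independent of Workbench; the function names are illustrative, not library API). The fractional-order Chebyshev polynomial is evaluated via $T_a(x) = \cosh(a \operatorname{arccosh} x)$ for $x \geq 1$:

```python
import numpy as np

def vanilla_iterations(p_succ):
    """Ideal (generally non-integer) iteration count m from the formula above."""
    return np.pi / (4 * np.arcsin(np.sqrt(p_succ))) - 0.5

def amplified_probability(p_succ, k):
    """Success probability after k full iterations: sin^2((2k+1) * arcsin(sqrt(p)))."""
    theta = np.arcsin(np.sqrt(p_succ))
    return np.sin((2 * k + 1) * theta) ** 2

# For p_succ = 1/3 (the example below), m ~ 0.776: one full iteration reaches
# 25/27 ~ 0.926, and the final partial reflection closes the gap to exactly 1.
print(vanilla_iterations(1 / 3), amplified_probability(1 / 3, 1))

def fixed_point_angles(m, eps):
    """Angles alpha_j (j = 1..m) for the fixed point schedule above."""
    L = 2 * m + 1
    # gamma^{-1} = T_{1/L}(1/sqrt(eps)), fractional order via cosh/arccosh
    gamma = 1 / np.cosh(np.arccosh(1 / np.sqrt(eps)) / L)
    j = np.arange(1, m + 1)
    # arccot with range (0, pi), so negative tangents are handled correctly
    return 2 * np.arctan2(1.0, np.tan(2 * np.pi * j / L) * np.sqrt(1 - gamma**2))

print(fixed_point_angles(9, 1e-3))
```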
Workbench Algorithms has a single AmpAmp Qubrick for handling both flavors of amplitude amplification. There are also several Qubricks for implementing reflections:
- `GoodReflectionBlockEncoding` performs a reflection about the good state defined by a block encoding (that is, the state in which all auxiliary qubits measure $|0\rangle$).
- `InitReflectionFromUnitary` generates a reflection from a given unitary.
- `GroverDiffuser` constructs the starting-state reflection for performing Grover search (this algorithm is just amplitude amplification!).
- `GoodReflectionGrover` constructs the reflection about the target state for Grover search.
How can we instruct the `AmpAmp` Qubrick to use fixed point amplitude amplification vs vanilla? If we don't know our success probability, it is generally better to use fixed point amplitude amplification, which must be provided with an `eps` value to work. If we know `p_succ` exactly, it is better to use vanilla amplitude amplification, which results in success probability 1 (and thus no `eps` needs to be supplied). The two cases are distinguished by setting the `fixed_point` attribute in the initialization of `AmpAmp`.
Let's look at how these Qubricks work by using a small artificial example.
from psiqworkbench import QPU, Qubits
from workbench_algorithms import (
    AmpAmp,
    GoodReflectionBlockEncoding,
    InitReflectionBlockEncoding,
    AmplitudeAmplifiedQubrick,
    SelectNaive,
    LCU,
    PrepareNaive
)
from workbench_algorithms.utils.paulimask import PauliMask, PauliSum
We'll use LCU block encoding as an example. Let's consider a single-qubit Hamiltonian represented as the following linear combination of unitaries:
$$H = \frac13 (X + Y + Z) = \frac13 \begin{bmatrix} 1 & 1 - i \\ 1 + i & -1 \end{bmatrix}$$
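As a quick sanity check of this matrix form (plain NumPy, independent of Workbench):

```python
import numpy as np

# H = (X + Y + Z) / 3 written out as a matrix
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = (X + Y + Z) / 3
assert np.allclose(H, np.array([[1, 1 - 1j], [1 + 1j, -1]]) / 3)
assert np.allclose(H, H.conj().T)   # Hermitian, as a Hamiltonian should be
```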
In Workbench, we can represent this Hamiltonian as a PauliSum object:
prep_probs = [1/3, 1/3, 1/3]
ham = PauliSum(
    [1, PauliMask(1, 0)],
    [1, PauliMask(0, 1)],
    [1, PauliMask(1, 1)]
)
def init_lcu_example():
    """Helper function that initializes the QPU and Qubricks used across multiple examples."""
    # Initialize the QPU and registers
    qpu = QPU(num_qubits=3)
    aux_reg = Qubits(2, "aux_reg", qpu)
    psi_reg = Qubits(1, "psi_reg", qpu)
    # Initialize the Qubricks
    state_prep = PrepareNaive(prep_probs)
    select = SelectNaive()
    block_encoding = LCU(state_prep=state_prep, select=select)
    zero_refl = GoodReflectionBlockEncoding()
    init_refl = InitReflectionBlockEncoding(unitary=block_encoding)
    return qpu, psi_reg, aux_reg, block_encoding, zero_refl, init_refl
For LCU block encoding, "success" is defined as projecting onto the subspace where the auxiliary qubits are in the $\ket{0}$ state.
The following code snippet applies the block encoding unitary once to two registers - the main register psi_reg and the auxiliary register aux_reg - and prints the resulting state and the success probability. You can see that if we collapse the state vector onto the terms in which aux_reg is in the $\ket{0}$ state, the main register psi_reg ends up in a state proportional to the first column of the matrix $H$ - our target state. However, the probability of this happening is only $\tfrac13$.
qpu, psi_reg, aux_reg, block_encoding, _, _ = init_lcu_example()
# Apply the block encoding once
block_encoding.compute(psi_reg, aux_reg, data=ham)
# Print the success probability - the probability of aux_reg being in the |0⟩ state
print(f"Success probability = {aux_reg.peek_read_probability(0)}")
# Print the full state of the system
qpu.print_state_vector()
# Collapse the system state to terms with aux_reg = |0⟩
aux_reg.postselect(0)
psi_reg.print_state_vector()
qpu.draw(show_qubricks=True)
Success probability = 0.33333333333333337

|aux_reg|psi_reg>
|0|0> 0.333333+0.000000j
|0|1> 0.333333+0.333333j
|1|0> 0.333333+0.000000j
|1|1> -0.333333+0.000000j
|2|0> -0.235702+0.000000j
|2|1> -0.235702+0.471405j
|3|0> -0.235702+0.000000j
|3|1> 0.235702+0.000000j

|psi_reg>
|0> 0.577350+0.000000j
|1> 0.577350+0.577350j
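To build intuition for where the $\tfrac13$ comes from, here is a from-scratch NumPy reconstruction of the same LCU block encoding (independent of Workbench; the helper names are illustrative, not library API). It builds a PREP unitary over the three coefficient states, a block-diagonal SELECT, and checks the success probability and postselected state:

```python
import numpy as np

def unitary_with_first_column(v):
    """Complete a normalized column v to a unitary matrix (a stand-in for PREP)."""
    d = len(v)
    M = np.eye(d, dtype=complex)
    M[:, 0] = v
    Q, _ = np.linalg.qr(M)
    if np.vdot(Q[:, 0], v).real < 0:   # QR may flip the sign of the first column
        Q[:, 0] = -Q[:, 0]
    return Q

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# PREP: |0> -> sqrt(1/3) (|0> + |1> + |2>) on the 2 auxiliary qubits
prep = unitary_with_first_column(np.array([1, 1, 1, 0], dtype=complex) / np.sqrt(3))
# SELECT: apply X, Y, Z controlled on the aux register (identity on the unused branch)
select = np.zeros((8, 8), dtype=complex)
for i, P in enumerate([X, Y, Z, I2]):
    select[2 * i:2 * i + 2, 2 * i:2 * i + 2] = P

# Block encoding U = (PREP^dag x I) SELECT (PREP x I), ordering aux (x) sys
U = np.kron(prep.conj().T, I2) @ select @ np.kron(prep, I2)
state = U[:, 0]                           # U applied to |0>_aux |0>_sys
p_succ = np.sum(np.abs(state[:2]) ** 2)   # probability that aux measures |0>
print(f"Success probability = {p_succ}")

post = state[:2] / np.sqrt(p_succ)        # postselected system state
print(post)                               # proportional to the first column of H
```

The success probability comes out to $\tfrac13$ and the postselected state to $(1, 1+i)/\sqrt3$, matching the Workbench output above.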
Vanilla amplitude amplification¶
Now, let's see how amplitude amplification works in this scenario.
The following code snippet sets up the same set of unitaries, but this time the application of the block encoding unitary is followed by vanilla amplitude amplification which uses one iteration. You can see that this boosts success probability to $1$: the state of the system after vanilla AA has no terms for which aux_reg is in a state other than $\ket{0}$.
qpu, psi_reg, aux_reg, block_encoding, zero_refl, init_refl = init_lcu_example()
amplitude_amplification = AmpAmp(init_refl=init_refl, good_refl=zero_refl, fixed_point=False)
# Apply the block encoding once
block_encoding.compute(psi_reg, aux_reg, data=ham)
# Apply amplitude amplification
p_succ = 1/3
amplitude_amplification.compute(sys_reg=psi_reg, p_succ=p_succ, be_ancilla_reg=aux_reg, ancillae=aux_reg, data=ham)
print(f"Success probability = {aux_reg.peek_read_probability(0)}")
qpu.print_state_vector()
aux_reg.postselect(0)
psi_reg.print_state_vector()
qpu.draw(show_qubricks=True)
Success probability = 1.0000000000000004

|aux_reg|psi_reg>
|0|0> -0.500000+0.288675j
|0|1> -0.788675-0.211325j
|1|0> -0.000000+0.000000j
|1|1> 0.000000-0.000000j
|2|0> 0.000000-0.000000j
|2|1> -0.000000-0.000000j
|3|0> 0.000000-0.000000j
|3|1> -0.000000+0.000000j

|psi_reg>
|0> 0.577350-0.000000j
|1> 0.577350+0.577350j
Fixed point amplitude amplification¶
If we don't know the exact success probability but only a lower bound on it, we can use fixed point amplitude amplification to achieve the same result. The following example demonstrates that, using $p_{\text{succ}} = \tfrac1{10}$ instead of $\tfrac13$.
qpu, psi_reg, aux_reg, block_encoding, zero_refl, init_refl = init_lcu_example()
amplitude_amplification = AmpAmp(init_refl=init_refl, good_refl=zero_refl, eps=1e-3, fixed_point=True)
# Apply the block encoding once
block_encoding.compute(psi_reg, aux_reg, data=ham)
# Apply amplitude amplification
p_succ = 1/10
amplitude_amplification.compute(sys_reg=psi_reg, p_succ=p_succ, be_ancilla_reg=aux_reg, ancillae=aux_reg, data=ham)
print(f"Success probability = {aux_reg.peek_read_probability(0)}")
qpu.draw(show_qubricks=True)
Success probability = 0.9999990167863042
As you can see, fixed point amplitude amplification uses far more resources - nine iterations instead of just one - to achieve success probability close to $1$. But, if we use vanilla amplitude amplification with the wrong value for success probability, we can fail to amplify our state at all!
The following example illustrates this by running vanilla amplitude amplification in the previous scenario with $p_{\text{succ}} = \tfrac1{10}$. You can see that, while it uses two iterations instead of nine, the results are worse than not doing amplitude amplification at all!
qpu, psi_reg, aux_reg, block_encoding, zero_refl, init_refl = init_lcu_example()
amplitude_amplification = AmpAmp(init_refl=init_refl, good_refl=zero_refl, fixed_point=False)
# Apply the block encoding once
block_encoding.compute(psi_reg, aux_reg, data=ham)
# Apply amplitude amplification
p_succ = 1/10
amplitude_amplification.compute(sys_reg=psi_reg, p_succ=p_succ, be_ancilla_reg=aux_reg, ancillae=aux_reg, data=ham)
print(f"Success probability = {aux_reg.peek_read_probability(0)}")
qpu.draw(show_qubricks=True)
Success probability = 0.05071225270550227
The further our lower bound estimate is from the true value of $p_{\text{succ}}$, the worse the results can be. This is why fixed point amplitude amplification is useful: you can never go wrong with it, even though there is often a more efficient way to get there (even if it's impossible to determine what that is!)
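To quantify how a wrong estimate hurts, here is a simplified NumPy model (full reflections only, with a rounded iteration count and no final partial reflection, so the numbers differ slightly from the Workbench run above; the function names are illustrative):

```python
import numpy as np

def amplified_probability(p_true, k):
    """Success probability after k full iterations when the true probability is p_true."""
    theta = np.arcsin(np.sqrt(p_true))
    return np.sin((2 * k + 1) * theta) ** 2

def iterations_for(p_assumed):
    """Iteration count chosen from an assumed success probability (rounded)."""
    return round(np.pi / (4 * np.arcsin(np.sqrt(p_assumed))) - 0.5)

p_true = 1 / 3
for p_assumed in [1 / 3, 1 / 10, 1 / 100]:
    k = iterations_for(p_assumed)
    print(f"assumed p = {p_assumed:.3f}: {k} iterations, "
          f"final p = {amplified_probability(p_true, k):.4f}")
```

Assuming $p_{\text{succ}} = 1/10$ when the true value is $\tfrac13$ picks two iterations and overshoots past the target, landing at a success probability of about $0.004$ in this model - far worse than the initial $\tfrac13$.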
Implicit amplitude amplification¶
When looking at algorithm design, we often see that inherently probabilistic protocols are "made deterministic" through the use of amplitude amplification. As a user, it's sometimes convenient not to have to think about amplitude amplification at all, but instead have it happen automatically.
One way to do this is through the AmplitudeAmplifiedQubrick class. We pass in a base Qubrick of our choice that implements some probabilistic process (in our case, the LCU block encoding Qubrick), and the wrapper "makes it deterministic" by performing amplitude amplification under the hood.
The following code snippet shows an example that recreates our previous example of vanilla amplitude amplification using AmplitudeAmplifiedQubrick.
qpu, psi_reg, aux_reg, block_encoding, zero_refl, init_refl = init_lcu_example()
amplified_block_encoding = AmplitudeAmplifiedQubrick(base_qubrick=block_encoding,
init_refl=init_refl,
good_refl=zero_refl,
fixed_point=False)
# Apply amplitude amplified Qubrick
p_succ = 1/3
amplified_block_encoding.compute(psi_reg, p_succ=p_succ, be_ancilla_reg=aux_reg, ancillae=aux_reg, data=ham)
print(f"Success probability = {aux_reg.peek_read_probability(0)}")
qpu.draw(show_qubricks=True)
Success probability = 1.0000000000000004
Summary¶
In this tutorial, we walked through amplitude amplification and the different options Workbench Algorithms offers for it.

- The `AmpAmp` Qubrick handles two flavors of amplitude amplification: vanilla, which relies on knowing the overlap of the target state and the initial state, and fixed point, which only needs a lower bound on that overlap.
- If you know this overlap precisely, vanilla amplitude amplification will be more efficient. However, fixed point amplitude amplification will succeed even if your estimate of the overlap is inaccurate.
- `AmplitudeAmplifiedQubrick` allows you to "make a Qubrick deterministic" by wrapping it in an amplitude amplification protocol.
Have fun amplifying amplitudes!