Deutsch-Jozsa, Bernstein-Vazirani & Simon's
The first quantum algorithms — promise problems with exponential speedup

1. The Big Picture — Promise Problems & Quantum Advantage

All three algorithms on this page share a common structure. You are given a black-box function (an oracle) about which you are told some structural promise. Your task is to determine a hidden property of the function. Classically, you'd need many oracle calls. Quantumly, you need far fewer — often just one.

These are promise problems: the oracle is guaranteed to satisfy certain conditions, and the algorithm may behave arbitrarily if the promise is violated. They don't have known practical applications — but they rigorously prove that quantum computers can be faster than classical ones in terms of query complexity (the number of oracle calls).

The shared pattern
  1. Prepare a uniform superposition with \(H^{\otimes n}\)
  2. Apply the oracle (using phase kickback to encode \(f\) in the phases)
  3. Apply \(H^{\otimes n}\) again to decode the phase information
  4. Measure

This "Hadamard sandwich" converts phase patterns into measurable computational basis states. It's the backbone of quantum algorithms.

Critical Prerequisite: The Hadamard Transform

You know what \(H\) does to one qubit: \(H\ket{0} = \ket{+} = \tfrac{1}{\sqrt{2}}(\ket{0} + \ket{1})\) and \(H\ket{1} = \ket{-} = \tfrac{1}{\sqrt{2}}(\ket{0} - \ket{1})\). But what happens when you apply \(H\) to every qubit in a multi-qubit system?

Build it up from 1 qubit:

$$H\ket{0} = \tfrac{1}{\sqrt{2}}(\ket{0} + \ket{1}), \qquad H^{\otimes 2}\ket{00} = \ket{+}\ket{+} = \tfrac{1}{2}(\ket{00} + \ket{01} + \ket{10} + \ket{11})$$

Each additional qubit doubles the number of terms, all with equal amplitude \(1/\sqrt{2^n}\).

So \(H^{\otimes n}\ket{0\ldots0}\) = equal superposition of all \(2^n\) possible bit strings. That's the starting move of every algorithm on this page.

But what about \(H^{\otimes n}\) on a non-zero input? This is where the signs come in.

Recall: \(H\ket{1} = \tfrac{1}{\sqrt{2}}(\ket{0} - \ket{1})\). The \(\ket{1}\) component gets a minus sign. For multiple qubits, the sign of each output term depends on how many positions have a "1 meets 1" overlap between input and output:

| Input \(x\) | Output \(y\) | Overlap (\(x \cdot y\)) | Sign \((-1)^{x \cdot y}\) |
|---|---|---|---|
| 10 | 00 | 0 | +1 |
| 10 | 01 | 0 | +1 |
| 10 | 10 | 1 | −1 |
| 10 | 11 | 1 | −1 |

So \(H^{\otimes 2}\ket{10} = \tfrac{1}{2}(\ket{00} + \ket{01} - \ket{10} - \ket{11})\). Every output term where the first bit is 1 gets a minus sign (because that's where input has a 1).

The general formula:

$$H^{\otimes n}\ket{x} = \frac{1}{\sqrt{2^n}} \sum_{y \in \{0,1\}^n} (-1)^{x \cdot y} \ket{y}$$

where the bitwise inner product determines the sign:

$$x \cdot y = \bigoplus_{j=0}^{n-1} x_j y_j = \left(\sum_{j=0}^{n-1} x_j y_j\right) \bmod 2$$

In plain English: for each position, check if both \(x\) and \(y\) have a 1 there (AND). Count how many positions match. If that count is odd, the sign is \(-1\). If even, the sign is \(+1\). This is the same inner product operation from Bernstein-Vazirani.
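
The formula is easy to sanity-check numerically. A minimal NumPy sketch (the helper names `hadamard_n` and `dot_mod2` are ours, for illustration): build \(H^{\otimes n}\) as a Kronecker power and compare each column against the predicted sign pattern.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_n(n):
    """Kronecker power H ⊗ H ⊗ ... ⊗ H (n factors)."""
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

def dot_mod2(x, y):
    """Bitwise inner product x·y mod 2, with bit strings packed into ints."""
    return bin(x & y).count("1") % 2

n = 3
Hn = hadamard_n(n)
for x in range(2 ** n):
    # column x of Hn is H^{⊗n}|x>; the formula predicts amplitude (-1)^{x·y}/√(2^n) on |y>
    predicted = np.array([(-1) ** dot_mod2(x, y) for y in range(2 ** n)]) / np.sqrt(2 ** n)
    assert np.allclose(Hn[:, x], predicted)
print("H^{⊗n}|x> matches the sign formula for every x")
```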

Two essential consequences: first, \(H^{\otimes n}\) is its own inverse (\(H^2 = I\) on every qubit), so applying the transform twice returns the original state; second, the cancellation lemma below.

The cancellation lemma

For any \(z \in \{0,1\}^n\):

$$\sum_{x \in \{0,1\}^n} (-1)^{x \cdot z} = \begin{cases} 2^n & \text{if } z = 0^n \\ 0 & \text{if } z \neq 0^n \end{cases}$$

When \(z = 0^n\), every term is \(+1\). When \(z \neq 0^n\), exactly half the \(x\) values give \(x \cdot z = 0\) and half give \(x \cdot z = 1\), so the terms cancel perfectly. This lemma is the engine behind all three algorithms.
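
The lemma is a finite sum, so it can be checked exhaustively for small \(n\). A short Python sketch (`dot_mod2` is an illustrative helper):

```python
def dot_mod2(x, z):
    """Bitwise inner product mod 2, bit strings packed into ints."""
    return bin(x & z).count("1") % 2

n = 4
for z in range(2 ** n):
    total = sum((-1) ** dot_mod2(x, z) for x in range(2 ** n))
    # 2^n when z = 0, perfect cancellation otherwise
    assert total == (2 ** n if z == 0 else 0)
print(f"lemma verified for all {2 ** n} values of z")
```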

2. Deutsch-Jozsa Algorithm

The Problem

Given \(f: \{0,1\}^n \to \{0,1\}\) that is promised to be either:
  1. constant: the same output (all 0s or all 1s) for every input, or
  2. balanced: output 0 on exactly half of the \(2^n\) inputs and 1 on the other half.

Determine which one.

Classical Complexity

A deterministic classical algorithm needs \(2^{n-1} + 1\) queries in the worst case: after \(2^{n-1}\) identical outputs, \(f\) could still be either constant or balanced, so one more query is required. A randomized algorithm needs only \(O(1)\) queries to answer with high confidence, but only the quantum algorithm is both deterministic and single-query.

The Circuit

The Deutsch-Jozsa circuit uses \(n\) qubits and a phase oracle \(P_f\):

$$\ket{0}^{\otimes n} \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}}\sum_x \ket{x} \xrightarrow{P_f} \frac{1}{\sqrt{2^n}}\sum_x (-1)^{f(x)}\ket{x} \xrightarrow{H^{\otimes n}} \ket{\psi} \xrightarrow{\text{measure}} \text{result}$$

Deutsch-Jozsa / Bernstein-Vazirani circuit (n qubits, phase oracle)

Recall from the previous page: the phase oracle \(P_f\ket{x} = (-1)^{f(x)}\ket{x}\) is built from a Boolean oracle \(U_f\) using phase kickback with an ancilla in \(\ket{-}\).

Step-by-step analysis

Step 1: Apply \(H^{\otimes n}\) to \(\ket{0}^{\otimes n}\). Each qubit becomes \(\ket{+}\), and the tensor product \(\ket{+}^{\otimes n}\) expands into a sum over all \(2^n\) computational basis states:

$$\ket{\psi_1} = H^{\otimes n}\ket{0}^{\otimes n} = \ket{+}^{\otimes n} = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} \ket{x}$$

Here \(x\) is a dummy variable (like i in a for-loop) that runs over every possible bit string: \(\ket{00\ldots0}, \ket{00\ldots1}, \ldots, \ket{11\ldots1}\). The \(x\) has nothing to do with the input \(\ket{0}^{\otimes n}\) — it's just a label for each term in the resulting superposition.

Step 2: Apply the phase oracle \(P_f\). It acts on each basis state independently, stamping \((-1)^{f(x)}\) onto each \(\ket{x}\):

$$\ket{\psi_2} = P_f\ket{\psi_1} = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} (-1)^{f(x)} \ket{x}$$

Each basis state picks up a phase of \(+1\) (if \(f(x) = 0\)) or \(-1\) (if \(f(x) = 1\)). The state \(\ket{x}\) itself doesn't change — only its coefficient does.

Step 3: Apply \(H^{\otimes n}\) again. What does this do? Each \(\ket{x}\) in the current state gets replaced by its Hadamard expansion \(\frac{1}{\sqrt{2^n}} \sum_y (-1)^{x \cdot y} \ket{y}\). So every term spawns \(2^n\) new terms:

$$\ket{\psi_3} = H^{\otimes n}\ket{\psi_2} = \frac{1}{\sqrt{2^n}} \sum_x (-1)^{f(x)} \cdot \underbrace{\frac{1}{\sqrt{2^n}} \sum_y (-1)^{x \cdot y} \ket{y}}_{\text{H applied to } \ket{x}}$$

Now rearrange: instead of grouping by input \(x\), group by output \(\ket{y}\). For each \(\ket{y}\), collect all the contributions from every \(x\):

$$= \sum_y \underbrace{\left[\frac{1}{2^n} \sum_x (-1)^{f(x) + x \cdot y}\right]}_{\text{amplitude of } \ket{y}} \ket{y}$$

The exponent \(f(x) + x \cdot y\) is computed mod 2: the sign of each contribution depends on both what \(f\) does to \(x\) and the overlap between \(x\) and \(y\).

Step 4: The amplitude of measuring \(\ket{0}^{\otimes n}\) (i.e., \(y = 0^n\)) is:

$$\alpha_{0^n} = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{f(x) + x \cdot 0^n} = \frac{1}{2^n} \sum_x (-1)^{f(x)}$$

since \(x \cdot 0^n = 0\) for all \(x\). The probability of measuring all zeros is \(|\alpha_{0^n}|^2\).

The decision rule

Measure \(\ket{0\ldots 0}\) → constant. Measure anything else → balanced. One oracle call, 100% deterministic.

Concrete Example: \(n = 2\), balanced \(f\)

Let's trace it with actual numbers. Suppose \(f(00) = 0,\ f(01) = 1,\ f(10) = 1,\ f(11) = 0\) (balanced).

Classical approach

Worst case \(2^{n-1} + 1 = 3\) calls for \(n = 2\). Here, querying \(f(00) = 0\) and \(f(01) = 1\) already shows two different outputs, ruling out constant after 2 calls; had the first two outputs agreed, a third call would be needed.
Quantum approach — 1 call (balanced: \(f(00)=0, f(01)=1, f(10)=1, f(11)=0\))

Step 1: Hadamard on \(\ket{00}\):

$$H^{\otimes 2}\ket{00} = \frac{1}{2}\left(\ket{00} + \ket{01} + \ket{10} + \ket{11}\right)$$

Step 2: Phase oracle — each \(\ket{x}\) picks up \((-1)^{f(x)}\):

$$\frac{1}{2}\left((-1)^{f(00)}\ket{00} + (-1)^{f(01)}\ket{01} + (-1)^{f(10)}\ket{10} + (-1)^{f(11)}\ket{11}\right)$$

$$= \frac{1}{2}\left((-1)^0\ket{00} + (-1)^1\ket{01} + (-1)^1\ket{10} + (-1)^0\ket{11}\right)$$

$$= \frac{1}{2}\left(\ket{00} - \ket{01} - \ket{10} + \ket{11}\right)$$

Step 3: Factor by qubit (group by first qubit, then second):

$$= \frac{1}{2}\left(\ket{0}(\ket{0} - \ket{1}) + \ket{1}(-\ket{0} + \ket{1})\right) = \frac{1}{2}(\ket{0} - \ket{1})(\ket{0} - \ket{1}) = \ket{-}\ket{-}$$

Step 4: Apply \(H \otimes H\). Since \(H\ket{-} = \ket{1}\):

$$H \otimes H\ket{-}\ket{-} = \ket{1}\ket{1} = \ket{11}$$

Measure: \(11 \neq 00\) → balanced. One oracle call, certain answer.


Compare with constant \(f \equiv 0\):

$$\frac{1}{2}((-1)^0\ket{00} + (-1)^0\ket{01} + (-1)^0\ket{10} + (-1)^0\ket{11}) = \frac{1}{2}(\ket{00}+\ket{01}+\ket{10}+\ket{11}) = \ket{+}\ket{+}$$

Apply \(H \otimes H\): \(H\ket{+} = \ket{0}\), so we get \(\ket{00}\). Measure \(00\) → constant.
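
Both runs above can be reproduced by direct matrix simulation. A minimal NumPy sketch of the whole Deutsch-Jozsa circuit (function names are illustrative, not from any quantum library):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_n(n):
    """Kronecker power H ⊗ ... ⊗ H (n factors)."""
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

def deutsch_jozsa(f, n):
    """Decide constant vs balanced with one application of the phase oracle."""
    Hn = hadamard_n(n)
    state = np.zeros(2 ** n)
    state[0] = 1.0                                               # |0...0>
    state = Hn @ state                                           # uniform superposition
    state = np.array([(-1) ** f(x) for x in range(2 ** n)]) * state  # phase oracle P_f
    state = Hn @ state                                           # second half of the sandwich
    # amplitude of |0...0> is ±1 (constant) or exactly 0 (balanced)
    return "constant" if abs(state[0]) > 0.5 else "balanced"

# the balanced example from the text: f(00)=0, f(01)=1, f(10)=1, f(11)=0 (parity of the bits)
parity = lambda x: bin(x).count("1") % 2
print(deutsch_jozsa(parity, 2))        # prints "balanced"
print(deutsch_jozsa(lambda x: 0, 2))   # prints "constant"
```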

Why It Works — Intuition

Think of it in terms of interference:
  1. Constant \(f\): every term in \(\sum_x (-1)^{f(x)}\) has the same sign, so the contributions to \(\ket{0}^{\otimes n}\) add constructively and \(\alpha_{0^n} = \pm 1\).
  2. Balanced \(f\): half the terms are \(+1\) and half are \(-1\), so they cancel exactly and \(\alpha_{0^n} = 0\).

What if the promise is violated?

If \(f\) is neither constant nor balanced (e.g., 30% of inputs map to 1), the algorithm still runs, but the outcome is undefined. You might or might not measure \(\ket{0}^{\otimes n}\). Promise problems only guarantee correctness when the promise holds.

3. Bernstein-Vazirani Algorithm

The Problem

Given \(f: \{0,1\}^n \to \{0,1\}\) promised to be a parity function:

$$f(x) = s \cdot x = \bigoplus_{j=0}^{n-1} s_j x_j = \left(\sum_{j=0}^{n-1} s_j x_j\right) \bmod 2$$

for some hidden bitstring \(s \in \{0,1\}^n\). Find \(s\).

In other words, \(f\) computes the bitwise inner product of the input with a secret string. Each bit of \(s\) determines whether the corresponding input bit contributes to the parity.

Classical Complexity

Classically, \(n\) queries suffice and are necessary. Query with each standard basis vector \(e_i = 00\ldots 010\ldots 0\) (with a 1 in position \(i\)):

$$f(e_i) = s \cdot e_i = s_i$$

Each query reveals one bit of \(s\). After \(n\) queries, you know all of \(s\). No classical strategy can do better, because each query reveals at most one bit of information about \(s\).

The Circuit

The circuit is identical to Deutsch-Jozsa. Same structure: \(H^{\otimes n} \to P_f \to H^{\otimes n} \to \text{measure}\).

$$\ket{0}^{\otimes n} \xrightarrow{H^{\otimes n}} \frac{1}{\sqrt{2^n}}\sum_x \ket{x} \xrightarrow{P_f} \frac{1}{\sqrt{2^n}}\sum_x (-1)^{s \cdot x}\ket{x} \xrightarrow{H^{\otimes n}} \ket{s} \xrightarrow{\text{measure}} s$$

Step-by-step analysis

Step 1: Apply \(H^{\otimes n}\) to \(\ket{0}^{\otimes n}\):

$$\ket{\psi_1} = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} \ket{x}$$

Step 2: Apply the phase oracle. Since \(f(x) = s \cdot x\):

$$\ket{\psi_2} = \frac{1}{\sqrt{2^n}} \sum_x (-1)^{s \cdot x} \ket{x}$$

Step 3 (the key insight): Look at the state after the oracle: \(\frac{1}{\sqrt{2^n}} \sum_x (-1)^{s \cdot x} \ket{x}\). Now recall the Hadamard formula — what does \(H^{\otimes n}\ket{s}\) look like?

$$H^{\otimes n}\ket{s} = \frac{1}{\sqrt{2^n}} \sum_x (-1)^{s \cdot x} \ket{x}$$

It's the exact same expression. Why? Because the oracle stamped \((-1)^{s \cdot x}\) on each \(\ket{x}\), and the Hadamard of \(\ket{s}\) produces exactly this sign pattern (positions where \(s\) has a 1 contribute a minus sign via the inner product). The oracle accidentally created the Hadamard encoding of \(\ket{s}\).

So \(\ket{\psi_2} = H^{\otimes n}\ket{s}\). Applying \(H^{\otimes n}\) again just undoes it:

$$\ket{\psi_3} = H^{\otimes n}\ket{\psi_2} = H^{\otimes n} H^{\otimes n}\ket{s} = \ket{s}$$

because \(H^{\otimes n}\) is its own inverse (\(H^2 = I\) on every qubit).

Result

Measurement yields \(s\) with 100% probability. One oracle call instead of \(n\). The speedup is from \(O(n)\) to \(O(1)\).

Concrete Example: \(n = 3\), \(s = 101\)

The secret string is \(s = 101\). The function \(f(x) = s \cdot x\) computes the bitwise inner product with \(s\).

Classical approach — 3 calls

Query with standard basis vectors to extract one bit of \(s\) at a time:
  1. \(f(100) = s \cdot 100 = s_0 = 1\)
  2. \(f(010) = s \cdot 010 = s_1 = 0\)
  3. \(f(001) = s \cdot 001 = s_2 = 1\)

Three queries, one per bit. \(s = 101\).

Quantum approach — 1 call (\(s = 101\))

Step 1: Hadamard on \(\ket{000}\):

$$H^{\otimes 3}\ket{000} = \frac{1}{\sqrt{8}}\left(\ket{000}+\ket{001}+\ket{010}+\ket{011}+\ket{100}+\ket{101}+\ket{110}+\ket{111}\right)$$

Step 2: Phase oracle stamps \((-1)^{s \cdot x}\) on each term. Compute \(s \cdot x = 101 \cdot x\) for each input:

\(101 \cdot 000 = 0\) → \(+1\) \(101 \cdot 001 = 1\) → \(-1\)
\(101 \cdot 010 = 0\) → \(+1\) \(101 \cdot 011 = 1\) → \(-1\)
\(101 \cdot 100 = 1\) → \(-1\) \(101 \cdot 101 = 0\) → \(+1\)
\(101 \cdot 110 = 1\) → \(-1\) \(101 \cdot 111 = 0\) → \(+1\)

State after oracle:

$$\frac{1}{\sqrt{8}}\left(\ket{000}-\ket{001}+\ket{010}-\ket{011}-\ket{100}+\ket{101}-\ket{110}+\ket{111}\right)$$

Step 3: Apply \(H^{\otimes 3}\) again. Factor by qubit to see what's happening:

$$= \frac{1}{\sqrt{8}}(\ket{0}-\ket{1}) \otimes (\ket{0}+\ket{1}) \otimes (\ket{0}-\ket{1}) = \ket{-}\ket{+}\ket{-}$$

Apply \(H\) to each: \(H\ket{-}=\ket{1},\ H\ket{+}=\ket{0},\ H\ket{-}=\ket{1}\):

$$H^{\otimes 3}\ket{-}\ket{+}\ket{-} = \ket{1}\ket{0}\ket{1} = \ket{101}$$

Measure: \(101 = s\). One oracle call, 100% certain.
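
The same simulation idea works here; the final state is exactly \(\ket{s}\), so reading off the largest amplitude recovers the secret. A minimal NumPy sketch (names are illustrative):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_n(n):
    """Kronecker power H ⊗ ... ⊗ H (n factors)."""
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

def bernstein_vazirani(s, n):
    """Recover the hidden string s from f(x) = s·x with one phase-oracle call."""
    f = lambda x: bin(x & s).count("1") % 2       # the promised parity function
    Hn = hadamard_n(n)
    state = np.zeros(2 ** n)
    state[0] = 1.0                                 # |0...0>
    phases = np.array([(-1) ** f(x) for x in range(2 ** n)])
    state = Hn @ (phases * (Hn @ state))           # H, oracle, H
    # the final state is exactly |s>: one amplitude is 1, all others are 0
    return format(int(np.argmax(np.abs(state))), f"0{n}b")

print(bernstein_vazirani(0b101, 3))  # 101
```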

Formal proof that only \(y = s\) has nonzero amplitude

For completeness, let's verify directly. The amplitude of \(\ket{y}\) in the final state is:

$$\alpha_y = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} (-1)^{s \cdot x + x \cdot y} = \frac{1}{2^n} \sum_x (-1)^{x \cdot (s \oplus y)}$$

Why can we combine the exponents like that? Because everything is mod 2: \(s \cdot x + x \cdot y \pmod{2}\) equals \(x \cdot (s \oplus y)\). The inner product is linear over \(\mathbb{F}_2\): position by position, \(s_j x_j \oplus x_j y_j = x_j (s_j \oplus y_j)\), so adding two separate inner products is the same as taking one inner product with the XOR of the strings.

Now \(s \oplus y\) is just a bit string. By the cancellation lemma:
  1. If \(y = s\), then \(s \oplus y = 0^n\): the sum is \(2^n\) and \(\alpha_s = 1\).
  2. If \(y \neq s\), then \(s \oplus y \neq 0^n\): the sum vanishes and \(\alpha_y = 0\).

Only \(\ket{s}\) survives. The measurement is deterministic.

Why it works — the deeper view

The Hadamard transform converts between two "conjugate" bases: the computational basis and the Fourier/Hadamard basis. The phase oracle for \(f(x) = s \cdot x\) encodes \(s\) as a phase pattern across the computational basis — which is exactly what \(H^{\otimes n}\ket{s}\) looks like. The second \(H^{\otimes n}\) is a change of basis back to computational, perfectly decoding the phase pattern into a measurable state.

4. Simon's Algorithm

The Problem — Intuition First

Before the formal statement, let's see what this problem actually looks like. Here's a black-box function with 2-bit inputs and outputs:

| Input | Output |
|---|---|
| 00 | A |
| 01 | B |
| 10 | A |
| 11 | B |

Notice the collisions: two different inputs give the same output. \(f(00) = f(10) = A\) and \(f(01) = f(11) = B\). The outputs A and B are arbitrary — we don't know or care what they are. We only care about the collision pattern.

Now look at which inputs collide: \(00\) and \(10\) differ in exactly the first bit, and \(01\) and \(11\) also differ in exactly the first bit.

Same pattern every time. That pattern is the secret: \(s = 10\) (first bit = 1 means "this bit differs", second bit = 0 means "this bit stays the same").

XOR extracts this difference: \(00 \oplus 10 = 10 = s\), and \(01 \oplus 11 = 10 = s\).

Important: \(f\) is NOT XOR

\(f\) is a black box. We cannot compute its outputs — we can only query it and see what comes back. The outputs (A, B above) could be anything. XOR with \(s\) tells you which inputs are partners (they'll give the same output), but it does NOT compute the output itself.

The equation \(f(x) = f(x \oplus s)\) is a property of the function, not its definition.

The Problem — Formal Statement

Given \(f: \{0,1\}^n \to \{0,1\}^n\) promised to satisfy: there exists a secret \(s \in \{0,1\}^n\) such that

$$f(x) = f(y) \iff x = y \oplus s$$

Find \(s\).

Classical Complexity

A deterministic algorithm needs \(2^{n-1} + 1\) queries in the worst case to guarantee finding a collision. A randomized algorithm can exploit the birthday paradox and expects a collision after \(\sim 2^{n/2}\) queries. Both are exponential in \(n\).

Key Differences from DJ and BV
Simon's algorithm differs in three important ways
  1. Standard oracle, not phase oracle: Uses \(U_f\ket{x}\ket{y} = \ket{x}\ket{y \oplus f(x)}\), a Boolean oracle with two registers.
  2. Two registers: An \(n\)-qubit input register and an \(n\)-qubit output register.
  3. Multiple runs + classical post-processing: A single run gives one linear equation. You need \(\sim n\) runs, then solve the system using Gaussian elimination.
The Circuit (one run)

$$\ket{0}^{\otimes n}\ket{0}^{\otimes n} \xrightarrow{H^{\otimes n} \otimes I} \frac{1}{\sqrt{2^n}}\sum_x \ket{x}\ket{0}^{\otimes n} \xrightarrow{U_f} \frac{1}{\sqrt{2^n}}\sum_x \ket{x}\ket{f(x)} \xrightarrow{\text{measure 2nd}} \text{collapse 1st} \xrightarrow{H^{\otimes n}} \text{measure 1st}$$

Simon's circuit (two registers: n input + n output qubits, Boolean oracle)
Step-by-step analysis

Step 1: Apply \(H^{\otimes n}\) to the top (input) register only:

$$\ket{\psi_1} = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} \ket{x}\ket{0}^{\otimes n}$$

Step 2: Apply the Boolean oracle \(U_f\):

$$\ket{\psi_2} = \frac{1}{\sqrt{2^n}} \sum_x \ket{x}\ket{f(x)}$$

This is an entangled state between the input and output registers.

Step 3: Measure the output (bottom) register. Suppose we observe some value \(f(x_0)\). Since \(f(x_0) = f(x_0 \oplus s)\) (by the promise), the input register collapses to the equal superposition of the two preimages:

$$\ket{\psi_3} = \frac{1}{\sqrt{2}}\left(\ket{x_0} + \ket{x_0 \oplus s}\right)$$

(We ignore the measured output register from here on — it's now classical.)

What if \(s = 0^n\)?

If \(s = 0^n\), the function is injective, so each output has only one preimage. The input register collapses to a single \(\ket{x_0}\) (not a superposition of two states). The analysis still works: setting \(s = 0^n\) in the formulas below, we get \(\ket{x_0} + \ket{x_0 \oplus 0^n} = 2\ket{x_0}\), which after normalization is just \(\ket{x_0}\).

Step 4: Apply \(H^{\otimes n}\) to the input register:

$$H^{\otimes n}\ket{\psi_3} = \frac{1}{\sqrt{2}} \left(H^{\otimes n}\ket{x_0} + H^{\otimes n}\ket{x_0 \oplus s}\right)$$

Expanding each term:

$$= \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2^n}} \sum_y \left[(-1)^{x_0 \cdot y} + (-1)^{(x_0 \oplus s) \cdot y}\right] \ket{y}$$

Since the inner product distributes over XOR — \((x_0 \oplus s) \cdot y = x_0 \cdot y \oplus s \cdot y\) (flipping bits in the input just flips the corresponding AND results) — we can factor out \((-1)^{x_0 \cdot y}\):

$$= \frac{1}{\sqrt{2^{n+1}}} \sum_y (-1)^{x_0 \cdot y} \left[1 + (-1)^{s \cdot y}\right] \ket{y}$$

Step 5: The factor \(\left[1 + (-1)^{s \cdot y}\right]\) is the key:
  1. If \(s \cdot y = 0\), the factor is \(2\): the two preimages interfere constructively and \(\ket{y}\) has nonzero amplitude.
  2. If \(s \cdot y = 1\), the factor is \(0\): the two contributions cancel and \(\ket{y}\) can never be measured.

What one run gives you

Measuring the input register yields a uniformly random \(y\) from the set \(\{y \in \{0,1\}^n : s \cdot y = 0 \pmod{2}\}\). Each run gives you one linear equation \(s \cdot y_i = 0\) over \(\mathbb{F}_2\).

Classical Post-Processing

After \(k\) runs, you have a collection of bit strings \(y_1, y_2, \ldots, y_k\), each satisfying \(s \cdot y_i = 0\). This is a system of equations:

$$\begin{cases} s \cdot y_1 = 0 \pmod{2} \\ s \cdot y_2 = 0 \pmod{2} \\ \vdots \\ s \cdot y_k = 0 \pmod{2} \end{cases}$$

Each equation says: "at the positions where \(y_i\) has a 1, the corresponding bits of \(s\) must XOR to 0." Each equation rules out half the candidate \(s\) values.

GF(2) just means "arithmetic where 1+1=0" — i.e., everything is mod 2 / XOR. Gaussian elimination over GF(2) is the same row-reduction you know from linear algebra, except addition = XOR and there are no fractions. You're solving for which bit string \(s\) satisfies all the equations simultaneously.

Null space = the set of all \(s\) satisfying \(Y \cdot s = 0\), where \(Y\) is the matrix with rows \(y_1, \ldots, y_k\). You want the unique nonzero solution.

Linearly independent = an equation tells you something new (can't be derived from the others by XOR-ing them together). If you get \(y_3 = y_1 \oplus y_2\), that third run was wasted — it gave redundant information.
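
These post-processing ideas are easy to make concrete. A short Python sketch: a greedy GF(2) row reduction to measure linear independence, plus a brute-force null-space search on three illustrative measurement results for a hidden \(s = 110\) (for small \(n\), brute force substitutes for full Gaussian elimination):

```python
def gf2_basis(vectors):
    """Greedy row reduction over GF(2): keep one vector per leading-bit position."""
    basis = {}                                 # leading-bit index -> reduced row
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]                      # eliminate the leading bit (XOR = addition)
    return basis

# three illustrative results y, all satisfying s·y = 0 for s = 110
ys = [0b001, 0b110, 0b111]
print(len(gf2_basis(ys)))                      # prints 2: the third row is 001 XOR 110, redundant
sols = [s for s in range(1, 8)
        if all(bin(s & y).count("1") % 2 == 0 for y in ys)]
print(format(sols[0], "03b"))                  # prints 110: the unique nonzero solution
```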

How many runs do you need?

You need \(n - 1\) linearly independent equations to pin down \(s\) uniquely. Each run gives a random \(y\) from a pool of \(2^{n-1}\) valid values. A new \(y\) is independent of the previous ones with probability at least \(1/2\), so \(\sim n\) total runs suffice with high probability.

Think of it like this: with \(n\) bits to determine (minus 1, since we know \(s \neq 0\)), each independent equation eliminates one degree of freedom. After \(n-1\) independent equations, only one nonzero \(s\) fits.
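
One full run plus the post-processing loop can be simulated classically for small \(n\). A minimal NumPy sketch under stated assumptions: measuring the output register is modeled by picking a random preimage pair, the null space is brute-forced rather than Gaussian-eliminated, and the 2-to-1 function \(\min(x, x \oplus s)\) is an illustrative stand-in for the oracle (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_n(n):
    """Kronecker power H ⊗ ... ⊗ H (n factors)."""
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

def simon_run(f, n):
    """One run: sample a y with s·y = 0, assuming f is 2-to-1 with f(x) = f(x⊕s), s ≠ 0."""
    x0 = int(rng.integers(2 ** n))             # measuring the output register picks a preimage class
    partner = next(x for x in range(2 ** n) if x != x0 and f(x) == f(x0))
    state = np.zeros(2 ** n)
    state[x0] = state[partner] = 1 / np.sqrt(2)  # collapsed input register (|x0> + |x0⊕s>)/√2
    probs = np.abs(hadamard_n(n) @ state) ** 2   # second Hadamard, then measure
    return int(rng.choice(2 ** n, p=probs / probs.sum()))

def recover_s(f, n, max_runs=30):
    """Collect equations s·y = 0 until exactly one nonzero s fits (brute-force null space)."""
    ys = set()
    for _ in range(max_runs):
        ys.add(simon_run(f, n))
        fits = [s for s in range(1, 2 ** n)
                if all(bin(s & y).count("1") % 2 == 0 for y in ys)]
        if len(fits) == 1:
            return format(fits[0], f"0{n}b")
    return None

s = 0b110
print(recover_s(lambda x: min(x, x ^ s), 3))
```

With overwhelming probability this prints the hidden string `110` after a handful of runs.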

Worked Example: \(n = 3\), \(s = 110\)

Suppose \(f: \{0,1\}^3 \to \{0,1\}^3\) with \(s = 110\), meaning \(f(x) = f(x \oplus 110)\). The pairs are: \(000 \leftrightarrow 110\), \(001 \leftrightarrow 111\), \(010 \leftrightarrow 100\), \(011 \leftrightarrow 101\).

The constraint \(s \cdot y = 0\) means \(1 \cdot y_0 + 1 \cdot y_1 + 0 \cdot y_2 = 0 \pmod{2}\), so \(y_0 \oplus y_1 = 0\). The valid \(y\) values are: \(\{000, 001, 110, 111\}\).

Suppose three runs give:
  1. \(y_1 = 001\): equation \(s \cdot 001 = 0\), i.e. \(s_2 = 0\).
  2. \(y_2 = 110\): equation \(s \cdot 110 = 0\), i.e. \(s_0 \oplus s_1 = 0\), so \(s_0 = s_1\).
  3. \(y_3 = 111\): equation \(s_0 \oplus s_1 \oplus s_2 = 0\), redundant given the first two.

From \(s_2 = 0\) and \(s_0 = s_1\), the nontrivial solution is \(s = 110\). Confirmed.

Concrete Comparison: Classical vs Quantum (\(n = 3\), \(s = 110\))
Classical approach — collect outputs until collision
  1. Query \(f(000) = \text{C}\). Noted.
  2. Query \(f(001) = \text{D}\). No match. Noted.
  3. Query \(f(010) = \text{E}\). No match. Noted.
  4. Query \(f(100) = \text{E}\). Match! Same output as \(f(010)\).

Collision found: \(f(010) = f(100)\). XOR the inputs: \(010 \oplus 100 = 110 = s\). Done in 4 queries.

With \(n = 3\) this is manageable. With \(n = 100\), you'd need \(\sim 2^{50}\) queries on average.
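
The classical collision search is a few lines. A sketch using the illustrative 2-to-1 function \(\min(x, x \oplus s)\), querying inputs in lexicographic order:

```python
def classical_simon(f, n):
    """Query f on inputs in lexicographic order until two outputs collide."""
    seen = {}
    for queries, x in enumerate(range(2 ** n), start=1):
        out = f(x)
        if out in seen:
            return format(x ^ seen[out], f"0{n}b"), queries  # XOR of colliding inputs = s
        seen[out] = x
    return "0" * n, 2 ** n                     # no collision at all means s = 0

# same hidden string as the text, s = 110
print(classical_simon(lambda x: min(x, x ^ 0b110), 3))  # ('110', 5)
```

In the worst case, lexicographic querying visits an entire preimage-class representative set before the first collision, which is where the exponential cost comes from.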

Quantum approach — tracing one run in full (\(n=3\), \(s=110\))

The collision pairs are: \(f(000)=f(110)\), \(f(001)=f(111)\), \(f(010)=f(100)\), \(f(011)=f(101)\). Call the outputs A, B, C, D.

Step 1: \(H^{\otimes 3}\) on input register:

$$\frac{1}{\sqrt{8}}\left(\ket{000}+\ket{001}+\ket{010}+\ket{011}+\ket{100}+\ket{101}+\ket{110}+\ket{111}\right)\ket{000}$$

Step 2: Oracle writes \(f(x)\) into output register:

$$\frac{1}{\sqrt{8}}\left(\ket{000}\ket{A}+\ket{001}\ket{B}+\ket{010}\ket{C}+\ket{011}\ket{D}+\ket{100}\ket{C}+\ket{101}\ket{D}+\ket{110}\ket{A}+\ket{111}\ket{B}\right)$$

Step 3: Measure output register. Say we get A. Only terms with \(\ket{A}\) survive — that's \(\ket{000}\) and \(\ket{110}\):

$$\frac{1}{\sqrt{2}}\left(\ket{000} + \ket{110}\right)$$

The input register collapsed to the two partners: \(x_0 = 000\) and \(x_0 \oplus s = 110\).

Step 4: Apply \(H^{\otimes 3}\) to input register. Expand each term using the Hadamard formula:

$$H^{\otimes 3}\ket{000} = \frac{1}{\sqrt{8}}\sum_y (+1)\ket{y} \qquad \text{(all signs +, since } 000 \cdot y = 0\text{)}$$

$$H^{\otimes 3}\ket{110} = \frac{1}{\sqrt{8}}\sum_y (-1)^{110 \cdot y}\ket{y}$$

The combined amplitude of each \(\ket{y}\) is proportional to \(\left[1 + (-1)^{110 \cdot y}\right]\):

Only \(y \in \{000, 001, 110, 111\}\) can be measured. All satisfy \(s \cdot y = 110 \cdot y = 0\). ✓

Step 5: Measure input register. Get one random \(y\) from \(\{000, 001, 110, 111\}\). Repeat ~\(n\) times; for instance:
  1. Run 1: \(y = 001\) gives \(s_2 = 0\).
  2. Run 2: \(y = 110\) gives \(s_0 \oplus s_1 = 0\).

Nontrivial solution: \(s = 110\). Done in 2 oracle calls + simple algebra.

Total Complexity

Quantum: \(\sim n\) oracle calls plus \(O(n^3)\) classical work for the Gaussian elimination. Classical: \(\sim 2^{n/2}\) queries even with randomization. The gap is exponential.

Simon's algorithm: exponential speedup

This was the first exponential quantum speedup. Simon's algorithm directly inspired Peter Shor to develop Shor's factoring algorithm, which uses a similar structure (period finding) but over a different group (\(\mathbb{Z}_N\) instead of \(\mathbb{Z}_2^n\)).

5. Comparison of Oracle Types

Understanding when to use which oracle is crucial:

| Feature | Phase Oracle \(P_f\) | Boolean Oracle \(U_f\) |
|---|---|---|
| Action | \(P_f\ket{x} = (-1)^{f(x)}\ket{x}\) | \(U_f\ket{x}\ket{y} = \ket{x}\ket{y \oplus f(x)}\) |
| Registers | 1 register (no ancilla) | 2 registers (input + output) |
| Output range | \(f: \{0,1\}^n \to \{0,1\}\) only | \(f: \{0,1\}^n \to \{0,1\}^m\) for any \(m\) |
| Where is \(f(x)\)? | In the phase (\(\pm 1\)) | In the output register |
| Used by | Deutsch-Jozsa, BV, Grover | Simon's, Shor's |
| Conversion | Phase kickback: \(U_f\ket{x}\ket{-} = (-1)^{f(x)}\ket{x}\ket{-}\) turns a Boolean oracle into a phase oracle | |

Simon's algorithm must use the Boolean oracle because \(f\) maps to \(\{0,1\}^n\), not \(\{0,1\}\). You can't encode a multi-bit output in a single phase sign. The measurement of the output register (to get a specific \(f(x_0)\)) is essential — it creates the two-element superposition that makes the algorithm work.
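
Phase kickback itself can be verified numerically: build \(U_f\) as a permutation matrix, feed it \(\ket{x}\ket{-}\), and confirm the output equals \((-1)^{f(x)}\ket{x}\ket{-}\). A minimal NumPy sketch (the construction and sample \(f\) are illustrative):

```python
import numpy as np

def boolean_oracle(f, n):
    """U_f|x>|y> = |x>|y ⊕ f(x)> as a permutation matrix; last qubit is the ancilla."""
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

n = 2
f = lambda x: x & 1                            # an arbitrary sample Boolean function
U = boolean_oracle(f, n)
minus = np.array([1, -1]) / np.sqrt(2)         # ancilla prepared in |->
for x in range(2 ** n):
    basis_x = np.zeros(2 ** n)
    basis_x[x] = 1
    out = U @ np.kron(basis_x, minus)          # apply U_f to |x>|->
    assert np.allclose(out, (-1) ** f(x) * np.kron(basis_x, minus))
print("U_f|x>|-> = (-1)^f(x) |x>|-> verified")
```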

6. Summary & Comparison

| Algorithm | Problem | Classical (det.) | Classical (prob.) | Quantum |
|---|---|---|---|---|
| Deutsch | 1-bit constant/balanced | 2 | | 1 |
| Deutsch-Jozsa | \(n\)-bit constant/balanced | \(2^{n-1}+1\) | \(O(1)\) | 1 |
| Bernstein-Vazirani | Find hidden \(s\) in \(s \cdot x\) | \(n\) | | 1 |
| Simon's | Find hidden \(s\) in \(f(x)=f(x \oplus s)\) | \(2^{n-1}+1\) | \(\sim 2^{n/2}\) | \(\sim n\) |
Caveat: query complexity, not time complexity

These speedups are measured in oracle calls (query complexity). The oracle itself may be expensive to implement as a quantum circuit. These algorithms don't have known practical applications — they are theoretical demonstrations that quantum computers can be faster. The real significance is conceptual: they prove separation between classical and quantum computational models, and the techniques (phase kickback, Hadamard sandwich, linear system extraction) are reused in algorithms that do have applications.

The pattern that keeps recurring

Hadamard → Oracle → Hadamard → Measure. This sandwich pattern is the backbone of the quantum algorithms on this page. The first Hadamard creates superposition. The oracle stamps information into the phases. The second Hadamard converts those phase patterns into measurable states via interference. Later algorithms (QFT-based, Shor's) generalize this by replacing the second Hadamard with the inverse Quantum Fourier Transform — same idea, different group.

7. Exam Checklist

What you should be able to do