This was the highest-rated crypto challenge in our recent hxp CTF 2017.
In a nutshell, the task is to find a collision in a weak instantiation of the Charles-Goren-Lauter hash function on a supersingular isogeny graph.
The idea of that hash function is: Given an exponentially large graph,
we can construct a hash function using graph walking
by choosing the edge taken at each step according to the next input bit.
Once all bits are consumed, some identifier of the end node is returned.
Assuming that the underlying graph is “random”, this yields a
collision-resistant hash function.
(For example: collisions give rise to nontrivial cycles by concatenating the two different paths with the same end node;
finding a preimage would amount to finding a path between the start node and the target node. Since the graph is exponentially large, obtaining either of these objects by brute force is infeasible.)
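As a toy sketch of this generic construction (graph_walk_hash and neighbors are made-up names for illustration; this is not the challenge code):

def graph_walk_hash(start, bits, neighbors):
    # Toy CGL-style hash: take one edge per input bit and never step straight
    # back along the edge we just used (the variant used in this challenge).
    prev, node = None, start
    for b in bits:                                   # bits is a list of 0/1 values
        choices = [n for n in neighbors(node) if n != prev]
        prev, node = node, choices[b]                # the bit selects one of the remaining edges
    return node                                      # some identifier of the end node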
However, in our instantiation, the underlying graph has some efficiently computable structure that can be used to find collisions much faster than by brute force.
The graph the challenge uses is a supersingular $\ell$-isogeny graph (a.k.a. Pizer graph). It consists of (roughly, modulo taking equivalence classes here and there) one node for each supersingular elliptic curve over $\mathbb F_{p^2}$, identified by its $j$-invariant, and one edge for each $\ell$-isogeny between two such curves. Here $\ell=2$, so every node has exactly three neighbours.
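For instance, the neighbours of a node are the roots of the classical modular polynomial $\Phi_2(j,\,\cdot\,)$. A quick standalone sketch, using the same Phi_polys helper that the solution script imports further down:

from sage.modular.ssmod.ssmod import Phi_polys
p = 2 ** 124 * 17 - 1                        # the challenge prime
F.<t> = GF(p ** 2, modulus = x ** 2 + 1)
j0 = EllipticCurve(F, [1, 0]).j_invariant()  # the start node used below, j = 1728
# the three neighbours of j0 in the 2-isogeny graph (counted with multiplicity)
print(Phi_polys(2, polygen(F), j0).roots())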
The full challenge source code (written in sage) thus reads:
#!/usr/bin/env sage

p = 361550014853497117429835520396253724671
bits = 128

def cgl(m):
    n = int('0' + m.encode('hex'), 16)
    F.<i> = GF(p ** 2, modulus = x ** 2 + 1)
    E = EllipticCurve(F, [1, 0])
    P, Q = (E.lift_x(x) for x in (i, 0))
    for b in reversed(range(8 * len(m) + 1)):
        if n >> b & 1: P, Q = Q, P
        phi = E.isogeny(P)
        E = phi.codomain()
        f = E.torsion_polynomial(2) // (polygen(F) - phi(Q).xy()[0])
        P, Q = (E.lift_x(x) for x in sorted(f.roots(multiplicities = False)))
        sys.stdout.write('.'); sys.stdout.flush()
    r = ZZ(E.j_invariant().polynomial()(34192034817997177)) & (1 << bits) - 1
    return '{{:0{:d}x}}'.format(bits // 4).format(r).decode('hex')

s1 = raw_input().strip().decode('hex')
s2 = raw_input().strip().decode('hex')

if s1 == s2: print('no'); exit()
if cgl(s1) != cgl(s2): print('\nno no'); exit()

print('\n' + open('flag.txt').read().strip())
The code is pretty straightforward: Consume the input bit by bit and use
the value of each bit to decide which out-edge to take at the current node
(each node has three, but one of them is excluded at every step; see below).
Note that by dividing out $Y-x(\varphi(Q))$ from the division polynomial,
we avoid the edge dual to the one we just came from.
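To see this concretely, here is a small standalone check (psi is an ad-hoc name, not part of the challenge code): the root that gets divided out is the x-coordinate of a generator of the dual isogeny's kernel, so the corresponding edge leads straight back to the curve we came from.

p = 2 ** 124 * 17 - 1
F.<t> = GF(p ** 2, modulus = x ** 2 + 1)
E = EllipticCurve(F, [1, 0])
P, Q = (E.lift_x(xc) for xc in (t, 0))       # two of the three 2-torsion points
phi = E.isogeny(P)                           # walk the edge with kernel <P>
psi = phi.codomain().isogeny(phi(Q))         # the edge whose root is divided out...
assert psi.codomain().j_invariant() == E.j_invariant()   # ...leads back to E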
Moreover, observe that we first take a step away from the starting node
before consuming the actual input bits: Otherwise, (e.g.) the byte strings 00 and 7f would collide.
(The “why” is left as an exercise to the reader ;-).)
WARNING: hard math ahead — read at your own risk.
Since this margin (err, web page) is too narrow to contain all the
background theory required for the solution, I will only present an
outline of the solution and some useful references.
If you have questions about any of this stuff, I’m happy to help!
(A comprehensive general reference for quaternion algebras is Voight’s “Quaternion algebras”.)
Almost all of the theory below is described in Petit-Lauter’s “Hard and easy problems for supersingular isogeny graphs”; this attack specifically is a specialized (easier) version of Section 4.2.
The weakness of this hash function is the start node: Its endomorphism ring is known! It is \[ \mathcal O_0=\Big\langle 1,\, i,\, \frac{1+ij}2,\, \frac{i+j}2\Big\rangle \text, \] where the endomorphisms $i$ and $j$ are given by \[ i\colon (x,y)\mapsto (-x,\sqrt{-1}y) \\ j\colon (x,y)\mapsto (x^p,y^p) \text. \]
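A quick sage sanity check of these maps (a standalone sketch; iota and frob are ad-hoc names): on a random point, $i$ and $j$ behave like quaternion generators with $i^2=-1$, $j^2=-p$ and $ji=-ij$.

p = 2 ** 124 * 17 - 1
F.<t> = GF(p ** 2, modulus = x ** 2 + 1)     # t plays the role of sqrt(-1)
E0 = EllipticCurve(F, [1, 0])
iota = lambda P: E0(-P[0], t * P[1])         # the endomorphism i from above
frob = lambda P: E0(P[0] ** p, P[1] ** p)    # the endomorphism j (Frobenius)
P = E0.random_point()
assert iota(iota(P)) == -P                   # i^2 = -1
assert frob(frob(P)) == -p * P               # j^2 = -p
assert frob(iota(P)) == -iota(frob(P))       # ji = -ij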
This is useful since (for a fixed supersingular elliptic curve $E_0$) there exists a bijection \[ \{\text{left ideals of $\operatorname{End}(E_0)$ of norm coprime to $p$}\} \overset\sim\longleftrightarrow \{\text{separable isogenies $E_0\to E$}\} \text, \] and the forward direction of this correspondence is computable if the endomorphism ring of $E_0$ is known (and given in some computationally useful format). From ideals to isogenies, the correspondence is given by \[ \mathcal I \mapsto \text{(the isogeny with kernel $\bigcap_{\alpha\in\mathcal I}\operatorname{ker}\alpha$)} \text. \] Moreover, the degree of an isogeny equals the norm of the corresponding ideal!
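For a tiny worked example of this dictionary (easy to check by hand, not needed for the attack): the element $1+i$ has reduced norm $2$, and since $i$ fixes the $2$-torsion point $(0,0)$, we get $(1+i)\big((0,0)\big)=2\cdot(0,0)=\infty$, hence \[ \bigcap_{\alpha\in\mathcal O_0(1+i)}\operatorname{ker}\alpha = \operatorname{ker}(1+i) = \langle(0,0)\rangle \text. \] Thus the principal left ideal $\mathcal O_0(1+i)$ of norm $2$ corresponds to the $2$-isogeny of $E_0$ with kernel $\langle(0,0)\rangle$.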
Therefore, finding a nontrivial cycle in the $2$-isogeny graph (i.e., a collision for the hash function) reduces to finding a principal left ideal of $\mathcal O_0$ of $2$-power norm and computing the corresponding isogeny. In our case, we have the additional constraint that the hash function only consumes whole bytes, so the cycle must have a certain length modulo $8$.
Finding a principal left ideal of $2$-power norm boils down to finding an element of $2$-power norm in $\mathcal O_0$. Petit-Lauter obtain such an element by estimating how large the exponent $e$ in $2^e$ must be for a solution to exist (spoiler: about $2\log_2p$), and then choosing random $y$ and $z$ until the equation \[ w^2+x^2 = 2^e - p(y^2+z^2) \] has an integer solution $(w,x)$, which can be found using Cornacchia’s algorithm. Then $q := w+xi+yj+zij$ is an element of $\mathcal O_0$ of norm $2^e$.
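For a toy illustration of this step with the (obviously non-challenge) values $p=7$ and $e=5$: choosing $y=z=1$ leaves \[ 2^5 - 7\cdot(1^2+1^2) = 18 = 3^2+3^2 \text, \] so the element $3+3i+j+ij$ has norm $3^2+3^2+7\cdot(1^2+1^2)=32=2^5$.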
The next step is recovering the cycle in the isogeny graph corresponding to the ideal $\mathcal O_0q$. According to the formula above, the kernel is the intersection of the kernels of all elements (endomorphisms) in $\mathcal O_0q$. To find the kernel of an endomorphism of degree $m$, we could consider its action on a basis of the $m$-torsion and solve a discrete logarithm system. For our parameters, discrete logarithms are easy to compute due to smoothness. However, the kernel of the isogeny corresponding to $\mathcal O_0q$ is a subgroup of $E_0[2^e]$, and since $e\approx256$, the points of that group are only defined over a huge extension of $\mathbb F_{p^2}$. Hence this approach is computationally infeasible.
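In formulas (for the special case that is relevant here): if $(P,Q)$ is a basis of $E_0[m]$ and $\alpha$ is an endomorphism of degree $m$ with cyclic kernel, then \[ \alpha(P) = c\cdot\alpha(Q) \implies P-cQ \in \operatorname{ker}\alpha \text, \] and since $P-cQ$ has order $m$, it already generates the kernel; so a single discrete logarithm $c$ suffices. (This is what the endo_kernel() helper in the script below does.)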
In general, Petit-Lauter solve this problem by (for each $k$) computing an ideal corresponding to the first $k$ steps of the isogeny and reducing it to an equivalent ideal of powersmooth norm. Due to the powersmoothness, that ideal can then be translated to an isogeny in polynomial time (and using only “small” field extensions). From the codomains of these shortened isogenies, it is subsequently easy to recover the correct isogeny as a composition of degree-$2$ isogenies. Although this works fine in theory, it is quite tricky to implement efficiently in practice, and “polynomial time” still does not imply a lot about the concrete running time of the algorithm.
It is much easier to make use of a special property of our parameters: The prime $p=2^{124}\cdot 17-1$ was chosen nicely such that the whole $2^{124}$-torsion of $E_0$ is defined over $\mathbb F_{p^2}$, thus we can directly translate ideals of norm $2^k$ for $k<124$ without transitioning to higher extension fields. Since our $e$ is about $256$, this seems like it’s still not enough to recover the whole path defined by $\mathcal O_0q$, but here’s a trick to make it work: Instead of following the entire cycle in one direction, we can easily construct two (almost) halves of the cycle by also walking in the opposite direction from the start node. The isogeny corresponding to the opposite direction is just the dual isogeny, hence it is defined by the conjugate $\mathcal O_0\overline q$. To “shorten” these two isogenies to $123$ steps, we instead compute and translate the ideals \[ \mathcal O_0q+2^{123}\mathcal O_0 \] and \[ \mathcal O_0\overline q+ 2^{123}\mathcal O_0 \] of norm $2^{123}$. This still won’t give the whole path without taking field extensions, but the remaining few steps can simply be brute-forced starting from the codomains of the isogenies we just constructed.
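One way to see why exactly the $2^{124}$-torsion is rational: on $E_0$ the $p^2$-power Frobenius acts as multiplication by $-p$, hence \[ E_0(\mathbb F_{p^2}) = E_0[p+1] \cong \mathbb Z/(p+1) \times \mathbb Z/(p+1) \text, \] and $p+1 = 2^{124}\cdot 17$, so all of $E_0[2^{124}]$ consists of $\mathbb F_{p^2}$-rational points.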
Once we have found all the nodes on the cycle, all that remains to
be done is determining the input bits that will make the cgl()
function follow the desired path.
The steps outlined above can be implemented in sage as follows:
#!/usr/bin/env sage
import binascii

p = 2 ** 124 * 17 - 1
e = 260

F.<t> = GF(p ** 2, modulus = x ** 2 + 1)
E0 = EllipticCurve(F, [1, 0])

B.<i,j,k> = QuaternionAlgebra(-1, -p)
O0 = B.quaternion_order([1, i, (1 + k) / 2, (i + j) / 2])
assert O0.discriminant() == B.discriminant() # is maximal

# the endomorphisms of E0 corresponding to the quaternion basis 1, i, j, k
endos = [
    lambda x, y: E0(x, y),
    lambda x, y: E0(-x, t * y),
    lambda x, y: E0(x**p, y**p),
    lambda x, y: E0(-x**p, t * y**p),
]

# evaluate a quaternion f (possibly with half-integer coefficients) at a point P
def eval_endo(f, P):
    d = lcm([c.denominator() for c in list(f)])
    Pd = P.division_points(d)[0]
    return sum(ZZ(d * c) * r(*Pd.xy()) for c, r in zip(f, endos))

# find an element of O0 of reduced norm l^e, as described above
# (two_squares() plays the role of Cornacchia's algorithm)
def find_element_norm_power(l, e):
    while True:
        y, z = (randrange(ceil(sqrt(l ** e / p))) for _ in range(2))
        N = l ** e - p * (y ** 2 + z ** 2)
        if not is_prime(N): continue
        try:
            w, x = two_squares(N)
        except ValueError: continue
        r = w + x * i + y * j + z * k
        assert r.reduced_norm() == l ** e
        return r

# given a basis (P, Q) of the m-torsion and its images (gP, gQ) under an
# endomorphism, find a point in the kernel of that endomorphism
def endo_kernel(P, Q, gP, gQ, m):
    # only solves a special case
    try: return P - gQ.discrete_log(gP, m) * Q
    except ValueError: return Q - gP.discrete_log(gQ, m) * P

from sage.algebras.quatalg.quaternion_algebra import basis_for_quaternion_lattice

# sum of two left O0-ideals
def ideal_sum(I, J):
    L = basis_for_quaternion_lattice(list(I.basis()) + list(J.basis()), True)
    return I.left_order().left_ideal(L)

# translate the left ideal I (taken modulo l^e*O0) into the corresponding chain
# of e l-isogeny steps starting at E0; returns the list of curves visited
def ideal_to_path(I, l, e):
    I = ideal_sum(I, O0.unit_ideal().scale(l ** e))
    assert I.norm() == l ** e
    P, Q = (E0.lift_x(x) for x in (t, 0))
    for _ in range(1, e):
        Ps, Qs = ([R for R in S.division_points(l) if R != S] for S in (P, Q))
        P, Q = sorted(Ps)[0], sorted(Qs)[0]
    gP, gQ = (eval_endo(I.basis()[0], S) for S in (P, Q))
    K = endo_kernel(P, Q, gP, gQ, l ** e)
    Es = [E0]
    for i in range(e):
        phi = Es[-1].isogeny(l ** (e - i - 1) * K)
        E, K = phi.codomain(), phi(K)
        Es.append(E)
    return Es

################################################################################

from sage.modular.ssmod.ssmod import Phi_polys

# enumerate all non-backtracking walks of length d in the l-isogeny graph
# starting at j; returns a dict mapping each reachable endpoint to one walk
def walk(j, l, d):
    tab, q = {}, [[j]]
    for step in reversed(range(d)):
        print('{:3d}...'.format(step))
        newq = []
        for js in q:
            for j in Phi_polys(l, polygen(F), js[-1]).roots(multiplicities = False):
                if len(js) >= 2 and j == js[-2]:
                    continue
                if step:
                    newq.append(js + [j])
                else:
                    tab[j] = js + [j]
        q = newq
    print
    return tab

# determine the input bits that make cgl() walk through the given j-invariants
def path_to_bytes(path):
    assert len(path) % 8 == 2
    inp = 0
    E, (P, Q) = E0, (E0.lift_x(x) for x in (t, 0))
    assert P.order() == Q.order() == 2
    assert E.isogeny_codomain(P).j_invariant() != E0.j_invariant() # forced first step
    for j in path[1:]:
        inp <<= 1
        if E.isogeny_codomain(Q).j_invariant() == j:
            inp, P, Q = inp | 1, Q, P
        phi = E.isogeny(P)
        E = phi.codomain()
        assert E.j_invariant() == j
        f = E.torsion_polynomial(2) // (polygen(F) - phi(Q).xy()[0])
        P, Q = (E.lift_x(x) for x in sorted(f.roots(multiplicities = False)))
    return binascii.unhexlify('{{:0{:d}x}}'.format(len(path) // 4).format(inp))

################################################################################

l = 2

# find an element q of norm l^e whose ideal O0*q is cyclic
# (no backtracking in the corresponding path)
while True:
    q = find_element_norm_power(l, e)
    for r in range(1, e):
        if ideal_sum(O0.unit_ideal().scale(q), O0.unit_ideal().scale(l ** r)).norm() != l ** r:
            # not cyclic
            break
    else: break

print('e = {}'.format(e))
print('q = {}'.format(q))
print('-' * 80 + '\n')

# translate O0*q and its conjugate into two isogeny paths of 123 steps from E0
paths = [
    ideal_to_path(O0.unit_ideal().scale(q), l, (p + 1).valuation(2) - 1),
    ideal_to_path(O0.unit_ideal().scale(q.conjugate()), l, (p + 1).valuation(2) - 1)
]
paths = [[E.j_invariant() for E in path] for path in paths]

for path in paths:
    for i, (j1, j2) in enumerate(zip(path, path[1:])):
        print('{:3d}: {} -> {}'.format(i, j1, j2))
        assert Phi_polys(l, polygen(F), j1)(j2) == 0
    print
print('-' * 80 + '\n')

# brute-force all short walks from both endpoints and pick a common end node
# ("meet in the middle")
walks = [walk(path[-1], l, e // 2 - len(path) + 1) for path in paths]
print('-' * 80 + '\n')
meet = sorted(set(walks[0].keys()) & set(walks[1].keys()), key = lambda _: random())[0]

for path, walk in zip(paths, walks):
    for i, (j1, j2) in enumerate(zip(walk[meet], walk[meet][1:]), len(path) - 1):
        print('{:3d}: {} -> {}'.format(i, j1, j2))
        assert Phi_polys(l, polygen(F), j1)(j2) == 0
    print
paths = [path + walk[meet][1:] for path, walk in zip(paths, walks)]

print('=' * 80 + '\n')

# turn the two colliding paths into colliding byte strings for cgl()
for path in paths:
    print('{}'.format(binascii.hexlify(path_to_bytes(path[1:])).decode()))
    print
The script is randomized, so it produces a lot of different collisions. Here’s one possible output:
ccc1e85731c3a43378839b1f3b94ed51
1a4121d9e5edf01e583777c10ec242bb
Upon receiving those two hex strings, the server prints the flag:
hxp{I'm_f33L1nG_a_b1T_sup3rs1nguL4r_t0daY}