Magnet CTF Week 5: I'm sorry, what?

TL;DR: Week 5 of the #MagnetWeeklyCTF got a little sporty with the addition of a Linux image (yay) and Hadoop questions (oh no).

Review

Check out the week 1 blog post for how to get started on the Magnet Weekly CTF.

Get the challenge

The weekly challenge for week 5 was:

What is the original filename for block 1073741825?

Okay, seems straightforward enough: we just need to know how to map a file system's block number back to the original filename.

Open the target file(s)

Magnet provided three disk images this week, all in EnCase format (.E01). To open them on a Linux machine, you need to install ewf-tools and use the ewfmount command. That gives you an image named ewf1, which you can then mount to get at the actual file system.

Note: I was doing this last minute and hit some odd permission errors with ewfmount; it mounted everything as owned by root despite not being run as root. That all cleared up once I made sure to sudo everything. I also had to specify the byte offset of the partition I was trying to mount. All three drives have their main partition starting 2048 sectors into the image, and the sectors are 512 bytes, so the offset is 512 x 2048 = 1048576. I also made sure to specify in the mount options (-o) that it was a loop device, read-only (ro), and that it should not try to recover anything (norecovery).
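
If you would rather confirm that offset than take it on faith, you can read the partition table straight out of the ewf1 image and do the arithmetic yourself. A quick sketch, assuming fdisk is available (mmls from The Sleuth Kit works just as well):

# Print the partition table inside the mounted EWF image; the main partition
# starts at sector 2048 with 512-byte sectors.
sudo fdisk -l /mnt/ewf-master/ewf1

# Start sector times sector size gives the byte offset to pass to mount.
echo $((2048 * 512))   # 1048576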

[notta@cuppa case2]$ unzip Case2-HDFS.zip 
Archive:  Case2-HDFS.zip
 extracting: HDFS-Slave1.E01.txt     
 extracting: Case2-HDFS_meta.xml     
 extracting: Case2-HDFS_meta.sqlite  
 extracting: HDFS-Slave2.E01.csv     
 extracting: HDFS-Slave2.E01         
 extracting: Case2-HDFS_files.xml    
 extracting: Case2-HDFS_archive.torrent  
 extracting: HDFS-Master.E01.csv     
 extracting: HDFS-Slave2.E01.txt     
 extracting: HDFS-Master.E01         
 extracting: HDFS-Master.E01.txt     
 extracting: HDFS-Slave1.E01         
 extracting: HDFS-Slave1.E01.csv     
[notta@cuppa case2]$ sudo mkdir /mnt/ewf-master
[notta@cuppa case2]$ sudo mkdir /mnt/ewf-slave1
[notta@cuppa case2]$ sudo mkdir /mnt/ewf-slave2
[notta@cuppa case2]$ sudo ewfmount HDFS-Master.E01 /mnt/ewf-master/
ewfmount 20140608

[notta@cuppa case2]$ sudo ewfmount HDFS-Slave1.E01 /mnt/ewf-slave1/
ewfmount 20140608

[notta@cuppa case2]$ sudo ewfmount HDFS-Slave2.E01 /mnt/ewf-slave2/
ewfmount 20140608

[notta@cuppa case2]$ sudo mount /mnt/ewf-master/ewf1 /mnt/case2_master/ \
-o ro,loop,norecovery,offset=1048576
[notta@cuppa case2]$ sudo mount /mnt/ewf-slave1/ewf1 /mnt/case2_slave1/ \
-o ro,loop,norecovery,offset=1048576
[notta@cuppa case2]$ sudo mount /mnt/ewf-slave2/ewf1 /mnt/case2_slave2/ \
-o ro,loop,norecovery,offset=1048576

Learn some background

Having never run a Hadoop cluster myself, I did some brief googling to figure out what made Hadoop's file system different from the others I knew. Apache's HDFS design document gives a great overview of the file system, and Stack Overflow comes in clutch as always with the command that should be the answer: hadoop fsck / -files -blocks | grep blk_1073741825. That is the older syntax; it should be even easier with hdfs fsck -blockId blk_1073741825.
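
For reference, both of those are meant to be run against a live cluster (which, spoiler, I never managed to get going). A rough sketch of what that would look like:

# Older syntax: walk the whole namespace, list every file with its blocks,
# then grep for the block ID in question.
hadoop fsck / -files -blocks | grep blk_1073741825

# Newer syntax: ask the namenode about a specific block ID directly.
hdfs fsck -blockId blk_1073741825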

Derp for an hour or two

With the disks mounted, I tried a lot of different ways to get hdfs fsck to work, to no avail. This was a serious point of frustration, mainly because it was last minute, I didn't have time to go study how to do it right, and I very much wanted sleep. Finally, my brain pulled my head out of the frustration long enough to try something else (which I should always do far sooner than I do).

Find the Fsimage file

I at least had enough understanding at this point to know I was looking for the fsimage files. They literally could not be found anywhere on the copies I mounted on my first machine, and in desperation I copied everything over to another testbed and tried again. Lo and behold, when I did a find | grep fsimage this time, there they were! I am fairly certain one of the ways I tried to get Hadoop running overwrote them initially, before I remembered the ro flag on mount. On the HDFS-Master.E01 image, they live under /usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_*. With one of those open in a hex editor, you can clearly see the filename present, at which point I tried "AptSource" and got the flag.
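
The search itself was nothing fancier than find; against the mount points from earlier it looks something like this (a sketch; adjust the mount point to wherever you put the master image):

# Look for the namenode's fsimage files anywhere on the mounted master image.
sudo find /mnt/case2_master -name 'fsimage_*' 2>/dev/null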

[notta@cuppa case2_master]$ hexdump -C \
usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000024

00000000  48 44 46 53 49 4d 47 31  16 08 b8 c5 8f d4 01 10  |HDFSIMG1........|
00000010  e8 07 18 ea 07 20 00 28  82 80 80 80 04 30 18 06  |..... .(.....0..|
00000020  08 84 80 01 10 04 2f 08  02 10 81 80 01 1a 00 2a  |....../........*|
00000030  25 08 d3 f3 d1 e7 f9 2b  10 ff ff ff ff ff ff ff  |%......+........|
00000040  ff 7f 18 ff ff ff ff ff  ff ff ff ff 01 21 ed 01  |.............!..|
00000050  02 00 00 01 00 00 34 08  02 10 82 80 01 1a 04 74  |......4........t|
00000060  65 78 74 2a 26 08 bd ca  fe f0 f9 2b 10 ff ff ff  |ext*&......+....|
00000070  ff ff ff ff ff ff 01 18  ff ff ff ff ff ff ff ff  |................|
00000080  ff 01 21 ed 01 02 00 00  01 00 00 41 08 01 10 83  |..!........A....|
00000090  80 01 1a 09 41 70 74 53  6f 75 72 63 65 22 2e 08  |....AptSource"..|
000000a0  02 10 98 9e d2 e7 f9 2b  18 dc 97 d2 e7 f9 2b 20  |.......+......+ |
000000b0  80 80 80 40 29 a4 01 02  00 00 01 00 00 32 0c 08  |...@)........2..|
000000c0  81 80 80 80 04 10 e9 07  18 a4 17 50 00 41 08 01  |...........P.A..|
000000d0  10 84 80 01 1a 08 73 65  72 76 69 63 65 73 22 2f  |......services"/|
000000e0  08 02 10 b1 ca fe f0 f9  2b 18 fb c4 fe f0 f9 2b  |........+......+|
000000f0  20 80 80 80 40 29 a4 01  02 00 00 01 00 00 32 0d  | ...@)........2.|
00000100  08 82 80 80 80 04 10 ea  07 18 95 99 01 50 00 09  |.............P..|
00000110  08 81 80 01 12 03 82 80  01 0c 08 82 80 01 12 06  |................|
00000120  83 80 01 84 80 01 04 08  00 18 00 08 08 00 10 00  |................|
00000130  18 00 20 00 06 08 01 10  00 18 00 02 08 02 0e 08  |.. .............|
00000140  02 12 0a 73 75 70 65 72  67 72 6f 75 70 0a 08 01  |...supergroup...|
00000150  12 06 68 61 64 6f 6f 70  c2 01 08 01 10 c1 ff ff  |..hadoop........|
00000160  ff 0f 22 0d 0a 07 4e 53  5f 49 4e 46 4f 10 17 18  |.."...NS_INFO...|
00000170  08 22 0c 0a 05 49 4e 4f  44 45 10 f0 01 18 1f 22  |."...INODE....."|
00000180  10 0a 09 49 4e 4f 44 45  5f 44 49 52 10 17 18 8f  |...INODE_DIR....|
00000190  02 22 1e 0a 17 46 49 4c  45 53 5f 55 4e 44 45 52  |."...FILES_UNDER|
000001a0  43 4f 4e 53 54 52 55 43  54 49 4f 4e 10 00 18 a6  |CONSTRUCTION....|
000001b0  02 22 0f 0a 08 53 4e 41  50 53 48 4f 54 10 05 18  |."...SNAPSHOT...|
000001c0  a6 02 22 16 0a 0f 49 4e  4f 44 45 5f 52 45 46 45  |.."...INODE_REFE|
000001d0  52 45 4e 43 45 10 00 18  ab 02 22 15 0a 0e 53 45  |RENCE....."...SE|
000001e0  43 52 45 54 5f 4d 41 4e  41 47 45 52 10 09 18 ab  |CRET_MANAGER....|
000001f0  02 22 14 0a 0d 43 41 43  48 45 5f 4d 41 4e 41 47  |."...CACHE_MANAG|
00000200  45 52 10 07 18 b4 02 22  13 0a 0c 53 54 52 49 4e  |ER....."...STRIN|
00000210  47 5f 54 41 42 4c 45 10  1d 18 bb 02 00 00 00 c4  |G_TABLE.........|
00000220

Alternatives

Log Files

Early on, before I was willing to commit to installing Hadoop and while I still couldn't get the images to mount, I tried my usual opening move: run grep and see what shows up. I saw an answer and tried it, but my lack of Hadoop knowledge meant I submitted the wrong thing. The filename and block ID were both sitting in a log file; my mistake was taking the entire path given in the log line instead of only the name.

[notta@cuppa ewf_files]$ strings ewf1 | grep 1073741825

... [snip]
2017-11-08 20:46:33,257 INFO org.apache.hadoop.hdfs.StateChange: BLOCK allocate blk_1073741825_1001, replicas=192.168.2.100:50010, 192.168.2.101:50010 for /text/AptSource._COPYING_
2017-11-08 20:46:33,602 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /text/AptSource._COPYING_

Notice these two lines imply that block blk_1073741825_1001 is allocated for "/text/AptSource._COPYING_". I figured the "._COPYING_" was likely garbage, but thought the "/text/" was needed. As it turns out, the filename itself was "AptSource", clearly present. I had the right answer, but lacked the understanding to submit it correctly.
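
Incidentally, once the images are mounted you don't need to run strings over the whole raw device; the same lines should be sitting in the Hadoop logs on the master. A sketch, assuming the default log location under the Hadoop install directory found above:

# Grep the Hadoop logs on the mounted master image for the block ID.
sudo grep -r 'blk_1073741825' /mnt/case2_master/usr/local/hadoop/logs/ 2>/dev/null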

File Size

Along the way, I also discovered that the size of the file is recorded in the log file, and using find -size [size]c I was able to track down the original source file. Unfortunately, that name is different, and not the actual "original" filename the question asks for.

[notta@cuppa case2_master]$ find -size 2980c
./usr/lib/grub/i386-pc/iorw.mod
./usr/lib/x86_64-linux-gnu/perl-base/unicore/To/Ea.pl
./usr/share/perl/5.22.1/unicore/To/Ea.pl
./home/hadoop/temp/sources.list
./boot/grub/i386-pc/iorw.mod

Looking at these files, and at the contents of the actual block referenced in usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000024, showed that the original file was home/hadoop/temp/sources.list.
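
If you want to confirm the match on content rather than size alone, the block replica itself is just a flat file on the slave images, so you can diff it against the candidate directly. A sketch, assuming the replica sits somewhere under one of the slave mounts (the exact subdirectory under the datanode's data directory varies):

# Locate the raw block replica on the slave images (the data file is named
# exactly blk_1073741825; its checksum sidecar ends in .meta).
sudo find /mnt/case2_slave1 /mnt/case2_slave2 -name 'blk_1073741825' 2>/dev/null

# Diff the first replica found against the candidate source file on the master.
sudo diff "$(sudo find /mnt/case2_slave1 /mnt/case2_slave2 -name 'blk_1073741825' 2>/dev/null | head -n 1)" \
    /mnt/case2_master/home/hadoop/temp/sources.list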

Console history

You can also find this file by examining the actions of the admin running the cluster. Looking at the user's bash history gives insight into when the cluster was set up and this file was pushed in. You can even see that the admin made the same mistake I usually do: trying to put a file into a directory that doesn't exist yet.

[notta@cuppa case2_master]$ cat home/hadoop/.bash_history | grep "hdfs dfs "
hdfs dfs -ls /
hdfs dfs -put sources.list /text/AptSource
hdfs dfs -mkdir /text/
hdfs dfs -put sources.list /text/AptSource

In this case, the admin pushed in the file sources.list, which we had already identified as having the same size and content as the block in question.

Conclusion

I have a lot to learn over the next few weeks and very little spare time to do so, unfortunately. I wasted a lot of time trying to get the Internet's assumed answer working instead of looking at the evidence in front of me and working with it. This is going to be rough, but I sure hope I can keep scraping by with the usual CLI tools without actually getting Hadoop running.