Pentesting and Reverse Engineering Tools

This post serves as an 'interview checklist' for a prospective penetration tester or anyone heading into a cybersecurity career. It covers the tools I have used at university and where each sits within the wider suite a tester might reach for.

OSINT

OSINT (open-source intelligence) gathering is typically the first stage of an engagement, before any attack on a company's infrastructure is attempted. It consists of looking through all publicly available information on a company, e.g., its IP ranges, key people, open ports, exposed services, etc.

When doing web application testing, this might involve enumerating all the pages available on a website, usually through automated crawling. At university, we were taught about nmap, whois, dig, etc. You can also look through a company's LinkedIn, public commits, and blog posts to see who works there and which projects they have on the go.
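As a minimal illustration of the resolution step behind tools like dig and nslookup, here is a sketch using only Python's standard library (the hostname is a placeholder):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Resolve a hostname to its unique A/AAAA addresses,
    similar to a bare-bones dig/nslookup query."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address string is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # Placeholder target; only enumerate infrastructure you are authorised to test.
    print(resolve("example.com"))
```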

Once a company has been comprehensively mapped, we can begin looking for ways into its infrastructure. This might be through the web, exploiting unpatched vulnerabilities; by bribing an insider to install a payload for us; through USB dropping, where we rely on a curious employee plugging in a malware-laden USB drive; or by running a phishing campaign with tools such as Gophish. We can register a look-alike domain, found with dnstwister, then send emails pretending to come from inside the company.
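To give a flavour of the dnstwister-style idea, here is a rough sketch that generates a few look-alike candidates for a domain; real tools use far more permutation strategies (homoglyphs, TLD swaps, keyboard adjacency, and so on):

```python
import string

def typo_domains(domain: str) -> set[str]:
    """Generate simple look-alike candidates for a domain,
    in the spirit of dnstwister (omissions and substitutions only)."""
    name, _, tld = domain.rpartition(".")
    candidates = set()
    for i in range(len(name)):
        # Character omission, e.g. exmple.com
        candidates.add(f"{name[:i]}{name[i+1:]}.{tld}")
        # Character substitution, e.g. exanple.com
        for c in string.ascii_lowercase:
            candidates.add(f"{name[:i]}{c}{name[i+1:]}.{tld}")
    candidates.discard(domain)  # drop the original domain itself
    return candidates

# Placeholder domain; a phishing assessment would then check which
# candidates are unregistered and available.
print(sorted(typo_domains("example.com"))[:10])
```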

A lot of OSINT can be automated for us, finding issues such as potential CVEs, misconfigured firewalls, and software versions exposed to the internet, and generating a report for a penetration tester to work from.

Many organisations' security teams pay large amounts for products that do exactly this, with vulnerabilities flagged automatically. These technologies fall under the managed detection and response (MDR) category of services, with automated threat hunting and reporting to a dedicated security operations centre (SOC).

The Cyber Kill Chain

When formulating an attack, both the threat actor and the defenders can use this model (originally developed by Lockheed Martin) to see which steps an attacker is likely to take and what can be done to prevent each one.

The first step is reconnaissance, which I covered as part of the OSINT above. Following this, we have weaponisation, where we develop a piece of malware to compromise a system. This is then sent via some delivery method, e.g., an insider, phishing, or exploiting a known CVE.

After delivery, the exploitation phase takes advantage of the vulnerabilities we have discovered in the system. Sometimes this phase alone reaches the target we want to compromise. Following exploitation comes installation, where malware such as a Trojan horse or rootkit embeds itself to keep control of the system. Once installed, the malware reaches out to command and control (C2) infrastructure to receive instructions from the attackers.

Finally, we have the actions-on-objectives stage, where the attacker has taken control of the network and can begin carrying out their goal, e.g., stealing company or client data, or using the machines as part of a botnet for a denial-of-service attack.

[Figure: The Cyber Kill Chain. Image by Farcaster, from Wikipedia.]

Internal On-Host Detection

In addition to securing their exposed infrastructure, companies will typically run a daemon or agent (often an endpoint detection and response, or EDR, product) on all machines and networking appliances within the internal network, to quickly and effectively detect and remediate any issues. This means that should an attacker gain a foothold on a bastion host or an internal machine, their chances of lateral movement to a more privileged or important host are limited.

These tools typically complement antivirus applications such as Windows Defender, providing signature-based checking, configuration file parsing, version detection, etc., and comparing the results against known CVEs to see what issues might be present on the network. They can also look for suspicious payloads or anomalies on the network and alert the relevant teams. Sometimes, intrusion detection systems may even disable switch ports to isolate potentially compromised appliances.
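As a toy illustration of the signature-based side of this, here is a sketch that hashes files and compares the digests against a placeholder known-bad list; real agents use far richer signatures plus behavioural heuristics:

```python
import hashlib
from pathlib import Path

# Hypothetical signature database: SHA-256 digests of known-bad files.
# This entry is the digest of an empty file, used purely as a placeholder.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> list[Path]:
    """Flag any file under root whose digest matches a known-bad signature."""
    return [p for p in root.rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]

print(scan(Path("/tmp")))
```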

Web and Remote

One of my main areas of 'expertise' is web application testing, having completed a couple of coursework assignments at university: one compromising hosts (with permission, on an isolated network), and one patching a beyond-terrible PHP WordPress-like application against a variety of exploits (check out the Fat-Free Framework 🤮).

When contracting with clients, we typically agree on the scope of the test and any parts of their infrastructure that are out of bounds. Testing can then begin.

Burp Suite

For the penetration testing side of things, I made use of Burp Suite, a proprietary intercepting proxy. It sits between the browser and the site being visited, letting us see every request the browser makes, and modify those requests too. We can set the scope to a domain, a subdomain, or even part of a domain; then, as we browse the site, we passively build up the sitemap and the endpoints we might attack.
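Scripted traffic can be routed through Burp too, so it shows up in the proxy history alongside the browser's. A minimal sketch, assuming Burp's default proxy listener on 127.0.0.1:8080 and a placeholder target URL:

```python
import requests

# Burp's proxy listener defaults to 127.0.0.1:8080; both schemes
# route through the same listener.
BURP_PROXY = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# verify=False skips TLS validation because Burp re-signs traffic with
# its own CA; in practice you would install and trust Burp's CA cert instead.
resp = requests.get("https://example.com/", proxies=BURP_PROXY, verify=False)
print(resp.status_code, len(resp.text))
```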

The professional version of the tool automates this scanning by crawling recursively, then trying various wordlists. Following the scan, Burp reports the issues it has found on the site and their severity. A security auditor or pentester can then add these to a report to be fixed, or exploit a vulnerability within the app to see whether they can escalate their privileges further and compromise more user data.

Top Vulnerabilities

I'd only be regurgitating these here, so it makes a lot more sense to link straight to the OWASP page. The OWASP Top 10 highlights the ten most critical web application security risks, refreshed every few years, as a sort of 'state of the industry'.

These map to different Common Weakness Enumeration (CWE) types, and OWASP provide a list of most of the web-related ones here.
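To make one of these concrete: the classic injection entry (CWE-89, SQL injection) boils down to user input being allowed to change a query's structure. A minimal self-contained demonstration using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1)")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes a tautology and every row is returned.
query = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # [('alice', 1)]

# Fixed: a parameterised query treats the input purely as data.
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())  # []
```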

CVEs

Lots of applications have vulnerabilities somewhere in their code. Despite our best efforts as developers, sometimes array sizes are not checked (leading to buffer overflows), exceptions are not properly handled, or rate limits aren't applied correctly. To track all these issues, the security community assigns CVE identifiers of the form CVE-YYYY-NNNN (with four or more digits), showing the year and a sequence number for each vulnerability.
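Because the format is so regular, pulling CVE identifiers out of free text (an advisory, a scan report) is straightforward; a small sketch:

```python
import re

# CVE IDs are CVE-<year>-<sequence>; the sequence is four or more digits.
CVE_RE = re.compile(r"CVE-(\d{4})-(\d{4,})")

def extract_cves(text: str) -> list[str]:
    """Pull CVE identifiers out of free text."""
    return [f"CVE-{year}-{seq}" for year, seq in CVE_RE.findall(text)]

# Log4Shell and Heartbleed, as example identifiers.
print(extract_cves("Fixed CVE-2021-44228 and CVE-2014-0160 in this release."))
```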

Typically, cybersecurity researchers will follow a responsible disclosure procedure and let the vendor know ahead of publishing their research, both to allow the vendor time to fix the vulnerability and to encourage them to fix it with some sense of urgency. Disclosure windows vary, but something like 90 days from notification (as popularised by Google's Project Zero) is common, striking a balance between allowing time for a fix and not leaving the vulnerability to be discovered and used by others.

When updates are published to fix the issue, the CVE details are also published, often with a working proof-of-concept exploit, meaning that anyone who doesn't then update is vulnerable. Many frameworks use these to exploit out-of-date software, with automated detection and exploitation commonly employed.

Malware and Reverse Engineering

Once on a system, we likely want to escalate privileges. On systems such as Linux boxes, this might be done by running our own programs on the filesystem. Historically, crafting inputs to cause buffer overflows in binaries running with setuid or seteuid privileges, then jumping to our own injected code, might have been possible, but this is largely mitigated today: ASLR randomises where code and data are loaded, and NX/DEP prevents the data sections of a program from being executed.
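On Linux, the system-wide ASLR setting is exposed through procfs, so checking it from a foothold is trivial; a small Linux-only sketch (the path and its values are standard kernel behaviour):

```python
from pathlib import Path

# /proc/sys/kernel/randomize_va_space reports the ASLR setting:
# 0 = disabled, 1 = stack/mmap/VDSO randomised, 2 = full (heap/brk too).
def aslr_level() -> int:
    return int(Path("/proc/sys/kernel/randomize_va_space").read_text().strip())

print(f"ASLR level: {aslr_level()}")
```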

A lot of this is documented in my software security notes, so I won't go into more detail here.

Frameworks and Tools

Frameworks can make life easier for a pentester or cybersecurity researcher, automating away a lot of the boring manual work. I'll give a list below, with a quick explanation as to what each does:

  • Volatility: a memory forensics tool written in Python. Allows users to examine memory dumps from processes to extract program- or OS-level data.
  • Metasploit: automated exploitation and enumeration across large numbers of programs. Not personally used.
  • Burp Suite: web application enumeration and intercepting proxy, with automated crawling and vulnerability detection (Pro).
  • nmap: port enumeration for specific hosts or network ranges (a minimal connect-scan sketch follows this list).
  • dig, nslookup, whois: Unix tools to find DNS records and domain owners.
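As promised in the nmap entry above, here is a bare-bones TCP connect scan in the spirit of nmap -sT: a completed three-way handshake marks a port as open. It is far slower and noisier than nmap itself, and the loopback target is a placeholder:

```python
import socket

def connect_scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Sequentially attempt a full TCP connection to each port;
    connect_ex returns 0 when the handshake succeeds (port open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder loopback target; only scan hosts you have written permission to test.
print(connect_scan("127.0.0.1", range(1, 1025)))
```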

Conclusion

Thanks for reading! If you've got any comments or suggestions for more tools or bits I've missed, let me know in the comments below, and I'll update the post to keep it as a somewhat handy cheatsheet for anyone interviewing for a position as a pentester or cybersecurity researcher. Good luck with your interview if you've got one!