Bug Bounty Take-Aways – 02 – OFJAAAH Edition
YouTube: @OFJAAAH
Playlist: https://www.youtube.com/@OFJAAAH/videos
─

OFJAAAH is one of the go-to references and a favourite security researcher for many. The way he approaches targets, his usage of tools, and his attack mindset are an inspiration. Recon is the most critical phase, as it uncovers a larger and more unique attack surface, leading to vulnerabilities that others miss. Key takeaways include the strategic enumeration of subdomains, JavaScript files, and URL parameters as foundational steps. The methodology heavily emphasizes automation and orchestration, using powerful tools like Axiom to distribute scans across hundreds of virtual private servers (VPS) for speed and scale. Storing and managing the vast amounts of data generated is addressed through frameworks like BBRF. This blog details numerous tools, specific command-line one-liners, and practical workflows, positioning itself as a dense guide for hunters looking to evolve from basic scanning to a sophisticated, recon-driven approach.

1. The Philosophy of Reconnaissance

Reconnaissance is presented as the cornerstone of successful bug bounty hunting. The central argument is that the majority of hunters follow the same basic procedures, leading to identical, low-value findings. A more effective strategy involves deviating from the common path to discover a unique attack surface.

- Go Where Others Don't: The speaker emphasizes that when everyone is "going to the right," they will all find the same results. By "going to the left" and exploring less-common areas like JavaScript enumeration and deep parameter analysis, a hunter can find completely different and often more valuable results.
- Recon of Recon: This concept involves taking initial recon data and using it as a starting point for deeper investigation. For example, after finding subdomains, one can identify their DNS servers and then query those specific servers to find even more, potentially non-public, subdomains (see the sketch after this list).
- The Value of Subdomains: Subdomains are highlighted as extremely valuable targets. Development teams often focus their primary security efforts on the main domain (e.g., company.com), leaving subdomains (dev.api.company.com, internal.company.com) with weaker security, older software, or misconfigurations, making them prime targets.
- Automation is Essential for Scale: Manual recon is insufficient for large programs. A significant portion of the methodology is built around automating every possible step, from subdomain discovery to vulnerability scanning, often through custom scripts and orchestration tools.
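To make the "Recon of Recon" idea concrete, here is a minimal sketch of one interpretation of it (the domain, nameserver, and candidate hostname are placeholders; the exact commands are my assumption, not something shown in the videos): first identify the target's authoritative nameservers, then direct further queries at them.

# 1. Find the authoritative nameservers for the target (placeholder domain)
dig +short NS target.com

# 2. Query one of those nameservers directly for a candidate subdomain
dig @ns1.target.com dev.target.com +short

# 3. Optionally attempt a zone transfer; it is usually refused, but a misconfigured
#    server may hand back every record in the zone
dig @ns1.target.com target.com AXFR

Feeding brute-force candidates or permutations through the target's own nameservers like this can surface records that public resolvers and passive sources never expose.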
2. Core Reconnaissance Workflows

A multi-stage workflow is outlined, demonstrating how to systematically build up a comprehensive picture of the target's attack surface by chaining specialized tools together.

2.1. Subdomain Enumeration

This is the foundational step. The goal is to collect the most exhaustive list of subdomains possible by aggregating results from numerous sources.

- Passive Enumeration: Tools like Subfinder, Amass, Assetfinder, and Findomain are used to query passive sources (e.g., crt.sh, VirusTotal, DNS records).
- Project Chaos: The Chaos project is mentioned as a valuable source for pre-compiled lists of subdomains for bug bounty programs.
- Combining and Deduplicating: It is crucial to combine the output from all tools and remove duplicates. The anew tool is frequently used for this purpose.

Example Command Chain:

1. Enumerate using multiple tools:
subfinder -d target.com -o subfinder.txt
amass enum -passive -d target.com -o amass.txt
assetfinder --subs-only target.com > assetfinder.txt

2. Combine all results and create a unique list:
cat subfinder.txt amass.txt assetfinder.txt | sort -u > all_subs.txt

2.2. Resolving Live Hosts and Ports

A list of subdomains is not useful until it is determined which ones are active.

- HTTP/HTTPS Probing: HTTPX is the primary tool for this task. It quickly probes a list of domains to see which ones are hosting live web services. It can also grab status codes, page titles, and other useful metadata.
- Port Scanning: For a more thorough analysis, Naabu is used to perform fast port scans on the list of subdomains, identifying not just standard web ports (80, 443) but also other services on non-standard ports (e.g., 8080, 8000, 8443).

Example Command Chain:

1. Check for live web servers and save the results:
cat all_subs.txt | httpx -silent -o live_subs.txt

2. Scan live hosts for a custom list of common web ports:
cat live_subs.txt | naabu -p 80,443,8000,8080,8081,8443 -o open_ports.txt

2.3. URL and Endpoint Discovery

Once live hosts are identified, the next step is to discover all accessible URLs, endpoints, and parameters.

- Archival Scraping: Waybackurls and Gau are used to fetch all known URLs for a domain from historical archives like the Wayback Machine. This is extremely effective for finding old, forgotten endpoints.
- Active Crawling: Katana is used as a headless crawler to navigate the live websites and discover URLs that may not be in archives, including those generated by JavaScript. The -d (depth) flag is recommended for more comprehensive crawling.

Example Command Chain:

1. Get all historical URLs for the target:
cat live_subs.txt | gau --threads 5 > all_urls.txt

2. Crawl the live sites to find more URLs:
cat live_subs.txt | katana -d 5 > crawled_urls.txt

2.4. JavaScript Analysis

JavaScript files are described as a "gold mine" for vulnerabilities because they contain client-side logic, API endpoints, secrets, and other sensitive information.

- Extraction: Use archival tools like Gau or Waybackurls and pipe the output to grep '\.js$' to filter for JavaScript files. The tool getJS is also mentioned.
- Analysis: Jaeles and JSScanner are used to automatically scan JS files for secrets, tokens, and vulnerable patterns. Manually inspect the code, particularly using browser DevTools (Console, Network tab), to understand application logic, find hidden functions, and identify potential bypasses. The window object in the console is highlighted as a repository for global variables and functions that can reveal sensitive information.

Example Workflow:

1. Fetch all JS files:
gau target.com | grep '\.js$' | anew > js_files.txt

2. Scan for secrets:
cat js_files.txt | jaeles scan -c /path/to/js-scan-config/

2.5. Parameter Enumeration

Finding hidden or unlinked parameters is key to discovering vulnerabilities like XSS, SQLi, and IDOR.

- Passive Discovery: Tools like Paramspider are used to find parameters from historical sources.
- Active Brute-Forcing: Arjun is a powerful tool that takes a list of URLs and brute-forces them with a large wordlist of common parameter names to see which ones are accepted by the server.
- Pattern Matching with
Bug Bounty Take-Aways – 01 – NahamSec Edition
YouTube: @NahamSec
Playlist: https://www.youtube.com/playlist?list=PLKAaMVNxvLmAkqBkzFaOxqs3L66z2n8LA
─

This document synthesizes insights from a series of live bug bounty reconnaissance sessions and interviews with prominent security researchers and hackers. The core themes that emerge are the diverse and evolving nature of reconnaissance, the critical role of customized tooling and automation, and the profound value of community, collaboration, and continuous learning. Reconnaissance is presented not as a monolithic process but as a spectrum of philosophies, ranging from broad, automated discovery of attack surfaces to "reconless" deep dives into application logic. Successful practitioners tailor their approach to the target and their personal strengths, often blending large-scale data gathering with manual analysis. A vast arsenal of open-source and custom-built tools is employed, with an emphasis on chaining simple, single-purpose utilities through scripting—predominantly in Bash—to create powerful, personalized workflows.

Beyond the technical, the sources emphasize a mindset of perseverance, creativity, and intellectual curiosity. The community is depicted as a vital resource for knowledge sharing and collaboration, which is repeatedly cited as essential for finding critical vulnerabilities. Advice for newcomers centers on building a solid foundation in application security, focusing on one vulnerability class at a time, reading disclosed reports, and practicing consistently through platforms like CTFs and VDPs, rather than pursuing immediate financial gain. Ultimately, success in bug bounty hunting is portrayed as a marathon of continuous learning, adaptation, and disciplined effort, not a sprint for easy bugs.

Reconnaissance: Philosophies and Approaches

Reconnaissance (recon) is the foundational phase of bug bounty hunting, but its execution varies dramatically among practitioners. The source context reveals several distinct philosophies and methodologies.

1. Broad Attack Surface Discovery

This is the most common approach, focusing on identifying as many assets belonging to a target as possible.

- Subdomain Enumeration: The primary goal is to discover all subdomains. This is achieved through both passive and active methods.
  - Passive Sources: Tools query public data sources like certificate transparency logs (crt.sh, Cert Spotter, Censys), DNS aggregators (assetfinder, findomain, Sublist3r), and historical archives (Wayback Machine).
  - Active Methods: Once a baseline list of subdomains is established, tools like massdns and altdns are used for brute-forcing with wordlists and performing permutations to discover unlinked subdomains.
- Root Domain Discovery: A key technique involves using certificate transparency logs to find primary or root domains that are not immediately obvious. By searching for the organization's name in certificates, hunters can uncover entirely separate domains (e.g., ops.yahoo.com, bf1.yahoo.com) which can then be used as seeds for further subdomain enumeration. This "search and destroy" method significantly expands the potential scope (a small sketch of the technique follows this list).
- IP and Certificate Scanning: Advanced techniques involve scanning the entire internet or large cloud provider IP ranges for TLS certificates containing target-owned domain names. This can uncover assets that do not have public DNS records, giving the hunter access to a unique attack surface that others might miss.
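One common way to run the certificate-transparency search described above is against crt.sh. This is a hedged sketch: the organization name and domain are placeholders, jq is assumed to be installed, and crt.sh's JSON fields and rate limits can change.

# Pull certificates whose identity matches the organization name (placeholder),
# keeping the distinct common names as candidate root domains
curl -s "https://crt.sh/?q=Example+Corp&output=json" | jq -r '.[].common_name' | sort -u

# Or list every name covered by certificates for a known domain ("%25" is a
# URL-encoded "%" wildcard), as seeds for further subdomain enumeration
curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u

Any new root domains that surface can then be fed back into the passive and active subdomain tooling listed above.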
2. Deep Dive and "Reconless" Approaches

In contrast to broad discovery, this methodology focuses on deeply understanding a single application or a small set of core applications.

- Application Logic Mapping: This "reconless" or manual approach involves interacting with an application as a user (and as different user roles, like admin or low-privilege user) to map out its features, workflows, and business logic. The goal is to identify structural issues, permission flaws, and authentication vulnerabilities that automated scanners would miss.
- Reading Documentation: A frequently cited technique is to thoroughly read all available developer documentation, API guides, and tutorials for the target application or its underlying technologies. This provides a sanctioned list of endpoints, parameters, and expected behaviors that can be systematically tested.
- Source Code and JavaScript Analysis: This involves manually or automatically parsing JavaScript files to discover hidden API endpoints, routes, parameters, and developer comments. Diffs of JavaScript files over time are used to identify new and emerging functionality before it is fully released.

3. Continuous Reconnaissance

This strategy involves automating the discovery process to monitor targets over time for changes.

- Automated Monitoring: Scripts are set up to run periodically (e.g., daily or weekly) to perform subdomain enumeration and endpoint discovery. The results are compared against a known baseline to identify new assets as soon as they appear (a minimal sketch of this loop follows after this list).
- Change Detection: Tools like anychanges are used to monitor specific endpoints for modifications, which could indicate new code deployments and potential vulnerabilities.
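As an illustration of what such continuous monitoring can look like, here is a minimal sketch under assumed tooling (subfinder, anew, and httpx from the earlier sections; the domain, file names, and schedule are placeholders rather than anything prescribed in the sources):

#!/bin/bash
# Minimal continuous-recon sketch: enumerate subdomains and report only the ones
# that are not already in the baseline file (anew prints and appends new lines only).
DOMAIN="target.com"                      # placeholder target
BASELINE="known_subs_${DOMAIN}.txt"      # baseline built up by previous runs

NEW=$(subfinder -d "$DOMAIN" -silent | anew "$BASELINE")

if [ -n "$NEW" ]; then
    echo "[+] New assets discovered for $DOMAIN:"
    echo "$NEW"
    # Probe the new hosts right away so changes are triaged as soon as they appear
    echo "$NEW" | httpx -silent -title -status-code
fi
# Schedule with cron, e.g.: 0 6 * * * /path/to/monitor.sh

Swapping the probe step for notifications (mail, Slack, Discord webhooks) turns the same loop into an alerting pipeline.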
4. Information Gathering

Recon is defined as more than just finding technical assets. It extends to gathering any information that provides context about the target.

- OSINT: This includes analyzing a company's GitHub repositories for leaked credentials, internal hostnames, or sensitive code. It also involves reviewing the company's careers page to understand the technologies they use and the structure of their teams.
- Historical Analysis: Using the Wayback Machine to find old, forgotten endpoints, parameters, and JavaScript files that may still be active but are no longer linked from the main application.

Core Methodologies and Approaches

Analysis of the provided context reveals several overarching methodologies that guide the work of top-tier hackers and security researchers. These approaches, while varied in execution, share common principles of thoroughness, creativity, and efficiency.

The Foundational Role of Reconnaissance

Reconnaissance, or "recon," is universally cited as the most critical and foundational stage of any offensive security engagement. It is the process of identifying and gathering information about a target's assets. Practitioner approaches can be broadly categorized into two philosophies:

- Functionality-Driven Recon: This manual, in-depth approach prioritizes a deep understanding of a single application's features and business logic. Practitioners like Farah Hawa and Rhynorater champion this method, which involves meticulously mapping every function, taking extensive notes in platforms like Notion, and downloading all associated JavaScript files for analysis. The goal is to discover logical flaws, access control issues, and vulnerabilities that automated scanners typically miss.
- Asset-Driven Recon: This large-scale, automated approach focuses on discovering the entirety of a target's external footprint, including subdomains, IP ranges, and related corporate entities. This is the philosophy behind tools like Axiom and the workflows of experts like Dan Miessler and codingo_. The process involves using multiple data sources (e.g., SecurityTrails, Rapid7 FDNS, reverse whois lookups) and chaining tools
AppSec All-in-One – All About JWT and its Attacks
Hello all !!! Here is the first write-up in the "AppSec All-in-One" series. As I said, I will take one attack vector at a time and go deeper into it by explaining each layer of that attack. Now, I am going to discuss "All About JWT and its Attacks".

Heads-up: This blog is completely theoretical, which makes it easy to skim and review the concepts (it acts as a revision book for you). A practical walkthrough of all these JWT attacks will be demonstrated on the DeDefence YouTube channel. Stay tuned and make sure to subscribe.

Why Read This Blog: Because it covers everything from fundamentals to attacks, approaches, and mitigation. This blog will be very handy for:

- Preparing for AppSec interviews (it is a common topic that is asked in almost every interview)
- Security Professionals
- Bug Bounty Hunters
- Developers
- QA

First Things First:

JSON Web Tokens (JWTs) have become a cornerstone of modern web applications and APIs, serving as a standardized, self-contained method for securely transmitting information like user authentication and authorization claims. Unlike traditional session tokens, JWTs store data on the client side, which simplifies architecture in distributed systems. However, this convenience introduces a unique and critical attack surface. The security of any system using JWTs is fundamentally dependent on the cryptographic integrity of the token's signature.

This blog synthesizes extensive research on JWT security, revealing that the most severe vulnerabilities stem from improper implementation and flawed signature validation. Attackers can exploit these weaknesses to bypass authentication, escalate privileges, and gain complete control over user accounts. Common attack vectors include:

- Stripping the signature by manipulating the header algorithm to none
- Cracking weak, guessable secret keys used for signing
- Tricking servers into using the wrong algorithm (an "algorithm confusion" attack) to validate a forged token

Further risks arise from injecting malicious parameters into the token header to control the key verification process, potentially leading to path traversal, SQL injection, and the use of attacker-controlled keys.

Mitigation hinges on a simple principle: never trust user-controllable input within the token before its signature is rigorously verified. Security best practices mandate the use of strong, high-entropy secret keys, robust cryptographic algorithms, and strict server-side validation that enforces a specific, expected algorithm. By adhering to secure implementation guidelines—including using up-to-date libraries, setting short token expiration times, avoiding sensitive data in the payload, and storing tokens securely—organizations can leverage the power of JWTs without succumbing to their considerable risks.

The Anatomy of a JSON Web Token (JWT)

At its core, a JSON Web Token (also known as a "jot") is a compact, URL-safe standard (RFC 7519) for creating data with an optional signature and encryption. Its payload holds JSON that asserts a number of "claims." Because it is self-contained, all the information needed to verify the user is inside the token itself, reducing the need for server-side session storage.
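Before breaking the token down part by part in the next section, here is a minimal sketch of how one is assembled by hand. It assumes a Unix shell with openssl and base64 utilities available; the secret "my-secret" and the claim values are placeholders, not anything from a real system.

#!/bin/bash
# Build a toy HS256-signed JWT to show the three Base64Url-encoded parts.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }   # Base64Url, no padding

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"sub":"1234567890","name":"John Doe","isAdmin":false}' | b64url)

# The signature is an HMAC-SHA256 over "header.payload" using the shared secret.
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac 'my-secret' -binary | b64url)

echo "${header}.${payload}.${signature}"

Pasting the output into a decoder such as jwt.io shows the same header and payload JSON again, because those two parts are only encoded, not encrypted; only the third part depends on the secret.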
The Three Parts of a JWT

A JWT is composed of three distinct parts, separated by dots (.), each of which is Base64Url encoded:

HEADER.PAYLOAD.SIGNATURE

Let's examine a typical token:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaXNBZG1pbiI6ZmFsc2V9.EypViEDiJhjeuXgjtGdibxrFPFZyYKn-KqFeAw3c2No

About Header: The header provides metadata about the token, primarily the signing algorithm used and the token type.

- alg: Specifies the cryptographic algorithm used to sign the token, such as HS256 (HMAC with SHA-256) or RS256 (RSA with SHA-256). This field is the source of several critical vulnerabilities.
- typ: Declares the token type, which is almost always JWT.

About Payload: The payload contains the claims, which are statements about an entity (typically the user) and additional data. The data is structured as a JSON object. The decoded payload commonly consists of:

- Registered Claims: Predefined claims like iss (issuer), sub (subject), aud (audience), exp (expiration time), and iat (issued at).
- Public Claims: Custom claims defined to avoid collisions, usually registered in the IANA JSON Web Token Claims registry.
- Private Claims: Custom claims created for information sharing between parties.

Titbits: The header and payload are only Base64Url encoded, not encrypted. Also keep in mind that they are Base64Url encoded/decoded, not plain Base64 encoded/decoded. This means anyone who intercepts the token can easily decode and read its contents. Therefore, sensitive information like passwords, credit card numbers, or social security numbers should never be stored in a JWT payload.

About Signature: The signature is the cryptographic component that guarantees the token's integrity. It is created by signing the encoded header and payload with a secret or private key, using the algorithm specified in the header. If an attacker modifies the header or payload, the signature will no longer match when the server re-calculates it, thus invalidating the token—assuming the signature is properly verified.

JWS vs. JWE: Signing vs. Encrypting

The term JWT is often used interchangeably with JSON Web Signature (JWS), which is the most common implementation.

- JWS (JSON Web Signature): The token is signed to ensure data integrity, but the payload is readable. It proves that the data has not been tampered with.
- JWE (JSON Web Encryption): The token's payload is encrypted, providing confidentiality. The content is hidden from parties who do not possess the decryption key.

This blog primarily focuses on JWS, as it is more widely used and is the source of the most common JWT vulnerabilities.

Titbits: JWT tokens can be implemented "path-wise" within the same domain. For example, JWT token 1 can be used on domain.com/profile and JWT token 2 can be used on domain.com/cart; here two different JWT tokens serve two different paths on a single domain. So keep a keen eye on how JWT tokens are implemented across all the paths of the target domain.

The Attacker's Playbook: Common JWT Vulnerabilities and Exploits

The security of JWTs is brittle; a single implementation flaw can lead to a complete authentication and authorization bypass. The following sections detail the most prevalent JWT attack vectors. Here you go !!!

Attack 01: JWT Token Sent in GET Method

Vulnerability: Sending a JWT as a query parameter in a URL (e.g., example.com/api/data?token=eyJ…) instead of in a secure Authorization header. URLs are frequently logged by browsers, proxies, and web servers.

Relatable Example: It's like writing your house key code on the outside of the