Firefly v1.1.0: A smart black-box fuzzer for testing web applications
September 14, 2023
Firefly v1.1.0 introduces new features and improvements to the tool. The new version can perform more complex analysis of each response collected from the target. When analysing differences between responses, Firefly now handles dynamic content better by combining mathematical algorithms with randomness-detection techniques. This makes it possible to perform more accurate scans against the target and reveal new, unknown behaviours.
Firefly v1.1.0 workflow
Firefly v1.1.0 still provides standard resources, such as dictionaries containing payloads and patterns/errors used to detect trace errors or indications of backend behaviour. What makes Firefly unique is its focus on a high level of customisation, with the goal of making the tool unique to each individual user. This is done by giving the user more control over the various processes Firefly provides, such as response filters, wordlists, pattern/error detection and extensive payload modification. The wide range of payload customisations makes it possible to adapt any wordlist in a unique way.
In the Firefly v1.0 workflow, the tool used a single handler to perform both requests and scans. This worked well as long as the target didn't contain too much dynamic content. However, using only one handler had a major drawback: if too many responses arrived at the same time while the scanner engine was still busy with previous scans, a queue of tasks built up. When the queue was full, the engine had to impose a limit, which delayed the request process. This meant that the request process had to wait for the scanner before it could continue.
Firefly v1.1.0 solves this problem with a new code structure built around two handlers and two listeners. The request handler builds a pool of workers to manage the request and response process, while the scanner listener intercepts the responses and starts performing the scans. The different scanning techniques were redesigned to handle all tasks with high performance.
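The separation described above can be illustrated with a minimal Go sketch. This is not Firefly's actual code: the names (`runPipeline`, `Response`) and the channel wiring are assumptions used to show the idea that a pool of request workers and a scanner listener run independently, so requesting never waits on scanning.

```go
package main

import (
	"fmt"
	"sync"
)

// Response is a minimal stand-in for the data collected per request.
type Response struct {
	ID   int
	Body string
}

// runPipeline wires a request worker pool to an independent scanner
// listener and returns how many responses were scanned.
func runPipeline(requests, workers int) int {
	jobs := make(chan int)
	responses := make(chan Response)
	results := make(chan string)

	// Request handler: a pool of workers performs the requests.
	go func() {
		var wg sync.WaitGroup
		for i := 0; i < workers; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				for id := range jobs {
					// A real worker would send an HTTP request here.
					responses <- Response{ID: id, Body: fmt.Sprintf("body-%d", id)}
				}
			}()
		}
		wg.Wait()
		close(responses)
	}()

	// Scanner listener: intercepts responses without blocking the workers.
	go func() {
		for resp := range responses {
			results <- "scanned " + resp.Body
		}
		close(results)
	}()

	// Feed the jobs from a separate goroutine so draining the results
	// below never deadlocks the pipeline.
	go func() {
		for id := 1; id <= requests; id++ {
			jobs <- id
		}
		close(jobs)
	}()

	scanned := 0
	for range results {
		scanned++
	}
	return scanned
}

func main() {
	fmt.Println(runPipeline(4, 2)) // prints 4
}
```

Because the scanner consumes from its own channel, a slow scan only backs up the response channel rather than stalling the request loop itself, which is the bottleneck the v1.0 single-handler design suffered from.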
The target knowledge is obtained from the verified responses that Firefly collects during a verification process. This process sends a series of identical requests to simulate a normal user. The responses are then stored in memory and used by the attack process to discover new potential behaviours.
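One way to picture the verification step is to keep only the response content that stays identical across the repeated requests, so that dynamic content (timestamps, CSRF tokens) is excluded from the baseline. The sketch below is an assumption of how such a baseline could be derived, not Firefly's actual implementation; `stableWords` and its word-level granularity are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// stableWords keeps only the words present in every verified response,
// filtering out dynamic content before the attack phase compares
// against the baseline.
func stableWords(bodies []string) map[string]bool {
	counts := map[string]int{}
	for _, body := range bodies {
		seen := map[string]bool{}
		for _, w := range strings.Fields(body) {
			if !seen[w] {
				seen[w] = true
				counts[w]++
			}
		}
	}
	stable := map[string]bool{}
	for w, c := range counts {
		if c == len(bodies) { // appeared in every response
			stable[w] = true
		}
	}
	return stable
}

func main() {
	baseline := stableWords([]string{
		"welcome user token=abc123",
		"welcome user token=xyz789",
	})
	fmt.Println(baseline["welcome"], baseline["token=abc123"]) // true false
}
```

With a baseline like this, a word that appears only during the attack phase is a genuine difference rather than ordinary page noise.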
Error and pattern discovery
To improve performance when searching for patterns in the responses, the wordlist containing all the patterns is split into several groups based on the common pattern prefix. Each pattern prefix then has its own dictionary of all patterns that share that prefix. This makes it possible to search only for these prefixes in the responses. Once a prefix has been identified in a response, the full dictionary related to the prefix is used.
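The prefix-grouping idea can be sketched in a few lines of Go. The grouping width (`prefixLen = 4`) and the function names are assumptions for illustration; Firefly's real values and data structures may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// prefixLen is an assumed grouping width, not Firefly's actual value.
const prefixLen = 4

// groupByPrefix builds one dictionary per common pattern prefix.
func groupByPrefix(patterns []string) map[string][]string {
	groups := map[string][]string{}
	for _, p := range patterns {
		key := p
		if len(key) > prefixLen {
			key = key[:prefixLen]
		}
		groups[key] = append(groups[key], p)
	}
	return groups
}

// findPatterns first searches only for the short prefixes; a group's
// full dictionary is consulted only after its prefix matches.
func findPatterns(body string, groups map[string][]string) []string {
	var hits []string
	for prefix, full := range groups {
		if !strings.Contains(body, prefix) {
			continue // cheap check rules out the whole group
		}
		for _, p := range full {
			if strings.Contains(body, p) {
				hits = append(hits, p)
			}
		}
	}
	return hits
}

func main() {
	groups := groupByPrefix([]string{"SQL syntax error", "SQL error near", "Fatal error"})
	fmt.Println(findPatterns("Warning: SQL syntax error at line 1", groups))
}
```

For a large pattern wordlist, most groups are eliminated by a single short substring check instead of matching every full pattern against every response.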
Collecting headers, text, words, tags, attributes and attribute values can be a heavy process when it has to stay in balance with the request process. This was improved by splitting the work into separate tasks and letting each task's results shorten the next data-collection task. This significantly improved the speed: it is now possible to collect data from a response containing a large amount of data and share it with the scanning processes without any throttling.
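One reading of "each task's results shorten the next task" is a staged extraction: find the tags first, then search for attributes only inside the tag text already found, with attribute values falling out of that same step. The sketch below is a simplified interpretation under that assumption, using naive regular expressions rather than whatever parsing Firefly actually does.

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	tagRe  = regexp.MustCompile(`<([a-zA-Z]+)([^>]*)>`)
	attrRe = regexp.MustCompile(`([a-zA-Z-]+)="([^"]*)"`)
)

// extract collects tags, attributes and attribute values in stages:
// the attribute stage scans only the tag text found by the tag stage,
// and values come out of the attribute stage for free, so the full
// body is scanned exactly once.
func extract(body string) (tags, attrs, values []string) {
	for _, m := range tagRe.FindAllStringSubmatch(body, -1) {
		tags = append(tags, m[1])
		for _, a := range attrRe.FindAllStringSubmatch(m[2], -1) {
			attrs = append(attrs, a[1])
			values = append(values, a[2])
		}
	}
	return
}

func main() {
	tags, attrs, values := extract(`<div class="menu"><a href="/home">Home</a></div>`)
	fmt.Println(tags, attrs, values)
}
```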
This process analyses payloads that were sent and reflected in the response, searching for transformations by comparing the payload that was sent with the reflected version the target returned in the response. The transformation configuration is read from a YAML file that can be customised by the user.
Example of the default YAML file containing the transformation configuration:

```yaml
#+----------------------------------------------------+
#| [README]                                           |
#|                                                    |
#| Variable    = Expected payload transformation      |
#| Payload     = First item in the list               |
#| Description = Second item in the list              |
#|                                                    |
#| !Note : The list MUST contain the payload + desc   |
#+----------------------------------------------------+
#Example: any first payload in the list should be equal to "Q":
"Q":
  - ["&#x51;", "html"]
  - ["\x51", "hex"]
  - ["\u0051", "unicode"]
  - ["U+0051", "unicode"]
  - ["%51", "url"]
  - ["%2551", "double-url"]
```
As described by the "README" in the YAML file, the variable Q is the expected transformation that the target should produce from our payload. The payloads we send are the first items in each list. The second value in each list is a description/keyword for the transformation. This configuration makes it easy to add custom payloads to the scanning process.
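The detection logic this configuration drives can be sketched as follows. This is a minimal Go interpretation, not Firefly's code: the `detect` function, its heuristic (expected character present, raw payload absent), and the hard-coded map standing in for the parsed YAML are all assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// transforms mirrors the YAML above: the expected character maps to
// [payload, description] pairs that should decode to it.
var transforms = map[string][][2]string{
	"Q": {
		{"&#x51;", "html"},
		{`\x51`, "hex"},
		{`\u0051`, "unicode"},
		{"U+0051", "unicode"},
		{"%51", "url"},
		{"%2551", "double-url"},
	},
}

// detect compares the payload that was sent with its reflected version:
// if the expected character shows up in place of the raw payload, the
// target performed that transformation.
func detect(sentPayload, reflected string) (string, bool) {
	for expected, pairs := range transforms {
		for _, p := range pairs {
			if p[0] != sentPayload {
				continue
			}
			if strings.Contains(reflected, expected) && !strings.Contains(reflected, sentPayload) {
				return p[1], true
			}
		}
	}
	return "", false
}

func main() {
	desc, ok := detect("%51", `<input value="Q">`)
	fmt.Println(desc, ok) // url true
}
```

A hit like "url" tells the tester the backend URL-decodes input before reflecting it, which narrows down which encoded payloads are worth sending next.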
The improvements to response-data collection have made it possible to better analyse the responses and detect differences. Firefly now takes the intercepted responses, with all their data, directly from a map stored in memory and compares them with the newly intercepted response.
The technique used to compare responses is based on a map stored in memory. The response data is divided into different parts, each stored as a key in the map; the value stored with the key indicates how many times that piece of data was repeated in the response. This allows the scanning process to compare the map keys (data) and the values stored in them, avoiding all the unnecessary work of comparing data that has already been compared.
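The key-to-count map can be sketched directly in Go. The granularity here (whitespace-separated words) and the function names are assumptions for illustration; the idea is simply that two frequency maps are compared instead of the raw response bodies.

```go
package main

import (
	"fmt"
	"strings"
)

// freqMap divides response data into parts (words, for simplicity) and
// stores each part as a key with its repetition count as the value.
func freqMap(body string) map[string]int {
	m := map[string]int{}
	for _, w := range strings.Fields(body) {
		m[w]++
	}
	return m
}

// diff returns only the parts whose counts changed between the verified
// baseline response and a newly intercepted one.
func diff(baseline, current map[string]int) map[string]int {
	d := map[string]int{}
	for k, v := range current {
		if baseline[k] != v {
			d[k] = v - baseline[k]
		}
	}
	for k, v := range baseline {
		if _, ok := current[k]; !ok {
			d[k] = -v // part disappeared entirely
		}
	}
	return d
}

func main() {
	base := freqMap("error error login form")
	cur := freqMap("error login form debug")
	fmt.Println(diff(base, cur)) // map[debug:1 error:-1]
}
```

Identical parts collapse into a single key lookup, so content shared by both responses is never compared byte by byte again.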
If you want a deeper understanding of how black-box testing works and which techniques are used, we have written the blog post "Web Application Black-Box Testing", which explains the different techniques in more detail.