
Command line user interface


Version 1.0

There are a lot of options, customizations and tweaks you can use, but fear not and don't let yourself be overwhelmed. This guide will walk you through each and every one of them and teach you how to use them in order to make your scans as efficient as possible.

If you intend to scan big and complex sites it's best that you read through this guide and evaluate all available options.

Quickstart

Help

In order to see everything Arachni has to offer execute:

arachni -h

Examples

You can simply run Arachni like so:

arachni http://test.com

which will load all checks, the plugins under /plugins/defaults and audit all forms, links and cookies.

In the following example, all checks will be run against http://test.com, auditing links/forms/cookies and following subdomains while also printing verbose messages.

The results of the scan will be saved in the file test.com.afr.

arachni --verbose --scope-include-subdomains http://test.com --report-save-path=test.com.afr

The Arachni Framework Report (.afr) file can later be used to create reports in several formats, like so:

arachni_reporter test.com.afr --report=html:outfile=my_report.html

To see all available reporter components run:

arachni_reporter --reporters-list

You can make check loading easier by using wildcards (*) and exclusions (-).

To load all xss checks using a wildcard:

arachni http://example.net --checks=xss*

To load all active checks using a wildcard:

arachni http://example.net --checks=active/*

To exclude only the csrf check:

arachni http://example.net --checks=*,-csrf

Or you can mix and match; to run everything but the xss checks:

arachni http://example.net --checks=*,-xss*

More resources

For more resources you can consult the articles in the knowledge base.

Command reference

-h, --help

Outputs the Command Line Interface help output.

--version

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Outputs the Arachni banner and version information.

--authorized-by

Expects: string

Default: disabled

Multiple invocations?: no

The string passed to this option will be used as the value for the From HTTP request header field. The option value should be the e-mail address of the person who authorized the scan.

--verbose

Expects: <n/a>

Default: disabled

Multiple invocations?: no

When verbose messages are enabled, Arachni will give you detailed information about what's going on during the whole process.

Let's give this a try:

arachni --audit-forms --checks=xss http://testfire.net/ --scope-page-limit=1

This will load the XSS checks and audit all the forms in http://testfire.net/.

Verbose mode disabled

Observe that there's no --verbose flag in the previous run.

Don't worry about the rest of the parameters right now.

Quick note:

Arachni's output messages are classified into several categories, each of them prefixed with a different colored symbol:

  • [*] are status messages.
  • [~] are informational messages.
  • [+] are success messages.
  • [v] are verbose messages.
  • [!] are debug messages.
  • [-] are error messages.

I won't bother with coloring during the examples.

Arachni - Web Application Security Scanner Framework v1.0
   Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


 [*] Initializing...
 [*] Waiting for plugins to settle...
 [*] BrowserCluster: Initializing 6 browsers...
 [*] BrowserCluster: Initialization completed with 6 browsers in the pool.

 [*] [HTTP: 200] http://testfire.net/
 [~] Identified as: windows, iis, asp, aspx
 [~] Analysis resulted in 0 usable paths.
 [~] DOM depth: 0 (Limit: 10)
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [*] XSS: Submitting form with original values for txtSearch at 'http://testfire.net/search.aspx'.
 [*] XSS: Submitting form with sample values for txtSearch at 'http://testfire.net/search.aspx'.
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [*] Harvesting HTTP responses...
 [~] Depending on server responsiveness and network conditions this may take a while.
 [*] XSS: Analyzing response #2...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [*] XSS: Analyzing response #3...
 [*] XSS: Analyzing response #4...
 [*] XSS: Analyzing response #5...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [*] XSS: Analyzing response #6...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx

Verbose mode enabled

Observe the extra information in this run.

[v] messages are verbose messages.

$ arachni --audit-forms --checks=xss http://testfire.net/ --scope-page-limit=1 --verbose
Arachni - Web Application Security Scanner Framework v1.0
   Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


 [*] Initializing...
 [*] Waiting for plugins to settle...
 [*] BrowserCluster: Initializing 6 browsers...
 [*] BrowserCluster: Initialization completed with 6 browsers in the pool.

 [*] [HTTP: 200] http://testfire.net/
 [~] Identified as: windows, iis, asp, aspx
 [~] Analysis resulted in 0 usable paths.
 [~] DOM depth: 0 (Limit: 10)
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [v] XSS: --> With: "<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>" -> "arachni_text<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>"
 [*] XSS: Submitting form with original values for txtSearch at 'http://testfire.net/search.aspx'.
 [v] XSS: --> With: nil -> ""
 [*] XSS: Submitting form with sample values for txtSearch at 'http://testfire.net/search.aspx'.
 [v] XSS: --> With: nil -> ""
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [v] XSS: --> With: "()\"&%1'-;<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>'" -> "arachni_text()\"&%1'-;<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>'"
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [v] XSS: --> With: "--><some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/><!--" -> "arachni_text--><some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/><!--"
 [*] Harvesting HTTP responses...
 [~] Depending on server responsiveness and network conditions this may take a while.
 [*] XSS: Analyzing response #2...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [v] XSS: Injected:  "arachni_text<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>"
 [v] XSS: Proof:     <some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>
 [v] XSS: Request:
GET /search.aspx?txtSearch=arachni_text%3Csome_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714%2F%3E HTTP/1.1
Host: testfire.net
Accept-Encoding: gzip, deflate
User-Agent: Arachni/v1.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: ASP.NET_SessionId=e4h4wy45jmb5vkrg0wl1rj45;amSessionId=15420499882


 [*] XSS: Analyzing response #3...
 [*] XSS: Analyzing response #4...
 [*] XSS: Analyzing response #6...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [v] XSS: Injected:  "arachni_text--><some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/><!--"
 [v] XSS: Proof:     <some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>
 [v] XSS: Request:
GET /search.aspx?txtSearch=arachni_text--%3E%3Csome_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714%2F%3E%3C%21-- HTTP/1.1
Host: testfire.net
Accept-Encoding: gzip, deflate
User-Agent: Arachni/v1.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: ASP.NET_SessionId=e4h4wy45jmb5vkrg0wl1rj45;amSessionId=15420499882


 [*] XSS: Analyzing response #5...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [v] XSS: Injected:  "arachni_text()\"&%1'-;<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>'"
 [v] XSS: Proof:     <some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>
 [v] XSS: Request:
GET /search.aspx?txtSearch=arachni_text%28%29%22%26%251%27-%3B%3Csome_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714%2F%3E%27 HTTP/1.1
Host: testfire.net
Accept-Encoding: gzip, deflate
User-Agent: Arachni/v1.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: ASP.NET_SessionId=e4h4wy45jmb5vkrg0wl1rj45;amSessionId=15420499882

--debug

Expects: integer

Default: 1

Multiple invocations?: no

When this flag is enabled the system will output a lot of messages detailing what's happening internally. The level/detail of the messages can be specified in the form of an integer between 1 and 3.

If you don't want to be flooded by obscure messages, you can redirect the debugging output (written to stderr) to a separate file when running Arachni:

arachni http://example.com --debug 2> debug.log

--only-positives

Expects: <n/a>

Default: disabled

Multiple invocations?: no

This will suppress all messages except for the ones denoting success -- usually regarding the discovery of some issue.

  • pattern refers to a valid Ruby regular expression, written without the enclosing / delimiters.

--scope-include-pattern

Expects: pattern

Default: disabled

Multiple invocations?: yes

Restricts the scope of the scan to resources whose URL matches the pattern.
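
For example, to restrict the scan to the blog and news sections of a site (the URL and patterns below are only illustrative), you could pass the option twice:

arachni http://example.com --scope-include-pattern=blog --scope-include-pattern=news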

--scope-include-subdomains

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Allows the system to include subdomains in the scan.

--scope-exclude-pattern

Expects: pattern

Default: disabled

Multiple invocations?: yes

Excludes resources whose URL matches the pattern.
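
For instance, to keep the crawler away from logout and administration pages (the patterns are illustrative):

arachni http://example.com --scope-exclude-pattern=logout --scope-exclude-pattern=admin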

--scope-exclude-content-pattern

Expects: pattern

Default: disabled

Multiple invocations?: yes

Excludes pages whose content matches the pattern.

--scope-exclude-binaries

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Excludes pages with binary content.

Note: Binary content can confuse passive checks that perform pattern matching.

--scope-redundant-path-pattern

Expects: pattern:integer

Default: disabled

Multiple invocations?: yes

This option expects a pattern and a counter, like so: --scope-redundant-path-pattern='calendar.php:3'

This will cause URLs that contain calendar.php to be crawled only 3 times.

This option is useful when scanning websites that have a lot of redundant pages like a photo gallery or a dynamically generated calendar.
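
Putting it together as a full command (the target URL is illustrative):

arachni http://example.com --scope-redundant-path-pattern='calendar.php:3'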

--scope-auto-redundant

Expects: integer

Default: disabled (with a value of 10 if none has been specified)

Multiple invocations?: no

This option limits how many resources whose URLs have identical parameters should be included in the scan.

This can prevent infinite loops caused by pages like photo galleries or catalogues.

With --scope-auto-redundant=2 and given the following list of URLs:

http://test.com/?stuff=1
http://test.com/?stuff=2
http://test.com/?stuff=other-stuff
http://test.com/?stuff=blah
http://test.com/?stuff=blah&stuff2=1
http://test.com/?stuff=blah&stuff2=2
http://test.com/?stuff=blah2&stuff2=bloo
http://test.com/path.php?stuff=blah&stuff2=1

Only the following will be included:

http://test.com/?stuff=1
http://test.com/?stuff=2
http://test.com/?stuff=blah&stuff2=1
http://test.com/?stuff=blah&stuff2=2
http://test.com/path.php?stuff=blah&stuff2=1

--scope-directory-depth-limit

Expects: integer

Default: infinite

Multiple invocations?: no

This option limits how deep into the site structure the scan should go.

--scope-page-limit

Expects: integer

Default: infinite

Multiple invocations?: no

This option limits how many pages should be included in the scan.

--scope-extend-paths

Expects: filepath

Default: disabled

Multiple invocations?: yes

Allows you to extend the scope of the scan by seeding the system with the paths contained within the given file.

Note: The file must contain one path per line.
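
A minimal sketch, assuming the extra paths live in a file named extra-paths.txt (the filename and URL are illustrative):

arachni http://example.com --scope-extend-paths=extra-paths.txt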

--scope-restrict-paths

Expects: filepath

Default: disabled

Multiple invocations?: yes

Uses the paths contained within the given file instead of performing a crawl.

Note: The file must contain one path per line.

--scope-url-rewrite

Expects: pattern:substitution

Default: disabled

Multiple invocations?: yes

This option expects a pattern and a substitution, like so: --scope-url-rewrite='/articles\/[\w-]+\/(\d+)/:articles.php?id=\1'

The above will rewrite the URL http://test.com/articles/some-stuff/23 as http://test.com/articles.php?id=23.
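
As a full command (the target URL is illustrative):

arachni http://test.com --scope-url-rewrite='/articles\/[\w-]+\/(\d+)/:articles.php?id=\1'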

--scope-dom-depth-limit

Expects: integer

Default: 10

Multiple invocations?: no

This option limits how deep into each page's DOM structure the scan should go.

Note: DOM levels are counted as stacked interactions with the page's interface.

--scope-https-only

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Forces the system to only follow HTTPS URLs.

Note: The target URL must be an HTTPS one as well.

--audit-links

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Enable auditing of links.

--audit-forms

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Enable auditing of forms.

--audit-cookies

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Enable auditing of cookies.

--audit-cookies-extensively

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled the system will submit all links and forms of the page along with the cookie permutations.

Warning: Will severely increase the scan-time.

--audit-headers

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Audit HTTP request headers.

Note: Header audits use brute force. Almost all valid HTTP request headers will be audited even if there's no indication that the web app uses them.

Warning: Enabling this option will significantly increase the number of requests, possibly by an order of magnitude.

--audit-link-template

Expects: pattern

Default: disabled

Multiple invocations?: yes

This option allows you to extract and audit inputs from generic paths, based on a template specified as a Ruby regular expression with named groups.

To extract the input1 and input2 inputs from: http://test.com/input1/value1/input2/value2

Use: /input1\/(?<input1>\w+)\/input2\/(?<input2>\w+)/
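
A complete invocation using that template might look like this (the target URL is illustrative):

arachni http://test.com --audit-link-template='/input1\/(?<input1>\w+)\/input2\/(?<input2>\w+)/'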

--audit-with-both-methods

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, the system will submit all elements using both GET and POST HTTP request methods.

Warning: Will severely increase the scan-time.

--audit-exclude-vector

Expects: pattern

Default: disabled

Multiple invocations?: yes

Don't audit input vectors whose name matches the pattern.

--audit-include-vector

Expects: pattern

Default: disabled

Multiple invocations?: yes

Only audit input vectors whose name matches the pattern.
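
For example, to audit only inputs whose name contains id (the pattern and URL are illustrative):

arachni http://example.com --audit-include-vector=id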

--http-user-agent

Expects: string

Default: "Arachni/<version>"

Multiple invocations?: no

Specify a value for the User-Agent request header field.

--http-request-concurrency

Expects: integer

Default: 20

Multiple invocations?: no

Sets the maximum number of requests that can be active at any given time; this usually translates directly to the number of open connections.

Note: If your scan seems unresponsive, try lowering the limit to ease the load on the server.

Warning: Given enough bandwidth and a high enough concurrency setting, the scan could cause a DoS. Be careful not to set this option too high -- don't kill your server.

--http-request-timeout

Expects: integer (milliseconds)

Default: 50000

Multiple invocations?: no

Limits how long the client should wait for a response from the server.

--http-request-redirect-limit

Expects: integer

Default: 5

Multiple invocations?: no

Limits the amount of redirects the client should follow for each request.

--http-request-queue-size

Expects: integer

Default: 500

Multiple invocations?: no

Maximum amount of requests to keep in the client queue.

Note: More means better scheduling and better performance, less means less RAM consumption.

--http-request-header

Expects: string

Default: disabled

Multiple invocations?: yes

Allows you to specify custom request headers in the form of key-value pairs.

--http-request-header='field_name=field value'

--http-response-max-size

Expects: integer

Default: infinite

Multiple invocations?: no

Limits the size of response bodies the client accepts. Essentially, the client will not download bodies of responses which have a Content-Length larger than the specified value.
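
For example, to skip response bodies larger than roughly 500KB (the limit and URL are illustrative):

arachni http://example.com --http-response-max-size=500000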

--http-cookie-jar

Expects: filepath

Default: disabled

Multiple invocations?: no

Arachni allows you to pass your own cookies in the form of a Netscape cookie-jar file. If you want to audit restricted parts of a website that are accessible only to logged in users you should pass the session cookies to Arachni.

There are a number of ways to do that; I've found that Firebug's cookie-export feature works best.

Note: If you don't feel comfortable setting your own cookie-jar, you can use the proxy or autologin plugins to login to the web application.
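
A minimal sketch, assuming the exported cookies live in a file named cookies.txt (the filename and URL are illustrative):

arachni http://example.com --http-cookie-jar=cookies.txt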

--http-cookie-string

Expects: string

Default: disabled

Multiple invocations?: no

Cookies, as a string, to be sent to the web application.

--http-cookie-string='userid=19;sessionid=deadbeefbabe'

--http-authentication-username

Expects: string

Default: disabled

Multiple invocations?: no

Username to use for HTTP authentication.

--http-authentication-password

Expects: string

Default: disabled

Multiple invocations?: no

Password to use for HTTP authentication.
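
The username and password options are normally used together, e.g. (the credentials and URL are illustrative):

arachni http://example.com --http-authentication-username=admin --http-authentication-password=secret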

--http-proxy

Expects: server:port

Default: disabled

Multiple invocations?: no

Sets a proxy server for the client.

--http-proxy-authentication

Expects: username:password

Default: disabled

Multiple invocations?: no

Sets authentication credentials for the specified proxy server.

--http-proxy-type

Expects: http, http_1_0, socks4, socks5, socks4a

Default: auto

Multiple invocations?: no

Sets the protocol for the specified proxy server.
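
The proxy options are typically combined, for instance (the address, credentials and type are illustrative):

arachni http://example.com --http-proxy=127.0.0.1:8080 --http-proxy-authentication=user:pass --http-proxy-type=socks5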

--input-value

Expects: pattern:value

Default: disabled

Multiple invocations?: yes

Sets a value for inputs whose name matches the pattern.
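
For example, to always submit john for inputs whose name matches user (the pattern, value and URL are illustrative):

arachni http://example.com --input-value='user:john'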

--input-values-file

Expects: filepath

Default: disabled

Multiple invocations?: no

YAML file containing a Hash object whose keys are regular expressions to match against input names and whose values are the corresponding input values.
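
A rough sketch of such a file and its use (the filename, patterns and values are all illustrative):

cat > values.yml <<'EOF'
user: john
email: john@example.com
EOF
arachni http://example.com --input-values-file=values.yml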

--input-without-defaults

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, system default values won't be used.

--input-force

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Forces the system to fill-in even non-empty inputs.

--checks-list

Expects: pattern

Default: disabled

Multiple invocations?: yes

Lists all available checks.

If an argument has been provided, it will be treated as a pattern and used to filter the displayed checks.
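
For example, to list only the XSS-related checks (the pattern is illustrative):

arachni --checks-list xss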

--checks

Expects: string,string

Default: * (all)

Multiple invocations?: no

Loads the given checks, by name.

You can specify the checks to load as comma separated values (without spaces) or * to load all. You can prevent checks from being loaded by prefixing their name with a dash (-).

Note: Checks are referenced by their filename without the .rb extension, use --checks-list to see all.

As CSV:

arachni --checks=xss,sqli,path_traversal http://example.com/

All:

arachni http://example.com/

Excluding checks:

arachni --checks=*,-backup_files,-xss http://example.com/

The above will load all checks except for the backup_files and xss ones.

--plugins-list

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Lists all available plugins.

--plugin

Expects: string

Default: disabled

Multiple invocations?: yes

Loads a plugin by name and configures it with the given options.

Note: Plugins are referenced by their filename without the .rb extension, use --plugins-list to see all.

Excluding the logout URL and running the autologin plugin to automatically log in to a web application:

arachni http://testfire.net --scope-page-limit=1 --checks=xss \
    --plugin=autologin:url=http://testfire.net/bank/login.aspx,parameters='uid=jsmith&passw=Demo1234',check='Sign Off|MY ACCOUNT' \
    --scope-exclude-pattern logout

--platforms-list

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Lists all available platforms.

--platforms-no-fingerprinting

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Disables platform fingerprinting and results in all audit payloads being sent to the webapp.

--platforms

Expects: string,string,...

Default: auto

Multiple invocations?: no

Explicitly sets the platforms for the remote web application. You can use this to help the system be more efficient in its scan.
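
For example, if you already know the target is a Windows/IIS application running ASP.NET (the platform shortnames below are the ones shown in the fingerprinting output earlier on this page; the URL is illustrative):

arachni http://example.com --platforms=windows,iis,asp,aspx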

--login-check-url

Expects: string

Default: disabled

Multiple invocations?: no

Requires: login-check-pattern

The URL passed to this option will be used to verify that the system is still logged in to the web application.

If the HTTP response body of this URL matches the login-check-pattern, the system is considered to be logged in.

--login-check-pattern

Expects: string

Default: disabled

Multiple invocations?: no

Requires: login-check-url

A pattern used against the body of the login-check-url to verify that the system is still logged in to the web application.

A positive match should indicate that the system is logged in.
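
The two session options are meant to be used together, e.g. (the check URL and pattern below are illustrative):

arachni http://testfire.net --login-check-url=http://testfire.net/bank/main.aspx --login-check-pattern='Sign Off'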

--profile-save-filepath

Expects: filepath

Default: disabled

Multiple invocations?: no

This option allows you to save your current running configuration (i.e. all the options passed to Arachni) to an Arachni Framework Profile (.afp) file.

--profile-load-filepath

Expects: filepath

Default: disabled

Multiple invocations?: no

This option allows you to load and run a saved profile.

Note: This option does not impede your ability to specify more options or resave the profile.
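
For instance, you could save the options of one scan to a profile and reuse it later (the filename is illustrative):

arachni http://example.com --audit-forms --checks=xss* --profile-save-filepath=example.afp
arachni http://example.com --profile-load-filepath=example.afp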

$ arachni -h
Arachni - Web Application Security Scanner Framework v1.0
   Author: Tasos "Zapotek" Laskos <tasos.laskos@gmail.com>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


Usage: ./bin/arachni [options] URL

Generic
  -h, --help                  Output this message.

      --version               Show version information.

      --authorized-by EMAIL_ADDRESS
                              E-mail address of the person who authorized the scan.
                                (It'll make it easier on the sys-admins during log reviews.)
                                (Will be used as a value for the 'From' HTTP request header.)


Output
      --verbose               Show verbose output.

      --debug [LEVEL 1-3]     Show debugging information.

      --only-positives        Only output positive results.


Scope
      --scope-include-pattern PATTERN
                              Only include resources whose path/action matches PATTERN.
                                (Can be used multiple times.)

      --scope-include-subdomains
                              Follow links to subdomains.
                                (Default: false)

      --scope-exclude-pattern PATTERN
                              Exclude resources whose path/action matches PATTERN.
                                (Can be used multiple times.)

      --scope-exclude-content-pattern PATTERN
                              Exclude pages whose content matches PATTERN.
                                (Can be used multiple times.)

      --scope-exclude-binaries
                              Exclude non text-based pages.
                                (Binary content can confuse passive checks that perform pattern matching.)

      --scope-redundant-path-pattern PATTERN:LIMIT
                              Limit crawl on redundant pages like galleries or catalogs.
                                (URLs matching PATTERN will be crawled LIMIT amount of times.)
                                (Can be used multiple times.)

      --scope-auto-redundant [LIMIT]
                              Only follow URLs with identical query parameter names LIMIT amount of times.
                                (Default: 10)

      --scope-directory-depth-limit LIMIT
                              Directory depth limit.
                                (Default: inf)
                                (How deep Arachni should go into the site structure.)

      --scope-page-limit LIMIT
                              How many pages to crawl and audit.
                                (Default: inf)

      --scope-extend-paths FILE
                              Add the paths in FILE to the ones discovered by the crawler.
                                (Can be used multiple times.)

      --scope-restrict-paths FILE
                              Use the paths in FILE instead of crawling.
                                (Can be used multiple times.)

      --scope-url-rewrite PATTERN:SUBSTITUTION
                              Rewrite URLs based on the given PATTERN and SUBSTITUTION.
                                To convert:  http://test.com/articles/some-stuff/23 to http://test.com/articles.php?id=23
                                Use:         /articles\/[\w-]+\/(\d+)/:articles.php?id=\1

      --scope-dom-depth-limit LIMIT
                              How deep to go into the DOM tree of each page, for pages with JavaScript code.
                                (Default: 10)
                                (Setting it to '0' will disable browser analysis.)

      --scope-https-only      Forces the system to only follow HTTPS URLs.
                                (Default: false)


Audit
      --audit-links           Audit links.

      --audit-forms           Audit forms.

      --audit-cookies         Audit cookies.

      --audit-cookies-extensively
                              Submit all links and forms of the page along with the cookie permutations.
                                (*WARNING*: This will severely increase the scan-time.)

      --audit-headers         Audit headers.

      --audit-link-template TEMPLATE
                              Regular expression with named captures to use to extract input information from generic paths.
                                To extract the 'input1' and 'input2' inputs from:
                                  http://test.com/input1/value1/input2/value2
                                Use:
                                  /input1\/(?<input1>\w+)\/input2\/(?<input2>\w+)/
                                (Can be used multiple times.)

      --audit-with-both-methods
                              Audit elements with both GET and POST requests.
                                (*WARNING*: This will severely increase the scan-time.)

      --audit-exclude-vector PATTERN
                              Exclude input vectorS whose name matches PATTERN.
                                (Can be used multiple times.)

      --audit-include-vector PATTERN
                              Include only input vectors whose name matches PATTERN.
                                (Can be used multiple times.)


Input
      --input-value PATTERN:VALUE
                              PATTERN to match against input names and VALUE to use for them.
                                (Can be used multiple times.)

      --input-values-file FILE
                              YAML file containing a Hash object with regular expressions, to match against input names, as keys and input values as values.

      --input-without-defaults
                              Do not use the system default input values.

      --input-force           Fill-in even non-empty inputs.


HTTP
      --http-user-agent USER_AGENT
                              Value for the 'User-Agent' HTTP request header.
                                (Default: Arachni/v1.0)

      --http-request-concurrency MAX_CONCURRENCY
                              Maximum HTTP request concurrency.
                                (Default: 20)
                                (Be careful not to kill your server.)
                                (*NOTE*: If your scan seems unresponsive try lowering the limit.)

      --http-request-timeout TIMEOUT
                              HTTP request timeout in milliseconds.
                                (Default: 50000)

      --http-request-redirect-limit LIMIT
                              Maximum amount of redirects to follow for each HTTP request.
                                (Default: 5)

      --http-request-queue-size QUEUE_SIZE
                              Maximum amount of requests to keep in the queue.
                                Bigger size means better scheduling and better performance,
                                smaller means less RAM consumption.
                                (Default: 500)

      --http-request-header NAME=VALUE
                              Specify custom headers to be included in the HTTP requests.
                                (Can be used multiple times.)

      --http-response-max-size LIMIT
                              Do not download response bodies larger than the specified LIMIT, in bytes.
                                (Default: inf)

      --http-cookie-jar COOKIE_JAR_FILE
                              Netscape-styled HTTP cookiejar file.

      --http-cookie-string COOKIE
                              Cookie representation as an 'Cookie' HTTP request header.

      --http-authentication-username USERNAME
                              Username for HTTP authentication.

      --http-authentication-password PASSWORD
                              Password for HTTP authentication.

      --http-proxy ADDRESS:PORT
                              Proxy to use.

      --http-proxy-authentication USERNAME:PASSWORD
                              Proxy authentication credentials.

      --http-proxy-type http,http_1_0,socks4,socks5,socks4a
                              Proxy type.
                                (Default: auto)


Checks
      --checks-list [PATTERN] List available checks based on the provided pattern.
                                (If no pattern is provided all checks will be listed.)

      --checks CHECK,CHECK2,...
                              Comma separated list of checks to load.
                                    Checks are referenced by their filename without the '.rb' extension, use '--checks-list' to list all.
                                    Use '*' as a check name to load all checks or as a wildcard, like so:
                                        xss*   to load all XSS checks
                                        sqli*  to load all SQL injection checks
                                        etc.

                                    You can exclude checks by prefixing their name with a minus sign:
                                        --checks=*,-backup_files,-xss
                                    The above will load all checks except for the 'backup_files' and 'xss' checks.

                                    Or mix and match:
                                        -xss*   to unload all XSS checks.


Plugins
      --plugins-list [PATTERN]
                              List available plugins based on the provided pattern.
                                (If no pattern is provided all plugins will be listed.)

      --plugin 'PLUGIN:OPTION=VALUE,OPTION2=VALUE2'
                              PLUGIN is the name of the plugin as displayed by '--plugins-list'.
                                (Plugins are referenced by their filename without the '.rb' extension, use '--plugins-list' to list all.)
                                (Can be used multiple times.)


Platforms
      --platforms-list        List available platforms.

      --platforms-no-fingerprinting
                              Disable platform fingerprinting.
                                (By default, the system will try to identify the deployed server-side platforms automatically
                                in order to avoid sending irrelevant payloads.)

      --platforms PLATFORM,PLATFORM2,...
                              Comma separated list of platforms (by shortname) to audit.
                                (The given platforms will be used *in addition* to fingerprinting. In order to restrict the audit to
                                these platforms enable the '--platforms-no-fingerprinting' option.)


Session
      --login-check-url URL   URL to use to verify that the scanner is still logged in to the web application.
                                (Requires 'login-check-pattern'.)

      --login-check-pattern PATTERN
                              Pattern used against the body of the 'login-check-url' to verify that the scanner is still logged in to the web application.
                                (Requires 'login-check-url'.)


Profiles
      --profile-save-filepath FILEPATH
                              Save the current configuration profile/options to FILEPATH.

      --profile-load-filepath FILEPATH
                              Load a configuration profile from FILEPATH.


Browser cluster
      --browser-cluster-pool-size SIZE
                              Amount of browser workers to keep in the pool and put to work.

      --browser-cluster-job-timeout SECONDS
                              Maximum allowed time for each job.

      --browser-cluster-worker-time-to-live LIMIT
                              Re-spawn the browser of each worker every LIMIT jobs.

      --browser-cluster-ignore-images
                              Do not load images.

      --browser-cluster-screen-width
                              Browser screen width.

      --browser-cluster-screen-height
                              Browser screen height.


Report
      --report-save-path PATH Directory or file path where to store the scan report.
                                You can use the generated file to create reports in several formats with the 'arachni_report' executable.


Snapshot
      --snapshot-save-path PATH
                              Directory or file path where to store the snapshot of a suspended scan.
                                You can use the generated file to resume the scan with the 'arachni_restore' executable.


Timeout
      --timeout HOURS:MINUTES:SECONDS
                              Stop the scan after the given duration is exceeded.

      --timeout-suspend       Suspend after the timeout.
                                You can use the generated file to resume the scan with the 'arachni_restore' executable.