Showing posts with label Bing. Show all posts

Belati - The Traditional Swiss Army Knife for OSINT


Belati is a tool for collecting public data and public documents from websites and other services for OSINT purposes. It is inspired by Foca and Datasploit.

What can Belati do?
  • Whois (Indonesian TLD support)
  • Banner grabbing
  • Subdomain enumeration
  • Service scanning for every subdomain machine
  • Wappalyzer support
  • DNS mapping / zone scanning
  • Mail harvesting from websites & search engines
  • Mail harvesting from the MIT PGP public key server
  • Scraping public documents for a domain from search engines
  • Fake and random User-Agent (helps prevent blocking)
  • Proxy support for harvesting emails and documents
  • Public Git repository finder in domain/subdomain
  • Public SVN repository finder in domain/subdomain
  • robots.txt scraper in domain/subdomain
  • Gathering public company info & employees
  • SQLite3 database support for storing Belati results
  • Setup wizard/configuration for Belati
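
The mail-harvesting features above boil down to fetching pages and matching email patterns. A minimal sketch of that idea in Python (this is not Belati's actual code; the regex and function name are illustrative):

```python
import re

# Loose email pattern: good enough for harvesting, not for strict validation
EMAIL_RE = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

def harvest_emails(html_text):
    """Return the unique email addresses found in a blob of HTML or text."""
    return sorted(set(EMAIL_RE.findall(html_text)))

page = '<a href="mailto:admin@example.com">contact</a> or sales@example.com'
print(harvest_emails(page))  # ['admin@example.com', 'sales@example.com']
```

In a real harvester the `page` string would come from an HTTP request (with the fake User-Agent and proxy options listed above), and the same regex pass would be run over search-engine result pages as well.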

TODO
  • Automatic OSINT with username and email support
  • Organization or company OSINT support
  • Collecting data from public services (LinkedIn and others) by username and email
  • Setup wizard for tokens and Belati configuration
  • Token support
  • Email harvesting from multiple sources (GitHub, LinkedIn, etc.)
  • Scraping public documents with multiple search engines (Yahoo, Yandex, Bing, etc.)
  • Metadata extractor
  • Web version with Django
  • Scanning report export to PDF
  • Domain or subdomain reputation checker
  • Reporting support for JSON and PDF
  • Belati updater

Install/Usage
git clone https://github.com/aancw/Belati.git
cd Belati
git submodule update --init --recursive --remote
pip install -r requirements.txt # please use pip with Python 2
sudo su
python Belati.py --help

Tested On
Ubuntu 16.04 x86_64, Arch Linux x86_64, CentOS 7

Python Requirements
This tool is not compatible with Python 3, so use Python 2.7 instead.

Why Is Root Privilege Needed?
Nmap needs root privileges for certain scan types. You can use sudo, or configure nmap to run without root privileges if you prefer; it's your choice ;)
Reference -> https://secwiki.org/w/Running_nmap_as_an_unprivileged_user
Don't worry: Belati still runs when launched as a normal user ;)
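
One common recipe for running nmap unprivileged on Linux (along the lines of the secwiki page referenced above) is to grant the nmap binary the raw-socket capabilities it needs. A sketch, assuming a system with `setcap` available; run the first command once as root:

```shell
# Grant nmap the capabilities raw-socket scans require (one-time setup, as root)
sudo setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip "$(command -v nmap)"

# Verify the capabilities took effect
getcap "$(command -v nmap)"

# nmap (and therefore Belati's service scans) can now run as a normal user;
# --privileged tells nmap to assume raw sockets work even without root
nmap --privileged -sS -p 80,443 scanme.nmap.org
```

Note that capabilities are attached to that specific binary, so they must be re-applied after nmap is upgraded.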

Dependencies
  • urllib2
  • dnspython
  • requests
  • argparse
  • texttable
  • python-geoip-geolite2
  • python-geoip
  • dnsknife
  • termcolor
  • colorama
  • validators
  • tqdm
  • tldextract
  • fake-useragent

System Dependencies
For CentOS/Fedora user, please install this:
yum install gcc gmp gmp-devel python-devel

Library
  • python-whois
  • Sublist3r
  • Subbrute
  • nmap
  • git
  • sqlite3

Notice
This tool is for educational purposes only. The author is not responsible for any damage you cause. Use it at your own risk!

Author
Aan Wahyu a.k.a. Petruknisme (https://petruknisme.com)


PowerMeta - PowerShell Script to Search Publicly Available Files for a Particular Domain and Get the Associated Metadata


PowerMeta searches for publicly available files hosted on various websites for a particular domain by using specially crafted Google and Bing searches. It then allows those files to be downloaded from the target domain. After the files have been retrieved, PowerMeta can analyze the metadata associated with them. Some interesting things commonly found in metadata are usernames, domains, software titles, and computer names.

Public File Discovery
For many organizations it's common to find publicly available files posted on their external websites. Many times these files contain sensitive information that might be of benefit to an attacker like usernames, domains, software titles or computer names. PowerMeta searches both Bing and Google for files on a particular domain using search strings like "site:targetdomain.com filetype:pdf". By default it searches for "pdf, docx, xlsx, doc, xls, pptx, and ppt".
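
The search strings described above are simple to generate. A sketch in Python (rather than PowerMeta's PowerShell, purely to illustrate the query format; the function name is illustrative):

```python
def build_dorks(domain, filetypes=("pdf", "docx", "xlsx", "doc", "xls", "pptx", "ppt")):
    """Build one 'site:... filetype:...' search query per file extension."""
    return ["site:{} filetype:{}".format(domain, ft) for ft in filetypes]

for query in build_dorks("targetdomain.com", ("pdf", "docx")):
    print(query)
# site:targetdomain.com filetype:pdf
# site:targetdomain.com filetype:docx
```

Each generated query is then submitted to Bing and Google, and file links are scraped from the result pages.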

Metadata Extraction
PowerMeta uses Exiftool by Phil Harvey to extract metadata from files. If you would prefer to download the binary from his site directly instead of using the one in this repo, it can be found here: http://www.sno.phy.queensu.ca/~phil/exiftool/. Just make sure the exiftool executable is in the same directory as PowerMeta.ps1 when it is run. By default, only the 'Author' and 'Creator' fields are extracted, as these commonly contain saved usernames. However, all metadata for the files can be extracted by passing PowerMeta the -ExtractAllToCsv flag.
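
Invoking Exiftool for those default fields can be sketched as follows (a Python illustration, not PowerMeta's actual PowerShell code; the tag flags follow Exiftool's standard -TAG convention):

```python
import subprocess  # used to actually run the command once exiftool is installed

def exiftool_cmd(files, all_fields=False):
    """Build an exiftool command line: Author/Creator by default, or everything."""
    cmd = ["exiftool", "-csv"]
    if not all_fields:
        cmd += ["-Author", "-Creator"]  # fields that commonly leak usernames
    return cmd + list(files)

cmd = exiftool_cmd(["report.pdf", "slides.pptx"])
print(" ".join(cmd))
# exiftool -csv -Author -Creator report.pdf slides.pptx
# subprocess.run(cmd, capture_output=True, text=True)  # run when exiftool is on PATH
```

The `-csv` flag makes Exiftool emit one row per file, which maps directly onto the CSV output PowerMeta produces with -ExtractAllToCsv.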

Usage

Basic Search
This command will initiate Google and Bing searches for files on the 'targetdomain.com' domain ending with a file extension of pdf, docx, xlsx, doc, xls, pptx, or ppt. Once it has finished building this list it will prompt the user, asking whether to download the files from the target domain. After downloading the files it will prompt again for extraction of metadata from those files.
C:\PS> Invoke-PowerMeta -TargetDomain targetdomain.com

Changing FileTypes and Automatic Download and Extract
This command will initiate Google and Bing searches for files on the 'targetdomain.com' domain ending with a file extension of pdf or xml. It will then automatically download them from the target domain and extract metadata.
C:\PS> Invoke-PowerMeta -TargetDomain targetdomain.com -FileTypes "pdf, xml" -Download -Extract

Downloading Files From A List
This command will initiate Google and Bing searches for files on the 'targetdomain.com' domain ending with a file extension of pdf, docx, xlsx, doc, xls, pptx, or ppt and write the links of files found to disk in a file called "target-domain-links.txt".
C:\PS> Invoke-PowerMeta -TargetDomain targetdomain.com -TargetFileList target-domain-links.txt

Extract All Metadata and Limit Page Search
This command will initiate Google and Bing searches for files on the 'targetdomain.com' domain ending with a file extension of pdf, docx, xlsx, doc, xls, pptx, or ppt, but only search the first two pages of results. All metadata (not just the default fields) will be saved in a CSV called all-target-metadata.csv.
C:\PS> Invoke-PowerMeta -TargetDomain targetdomain.com -MaxSearchPages 2 -ExtractAllToCsv all-target-metadata.csv

Extract Metadata From Files In A Directory
This command will simply extract all the metadata from the files in the folder "\2017-03-031-144953" and save it in a CSV called all-target-metadata.csv.
C:\PS> ExtractMetadata -OutputDir .\2017-03-031-144953\ -ExtractAllToCsv all-target-metadata.csv

PowerMeta Options
TargetDomain - The target domain to search for files.
FileTypes - A comma-separated list of file extensions to search for. By default PowerMeta searches for "pdf, docx, xlsx, doc, xls, pptx, ppt".
OutputList - A file to output the list of links discovered through web searching to.
OutputDir - A directory to store all downloaded files in.
TargetFileList - List of file links to download.
Download - Pass this flag to auto-download found files instead of being prompted interactively.
Extract - Pass this flag to auto-extract metadata from the found files instead of being prompted interactively.
ExtractAllToCsv - All metadata (not just the default fields) will be extracted from files to a CSV specified with this flag.
UserAgent - Change the default User Agent used by PowerMeta.
MaxSearchPages - The maximum number of pages to search on each search engine.


Bing Dork Scanner - Tool to Extract URLs from a Bing Search


This is a simple script with a GUI to extract URLs from a Bing search.

Only HTTP proxies are supported.

Required Perl Modules:
  • LWP
  • Gtk2
  • Glib
  • utf8
  • threads
  • threads::shared
  • URI::Escape