Taking Snaffler analysis to the next level using ElasticSearch. Currently in Alpha (α)
Simply run:
pip install -r requirements.txt
Supported arguments are as follows:
usage: snafflemonster.py [-h] -f FILE -n HOSTNAME [-i INDEX] [-k APIKEY] [-r REPLACE] [-a APPEND] [--insecure INSECURE]
Send Snaffler Output to ElasticSearch for analysis.
optional arguments:
-h, --help show this help message and exit
-f FILE, --file FILE The path to the JSON file to process.
-n HOSTNAME, --hostname HOSTNAME
Hostname or IP pointing to the ElasticSearch instance.
-i INDEX, --index INDEX
The name of the index to store results in.
-k APIKEY, --apikey APIKEY
The API key used to authenticate to ElasticSearch.
-r REPLACE, --replace REPLACE
Optional argument to delete existing items in the index selected before adding new items.
-a APPEND, --append APPEND
Optional argument to append new items to the selected index.
--insecure INSECURE Toggle to allow sending over HTTPS with verification turned off so self-signed or invalid certificates can be used.
Happy Snaffling
The following is an example of using the program:
python3 snafflemonster.py -f /Path/To/Snaffler.json -n elasticsearch.snaffler.com
If required values such as the index and API key are not provided as arguments, the user will be prompted for them at runtime.
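Under the hood the idea is simple: read each result from the Snaffler JSON output and bulk-index it into ElasticSearch. The following is a minimal sketch of that idea using the official elasticsearch Python client, not the actual snafflemonster implementation; the index name, file path, and one-JSON-object-per-line assumption are all illustrative:

# Minimal sketch of the ingestion idea -- not the actual snafflemonster code.
import json
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(
    "https://elasticsearch.snaffler.com:9200",  # your -n/--hostname value
    api_key="<your-api-key>",                   # your -k/--apikey value
    verify_certs=False,                         # roughly what --insecure toggles
)

def snaffler_docs(path, index):
    # Assumes Snaffler's JSON log writes one JSON object per line.
    with open(path) as f:
        for line in f:
            if line.strip():
                yield {"_index": index, "_source": json.loads(line)}

helpers.bulk(es, snaffler_docs("/Path/To/Snaffler.json", "snaffler"))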
If you don't already have any Snaffler output to analyse/send to ElasticSearch, you can get some by grabbing a copy of Snaffler from here.
You can then generate some output to analyse by running Snaffler like so:
snaffler.exe -s -o snaffout.json -t json
If you don't already have an ElasticSearch instance set up, you will want to do that first. I recommend using Docker and Docker Compose because it is nice and easy.
ElasticSearch has documentation that you can refer to here if you have trouble getting it set up.
The following are some condensed instructions to get you running quickly:
- You will need to configure your environment file:
# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=
# Version of Elastic products
STACK_VERSION=8.1.0
# Set the cluster name (You can change this if you want to)
CLUSTER_NAME=snafflesearch-docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80
# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824
# Project namespace (defaults to the current folder name if not set, again you can change this if you need/want).
COMPOSE_PROJECT_NAME=snafflesearch
Make sure to change any values here to suit your own requirements.
- Make sure the system supports the memory requirements of ElasticSearch in Docker. You can do this by running the following command on a live system:
sysctl -w vm.max_map_count=262144
OR by adding the following to /etc/sysctl.conf:
vm.max_map_count=262144
There are separate instructions for Windows and Mac.
ElasticSearch also recommends modifying ulimit and disabling swap.
- Once your configuration is ready it should be as easy as:
docker-compose up
to get your ElasticSearch instance up and running.
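Before going further you can sanity-check that the cluster is reachable. Here is a quick check in Python; the localhost URL, port, and self-signed certificate all come from the compose setup above, so adjust them if you changed anything:

# Quick connectivity check for the freshly started cluster.
import requests

resp = requests.get(
    "https://localhost:9200",  # ES_PORT from your .env file
    auth=("elastic", "<your ELASTIC_PASSWORD>"),
    verify=False,  # self-signed cert, so skip verification for this local test
)
print(resp.json()["cluster_name"])  # e.g. snafflesearch-docker-cluster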
To perform operations against the ElasticSearch cluster you just set up, you will need an API key. You can create one quite easily in Kibana by going:
- Stack Management
- API Keys
- Create API Key
- Give your API Key a name and fill in some options then hit "Create API Key"
- Save it in secure storage somewhere so you don't forget it or lose it.
This process is also illustrated here:
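If you would rather script this than click through Kibana, the elasticsearch Python client exposes the same security API. Here is a hedged sketch; the key name and expiry are placeholders, and you should still store the returned encoded key somewhere safe:

# Create an API key programmatically instead of via the Kibana UI.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://localhost:9200",
    basic_auth=("elastic", "<your ELASTIC_PASSWORD>"),
    verify_certs=False,  # self-signed cert from the compose setup
)

key = es.security.create_api_key(name="snafflemonster", expiration="30d")
print(key["encoded"])  # the encoded key; save it and use it for -k/--apikey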
Kibana will let you visualise the data that you upload using the ingestor, and you can create your own dashboards to suit your needs. Next on the list of things to implement is automatically creating a demo dashboard along with the required data view, so watch this space. While I work on that, here are some instructions on how to create your own dashboard and start visualising things:
Before you can get started with a dashboard you will need to do two important things.
- Upload your data to ElasticSearch Index using the ingestor.
- Create a Data View/Index Pattern for your uploaded data.
Without a Data View/Index Pattern ElasticSearch won't be able to display any data in a Dashboard.
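For the first of those two steps, you can double-check that the upload actually landed before touching the UI. A small hedged check (it assumes "snaffler" as the index name you chose at upload time):

# Confirm the uploaded Snaffler results are actually in the index.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://localhost:9200",
    api_key="<your-api-key>",
    verify_certs=False,
)
print(es.count(index="snaffler")["count"], "documents indexed")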
You can create a Data View/Index Pattern by going:
- Stack Management
- Data Views
- Create Data View
This process is also illustrated here:
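Kibana also has an HTTP API for this if you prefer to script it. The following is a hedged sketch using requests; the data views endpoint should exist on the 8.x stack used here, but the Kibana port, scheme, credentials, and index pattern are all assumptions based on the compose setup above:

# Create a Data View via Kibana's data views API instead of the UI.
import requests

resp = requests.post(
    "http://localhost:5601/api/data_views/data_view",  # KIBANA_PORT from .env
    auth=("elastic", "<your ELASTIC_PASSWORD>"),
    headers={"kbn-xsrf": "true"},  # Kibana rejects writes without this header
    json={"data_view": {"title": "snaffler*"}},  # pattern matching your index
)
resp.raise_for_status()
print(resp.json()["data_view"]["id"])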
You can create a Dashboard by going:
- Dashboard
- Create Dashboard
- Create Visualisation
- Pick your data view that you created
- Choose a field that you want to visualise, eg: FileResult.MatchedRule.Triage.keyword (if you see there are no available fields, try extending the time range)
- Drag the desired field(s) from the available fields section into the middle
- Change your visualisation type to suit your needs
- Hit save and return to complete the visualisation
- Repeat as many times as needed for all the visualisations you want in your dashboard
This process is also illustrated here:
Once you have populated your dashboard with different visualisations, you can apply filters by clicking on fields such as the regions of a pie chart, or by clicking "Add filter" and then selecting some options.
Doing this you can end up with something like the following:
If you have suggestions for better information to display in the dashboard then please share them!
- Automatic Dashboard and DataView Generation (in progress)
- Built-in queries (eg: find all scripts that are writable; see the sketch after this list)
- Better visualisation of an individual result
- Web/Desktop UI To allow for a cleaner/simpler user experience
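As a taste of what a built-in query could look like, here is a hedged sketch of the "writable scripts" example in Python. The RwStatus and FileInfo field names are assumptions about Snaffler's JSON layout (only FileResult.MatchedRule.Triage is confirmed above), so adjust them to your actual mapping:

# Sketch of a possible built-in query: PowerShell scripts the scanning user can write to.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<your-api-key>", verify_certs=False)

results = es.search(
    index="snaffler",
    query={
        "bool": {
            "filter": [
                {"term": {"FileResult.RwStatus.CanWrite": True}},  # assumed field name
                {"wildcard": {"FileResult.FileInfo.FullName.keyword": "*.ps1"}},  # assumed field name
            ]
        }
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_source"]["FileResult"]["FileInfo"]["FullName"])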
Please create an issue if you find a problem. Pull requests are welcome.
Lynkle
This project is licensed under the GPLv3 license.