Malcolm - A Powerful, Easily Deployable Network Traffic Analysis Tool Suite for Full Packet Capture Artifacts (PCAP Files) and Zeek Logs


Malcolm is a powerful network traffic analysis tool suite designed with the following goals in mind:
  • Easy to use – Malcolm accepts network traffic data in the form of full packet capture (PCAP) files and Zeek (formerly Bro) logs. These artifacts can be uploaded via a simple browser-based interface or captured live and forwarded to Malcolm using lightweight forwarders. In either case, the data is automatically normalized, enriched, and correlated for analysis.
  • Powerful traffic analysis – Visibility into network communications is provided through two intuitive interfaces: Kibana, a flexible data visualization plugin with dozens of prebuilt dashboards providing an at-a-glance overview of network protocols; and Moloch, a powerful tool for finding and identifying the network sessions comprising suspected security incidents.
  • Streamlined deployment – Malcolm operates as a cluster of Docker containers, isolated sandboxes which each serve a dedicated function of the system. This Docker-based deployment model, combined with a few simple scripts for setup and run-time management, makes Malcolm suitable to be deployed quickly across a variety of platforms and use cases, whether it be for long-term deployment on a Linux server in a security operations center (SOC) or for incident response on a Macbook for an individual engagement.
  • Secure communications – All communications with Malcolm, both from the user interface and from remote log forwarders, are secured with industry standard encryption protocols.
  • Permissive license – Malcolm is comprised of several widely used open source tools, making it an attractive alternative to security solutions requiring paid licenses.
  • Expanding control systems visibility – While Malcolm is great for general-purpose network traffic analysis, its creators see a particular need in the community for tools providing insight into protocols used in industrial control systems (ICS) environments. Ongoing Malcolm development will aim to provide additional parsers for common ICS protocols.
Although all of the open source tools which make up Malcolm are already available and in general use, Malcolm provides a framework of interconnectivity which makes it greater than the sum of its parts. And while there are many other network traffic analysis solutions out there, ranging from complete Linux distributions like Security Onion to licensed products like Splunk Enterprise Security, the creators of Malcolm feel its easy deployment and robust combination of tools fill a void in the network security space that will make network traffic analysis accessible to many in both the public and private sectors as well as individual enthusiasts.
In short, Malcolm provides an easily deployable network analysis tool suite for full packet capture artifacts (PCAP files) and Zeek logs. While Internet access is required to build it, it is not required at runtime.

Quick start

Getting Malcolm
For a TL;DR example of downloading, configuring, and running Malcolm on a Linux platform, see Installation example using Ubuntu 18.04 LTS.

Source code
The files required to build and run Malcolm are available on the Idaho National Lab's GitHub page. Malcolm's source code is released under the terms of a permissive open source software license (see License.txt for the terms of its release).

Building Malcolm from scratch
The build.sh script can build Malcolm's Docker images from scratch. See Building from source for more information.

Pull Malcolm's Docker images
Malcolm's Docker images are periodically built and hosted on Docker Hub. If you already have Docker and Docker Compose, these prebuilt images can be pulled by navigating into the Malcolm directory (containing the docker-compose.yml file) and running docker-compose pull like this:
$ docker-compose pull
Pulling elasticsearch ... done
Pulling kibana        ... done
Pulling elastalert    ... done
Pulling curator       ... done
Pulling logstash      ... done
Pulling filebeat      ... done
Pulling moloch        ... done
Pulling file-monitor  ... done
Pulling pcap-capture  ... done
Pulling upload        ... done
Pulling htadmin       ... done
Pulling nginx-proxy   ... done
You can then confirm that the images have been retrieved by running docker images:
$ docker images
REPOSITORY                                          TAG                 IMAGE ID            CREATED             SIZE
malcolmnetsec/moloch                                1.4.0               xxxxxxxxxxxx        27 minutes ago      517MB
malcolmnetsec/htadmin                               1.4.0               xxxxxxxxxxxx        2 hours ago         180MB
malcolmnetsec/nginx-proxy                           1.4.0               xxxxxxxxxxxx        4 hours ago         53MB
malcolmnetsec/file-upload                           1.4.0               xxxxxxxxxxxx        24 hours ago        198MB
malcolmnetsec/pcap-capture                          1.4.0               xxxxxxxxxxxx        24 hours ago        111MB
malcolmnetsec/file-monitor                          1.4.0               xxxxxxxxxxxx        24 hours ago        355MB
malcolmnetsec/logstash-oss                          1.4.0               xxxxxxxxxxxx        25 hours ago        1.24GB
malcolmnetsec/curator                               1.4.0               xxxxxxxxxxxx        25 hours ago        303MB
malcolmnetsec/kibana-oss                            1.4.0               xxxxxxxxxxxx        33 hours ago        944MB
malcolmnetsec/filebeat-oss                          1.4.0               xxxxxxxxxxxx        11 days ago         459MB
malcolmnetsec/elastalert                            1.4.0               xxxxxxxxxxxx        11 days ago         276MB
docker.elastic.co/elasticsearch/elasticsearch-oss   6.8.1               xxxxxxxxxxxx        5 weeks ago         769MB
You must run auth_setup.sh prior to running docker-compose pull. You should also ensure your system configuration and docker-compose.yml settings are tuned by running ./scripts/install.py or ./scripts/install.py --configure (see System configuration and tuning).
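The prerequisite above (auth_setup.sh before pulling) can be wrapped in a small guard so the pull is only attempted from a directory that actually is a Malcolm checkout. This is a hypothetical convenience function, not part of Malcolm itself:

```shell
# Hypothetical wrapper around the pull steps described above: run
# auth_setup.sh first, then docker-compose pull, but only if the target
# directory contains a docker-compose.yml file.
pull_malcolm() {
  local dir="${1:-.}"
  if [ ! -f "$dir/docker-compose.yml" ]; then
    echo "no docker-compose.yml in $dir" >&2
    return 1
  fi
  (cd "$dir" && ./scripts/auth_setup.sh && docker-compose pull)
}
```

Usage would simply be `pull_malcolm /path/to/malcolm`; running it against the wrong directory fails early instead of producing a confusing docker-compose error.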

Import from pre-packaged tarballs
Once built, the malcolm_appliance_packager.sh script can be used to create pre-packaged Malcolm tarballs for import on another machine. See Pre-Packaged Installation Files for more information.

Starting in addition to stopping Malcolm
Use the scripts in the scripts/ directory to start and stop Malcolm, view debug logs of a currently running instance, wipe the database and restore Malcolm to a fresh state, etc.
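The mapping from everyday actions to control scripts can be sketched as a small dispatcher; the script names below are the ones documented elsewhere in this README, but the helper function itself is hypothetical:

```shell
# Hypothetical helper: map a friendly action name to the Malcolm control
# script (in the scripts/ directory) that performs it.
malcolm_script_for() {
  case "$1" in
    start)   echo "scripts/start.sh" ;;      # start Malcolm
    stop)    echo "scripts/stop.sh" ;;       # stop Malcolm
    restart) echo "scripts/restart.sh" ;;    # restart Malcolm
    logs)    echo "scripts/logs.sh" ;;       # monitor Malcolm logs
    wipe)    echo "scripts/wipe.sh" ;;       # stop Malcolm and clear its database
    auth)    echo "scripts/auth_setup.sh" ;; # change authentication settings
    *)       echo "unknown action: $1" >&2; return 1 ;;
  esac
}
```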

User interface
A few minutes after starting Malcolm (probably 5 to 10 minutes for Logstash to be completely up, depending on the system), the following services will be accessible:

Overview


Malcolm processes network traffic data in the form of packet capture (PCAP) files or Zeek logs. A packet capture appliance ("sensor") monitors network traffic mirrored to it over a SPAN port on a network switch or router, or using a network TAP device. Zeek logs are generated containing important session metadata from the traffic observed, which are then securely forwarded to a Malcolm instance. Full PCAP files are optionally stored locally on the sensor device for later examination.
Malcolm parses the network session data and enriches it with additional lookups and mappings including GeoIP mapping, hardware manufacturer lookups from organizationally unique identifiers (OUI) in MAC addresses, assigning names to network segments and hosts based on user-defined IP address and MAC mappings, performing TLS fingerprinting, and many others.
The enriched data is stored in an Elasticsearch document store in a format suitable for analysis through two intuitive interfaces: Kibana, a flexible data visualization plugin with dozens of prebuilt dashboards providing an at-a-glance overview of network protocols; and Moloch, a powerful tool for finding and identifying the network sessions comprising suspected security incidents. These tools can be accessed through a web browser from analyst workstations or for display in a security operations center (SOC). Logs can also optionally be forwarded on to another instance of Malcolm.
For smaller networks, use at home by network security enthusiasts, or in the field for incident response engagements, Malcolm can also easily be deployed locally on an ordinary consumer workstation or laptop. Malcolm can process local artifacts such as locally-generated Zeek logs, locally-captured PCAP files, and PCAP files collected offline without the use of a dedicated sensor appliance.
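Generating Zeek logs locally (for later upload to Malcolm) amounts to running Zeek against a previously captured PCAP. A minimal sketch, assuming Zeek is installed on the workstation; the function name and "capture.pcap" filename are placeholders:

```shell
# Sketch: produce Zeek logs offline from a previously captured PCAP so
# they can be uploaded to Malcolm without a dedicated sensor appliance.
zeek_logs_from_pcap() {
  local pcap="$1" outdir="${2:-zeek-out}"
  [ -r "$pcap" ] || { echo "cannot read $pcap" >&2; return 1; }
  command -v zeek >/dev/null || { echo "zeek not installed" >&2; return 1; }
  # resolve the PCAP to an absolute path, since zeek writes its *.log
  # files into the current directory
  local abs
  abs="$(cd "$(dirname "$pcap")" && pwd)/$(basename "$pcap")"
  mkdir -p "$outdir" && (cd "$outdir" && zeek -r "$abs")
}
# usage: zeek_logs_from_pcap capture.pcap
```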

Components
Malcolm leverages the following excellent open source tools, among others.
  • Moloch - for PCAP file processing, browsing, searching, analysis, and carving/exporting; Moloch itself consists of two parts:
    • moloch-capture - a tool for traffic capture, as well as offline PCAP parsing and metadata insertion into Elasticsearch
    • viewer - a browser-based interface for data visualization
  • Elasticsearch - a search and analytics engine for indexing and querying network traffic session metadata
  • Logstash and Filebeat - for ingesting and parsing Zeek log files and shipping them into Elasticsearch in a format that Moloch understands just as it natively understands PCAP data
  • Kibana - for creating additional ad-hoc visualizations and dashboards beyond those provided by Moloch Viewer
  • Zeek - a network analysis framework and IDS
  • ClamAV - an antivirus engine for scanning files extracted by Zeek
  • CyberChef - a "swiss-army knife" data conversion tool
  • jQuery File Upload - for uploading PCAP files and Zeek logs for processing
  • Docker and Docker Compose - for simple, reproducible deployment of the Malcolm appliance across environments and to coordinate communication between its various components
  • nginx - for HTTPS and reverse proxying Malcolm components
  • ElastAlert - an alerting framework for Elasticsearch. Specifically, the BitSensor fork of ElastAlert, its Docker configuration and its corresponding Kibana plugin are used.

Development
Checking out the Malcolm source code results in the following subdirectories in your malcolm/ working copy:
  • curator - code and configuration for the curator container which defines rules for closing and/or deleting old Elasticsearch indices
  • Dockerfiles - a directory containing build instructions for Malcolm's docker images
  • docs - a directory containing instructions and documentation
  • elastalert - code and configuration for the elastalert container which provides an alerting framework for Elasticsearch
  • elasticsearch - an initially empty directory where the Elasticsearch database instance will reside
  • elasticsearch-backup - an initially empty directory for storing Elasticsearch index snapshots
  • filebeat - code and configuration for the filebeat container which ingests Zeek logs and forwards them to the logstash container
  • file-monitor - code and configuration for the file-monitor container which can scan files extracted by Zeek
  • file-upload - code and configuration for the upload container which serves a web browser-based upload form for uploading PCAP files and Zeek logs, and which serves an SFTP share as an alternate method for upload
  • htadmin - configuration for the htadmin user account administration container
  • iso-build - code and configuration for building an installer ISO for a minimal Debian-based Linux installation for running Malcolm
  • kibana - code and configuration for the kibana container for creating additional ad-hoc visualizations and dashboards beyond those provided by Moloch Viewer
  • logstash - code and configuration for the logstash container which parses Zeek logs and forwards them to the elasticsearch container
  • moloch - code and configuration for the moloch container which handles PCAP processing and which serves the Viewer application
  • moloch-logs - an initially empty directory to which the moloch container will write some debug log files
  • moloch-raw - an initially empty directory to which the moloch container will write captured PCAP files; as Moloch as employed by Malcolm is currently used for processing previously-captured PCAP files, this directory is currently unused
  • nginx - configuration for the nginx reverse proxy container
  • pcap - an initially empty directory for PCAP files to be uploaded, processed, and stored
  • pcap-capture - code and configuration for the pcap-capture container which can capture network traffic
  • scripts - control scripts for starting, stopping, restarting, etc. Malcolm
  • shared - miscellaneous code used by various Malcolm components
  • zeek-logs - an initially empty directory for Zeek logs to be uploaded, processed, and stored
and the following files of special note:
  • auth.env - the script ./scripts/auth_setup.sh prompts the user for the administrator credentials used by the Malcolm appliance, and auth.env is the environment file where those values are stored
  • cidr-map.txt - specify custom IP address to network segment mapping
  • host-map.txt - specify custom IP and/or MAC address to host mapping
  • docker-compose.yml - the configuration file used by docker-compose to build, start, and stop an instance of the Malcolm appliance
  • docker-compose-standalone.yml - similar to docker-compose.yml, but used for the "packaged" installation of Malcolm
  • docker-compose-standalone-zeek-live.yml - identical to docker-compose-standalone.yml, except Filebeat is configured to monitor live Zeek logs (i.e., logs being actively written to)
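Since the three compose files above correspond to three deployment styles, a small helper can make the choice explicit. This is a hypothetical convenience function that merely prints the docker-compose invocation (with the standard `-f` flag) it would run:

```shell
# Hypothetical helper: print the docker-compose command for a given
# Malcolm deployment style, using the compose files listed above.
compose_cmd() {
  case "$1" in
    dev)        echo "docker-compose -f docker-compose.yml up" ;;
    standalone) echo "docker-compose -f docker-compose-standalone.yml up" ;;
    zeek-live)  echo "docker-compose -f docker-compose-standalone-zeek-live.yml up" ;;
    *)          echo "unknown deployment style: $1" >&2; return 1 ;;
  esac
}
```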

Building from source
Building the Malcolm docker images from scratch requires network access to pull source files for its components. Once network access is available, execute the following command to build all of the Docker images used by the Malcolm appliance:
$ ./scripts/build.sh
Then, go take a walk or something since it will be a while. When you're done, you can run docker images and see that you have fresh images for:
  • malcolmnetsec/curator (based on debian:buster-slim)
  • malcolmnetsec/elastalert (based on bitsensor/elastalert)
  • malcolmnetsec/file-monitor (based on debian:buster-slim)
  • malcolmnetsec/file-upload (based on debian:buster-slim)
  • malcolmnetsec/filebeat-oss (based on docker.elastic.co/beats/filebeat-oss)
  • malcolmnetsec/htadmin (based on debian:buster-slim)
  • malcolmnetsec/kibana-oss (based on docker.elastic.co/kibana/kibana-oss)
  • malcolmnetsec/logstash-oss (based on centos:7)
  • malcolmnetsec/moloch (based on debian:stretch-slim)
  • malcolmnetsec/nginx-proxy (based on jwilder/nginx-proxy:alpine)
  • malcolmnetsec/pcap-capture (based on debian:buster-slim)
Additionally, the command will pull from Docker Hub:
  • docker.elastic.co/elasticsearch/elasticsearch-oss
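After a build, the image list above can be checked mechanically. A sketch, assuming Docker is installed and the image names match the list (the function name is hypothetical):

```shell
# Sketch: report any expected malcolmnetsec/* image that is not present
# locally; image names are taken from the build list above.
check_malcolm_images() {
  local img
  for img in curator elastalert file-monitor file-upload filebeat-oss \
             htadmin kibana-oss logstash-oss moloch nginx-proxy pcap-capture; do
    docker image inspect "malcolmnetsec/$img" >/dev/null 2>&1 \
      || echo "missing: malcolmnetsec/$img"
  done
}
```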

Pre-Packaged installation files

Creating pre-packaged installation files
scripts/malcolm_appliance_packager.sh can be run to package up the configuration files (and, if necessary, the Docker images) which can be copied to a network share or USB drive for distribution to non-networked machines. For example:
$ ./scripts/malcolm_appliance_packager.sh

You must set a username and password for Malcolm, and self-signed X.509 certificates will be generated
Administrator username: analyst
analyst password:
analyst password (again):

(Re)generate self-signed certificates for HTTPS access [Y/n]?

(Re)generate self-signed certificates for a remote log forwarder [Y/n]?

Store username/password for forwarding Logstash events to a secondary, external Elasticsearch instance [y/N]?

Packaged Malcolm to "/home/user/tmp/malcolm_20190513_101117_f0d052c.tar.gz"

Do you need to package docker images also [y/N]? y
This might take a few minutes...

Packaged Malcolm docker images to "/home/user/tmp/malcolm_20190513_101117_f0d052c_images.tar.gz"

To install Malcolm:
  1. Run install.py
  2. Follow the prompts

To start, stop, restart, etc. Malcolm:
  Use the control scripts in the "scripts/" directory:
   - start.sh      (start Malcolm)
   - stop.sh       (stop Malcolm)
   - restart.sh    (restart Malcolm)
   - logs.sh       (monitor Malcolm logs)
   - wipe.sh       (stop Malcolm and clear its database)
   - auth_setup.sh (change authentication-related settings)

A minute or so after starting Malcolm, the following services will be accessible:
  - Moloch: https://localhost/
  - Kibana: https://localhost:5601/
  - PCAP Upload (web): https://localhost:8443/
  - PCAP Upload (sftp): sftp://USERNAME@127.0.0.1:8022/files/
  - Account management: https://localhost:488/
The above example will result in the following artifacts for distribution, as explained in the script's output:
$ ls -lh
total 2.0G
-rwxr-xr-x 1 user user  61k May 13 11:32 install.py
-rw-r--r-- 1 user user 2.0G May 13 11:37 malcolm_20190513_101117_f0d052c_images.tar.gz
-rw-r--r-- 1 user user  683 May 13 11:37 malcolm_20190513_101117_f0d052c.README.txt
-rw-r--r-- 1 user user 183k May 13 11:32 malcolm_20190513_101117_f0d052c.tar.gz

Installing from pre-packaged installation files
If you have obtained pre-packaged installation files to install Malcolm on a non-networked machine via an internal network share or a USB key, you likely have the following files:
  • malcolm_YYYYMMDD_HHNNSS_xxxxxxx.README.txt - This readme file contains a minimal set of instructions for extracting the contents of the other tarballs and running the Malcolm appliance.
  • malcolm_YYYYMMDD_HHNNSS_xxxxxxx.tar.gz - This tarball contains the configuration files and directory configuration used by an instance of Malcolm. It can be extracted via tar -xf malcolm_YYYYMMDD_HHNNSS_xxxxxxx.tar.gz, upon which a directory will be created (named similarly to the tarball) containing the directories and configuration files. Alternately, install.py can take this filename as an argument and handle its extraction and initial configuration for you.
  • malcolm_YYYYMMDD_HHNNSS_xxxxxxx_images.tar.gz - This tarball contains the Docker images used by Malcolm. It can be imported manually via docker load -i malcolm_YYYYMMDD_HHNNSS_xxxxxxx_images.tar.gz
  • install.py - This install script can load the Docker images and extract Malcolm configuration files from the aforementioned tarballs and do some initial configuration for you.
Run install.py malcolm_XXXXXXXX_XXXXXX_XXXXXXX.tar.gz and follow the prompts. If you do not already have Docker and Docker Compose installed, the install.py script will help you install them.
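The manual path that install.py automates (extract the configuration tarball, then load the images) can be sketched as follows; the function is hypothetical, and the tarball arguments stand in for the placeholder names listed above:

```shell
# Sketch of the manual steps install.py performs for you: unpack the
# configuration tarball, then import the Docker images tarball.
manual_install() {
  local cfg="$1" images="$2"
  if [ ! -r "$cfg" ] || [ ! -r "$images" ]; then
    echo "usage: manual_install <config.tar.gz> <images.tar.gz>" >&2
    return 1
  fi
  tar -xf "$cfg"            # creates a directory named similarly to the tarball
  docker load -i "$images"  # imports the Malcolm Docker images
}
```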

Preparing your system

Recommended scheme requirements
Malcolm needs a reasonably up-to-date version of Docker and Docker Compose. In theory this should be possible on Linux, macOS, and recent Windows 10 releases, although so far it's only been tested on Linux and macOS hosts.
To quote the Elasticsearch documentation, "If there is one resource that you will run out of first, it will likely be memory." The same is true for Malcolm: you will want at least 16 gigabytes of RAM to run Malcolm comfortably. For processing large volumes of traffic, I'd recommend at a bare minimum a dedicated server with 16 cores and 16 gigabytes of RAM. Malcolm can run on less, but more is better. You're going to want as much hard drive space as possible, of course, as the amount of PCAP data you're able to analyze and store will be limited by your hard drive.
Moloch's wiki has a couple of documents (here and here and here and a calculator here) which may be helpful, although not everything in those documents will apply to a Docker-based setup like Malcolm.
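On a Linux host, the recommended minimums above can be checked before installing; a rough preflight sketch (the thresholds are the document's recommendations, the function names are hypothetical):

```shell
# Rough preflight check against the suggested minimums above
# (16 GB RAM, 16 cores); Linux-specific (/proc/meminfo).
total_ram_gb() { awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo; }
cpu_count()    { getconf _NPROCESSORS_ONLN; }
preflight() {
  [ "$(total_ram_gb)" -ge 16 ] || echo "warning: less than 16 GB RAM"
  [ "$(cpu_count)" -ge 16 ]    || echo "warning: fewer than 16 cores"
}
```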

System configuration in addition to tuning
If you already have Docker and Docker Compose installed, the install.py script can still help you tune system configuration and docker-compose.yml parameters for Malcolm. To run it in "configuration only" mode, bypassing the steps to install Docker and Docker Compose, run it like this:
sudo ./scripts/install.py --configure
Although install.py will attempt to automate many of the following configuration and tuning parameters, they are nonetheless listed in the following sections for reference:

docker-compose.yml parameters
Edit docker-compose.yml and search for the ES_JAVA_OPTS key. Edit the -Xms4g -Xmx4g values, replacing 4g with a number that is half of your total system memory, or just under 32 gigabytes, whichever is less. So, for example, if I had 64 gigabytes of memory I would edit those values to be -Xms31g -Xmx31g. This indicates how much memory can be allocated to the Elasticsearch heap. For a pleasant experience, I would suggest not using a value under 10 gigabytes. Similar values can be modified for Logstash with LS_JAVA_OPTS, where using 3 or 4 gigabytes is recommended.
Various other environment variables inside of docker-compose.yml can be tweaked to control aspects of how Malcolm behaves, particularly with regards to processing PCAP files and Zeek logs. The environment variables of particular interest are located near the top of that file under Commonly tweaked configuration options, which include:
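The heap-sizing rule above (half of system RAM, capped just under 32 GB) can be expressed as a tiny helper you might use before editing docker-compose.yml by hand; the function name and the sed one-liner are illustrative, not part of Malcolm:

```shell
# Sketch of the sizing rule above: take half of total RAM in GB,
# capped at 31 GB (just under 32).
es_heap_gb() {
  local half=$(( $1 / 2 ))
  [ "$half" -gt 31 ] && half=31
  echo "$half"
}
# e.g. with 64 GB of RAM:
#   sed -i "s/-Xms4g -Xmx4g/-Xms$(es_heap_gb 64)g -Xmx$(es_heap_gb 64)g/" docker-compose.yml
```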
  • INITIALIZEDB – indicates to Malcolm to create (or recreate) Moloch's internal settings database on startup; this setting is managed by the wipe.sh and start.sh scripts and does not generally need to be changed manually
  • MANAGE_PCAP_FILES – if set to true, all PCAP files imported into Malcolm will be marked as available for deletion by Moloch if available storage space becomes too low (default false)
  • ZEEK_AUTO_ANALYZE_PCAP_FILES – if set to true, all PCAP files imported into Malcolm will automatically be analyzed by Zeek, and the resulting logs will also be imported (default false)
  • MOLOCH_ANALYZE_PCAP_THREADS – the number of threads available to Moloch for analyzing PCAP files (default 1)
  • ZEEK_AUTO_ANALYZE_PCAP_THREADS – the number of threads available to Malcolm for analyzing Zeek logs (default 1)
  • LOGSTASH_JAVA_EXECUTION_ENGINE – if set to true, Logstash will use the new Logstash Java Execution Engine which may significantly speed up Logstash startup and processing (default false, as it is currently considered experimental)
  • LOGSTASH_OUI_LOOKUP – if set to true, Logstash will map MAC addresses to vendors for all source and destination MAC addresses when analyzing Zeek logs (default true)
  • LOGSTASH_REVERSE_DNS – if set to true, Logstash will perform a reverse DNS lookup for all external source and destination IP address values when analyzing Zeek logs (default false)
  • ES_EXTERNAL_HOSTS – if specified (in the format '10.0.0.123:9200'), logs received by Logstash will be forwarded on to another external Elasticsearch instance in addition to the one maintained locally by Malcolm
  • ES_EXTERNAL_SSL – if set to true, Logstash will use HTTPS for the connection to external Elasticsearch instances specified in ES_EXTERNAL_HOSTS
  • ES_EXTERNAL_SSL_CERTIFICATE_VERIFICATION – if set to true, Logstash will require full SSL certificate validation; this may fail if using self-signed certificates (default false)
  • KIBANA_OFFLINE_REGION_MAPS – if set to true, a small internal server will be surfaced to Kibana to provide the ability to view region map visualizations even when an Internet connection is not available (default true)
  • CURATOR_CLOSE_COUNT and CURATOR_CLOSE_UNITS - determine behavior for automatically closing older Elasticsearch indices to conserve memory; see Elasticsearch index curation
  • CURATOR_DELETE_COUNT and CURATOR_DELETE_UNITS - determine behavior for automatically deleting older Elasticsearch indices to reduce disk usage; see Elasticsearch index curation
  • CURATOR_DELETE_GIGS - if the Elasticsearch indices representing the log data exceed this size, in gigabytes, older indices will be deleted to bring the total size back under this threshold; see Elasticsearch index curation
  • CURATOR_SNAPSHOT_DISABLED - if set to False, daily snapshots (backups) will be made of the previous day's Elasticsearch log index; see Elasticsearch index curation
  • AUTO_TAG – if set to true, Malcolm will automatically create Moloch sessions and Zeek logs with tags based on the filename, as described in Tagging (default true)
  • BEATS_SSL – if set to true, Logstash will require encrypted communications for any external Beats-based forwarders from which it will accept logs; if Malcolm is being used as a standalone tool then this can safely be set to false, but if external log feeds are to be accepted then setting it to true is recommended (default false)
  • ZEEK_EXTRACTOR_MODE – determines the file extraction behavior for file transfers detected by Zeek; see Automatic file extraction and scanning for more details
  • EXTRACTED_FILE_IGNORE_EXISTING – if set to true, files extant in the ./zeek-logs/extract_files/ directory will be ignored on startup rather than scanned
  • EXTRACTED_FILE_PRESERVATION – determines behavior for preservation of Zeek-extracted files
  • VTOT_API2_KEY – used to specify a VirusTotal Public API v2.0 key, which, if specified, will be used to submit hashes of Zeek-extracted files to VirusTotal
  • EXTRACTED_FILE_ENABLE_CLAMAV – if set to true (and VTOT_API2_KEY is unspecified), Zeek-extracted files will be scanned with ClamAV
  • EXTRACTED_FILE_ENABLE_FRESHCLAM – if set to true, ClamAV will periodically update virus databases
  • PCAP_ENABLE_NETSNIFF – if set to true, Malcolm will capture network traffic on the local network interface(s) indicated in PCAP_IFACE using netsniff-ng
  • PCAP_ENABLE_TCPDUMP – if set to true, Malcolm will capture network traffic on the local network interface(s) indicated in PCAP_IFACE using tcpdump; there is no reason to enable both PCAP_ENABLE_NETSNIFF and PCAP_ENABLE_TCPDUMP
  • PCAP_IFACE – used to specify the network interface(s) for local packet capture if PCAP_ENABLE_NETSNIFF or PCAP_ENABLE_TCPDUMP are enabled; for multiple interfaces, separate the interface names with a comma (e.g., 'enp0s25' or 'enp10s0,enp11s0')
  • PCAP_ROTATE_MEGABYTES – used to specify how large a locally-captured PCAP file can grow (in megabytes) before it is closed for processing and a new PCAP file created
  • PCAP_ROTATE_MINUTES – used to specify a time interval (in minutes) after which a locally-captured PCAP file will be closed for processing and a new PCAP file created
  • PCAP_FILTER – specifies a tcpdump-style filter expression for local packet capture; leave blank to capture all traffic
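Flipping one of the options above in docker-compose.yml can be scripted; a minimal sketch using GNU sed. The function is hypothetical, and it assumes a "KEY : 'value'" layout — Malcolm's actual compose file formatting may differ, so adjust the pattern to match yours:

```shell
# Hypothetical helper: set one environment variable in a compose file,
# assuming lines of the form "KEY : 'value'" (verify against your file).
set_compose_option() {
  local file="$1" key="$2" value="$3"
  grep -q "$key" "$file" || { echo "$key not found in $file" >&2; return 1; }
  sed -i "s/\(${key}[[:space:]]*:[[:space:]]*\).*/\1'${value}'/" "$file"
}
# usage: set_compose_option docker-compose.yml ZEEK_AUTO_ANALYZE_PCAP_FILES true
```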

Linux host scheme configuration

Installing Docker
Docker installation instructions vary slightly by distribution. Please follow the links below to docker.com to find the instructions specific to your distribution:
After installing Docker, because Malcolm should be run as a non-root user, add your user to the docker group with something like:
$ sudo usermod -aG docker yourusername
Following this, either reboot or log out and then log back in.
Docker starts automatically on DEB-based distributions. On RPM-based distributions, you need to start it manually or enable it using the appropriate systemctl or service command(s).
You can test docker by running docker info, or (assuming you have network access), docker run --rm hello-world.

Installing docker-compose
Please follow this link on docker.com for instructions on installing docker-compose.

Operating scheme configuration
The host scheme (ie., the i running Docker) volition demand to live configured for the best possible Elasticsearch performance. Here are a few suggestions for Linux hosts (these may vary from distribution to distribution):
  • Append the next lines to /etc/sysctl.conf:
# the maximum number of opened upwards file handles fs.file-max=65536  # the maximum number of user inotify watches fs.inotify.max_user_watches=131072  # the maximum number of retentiveness map areas a procedure may have got vm.max_map_count=262144  # decrease "swappiness" (swapping out runtime retentiveness vs. dropping pages) vm.swappiness=1  # the maximum number of incoming connections net.core.somaxconn=65535  # the % of scheme retentiveness fillable alongside "dirty" pages earlier flushing vm.dirty_background_ratio=40  # maximum % of muddied scheme retentiveness earlier committing everything vm.dirty_ratio=80
  • Depending on your distribution, create either the file /etc/security/limits.d/limits.conf containing:
# the maximum number of opened upwards file handles * soft nofile 65535 * difficult nofile 65535 # do non bound the size of retentiveness that tin live locked * soft memlock unlimited * difficult memlock unlimited
OR the file /etc/systemd/system.conf.d/limits.conf containing:
[Manager] # the maximum number of opened upwards file handles DefaultLimitNOFILE=65535:65535 # do non bound the size of retentiveness that tin live locked DefaultLimitMEMLOCK=infinity
  • Change the readahead value for the disk where the Elasticsearch information volition live stored. There are a few ways to do this. For example, you lot could add together this delineate to /etc/rc.local (replacing /dev/sda alongside your disk block descriptor):
# modify disk read-adhead value (# of blocks) blockdev --setra 512 /dev/sda
  • Change the I/O scheduler to deadline or noop. Again, this can be done in a variety of ways. The simplest is to add elevator=deadline to the arguments in GRUB_CMDLINE_LINUX in /etc/default/grub, then run sudo update-grub2.
  • If you are planning on using very large data sets, consider formatting the drive containing the elasticsearch volume as XFS.
After making all of these changes, do a reboot for good measure!
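If a reboot is not convenient, most of the sysctl values above can also be applied to the running kernel immediately with sysctl -w (run as root); this is a sketch using the same values suggested above:

```shell
# apply the Elasticsearch-related kernel settings to the running system;
# these do not persist across reboots unless also added to /etc/sysctl.conf
sysctl -w fs.file-max=65536
sysctl -w fs.inotify.max_user_watches=131072
sysctl -w vm.max_map_count=262144
sysctl -w vm.swappiness=1
sysctl -w net.core.somaxconn=65535
sysctl -w vm.dirty_background_ratio=40
sysctl -w vm.dirty_ratio=80
```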

macOS host system configuration

Automatic installation using install.py
The install.py script will attempt to guide you through the installation of Docker and Docker Compose if they are not present. If that works for you, you can skip ahead to Configure docker daemon option in this section.

Install Homebrew
The easiest way to install and maintain docker on Mac is using the Homebrew cask. Execute the following in a terminal:
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew install cask
$ brew tap caskroom/versions

Install docker-edge
$ brew cask install docker-edge
This will install the latest version of docker and docker-compose. It can be upgraded later using brew as well:
$ brew cask upgrade --no-quarantine docker-edge
You can now run docker from the Applications folder.

Configure docker daemon option
Some changes should be made for performance (this link gives a good succinct overview).
  • Resource allocation – For a good experience, you will likely need at least a quad-core MacBook Pro with 16GB RAM and an SSD. I have run Malcolm on an older 2013 MacBook Pro with 8GB of RAM, but the more the better. Go to your system tray and select Docker → Preferences → Advanced. Set the resources available to docker to at least 4 CPUs and 8GB of RAM (>= 16GB is preferable).
  • Volume mount performance – You can speed up performance of volume mounts by removing unused paths from Docker → Preferences → File Sharing. For example, if you're only going to be mounting volumes under your home directory, you could share /Users but remove other paths.
After making these changes, right click on the Docker icon in the system tray and select Restart.

Windows host system configuration
There are several ways of installing and running docker with Windows, and they vary depending on the version of Windows you are running and whether or not Hyper-V must be enabled (which is a requirement for VMware, but is precluded by the recent non-virtual-machine release of Docker).
As the author supposes that the target audience of this document is more likely to be running macOS or Linux, detailed instructions for Docker setup under Windows are not included here. Instead, refer to the following links:

Running Malcolm

Configure authentication
Run ./scripts/auth_setup.sh before starting Malcolm for the first time in order to:
  • define the administrator account username and password
  • specify whether or not to (re)generate the self-signed certificates used for HTTPS access
    • key and certificate files are located in the nginx/certs/ directory
  • specify whether or not to (re)generate the self-signed certificates used by a remote log forwarder (see the BEATS_SSL environment variable above)
    • certificate authority, certificate, and key files for Malcolm's Logstash instance are located in the logstash/certs/ directory
    • certificate authority, certificate, and key files to be copied to and used by the remote log forwarder are located in the filebeat/certs/ directory
  • specify whether or not to store the username/password for forwarding Logstash events to a secondary, external Elasticsearch instance (see the ES_EXTERNAL_HOSTS, ES_EXTERNAL_SSL, and ES_EXTERNAL_SSL_CERTIFICATE_VERIFICATION environment variables above)
    • these parameters are stored securely in the Logstash keystore file logstash/certs/logstash.keystore

Account management
auth_setup.sh is used to define the username and password for the administrator account. Once Malcolm is running, the administrator account can be used to manage other user accounts via a Malcolm User Management page served over HTTPS on port 488 (e.g., https://localhost:488 if you are connecting locally).
Malcolm user accounts can be used to access the interfaces of all of its components, including Moloch. Moloch uses its own internal database of user accounts, so when a Malcolm user account logs in to Moloch for the first time Malcolm automatically creates a corresponding Moloch user account. This being the case, it is not recommended to use the Moloch Users settings page or to change the password via the Password form under the Moloch Settings page, as those settings would not be consistently used across Malcolm.
Users may change their passwords via the Malcolm User Management page by clicking User Self Service. A forgotten password can also be reset via an emailed link, though this requires SMTP server settings to be specified in htadmin/config.ini in the Malcolm installation directory.

Starting Malcolm
Docker compose is used to coordinate running the Docker containers. To start Malcolm, navigate to the directory containing docker-compose.yml and run:
$ ./scripts/start.sh
This will create the containers' virtual network and instantiate them, then leave them running in the background. The Malcolm containers may take several minutes to start up completely. To follow the debug output for an already-running Malcolm instance, run:
$ ./scripts/logs.sh
You can also use docker stats to monitor the resource utilization of running containers.

Stopping and restarting Malcolm
You can run ./scripts/stop.sh to stop the docker containers and remove their virtual network. Alternately, ./scripts/restart.sh will restart an instance of Malcolm. Because the data on disk is stored on the host in docker volumes, these operations will not result in loss of data.
Malcolm can be configured to be automatically restarted when the Docker system daemon restarts (for example, on system reboot). This behavior depends on the value of the restart: setting for each service in the docker-compose.yml file. This value can be set by running ./scripts/install.py --configure and answering "yes" to "Restart Malcolm upon system or Docker daemon restart?".
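For reference, restart: is a standard per-service Docker Compose setting; a hypothetical fragment (the service name and image tag are illustrative, not copied from Malcolm's actual docker-compose.yml) might look like:

```yaml
services:
  logstash:
    image: malcolmnetsec/logstash-oss:latest   # illustrative
    restart: "on-failure"   # or "always"/"no"; toggled by ./scripts/install.py --configure
```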

Clearing Malcolm’s data
Run ./scripts/wipe.sh to stop the Malcolm instance and wipe its Elasticsearch database (including index snapshots).

Capture file and log archive upload
Malcolm serves a web browser-based upload form for uploading PCAP files and Zeek logs over HTTPS on port 8443 (e.g., https://localhost:8443 if you are connecting locally).


Additionally, there is a writable files directory on an SFTP server served on port 8022 (e.g., sftp://USERNAME@localhost:8022/files/ if you are connecting locally).
The types of files supported are:
  • PCAP files (of mime type application/vnd.tcpdump.pcap or application/x-pcapng)
    • PCAPNG files are partially supported: Zeek is able to process PCAPNG files, but not all of Moloch's packet examination features work correctly
  • Zeek logs in archive files (application/gzip, application/x-gzip, application/x-7z-compressed, application/x-bzip2, application/x-cpio, application/x-lzip, application/x-lzma, application/x-rar-compressed, application/x-tar, application/x-xz, or application/zip)
    • where the Zeek logs are found in the internal directory structure of the archive file does not matter
Files uploaded via these methods are monitored and moved automatically to other directories for processing, which generally begins within one minute of the upload's completion.
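As a sketch, uploading a capture over the SFTP interface from the command line might look like the following (USERNAME and the capture file name are placeholders):

```shell
# upload a PCAP into the watched "files" directory on Malcolm's SFTP server;
# processing begins automatically, generally within a minute
sftp -P 8022 USERNAME@localhost <<'EOF'
cd files
put capture.pcap
EOF
```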

Tagging
In addition to being processed for upload, Malcolm events will be tagged according to the components of the filenames of the PCAP files or Zeek log archive files from which the events were parsed. For example, records created from a PCAP file named ACME_Scada_VLAN10.pcap would be tagged with ACME, Scada, and VLAN10. Tags are extracted from filenames by splitting on the characters "," (comma), "-" (dash), and "_" (underscore). These tags are viewable and searchable (via the tags field) in Moloch and Kibana. This behavior can be changed by modifying the AUTO_TAG environment variable in docker-compose.yml.
Tags may also be specified manually with the browser-based upload form.
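The tagging rule described above (strip the extension, then split the base filename on commas, dashes, and underscores) can be approximated with a couple of shell commands; this is an illustrative sketch, not Malcolm's actual Logstash filter:

```shell
# illustrative sketch of Malcolm's filename-based tagging
f="ACME_Scada_VLAN10.pcap"
base="${f%.*}"                  # strip the file extension
echo "$base" | tr ',_-' '\n'    # split on comma, underscore, and dash
# prints ACME, Scada, and VLAN10 on separate lines
```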

Processing uploaded PCAPs alongside Zeek
The browser-based upload interface also provides the ability to specify tags for events extracted from the files uploaded. Additionally, an Analyze with Zeek checkbox may be used when uploading PCAP files to cause them to be analyzed by Zeek, similarly to the ZEEK_AUTO_ANALYZE_PCAP_FILES environment variable described above, only on a per-upload basis. Zeek can also automatically extract files from file transfers; see Automatic file extraction and scanning for more details.

Live analysis

Capturing traffic on local network interfaces
Malcolm's pcap-capture container can capture traffic on one or more local network interfaces and periodically rotate these files for processing with Moloch and Zeek. The pcap-capture Docker container is started with additional privileges (IPC_LOCK, NET_ADMIN, NET_RAW, and SYS_ADMIN) in order for it to be able to open network interfaces in promiscuous mode for capture.
The environment variables prefixed with PCAP_ in the docker-compose.yml file determine local packet capture behavior. Local capture can also be configured by running ./scripts/install.py --configure and answering "yes" to "Should Malcolm capture network traffic to PCAP files?".
Note that currently the Microsoft Windows and Apple macOS platforms run Docker inside of a virtualized environment. This would require additional configuration of virtual interfaces and port forwarding in Docker, the process for which is outside of the scope of this document.

Zeek logs from an external source
Malcolm's Logstash instance can also be configured to accept Zeek logs from a remote forwarder by running ./scripts/install.py --configure and answering "yes" to "Expose Logstash port to external hosts?". Enabling encrypted transport of these log files is discussed in Configure authentication and in the description of the BEATS_SSL environment variable in the docker-compose.yml file.
Configuring Filebeat to forward Zeek logs to Malcolm might look something like this example filebeat.yml:
filebeat.inputs:
- type: log
  paths:
    - /var/zeek/*.log
  fields_under_root: true
  fields:
    type: "session"
  compression_level: 0
  exclude_lines: ['^\s*#']
  scan_frequency: 10s
  clean_inactive: 180m
  ignore_older: 120m
  close_inactive: 90m
  close_renamed: true
  close_removed: true
  close_eof: false
  clean_renamed: true
  clean_removed: true

output.logstash:
  hosts: ["192.0.2.123:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/foo/bar/ca.crt"]
  ssl.certificate: "/foo/bar/client.crt"
  ssl.key: "/foo/bar/client.key"
  ssl.supported_protocols: "TLSv1.2"
  ssl.verification_mode: "none"
A future release of Malcolm is planned which will include a customized Linux-based network sensor appliance OS installation image to help automate this setup.

Monitoring a local Zeek instance
Another option for analyzing live network data is to run an external copy of Zeek (i.e., not within Malcolm) so that the log files it creates are seen by Malcolm and automatically processed as they are written.
To do this, you'll need to configure Malcolm's local Filebeat log forwarder so that it will continue to look for changes to Zeek logs that are actively being written to, even once it reaches the end of a file. You can do this by replacing docker-compose.yml with docker-compose-zeek-live.yml before starting Malcolm:
$ mv -f ./docker-compose-zeek-live.yml ./docker-compose.yml
Alternately, you can run the start.sh script (and the other control scripts) like this, without modifying your original docker-compose.yml file:
$ ./scripts/start.sh ./docker-compose-zeek-live.yml
Once Malcolm has been started, cd into ./zeek-logs/current/ and run bro from within that directory.
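Assuming Zeek is installed under /opt/zeek and eth0 is the capture interface (both of these are assumptions that will vary per system), that might look like:

```shell
# run a local Zeek instance whose logs land in the directory Malcolm watches
cd ./zeek-logs/current/
sudo /opt/zeek/bin/zeek -C -i eth0 local
```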

Moloch
The Moloch interface will be accessible over HTTPS on port 443 at the docker host's IP address (e.g., https://localhost if you are connecting locally).

Zeek log integration
A stock installation of Moloch extracts all of its network connection ("session") metadata ("SPI" or "Session Profile Information") from full packet capture artifacts (PCAP files). Zeek (formerly Bro) generates similar session metadata, linking network events to sessions via a connection UID. Malcolm aims to facilitate analysis of Zeek logs by mapping values from Zeek logs to the Moloch session database schema for equivalent fields, and by creating new "native" Moloch database fields for all the other Zeek log values for which there is not currently an equivalent in Moloch:


In this way, when full packet capture is an option, analysis of PCAP files can be enhanced by the additional information Zeek provides. When full packet capture is not an option, similar analysis can still be performed using the same interfaces and processes using the Zeek logs alone.
One value of particular mention is Zeek Log Type (zeek.logType in Elasticsearch). This value corresponds to the kind of Zeek .log file from which the record was created. In other words, a search could be restricted to records from conn.log by searching zeek.logType == conn, or restricted to records from weird.log by searching zeek.logType == weird. In this same way, to view only records from Zeek logs (excluding any from PCAP files), use the special Moloch EXISTS filter, as in zeek.logType == EXISTS!. On the other hand, to exclude Zeek logs and view only records from PCAP files, use zeek.logType != EXISTS!.
Click the owl icon in the upper-left hand corner to access the Moloch usage documentation (accessible at https://localhost/help if you are connecting locally), click the Fields label in the navigation pane, then search for zeek to see a list of the other Zeek log types and fields available to Malcolm.


The values of records created from Zeek logs can be expanded and viewed like any native Moloch session by clicking the plus icon to the left of the record in the Sessions view. However, note that when dealing with these Zeek records the full packet contents are not available, so buttons dealing with viewing and exporting PCAP information will not behave as they would for records from PCAP files. However, clicking the Source Raw or Destination Raw buttons will allow you to view the original Zeek log (formatted as JSON) from which the record was created. Other than that, Zeek records and their values are usable in Malcolm just like native PCAP session records.


Help
Click the owl icon in the upper-left hand corner to access the Moloch usage documentation (accessible at https://localhost/help if you are connecting locally), which includes such topics as search syntax, the Sessions view, SPIView, SPIGraph, and the Connections graph.

Sessions
The Sessions view provides low-level details of the sessions being investigated, whether they be Moloch sessions created from PCAP files or Zeek logs mapped to the Moloch session database schema.


The Sessions view contains many controls for filtering the sessions displayed from all sessions down to sessions of interest:
  • search bar: Indicated by the magnifying glass icon, the search bar allows defining filters on session/log metadata
  • time bounding controls: The clock icon, the Start, End, Bounding, and Interval fields, and the date histogram can be used to visually zoom and pan the time range being examined.
  • search button: The Search button re-runs the sessions query with the filters currently specified.
  • views button: Indicated by the eyeball icon, views allow overlaying additional previously-specified filters onto the current sessions filters. For convenience, Malcolm provides several preconfigured Moloch views, including several on the zeek.logType field.

  • map: A global map can be expanded by clicking the globe icon. This allows filtering sessions by IP-based geolocation when possible.
Some of these filter controls are also available on other Moloch pages (such as SPIView, SPIGraph, Connections, and Hunt).
The number of sessions displayed per page, as well as the page currently displayed, can be specified using the paging controls underneath the time bounding controls.
The sessions table is displayed below the filter controls. This table contains the sessions/logs matching the specified filters.
To the left of the column headers are two buttons. The Toggle visible columns button, indicated by a grid icon, allows toggling which columns are displayed in the sessions table. The Save or load custom column configuration button, indicated by a columns icon, allows saving the currently displayed columns or loading previously-saved configurations. This is useful for customizing which columns are displayed when investigating different types of traffic. Column headers can also be clicked to sort the results in the table, and column widths may be adjusted by dragging the separators between column headers.
Details for individual sessions/logs can be expanded by clicking the plus icon on the left of each row. Each row may contain multiple sections and controls, depending on whether the row represents a Moloch session or a Zeek log. Clicking the field names and values in the details sections allows additional filters to be specified or summary lists of unique values to be exported.
When viewing Moloch session details (i.e., a session generated from a PCAP file), an additional packets section will be visible underneath the metadata sections. When the details of a session of this type are expanded, Moloch will read the packet(s) comprising the session for display here. Various controls can be used to adjust how the packet is displayed (enabling natural decoding and enabling Show Images & Files may produce visually pleasing results), and other options (including PCAP download, carving images and files, applying decoding filters, and examining payloads in CyberChef) are available.
See also Moloch's usage documentation for more information on the Sessions view.

PCAP Export
Clicking the down arrow icon to the far right of the search bar presents a list of actions including PCAP Export (see Moloch's sessions help for information on the other actions). When full PCAP sessions are displayed, the PCAP Export feature allows you to create a new PCAP file from the matching Moloch sessions, including controls for which sessions are included (open items, visible items, or all matching items) and whether or not to include linked segments. Click the Export PCAP button to generate the PCAP, after which you'll be presented with a browser download dialog to save or open the file. Note that depending on the scope of the filters specified this might take a long time (or possibly even time out).


See the issues section of this document for an error that can occur using this feature when Zeek log sessions are displayed.

SPIView
Moloch's SPI (Session Profile Information) View provides a quick and easy-to-use interface for exploring session/log metrics. The SPIView page lists categories for general session metrics (e.g., protocol, source and destination IP addresses, source and destination ports, etc.) as well as for all of the various types of network traffic understood by Moloch and Zeek. These categories can be expanded and the top n values displayed, along with each value's cardinality, for the fields of interest they contain.


Click the plus icon to the right of a category to expand it. The values for specific fields are displayed by clicking the field description in the field list underneath the category name. The list of field names can be filtered by typing part of the field name in the Search for fields to display in this category text input. The Load All and Unload All buttons can be used to toggle display of all of the fields belonging to that category. Once displayed, a field's name or one of its values may be clicked to provide further actions for filtering or displaying that field or its values. Of particular interest may be the Open [fieldname] SPI Graph option when clicking on a field's name. This will open a new tab with the SPI Graph (see below) populated with the field's top values.
Note that because the SPIView page can potentially run many queries, SPIView limits the search domain to seven days (in other words, seven indices, as each index represents one day's worth of data). When using SPIView, you will have the best results if you limit your search time frame to less than or equal to seven days. This limit can be adjusted by editing the spiDataMaxIndices setting in config.ini and rebuilding the malcolmnetsec/moloch docker container.
See also Moloch's usage documentation for more information on SPIView.

SPIGraph
Moloch's SPI (Session Profile Information) Graph visualizes the occurrence of some field's top n values over time, and (optionally) geographically. This is especially useful for identifying trends in a particular type of communication over time: traffic using a particular protocol seen sparsely at regular intervals on that protocol's date histogram in the SPIGraph may indicate a connection check, polling, or beaconing (for example, see the llmnr protocol in the screenshot below).


Controls can be found underneath the time bounding controls for selecting the field of interest, the number of elements to be displayed, the sort order, and a periodic refresh of the data.
See also Moloch's usage documentation for more information on SPIGraph.

Connections
The Connections page presents network communications via a force-directed graph, making it easy to visualize logical relationships between network hosts.


Controls are available for specifying the query size (where smaller values will execute more quickly but may contain only an incomplete representation of the top n sessions, and larger values may take longer to execute but will be more complete), which fields to use as the source and destination for node values, a minimum connections threshold, and the method for determining the "weight" of the link between two nodes. As is the case with most other visualizations in Moloch, the graph is interactive: clicking on a node or the link between two nodes can be used to modify query filters, and the nodes themselves may be repositioned by dragging and dropping them. A node's color indicates whether it communicated as a source/originator, a destination/responder, or both.
While the default source and destination fields are Src IP and Dst IP:Dst Port, the Connections view is able to use any combination of any of the fields populated by Moloch and Zeek. For example:
  • Src OUI in addition to Dst OUI (hardware manufacturers)
  • Src IP in addition to Protocols
  • Originating Network Segment and Responding Network Segment (see CIDR subnet to network segment name mapping)
  • Originating GeoIP City in addition to Responding GeoIP City
or any other combination of these or other fields.
See also Moloch's usage documentation for more information on the Connections graph.

Hunt
Moloch's Hunt feature allows an analyst to search within the packets themselves (including payload data) rather than just searching the session metadata. The search string may be specified using ASCII (with or without case sensitivity), hex codes, or regular expressions. Once a hunt job is complete, matching sessions can be viewed in the Sessions view.
Clicking the Create a packet search job button on the Hunt page will allow you to specify the following parameters for a new hunt job:
  • a packet search job name
  • a maximum number of packets to examine per session
  • the search string in addition to its format (ascii, ascii (case sensitive), hex, regex, or hex regex)
  • whether to search source packets, destination packets, or both
  • whether to search raw or reassembled packets
Click the Create button to begin the search. Moloch will scan the source PCAP files from which the sessions were created according to the search criteria. Note that any filters specified when the hunt job is executed will apply to the hunt job as well; the number of sessions matching the current filters will be displayed above the hunt job parameters with text like "ⓘ Creating a new packet search job will search the packets of # sessions."


Once a hunt job is submitted, it will be assigned a unique hunt ID (a long unique string of characters like yuBHAGsBdljYmwGkbEMm) and its progress will be updated periodically in the Hunt Job Queue with the execution percent complete, the number of matches found so far, and the other parameters with which the job was submitted. More details for the hunt job can be viewed by expanding its row with the plus icon on the left.


Once the hunt job is complete (and a minute or so has passed, as the huntId must be added to the matching session records in the database), click the folder icon on the right side of the hunt job row to open a new Sessions tab with the search bar prepopulated to filter to sessions with packets matching the search criteria.


From this list of filtered sessions you can expand session details and explore the packet payloads which matched the hunt search criteria.
The hunt feature is available only for sessions created from full packet capture data, not Zeek logs. This being the case, it is a good idea to click the eyeball icon and select the PCAP Files view to exclude Zeek logs from candidate sessions prior to using the hunt feature.
See also Moloch's usage documentation for more information on the hunt feature.

Statistics
Moloch provides several other reports which show information about the state of Moloch and the underlying Elasticsearch database.
The Files list displays a list of PCAP files processed by Moloch, the date and time of the earliest packet in each file, and the file size:


The ES Indices list (available under the Stats page) lists the Elasticsearch indices within which log data is contained:


The History view provides a historical list of queries issued to Moloch and the details of those queries:


See also Moloch's usage documentation for more information on the Files list, statistics, and history.

Settings

General settings
The Settings page can be used to tweak Moloch preferences, define additional custom views and column configurations, tweak the color theme, and more.
See Moloch's usage documentation for more information on settings.



Kibana
While Moloch provides very nice visualizations, especially for network traffic, Kibana (an open source general-purpose data visualization tool for Elasticsearch) can be used to create custom visualizations (tables, charts, graphs, dashboards, etc.) using the same data.
The Kibana container can be accessed over HTTPS on port 5601 (e.g., https://localhost:5601 if you are connecting locally). Several preconfigured dashboards for Zeek logs are included in Malcolm's Kibana configuration.
The official Kibana User Guide has excellent tutorials for a variety of topics.
Kibana has several components for data searching and visualization:

Discover
The Discover view enables you to view events on a record-by-record basis (similar to a session record in Moloch or an individual line from a Zeek log). See the official Kibana User Guide for information on using the Discover view:

Screenshots






Visualizations and dashboards

Prebuilt visualizations and dashboards
Malcolm comes with dozens of prebuilt visualizations and dashboards for the network traffic represented by each of the Zeek log types. Click Dashboard to see a list of these dashboards. As is the case with all of Kibana's visualizations, all of the charts, graphs, maps, and tables are interactive and can be clicked on to narrow or expand the scope of the data you are investigating. Similarly, click Visualize to explore the prebuilt visualizations used to build the dashboards.
Many of Malcolm's prebuilt visualizations for Zeek logs are heavily inspired by the excellent Kibana Dashboards that are part of Security Onion.

Screenshots














Building your own visualizations and dashboards
See the official Kibana User Guide for information on creating your own visualizations and dashboards:

Screenshots




Other Malcolm features

Automatic file extraction and scanning
Malcolm can leverage Zeek's knowledge of network protocols to automatically detect file transfers and extract those files from PCAPs as Zeek processes them. This behavior can be enabled globally by modifying the ZEEK_EXTRACTOR_MODE environment variable in docker-compose.yml, or enabled on a per-upload basis for PCAP files uploaded via the browser-based upload form when Analyze with Zeek is selected.
To specify which files should be extracted, the following values are acceptable in ZEEK_EXTRACTOR_MODE:
  • none: no file extraction
  • interesting: extraction of files alongside mime types of mutual laid on vectors
  • mapped: extraction of files alongside recognized mime types
  • known: extraction of files for which whatever mime type tin live determined
  • all: extract all files
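The narrowing semantics of these five modes can be sketched in Python. This is only an illustration of the behavior described above, not Malcolm's actual extraction logic, and the mime-type groupings shown are hypothetical examples:

```python
from typing import Optional

# Hypothetical mime-type groupings, for illustration only.
INTERESTING_MIME_TYPES = {"application/x-dosexec", "application/x-executable"}
MAPPED_MIME_TYPES = INTERESTING_MIME_TYPES | {"application/pdf", "image/png"}

def should_extract(mode: str, mime_type: Optional[str]) -> bool:
    """Decide whether a detected file transfer would be extracted under
    each of the ZEEK_EXTRACTOR_MODE values described above."""
    if mode == "none":
        return False
    if mode == "all":
        return True
    if mode == "known":
        return mime_type is not None                 # any determinable mime type
    if mode == "mapped":
        return mime_type in MAPPED_MIME_TYPES        # recognized mime types only
    if mode == "interesting":
        return mime_type in INTERESTING_MIME_TYPES   # common attack vectors
    raise ValueError(f"unknown ZEEK_EXTRACTOR_MODE: {mode}")
```

Each mode accepts a superset of the files accepted by the mode below it (none ⊂ interesting ⊂ mapped ⊂ known ⊂ all).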
Extracted files can be examined through either (but not both) of two methods:
Files which are flagged as potentially malicious via either of these methods will be logged as Zeek signatures.log entries, and can be viewed in the Signatures dashboard in Kibana.
The EXTRACTED_FILE_PRESERVATION environment variable in docker-compose.yml determines the behavior for preservation of Zeek-extracted files:
  • quarantined: preserve only flagged files in ./zeek-logs/extract_files/quarantine
  • all: preserve flagged files in ./zeek-logs/extract_files/quarantine and all other extracted files in ./zeek-logs/extract_files/preserved
  • none: preserve no extracted files

Automatic host and subnet name assignment

IP/MAC address to hostname mapping via host-map.txt
The host-map.txt file in the Malcolm installation directory can be used to define names for network hosts based on IP and/or MAC addresses in Zeek logs. The default empty configuration looks like this:
# IP or MAC address to host name map:
#   address|host name|required tag
#
# where:
#   address: comma-separated list of IPv4, IPv6, or MAC addresses
#          eg., 172.16.10.41, 02:42:45:dc:a2:96, 2001:0db8:85a3:0000:0000:8a2e:0370:7334
#
#   host name: host name to be assigned when event address(es) match
#
#   required tag (optional): only check match and apply host name if the event
#                            contains this tag
#
Each non-comment line (not beginning with a #) defines an address-to-name mapping for a network host. For example:
127.0.0.1,127.0.1.1,::1|localhost|
192.168.10.10|office-laptop.intranet.lan|
06:46:0b:a6:16:bf|serial-host.intranet.lan|testbed
Each line consists of three |-separated fields: address(es), hostname, and, optionally, a tag which, if specified, must belong to a log for the matching to occur.
As Zeek logs are processed into Malcolm's Elasticsearch instance, the log's source and destination IP and MAC address fields (zeek.orig_h, zeek.resp_h, zeek.orig_l2_addr, and zeek.resp_l2_addr, respectively) are compared against the lists of addresses in host-map.txt. When a match is found, a new field is added to the log: zeek.orig_hostname or zeek.resp_hostname, depending on whether the matching address belongs to the originating or responding host. If the third field (the "required tag" field) is specified, a log must also contain that value in its tags field in addition to matching the IP or MAC address specified in order for the corresponding _hostname field to be added.
zeek.orig_hostname and zeek.resp_hostname may each contain multiple values. For example, if both a host's source IP address and source MAC address were matched by two different lines, zeek.orig_hostname would contain the hostname values from both matching lines.
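The parsing rules above (|-separated fields, comma-separated addresses, optional trailing tag) can be sketched as follows. This is a simplified illustration, not Malcolm's actual Logstash pipeline code:

```python
# Minimal sketch of host-map.txt parsing as described above.
def parse_host_map(text):
    """Yield (addresses, hostname, required_tag) tuples from host-map.txt content."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split("|")
        if len(fields) < 2:
            continue  # not a valid address|name[|tag] entry
        addresses = [a.strip() for a in fields[0].split(",") if a.strip()]
        hostname = fields[1].strip()
        tag = fields[2].strip() if len(fields) > 2 and fields[2].strip() else None
        yield addresses, hostname, tag

# Entries taken from the example above; the first has no required tag.
entries = list(parse_host_map("""\
127.0.0.1,127.0.1.1,::1|localhost|
06:46:0b:a6:16:bf|serial-host.intranet.lan|testbed
"""))
```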

CIDR subnet to network segment name mapping via cidr-map.txt
The cidr-map.txt file in the Malcolm installation directory can be used to define names for network segments based on IP addresses in Zeek logs. The default empty configuration looks like this:
# CIDR to network segment format:
#   IP(s)|segment name|required tag
#
# where:
#   IP(s): comma-separated list of CIDR-formatted network IP addresses
#          eg., 10.0.0.0/8, 169.254.0.0/16, 172.16.10.41
#
#   segment name: segment name to be assigned when event IP address(es) match
#
#   required tag (optional): only check match and apply segment name if the event
#                            contains this tag
#
Each non-comment line (not beginning with a #) defines a subnet-to-name mapping for a network segment. For example:
192.168.50.0/24,192.168.40.0/24,10.0.0.0/8|corporate|
192.168.100.0/24|control|
192.168.200.0/24|dmz|
172.16.0.0/12|virtualized|testbed
Each line consists of three |-separated fields: CIDR-formatted subnet IP range(s), subnet name, and, optionally, a tag which, if specified, must belong to a log for the matching to occur.
As Zeek logs are processed into Malcolm's Elasticsearch instance, the log's source and destination IP address fields (zeek.orig_h and zeek.resp_h, respectively) are compared against the lists of addresses in cidr-map.txt. When a match is found, a new field is added to the log: zeek.orig_segment or zeek.resp_segment, depending on whether the matching address belongs to the originating or responding host. If the third field (the "required tag" field) is specified, a log must also contain that value in its tags field in addition to its IP address falling within the subnet specified in order for the corresponding _segment field to be added.
zeek.orig_segment and zeek.resp_segment may each contain multiple values. For example, if cidr-map.txt specifies multiple overlapping subnets on different lines, zeek.orig_segment would contain the segment names from both matching lines if zeek.orig_h belonged to both subnets.
If both zeek.orig_segment and zeek.resp_segment are added to a log, and if they contain different values, the tag cross_segment will be added to the log's tags field for convenient identification of cross-segment traffic. This traffic could be easily visualized using Moloch's Connections graph, by setting the Src: value to Originating Network Segment and the Dst: value to Responding Network Segment:
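The matching semantics described above can be sketched with Python's ipaddress module: overlapping subnets all contribute their segment names, and differing originator/responder segments yield a cross_segment tag. The subnets and segment names below are hypothetical, and this is an illustration of the behavior, not Malcolm's implementation:

```python
import ipaddress

# Hypothetical (networks, segment name) pairs, as if parsed from cidr-map.txt.
# Note the deliberate overlap: 10.0.0.0/8 appears under two segment names.
CIDR_MAP = [
    (["192.168.50.0/24", "10.0.0.0/8"], "corporate"),
    (["192.168.100.0/24"], "control"),
    (["10.0.0.0/8"], "private"),
]

def segments_for(ip):
    """Return every segment name whose subnet(s) contain the given IP address."""
    addr = ipaddress.ip_address(ip)
    return [name for nets, name in CIDR_MAP
            if any(addr in ipaddress.ip_network(n) for n in nets)]

def tags_for(orig_ip, resp_ip):
    """Emit cross_segment when originator and responder map to different segments."""
    orig, resp = segments_for(orig_ip), segments_for(resp_ip)
    return ["cross_segment"] if orig and resp and set(orig) != set(resp) else []
```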


Applying mapping changes
When changes are made to either cidr-map.txt or host-map.txt, Malcolm's Logstash container must be restarted. The easiest way to do this is to restart Malcolm via restart.sh (see Stopping and restarting Malcolm).

Elasticsearch index curation
Malcolm uses Elasticsearch Curator to periodically examine indices representing the log data and perform actions on indices meeting criteria for age or disk usage. The environment variables prefixed with CURATOR_ in the docker-compose.yml file determine the criteria for the following actions:
This behavior can also be modified by running ./scripts/install.py --configure.
Other custom filters and actions may be defined by the user by manually modifying the action_file.yml file used by the curator container and ensuring that it is mounted into the container as a volume in the curator: section of your docker-compose.yml file:
  curator:
    …
    volumes:
      - ./curator/config/action_file.yml:/config/action_file.yml
    …
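As a concrete (but hypothetical) example, a custom action in action_file.yml might close indices older than a year. The entry below follows Curator's documented action-file format, but the filter values — the sessions2- index prefix, the timestring, and the age threshold — are illustrative assumptions; consult the Curator documentation and your own index naming before using anything like it:

```yaml
actions:
  1:
    action: close
    description: "Close indices older than one year (illustrative thresholds)"
    options:
      ignore_empty_list: True
      continue_if_exception: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: sessions2-
    - filtertype: age
      source: name
      direction: older
      timestring: '%y%m%d'
      unit: days
      unit_count: 365
```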
The settings governing index curation can affect Malcolm's performance in both log ingestion and queries, and there are caveats that should be taken into consideration when configuring this feature. Please read the Elasticsearch documentation linked in this section with regards to index curation.
Index curation only deals with disk space consumed by Elasticsearch indices: it does not have anything to do with PCAP file storage. The MANAGE_PCAP_FILES environment variable in the docker-compose.yml file can be used to allow Moloch to prune old PCAP files based on available disk space.

Known issues

PCAP file export error when Zeek logs are in Moloch search results
Moloch has a nice feature that allows you to export PCAP files matching the filters currently populating the search field. However, Moloch viewer will raise an exception if records created from Zeek logs are found among the search results to be exported. For this reason, if you are using the export PCAP feature it is recommended that you apply the PCAP Files view to filter your search results prior to doing the export.

Manual Kibana index pattern refresh
Because some fields are created in Elasticsearch dynamically when Zeek logs are ingested by Logstash, they may not have been present when Kibana configured its index pattern field mapping during initialization. As such, those fields will not show up in Kibana visualizations until Kibana's copy of the field list is refreshed. Malcolm periodically refreshes this list, but if fields are missing from your visualizations you may wish to do it manually.
After Malcolm ingests your data (or, more specifically, after it has ingested a new log type it has not seen before) you may manually refresh Kibana's field list by clicking Management → Index Patterns, then selecting the sessions2-* index pattern and clicking the reload button near the upper-right of the window.


Installation example using Ubuntu 18.04 LTS
Here's a step-by-step example of getting Malcolm from GitHub, configuring your system and your Malcolm instance, and running it on a system running Ubuntu Linux. Your mileage may vary depending on your individual system configuration, but this should be a good starting point.
You can use git to clone Malcolm into a local working copy, or you can download and extract the artifacts from the latest release.
To install Malcolm from the latest Malcolm release, browse to the Malcolm releases page on GitHub and download at a minimum install.py and the malcolm_YYYYMMDD_HHNNSS_xxxxxxx.tar.gz file, then navigate to your downloads directory:
user@host:~$ cd Downloads/
user@host:~/Downloads$ ls
install.py  malcolm_20190611_095410_ce2d8de.tar.gz
If you are obtaining Malcolm using git instead, run the following command to clone Malcolm into a local working copy:
user@host:~$ git clone https://github.com/idaholab/Malcolm
Cloning into 'Malcolm'...
remote: Enumerating objects: 443, done.
remote: Counting objects: 100% (443/443), done.
remote: Compressing objects: 100% (310/310), done.
remote: Total 443 (delta 81), reused 441 (delta 79), pack-reused 0
Receiving objects: 100% (443/443), 6.87 MiB | 18.86 MiB/s, done.
Resolving deltas: 100% (81/81), done.

user@host:~$ cd Malcolm/
Next, run the install.py script to configure your system. Replace user in this example with your local account username, and follow the prompts. Most questions have an acceptable default you can accept by pressing the Enter key. Depending on whether you are installing Malcolm from the release tarball or inside of a git working copy, the questions below will be slightly different, but for the most part are the same.
user@host:~/Downloads$ sudo python3 install.py
Installing required packages: ['apache2-utils', 'make', 'openssl']

"docker info" failed, attempt to install Docker? (Y/n): y

Attempt to install Docker using official repositories? (Y/n): y
Installing required packages: ['apt-transport-https', 'ca-certificates', 'curl', 'gnupg-agent', 'software-properties-common']
Installing docker packages: ['docker-ce', 'docker-ce-cli', 'containerd.io']
Installation of docker packages apparently succeeded

Add a non-root user to the "docker" group? (y/n): y

Enter user account: user

Add another non-root user to the "docker" group? (y/n): n

"docker-compose version" failed, attempt to install docker-compose? (Y/n): y

Install docker-compose directly from docker github? (Y/n): y
Download and installation of docker-compose apparently succeeded

fs.file-max increases allowed maximum for file handles
fs.file-max= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

fs.inotify.max_user_watches increases allowed maximum for monitored files
fs.inotify.max_user_watches= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

vm.max_map_count increases allowed maximum for memory segments
vm.max_map_count= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

net.core.somaxconn increases allowed maximum for socket connections
net.core.somaxconn= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

vm.swappiness adjusts the preference of the system to swap vs. drop runtime memory pages
vm.swappiness= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

vm.dirty_background_ratio defines the percentage of system memory fillable with "dirty" pages before flushing
vm.dirty_background_ratio= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

vm.dirty_ratio defines the maximum percentage of dirty system memory before committing everything
vm.dirty_ratio= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y

/etc/security/limits.d/limits.conf increases the allowed maximums for file handles and memlocked segments
/etc/security/limits.d/limits.conf does not exist, create it? (Y/n): y

The "haveged" utility may help improve Malcolm startup times by providing entropy for the Linux kernel.
Install haveged? (y/N): y
Installing haveged packages: ['haveged']
Installation of haveged packages apparently succeeded
At this point, if you are installing from a release tarball you will be asked if you would like to extract the contents of the tarball and to specify the installation directory:
Extract Malcolm runtime files from /home/user/Downloads/malcolm_20190611_095410_ce2d8de.tar.gz (Y/n): y

Enter installation path for Malcolm [/home/user/Downloads/malcolm]: /home/user/Malcolm
Malcolm runtime files extracted to /home/user/Malcolm
Alternately, if you are configuring Malcolm from within a git working copy, install.py will now exit. Run install.py again like you did at the beginning of the example, only remove the sudo and add --configure to run install.py in "configuration only" mode.
user@host:~/Malcolm$ python3 scripts/install.py --configure
Now that any necessary system configuration changes have been made, the local Malcolm instance will be configured:
Setting 10g for Elasticsearch and 3g for Logstash. Is this OK? (Y/n): y

Restart Malcolm upon system or Docker daemon restart? (y/N): y

Select Malcolm restart behavior ('no', 'on-failure', 'always', 'unless-stopped'): unless-stopped

Periodically close old Elasticsearch indices? (Y/n): y

Indices older than 5 years will be periodically closed. Is this OK? (Y/n): n

Enter index close threshold (eg., 90 days, 2 years, etc.): 1 years

Indices older than 1 years will be periodically closed. Is this OK? (Y/n): y

Periodically delete old Elasticsearch indices? (Y/n): y

Indices older than 10 years will be periodically deleted. Is this OK? (Y/n): n

Enter index delete threshold (eg., 90 days, 2 years, etc.): 5 years

Indices older than 5 years will be periodically deleted. Is this OK? (Y/n): y

Periodically delete the oldest Elasticsearch indices when the database exceeds a certain size? (Y/n): y

Indices will be deleted when the database exceeds 10000 gigabytes. Is this OK? (Y/n): n

Enter index threshold in gigabytes: 100

Indices will be deleted when the database exceeds 100 gigabytes. Is this OK? (Y/n): y

Automatically analyze all PCAP files with Zeek? (y/N): y

Perform reverse DNS lookup locally for source and destination IP addresses in Zeek logs? (y/N): n

Perform hardware vendor OUI lookups for MAC addresses? (Y/n): y

Expose Logstash port to external hosts? (y/N): n

Forward Logstash logs to external Elasticstack instance? (y/N): n

Enable file extraction with Zeek? (y/N): y

Select file extraction behavior ('none', 'known', 'mapped', 'all', 'interesting'): interesting

Select file preservation behavior ('quarantined', 'all', 'none'): quarantined

Scan extracted files with ClamAV? (y/N): y

Download updated ClamAV virus signatures periodically? (Y/n): y

Should Malcolm capture network traffic to PCAP files? (y/N): y

Specify capture interface(s) (comma-separated): eth0

Capture packets using netsniff-ng? (Y/n): y

Capture packets using tcpdump? (y/N): n

Malcolm has been installed to /home/user/Malcolm. See README.md for more information.
Scripts for starting and stopping Malcolm and changing authentication-related settings can be found in /home/user/Malcolm/scripts.
At this point you should reboot your computer so that the new system settings can be applied. After rebooting, log back in and return to the directory to which Malcolm was installed (or to which the git working copy was cloned).
Now we need to set up authentication and generate some unique self-signed SSL certificates. You can replace analyst in this example with whatever username you wish to use to log in to the Malcolm web interface.
user@host:~/Malcolm$ ./scripts/auth_setup.sh
Username: analyst
analyst password:
analyst password (again):

(Re)generate self-signed certificates for HTTPS access [Y/n]? y

(Re)generate self-signed certificates for a remote log forwarder [Y/n]? y

Store username/password for forwarding Logstash events to a secondary, external Elasticsearch instance [y/N]? n
For now, rather than build Malcolm from scratch, we'll pull images from Docker Hub:
user@host:~/Malcolm$ docker-compose pull
Pulling elasticsearch ... done
Pulling kibana        ... done
Pulling elastalert    ... done
Pulling curator       ... done
Pulling logstash      ... done
Pulling filebeat      ... done
Pulling moloch        ... done
Pulling file-monitor  ... done
Pulling pcap-capture  ... done
Pulling upload        ... done
Pulling htadmin       ... done
Pulling nginx-proxy   ... done

user@host:~/Malcolm$ docker images
REPOSITORY                                          TAG                 IMAGE ID            CREATED             SIZE
malcolmnetsec/moloch                                1.4.0               xxxxxxxxxxxx        27 minutes ago      517MB
malcolmnetsec/htadmin                               1.4.0               xxxxxxxxxxxx        2 hours ago         180MB
malcolmnetsec/nginx-proxy                           1.4.0               xxxxxxxxxxxx        4 hours ago         53MB
malcolmnetsec/file-upload                           1.4.0               xxxxxxxxxxxx        24 hours ago        198MB
malcolmnetsec/pcap-capture                          1.4.0               xxxxxxxxxxxx        24 hours ago        111MB
malcolmnetsec/file-monitor                          1.4.0               xxxxxxxxxxxx        24 hours ago        355MB
malcolmnetsec/logstash-oss                          1.4.0               xxxxxxxxxxxx        25 hours ago        1.24GB
malcolmnetsec/curator                               1.4.0               xxxxxxxxxxxx        25 hours ago        303MB
malcolmnetsec/kibana-oss                            1.4.0               xxxxxxxxxxxx        33 hours ago        944MB
malcolmnetsec/filebeat-oss                          1.4.0               xxxxxxxxxxxx        11 days ago         459MB
malcolmnetsec/elastalert                            1.4.0               xxxxxxxxxxxx        11 days ago         276MB
docker.elastic.co/elasticsearch/elasticsearch-oss   6.8.1               xxxxxxxxxxxx        5 weeks ago         769MB
Finally, we can start Malcolm. When Malcolm starts it will stream informational and debug messages to the console. If you wish, you can safely close the console or use Ctrl+C to stop these messages; Malcolm will keep running in the background.
user@host:~/Malcolm$ ./scripts/start.sh
Creating network "malcolm_default" with the default driver
Creating malcolm_file-monitor_1  ... done
Creating malcolm_htadmin_1       ... done
Creating malcolm_elasticsearch_1 ... done
Creating malcolm_pcap-capture_1  ... done
Creating malcolm_curator_1       ... done
Creating malcolm_logstash_1      ... done
Creating malcolm_elastalert_1    ... done
Creating malcolm_kibana_1        ... done
Creating malcolm_moloch_1        ... done
Creating malcolm_filebeat_1      ... done
Creating malcolm_upload_1        ... done
Creating malcolm_nginx-proxy_1   ... done

Malcolm started, setting "INITIALIZEDB=false" in "docker-compose.yml" for subsequent runs.

In a few minutes, Malcolm services will be accessible via the following URLs:
------------------------------------------------------------------------------
  - Moloch: https://localhost:443/
  - Kibana: https://localhost:5601/
  - PCAP Upload (web): https://localhost:8443/
  - PCAP Upload (sftp): sftp://username@127.0.0.1:8022/files/
  - Account management: https://localhost:488/

          Name                        Command                        State                  Ports
--------------------------------------------------------------------------------------------------------------------------
malcolm_curator_1         /usr/local/bin/cron_env_deb.sh   Up
malcolm_elastalert_1      /usr/local/bin/elastalert- ...   Up (health: starting)   3030/tcp, 3333/tcp
malcolm_elasticsearch_1   /usr/local/bin/docker-entr ...   Up (health: starting)   9200/tcp, 9300/tcp
malcolm_file-monitor_1    /usr/local/bin/supervisord ...   Up                      3310/tcp
malcolm_filebeat_1        /usr/local/bin/docker-entr ...   Up
malcolm_htadmin_1         /usr/bin/supervisord -c /s ...   Up                      80/tcp
malcolm_kibana_1          /usr/bin/supervisord -c /e ...   Up (health: starting)   28991/tcp, 5601/tcp
malcolm_logstash_1        /usr/local/bin/logstash-st ...   Up (health: starting)   5000/tcp, 5044/tcp, 9600/tcp
malcolm_moloch_1          /usr/bin/supervisord -c /e ...   Up                      8000/tcp, 8005/tcp, 8081/tcp
malcolm_nginx-proxy_1     /app/docker-entrypoint.sh  ...   Up                      0.0.0.0:28991->28991/tcp, 0.0.0.0:3030->3030/tcp,
                                                                                  0.0.0.0:443->443/tcp, 0.0.0.0:488->488/tcp,
                                                                                  0.0.0.0:5601->5601/tcp, 80/tcp,
                                                                                  0.0.0.0:8443->8443/tcp, 0.0.0.0:9200->9200/tcp,
                                                                                  0.0.0.0:9600->9600/tcp
malcolm_pcap-capture_1    /usr/local/bin/supervisor.sh     Up
malcolm_upload_1          /docker-entrypoint.sh /usr ...   Up                      127.0.0.1:8022->22/tcp, 80/tcp

Attaching to malcolm_nginx-proxy_1, malcolm_upload_1, malcolm_filebeat_1, malcolm_kibana_1, malcolm_moloch_1, malcolm_elastalert_1, malcolm_logstash_1, malcolm_curator_1, malcolm_elasticsearch_1, malcolm_htadmin_1, malcolm_pcap-capture_1, malcolm_file-monitor_1
…
It will take several minutes for all of Malcolm's components to start up. Logstash will take the longest, likely 5 to 10 minutes. You'll know Logstash is fully ready when you see Logstash spit out a bunch of startup messages, ending with this:
…
logstash_1  | [2019-06-11T15:45:41,938][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
logstash_1  | [2019-06-11T15:45:42,009][INFO ][logstash.agent    ] Pipelines running {:count=>3, :running_pipelines=>[:input, :main, :output], :non_running_pipelines=>[]}
logstash_1  | [2019-06-11T15:45:42,599][INFO ][logstash.agent    ] Successfully started Logstash API endpoint {:port=>9600}
…
You can now open a web browser and navigate to one of the Malcolm user interfaces.

Copyright
Malcolm is Copyright 2019 Battelle Energy Alliance, LLC, and is developed and released through the cooperation of the Cybersecurity and Infrastructure Security Agency of the U.S. Department of Homeland Security.
See License.txt for the terms of its release.

Contact information of author(s):
Seth Grover

Other Software
Idaho National Laboratory is a cutting-edge research facility which is constantly producing high quality research and software. Feel free to take a look at our other software and scientific offerings at:
Primary Technology Offerings Page
Supported Open Source Software
Raw Experiment Open Source Software
Unsupported Open Source Software