
Gain Control over Log Events with Graylog and Ansible

Sar Malik
September 15th, 2020 · 7 min read

In this guide, we’ll explore how you can leverage the open source tools Graylog and Ansible to gain control over what’s happening in your IT infrastructure with remote logging, analytics, and monitoring.

Our research has identified that nearly 92% of small and mid-market businesses don’t have an existing log pipeline for real-time event monitoring.

Centralized log analytics platforms can serve as the key source of truth for effective IT operational and cyber incident response.

Follow along as we deploy a container solution and configure endpoints to push logs remotely.

Here’s what we’ll cover using Docker as the hosting platform of choice:

  1. Creating a MongoDb container for Log Storage using Docker Compose
  2. ElasticSearch for Accelerated Log Analytics
  3. Deploying Graylog Web Server
  4. Configuring Graylog to Accept Syslog Input using Web Interface
  5. Setting up Clients for Remote Logging with Ansible

Follow along by creating the following tree structure with a boilerplate docker-compose.yaml.

├── client
├── server
│   └── docker-compose.yaml

If you simply want the code, just clone the Github repo @DigitalTransformation/docker-graylog-ansible, set your environment variables, and deploy in minutes.

Requirements for Deployment

The foundation of this guide uses on-premise Docker infrastructure. Log servers are critical assets during incident response, and ensuring a compliant deployment is a prerequisite for getting started.

As a general rule, you should already be familiar with Linux, containerization, and networking technologies, but we’ve explained the reasoning behind each configuration in depth for developers of all levels.

Docker Environment

Docker Docs Docker is a containerization platform that uses atomic images built from Dockerfile recipes for preconfigured stacks.

IT Infrastructure Planning In the event of a cyber incident, logs quickly become the organization’s most valuable risk-management asset, and having the right IT infrastructure is imperative.

While the Docker runtime itself can be virtualized on shared infrastructure, a dedicated always-on Unix host with power and network redundancy is strongly recommended.

Similarly, if you’re running Podman as a Docker-compatible runtime, some of the scripts mentioned may need to be adapted, as we make heavy use of docker-compose.

Security Considerations

To ensure compliance with PMO IT, we’ll reference the open source docker/docker-bench-security tool.

The host node should be configured with audit.rules and validated as meeting key security specifications.

Securing the log server is just as critical as client endpoints.

Configuring Clients with Ansible Playbooks

Ansible Docs Ansible is a versatile framework for automating operational IT routines, known as Ansible Playbooks.

We’ll configure unix clients to pipeline logs into our log analytics server using a standardized approach.

If this is your first time using Ansible, check out the script deployable over SSH to configure Fedora/CentOS/RHEL compatible hosts and clients.

1. Creating a MongoDb container for Log Storage using Docker Compose

In the docker-compose.yaml we’ll use the following notation in the header to describe the file, useful when inspecting contents with the $ head command.

# --------------------------------
# Infra::Docker::Graylog+MongoDB+ElasticSearch
# Docs: << docs link >>
# Github: << github repo >>
# --------------------------------

Graylog has out-of-the-box support for different database servers, but we’ve selected MongoDb due to its recognized high availability and scalability in handling semi-structured log data.

Mongo is provided as an official image built and tested from source on Docker Hub with a CI/CD pipeline. However, only the last major release of Mongo is supported by Graylog due to the database scaffolding scripts used.


docker-compose.yaml | Github Source

version: '3.3'

services:
  # MongoDb Container
  mongodb:
    image: mongo:3
    container_name: qone.graylog.mongodb
    restart: always
    volumes:
      - ./mongodb:/data/db
      - ./mongodb/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    env_file:
      - mongo.env
    networks:
      - d1_graylog
    ports:
      - 9052:27017
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G

Let’s break down each component of the mongodb services section in more detail to understand the reasoning behind these properties.

restart The restart policy must be configured as a precaution for failure management, such as I/O errors thrown from a remounting disk.

Note that simply restarting the container itself is not sufficient and should be paired with other approaches like database and volume cloning (i.e. RAID1) for redundancy.


For the purposes of this guide, we’ll use an isolated filesystem mounted in /etc/fstab for MongoDb data; it can be any arbitrary subdirectory on the host.

Run the following shell command to prevent accidental deletion of the path: $ sudo chattr +i ./mongodb.

Note: Docker security specs recommend isolating the runtime from both data storage and the operating system. To define another storage medium, change the rules in /etc/docker/daemon.json as mentioned in the Docker docs; otherwise proceed with caution.

env_file: mongo.env | Github Source

To manage securables in docker-compose, place the credentials in a separate mongo.env file. This can be excluded globally with **/*.env in .gitignore to prevent leakage of secrets in code repositories.

The env file will contain the root credentials for the MongoDb cluster, used by external connections outside of Graylog. Replace the following <<placeholders>> accordingly.
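As a sketch, a minimal mongo.env using the official mongo image’s initialization variables might look like this (values are placeholders for your own secrets):

```
# mongo.env :: root credentials consumed by the official mongo image
MONGO_INITDB_ROOT_USERNAME=<<root_user>>
MONGO_INITDB_ROOT_PASSWORD=<<root_password>>
```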

mongo-init.js | Github Source

Since Mongo does not include a default database constructor, we’ll need to clone mongo-init.js to generate a custom database for Graylog, placing it in the root directory the docker-compose.yaml is executed from on the host.

Let’s take a closer look at mongo-init.js which requires setting custom credentials for the new graylog database.

db.createUser(
  {
    user: "<<graylog_user>>",
    pwd: "<<password>>",
    roles: [
      {
        role: "readWrite",
        db: "graylog" // <<graylog_database>>
      }
    ]
  }
);

Take careful note of these values as they’ll be used to configure the graylog.conf connection to the database later in the guide.
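For reference, those credentials typically land in graylog.conf’s mongodb_uri parameter; a sketch, assuming the service name mongodb resolves over the shared vnet:

```
mongodb_uri = mongodb://<<graylog_user>>:<<password>>@mongodb:27017/graylog
```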


While Docker creates a default virtual network on the host, we recommend setting up an isolated vnet with a meaningful name that can be inspected with the $ docker network ls command.

The same network must be declared on each service and also globally, outside the services section in the yaml file. Refer to the Docker Docs: Compose File v3 Reference - Network Driver on selecting the driver type, overlay (swarm) or bridge (default), suitable for your host environment.

services:
  mongodb:
    ...
    networks:
      - d1_graylog
  elasticsearch:
    ...
    networks:
      - d1_graylog
  graylog:
    ...
    networks:
      - d1_graylog

networks:
  d1_graylog:
    driver: bridge


The default MongoDb port is 27017, which can be referenced under expose or mapped with host forwarding {{HOST_PORT}}:{container_port}.

We’ll expose the database port as it allows us to query Mongo directly from services such as a Grafana dashboard, or for machine learning use cases on scalable datasets.

Note that the Mongo port 27017 takes precedence in graylog.conf because the connection happens over the internal Docker vnet.

  mongodb:
    ...
    ports:
      - {{HOST_PORT}}:27017

Be careful exposing this port on public servers that aren’t behind a firewall as vulnerabilities can lead to exposing potentially sensitive data.

Clients will continue to push logs into Graylog server as long as they’re on the same internal network or connected over corporate VPN.


Next, to better comply with Docker Audit rules we provision soft resource limits on each of the container services. Generally the following limits are sufficient to intake logs from up to 50 clients in real time.

Resource limits prevent exhaustion of system resources, such as during a Distributed Denial of Service (DDoS) attack, from crashing the container host.

    resources:
      limits:
        cpus: '0.5'
        memory: '1G'

The same convention with modified values can be used for graylog and elasticsearch container services.

2. ElasticSearch for Accelerated Log Analytics

ElasticSearch provides hot cache acceleration of search on large datasets such as unstructured logs and we’ll deploy it alongside Graylog to improve our analytics workflow.


Here’s how you can define the container in your docker-compose.yaml.

# ---------------------------
# Elasticsearch::Cache
# Docs:
# ---------------------------
  elasticsearch:
    image:
    container_name: graylog.escache
    restart: always
    volumes:
      - ./escache:/usr/share/elasticsearch/data
    networks:
      - d1_graylog
    environment:
      -
      - http.port=9200
      - transport.tcp.port=9300
      -
      -
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G

If you’re already familiar with Elasticsearch, feel free to skip ahead; otherwise, we’ll break down each component of the deployment.


ElasticSearch requires a local on-disk data directory, which can be placed on the same relative volume as MongoDB.

For instances with many clients, use a dedicated flash storage medium and link the path with $ ln -s <<source>> ./escache.

      - ./escache:/usr/share/elasticsearch/data
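For example, the dedicated-storage link can be created ahead of the deployment; the mount point below is hypothetical, so substitute your real device path:

```shell
# Hypothetical fast-storage mount point; substitute your real path.
mkdir -p /tmp/nvme_escache

# -n replaces an existing symlink target instead of nesting inside it.
ln -sfn /tmp/nvme_escache ./escache
ls -ld ./escache
```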


To learn more about memlock refer to the SLES Docs - Memlock which provides an in-depth breakdown.


Since there are no securables in Elasticsearch, the network is internal only. These are default values already set up in the generated graylog.conf and don’t need to be changed; more on that below.


Recall from the MongoDB breakdown, we’re using the same network for this service.

3. Deploying Graylog Web Server using Docker Compose

The graylog image can be found in Docker Hub with extensible configuration. As of this guide, there are three release cadence branches but it is strongly recommended to use the latest production variant.


Graylog does require setting properties manually in a number of configuration files. Here’s the source folder on Github with the required scripts.

# ---------------------------
# Graylog::Production
# Docs:
# ---------------------------
  graylog:
    image: graylog/graylog:3.3
    container_name: graylog.server
    restart: always
    volumes:
      - ./gldata/config:/usr/share/graylog/data/config
    networks:
      - d1_graylog
      - default
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Host:Container
      # Graylog Web Interface and REST API
      - 9050:9050
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 9051:12201
      # GELF UDP
      - 9051:12201/udp
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G


Following the convention we’ve been using in this guide, set a developer friendly container_name.


Docker audit recommends that a detailed restart policy be limited with max_retries.

In case of a database container failure, an asynchronous approach that awaits dependency restart can be padded with a buffer delay, such as 15000ms.

However, for always-on log collection servers, an exception can be made to use always for all services.
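In swarm mode, compose v3 expresses such a bounded policy under deploy.restart_policy; a sketch with illustrative values:

```yaml
  graylog:
    deploy:
      restart_policy:
        condition: on-failure
        delay: 15s
        max_attempts: 5
```

Note that restart_policy is honored by swarm deployments, while the plain restart key applies to single-host docker-compose runs.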


Clone the file to generate a runtime configuration for graylog. It includes default values where you can populate the MongoDb connectionString securables.

Generated with $ ./, place the edited graylog.conf in the volume path ./gldata/config where the container is going to be deployed from.


Here’s the sections to edit in the generated graylog.conf, along with detailed explanations below:

password_secret = << 96_char_token >>
root_username = << admin_user >>
root_password_sha2 = << shasum >>
root_email = << >>

# EST: New York / Toronto
root_timezone = America/Atikokan

Graylog uses a 96-character hash for securable key rotation, which can be generated with a random token generator and stored in password_secret.
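A 96-character token can be produced with standard shell tools; a minimal sketch (the Graylog docs also suggest pwgen for this purpose):

```shell
# Draw 96 alphanumeric characters from the kernel CSPRNG for password_secret.
password_secret="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 96)"
echo "$password_secret"
```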

Set the credentials for the root user that’ll be used to log in to the web interface. As graylog.conf is unencrypted, generate a SHA256 checksum of your input password and store the hash in root_password_sha2.
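To produce that digest, a minimal shell sketch (the password shown is a placeholder; run this locally with your real password):

```shell
# Hash the plaintext password; the hex digest goes into root_password_sha2.
# "example-password" is a placeholder value.
root_password_sha2="$(printf '%s' 'example-password' | sha256sum | cut -d' ' -f1)"
echo "$root_password_sha2"
```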

The timezone should be localized to the clients feeding inputs and will be rendered in log dashboards. If you’re managing geo-distributed systems, use the local time for the PMO office.


To allow the MongoDb and Elasticsearch container services to complete entrypoint initialization first, we’ll declare them in depends_on so docker-compose starts them before Graylog.

Otherwise, a fault may occur while Graylog attempts to connect to an uninitialized container.


Modified values have been applied for streaming data and input workers. Refer to the explanation in the MongoDB section above.


Finally, if you’ve followed along you should end up with a completed docker-compose.yaml covering the sections mentioned (services, networks, volumes and compliance) that looks something like this.

MongoDb + ElasticSearch + Graylog Container Stack | Github Source
# --------------------------------
# Infra::Docker::Graylog+MongoDB+ElasticSearch
# Docs:
# Github:
# --------------------------------

version: '3.3'

services:
  # ---------------------------
  # MongoDB::Data
  # Docs:
  # ---------------------------
  mongodb:
    image: mongo:3
    container_name: graylog.mongodb
    restart: always
    volumes:
      - ./mongodb:/data/db
      - ./mongodb/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    env_file:
      - mongo.env
    networks:
      - d1_graylog
    ports:
      - 9052:27017
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G

  # ---------------------------
  # Elasticsearch::Cache
  # Docs:
  # ---------------------------
  elasticsearch:
    image:
    container_name: graylog.escache
    restart: always
    volumes:
      - ./escache:/usr/share/elasticsearch/data
    networks:
      - d1_graylog
    environment:
      -
      - http.port=9200
      - transport.tcp.port=9300
      -
      -
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G

  # ---------------------------
  # Graylog::Production
  # Docs:
  # ---------------------------
  graylog:
    image: graylog/graylog:3.3
    container_name: graylog.server
    restart: always
    volumes:
      - ./gldata/config:/usr/share/graylog/data/config
    networks:
      - d1_graylog
      - default
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Host:Container
      # Graylog Web Interface and REST API
      - 9050:9050
      # Syslog TCP
      - 514:514
      # Syslog UDP
      - 514:514/udp
      # GELF TCP
      - 9051:12201
      # GELF UDP
      - 9051:12201/udp
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G

networks:
  d1_graylog:
    driver: bridge
├── server
│   ├──
│   ├── mongo.env
│   ├── mongo-init.js
│   ├── docker-compose.yaml
│   ├──

With the server configuration completed, go ahead and deploy the infrastructure to your Docker host using the docker-compose command.

$ docker-compose -f docker-compose.yaml up

Once the images are pulled and deployed, you should end up with a running Graylog, MongoDb, and ElasticSearch stack.

We strongly recommend using a failover cluster with a reverse proxy or load balancing for scalable event handling in large-volume client situations.

The deployment can be adapted for Docker Swarm multi-node clusters by updating the spec to version: '3.7' and referencing the compose file specification.

4. Configuring Graylog to Accept Syslog Input using Web Interface

To start accepting new data inputs, launch the Graylog web interface running at https://node_ip:graylog_port/ and authenticate using the credentials previously defined in the configuration.

Create a new input worker to accept 514/tcp and 514/udp by following the guide provided by Graylog.
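As a rough guide, a Syslog UDP input created under System → Inputs uses values along these lines; the defaults shown are assumptions, so adjust them to your environment:

```
bind_address: 0.0.0.0
port: 514
recv_buffer_size: 262144
```

Binding to 0.0.0.0 listens on all container interfaces, which the port mapping in docker-compose.yaml then exposes to clients.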

5. Setting up Clients for Remote Logging with Ansible

This last part configures Unix clients to push system log events into the Graylog server on 514/tcp | 514/udp ports.

One such Linux runtime is rsyslog, a common systemd service for event handling that can emit to virtually any output when a condition is met.

We’ll configure it on clients using Ansible by creating an operational routine known as a playbook. The example below, defined in YAML, installs the rsyslog packages, copies the config file, and launches it as a startup service.

Replace the placeholder <<USER>> with the username on the client machine. If you’re configuring more than one client, check out the docs on Ansible inventories.

# --------------------------
# Ansible Playbook
# Configure unix daemon services for rsyslog
# Creates pipeline into Graylog from client endpoints
# systemd: rsyslog.unit
# --------------------------

- hosts: all
  remote_user: <<USER>>
  become: yes

  tasks:
    - name: Configure services for rsyslog
      block:
        - name: Package dependencies
          yum:
            state: present
            name:
              - rsyslog

        - name: Set Default Config Params
          copy:
            src: rsyslog.conf
            dest: /etc/rsyslog.conf
            force: true

        - name: Configure System Service
          systemd:
            name: rsyslog
            enabled: yes
            state: started
# --------------------------
rsyslog.conf | Github Source

Change the daemon settings in rsyslog.conf to match the Graylog server as the target. For the purposes of this guide, we’ve set up rsyslog to push all logs with the *.* catch-all rule.

# OnMessage: Forward Default Message
*.* action(
  type="omfwd"
  target="<<IP_ADDRESS>>"
  port="514"
  protocol="udp"
  template="RSYSLOG_SyslogProtocol23Format"
)

If you’re satisfied with the configuration, deploy the playbook on clients using the bash command:

$ ansible-playbook -i target.env \
    -k --ask-become-pass \
    ./configure_rsyslog.yaml
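The target.env inventory passed with -i can be a simple INI-style host list; a hypothetical sketch with documentation-range addresses:

```
# target.env :: Ansible inventory of client endpoints
[clients]
192.0.2.10
192.0.2.11

[clients:vars]
ansible_user=<<USER>>
```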

Once it’s done running the tasks, you should start to see data streaming into Graylog within seconds of log events happening on clients.

Here’s the completed tree structure for both client and server deployments that’s reusable across your IT infrastructure portfolio.

├── client
│   ├── configure_rsyslog.yaml
│   ├── target.env
├── server
│   ├──
│   ├── mongo.env
│   ├── mongo-init.js
│   ├── docker-compose.yaml
│   ├──

For the full code sample, check out the Github repository.


Graylog provides a powerful platform for log analytics, setting up monitoring alerts, and visualizing data in dashboards.

Now that you’re collecting data, the quest begins to analyze it! With streaming data pipelines comes great complexity, some of our clients can generate millions of log events an hour.

Get in touch to learn how we can deliver tailored insights to optimize your IT budget spend and mitigate cyber risk.

In a follow-up post, we’ll explore using Rundeck and Ansible Playbooks for automated threat incident response. Subscribe to stay notified.



© Quant ONE Inc. All Rights Reserved