I’d like to integrate my log monitoring across my organization, including applications not hosted on Platform.sh.
Both Logz.io’s recommended forwarder, Filebeat, and the Splunk Universal Forwarder expect a file system they can write to when shipping logs to their indexers. Platform.sh is designed to host applications on a read-only file system, so it is necessary to use mounts to provide these services with write access.
Installation, configuration, and shipping are explained briefly below for both Logz.io and Splunk; more detailed how-tos are also available for in-depth instruction:
Forwarding to Logz.io
Define a mount in `.platform.app.yaml` that Filebeat can write to when it ships your logs:

```yaml
# .platform.app.yaml
mounts:
    '/filebeat':
        source: local
        source_path: filebeat
```
Mounts aren’t available during build hooks, so one way to install Filebeat is to create a subdirectory in the project called `config/filebeat` where we can store installation scripts and temporary builds, and then move them into the mount afterwards.
Include the following commands in your existing hooks in `.platform.app.yaml`:
```yaml
# .platform.app.yaml
hooks:
    build: |
        if [ -z "$LOGZ_CONFIG" ]; then
            ./config/filebeat/scripts/install.sh
        fi
        pipenv install --system --deploy
    deploy: |
        if [ ! "$(ls -A filebeat)" ]; then
            ./config/filebeat/scripts/config.sh
        fi
        ./filebeat/filebeat run --once
```
In the build hook, an installation script `config/filebeat/scripts/install.sh` installs Filebeat in a temporary location, so long as a project-level variable has not been defined to show that this step has already been completed.
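The guard here is a standard shell emptiness test. As a standalone sketch (reusing the same variable name purely for illustration), it behaves like this:

```shell
#!/usr/bin/env bash
# [ -z "$VAR" ] is true when VAR is unset or empty, so the build hook
# runs install.sh only until the project-level variable is created.
unset LOGZ_CONFIG
if [ -z "$LOGZ_CONFIG" ]; then
    echo "variable unset: run install.sh"      # this branch runs
fi

LOGZ_CONFIG=true
if [ -z "$LOGZ_CONFIG" ]; then
    echo "unreachable"
else
    echo "variable set: skip installation"     # this branch runs
fi
```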
`install.sh`, shown below, installs Filebeat to the directory `config/filebeat/build/` and downloads a certificate for it. The Filebeat instructions under ‘Log Forwarding’ on your Logz.io dashboard include more detail about these steps.
```bash
#!/usr/bin/env bash
# config/filebeat/scripts/install.sh

TEMP_BEAT_HOME=config/filebeat/build
[ ! -d $TEMP_BEAT_HOME ] && mkdir -p $TEMP_BEAT_HOME
cd $TEMP_BEAT_HOME

# Install Filebeat
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-linux-x86_64.tar.gz
tar xzvf filebeat-6.7.0-linux-x86_64.tar.gz
rm filebeat-6.7.0-linux-x86_64.tar.gz

# Download the certificate
wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt
mkdir -p filebeat-6.7.0-linux-x86_64/pki/tls/certs
cp COMODORSADomainValidationSecureServerCA.crt filebeat-6.7.0-linux-x86_64/pki/tls/certs/
```
In the deploy hook, if the mount point is empty, a configuration script `config/filebeat/scripts/config.sh` moves the temporary Filebeat installation to the `filebeat` mount and creates a `registry` directory that will be used to save registry files.
```bash
#!/usr/bin/env bash
# config/filebeat/scripts/config.sh

# Move Filebeat to the mount with write access
cd $PLATFORM_APP_DIR
cp -v -r config/filebeat/build/filebeat-6.7.0-linux-x86_64/* filebeat
mkdir filebeat/registry
```
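The mount-emptiness check that gates this script in the deploy hook can be seen in isolation; this sketch substitutes a temporary directory for the real mount:

```shell
#!/usr/bin/env bash
# "$(ls -A dir)" expands to the empty string for an empty directory
# (including dotfiles), so [ ! "$(ls -A dir)" ] is true only on the
# first deploy, before the installation has been copied in.
mount=$(mktemp -d)

if [ ! "$(ls -A $mount)" ]; then
    echo "mount empty: run config.sh"       # first deploy
fi

touch $mount/filebeat                       # simulate the copied installation
if [ ! "$(ls -A $mount)" ]; then
    echo "unreachable"
else
    echo "mount populated: skip config.sh"  # subsequent deploys
fi
```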
In the application directory, place the `filebeat.yml` file that the Logz.io wizard created for you in the ‘Log Forwarding’ section.
To ship all Platform.sh logs, enter `/var/log/*.log` as its input. The wizard will specify outputs for you as well.
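As a rough sketch only (not a substitute for the wizard’s generated file), a Logz.io `filebeat.yml` generally takes this shape; the token is a placeholder, and the listener host and certificate path should match what your wizard output specifies:

```yaml
# filebeat.yml -- illustrative sketch; use the file the Logz.io wizard
# generates, which includes your real account token.
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  fields:
    logzio_codec: plain
    token: <your-logzio-account-token>
  fields_under_root: true

output.logstash:
  hosts: ["listener.logz.io:5015"]
  ssl:
    certificate_authorities: ['filebeat/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']
```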
At this point, if you push to the `platform` remote, Filebeat will install, configure itself, and ship Platform.sh logs with the last line of the deploy hook: `./filebeat/filebeat run --once`.
If you keep the build hook as written, a new installation will occur with each deployment unless you set a project-level variable that designates that Filebeat is already configured:
```bash
$ platform variable:create --level project --name LOGZ_CONFIG --value 'true'
```
Forwarding to Splunk

Define a pair of mounts in `.platform.app.yaml` that the Forwarder can write to when it ships your logs. It will use `splunk` to write modified log files before they ship, and `.splunk` to write authorization files so that it can remember configurations and credentials after it connects to your indexer.
```yaml
# .platform.app.yaml
mounts:
    '/splunk':
        source: local
        source_path: splunk
    '/.splunk':
        source: local
        source_path: splauths
```
As above, since mounts aren’t available during the build hook, we can install the Universal Forwarder through a subdirectory called `config/splunk` where we store installation scripts and temporary builds.
```yaml
# .platform.app.yaml
hooks:
    build: |
        if [ -z "$SPLUNK_CONFIG" ]; then
            ./config/splunk/scripts/install.sh
        fi
        pipenv install --system --deploy
    deploy: |
        if [ ! "$(ls -A splunk)" ]; then
            ./config/splunk/scripts/config.sh
        fi
        ./splunk/splunkforwarder/bin/splunk restart
```
In the build hook, the Forwarder will be installed in the temporary directory `config/splunk/build`, so long as a project-level variable that denotes a completed configuration has not been set.
```bash
#!/usr/bin/env bash
# config/splunk/scripts/install.sh

TEMP_SPLUNK_HOME=config/splunk/build

# Install Splunk Universal Forwarder.
# Replace <version> with the Forwarder version you are installing.
[ ! -d $TEMP_SPLUNK_HOME ] && mkdir -p $TEMP_SPLUNK_HOME
cd $TEMP_SPLUNK_HOME
SPLUNK_TARBALL=splunkforwarder-<version>-962d9a8e1586-Linux-x86_64.tgz
wget -O $SPLUNK_TARBALL "https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=<version>&product=universalforwarder&filename=$SPLUNK_TARBALL&wget=true"
tar xvzf $SPLUNK_TARBALL
rm $SPLUNK_TARBALL
```
In the deploy hook, if the mount is empty, a configuration script `config/splunk/scripts/config.sh` copies the temporary build to writable storage, seeds the Forwarder’s credentials, starts it for the first time, and then copies in its output and input configurations:

```bash
#!/usr/bin/env bash
# config/splunk/scripts/config.sh

cd $PLATFORM_APP_DIR
TEMP_SPLUNK_HOME=config/splunk/build/*
SPLUNK_HOME=$PLATFORM_APP_DIR/splunk/splunkforwarder

# Copy temp build to writable storage
cp -v -r $TEMP_SPLUNK_HOME splunk

# Migrate user-seed.conf to the forwarder
cp -v config/splunk/seeds/user.conf $SPLUNK_HOME/etc/system/local/user-seed.conf

# Start Splunk for the first time, accepting the license
./splunk/splunkforwarder/bin/splunk start --accept-license

# Update outputs.conf with the receiver address seed
cp -v config/splunk/seeds/outputs.conf $SPLUNK_HOME/etc/system/local/outputs.conf

# Update inputs.conf with the monitor inputs seed
cp -v config/splunk/seeds/inputs.conf $SPLUNK_HOME/etc/system/local/inputs.conf
```
Splunk has a dedicated CLI that allows you to configure inputs and outputs, but Splunk recommends creating a set of configuration seed files instead.
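For comparison only, the CLI equivalents of the seed files below would look roughly like this; the receiver address and credentials are placeholders, and these commands are not used in this setup:

```bash
# Hypothetical CLI alternative to the seed files (shown for reference):
./splunk/splunkforwarder/bin/splunk add forward-server <receiver ip>:9997 -auth admin:testpass
./splunk/splunkforwarder/bin/splunk add monitor /var/log
```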
`user.conf`: The Forwarder comes with a default `admin` user with the password `changeme`. You will need to create this file to update the password, or else remote access will not be available. Once it has been moved, you can start the Forwarder for the first time and modify the remaining configurations.
```ini
# config/splunk/seeds/user.conf
[user_info]
USERNAME = admin
PASSWORD = testpass
```
`outputs.conf`: This file configures information about the Splunk receiver/indexer. Replace `<receiver ip>` with the IP address of the indexer. If you have changed the indexer’s default listening port from `9997`, you will need to change that as well.
```ini
# config/splunk/seeds/outputs.conf
[tcpout]
defaultGroup=default

[tcpout:default]
server=<receiver ip>:9997

[tcpout-server://<receiver ip>:9997]
```
`inputs.conf`: Finally, configure the Forwarder to monitor all files in `/var/log`, the location of Platform.sh log files.
```ini
# config/splunk/seeds/inputs.conf
[monitor:///var/log/]
disabled = false
```
Each of these seed files can be placed in `config/splunk/seeds` in your project directory, and each is moved to the Forwarder’s final installation location during the deploy hook.
Like the Logz.io instructions above, these configuration settings will install and configure the Splunk Universal Forwarder to ship Platform.sh logs on every deployment with the last line of the deploy hook: `./splunk/splunkforwarder/bin/splunk restart`.
If you keep the build hook as written, a new installation will occur with each deployment unless you set a project-level variable that designates that Splunk is already configured:
```bash
$ platform variable:create --level project --name SPLUNK_CONFIG --value 'true'
```