Splunk integration with Docker

I’ve changed my log aggregation system over from the Elastic Stack to Splunk Free over the past few days.  The primary driver for this is that I use Splunk at work, and since Splunk Free allows 500MB/day of ingestion, that’s plenty for all my home stuff.  Using Splunk at home also means I gain valuable experience that I can apply to using Splunk professionally.

What we’ll be talking about here is how you integrate your Docker logging into Splunk.

Configure an HTTP Event Collector

Firstly, you’ll need to enable the Splunk HTTP Event Collector.  In the Splunk UI, click Settings -> Data Inputs -> HTTP Event Collector -> Global Settings.

Click Enabled alongside ‘All Tokens’, and enable SSL.  This will enable the HTTP Event Collector on port 8088 (the default), using the Splunk default certificate.  This isn’t enormously secure (you should use your own cert), but this’ll do for now.

Now, in the HTTP Event Collector window, click New Token and add a token.  Give it whatever details you like, and set the source type to json_no_timestamp.  I’d suggest you send the results to a new index, for now.

Continue the wizard, and you’ll get an access token.  Keep that, you’ll need it.
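
If you want to sanity-check the token before touching Docker, you can post a test event straight to the collector with curl (substitute your own host and token; -k is only needed while you’re on the default certificate);

curl -k https://PUTYOURSPLUNKHOSTHERE:8088/services/collector/event \
  -H "Authorization: Splunk PUTYOURTOKENHERE" \
  -d '{"event": "hello from the HTTP Event Collector"}'

A {"text":"Success","code":0} response means the collector and the token are good.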

Configure Docker Default Log Driver

You now need to configure the default logging method used by Docker.  NOTE – Doing this will break the docker logs command, but you can find everything in Splunk anyway.  More on that soon.

You will need to override the startup command for dockerd to include some additional options.  You can do this on CentOS7 by creating a /etc/systemd/system/docker.service.d/docker-settings.conf with the following contents;

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --log-driver=splunk --log-opt splunk-token=PUTYOURTOKENHERE --log-opt splunk-url=https://PUTYOURSPLUNKHOSTHERE:8088 --log-opt tag={{.ImageName}}/{{.Name}}/{{.ID}} --log-opt splunk-insecureskipverify=1

The options should be fairly evident.  The tag= option configures the tag that is attached to the JSON objects output by Docker, so it contains the image name, container name, and unique ID of the container.  By default it’s just the unique ID, which frankly isn’t very useful post-mortem.  The last option skips verification of the Splunk SSL certificate, which is needed while you’re running on the default self-signed certificate.  Get rid of it when you use a proper certificate.

Getting the driver in place

Now you’ve done that, you should be able to restart the Docker host, then reprovision all the containers to change their logging options.  In my case, this is a simple docker-compose down followed by docker-compose up, after a reboot.
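
If you’d rather not reboot the whole box, reloading systemd and restarting the Docker daemon should pick the drop-in up too.  A sketch, assuming a docker-compose based setup like mine;

systemctl daemon-reload      # read the new drop-in
systemctl restart docker     # restart dockerd with the Splunk log driver
docker-compose down          # existing containers keep their old log driver,
docker-compose up -d         # so recreate them to pick up the new default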

The docker logs command will be broken now, but you can instead use Splunk to replicate the functionality, like this;

index=docker host=dockerhost | spath tag | search tag="*mycontainer*" | table _time,line

That will drop out the logs over your selected time range (the last 60 minutes, in my case) for the container mycontainer running on the host dockerhost.

You can then start doing wizardry like this;

index=docker | spath tag | search tag="nginx*" 
| rex field=line "^(?<remote_addr>\S+) - (?<remote_user>\S+) \[(?<time_local>.+)\] \"(?<request>.+)\" (?<status>\d+) (?<body_bytes>\d+) \"(?<http_referer>.+)\" \"(?<http_user_agent>.+)\" \"(?<http_x_forwarded_for>.+)\"$"
| rex field=request "^(?<request_method>\S+) (?<request_url>\S+) (?<request_protocol>\S+)$"
| table _time,tag,remote_addr,request_url

This dynamically parses the NGINX container logs output by Docker, splits up the fields, and then lists them by time, tag, remote IP, and requested URL.

I’m sure there’s better ways of doing this (such as parsing the logs at index time instead of at search time), but this way works pretty well and should function as a decent starting point.

Customizing OwnCloud using Docker

I’m messing around with OwnCloud at the moment, a solution that provides cloud-like access to files and folders through a webapp, using your own local storage.  As is my wont, I’m doing it in Docker.

There’s a minor catch though – the official OwnCloud Docker image does not include smbclient, which is required to provide access to Samba shares.

Here’s how to take care of that.

FROM owncloud:latest
RUN set -x; \
 apt-get update \
 && apt-get install -y smbclient \
 && rm -rf /var/lib/apt/lists/* \
 && rm -rf /var/cache/apt/archives/*

The above Dockerfile will use the current owncloud:latest image from Docker Hub, and then install smbclient into it.  You want to do the update, install and cleanup in one step so it gets saved as only one layer in the Docker filesystem, saving space.
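
If you want to check the image on its own before wiring it into Compose, a quick build and smoke test along these lines should do (this assumes the Dockerfile lives in an owncloud/ subdirectory, matching the compose file below; the owncloud-smb tag is just my choice);

docker build -t owncloud-smb ./owncloud
docker run --rm --entrypoint smbclient owncloud-smb -V   # should print the Samba version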

You can then put that together with the official MySQL Docker Image and a few volumes to have a fully working OwnCloud setup with docker-compose.

version: '2'

services:
  mysql:
    image: mysql:latest
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=passwordgoeshere
    volumes:
      - ./data/mysql:/var/lib/mysql:rw,Z

  owncloud:
    hostname: owncloud.localdomain
    build: owncloud/
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=passwordgoeshere
    ports:
      - 8300:80
    volumes:
      - ./data/data:/var/www/html/data:rw,Z
      - ./data/config:/var/www/html/config:rw,Z
      - ./data/apps:/var/www/html/apps:rw,Z
    depends_on:
      - mysql

Create the directories that are mounted there, set the password to something sensible, and docker-compose up!
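
For the layout above, that amounts to something like this before the first run (with the Dockerfile from earlier sitting in the owncloud/ directory referenced by build:);

mkdir -p data/mysql data/data data/config data/apps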

One thing though.  OwnCloud doesn’t have any built-in account lockout policy, so I wouldn’t go putting this as it is on the ‘Net just yet.  You’ll want something in front of it for security, like nginx.  You’ll also want HTTPS if you’re doing that.

More on that later.

ELK Stack in Docker with NGINX

I’ve done a bit of work in the past few days modifying a Docker ELK GitHub repository I came across, to make it more suited to my needs.

You can find my efforts at my GitHub repository.  This setup, when brought up with docker-compose up, will put together a full ELK stack composed of the latest versions of Elasticsearch, Logstash, and Kibana, all fronted by NGINX with a login required.

The setup persistently stores all Elasticsearch data into the ./esdata directory, and accepts syslog input on port 42185 along with JSON input on port 5000.

In order to access Elasticsearch, use the Sense plugin in Kibana.  You can get at Kibana on port 5601, with a default login of admin/admin.  You can change that by using htpasswd and creating a new user file at ./nginx/htpasswd.users .

A couple of things about Docker in this setup.  When you link containers, it’s not necessary to expose ports between the containers.  Exposing is only required to make a port accessible from outside Docker.  When containers are linked, they get access to all ports on the linked container.

This means that it’s not required to specifically expose all the internal ports of the stack – you only have to expose the entry/exit points you want on the stack as a unit.  In this case, that’s the entry ports to Logstash and the entry point in nginx.

Also, if you use a version 2 docker-compose specification, Docker Compose will also create an isolated network bridge just for your application, which is great here.  It will also manage dependencies appropriately to make sure the stack comes up in the right order.
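
As a rough illustration (the images, service names, and internal paths here are placeholders rather than exactly what my repository uses), a version 2 compose file shaped like this only publishes the stack’s entry points;

version: '2'

services:
  elasticsearch:
    image: elasticsearch:latest
    volumes:
      - ./esdata:/usr/share/elasticsearch/data   # persistent index data

  logstash:
    image: logstash:latest
    depends_on:
      - elasticsearch
    ports:
      - 42185:42185   # syslog input - an entry point, so published
      - 5000:5000     # JSON input - also published

  kibana:
    image: kibana:latest
    depends_on:
      - elasticsearch   # no ports - only nginx needs to reach it

  nginx:
    image: nginx:latest
    depends_on:
      - kibana
    ports:
      - 5601:5601     # the only web entry point, proxied to Kibana

Everything else talks over the private network that Compose creates for the project.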

Oh yeah.  If you bring up the stack with docker-compose up, press Ctrl+\ to break out of it without taking the stack down.

Magic!

NFS Persistent Volumes with OpenShift

Official documentation here.  Following is a (very!) brief summary of how to get your Registry in OpenShift working with an NFS backend.  I haven’t yet been able to get it to deploy cleanly straight from the Ansible installer with NFS, but it is pretty straightforward to change it after the initial deployment.

NOTE – A lot of this can probably be done in much, much better ways.  This is just how I managed to do it by bumbling around until I got it working.

Creating the NFS Export

First up, you’ll need to provision an NFS export on your NFS server, using the following options;

/srv/registry -rw,async,root_squash,no_wdelay,mp @openshift

Where ‘@openshift’ is the name of a netgroup in /etc/netgroup covering all your OpenShift hosts.  I’m also assuming that it’s a mountpoint, hence ‘mp’.

We then set that directory to be owned by root, with a GID of 5555 (as an example) and 0770 permissions.
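
On the NFS server, that amounts to something like this (using the example GID of 5555);

chown root:5555 /srv/registry
chmod 0770 /srv/registry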

Creating the Persistent Volume

First, we need to add that as a persistent volume to OpenShift.  I’ll assume it’s 50GB in size, and that you want the data retained if the claim is released.  Create the following file and save it as nfs-pv.yml somewhere you can get at it with the oc command.

---
 apiVersion: v1
 kind: PersistentVolume
 metadata:
   name: registry-volume
 spec:
   capacity:
     storage: 50Gi
   accessModes:
     - ReadWriteMany
   nfs:
     path: /srv/registry
     server: nfs.localdomain
   persistentVolumeReclaimPolicy: Retain
...

Right.  Now we change into the default project (where the Registry is located), and add that as a PV;

oc project default
oc create -f nfs-pv.yml
oc get pv

The last command should now show the new PV that you created.  Great.

Creating the Persistent Volume Claim

Now you have the PV, but it’s unclaimed by a project.  Let’s fix that.  Create a new file, nfs-claim.yml where you can get at it.

---
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: registry-storage
 spec:
   accessModes:
     - ReadWriteMany
   resources:
     requests:
       storage: 50Gi
...

Now we can add that claim;

oc project default
oc create -f nfs-claim.yml
oc get pvc

The last command should now show the new PVC that you created.

Changing the Supplemental Group of the Deployment

Right.  Remember we assigned a GID of 5555 to the NFS export?  Now we need to assign that to the Registry deployment.

Unfortunately, I don’t know how to do this with the CLI yet.  So hit the GUI, find the docker-registry deployment, and click Edit YAML under Actions.

In there, scroll down and look for the securityContext tag.  You’ll want to change this as follows;

securityContext:
  supplementalGroups:
  - 5555

This sets the pods deployed with that deployment to have a supplemental group ID of 5555 attached to them.  Now they should get access to the NFS export when we attach it.
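
For what it’s worth, an oc patch along these lines should achieve the same edit from the CLI, though I haven’t tried it myself;

oc patch dc/docker-registry \
  -p '{"spec":{"template":{"spec":{"securityContext":{"supplementalGroups":[5555]}}}}}'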

Attaching the NFS Storage to the Deployment

Again, I don’t know how to do this in the CLI, sorry.  Click Actions, then Attach Storage, and attach the claim you made.

Once that has finished deploying, you’ll find you have the claim bound to the deployment, but it’s not being used anywhere.  Click Actions, Edit YAML again, and then find the volumes section.  Edit that to;

volumes:
  -
    name: registry-storage
    persistentVolumeClaim:
      claimName: registry-storage

Phew.  Save it, wait for the deployment to be done.  Nearly there!
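
For the record, I believe a single oc volume command can do the attach and the wiring in one go, though I went the GUI route here, so treat this as a sketch;

oc volume dc/docker-registry --add --overwrite --name=registry-storage \
  -t persistentVolumeClaim --claim-name=registry-storage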

Testing it out

Now, if you go into Pods, select the pod that’s currently deployed for the Registry, you should be able to click Terminal, and then view the mounts.  You should see your NFS export there, and you should be able to touch files in there and see them on the NFS server.
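
Inside the pod’s terminal, something like this should confirm it (the registry storage is normally mounted at /registry);

mount | grep registry         # should show the NFS export
touch /registry/testfile      # should appear on the NFS server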

Good luck!

Deploying a Quickstart Template to OpenShift (CLI)

Repeating what we did in my previous post, we’ll deploy the Django example project to OpenShift using the CLI to do it.  This is probably more attractive to many sysadmins.

You can install the client by downloading it from the OpenShift Origin GitHub repository.  Clients are available for Windows, Linux, and so on.  I used the Windows client, and I’m running it under Cygwin.

First, log into your OpenShift setup;

oc login --insecure-skip-tls-verify=true https://os-master1.localdomain:8443

We disable TLS verify since our test OpenShift setup doesn’t have proper SSL certificates yet.  Enter the credentials you use to get into OpenShift.

Next up, we’ll create a new project, change into that project, then deploy the test Django example application into it.  Finally, we’ll tail the build logs so we can see how it goes.

oc new-project test-project --display-name="Test Project" --description="Deployed from Command Line"
oc project test-project
oc new-app --template=django-example
oc logs -f bc/django-example

After that finishes, we can review the status of the deployment with oc status;

$ oc status
In project Test Project (test-project) on server https://os-master1.localdomain:8443

http://django-example-test-project.openshift.localdomain (svc/django-example)
 dc/django-example deploys istag/django-example:latest <-
 bc/django-example builds https://github.com/openshift/django-ex.git with openshift/python:3.4
 deployment #1 deployed about a minute ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Ok, looks great.  You can now connect to the URL above and you should see the Django application splash page.

Now that worked, we’ll change back into the default project, display all the projects we have, and then delete that test project;

$ oc project default
Now using project "default" on server "https://os-master1.localdomain:8443".

$ oc get projects
NAME           DISPLAY NAME   STATUS
default                       Active
test-project   Test Project   Active

$ oc delete project test-project
project "test-project" deleted

$

Fantastic.  Setup works!

Deploying a Quickstart Template to OpenShift (GUI)

In my last post, I talked about how to set up a quick-and-dirty OpenShift environment on Atomic.  Here, we’ll talk about firing up a test application, just to verify that everything works.

First, log into your OpenShift console, which you can find at (replace hostname);

https://os-master1.localdomain:8443/console

Once in, click the New Project button.  You’ll see something like this;

os-quickstart1

Enter quickstart-project for the name and display name, and click Create.  You’ll now be at the template selection screen, and will be presented with an enormous list of possible templates.

os-quickstart2

Enter “quickstart django” to filter the list, then click ‘django-example’.  Here is where you would normally customize your template.  Don’t worry about that for now; scroll down to the bottom.

os-quickstart3

You don’t need to change anything, just hit Create.  You now get the following window;

os-quickstart4

Click Continue to overview.  While you can run the oc tool directly from the masters, it’s better practice to not do that, and instead do it from your dev box, wherever that is.

If you’ve been stuffing around like I did, by the time you get to the overview, your build will be done!

os-quickstart5

Click the link directly under SERVICE, named django-example-quickstart-project.YOURDOMAINHERE.  You should now see the Django application splash screen pop up.

If so, congratulations!  You’ve just deployed your first application in OpenShift.

Have a look at the build logs, click the up and down arrows next to the deployment circle and watch what they do.

Deploying OpenShift Origin on CentOS Atomic

For my work, we’re looking at OpenShift, and I decided I’d set up an OpenShift Origin setup at home on my KVM box using CentOS Atomic.  This is a really basic setup, involving one master, two nodes, and no NFS persistent volumes (yet!).  We also don’t permit pushing to DockerHub, since this will be a completely private setup.  I won’t go into how to actually set up Atomic instances here.

Refer to the OpenShift Advanced Install manual for more.

Prerequisites

  • One Atomic master (named os-master1 here)
  • Two Atomic nodes (named os-node1 and os-node2 here)
  • A wildcard domain in your DNS (more on this later, it’s named *.os.localdomain here)
  • A hashed password for your admin account (named admin here); you can generate this with htpasswd (see the example just after this list).
  • A box elsewhere that you can SSH into your Atomic nodes from, without using a password (read about ssh-copy-id if you need to).  We’ll be putting Ansible on this to do the OpenShift installation.
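
For example, to print a suitable admin entry without touching any files (the password here is obviously a placeholder);

htpasswd -nb admin changeme

The part after the colon in the output is the hash you’ll need in the inventory file.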

Setting up a Wildcard Domain

Assuming you’re using BIND, you will need the following stanza in your zone;

; Wildcard domain for OpenShift
$ORIGIN os.localdomain.
* IN CNAME os-master1.localdomain.
$ORIGIN localdomain.

Change to suit your domain, of course.  This causes any attempts to resolve anything in .os.localdomain to be pointed as a CNAME to your master.  This is required so you don’t have to keep messing with your DNS setup whenever you deploy a new pod.
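
You can check the wildcard is working with a quick dig against any made-up name under the domain;

dig +short anything.os.localdomain

That should come back with the CNAME for your master (and its address).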

Preparing the Installation Host

As discussed, you’ll need a box you can do your installation from.  Let’s install the pre-reqs onto it (namely, Ansible and Git).  I’m assuming you are using CentOS here.

yum install -y epel-release
yum install ansible python-cryptography python-crypto pyOpenSSL git
git clone https://github.com/openshift/openshift-ansible

As the last step, we pull down the OpenShift Origin installer, which we’ll be using shortly to install OpenShift.

You will now require an inventory file for the installer to use.  The following example should be placed in ~/openshift-hosts .
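
A minimal sketch of what that inventory might look like for the single-master, two-node layout described below (treat the hostnames, zone names, and variable values as illustrative rather than a drop-in copy of mine);

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_master_default_subdomain=os.localdomain
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': 'HASHEDPASSWORDHERE'}

[masters]
os-master1.localdomain

[nodes]
os-master1.localdomain openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
os-node1.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'left'}"
os-node2.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'right'}"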

Substitute out the hashed password you generated for your admin account in there.

About the infrastructure

That inventory file will deploy a fully working OpenShift Origin install in one go, with a number of assumptions made.

  • You have one (non-redundant!) master, which runs the router and registry instances.
  • You have two nodes, which are used to deploy other pods.  Each node is in its own availability zone (named left and right here).

More dedicated setups will have multiple masters, which do not run pods, and will likely set up specific nodes to run the infrastructure pods (registries, routers and such).  Since I’m constrained for resources, I haven’t done this, and the master runs the infrastructure pods too.

It’s also very likely that you’ll want a registry that uses NFS.  More on this later.

Installing OpenShift

Once this is done, installation is very simple;

cd openshift-ansible
ansible-playbook -i ../openshift-hosts playbooks/byo/config.yml

Sit back and wait.  This’ll take quite a while.  When it’s done, you can then (hopefully!) go to;

https://os-master1:8443/console/

That gets you into the OpenShift web interface.  You can also ssh to one of your masters and run oc commands from there.

I’ll run through a simple quickstart to test the infrastructure shortly.


Running daemons under Supervisord

When you want to run multiple processes in a single Docker container, there are a few ways to do it.  A launch script is one; I chose to use Supervisord.  Supervisord has some cool features, but it’s intended to manage processes that don’t fork (daemonize) themselves.  If you have something you want to run under Supervisord that you cannot stop from forking, you can use the following script to monitor it;

#! /usr/bin/env bash

set -eu

pidfile="/var/run/your-daemon.pid"
command="/usr/sbin/your-daemon"

# Proxy signals
function kill_app(){
 kill $(cat $pidfile)
 exit 0 # exit okay
}
trap "kill_app" SIGINT SIGTERM

# Launch daemon
$command
sleep 2

# Loop while the pidfile and the process exist
while [ -f $pidfile ] && kill -0 $(cat $pidfile) ; do
 sleep 0.5
done
exit 1 # exit unexpected (shells truncate exit codes above 255, so keep it in range)

Run that script with supervisord.  What will happen is the script will monitor your daemon until it exits for some reason, then the script will exit, resulting in supervisord taking action.
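
The supervisord side is then just an ordinary program entry pointing at the wrapper.  A minimal sketch, assuming the script above is saved as /usr/local/bin/your-daemon-wrapper.sh;

[program:your-daemon]
command=/usr/local/bin/your-daemon-wrapper.sh
autostart=true
autorestart=true
startsecs=5        ; give the daemon time to write its pidfile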

Solution found at Serverfault.

Asterisk in Docker

I’ve decided to bite the bullet, and I’m working on converting my existing Microserver CentOS 6 setup across to CentOS 7 with Docker containers for all my applications.

Why Docker?  Why not OpenVZ or KVM?  KVM was out straight away because my Microserver simply doesn’t have the spare CPU and RAM to be running full virtual machines.  OpenVZ is an attractive option, but there’s no non-beta release of OpenVZ for CentOS 7.  So that leaves Docker amongst the options I wanted to look at.

Asterisk poses some challenges for Docker, namely that the RTP ports are pseudo-dynamic, and there’s a lot of them.  Docker does proxying for each port that’s mapped into a container, and spawns a docker-proxy process for each one.  That’s fine if you have 1-2 ports, but if you may have over 10,000 of them that’s a big problem.  The solution here is to configure the container to use the host’s networking stack, then do some config on the container so that it uses a different IP from the host (to keep the host’s IP space “clean”).  We’ll also be configuring the container as non-persistent so it pulls config (read-only) from elsewhere on the filesystem and stores no state between restarts.  And lastly, we’ll be using CentOS 6 as the Asterisk container OS (since Asterisk is available in the EPEL repository for that version).  It’s not a very new version of Asterisk, but it’s stable.

Let’s get started.  For the impatient, here’s the gist.

Create the Asterisk Container

First, we’ll assemble a Dockerfile.  We’ll base it off CentOS 6, and just install Asterisk.  We use the ENTRYPOINT command so that we can pass additional arguments straight to Asterisk on running the container.

FROM centos:6
MAINTAINER James Young <jyoung@zencoffee.org>

# Set up EPEL
RUN curl -L http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -o /tmp/epel-release-6-8.noarch.rpm && \
 rpm -ivh /tmp/epel-release-6-8.noarch.rpm && \
 rm -f /tmp/epel-release-6-8.noarch.rpm

# Update and install asterisk
RUN yum update -y && yum install -y asterisk

# Set config as a volume
VOLUME /etc/asterisk

# And when the container is started, run asterisk
ENTRYPOINT [ "/usr/sbin/asterisk", "-f" ]

Pretty simple stuff.  Note that processes should always run non-daemonized in Docker so that it can track the pid properly.

Prepare the Docker Host

Use whatever tool is appropriate (I’m forcing systemd, firewalld and network-manager on myself) in order to configure a second IP for your Docker host’s primary network interface.  Bleh.  Network Manager.
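
With NetworkManager, that’s roughly the following (the connection name, interface, and the 192.168.0.242 address from the Makefile below are assumptions for my setup);

nmcli connection modify eth0 +ipv4.addresses 192.168.0.242/24
nmcli connection up eth0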

Be aware that when you use the host’s network stack in Docker and don’t explicitly expose the ports you’ll be using, Docker does not configure the firewall.  You’ll need to do that on the host.  We’ll cover that in the Makefile.

Create the Makefile

We’ll use a Makefile to handle all the tasks we’re dealing with in this container.  Here it is;

CONTAINER=asterisk
SHELLCMD=asterisk -rvvvvv

all: build install start

build:
 docker build -t zencoffee/$(CONTAINER):latest .

install:
 cp -f $(CONTAINER)_container.service /etc/systemd/system/$(CONTAINER)_container.service
 systemctl enable /etc/systemd/system/$(CONTAINER)_container.service
 firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 5060 -j ACCEPT
 firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 10000:20000 -j ACCEPT
 firewall-cmd --direct --add-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 5060 -j ACCEPT
 firewall-cmd --direct --add-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 10000:20000 -j ACCEPT

start:
 systemctl start $(CONTAINER)_container.service
 sleep 2
 systemctl status $(CONTAINER)_container.service

shell:
 docker exec -it $(CONTAINER) $(SHELLCMD)

clean:
 systemctl stop $(CONTAINER)_container.service || true
 docker stop -t 2 $(CONTAINER) || true
 docker rm $(CONTAINER) || true
 docker rmi zencoffee/$(CONTAINER) || true
 systemctl disable /etc/systemd/system/$(CONTAINER)_container.service || true
 rm -f /etc/systemd/system/$(CONTAINER)_container.service || true
 firewall-cmd --permanent --direct --remove-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 5060 -j ACCEPT || true
 firewall-cmd --permanent --direct --remove-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 10000:20000 -j ACCEPT || true
 firewall-cmd --direct --remove-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 5060 -j ACCEPT || true
 firewall-cmd --direct --remove-rule ipv4 filter INPUT 0 --proto udp -d 192.168.0.242 --dport 10000:20000 -j ACCEPT || true

Of course, I’m using firewalld here (bleh again) and systemd (double bleh).  You can see that this simply does a build of the container, then puts the systemd service into place and punches all the appropriate RTP and SIP ports on the IP address that Asterisk will be using.

Configure the Systemd Unit

Now we need a unit for systemd, so we can make this run on startup.  Here it is;

[Unit]
Description=Asterisk Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --rm=true --name asterisk -v /docker/asterisk/config:/etc/asterisk:ro -v /docker/asterisk/logs:/var/log/asterisk -v /docker/asterisk/codecs/codec_g723-ast18-gcc4-glibc-x86_64-pentium4.so:/usr/lib64/asterisk/modules/codec_g723-ast18-gcc4-glibc-x86_64-pentium4.so:ro -v /docker/asterisk/codecs/codec_g729-ast18-gcc4-glibc-x86_64-pentium4.so:/usr/lib64/asterisk/modules/codec_g729-ast18-gcc4-glibc-x86_64-pentium4.so:ro --net=host zencoffee/asterisk:latest
ExecStop=/usr/bin/docker stop -t 2 asterisk

[Install]
WantedBy=multi-user.target

The run command there does a number of things;

  • Pulls Asterisk config from /docker/asterisk/config (read-only)
  • Writes Asterisk logs to /docker/asterisk/logs
  • Pulls in a g723 codec and a g729 codec for Asterisk to use (read-only)
  • Enables host networking

If you are missing those codecs, remove the two -v’s that talk about them.  Also, you will likely have differently optimized versions anyway (Microserver has a pretty weak CPU, so Pentium4 is the right one to use for that).

Edit the Asterisk Config

You’ll need the default Asterisk config, which you can extract by building the container and running it up with;

docker run -d --name extract -v /docker/asterisk/config:/mnt zencoffee/asterisk:latest
docker exec -it extract /bin/bash
cp /etc/asterisk/* /mnt/
exit
docker stop extract
docker rm extract

From there, you can put in your own customized Asterisk config.  There are a few bits you need to tweak.  In sip.conf, set udpbindaddr and tcpbindaddr to the secondary IP that you want Asterisk listening on.  In rtp.conf, ensure that rtpstart and rtpend match the ports you set up the firewall for.
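
With the example IP and port range from the Makefile above, that works out to something like this;

; sip.conf
udpbindaddr=192.168.0.242
tcpbindaddr=192.168.0.242

; rtp.conf
rtpstart=10000
rtpend=20000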

Finally, putting it together!

Put your asterisk_container.service, Dockerfile and Makefile into the same directory.  Put your config into /docker/asterisk/config (in this example), your codecs into /docker/asterisk/codecs, and create a blank /docker/asterisk/logs .

You will also need a cdr-csv and cdr-custom directory in the logs dir if you want that functionality (Asterisk doesn’t create them).
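
In other words, something like this on the Docker host before the first start;

mkdir -p /docker/asterisk/config /docker/asterisk/codecs /docker/asterisk/logs
mkdir -p /docker/asterisk/logs/cdr-csv /docker/asterisk/logs/cdr-custom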

Quickstart:  Just make all to do the whole lot and start it 🙂

  1. Run make build to construct the image.
  2. Run make install to configure firewalld rules and put the systemd unit in place
  3. Run make start to start the container
  4. Run make shell to get an Asterisk prompt inside the container

You can also do a docker logs asterisk to see what’s going on, and you can start/stop the container like a normal systemd service.

Good luck!