
docker-compose systemd

If you want your Docker containers to start when the system boots, you can use a systemd service on your Ubuntu system.

First you have to find the path to your docker-compose installation. Use the "which" command:

which docker-compose

This gives you the path to your installation, in this example /usr/bin/docker-compose.

Now you can create the .service file by using:

sudo nano /etc/systemd/system/docker-compose-app.service

It needs the following content, but you have to adapt two parts to your installation:
– /usr/local/bin/docker-compose -> the path you got earlier from the which command
– /srv/docker -> the directory that contains your docker-compose.yml file

# /etc/systemd/system/docker-compose-app.service

[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/srv/docker
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down
TimeoutStartSec=0
Restart=on-failure
StartLimitIntervalSec=60
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

Now save the file and use the following command to enable the service at the next reboot:

sudo systemctl enable docker-compose-app
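
If you change the unit file later, reload systemd so it picks up the change; you can also start the service right away instead of waiting for the next reboot:

sudo systemctl daemon-reload
sudo systemctl start docker-compose-app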

Some helpful commands:

systemctl status docker-compose-app
systemctl start docker-compose-app
systemctl stop docker-compose-app
systemctl restart docker-compose-app
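
If the containers do not come up, you can follow the logs of the service like for any other systemd unit:

journalctl -u docker-compose-app -f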

The default config I use:

# /etc/systemd/system/docker-compose-app.service

[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/volume1/docker/wireguard
ExecStart=/usr/bin/docker-compose up
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0
Restart=on-failure
StartLimitIntervalSec=60
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

Thanks to:

https://stackoverflow.com/questions/43671482/how-to-run-docker-compose-up-d-at-system-start-up



Wireguard VPN in Background

I use a dedicated documentation server which I always want to reach from my Mac without manually establishing a VPN connection. Why? Because I have to write documentation while I am in other VPN (L2TP or WireGuard) tunnels.

For this I need a permanent way to reach a predefined IP address in my secure network. After a short talk with my colleague Samuel Oberhofer, we came up with the solution of using the WireGuard CLI tool, and yes, this works perfectly.

You need the WireGuard config file, and you have to install wireguard-tools via brew with this command:

brew install wireguard-tools

Now you can use "wg-quick" with the following options:
[ up | down | save | strip ] [ CONFIG_FILE | INTERFACE ]

With this command you bring the VPN up:

wg-quick up "PATH-TO-YOUR-CONFIG"
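
To bring the tunnel down again or to check its current state, the same tools work the other way around (wg show may need root):

wg-quick down "PATH-TO-YOUR-CONFIG"
wg show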


——————————————————

I created a start script for myself to make it easier to bring up all the tunnels I want. For this you just create a hidden folder in your home directory with

mkdir ~/.wireguardconfig

and copy the configs you want to start into this folder

Now you can create a script (don't forget to make it executable with chmod +x) and add, for example, the following:

#!/bin/bash

# Bring up all WireGuard tunnels from the hidden config folder
wg-quick up ~/.wireguardconfig/1020.conf
wg-quick up ~/.wireguardconfig/2100.conf

Now you can easily start the WireGuard VPNs in the background with this script.
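
A matching stop script is handy too; a minimal sketch, assuming the same config names as above:

#!/bin/bash

# Tear all the tunnels down again
wg-quick down ~/.wireguardconfig/1020.conf
wg-quick down ~/.wireguardconfig/2100.conf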

——————————————————

To install brew, use the following:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Take care that the config file name does not contain special characters and is not too long, because the file name (without .conf) becomes the interface name. Otherwise you get the following error:

wg-quick: The config file must be a valid interface name, followed by .conf
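
Renaming the config file to a short, simple name fixes this (the file names here are just examples):

mv ~/.wireguardconfig/my-long-config-name.conf ~/.wireguardconfig/wg0.conf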



Ubuntu Docker Compose Error

When you install Docker with the default installer of Ubuntu (in our case 20.04.3), you get the following error when using "sudo docker-compose up":

ERROR: 
        Can't find a suitable configuration file in this directory or any
        parent. Are you in the right directory?

        Supported filenames: docker-compose.yml, docker-compose.yaml

It is quite easy to fix. Just install or reinstall docker-compose with the following command:

sudo apt-get install docker-compose

and then you can use:

sudo docker-compose up
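
To check that docker-compose is now available, you can print its version:

docker-compose --version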



Checksum – Check

If you want to compare two folders that are not on the same system, you can create two text files with one command on each side and then compare them.

On Linux systems, use md5sum:

find . -type f -exec md5sum {} ';' >source_md5.txt   # run inside the source folder
find . -type f -exec md5sum {} ';' >target_md5.txt   # run inside the target folder

On Mac systems, use md5:

find . -type f -exec md5 {} ';' >source_md5.txt   # run inside the source folder
find . -type f -exec md5 {} ';' >target_md5.txt   # run inside the target folder

Then you can compare these files using the following command:

diff <(sort source_md5.txt) <(sort target_md5.txt)
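
Note that macOS md5 and Linux md5sum print different formats by default, so a cross-platform diff marks every line as changed. On macOS, md5 -r prints a reversed, md5sum-like "hash path" format; normalizing the whitespace with awk (assuming file names without spaces) makes the comparison robust:

find . -type f -exec md5 -r {} ';' >target_md5.txt
diff <(awk '{print $1, $2}' source_md5.txt | sort) <(awk '{print $1, $2}' target_md5.txt | sort)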



Synology – Network Speed Test

Over the last few weeks I searched for ways to benchmark my demo Synology and to get clear answers on the maximum speed a client can reach over the 10Gbit connection.

The hardware is the following:
DS1821+ (CPU AMD Ryzen V1500B)
32 GB RAM
System: DSM 6.2.3-25426 Update 3
8x Seagate Exos X16
Synology Cache SSD 2x 400GB Read and Write
10Gbit RJ45 Card
RAID – SHR1

Test Client:
MacPro 2019
96 GB RAM
System: BigSur 11.2
3,3 GHz 12-Core Intel Xeon W

1. Test – Difference between MTU Sizes

9000-1500 MTU
If you mix the MTU sizes (server 9000 and client 1500), the network speed drops to 7.53 GBytes transferred / 6.47 Gbits/sec, tested with iperf, and about 727 MB/s write and 700 MB/s read, tested with AJA System Test.

1500-1500 MTU
If you use the same default MTU size (server 1500 and client 1500), the network speed looks good: 10.9 GBytes / 9.35 Gbits/sec, tested with iperf, and about 810 MB/s write and 785 MB/s read, tested with AJA System Test.

9000-9000 MTU
If you use the same large MTU size (server 9000 and client 9000), the network speed looks best: 11.5 GBytes / 9.90 Gbits/sec, tested with iperf, and about 865 MB/s write and 1076 MB/s read, tested with AJA System Test.

Conclusion:
If both sides use the same MTU, the network overhead is lower and you get more performance. The best performance is with an MTU of 9000 on both sides.
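
On the Mac side you can check and change the MTU from the terminal; a minimal sketch, where en0 is an assumption and has to be replaced by your 10Gbit interface:

networksetup -listallhardwareports   # find your 10Gbit interface
sudo ifconfig en0 mtu 9000           # en0 is an assumption, use your interface

On the Synology, the MTU is set per interface in the DSM network settings.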

2. Test – AmorphousDiskMark

Searching for a tool to make standardized tests, I found AmorphousDiskMark from Katsura Shareware. It is a great tool and you can download it from the App Store – Download. It automatically creates 4 different scenarios (Forum):
SEQ1MQD8 – sequential read/write, one big file, multiple streams
SEQ1MQD1 – sequential read/write, one big file, single stream
RND4KQD64 – random read/write, many small files, multiple streams
RND4KQD1 – random read/write, many small files, single stream

AmorphousDiskMark MTU 9000-9000

3. Test – AJA System Test

An easy way to test your storage is to use the AJA System Test. You can choose a target disk and specify the test file size. I would recommend clicking the chart icon at the bottom of the window to open the graph of frame number vs. MB/sec. There you can easily see whether you get dropped frames, or whether the peak speed is very high but the average speed is weak.

4. Test – iPerf (Network Storages)

You have to make sure that your network connection to the storage is good and gives you the maximum speed. The easiest way to do this is using iperf. It is a client-server application, and you have to download and start it on both machines. You can download it at iperf.fr.

The server side can be started with "iperf3 -s", and on the client side you use "iperf3 -c <ip-address>".
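
As a concrete example (the IP address is taken from the test logs below):

iperf3 -s                      # on the server / storage side
iperf3 -c 172.16.100.20        # on the client side
iperf3 -c 172.16.100.20 -R     # reverse direction: the server sends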

On your Synology you can easily run it in a Docker image by using this command (install the Docker package, enable SSH in the system settings, and then type the following):

sudo docker run -it --rm -p 5201:5201 networkstatic/iperf3 -s

(Source for this help LINK)

Then you get the following output, which shows the speed without the overhead of a file protocol (SMB, NFS, …). With the option "-R" you can reverse the test direction between server and client.

Connecting to host 172.16.100.20, port 5201
[ 4] local 172.16.100.10 port 50489 connected to 172.16.100.20 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.16 GBytes 9.93 Gbits/sec
[ 4] 1.00-2.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 2.00-3.00 sec 1.15 GBytes 9.89 Gbits/sec
[ 4] 3.00-4.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 4.00-5.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 5.00-6.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 6.00-7.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 7.00-8.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 8.00-9.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec

[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec sender
[ 4] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec receiver


A good iperf guide is available from JamesCoyle.

Results in detail

Server MTU 9000 – Client 1500

Connecting to host 172.16.100.20, port 5201
[ 4] local 172.16.100.10 port 50485 connected to 172.16.100.20 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 467 MBytes 3.92 Gbits/sec
[ 4] 1.00-2.00 sec 723 MBytes 6.07 Gbits/sec
[ 4] 2.00-3.00 sec 882 MBytes 7.40 Gbits/sec
[ 4] 3.00-4.00 sec 718 MBytes 6.02 Gbits/sec
[ 4] 4.00-5.00 sec 911 MBytes 7.64 Gbits/sec
[ 4] 5.00-6.00 sec 883 MBytes 7.41 Gbits/sec
[ 4] 6.00-7.00 sec 868 MBytes 7.28 Gbits/sec
[ 4] 7.00-8.00 sec 817 MBytes 6.86 Gbits/sec
[ 4] 8.00-9.00 sec 932 MBytes 7.81 Gbits/sec
[ 4] 9.00-10.00 sec 512 MBytes 4.30 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 7.53 GBytes 6.47 Gbits/sec sender
[ 4] 0.00-10.00 sec 7.53 GBytes 6.47 Gbits/sec receiver

AmorphousDiskMark Server MTU 9000 – Client 1500
AJA Server MTU 9000 – Client 1500


Server MTU 9000 – Client 9000

Connecting to host 172.16.100.20, port 5201
[ 4] local 172.16.100.10 port 50489 connected to 172.16.100.20 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.16 GBytes 9.93 Gbits/sec
[ 4] 1.00-2.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 2.00-3.00 sec 1.15 GBytes 9.89 Gbits/sec
[ 4] 3.00-4.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 4.00-5.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 5.00-6.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 6.00-7.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 7.00-8.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 8.00-9.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 4] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec sender
[ 4] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec receiver

AmorphousDiskMark Server MTU 9000 – Client 9000
AJA Server MTU 9000 – Client 9000

Server MTU 1500 – Client 1500

Connecting to host 172.16.100.20, port 5201
[ 4] local 172.16.100.10 port 50495 connected to 172.16.100.20 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.06 GBytes 9.09 Gbits/sec
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 4] 2.00-3.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 4] 3.00-4.00 sec 1.09 GBytes 9.34 Gbits/sec
[ 4] 4.00-5.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 4] 5.00-6.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 6.00-7.00 sec 1.09 GBytes 9.35 Gbits/sec
[ 4] 7.00-8.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 8.00-9.00 sec 1.10 GBytes 9.41 Gbits/sec
[ 4] 9.00-10.00 sec 1.09 GBytes 9.41 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 10.9 GBytes 9.35 Gbits/sec sender
[ 4] 0.00-10.00 sec 10.9 GBytes 9.35 Gbits/sec receiver

AmorphousDiskMark Server MTU 1500 – Client 1500
AJA Server MTU 1500 – Client 1500

Special thanks to Tools at Work, first point of contact for system integration in Vienna, Austria.

