Info Docker Version: 18.09.0-0513 Released

Version: 18.09.0-0513
(2020-04-28)

What's New
  1. Updated Docker Daemon to version 18.09.8.

Fixed issues
  1. Updated the link to Docker Hub image.
  2. Fixed an issue where Docker might be stuck in loading status when users try to delete images of running/stopped containers.
  3. Fixed an issue where Docker cannot be installed on an ext4 volume on Synology NAS models with Denverton platform.

Manual Install Download
 
Thanks for sharing, Telos!

Seems like Synology sneaked the update in on 12th March 2020 without anyone really noticing it...
It took me 15 minutes to run my personal regression tests.

Finally, the client and server versions are in sync (we already had the 18.09.8 client, but an 18.09.6 server):
root@nas:~# docker version
Client:
Version: 18.09.8
API version: 1.39
Go version: go1.11
Git commit: bfed4f5
Built: Fri Mar 13 06:46:11 2020
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.09.8
API version: 1.39 (minimum version 1.12)
Go version: go1.11
Git commit: 3a371f3
Built: Fri Mar 13 06:44:35 2020
OS/Arch: linux/amd64
Experimental: false

I used this docker-compose.yml to test whether the bug where environment variables are removed during deployment has returned for docker-compose, and whether it still exists for Docker Swarm stack deployments:
Code:
root@nas:~# cat env-test.yml
version: '3.7'
services:
  ubuntu:
    image: ubuntu:18.04
    environment:
      TEST: test
    command: ["tail","-f","/dev/null"]

Docker Compose works as expected, no signs of regression (unchanged):
root@nas:~# docker-compose --project-name demo -f env-test.yml up -d
root@nas:~# docker exec demo_ubuntu_1 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=fd57870a8364
TEST=test
HOME=/root

root@nas:~# docker-compose --project-name demo -f env-test.yml down
Stopping demo_ubuntu_1 ... done
Removing demo_ubuntu_1 ... done
Removing network demo_default

Docker Swarm Stacks still suffer from removed environment variables (unchanged):
root@nas:~# docker stack deploy -c env-test.yml demo
root@nas:~# docker exec $(docker ps -q --filter name=demo_ubuntu) env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=5e92cc53e34c
HOME=/root

root@nas:~# docker stack remove demo
Removing service demo_ubuntu
Removing network demo_default

Same with docker services:
root@nas:~# docker service create --name demo_ubuntu --env TEST=test ubuntu:18.04 tail -f /dev/null
image ubuntu:18.04 could not be accessed on a registry to record
its digest. Each node will access ubuntu:18.04 independently,
possibly leading to different nodes running different
versions of the image.

ljmv2cbpvdcq7tt23r7bz40r0
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged

root@nas:~# docker exec $(docker ps -q --filter name=demo_ubuntu) env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=0db8a2f5fcd8
HOME=/root

root@nas:~# docker service rm demo_ubuntu
demo_ubuntu
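
A follow-up check that is not part of the tests above, but could narrow down where the variable gets lost: while the service still exists, inspect the env section of the stored service spec. If TEST=test shows up in the spec but not inside the container, the spec is intact and the variable is dropped when the task's container gets created.
Code:
# Print the env section of the stored service spec (run this while the
# service from the test above still exists, i.e. before "service rm"):
docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Env}}' demo_ubuntu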


Sidenotes:
- env is used to print the environment variables from inside the container.
- with docker-compose, the argument --project-name is used to get a stable container name, regardless of where the compose.yml file is stored (as a fallback, the folder name is used instead).
- with docker service deployments (stack deployments create services as well), the container name is unpredictable, so the container is looked up with a subcommand using a name filter (see the sketch after this list).
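
To make the last two sidenotes concrete, a minimal sketch (the folder path is made up for the example):
Code:
# Without --project-name, compose derives the project name from the folder
# containing the yml file, so container names depend on the location:
cd /volume1/docker/envtest
docker-compose -f env-test.yml up -d                        # envtest_ubuntu_1

# With --project-name, the container name is stable regardless of folder:
docker-compose --project-name demo -f env-test.yml up -d    # demo_ubuntu_1

# Swarm service containers get a random task suffix (demo_ubuntu.1.<id>),
# so they are located with a name filter instead of a fixed name:
docker ps -q --filter name=demo_ubuntu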

Conclusion: It seems like Synology does not really care about fixing deployments of swarm services/stacks.
 
Just FYI... the manual install side-loaded without issue. Since I'm never really sure whether I should shut down containers and stop Docker prior to updating, I shut down all but two containers and ran the side-load install anyway.

Halfway through I wondered whether Docker phones home to Synology, and if so, since my NAS DNS runs through dockerized AdGuard Home, I realized that there would be no NAS connectivity until the AH container started. Regardless, the update went fine.

Even though this worked, I'm not certain updating Docker with containers running is a good practice. Perhaps someone more knowledgeable might chime in here.
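
For what it's worth, stock Docker has a setting aimed at exactly this scenario: with live restore enabled, containers keep running while the daemon itself restarts (note it does not apply to swarm-mode services). Whether Synology's package enables or honors it is an open question; on a standard Linux host the configuration looks like this:
Code:
# /etc/docker/daemon.json on a standard Linux host; Synology's package
# keeps its daemon configuration elsewhere:
{
  "live-restore": true
}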
 
On a normal Linux host you can update the engine without stopping the containers as well. Of course, each container receives a SIGTERM signal to initiate a graceful shutdown. If the shutdown does not complete within 10 seconds, the container is killed hard. That is pretty much the same as pulling the power plug on a running computer just because you are too impatient to wait for its proper shutdown.
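
For reference, the 10-second grace period is just the Docker default and can be raised where a clean shutdown matters; a minimal sketch (the names are placeholders):
Code:
# Give a container more time between SIGTERM and SIGKILL (default: 10s):
docker stop --time 60 demo_ubuntu_1

# The equivalent per-service setting in a compose file (v3 syntax):
#   services:
#     ubuntu:
#       stop_grace_period: 60s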
 
I'm wondering if this must be installed manually only, or if I'm missing something?
I've been running Docker version 18.09.0-0513 since 24/09/2019.
 
