How to enable keyfile based auth for ssh on DSM?

I wanted to see whether Ansible can be used on DSM :) Modifying the OS and installing containers is a no-brainer with Ansible. I thought this might be a nice way to set up Docker containers, reverse proxy configs and whatever the heart desires.

So far, I wrote a bash script that performs the following actions:
- add an ansible user
- add an ansible group
- add a file in /etc/sudoers.d/ that allows the ansible user to become root without entering a password
- create an RSA keypair for the ansible user
- copy the public key to ~/.ssh/authorized_keys for the ansible user
- modify sshd_config to permit keyfile-based authentication and restart sshd
- start an Ansible docker container with playbooks, roles and inventory as bind-mounts
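For reference, the sudoers drop-in from the list above only needs a single line. A sketch that writes the rule to a scratch file for inspection; on a real DSM box the target would be /etc/sudoers.d/ansible, written via sudo tee with mode 440:

```shell
# Write the one-line NOPASSWD rule to a scratch file for inspection.
# On DSM: pipe it through `sudo tee /etc/sudoers.d/ansible` instead.
rule_file="$(mktemp)"
printf 'ansible ALL=(ALL) NOPASSWD: ALL\n' > "${rule_file}"
chmod 440 "${rule_file}"   # sudoers files must not be group/world writable
cat "${rule_file}"
```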

Anyway, I must be doing something wrong with sshd_config, as no matter what I do, the ansible user is always asked for a password... as if the authorized_keys file is completely ignored...

I assume these are the relevant details:

I created an RSA keypair for a user:
ssh-keygen -q -b 4096 -t rsa -N "" -f id_rsa
copied the public key to ~/.ssh/authorized_keys for the target user, and
changed permissions to 644 for authorized_keys and 600 for the .ssh folder.

Then I tried to enable keyfile-based auth (note: `-i -e`, not `-ie`, which GNU sed reads as an in-place backup suffix and leaves a stray sshd_confige file behind):
sudo sed -i -e 's/^#PubkeyAuthentication/PubkeyAuthentication/g' /etc/ssh/sshd_config
sudo sed -i -e 's/^#AuthorizedKeysFile/AuthorizedKeysFile/g' /etc/ssh/sshd_config
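A self-contained way to sanity-check what those sed edits do, using a throwaway sample file rather than the real /etc/ssh/sshd_config:

```shell
# build a sample config containing the two commented-out directives
cfg="$(mktemp)"
printf '%s\n' '#PubkeyAuthentication yes' \
              '#AuthorizedKeysFile .ssh/authorized_keys' > "${cfg}"

# uncomment them, the same way the commands above edit the real file
sed -i -e 's/^#PubkeyAuthentication/PubkeyAuthentication/' \
       -e 's/^#AuthorizedKeysFile/AuthorizedKeysFile/' "${cfg}"
cat "${cfg}"
```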

And finally restarted the sshd server:
sudo synoservice --restart ssh-shell
WARNING: Running the last command will kill all active ssh connections (even the one you use to execute it). Sometimes sshd is not restarted properly, resulting in no connections being accepted. Repeating the command over Telnet usually does the trick, and sshd starts accepting connections again.

Any Ideas on what I am missing?
I was able to troubleshoot further. I had some ownership and permission issues, which I could sort out using these commands:

Start an sshd server on port 1234 with debug flag:
sudo /bin/sshd -de -p 1234

Connect with the ssh client on port 1234 with additional debug flags:
ssh -vvv -i .ssh/id_rsa ansible@localhost -p 1234

which led me to the following output:
debug1: session_new: session 0
lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
debug1: session_pty_req: session 0 alloc /dev/pts/15
debug1: server_input_channel_req: channel 0 request shell reply 1
debug1: session_by_channel: session 0 channel 0
debug1: session_input_channel_req: session 0 req shell
Starting session: shell on pts/15 for ansible from port 35690 id 0
debug1: Setting controlling tty using TIOCSCTTY.
debug1: Received SIGCHLD.
debug1: session_by_pid: pid 20450
debug1: session_exit_message: session 0 channel 0 pid 20450
debug1: session_exit_message: release channel 0
debug1: session_by_tty: session 0 tty /dev/pts/15
debug1: session_pty_cleanup: session 0 release /dev/pts/15
Received disconnect from port 35690:11: disconnected by user
Disconnected from port 35690
debug1: do_cleanup
debug1: do_cleanup
debug1: PAM: cleanup
debug1: PAM: closing session
debug1: PAM: deleting credentials

.. and now it works. The user's login shell must point to /bin/sh or /bin/ash AND the user must be in the administrators group.

... yet another thing Synology customized to have special behavior.

Lessons learned:
- there is no need to customize /etc/ssh/sshd_config. Regardless of what any tutorial says, it is not required; I reverted all my changes and it still works
- error reporting is terrible (part 1): the ~ folder of the target user was not 700, the ~/.ssh folder was not 700 and ~/.ssh/authorized_keys was not 600
- error reporting is terrible (part 2): only /bin/sh and /bin/ash are allowed as login shell
- error reporting is terrible (part 3): the user MUST be in the administrators group
- ed25519 and RSA keys both work
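Putting the lessons together, the permission layout that finally made pubkey auth work can be sketched as below. The demo runs against a scratch directory; on DSM, substitute the real home folder (e.g. /var/services/homes/ansible) and prefix the commands with sudo:

```shell
# scratch stand-in for the target user's home folder
user_home="$(mktemp -d)"

# generate a keypair (ed25519 shown; RSA worked as well)
ssh-keygen -q -t ed25519 -N "" -f "${user_home}/id_rsa"

# install the public key with the permissions sshd enforces:
# home 700, .ssh 700, authorized_keys 600
mkdir -p "${user_home}/.ssh"
cp "${user_home}/id_rsa.pub" "${user_home}/.ssh/authorized_keys"
chmod 700 "${user_home}" "${user_home}/.ssh"
chmod 600 "${user_home}/.ssh/authorized_keys"
```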
I know this stuff is unicorn territory for most of you, but the script might do wild things if "ansible_user" is set to an already existing admin account or, even worse, to root. So please, if you intend to use it, make sure to use a fresh user account name.

Also, the script is meant to be executed from an admin account, but not as root! It calls sudo inside the script whenever root permissions are required.
Oh, and one more thing: the script should not be executed directly in a user's home folder!

This version of the script now protects against that kind of breakage:
#!/bin/bash -eu

# expects these to be set by the caller; the defaults here are assumptions
ansible_user="${ansible_user:-ansible}"
ansible_pass="${ansible_pass:?set ansible_pass before running}"
force_copy_key="${force_copy_key:-false}"

function add_admin_user_and_fix_home_folder_permissions() {
  local admins=""
  if ! sudo synouser --get "${ansible_user}" > /dev/null 2>&1; then
    sudo synouser --add "${ansible_user}" "${ansible_pass}" "" 0 "" 0
    # get list of current users in group administrators
    current_admins=$(sudo synogroup --get administrators | grep --perl-regexp --only-matching '(?<=^\d:\[).*(?=\]$)')
    for admin in ${current_admins}; do
      admins="${admins} ${admin}"
    done
    # synogroup --member replaces the whole member list, so re-add the old admins;
    # only users in the administrators group are allowed to log in with a key
    sudo synogroup --member "administrators" ${admins} "${ansible_user}"
  fi
  if ! sudo synogroup --get "ansible" > /dev/null 2>&1; then
    sudo synogroup --add "ansible"
  fi
  sudo synogroup --member "ansible" "${ansible_user}"
  user_dir=$(sudo synouser --get "${ansible_user}" | grep -oP '(?<=User.Dir(.){4}: \[).*(?=\])')
  until [ -d "${user_dir}" ]; do sleep 1; done
  sudo chmod 700 "${user_dir}"
  sudo chown "${ansible_user}:users" -R "${user_dir}"
  # give the user a real login shell (DSM defaults to /sbin/nologin);
  # double quotes so ${ansible_user} actually expands
  if grep -q "/var/services/homes/${ansible_user}:/sbin/nologin" /etc/passwd; then
    sudo sed -i -e "s#/var/services/homes/${ansible_user}:/sbin/nologin#/var/services/homes/${ansible_user}:/bin/sh#g" /etc/passwd
  fi
}

function add_sudoers() {
  # allow the ansible user to become root without a password
  if sudo test ! -e "/etc/sudoers.d/${ansible_user}"; then
    echo "${ansible_user} ALL=(ALL) NOPASSWD: ALL" | sudo tee "/etc/sudoers.d/${ansible_user}" > /dev/null
    sudo chmod 440 "/etc/sudoers.d/${ansible_user}"
  fi
}

function create_key_and_copy_to_home_folder() {
  # keypair lives in a subfolder of the current user's folder
  if [ ! -e ssh/id_rsa ]; then
    mkdir -p ssh
    # -b is ignored for ed25519; the file name id_rsa is kept for the container mount
    ssh-keygen -q -t ed25519 -N "" -f ssh/id_rsa
    chmod 700 "ssh"
    chmod 600 "ssh/id_rsa"
  fi
  # homedir of the ansible user
  user_dir=$(sudo synouser --get "${ansible_user}" | grep -oP '(?<=User.Dir(.){4}: \[).*(?=\])')
  if [ ! -e "${user_dir}/.ssh/authorized_keys" ] || [ "${force_copy_key}" == "true" ]; then
    sudo mkdir -p "${user_dir}/.ssh"
    sudo cp "ssh/id_rsa.pub" "${user_dir}/.ssh/authorized_keys"
    sudo chmod 700 "${user_dir}" "${user_dir}/.ssh/"
    sudo chmod 600 "${user_dir}/.ssh/authorized_keys"
    sudo chown "${ansible_user}:users" -R "${user_dir}/"
  fi
}

function create_ansible_inventory() {
  # inventory contents were not included in the original post
  cat <<EOF > inventory
EOF
}

function create_ansible_cfg() {
  cat <<CFG > ansible.cfg
[defaults]
strategy_plugins  = /usr/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy
strategy          = mitogen_linear
host_key_checking = False
CFG
}

function start_ansible_container() {
  docker run -ti --rm \
    -e USER=ansible \
    -e UID=$(id -u) \
    -v "${PWD}/ssh/":/home/ansible/.ssh/ \
    -v "${PWD}":/data \
    cytopia/ansible:2.8-tools \
    ansible-playbook playbook -i inventory
}

function sanity_check() {
  if [ $(id -u) -eq 0 ]; then
    echo "Do not run this script as root, it will use sudo wherever root privileges are required!"
    exit 1
  fi
  if [ "${PWD}" == "${HOME}" ]; then
    echo "Do not run this script directly in the user's home folder. It needs to be run inside a subfolder"
    exit 1
  fi
  set +e
  user_data=$(sudo synouser --get "${ansible_user}")
  set -e
  if [ $(echo "${user_data}" | grep -wc 'SynoErr') -eq 1 ]; then
    echo "user ${ansible_user} does not exist, will create it!"
  elif [ $(echo "${user_data}" | grep -E -wc '\([[:digit:]]*\) ansible') -eq 1 ]; then
    echo "user ${ansible_user} exists and is in the ansible group. Everything is fine"
  else
    echo "user ${ansible_user} is not in the ansible group. Did you try to use an existing account? Don't!"
    exit 1
  fi
}

function main() {
  # call order reconstructed from the function names
  sanity_check
  add_admin_user_and_fix_home_folder_permissions
  add_sudoers
  create_key_and_copy_to_home_folder
  create_ansible_inventory
  create_ansible_cfg
  start_ansible_container
}

main
Seems fixing bad code in a forum is not that easy, which is why it's published on GitHub now: meyayl/syno-ansible

It had some flaws I didn't catch myself. Now, though, it behaves like it should have in the first place.

Man, I do hate how Synology implemented synouser and synogroup. I wish they had just used the standard useradd and groupadd...
