Portainer-ce - not able to access the web interface

Operating system: macOS
Mobile operating system: iOS
Portainer-CE used to work fine. I stopped paying attention for a while and am now revisiting it (other Docker apps work fine).
I'm getting a blank page: "address not valid" / "dropped connection, try again". I went so far as installing another instance (with a unique name and port, 9001) and got the same results.

[screenshots attached]



Code:
2023/05/01 12:17AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:494 > encryption key file not present | filename=portainer
2023/05/01 12:17AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:517 > proceeding without encryption key |
2023/05/01 12:17AM INF github.com/portainer/portainer/api/database/boltdb/db.go:124 > loading PortainerDB | filename=portainer.db
2023/05/01 12:17AM INF github.com/portainer/portainer/api/datastore/backup.go:109 > creating DB backup |
2023/05/01 12:17AM INF github.com/portainer/portainer/api/datastore/backup.go:45 > copying DB file | from=/data/portainer.db to=/data/backups/common/portainer.db.2.6.20230501001730
2023/05/01 12:17AM INF github.com/portainer/portainer/api/database/boltdb/db.go:124 > loading PortainerDB | filename=portainer.db
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrate_data.go:109 > migrating database from version 2.6 to 2.18.1 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.9.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion31.go:42 > updating registries |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion31.go:86 > updating dockerhub |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion31.go:176 > updating resource controls |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion31.go:264 > updating kubeconfig expiry |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion31.go:276 > setting default helm repository URL |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.9.2 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion32.go:10 > updating settings |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion32.go:16 > setting default kubctl shell |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion32.go:22 > setting default kubectl shell image |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.10.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion33.go:10 > updating stacks |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.9.3 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion31.go:86 > updating dockerhub |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.12.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion35.go:11 > updating user authorizations |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion35.go:17 > updating user authorizations |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.13.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion40.go:14 > trusting current edge endpoints |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.14.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion50.go:13 > updating required password length |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.15.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion60.go:14 > add gpu input field |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.16.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion70.go:10 > add IngressAvailabilityPerNamespace field |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion70.go:22 > moving snapshots from endpoint to new object |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion70.go:40 > deleting snapshot from endpoint |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.16.1 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion71.go:10 > removing orphaned snapshots |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.17.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion80.go:69 > transfer type field to details field for edge stack status |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion80.go:27 > updating existing endpoints to not detect metrics API for existing endpoints (k8s) |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion80.go:48 > updating existing endpoints to not detect metrics API for existing endpoints (k8s) |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:53 > migrating data to 2.18.0 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion90.go:48 > updating existing user theme settings |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_dbversion90.go:23 > clean up deleted endpoints from edge jobs |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/datastore/migrator/migrate_ce.go:79 > db migrated to 2.18.1 |
2023/05/01 12:18AM INF github.com/portainer/portainer/api/internal/ssl/ssl.go:80 > no cert files found, generating self signed SSL certificates |
2023/05/01 00:18:17 server: Reverse tunnelling enabled
2023/05/01 00:18:17 server: Fingerprint 6e:52:39:d3:a2:f4:97:4a:12:3a:a1:b2:f0:b5:d7:62
2023/05/01 00:18:17 server: Listening on 0.0.0.0:8000...
2023/05/01 12:18AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:765 > starting Portainer | build_number=29775 go_version=1.19.4 image_tag=linux-amd64-2.18.1 nodejs_version=18.16.0 version=2.18.1 webpack_version=5.68.0 yarn_version=1.22.19
2023/05/01 12:18AM INF github.com/portainer/portainer/api/http/server.go:342 > starting HTTPS server | bind_address=:9443
2023/05/01 12:18AM INF github.com/portainer/portainer/api/http/server.go:327 > starting HTTP server | bind_address=:9000
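For what it's worth, the log itself looks healthy: the migration to 2.18.1 completes and Portainer ends up listening on 9000 (HTTP) and 9443 (HTTPS) inside the container. A quick, hedged way to check from the NAS itself whether anything answers on the published port (assuming 9000 is the one mapped):

Code:
curl -I http://localhost:9000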
 
Ah... that happens once in a while (a browser thing) but isn't getting in the way, as the www is usually stripped back out. I also tried from different browsers (ones that don't add the www by default).
-- post merged: --

Also now getting an error, but no real diagnosis:
unversioned API: requested URI not found
-- post merged: --

There’s a WWW before the IP address?
Rusty - you were a great help two years back when I was setting this up. Not sure what happened, other than that in trying to fix it I essentially pulled a thread and headed further away from solved.

On the Synology Docker, I have a handful of *arr apps running under a single container. Apart from Portainer, the apps are all running fine (albeit in need of updates).

I likely need to delete any references to Portainer and reinstall. I don't recall the specifics of how it was originally created to get the one Portainer installed properly alongside that single container. I do have copies of the original compose data if needed; however, I do not want to wipe EVERYTHING back to essentially factory default... I just want Portainer-CE running again (from which I'll then install Watchtower and update).

---
I SSH'd into it and did manual installs (a few didn't take, then I got one to take, but I didn't get the configuration accurate). While I wasn't able to get to the Portainer host page from the start, the manual installs might have also contributed.
 
Well, getting Portainer up and running should be a simple SSH (as root) command:

Code:
docker run -d --name=portainer_ce --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest

Before that, of course, remove any current Portainer container and re-run it while connecting to the existing volume. If that fails, rerun it against a new volume and try again.
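For example, a minimal sketch of that sequence (the container name and the data path are assumptions - on Synology the data folder normally lives under /volume1/..., and it must exist before the run or the bind mount will fail):

Code:
# stop and remove the existing Portainer container (adjust the name to yours)
docker stop portainer_ce
docker rm portainer_ce

# make sure the bind-mount folder exists first
mkdir -p /volume1/docker/portainer_ce/data

# recreate the container against that folder
docker run -d --name=portainer_ce --restart always \
  -v /volume1/docker/portainer_ce/data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 9000:9000 portainer/portainer-ce:latest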
 
Just my 2c...

If you use the DSM firewall, check the list of firewall rules again.
It has happened to me, after an update or reinstall, that the rule for the Portainer ports disappeared and I had to create it again.
 
@dimfil The firewall has actually been turned off. Glad you mentioned it or I wouldn't have noticed; I will turn it back on once I get these issues ironed out.

@itsjasper Ages ago I had a certificate and that company went belly up... something I forgot about and that didn't seem to matter for the services I was using. Why would Portainer need a certificate? It is essentially (as I see it) an easier interface for working with Docker, all internal to the network.

@Rusty Removed the visible Portainer containers. That said, when I was doing the manual installs via terminal I kept having to change the container name, as it wouldn't accept a previous name even though it didn't look like anything had happened. There are likely five instances of Portainer that I am not seeing... how are those seen or removed? That said, the problems experienced originally were there before I installed those extra copies of Portainer.
 

Ran that command and got:

Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
[screenshots attached]
 
Here are a few commands I use to manage my portainer installation:
  • To list all current containers... sudo docker container ls -a
  • To stop a container (put in CONTAINER_ID found from the previous command) ... sudo docker container stop CONTAINER_ID
  • To remove a container... sudo docker container rm CONTAINER_ID
If you have a few rogue portainer containers that you want to clean up, use the ls -a command, stop them if running, and rm them. Then you can start a new install with the latest image, making sure to include -v /var/run/docker.sock:/var/run/docker.sock in the command.
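Put together, a hedged sketch of that cleanup (the filter matches anything created from the portainer-ce image; adjust it if you used a different image, or match on name instead):

Code:
# list every container, in any state, created from the portainer-ce image
sudo docker container ls -a --filter "ancestor=portainer/portainer-ce"

# force-remove them in one go (skip this if the list above was empty)
sudo docker container rm -f $(sudo docker container ls -aq --filter "ancestor=portainer/portainer-ce")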

I have a script that I use to update the image for my simple portainer setup.

I think the default HTTPS port is 9443, but could be wrong.
 
Here's my upgrade_portainer.sh script to install/update Portainer-CE. Change DATA_FOLDER to the path of your Portainer data folder in DSM. You can map just the ports you want to use externally; I have just HTTPS. It only expects to find, at most, one matching container.

Bash:
#!/bin/bash

#define parameters
CONTAINER_NAME="Portainer-CE"              # whatever you want to call the container
IMAGE="portainer/portainer-ce"             # image name on docker hub
IMAGE_VERSION="latest"                     # tagged version of image
DATA_FOLDER="/volume1/docker/portainer"    # !! you must create this directory !!

#port: leave parameter empty to ignore in container creation command
PORT_TUNNEL=""                             # internal port 8000
PORT_WEB_UI_HTTPS="9443"                   # internal port 9443
PORT_WEB_UI_HTTP=""                        # internal port 9000

#function to add desired ports to container creation command
#$1: your PORT_... number
#$2: container's internal port number
add_port () {
    if [ $1 ] && [ $2 ];
    then
        echo -n "-p ${1}:${2}"
    fi
}

# simple function to print out progress messages
echo_msg () {
    echo ""
    echo ">> $@"
}

#get ID of installed portainer-ce container, if any
CONTAINER_ID=`sudo docker container ls -a | grep "${IMAGE}" | awk '{print $1}'`

#stop and remove any installed portainer-ce, if found
if [ "${CONTAINER_ID}" ];
then

    echo_msg "${IMAGE} container found: ${CONTAINER_ID}"
    echo_msg "stopping ${IMAGE} container"
    sudo docker container stop "${CONTAINER_ID}"

    echo_msg "removing ${IMAGE} container from Docker"
    sudo docker container rm "${CONTAINER_ID}"

else
    echo_msg "did not find a ${IMAGE} container"
fi

#retrieve the correct version of the docker image
echo_msg "downloading lastest ${IMAGE} image"
sudo docker pull "${IMAGE}:${IMAGE_VERSION}"

#create new, running container using the required parameters linked to NAS folder data
echo_msg "creating new ${IMAGE} container"
sudo docker run -d `add_port "${PORT_TUNNEL}" 8000` `add_port "${PORT_WEB_UI_HTTPS}" 9443` `add_port "${PORT_WEB_UI_HTTP}" 9000` --name="${CONTAINER_NAME}" --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v "${DATA_FOLDER}":/data "${IMAGE}"

#list the new container that uses the new image. 'Up' status = it worked
echo_msg "check ${IMAGE} is running"
sudo docker container ls -a | grep "${IMAGE}"


And the output the other day when I last ran it...
Code:
$ ./upgrade_portainer.sh
Password:

>> portainer/portainer-ce container found: c147e035499f

>> stopping portainer/portainer-ce container
c147e035499f

>> removing portainer/portainer-ce container from Docker
c147e035499f

>> downloading lastest portainer/portainer-ce image
latest: Pulling from portainer/portainer-ce
772227786281: Already exists
96fd13befc87: Already exists
5171176db7f2: Pull complete
a143fdc4fa02: Pull complete
b622730c7bdc: Pull complete
69dd1305b74e: Pull complete
1b47df7d0b89: Pull complete
a323bd23deb3: Pull complete
d05534ed631a: Pull complete
001eed39bcf2: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:4488739b16a98364f7ffdb32f1152708ae2b2388add71b9aabbb1ab15275791c
Status: Downloaded newer image for portainer/portainer-ce:latest
docker.io/portainer/portainer-ce:latest

>> creating new portainer/portainer-ce container
5034c82047076fb1a6af9a96cb1cab67e4eb69608250ace0e39e23faba981b51

>> check portainer/portainer-ce is running
5034c8204707   portainer/portainer-ce           "/portainer"             11 seconds ago   Up Less than a second   8000/tcp, 9000/tcp, 0.0.0.0:9443->9443/tcp   Portainer-CE
 
I see a mismatch above:
/volume/docker/portainer_ce/data in the SSH command vs /volume/docker/portainer-ce in your folder structure.
Could it be the problem?
Good catch - so many experiments that it is getting a little messy.
The original directory was -ce. I changed the command line from _ce to -ce (didn't work), then changed the original directory from -ce to _ce. Still not working.

Note that while I am unable to see any Portainer instances (I deleted what I could see; none show in the list), it appears the instances still exist in the background, as each try requires a unique name because of the conflict with a previous name.

Code:
ash-4.4# docker run -d --name=portainer_ce --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
053fd7de40c780833119a3e2f4d43125a1ffb252d96202cd0fb85e2ab2a23f63
docker: Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
ash-4.4# docker run -d --name=portainer-ce --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
b48e20cc835128fc4b6d4dc8ae0cb860c21e77fd129a36e446a7d106d55f64f7
docker: Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
ash-4.4# docker run -d --name=portainer_ce --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
docker: Error response from daemon: Conflict. The container name "/portainer_ce" is already in use by container "053fd7de40c780833119a3e2f4d43125a1ffb252d96202cd0fb85e2ab2a23f63". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
ash-4.4# docker run -d --name=portainer_ce7 --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
f181f3df8217aa5ea6ace7bcdfc788b4a0c88875e32f84800928d66ca3a992e5
docker: Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
ash-4.4# docker run -d --name=portainer_ce7 --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
docker: Error response from daemon: Conflict. The container name "/portainer_ce7" is already in use by container "f181f3df8217aa5ea6ace7bcdfc788b4a0c88875e32f84800928d66ca3a992e5". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
ash-4.4# docker run -d --name=portainer_ce8 --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
7501189dcf9065aec8b764b2689052d9779c5283eb070ee8eba1f883eae470c5
docker: Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
ash-4.4#

Code:
sh-4.4# docker run -d --name=portainer_ce9 --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
00db39a2e5394fc17938de256a967d21f65b95dfb55bfc91bdf95592ed4ab64f
docker: Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
ash-4.4# docker run -d --name=portainer_ce9 --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
docker: Error response from daemon: Conflict. The container name "/portainer_ce9" is already in use by container "00db39a2e5394fc17938de256a967d21f65b95dfb55bfc91bdf95592ed4ab64f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
ash-4.4# docker run -d --name=portainer_ceA --restart always -v /volume/docker/portainer_ce/data:/data -v /var/run/docker.sock:/var/run/docker.sock -p 9000:9000 --pid=host portainer/portainer-ce:latest
5de9baee27cdbb9834f48cf2225129a660bcd20587d21d8ff67d567bbafb4487
docker: Error response from daemon: Bind mount failed: '/volume/docker/portainer_ce/data' does not exists.
ash-4.4#
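Worth noting from that output: even when the bind mount fails, each docker run still creates a (stopped) container, which is why every retry hits a name conflict - and the mount fails because '/volume/docker/portainer_ce/data' doesn't exist (on Synology the shared folders normally sit under /volume1, /volume2, etc.). A hedged way to clear the leftovers, using the names from the attempts above:

Code:
# the failed attempts are still registered, just not running
docker container ls -a | grep portainer

# remove them by name (or by the IDs printed above)
docker rm portainer_ce portainer-ce portainer_ce7 portainer_ce8 portainer_ce9 portainer_ceA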

-- post merged: --

Thank you, @Telos...

[screenshot attached]


All Portainers are now removed (confirmed by viewing the list after removal).
 
And none are running?

None are running (based on running that command); I see the other containers, but none are Portainer.
-- post merged: --

@Telos
Stopped and deleted all Portainers again. Ran the line... here is what came back: a different error. It looked like progress. I'm not following all the references to "already exists", as all the Portainers were deleted.

[screenshot of the error attached]
 
Ran the line... here is what came back: a different error
Looks like you are not using the 9000 port but the 8000 port, and that one is already in use by another container.

Clear this attempt, check which ports are available and use one of those (or use the default 9000), and try again.
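A quick sketch for checking what's already taken before picking a port (8000/9000/9443 are just Portainer's usual ports):

Code:
# ports each container is publishing
sudo docker ps --format '{{.Names}}\t{{.Ports}}'

# anything on the host already listening on the usual Portainer ports
sudo netstat -tlnp | grep -E ':(8000|9000|9443)'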
 
Want to say thank you to everyone responding; the support here is amazing...


@Rusty
Traffic for anything in the container runs through Gluetun, and looking at the compose file (saved for reference from when it was set up and working) and the current display, you are correct. The saved compose doesn't include Portainer; it must have been using a unique set of ports when it was set up.

[screenshots attached]
 
The saved compose doesn't include Portainer; it must have been using a unique set of ports when it was set up.
Regardless, your last attempt was to try and run Portainer on port 8000, and from this last image it is clear that it is already in use. So just correct the port for the Portainer container to some other value, and try to rerun it.
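For example, a sketch that keeps Portainer's internal port 9000 and just moves the host side to a free port (9002 is an arbitrary pick; the data path is the same assumption as before):

Code:
docker run -d --name=portainer_ce --restart always \
  -v /volume1/docker/portainer_ce/data:/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 9002:9000 portainer/portainer-ce:latest

The UI would then be at http://<NAS-IP>:9002. Note there is no -p mapping for 8000 at all; Portainer only uses that port for Edge agent tunnels.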
 
Such a novice... While I am logged in as admin on the Synology, the command prompt shows my username, and sudo or su are not engaging. The prompt yesterday, when things were working, was ash-4.4#.
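For what it's worth, a minimal sketch of getting back to that root prompt on DSM (assuming your account is in the administrators group; the ash-4.4# prompt just means a root ash shell, not a username):

Code:
sudo -i    # enter your own admin password; the prompt should end in #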
 
