As WST16 wrote, this is in no way related to router functionality.
It is simply how certificate validation in HTTPS works:
a certificate is issued for a Common Name (CN) and zero to n Subject Alternative Names (SANs), and typically not for a bare IP address. If a service is set up to provide HTTPS access using this certificate, clients need to address the service using a URL whose FQDN matches either the CN or one of the SANs.
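To see what that means in practice, here is a small sketch using OpenSSL: it generates a throwaway self-signed certificate with a CN and two SANs, then prints the names it is valid for. The hostnames are placeholders of my own, not anything from the original setup, and `-addext`/`-ext` require OpenSSL 1.1.1 or newer.

```shell
# Generate a throwaway self-signed certificate with one CN and two SANs
# (example.home.lan is a placeholder hostname)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=example.home.lan" \
  -addext "subjectAltName=DNS:example.home.lan,DNS:www.example.home.lan"

# Inspect which names the certificate covers
openssl x509 -in /tmp/demo.crt -noout -subject -ext subjectAltName
```

A client connecting to https://example.home.lan or https://www.example.home.lan would accept this certificate (once trusted); connecting to the same service by its IP address would fail name validation.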
That said, before you add an exception for the certificate in your client, you might want to inspect it and make sure it is the certificate you deployed. If a man-in-the-middle attack should ever happen, your client will warn you about a mismatch between the previously trusted exception and the current certificate.
What is the advantage of running internal services over HTTPS? Is the network you run them on untrustworthy? Is the network you use to access them untrustworthy? IMHO, HTTPS only adds value if services are exposed to the internet or mutual TLS is required for services to authenticate each other.
For internet-exposed services, I ended up with this setup:
- pointing a wildcard domain to my router (I host my own dyndns service that updates the DNS entry daily to my current dynamic WAN IP using my provider's DNS API)
- forwarding the incoming traffic from my router to the port my Docker cluster uses to publish Traefik
- using Traefik to handle the TLS lifecycle
  - terminating the TLS traffic and forwarding requests to the target container based on Docker labels
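The Traefik part of the setup described above could look roughly like this compose sketch. All service names, the domain, and the e-mail address are placeholders of mine, and I am assuming a Traefik v2 setup with Let's Encrypt via the TLS challenge; the original post does not show the actual configuration.

```yaml
# docker-compose.yml sketch (domains, e-mail, and service names are placeholders)
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    labels:
      # Traefik discovers this container and routes/terminates TLS for it
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=le
```

With this pattern, adding a new HTTPS-exposed service is just a matter of attaching the right labels to its container.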
Recently I added KeyCloak to the mix to get some sort of single sign-on. When you try to access the first target service, you are forwarded to a KeyCloak login screen and returned to your target service after a successful login. SSO only works consistently for services that support OIDC/SAMLv2 or the X-FORWARD-USER header out of the box... everything else might require a second login in the target application.
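One common way to wire such a login flow into Traefik is its ForwardAuth middleware, pointed at an OIDC proxy that sits in front of the identity provider. The sketch below assumes oauth2-proxy as that proxy; the original post does not say how KeyCloak is actually integrated, so treat the address and names as placeholders.

```yaml
# Label sketch: send requests through a forward-auth middleware before the app.
# oauth2-proxy:4180 is an assumed auth endpoint in front of KeyCloak.
labels:
  - traefik.http.middlewares.sso.forwardauth.address=http://oauth2-proxy:4180
  - traefik.http.middlewares.sso.forwardauth.authResponseHeaders=X-Forwarded-User
  - traefik.http.routers.myapp.middlewares=sso
```

Unauthenticated requests get redirected to the login screen; once authenticated, the user identity is passed on in a response header, which is exactly why apps that honor such a header get SSO "for free" while others still show their own login.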