I am trying to run Portainer on a Raspberry Pi 4.
After creating the admin user I can't get past the timeout warning:
"Your Portainer instance timed out for security purposes. To re-enable your Portainer instance, you will need to restart Portainer."
Tried:
stopping and starting Portainer
sudo docker restart portainer
recreating the container (rough commands below)
using another port
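(For reference, recreating the container looked roughly like this; the 9000 port mapping and the portainer_data volume are just what my setup uses, taken from the standard Portainer CE install instructions:)
sudo docker stop portainer && sudo docker rm portainer
sudo docker run -d -p 9000:9000 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest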
Does anyone have an idea of what's wrong here?
Thanks in advance for your effort, JB
Related
I am running Portainer in the cloud with NGINX Proxy Manager.
I want to accomplish the following:
I want to run AdGuard Home as a Docker container and set it up so that DNS queries from my devices go out as HTTPS requests, with NGINX Proxy Manager correctly routing the DNS traffic to my AdGuard container and returning the results.
I read this but still have issues implementing it:
https://github.com/AdguardTeam/AdguardHome/wiki/Encryption#install
I am trying to do this since I don't want to install a VPN on all my clients or open port 53 to the entire world.
Please help me out.
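In case it helps to see the setup: this is roughly how I'm starting AdGuard Home (the host paths and port mappings are my own guesses at something sensible, based on the official image docs), with the idea that NGINX Proxy Manager terminates HTTPS and forwards the DNS-over-HTTPS requests (the /dns-query path from the wiki page above) to this container:
docker run -d --name adguardhome --restart unless-stopped \
  -v /opt/adguardhome/work:/opt/adguardhome/work \
  -v /opt/adguardhome/conf:/opt/adguardhome/conf \
  -p 53:53/tcp -p 53:53/udp \
  -p 3000:3000/tcp -p 80:80/tcp \
  adguard/adguardhome
# the NPM proxy host (e.g. a hypothetical dns.example.com) then points at this container's web port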
I saw this problem come up over here: https://www.youtube.com/watch?v=csAmd0JxYV4
This was how it was addressed:
Duncan Ross, 6 months ago:
Great job! Just in case anyone out there has issues with the container deploying, saying that port 53 is already in use: the host system (in my case Ubuntu) is probably using it. Try this:
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
This fixed the issue and allowed the container to deploy... happy days.
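(One extra note in case someone still needs DNS resolution on the host itself afterwards: instead of disabling systemd-resolved completely, I've seen port 53 freed up by just turning off its stub listener, roughly like this on a standard Ubuntu install. This is a sketch, adjust to your own setup:)
sudo sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo systemctl restart systemd-resolved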
I'm running Ubuntu Desktop as a server in a VM on ESXi, with Docker and Home Assistant running there.
The problem is: when the power fails, the machine auto-restarts and the VM reboots, but Docker does not always come back up, and Home Assistant doesn't work either.
When I type in
sudo docker ps
I get:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Then I go to ESXi, hit restart, and everything works like a charm.
What can it be, guys?
You need to enable the docker service to run it automatically when the system restarts. The command is:
sudo systemctl enable docker
Thank you, I tried that and got this line:
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
I guess this should help me out.
But I still don't understand why it auto-started sometimes, and sometimes not.
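I can't say for sure why it only came up sometimes, but two things worth checking (the container name below is just a placeholder): whether the unit really is enabled now, and whether the containers themselves have a restart policy, since the daemon coming back up does not restart containers that were started without one.
systemctl is-enabled docker
# should print "enabled"
docker update --restart unless-stopped homeassistant
# "homeassistant" is a placeholder; use your actual container name
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' homeassistant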
I was watching an nginx tutorial and, to follow along, I created an Ubuntu 18.04 Docker container. I installed and started the nginx service as shown in the tutorial and everything was going well. Then I removed both the Docker image and the container I was working on. Despite removing the container and image, the address "http://104.200.23.232/" still returns the nginx welcome page on my machine. As far as I can tell, this indicates that the nginx service is still up and running. My question is: how can I stop and disable auto start of the nginx service now?
Note: my host machine's operating system is Windows 10, and restarting the computer did not help solve this problem.
Yeah, blind guessing here, but the IP "104.200.23.232" is registered to Linode, which is a cloud hosting/VPS provider, so it's probably not the IP of your local computer. Where did you get this IP from? Did you try or install something on a cloud server? I'm pretty sure some things are just mixed up here.
Use "prune" to start fresh:
docker system prune
Try this to stop nginx inside the Docker container:
service nginx stop
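If you want to rule out anything local, a quick sanity check along these lines should show whether your own machine is serving that page at all (nothing fancy, just docker and curl):
docker ps -a
# any nginx containers left at all?
curl -I http://localhost
# does your own machine answer on port 80?
curl -I http://104.200.23.232
# versus the remote Linode address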
I'm new to Docker and I'm running macOS Sierra. I have installed Docker and can open Kitematic; on localhost I can see my containers without issue and can access the site. When I switch to VirtualBox, my containers are no longer in Kitematic and I can't figure out how to access them in my browser.
Could anyone give me some insight on what to do here?
Your intention for getting the IP of your host (192.*) is unclear to me.
You can achieve this by simply adding the net flag to your docker command: '--net host'.
For more details go here.
If your intention is to make your container available to everyone, then there is no need to do that. Everyone should be able to access your Docker container via your machine's IP followed by the port number (http://192.168.x.x:xxxx).
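If Kitematic is pointed at a VirtualBox machine (the Docker Toolbox style setup), the containers live at the VM's IP rather than at localhost, so something like this should tell you where to point the browser (the machine name 'default' is the usual one, adjust if yours differs):
docker-machine ls
# see which machine is active
docker-machine ip default
# prints the VM's IP, often something like 192.168.99.100
# then browse to http://<that-ip>:<published-port>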
I have a problem using Docker swarm mode.
I want to have high availability with swarm mode.
I think I can do that with swarm's rolling update.
Something like this...
docker service update --env-add test=test --update-parallelism 1 --update-delay 10s 6bwm30rfabq4
However there is a problem.
My Docker image has an entrypoint. Because of this there is a little delay before the service (I mean the Docker container) is really up. But the Docker service just thinks the service is already running, because the status of the container is 'Up', even though the entrypoint is still doing some work. So some containers return errors when I try to connect to the service.
For example, if I create a Docker service named 'test' and scale it up to 4 with port 8080, I can access test:8080 in a web browser. Then I try a rolling update with the --update-parallelism 1 --update-delay 10s options. After that I try to connect to the service again... one container returns an error, because the Docker service thinks that container is already running, even though it still isn't up because of the entrypoint. And after 10s another container returns an error, because the update has moved on and the Docker service also thinks that container is already up.
So, is there any solution to this problem?
Should I make some nginx settings to disconnect from the erroring container and reconnect to another one?
The HEALTHCHECK Dockerfile instruction works for this use case. You specify how Docker should check whether the container is available, and the check is used during updates as well as for monitoring service health in Swarm.
There's a good article about it here: Reducing Deploy Risk With Docker’s New Health Check Instruction.
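As a rough sketch of the idea (the image name and the health endpoint are made up here, not from your post): you either bake a HEALTHCHECK into the Dockerfile, or attach the equivalent check when creating the service, so a task only counts as healthy once the entrypoint has finished and the app actually answers:
docker service create --name test --replicas 4 -p 8080:8080 \
  --health-cmd "curl -f http://localhost:8080/ || exit 1" \
  --health-interval 5s --health-retries 3 --health-start-period 30s \
  yourrepo/yourapp:latest
# assumes curl exists inside the image; with a health check in place Swarm
# should not route traffic to a task until it has reported healthy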