I had Portainer working yesterday when I installed it. After shutting the server down in the evening and starting it again today, I can't access it anymore (ERR_CONNECTION_TIMED_OUT).
I think there may be a problem with the configuration of Docker itself, as I tried (and failed) to set up Docker Swarm / the Portainer agent yesterday.
What I tried so far:
sudo docker rm -f portainer
sudo docker volume rm -f portainer_data
sudo docker run -d --privileged -p 9000:9000 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
I still got ERR_CONNECTION_TIMED_OUT. To rule out firewall issues:
sudo ufw allow 9000
and then
nc -l 9000
When I now try to access Portainer on port 9000, nc captures the following request:
GET / HTTP/1.1
Host: 192.168.178.46:9000
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
Cookie: _pk_id.1.15bd=2476c3cd963cb84b.1667402782.
I don't know exactly what that means, but it looks like my request reaches the server and for some reason Portainer doesn't answer it.
I also checked Portainer's logs:
sudo docker logs portainer
2022/11/03 09:45AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:530 > encryption key file not present | filename=portainer
2022/11/03 09:45AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:549 > proceeding without encryption key |
2022/11/03 09:45AM INF github.com/portainer/portainer/api/database/boltdb/db.go:124 > loading PortainerDB | filename=portainer.db
2022/11/03 09:45AM INF github.com/portainer/portainer/api/internal/ssl/ssl.go:80 > no cert files found, generating self signed SSL certificates |
2022/11/03 09:45:47 server: Reverse tunnelling enabled
2022/11/03 09:45:47 server: Fingerprint cf:97:21:21:46:a0:bf:ef:aa:e0:7c:66:f9:77:86:67
2022/11/03 09:45:47 server: Listening on 0.0.0.0:8000...
2022/11/03 09:45AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:789 > starting Portainer | build_number=24674 go_version=1.18.3 image_tag=linux-amd64-2.16.0 nodejs_version=14.20.1 version=2.16.0 webpack_version=5.68.0 yarn_version=1.22.19
2022/11/03 09:45AM INF github.com/portainer/portainer/api/http/server.go:337 > starting HTTPS server | bind_address=:9443
2022/11/03 09:45AM INF github.com/portainer/portainer/api/http/server.go:322 > starting HTTP server | bind_address=:9000
Okay, I managed to fix this.
TL;DR:
I removed the "insecure-registries" line from /etc/docker/daemon.json
As I already mentioned, I tried to get Docker Swarm / the Portainer agent running (to be able to look inside the volumes I created).
When creating a Docker Swarm environment via Portainer, it asks for an environment address; I tried LOCALHOST:9000 and the connection test failed.
I then tried to figure out why that fails and found that I needed to add
{ "insecure-registries" : ["LOCALHOST-IP:9000"] }
to /etc/docker/daemon.json. That didn't solve the issue, and I forgot about it. It turns out that removing this line (and restarting the server) gave me access to Portainer again.
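For reference, the edit can be scripted. This is a minimal sketch that operates on a sample file in the current directory; the sample contents are assumptions, and on the real server you would back up and edit /etc/docker/daemon.json, then restart Docker.

```shell
# Sample file standing in for /etc/docker/daemon.json (assumed contents).
printf '%s\n' '{ "insecure-registries": ["192.168.178.46:9000"], "log-driver": "json-file" }' > daemon.json

# Drop the "insecure-registries" key and keep everything else valid JSON.
python3 - <<'EOF'
import json

with open("daemon.json") as f:
    cfg = json.load(f)
cfg.pop("insecure-registries", None)  # remove the key if present
with open("daemon.json", "w") as f:
    json.dump(cfg, f, indent=2)
EOF

cat daemon.json
# afterwards, on the real server: sudo systemctl restart docker
```

Editing the JSON programmatically avoids the classic trap of leaving a trailing comma behind, which would stop the Docker daemon from starting at all.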
Still need to figure out how I can look inside volumes from within Portainer, but that's a chapter of another book.
Context:
I'm setting up a deployment tooling image which contains aws-cli, kubectl and helm. Testing the image locally, I found that kubectl times out inside the container despite working fine on the host (my laptop).
Tested with the alpine/k8s:1.19.16 image as well (same docker run command options) and ran into the same issue.
What I did:
I'm on OS X and have kubectl, aws-cli and helm installed via brew
I have valid (not expired yet) AWS credential (~/.aws/credentials) and ~/.kube/config
running aws s3 ls s3://my-bucket works on my laptop, returning the correct response
running kubectl get pods -A works on my laptop, returning the correct response
I then switched to running these in containers with docker run, with no context change. The issue exists both in the image I created and in an official k8s tooling image from Alpine; for simplicity I'll use alpine/k8s:1.19.16 in my commands.
command for launching container console: docker run --rm -it --entrypoint="" -e AWS_PROFILE -v /Users/myself/.aws:/root/.aws -v /Users/myself/.kube/config:/root/.kube/config -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock -e GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" alpine/k8s:1.19.16 /bin/bash
in the launched console:
running aws s3 ls s3://my-bucket still works fine, returning the correct response
running kubectl get pods -A times out
I compared the verbose output of kubectl get pods --v=8 (with the same context and ~/.kube/config):
on the host (my laptop)
I0826 01:24:11.999265 43571 loader.go:372] Config loaded from file: /Users/myself/.kube/config
I0826 01:24:12.014315 43571 round_trippers.go:463] GET https://<current-context-k8s-dns-name>/apis/external.metrics.k8s.io/v1beta1?timeout=32s
I0826 01:24:12.014330 43571 round_trippers.go:469] Request Headers:
I0826 01:24:12.014351 43571 round_trippers.go:473] User-Agent: kubectl/v1.24.0 (darwin/amd64) kubernetes/4ce5a89
I0826 01:24:12.014358 43571 round_trippers.go:473] Accept: application/json, */*
I0826 01:24:12.443152 43571 round_trippers.go:574] Response Status: 200 OK in 428 milliseconds
in the console (docker container):
I0826 08:25:47.066787 19 loader.go:375] Config loaded from file: /root/.kube/config
I0826 08:25:47.067505 19 round_trippers.go:421] GET https://<current-context-k8s-dns-name>/api?timeout=32s
I0826 08:25:47.067532 19 round_trippers.go:428] Request Headers:
I0826 08:25:47.067538 19 round_trippers.go:432] Accept: application/json, */*
I0826 08:25:47.067542 19 round_trippers.go:432] User-Agent: kubectl/v1.19.16 (linux/amd64) kubernetes/e37e4ab
I0826 08:26:17.047076 19 round_trippers.go:447] Response Status: in 30000 milliseconds
The ~/.kube/config was mounted correctly, and the verbose log confirms it is loaded, pointing at the right https endpoint. I also tried to ssh (by IP) to one of the master nodes from both the container and the laptop: the laptop worked, but the same ssh command timed out from the container.
nslookup <current-context-k8s-dns-name> from both container and laptop gave slightly different output.
from my laptop(host):
nslookup <current-context-k8s-dns-name>
Server: 10.253.0.2
Address: 10.253.0.2#53
Non-authoritative answer:
Name: <current-context-k8s-dns-name>
Address: 172.20.50.40
Name: <current-context-k8s-dns-name>
Address: 172.20.50.41
Name: <current-context-k8s-dns-name>
Address: 172.20.50.42
from the container:
nslookup <current-context-k8s-dns-name>
Server: 192.168.65.5
Address: 192.168.65.5:53
Non-authoritative answer:
Name: <current-context-k8s-dns-name>
Address: 172.20.50.40
Name: <current-context-k8s-dns-name>
Address: 172.20.50.41
Name: <current-context-k8s-dns-name>
Address: 172.20.50.42
I have a feeling this has something to do with Docker networking, but I don't know enough to solve it myself. I'd be deeply grateful if anyone can help explain this to me.
Thanks in advance
This turned out to be a Docker networking issue.
I ran tcpdump + telnet on both host and container to compare the output, and it seems the packets were not even being routed to the host.
I ended up doing a docker network prune, which resolved the issue. Nothing was wrong with the setup; it's a known issue for Docker on OS X.
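The resolution above can be written out as a short, guarded sketch. It needs a running Docker daemon (and so no-ops elsewhere); note that prune removes every network not used by at least one container, so it is worth listing them first.

```shell
# Look first (read-only), then prune. Guarded so the snippet does nothing
# on machines without a reachable Docker daemon.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker network ls              # inspect the current networks first
  docker network prune -f        # remove all unused networks
  result="pruned"
else
  result="no docker daemon; skipped"
fi
echo "$result"
```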
Trying to communicate with a running docker container by running a simple curl:
curl -v -s -X POST http://localhost:4873/_session -d \'name=some\&password=thing\'
This works fine from any shell (login/interactive), but fails miserably when done from a script:
temp=$(curl -v -s -X POST http://localhost:4873/_session -d \'name=some\&password=thing\')
echo $temp
With error output suggesting a connection reset:
* Trying 127.0.0.1:4873...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4873 (#0)
> POST /_session HTTP/1.1
> Host: localhost:4873
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Length: 29
> Content-Type: application/x-www-form-urlencoded
>
} [29 bytes data]
* upload completely sent off: 29 out of 29 bytes
* Recv failure: Connection reset by peer <-- this! why?
* Closing connection 0
I'm lost, and any hint is appreciated.
PS: I tried without the subshell and the same thing happens, so it's something about the script or the way it's executed.
Edit 1
Added the docker-compose file. I don't see why a regular shell works but the script does not. Note that the script is not run inside Docker; it is also executed from the host.
version: "2.1"
services:
verdaccio:
image: verdaccio/verdaccio:4
container_name: verdaccio-docker-local-storage-vol
ports:
- "4873:4873"
volumes:
- "./storage:/verdaccio/storage"
- "./conf:/verdaccio/conf"
volumes:
verdaccio:
driver: local
Edit 2
So temp=$(curl -v -s http://www.google.com) works fine in the script. It's some kind of networking issue, but I still haven't figured out why.
Edit 3
Lots of people suggested reformatting the payload data, but the same error is thrown even without a payload. Also note that I'm on Linux, so I'm not sure whether any permissions play a role here.
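As an editorial aside on the payload: the backslash-escaped quotes in the original command become literal quote characters inside the data curl sends. A quick, self-contained way to see what the shell actually passes (the show_args helper is hypothetical, defined just for this demonstration):

```shell
# Hypothetical helper: prints each argument it receives on its own line,
# bracketed, so quoting effects become visible.
show_args() { printf '[%s]\n' "$@"; }

# The -d value exactly as written in the question: the backslashes make the
# quotes literal, so they travel inside the payload.
show_args \'name=some\&password=thing\'
# prints: ['name=some&password=thing']
```

That is a malformed payload, though as the question notes it is not the cause of the connection reset, since the reset happens even with no payload at all.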
If you are using a bash script, can you update the script with the change below and try again? Note that the curl command has to actually be executed with command substitution ($(...)) rather than stored as a string:
address="http://127.0.0.1:4873/_session"
cred="{\"name\":\"some\", \"password\":\"thing\"}"
temp=$(curl -v -s -X POST "$address" -d "$cred")
echo "$temp"
I suspect the issue is within the script and not with docker.
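To make the distinction this answer relies on explicit (a generic sketch, not specific to curl): assigning a quoted string merely stores text, while $(...) runs the command and captures its output.

```shell
# Plain string assignment: nothing is executed, the text is just stored.
cmd='echo hello'
# Command substitution: echo actually runs and its output is captured.
out=$(echo hello)

printf '%s\n' "$cmd"   # prints: echo hello
printf '%s\n' "$out"   # prints: hello
```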
If you run your container in the default (bridge) mode, the Docker daemon places it on a separate network, so the localhost of your host machine and that of your container are different.
If you want to see the host machine's ports from your container, try running it with the flag --network="host" (a detailed description can be found here).
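A sketch of the two modes, using the image and port from the question's compose file (the docker run lines are shown as comments because they need a live daemon; the snippet itself only checks whether the docker CLI is present):

```shell
# Bridge (default) mode: the container gets its own network namespace, so
# its localhost is not the host's localhost; published ports bridge the two:
#
#   docker run -d --rm -p 4873:4873 verdaccio/verdaccio:4
#
# Host mode: the container shares the host's network stack, so localhost
# means the same thing on both sides and -p is unnecessary:
#
#   docker run -d --rm --network=host verdaccio/verdaccio:4
#
if command -v docker >/dev/null 2>&1; then
  docker_available="yes"
else
  docker_available="no"
fi
echo "docker CLI available: $docker_available"
```

Note that for this particular question the container's port was already published (4873:4873), so host networking is a workaround rather than the root cause.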
I have downloaded the Portainer image and created the container on the Docker manager node, using the command below.
docker run -d -p 61010:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
But after some time the container exits. Also, when I access Portainer on the above port it just shows the Portainer loading screen and nothing happens. Please find below the logs for Portainer:
2019/10/16 16:20:58 server: Reverse tunnelling enabled
2019/10/16 16:20:58 server: Fingerprint 43:68:57:37:e4:3f:f7:98:bd:52:13:39:c6:6d:24:c9
2019/10/16 16:20:58 server: Listening on 0.0.0.0:8000...
2019/10/16 16:20:58 Starting Portainer 1.22.1 on :9000
2019/10/16 16:20:58 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message:
starting tunnel management process]
2019/10/16 16:25:58 No administrator account was created after 5 min. Shutting down the Portainer
instance for security reasons.
2019/10/16 16:30:12 Templates already registered inside the database. Skipping template import.
2019/10/16 16:30:12 server: Reverse tunnelling enabled
2019/10/16 16:30:12 server: Fingerprint 43:68:57:37:e4:3f:f7:98:bd:52:13:39:c6:6d:24:c9
2019/10/16 16:30:12 server: Listening on 0.0.0.0:8000...
2019/10/16 16:30:12 Starting Portainer 1.22.1 on :9000
2019/10/16 16:30:12 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message:
starting tunnel management process]
2019/10/16 16:35:12 No administrator account was created after 5 min. Shutting down the Portainer
instance for security reasons.
I am not sure whether Portainer is running on 61010. Also, do I need to install the Agent for this to work? Please help me resolve this.
Follow the docs and it should work:
Quick start: If you are running Linux, deploying Portainer is as simple as:
$ docker volume create portainer_data
$ docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Voilà, you can now use Portainer by accessing port 9000 on the server where Portainer is running.
Once you access localhost:9000 in the browser, you will be required to create an admin account; afterwards you will see the Portainer UI.
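Incidentally, the container exiting is explained by the question's own log lines: Portainer 1.x shuts itself down when no administrator account is created within five minutes. A guarded sketch of the usual remedy, where "portainer" is a placeholder for the actual container name or ID:

```shell
# Restart the stopped instance, then open the UI promptly to create the
# admin account before the 5-minute window closes again.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker restart portainer || true   # "portainer" is a placeholder name
  restarted="attempted"
else
  restarted="skipped (no docker daemon)"
fi
echo "$restarted"
```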
Hello everybody and happy new year,
I have some issues connecting to my Bluemix container.
I followed this IBM Bluemix guide to learn how to pull a Docker image into the Bluemix image repository, which works.
Then I executed the command to open port 9080 on the server side (the VM one is 5000 according to my Dockerfile).
PS: I have tried with -P instead of -p 9080, and with -p 9080:5000, but none of these fixed the issue.
cf ic -v run -d -p 9080 --name testdebug registry.ng.bluemix.net/datainjector/esolom python testGUI.py
after a "cf ic ps" I obtain:
CONTAINER ID IMAGE
8457d4bb-247 registry.ng.bluemix.net/datainjector/esolom:latest
COMMAND PORTS NAMES
"python testGUI.py " 9080/tcp testdebug
The debug command (executed while running the image) reports this:
DEMANDE : [2017-01-12T11:57:42+01:00]
POST /UAALoginServerWAR/oauth/token HTTP/1.1
Host: login.ng.bluemix.net
Accept: application/json
Authorization: [DONNEES PRIVEES MASQUEES]
Connection: close
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.22.2+a95e24c / darwin
grant_type=refresh_token&refresh_token=eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJmMTE5ZTI2MC0zZGE4LTQ5NzctOTI4OS05YjY1ZDcwMmM2OWQtciIsInN1YiI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInNjb3BlIjpbIm9wZW5pZCIsInVhYS51c2VyIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIl0sImlhdCI6MTQ4NDIxMTE1NSwiZXhwIjoxNDg2ODAzMTU1LCJjaWQiOiJjZiIsImNsaWVudF9pZCI6ImNmIiwiaXNzIjoiaHR0cHM6Ly91YWEubmcuYmx1ZW1peC5uZXQvb2F1dGgvdG9rZW4iLCJ6aWQiOiJ1YWEiLCJncmFudF90eXBlIjoicGFzc3dvcmQiLCJ1c2VyX25hbWUiOiJlbW1hbnVlbC5zb2xvbUBmci5pYm0uY29tIiwib3JpZ2luIjoidWFhIiwidXNlcl9pZCI6ImZkMWVmM2Q3LTI2OTQtNDQ4Ni1iNjY2LWRmNTVjY2M4MzVmOCIsInJldl9zaWciOiI2MWNkZjM4MiIsImF1ZCI6WyJjZiIsIm9wZW5pZCIsInVhYSIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdfQ.PIQlkKPDwxfa0c6951pO52qcAzggfPGrsCMuFl4V-eY&scope=
REPONSE : [2017-01-12T11:57:42+01:00]
HTTP/1.1 200 OK
Connection: close
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store, max-age=0, must-revalidate,no-store
Content-Security-Policy: default-src 'self' www.ibm.com 'unsafe-inline';
Content-Type: application/json;charset=UTF-8
Date: Thu, 12 Jan 2017 10:57:42 GMT
Expires: 0
Pragma: no-cache,no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=2592000 ; includeSubDomains
X-Archived-Client-Ip: 169.54.180.83
X-Backside-Transport: OK OK,OK OK
X-Client-Ip: 169.54.180.83,91.151.65.169
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Global-Transaction-Id: 3429006079
X-Powered-By: Servlet/3.1
X-Vcap-Request-Id: 8990fd56-827c-4956-696d-497922464ac0,04094b44-0ace- 4891-6ec0-c4855fd481f7
X-Xss-Protection: 1; mode=block
6f6
{"access_token":"[DONNEES PRIVEES MASQUEES]","token_type":"[DONNEES PRIVEES MASQUEES]","refresh_token":"[DONNEES PRIVEES MASQUEES]","expires_in":1209599,"scope":"cloud_controller.read password.write cloud_controller.write openid uaa.user","jti":"9619d1dd-995f-41b4-8a8a-825af8397ccb"}
0
ae4b3e08-4ba6-47d9-bf1f-30654af7fcfc
Next I bound an IP that I had requested:
cf ic ip request // 169.46.18.243 was given
cf ic ip bind 169.46.18.243 testdebug
OK
The IP address was bound successfully.
And the "cf ic ps" command gave me this:
CONTAINER ID IMAGE
1afd6916-718 registry.ng.bluemix.net/datainjector/esolom:latest
COMMAND CREATED STATUS
"python testGUI.py " 5 minutes ago Running 5 minutes ago
PORTS NAMES
169.46.18.243:9080->9080/tcp testdebug
Associated logs:
cf ic logs -ft testdebug
2017-01-12T12:45:54.531512850Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:334: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
32017-01-12T12:45:54.531555759Z SNIMissingWarning
�2017-01-12T12:45:54.531568916Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
92017-01-12T12:45:54.531576871Z InsecurePlatformWarning
�2017-01-12T12:45:54.836875400Z /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:132: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
92017-01-12T12:45:54.836895214Z InsecurePlatformWarning
�2017-01-12T12:45:54.884966459Z WebSocket transport not available. Install eventlet or gevent and gevent-websocket for improved performance.
Y2017-01-12T12:45:54.889150620Z * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
So, is the fact that the same port is used on both the host and the VM responsible for the "Connection: close" in the debug log? Or are those two different problems?
Does the "Connection: close" explain why I can't connect to the web app?
Do you have an idea of how to fix this (something to fix in the image, or an additional option in the CLI)?
Thank you for reading, for your commitment and for your answers!
PS:
Clues: I'm looking into modifying the Dockerfile. I read that I have to add two instructions to integrate my Docker image with Bluemix: a sleep command, so that the container is up before it is called, and apparently the "ENV PORT 3000" instruction. Here is my Dockerfile; don't hesitate to review it, as a simple error happens easily.
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install python2.7
RUN apt-get -y install python-pip
RUN pip install Flask
RUN pip install ibmiotf
RUN pip install requests
RUN pip install flask-socketio
RUN pip install cloudant
ENV PORT=3000
EXPOSE 3000
ADD ./SIARA /opt/SIARA/
WORKDIR /opt/SIARA/
CMD (sleep 60)
CMD ["python", "testGUI.py"]
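One detail worth flagging in this Dockerfile: Docker only honors the last CMD instruction, so the `CMD (sleep 60)` line is silently discarded and only `python testGUI.py` runs. To keep the delay, the two have to be combined into a single instruction, e.g. (a sketch):

```dockerfile
# Only the last CMD in a Dockerfile takes effect; combining the delay and
# the app start into one shell command keeps both.
CMD ["sh", "-c", "sleep 60 && python testGUI.py"]
```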
I changed the Dockerfile to:
CMD sleep 60 && python testGUI.py
after seeing @gile's comment.
Even though it didn't work immediately, after 3 hours it worked surprisingly well!
EDIT: it seems that was just a lucky shot; I can't access the container anymore. My guess is that it worked because Docker Hub rebuilt the image after I committed the changes to the Dockerfile, since it worked just after I copied the image from the Docker Hub repo to the Bluemix private repo.
Does this highlight an issue? I don't understand why it stopped working again...
Pretty straightforward:
christian@christian:~/development$ docker -v
Docker version 1.6.2, build 7c8fca2
I ran this command to start the container:
docker run --detach --name neo4j --publish 7474:7474 \
--volume $HOME/neo4j/data:/data neo4j
Nothing exciting here; this should all just work.
But http://localhost:7474 doesn't respond. When I jump into the container, it seems to respond just fine (see the debug session). What did I miss?
christian@christian:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d9e0d5d2f73 neo4j:latest "/docker-entrypoint. 15 minutes ago Up 15 minutes 7473/tcp, 0.0.0.0:7474->7474/tcp neo4j
christian@christian:~$ curl http://localhost:7474
^C
christian@christian:~$ time curl http://localhost:7474
^C
real 0m33.353s
user 0m0.008s
sys 0m0.000s
christian@christian:~$ docker exec -it 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2 /bin/bash
root@2d9e0d5d2f73:/var/lib/neo4j# curl http://localhost:7474
{
"management" : "http://localhost:7474/db/manage/",
"data" : "http://localhost:7474/db/data/"
}root@2d9e0d5d2f73:/var/lib/neo4j# exit
christian@christian:~$ docker logs 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2
Starting Neo4j Server console-mode...
/var/lib/neo4j/data/log was missing, recreating...
2016-03-07 17:37:22.878+0000 INFO No SSL certificate found, generating a self-signed certificate..
2016-03-07 17:37:25.276+0000 INFO Successfully started database
2016-03-07 17:37:25.302+0000 INFO Starting HTTP on port 7474 (4 threads available)
2016-03-07 17:37:25.462+0000 INFO Enabling HTTPS on port 7473
2016-03-07 17:37:25.531+0000 INFO Mounting static content at /webadmin
2016-03-07 17:37:25.579+0000 INFO Mounting static content at /browser
2016-03-07 17:37:26.384+0000 INFO Remote interface ready and available at http://0.0.0.0:7474/
I can't reproduce this. Docker 1.8.2 and 1.10.0 are OK with your case:
docker run --detach --name neo4j --publish 7474:7474 neo4j
curl -i 127.0.0.1:7474
HTTP/1.1 200 OK
Date: Tue, 08 Mar 2016 16:45:46 GMT
Content-Type: application/json; charset=UTF-8
Access-Control-Allow-Origin: *
Content-Length: 100
Server: Jetty(9.2.4.v20141103)
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
Try upgrading Docker and check your netfilter rules for forwarding.
Instead of making the request to localhost, you'll want to use the docker-machine VM's IP address, which you can determine with this command:
docker-machine inspect default | grep IPAddress
or
curl -i http://$(docker-machine ip default):7474/
The default IP address is usually 192.168.99.100.
OK, basically I removed the volume mount from the docker args and it works. Ultimately, I don't want an out-of-container mount anyway. Thank you @LoadAverage for cluing me in. It's still not 'right', but for my purposes I don't care.
christian@christian:~/development$ docker run --detach --name neo4j --publish 7474:7474 neo4j
6c94527816057f8ca1e325c8f9fa7b441b4a5d26682f72d42ad17614d9251170
christian@christian:~/development$ curl http://127.0.0.1:7474
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
christian@christian:~/development$