Docker swarm: manager 1 grants access, manager 2 times out - docker

I work with 2 managers and 4 worker nodes.
root@ist-manager-1:~# docker node ls
ID     HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
xx     antep-db-1        Ready    Active                          20.10.23
xx     antep-manager-1   Ready    Active         Leader           20.10.23
xx     antep-worker-1    Ready    Active                          20.10.23
xx     ist-db-1          Ready    Active                          20.10.23
xx *   ist-manager-1     Ready    Active         Reachable        20.10.23
xx     ist-worker-1      Ready    Active                          20.10.23
I have a microservice that I wrote myself, running on Laravel with Octane.
This is the YAML file I used to deploy the service:
version: '3.2'
services:
  service:
    image: xx/auth-service-roudrunner:1.0.2
    tty: true
    networks:
      - kong-net
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.type == worker
    environment:
      - DB_HOST=auth_db
      - DB_PORT=3306
      - DB_DATABASE=auth-db
      - DB_USERNAME=root
      - DB_PASSWORD=xx
  db:
    image: mysql
    tty: true
    networks:
      - kong-net
    deploy:
      placement:
        constraints:
          - node.role == worker
          - node.labels.type == db
    environment:
      - MYSQL_ROOT_PASSWORD=xxx
      - MYSQL_DATABASE=auth-db
networks:
  kong-net:
    external: true
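For reference, a sketch of how this stack would be labelled and deployed (my own commands, not from the original post; the stack name auth is an assumption implied by the auth_service and auth_db hostnames):

# Label the nodes so the placement constraints can be satisfied
docker node update --label-add type=worker antep-worker-1
docker node update --label-add type=worker ist-worker-1
docker node update --label-add type=db antep-db-1
docker node update --label-add type=db ist-db-1

# Deploy the stack and check which nodes the tasks actually landed on
docker stack deploy -c docker-compose.yml auth
docker service ps auth_service
docker service ps auth_db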
My problem is:
When I send a curl request to auth_service:8000 from ist-manager-1, I get a response.
When I send a curl request to auth_service:8000 from antep-manager-1, the connection times out.
I can ping between the two managers, and if I publish the port directly I can reach the service over the node IP.
ist-manager-1 curl:
root@ist-manager-1:~# docker run --network kong-net --rm curlimages/curl:7.87.0 --max-time 15 -L -v http://auth_service:8000
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.0.6.40:8000...
* Connected to auth_service (10.0.6.40) port 8000 (#0)
> GET / HTTP/1.1
> Host: auth_service:8000
> User-Agent: curl/7.87.0-DEV
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Cache-Control: no-cache, private
< Content-Type: text/html; charset=UTF-8
< Date: Fri, 27 Jan 2023 20:58:43 GMT
< Set-Cookie: XSRF-TOKEN=dd; expires=Fri, 27 Jan 2023 22:58:43 GMT; Max-Age=7200; path=/; samesite=lax
< Set-Cookie: xx_auth_service_session=yy; expires=Fri, 27 Jan 2023 22:58:43 GMT; Max-Age=7200; path=/; httponly; samesite=lax
< Content-Length: 6
<
{ [6 bytes data]
100 6 100 6 0 0 645 0 --:--:-- --:--:-- --:--:-- 666
* Connection #0 to host auth_service left intact
antep-manager-1 curl:
root@antep-manager-1:~# docker run --network kong-net --rm curlimages/curl:7.87.0 --max-time 15 -L -v http://auth_service:8000
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.0.6.40:8000...
0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0* Connection timed out after 15000 milliseconds
0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0
* Closing connection 0
curl: (28) Connection timed out after 15000 milliseconds
So the request goes through one manager without any problems, while the same request through the other manager times out.
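A diagnostic sketch (my own suggestion, not part of the original post): this symptom usually means the overlay data path between the two locations is blocked, so the service name resolves everywhere but traffic from antep-manager-1 never reaches the task. It is worth inspecting the overlay from the failing manager and confirming the swarm ports are open between the nodes:

# On antep-manager-1: which peers and load-balancer entries does the overlay actually see?
docker network inspect kong-net --verbose

# Swarm needs these ports open between every pair of nodes:
#   TCP 2377      cluster management (managers only)
#   TCP/UDP 7946  node-to-node control traffic
#   UDP 4789      VXLAN overlay data traffic
# Quick reachability check towards the node running the task (UDP checks are best-effort):
nc -zv <task-node-ip> 7946
nc -zuv <task-node-ip> 4789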

Related

Cannot access one Docker container from another

Using this docker-compose file:
version: '3'
services:
  hello:
    image: nginxdemos/hello
    ports:
      - 7080:80
  tool:
    image: wbitt/network-multitool
    tty: true
networks:
  default:
    name: test-network
If I curl from the host, it works.
❯ curl -s -o /dev/null -v http://192.168.1.102:7080
* Expire in 0 ms for 6 (transfer 0x8088b0)
* Trying 192.168.1.102...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x8088b0)
* Connected to 192.168.1.102 (192.168.1.102) port 7080 (#0)
> GET / HTTP/1.1
> Host: 192.168.1.102:7080
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.23.1
< Date: Sun, 10 May 2071 00:06:00 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Expires: Sun, 10 May 2071 00:05:59 GMT
< Cache-Control: no-cache
<
{ [6 bytes data]
* Connection #0 to host 192.168.1.102 left intact
If I try to contact another container from within the network, it fails.
❯ docker exec -it $(gdid tool) curl -s -o /dev/null -v http://hello
* Could not resolve host: hello
* Closing connection 0
Is this intended behaviour? I thought containers on the same network (created with docker-compose) were meant to be able to reach each other by service name?
I am bringing the containers up with docker-compose up -d
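A quick check (my sketch, not from the question) is to run the same curl through docker-compose itself, so the command is guaranteed to land in a container attached to test-network:

# Call the hello service by its compose service name from the tool container
docker-compose exec tool curl -s -o /dev/null -v http://hello
# If this works but the $(gdid tool) variant does not, the container that alias
# picks is probably not attached to test-network.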

Selenium not able to access website hosted in another Docker container

To any Docker or Selenium experts: I would appreciate your help!
I have a web application (LNMP), and I use Docker locally as my dev environment.
Now I am trying to add an additional Selenium container so I can use it for automated testing (https://hub.docker.com/r/selenium/standalone-chrome-debug).
I couldn't figure out how to set up the connection from the Selenium container to my application container (Nginx), because the website is not published.
When I develop locally, I use a test domain and connect to the website by editing the hosts file on my machine.
e.g.
127.0.0.1 website.test
I have tried adding the Selenium container in the YAML file; it looks like it was successful, and I am able to use a VNC client from the host to connect to the container.
But I always get Connection Refused when I try to access the website from the Selenium browser. DNS seems to work, otherwise Chrome would give me a DNS error rather than a connection refusal.
My YAML file looks like the following. As you can see, I added a Selenium container (browser_chrome) and a network (testing); app is the original Nginx container.
networks:
  testing:
version: "3"
services:
  browser_chrome:
    image: selenium/standalone-chrome-debug:3.8.1
    ports:
      - "5900:5900"
    networks:
      - testing
  app:
    networks:
      testing:
        aliases:
          - "website.test"
This is the original compose file without the Selenium setup: https://github.com/markshust/docker-magento/blob/master/compose/docker-compose.yml. Basically, I added the Selenium container (browser_chrome) and the network (testing) to connect them.
I only know the basics of Docker, so any help would be appreciated!
:)
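One way to narrow this down (my own sketch, not from the question) is to exercise the website.test alias from inside the testing network without Selenium. If this also gets connection refused, the problem is the port Nginx listens on inside the app container rather than DNS:

# Attach a throwaway curl container to the compose-created network
# (the network name is <project>_testing; replace <project> with your compose project name)
docker run --rm --network <project>_testing curlimages/curl -v http://website.test
# Add the port explicitly if Nginx inside the app container does not listen on 80.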
First of all, you don't have to create a static network for this; leave Docker to handle it itself, which means fewer problems in the future (at least that's my opinion). By not specifying a network in the compose file, you let Docker create a network called default (from the container's point of view), called <stack-name>_default from the host's point of view.
The second thing you have to understand is how networking and connections between two containers in a stack work: for connections between containers you use the service name (in your case app), as Docker automatically creates DNS records for it.
Here is a simple compose file to test the validity of the above:
version: "3"
services:
server:
image: containous/whoami
client:
image: appropriate/curl
command: "curl -vvv http://server"
And the output from the console and the service logs:
$ docker stack deploy -c docker-compose.yml test
Creating network test_default
Creating service test_server
Creating service test_client
$ docker service logs -f test_client
test_client.1.fv8dq5b80dkk@... | * Rebuilt URL to: http://server/
test_client.1.fv8dq5b80dkk@... | % Total % Received % Xferd Average Speed Time Time Time Current
test_client.1.fv8dq5b80dkk@... | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.0.5.2...
test_client.1.fv8dq5b80dkk@... | * TCP_NODELAY set
test_client.1.fv8dq5b80dkk@... | * Connected to server (10.0.5.2) port 80 (#0)
test_client.1.fv8dq5b80dkk@... | > GET / HTTP/1.1
test_client.1.fv8dq5b80dkk@... | > Host: server
test_client.1.fv8dq5b80dkk@... | > User-Agent: curl/7.59.0
test_client.1.fv8dq5b80dkk@... | > Accept: */*
test_client.1.fv8dq5b80dkk@... | >
test_client.1.fv8dq5b80dkk@... | < HTTP/1.1 200 OK
test_client.1.fv8dq5b80dkk@... | < Date: Tue, 01 Jun 2021 09:54:21 GMT
test_client.1.fv8dq5b80dkk@... | < Content-Length: 162
test_client.1.fv8dq5b80dkk@... | < Content-Type: text/plain; charset=utf-8
test_client.1.fv8dq5b80dkk@... | <
test_client.1.fv8dq5b80dkk@... | { [162 bytes data]
test_client.1.fv8dq5b80dkk@... | Hostname: 7d91b392ac0a
test_client.1.fv8dq5b80dkk@... | IP: 127.0.0.1
test_client.1.fv8dq5b80dkk@... | IP: 10.0.5.3
test_client.1.fv8dq5b80dkk@... | IP: 172.18.0.4
test_client.1.fv8dq5b80dkk@... | RemoteAddr: 10.0.5.4:45600
test_client.1.fv8dq5b80dkk@... | GET / HTTP/1.1
test_client.1.fv8dq5b80dkk@... | Host: server
test_client.1.fv8dq5b80dkk@... | User-Agent: curl/7.59.0
test_client.1.fv8dq5b80dkk@... | Accept: */*
test_client.1.fv8dq5b80dkk@... |
100 162 100 162 0 0 23142 0 --:--:-- --:--:-- --:--:-- 27000
test_client.1.fv8dq5b80dkk@... | * Connection #0 to host server left intact

Using a docker container for testing in Atlassian Bitbucket pipeline

I'm using a dynamodb docker container to run some tests in an Atlassian Bitbucket pipeline. These steps work locally with the same exact docker run command, but for some reason I cannot connect to the db container after it starts while running in the pipeline:
image: python:3.6

pipelines:
  default:
    - step:
        caches:
          - docker
        script:
          - docker run -d -p 8000:8000 --name dynamodb --entrypoint java amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb -inMemory
          - curl http://localhost:8000
        services:
          - docker
The curl command returns:
curl http://localhost:8000
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (56) Recv failure: Connection reset by peer
I've tried with both localhost and dynamodb as the host names with the same result. I've also posted this on the Atlassian Community, but got no answers.
You should not start amazon/dynamodb-local manually; you should use services instead:
definitions:
  services:
    dynamodb-local:
      image: amazon/dynamodb-local
      memory: 2048

pipelines:
  default:
    - step:
        image: python:3.6
        size: 2x
        services:
          - dynamodb-local
        script:
          - export DYNAMODB_LOCAL_URL=http://localhost:8000
          - export AWS_DEFAULT_REGION=us-east-1
          - export AWS_ACCESS_KEY_ID=''
          - export AWS_SECRET_ACCESS_KEY=''
          - aws --endpoint-url ${DYNAMODB_LOCAL_URL} dynamodb delete-table --table-name test || true
          - aws --endpoint-url ${DYNAMODB_LOCAL_URL} dynamodb create-table --cli-input-json file://test.table.json
          - python -m unittest test_module.TestClass
You'll probably need to double the size of the step container and its memory, as DynamoDB is pretty heavyweight (but it may work on the defaults as well).
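Not part of the original answer, but as a sketch: if the first aws call still races the service startup, a small wait loop at the top of the script keeps the step from failing before DynamoDB Local is listening (a bare GET returns HTTP 400, but the connection succeeding is enough for this check):

# Wait up to 30 seconds for DynamoDB Local to accept connections
for i in $(seq 1 30); do
  curl -s http://localhost:8000 > /dev/null && break
  sleep 1
done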

Cannot make request to gitlab running in the official Docker container

I am trying to run GitLab from a Docker (gitlab/gitlab-ce, latest) container using the instructions given here.
Docker version
Docker version 1.12.4, build 1564f02
I first run
docker run --detach --hostname <myIP> --publish 8000:443 --publish 8001:80 --publish 8002:22 --name gitlab --restart always --volume /docker/app/gitlab/config:/etc/gitlab --volume /docker/app/gitlab/logs:/var/log/gitlab --volume /docker/app/gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest
Then I edited the container's /etc/gitlab/gitlab.rb to set
external_url 'http://<myIP>:8001'
gitlab_rails['gitlab_shell_ssh_port'] = 8002
Then I restarted the container with
docker restart gitlab
Now.
When I try to connect to <myIP>:8001 I get a (110) Connection timed out.
When I try from the Docker container's host I get
xxx@xxx:~$ curl localhost:8001
curl: (56) Recv failure: Connection reset by peer
Logs (just the end)
==> /var/log/gitlab/gitlab-workhorse/current <==
2017-07-26_14:53:41.50465 localhost:8001 @ - - [2017-07-26 14:53:41.223110228 +0000 UTC] "GET /help HTTP/1.1" 200 33923 "" "curl/7.53.0" 0.281484
==> /var/log/gitlab/nginx/gitlab_access.log <==
127.0.0.1 - - [26/Jul/2017:14:53:41 +0000] "GET /help HTTP/1.1" 200 33967 "-" "curl/7.53.0"
==> /var/log/gitlab/gitlab-monitor/current <==
2017-07-26_14:53:47.27460 ::1 - - [26/Jul/2017:14:53:47 UTC] "GET /sidekiq HTTP/1.1" 200 3399
2017-07-26_14:53:47.27464 - -> /sidekiq
2017-07-26_14:53:49.22004 ::1 - - [26/Jul/2017:14:53:49 UTC] "GET /database HTTP/1.1" 200 42025
2017-07-26_14:53:49.22007 - -> /database
2017-07-26_14:53:51.48866 ::1 - - [26/Jul/2017:14:53:51 UTC] "GET /process HTTP/1.1" 200 7132
2017-07-26_14:53:51.48873 - -> /process
==> /var/log/gitlab/gitlab-rails/production.log <==
Started GET "/-/metrics" for 127.0.0.1 at 2017-07-26 14:53:55 +0000
Processing by MetricsController#index as HTML
Filter chain halted as :validate_prometheus_metrics rendered or redirected
Completed 404 Not Found in 1ms (Views: 0.7ms | ActiveRecord: 0.0ms)
Docker ps
xxx@xxx:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67e013741b6d gitlab/gitlab-ce:latest "/assets/wrapper" 2 hours ago Up About an hour (healthy) 0.0.0.0:8002->22/tcp, 0.0.0.0:8001->80/tcp, 0.0.0.0:8000->443/tcp gitlab
Netstat
xxx@xxx:~$ netstat --listen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 0 *:2020 *:* LISTEN
tcp 0 0 *:git *:* LISTEN
tcp 0 0 *:43918 *:* LISTEN
tcp 0 0 *:sunrpc *:* LISTEN
tcp6 0 0 localhost:smtp [::]:* LISTEN
tcp6 0 0 [::]:8000 [::]:* LISTEN
tcp6 0 0 [::]:8001 [::]:* LISTEN
tcp6 0 0 [::]:8002 [::]:* LISTEN
tcp6 0 0 [::]:2020 [::]:* LISTEN
tcp6 0 0 [::]:git [::]:* LISTEN
tcp6 0 0 [::]:sunrpc [::]:* LISTEN
tcp6 0 0 [::]:http [::]:* LISTEN
tcp6 0 0 [::]:43730 [::]:* LISTEN
udp 0 0 *:54041 *:*
udp 0 0 *:sunrpc *:*
udp 0 0 *:snmp *:*
udp 0 0 *:958 *:*
udp 0 0 localhost:969 *:*
udp 0 0 *:37620 *:*
udp6 0 0 [::]:54611 [::]:*
udp6 0 0 [::]:sunrpc [::]:*
udp6 0 0 localhost:snmp [::]:*
udp6 0 0 [::]:958 [::]:*
I cannot find what is wrong. Can anybody help?
Here is a docker-compose.yml which worked fine for me
version: '2'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports:
      - "8002:22"
      - "8000:8000"
      - "8001:443"
    environment:
      - "GITLAB_OMNIBUS_CONFIG=external_url 'http://192.168.33.100:8000/'"
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
The thing is that when you configure the external URL as <MyIP>:8000, the listening port inside the container is also updated to 8000. In your case you are mapping host port 8000 to container port 80, but you should be mapping 8000 to 8000.
Read the URL below for details on this:
https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-port
If you need to override this port, you can do that in gitlab.rb:
nginx['listen_port'] = 8081
I prefer to launch GitLab using a docker-compose file instead of raw commands, as it makes it easy to configure, start, and restart GitLab.
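Applied to the docker run command from the question, the equivalent fix would look roughly like this (a sketch that keeps the original paths; it publishes 8001 to 8001 and passes the matching external_url via GITLAB_OMNIBUS_CONFIG instead of editing gitlab.rb by hand):

# Map the host port to the same port nginx listens on inside the container
docker run --detach --hostname <myIP> \
  --publish 8000:443 --publish 8001:8001 --publish 8002:22 \
  --name gitlab --restart always \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'http://<myIP>:8001/'; gitlab_rails['gitlab_shell_ssh_port'] = 8002" \
  --volume /docker/app/gitlab/config:/etc/gitlab \
  --volume /docker/app/gitlab/logs:/var/log/gitlab \
  --volume /docker/app/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest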

docker 1.12 swarm cluster cannot serve static files

I have a swarm cluster based on Docker 1.12.1 swarm mode.
There are 3 manager nodes and 3 worker nodes. I have a Node.js/Express service running on the cluster.
The service has 5 replicas and publishes port 8082.
But when I try to access it, it fails when serving static files.
As an example, I use curl to fetch one static stylesheet from the service.
[root@i1-proxy ~]# time curl -v 192.168.100.3:8082/dist/style.css
* About to connect() to 192.168.100.3 port 8082 (#0)
* Trying 192.168.100.3...
* Connected to 192.168.100.3 (192.168.100.3) port 8082 (#0)
> GET /dist/style.css HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.100.3:8082
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Last-Modified: Fri, 02 Sep 2016 02:09:20 GMT
< ETag: W/"106e1-156e8a85080"
< Content-Type: text/css; charset=UTF-8
< Content-Length: 67297
< Date: Mon, 05 Sep 2016 02:50:14 GMT
< Connection: keep-alive
<
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
real 4m0.657s
user 0m0.019s
sys 0m0.012s
It returns the HTTP headers but no HTTP body.
Additional information:
docker info from one manager in the cluster:
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 12
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 59
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge overlay null host
Swarm: active
 NodeID: 99wq9cba397bps7578fxqijq1
 Is Manager: true
 ClusterID: 73oimnw1q8zh9caof3gk6sutw
 Managers: 3
 Nodes: 6
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.100.3
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.0-34-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.796 GiB
Name: i-yyk6gta5
ID: VXWN:I4AA:EIY7:4NKF:OQJW:GLIJ:43TJ:L6FO:RQB4:Z4L2:6A7D:EGIP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8
And docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64
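A hedged note (my own suggestion, not from the original post): headers arriving but the body stalling on the swarm routing mesh is a classic signature of an MTU mismatch on the overlay network, because small packets fit through the VXLAN tunnel while full-size ones are silently dropped. Comparing a small and a large response, and checking the path MTU between nodes, can help confirm it:

# Small response through the published port versus the large CSS file
# (assuming the app serves a small page at /)
curl -sv -o /dev/null http://192.168.100.3:8082/
curl -sv -o /dev/null http://192.168.100.3:8082/dist/style.css

# Check whether full-size frames survive between the nodes
# (1472 = 1500 minus 28 bytes of IP/ICMP headers; VXLAN adds another 50 bytes of overhead,
# so overlay traffic needs even more headroom than a plain ping)
ping -M do -s 1472 -c 3 <other-node-ip>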
