Docker-compose: close ports after SIGTERM - ruby-on-rails

I'm using Docker and Docker Compose for my development environment.
When I switch between projects, I usually have quite a bit of pain because I receive a PORT ALREADY IN USE error.
If I do docker-compose up (which starts my Rails server), is Ctrl+C the correct way to terminate this container?
Here's my docker-compose.yml file:
db:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  volumes:
    - .:/myapp
  ports:
    - "3000:3000"
  links:
    - db
Sometimes, I simply have to delete ./tmp/pids/server.pid, but sometimes I have to kill -9 some process.
For example, here's what ps -edf | grep docker outputs:
root 742 1 0 Jul18 ? 00:01:11 /usr/bin/docker -d -H fd://
root 22341 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32781 -container-ip 172.17.0.48 -container-port 5432
root 22510 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3030 -container-ip 172.17.0.49 -container-port 3030
root 28766 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 32783 -container-ip 172.17.0.57 -container-port 5432
root 28886 742 0 Jul21 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 3000 -container-ip 172.17.0.58 -container-port 3000
Am I doing something wrong?

I would launch the containers as background processes with docker-compose up -d. Then, later, you can shut them down cleanly with docker-compose stop.
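A minimal sketch of that workflow, using the service and pid-file paths from the question:

docker-compose up -d        # start the containers detached instead of in the foreground
docker-compose stop         # later: stop them cleanly, releasing the published ports
docker-compose ps           # verify nothing from this project is still running
rm -f tmp/pids/server.pid   # if Rails still complains about a stale pid, remove it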

Related

How to expose ingress-nginx port 80 from GitHub Codespaces

I have the following output when running kubectl get ingress:
NAME          CLASS   HOSTS         ADDRESS        PORTS   AGE
ingress-svc   nginx   localdev.me   192.168.49.2   80      32m
and under VS Code in Codespaces, I see about 5 forwarded ports that I can access from a browser, backed by processes that look like this:
/usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 49157 -container-ip 192.168.49.2 -container-port 22
I was wondering how I can configure the one with port 80 to some external port that I can access from the browser. Thank you.
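One hedged approach (an assumption, not from the original post): forward the controller's port 80 to a high local port, which Codespaces can then expose in the browser. The service name and namespace below assume a standard ingress-nginx install and may differ:

# Forward the ingress controller's service port 80 to local port 8080;
# Codespaces will offer to forward 8080 to the browser.
kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80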

Concourse: Web Connection Refused

I deployed Concourse with Docker Compose on EC2 Linux (Ubuntu 22.04) and set CONCOURSE_EXTERNAL_URL in docker-compose.yml to the Elastic IP address of the instance.
Even though the security group inbound rules and ACL allow all TCP / HTTP / HTTPS traffic, the connection to http://{myElasticIP}:8080/ is refused.
(The instance is running, and I can ping {myElasticIP} without failures.)
This was my first time setting up Concourse, so I guess something is wrong in my procedure.
Any advice would be highly appreciated.
-- command and result
$ docker-compose up -d
Starting ubuntu_concourse-db_1
Recreating ubuntu_concourse_1
$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS                          PORTS      NAMES
56f8859a67ba   concourse/concourse   "dumb-init /usr/loca…"   24 minutes ago   Restarting (1) 28 seconds ago              ubuntu_concourse-web_1
307a647554eb   postgres:9.5          "docker-entrypoint.s…"   24 minutes ago   Up 24 minutes                   5432/tcp   ubuntu_concourse-db_1
Error (Fiddler):
ConnectionRefused (0x274d).
-- kernel
$ uname -r
5.15.0-1015-aws
--docker-compose.yml
version: '3'
services:
  concourse-db:
    image: postgres:9.5
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: "${CONCOURSE_POSTGRES_USER}"
      POSTGRES_PASSWORD: "${CONCOURSE_POSTGRES_PASSWORD}"
      PGDATA: /database
  concourse-web:
    image: concourse/concourse
    links: [concourse-db]
    command: web
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    volumes: ["./keys/web:/concourse-keys"]
    restart: unless-stopped # required so that it retries until concourse-db comes up
    environment:
      CONCOURSE_BASIC_AUTH_USERNAME: "${CONCOURSE_BASIC_AUTH_USERNAME}"
      CONCOURSE_BASIC_AUTH_PASSWORD: "${CONCOURSE_BASIC_AUTH_PASSWORD}"
      CONCOURSE_EXTERNAL_URL: "${CONCOURSE_EXTERNAL_URL}"
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: "${CONCOURSE_POSTGRES_USER}"
      CONCOURSE_POSTGRES_PASSWORD: "${CONCOURSE_POSTGRES_PASSWORD}"
      CONCOURSE_POSTGRES_DATABASE: concourse
--.env
CONCOURSE_BASIC_AUTH_USERNAME=concourse
CONCOURSE_BASIC_AUTH_PASSWORD=changeme
CONCOURSE_EXTERNAL_URL=http://{myElasticIP}:8080
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_PASSWORD=changeme
-- port check
$ sudo lsof -i -P -n | grep LISTEN
systemd-r 390 systemd-resolve 14u IPv4 16470 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 644 root 3u IPv4 17932 0t0 TCP *:22 (LISTEN)
sshd 644 root 4u IPv6 17943 0t0 TCP *:22 (LISTEN)
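A hedged diagnostic sketch (not from the original post): docker ps shows the web container stuck in "Restarting (1)", which is consistent with lsof showing nothing listening on 8080, so the first step would be to read that container's logs:

docker logs ubuntu_concourse-web_1   # container name taken from the docker ps output above
# One assumption worth checking: whether the keys mounted at ./keys/web actually
# exist, since the `web` command exits when its keys are missing.
ls -l ./keys/web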

docker can't connect to docker daemon but socket is created

My problem is that I can use Docker through, for example, Portainer, but when I run docker on the machine itself, it can't connect to the daemon and tells me about it.
All commands are run as root.
docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
docker service:
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─options.conf
Active: active (running) since Fri 2021-10-22 19:02:54 UTC; 4 days ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 182765 (dockerd)
Tasks: 175
Memory: 93.1M
CGroup: /system.slice/docker.service
├─182765 /usr/bin/dockerd -H unix:// --containerd=/run/containerd/containerd.sock
├─182942 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49153 -container-ip 172.17.0.2 -container-port 27017
├─182949 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49153 -container-ip 172.17.0.2 -container-port 27017
├─182962 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8081 -container-ip 172.18.0.2 -container-port 8080
├─182970 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8081 -container-ip 172.18.0.2 -container-port 8080
├─182984 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─182990 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─183004 /usr/bin/docker-proxy -proto udp -host-ip 0.0.0.0 -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─183010 /usr/bin/docker-proxy -proto udp -host-ip :: -host-port 6881 -container-ip 172.18.0.2 -container-port 6881
├─183034 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49154 -container-ip 172.17.0.3 -container-port 3306
├─183041 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49154 -container-ip 172.17.0.3 -container-port 3306
├─183148 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 49155 -container-ip 172.17.0.4 -container-port 6379
├─183154 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 49155 -container-ip 172.17.0.4 -container-port 6379
├─183332 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9443 -container-ip 172.17.0.5 -container-port 9443
├─183339 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 9443 -container-ip 172.17.0.5 -container-port 9443
├─183353 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9000 -container-ip 172.17.0.5 -container-port 9000
├─183360 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 9000 -container-ip 172.17.0.5 -container-port 9000
├─183372 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8000 -container-ip 172.17.0.5 -container-port 8000
├─183378 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8000 -container-ip 172.17.0.5 -container-port 8000
├─186468 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.6 -container-port 80
└─186474 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8080 -container-ip 172.17.0.6 -container-port 80
Oct 27 09:31:30 falcon dockerd[182765]: time="2021-10-27T09:31:30.218332206Z" level=debug msg="Calling GET /containers/69047b41bedea4794803730ff0fa13a65e546519f9>
Oct 27 09:31:30 falcon dockerd[182765]: time="2021-10-27T09:31:30.219026938Z" level=debug msg="Calling GET /images/json?all=0"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.654321683Z" level=debug msg="Calling HEAD /_ping"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.656492046Z" level=debug msg="Calling GET /v1.37/info"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.673377621Z" level=debug msg="Calling GET /v1.37/containers/json?all=1&limit=0"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.680766521Z" level=debug msg="Calling GET /v1.37/images/json"
Oct 27 09:32:58 falcon dockerd[182765]: time="2021-10-27T09:32:58.701618241Z" level=debug msg="Calling GET /v1.37/volumes"
I can use Portainer as usual.
docker version:
docker -v
Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.2
More information:
ls -l /var/run
lrwxrwxrwx 1 root root 4 Jul 31 2020 /var/run -> /run
docker.sock is created (note that it is a directory, not a socket):
drwxr-xr-x 2 root root 40 Oct 25 20:31 docker.sock
cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
Wants=containerd.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Anyone got a clue why docker is unable to connect to the daemon?
It looks like you have a container configured to bind mount /var/run/docker.sock, and the daemon restarted that container before creating the socket, so an empty directory was created at that path instead. There have been some tweaks to packaging in recent releases to reduce this chance. Otherwise you may want to mount the entire directory instead of a single file.
To fix it, try stopping Docker, deleting the empty directory, and restarting Docker to see if the socket gets created first (it's a race condition).
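A hedged sketch of that fix, using the paths from the output above:

sudo systemctl stop docker docker.socket
sudo rmdir /var/run/docker.sock    # remove the empty directory that shadows the socket
sudo systemctl start docker
ls -l /var/run/docker.sock         # should now show a socket (srw-rw----), not drwxr-xr-x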

docker-proxy keeps port open making it impossible to start a service

I perform docker-compose down and I have:
$ docker-compose ps
Name Command State Ports
------------------------------
But when I do docker-compose up -d, I get
ERROR: for php Cannot start service php: driver failed programming external connectivity on endpoint project_php_1 (1a97183b3dad2157994251af0ead734e6750d95a3c71540d95f4c32c487d0830): Bind for 127.0.0.1:9000 failed: port is already allocated
Netstat:
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 0 24211 1643/docker-proxy
ps:
$ sudo ps auxwwwt | grep docker-proxy
root 18924 0.0 0.1 1152904 3132 ? Sl 11:56 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 9000 -container-ip 172.23.0.2 -container-port 9000
root 19233 0.0 0.1 1152904 3220 ? Sl 11:56 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 54321 -container-ip 172.18.0.2 -container-port 5432
root 19241 0.0 0.1 1079172 4032 ? Sl 11:56 0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 54321 -container-ip 172.18.0.2 -container-port 5432
I stop and start the Docker service, but the docker-proxy keeps coming back up. What can I do to stop it?
You must have containers running from outside this compose setup. Use docker ps to list all running containers and stop them. Afterwards, to be sure, use docker network prune to remove orphaned networks.
After that, your compose setup should start normally.
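A hedged sketch of that cleanup (the container ID is a placeholder):

docker ps                    # find whichever container still publishes 127.0.0.1:9000
docker stop <container_id>   # placeholder: stop the container started outside this project
docker network prune         # remove orphaned networks left behind
docker-compose up -d         # the php service should now be able to bind port 9000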

How to solve the error after modifying the Docker default storage location?

When the server's primary partition was full, I mounted a new disk and changed the Docker default root path to that disk. The specific steps are as follows:
Stop the Docker service with the systemctl stop docker.service command
Migrate the data with cp -r /var/lib/docker/* /mnt/data/docker
Modify the daemon.json configuration file
{
  "insecure-registries": ["xxx.xxx.xxx.xxx:xxxx", "xx.xxx.xxx.xx:xxx"],
  "live-restore": true,
  "data-root": "/mnt/data/docker"
}
Restart the Docker service with systemctl restart docker.service
I can see the original images and container information through both docker images and docker ps -a.
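A hedged way to confirm the new data root is actually in use:

docker info --format '{{ .DockerRootDir }}'   # should print /mnt/data/docker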
But when I check the Docker status from the command line, I get the following error:
root@xxxx:~# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-03-08 19:38:06 CST; 7s ago
Docs: https://docs.docker.com
Main PID: 12877 (dockerd)
Tasks: 247
CGroup: /system.slice/docker.service
├─ 4395 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1990 -container-ip 172.24.0.8 -container-port 1883
├─ 4423 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1888 -container-ip 172.19.0.2 -container-port 1883
├─ 4432 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1923 -container-ip 172.28.0.2 -container-port 1883
├─ 4435 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1856 -container-ip 192.168.192.2 -container-port 1883
├─ 4448 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1683 -container-ip 192.168.176.8 -container-port 1883
├─ 4456 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1855 -container-ip 192.168.160.2 -container-port 1883
├─ 4474 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1988 -container-ip 172.20.0.2 -container-port 1883
├─ 4676 setup-resolver /var/run/docker/netns/3afca278ff2c 127.0.0.11:37870 127.0.0.11:34561
├─ 4695 /sbin/iptables --wait -L -n
├─12877 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
├─13708 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8091 -container-ip 192.168.128.2 -container-port 8000
├─13720 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9002 -container-ip 172.31.0.3 -container-port 9000
├─13731 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8092 -container-ip 172.21.0.4 -container-port 8000
├─13742 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9003 -container-ip 172.20.0.4 -container-port 9000
├─13758 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8123 -container-ip 192.168.96.3 -container-port 8000
├─13770 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9001 -container-ip 192.168.32.2 -container-port 9000
├─15762 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8010 -container-ip 192.168.176.5 -container-port 8000
├─15787 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8099 -container-ip 172.24.0.7 -container-port 8000
└─16000 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8000 -container-ip 192.168.0.6 -container-port 8000
Mar 08 19:38:10 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:10.743123800+08:00" level=warning msg="xtables contention detected while running [-t nat -C DOCKER -p tcp -d 0/0 --dport 1923 -j DNAT --to-dest
Mar 08 19:38:11 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:11.747714065+08:00" level=warning msg="xtables contention detected while running [-t nat -C DOCKER -p tcp -d 0/0 --dport 1856 -j DNAT --to-dest
Mar 08 19:38:11 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:11.750899979+08:00" level=warning msg="xtables contention detected while running [-t filter -C DOCKER ! -i br-1a4306a45084 -o br-1a4306a45084 -
Mar 08 19:38:11 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:11.755768353+08:00" level=warning msg="xtables contention detected while running [-t filter -A DOCKER ! -i br-f46376a67c00 -o br-f46376a67c00 -
Mar 08 19:38:11 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:11.758681889+08:00" level=warning msg="xtables contention detected while running [-t nat -D DOCKER -p tcp -d 0/0 --dport 1888 -j DNAT --to-dest
Mar 08 19:38:11 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:11.884016359+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 19:38:13 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T19:38:13.418598834+08:00" level=error msg="Handler for GET /containers/4411367a4b654b8393ded8b4212d6945d84a07a108b3f05a6561d6f381277a04/json returned
Mar 08 19:38:13 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
But the errors seem to disappear again after a while:
root@xxx:~/harbor# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-03-08 19:38:06 CST; 25min ago
Docs: https://docs.docker.com
Main PID: 12877 (dockerd)
Tasks: 188
CGroup: /system.slice/docker.service
├─10836 /usr/bin/docker-proxy -proto tcp -host-ip 127.0.0.1 -host-port 1514 -container-ip 172.25.0.2 -container-port 10514
├─12877 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
├─13708 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8091 -container-ip 192.168.128.2 -container-port 8000
├─13720 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9002 -container-ip 172.31.0.3 -container-port 9000
├─13731 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8092 -container-ip 172.21.0.4 -container-port 8000
├─13742 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9003 -container-ip 172.20.0.4 -container-port 9000
├─13758 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8123 -container-ip 192.168.96.3 -container-port 8000
├─13770 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9001 -container-ip 192.168.32.2 -container-port 9000
├─15762 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8010 -container-ip 192.168.176.5 -container-port 8000
├─15787 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8099 -container-ip 172.24.0.7 -container-port 8000
└─16000 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8000 -container-ip 192.168.0.6 -container-port 8000
Mar 08 20:03:22 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:22.488572549+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:24 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:24.618043201+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:26 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:26.699891790+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:28 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:28.755718305+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:31 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:31.232132801+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:34 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:34.511822519+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:39 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:39.403540253+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:45 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:45.966994222+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:47 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:47.446758477+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 08 20:03:55 iZ8vb9yzkqg41jw8ev9zfmZ dockerd[12877]: time="2021-03-08T20:03:55.828274659+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
root@xxx:~/harbor#
When I tried to restart Harbor, the following error occurred, and the remote connection to the server was dropped:
[Step 4]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating redis ...
Creating harbor-portal ...
Creating redis ... error
Creating registry ...
Creating harbor-db ... error
Creating harbor-portal ... error
ERROR: for harbor-db Cannot start service postgresql: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
Creating registryctl ... error
ERROR: for harbor-portal Cannot start service portal: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
Creating registry ... error
ERROR: for registryctl Cannot start service registryctl: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for registry Cannot start service registry: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for postgresql Cannot start service postgresql: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for redis Cannot start service redis: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for portal Cannot start service portal: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for registryctl Cannot start service registryctl: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for registry Cannot start service registry: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: Encountered errors while bringing up the project.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Session stopped
- Press <return> to exit tab
- Press R to restart session
- Press S to save terminal output to file
Harbor's docker-compose YAML is shown below:
root@iZ8vb9yzkqg41jw8ev9zfmZ:~/harbor# cat docker-compose.yml
version: '2.3'
services:
  log:
    image: goharbor/harbor-log:v1.10.2
    container_name: harbor-log
    restart: always
    dns_search: .
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
      - type: bind
        source: ./common/config/log/logrotate.conf
        target: /etc/logrotate.d/logrotate.conf
      - type: bind
        source: ./common/config/log/rsyslog_docker.conf
        target: /etc/rsyslog.d/rsyslog_docker.conf
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: goharbor/registry-photon:v1.10.2
    container_name: registry
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: /data/secret/registry/root.crt
        target: /etc/registry/root.crt
    networks:
      - harbor
    dns_search: .
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "registry"
  registryctl:
    image: goharbor/harbor-registryctl:v1.10.2
    container_name: registryctl
    env_file:
      - ./common/config/registryctl/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: ./common/config/registryctl/config.yml
        target: /etc/registryctl/config.yml
    networks:
      - harbor
    dns_search: .
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "registryctl"
  postgresql:
    image: goharbor/harbor-db:v1.10.2
    container_name: harbor-db
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /data/database:/var/lib/postgresql/data:z
    networks:
      harbor:
    dns_search: .
    env_file:
      - ./common/config/db/env
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "postgresql"
  core:
    image: goharbor/harbor-core:v1.10.2
    container_name: harbor-core
    env_file:
      - ./common/config/core/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
    volumes:
      - /data/ca_download/:/etc/core/ca/:z
      - /data/psc/:/etc/core/token/:z
      - /data/:/data/:z
      - ./common/config/core/certificates/:/etc/core/certificates/:z
      - type: bind
        source: ./common/config/core/app.conf
        target: /etc/core/app.conf
      - type: bind
        source: /data/secret/core/private_key.pem
        target: /etc/core/private_key.pem
      - type: bind
        source: /data/secret/keys/secretkey
        target: /etc/core/key
    networks:
      harbor:
    dns_search: .
    depends_on:
      - log
      - registry
      - redis
      - postgresql
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "core"
  portal:
    image: goharbor/harbor-portal:v1.10.2
    container_name: harbor-portal
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    networks:
      - harbor
    dns_search: .
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "portal"
  jobservice:
    image: goharbor/harbor-jobservice:v1.10.2
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/job_logs:/var/log/jobs:z
      - type: bind
        source: ./common/config/jobservice/config.yml
        target: /etc/jobservice/config.yml
    networks:
      - harbor
    dns_search: .
    depends_on:
      - core
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "jobservice"
  redis:
    image: goharbor/redis-photon:v1.10.2
    container_name: redis
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/redis:/var/lib/redis
    networks:
      harbor:
    dns_search: .
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "redis"
  proxy:
    image: goharbor/nginx-photon:v1.10.2
    container_name: nginx
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - ./common/config/nginx:/etc/nginx:z
      - /data/secret/cert:/etc/cert:z
    networks:
      - harbor
    dns_search: .
    ports:
      - 80:8080
      - 1880:8443
    depends_on:
      - registry
      - core
      - portal
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "proxy"
networks:
  harbor:
    external: false
I don't know what caused this error; I hope someone can help me. Thank you.
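A hedged diagnostic sketch, based only on the compose file above: every service logs via the syslog driver to tcp://127.0.0.1:1514, which is published by the harbor-log container, so the "connection refused" errors suggest harbor-log is not yet listening when the other services start:

docker ps --filter name=harbor-log   # is the log container actually up?
sudo lsof -i -P -n | grep 1514       # docker-proxy should be LISTENing on 127.0.0.1:1514
docker-compose up -d log             # bring up the log service first
docker-compose up -d                 # then start the remaining services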
