I had a local Drupal website set up with Docksal, and then I accidentally ran the "fin init" command, which reset the project stack.
Now when I launch my Drupal website, it is completely new, with none of my existing modules installed. Is there any way I can get back my existing website with all of my earlier modules and content?
Below is the output of "fin diagnose":
COMPOSE_PROJECT_NAME_SAFE: drupal_website
COMPOSE_FILE:
/home/minsharm/.docksal/stacks/volumes-bind.yml
/home/minsharm/.docksal/stacks/stack-default.yml
/home/minsharm/Downloads/drupal_website/.docksal/docksal.yml
ENV_FILE:
/home/minsharm/Downloads/drupal_website/.docksal/docksal.env
/home/minsharm/Downloads/drupal_website/.docksal/docksal-local.env
PROJECT_ROOT: /home/minsharm/Downloads/drupal_website
DOCROOT: web
VIRTUAL_HOST: drupal-website.docksal
VIRTUAL_HOST_ALIASES: *.drupal-website.docksal
IP: 192.168.64.100
MySQL endpoint: 192.168.64.100:32769
Public URL:
Docker Compose configuration
---------------------
services:
cli:
dns:
- 192.168.64.100
- 8.8.8.8
environment:
BLACKFIRE_CLIENT_ID: null
BLACKFIRE_CLIENT_TOKEN: null
BROWSERTEST_OUTPUT_BASE_URL: http://drupal-website.docksal/
BROWSERTEST_OUTPUT_DIRECTORY: /var/www/web/sites/simpletest/browser_output
CLI_IMAGE: '"docksal/cli:php8.1-3.2"'
COMPOSER_ALLOW_XDEBUG: "0"
COMPOSER_DEFAULT_VERSION: null
COMPOSER_DISABLE_XDEBUG_WARN: "0"
DOCKSAL_STACK: default
DOCROOT: web
DRUSH_ALLOW_XDEBUG: "0"
DRUSH_OPTIONS_URI: drupal-website.docksal
GIT_USER_EMAIL: minsharm@redhat.com
GIT_USER_NAME: minsharm
HOST_GID: "4210635"
HOST_UID: "4210635"
MINK_DRIVER_ARGS_WEBDRIVER: '''["chrome", {"chromeOptions":{"w3c":false } },"http://browser:4444/wd/hub"]'''
MYSQL_DATABASE: default
MYSQL_HOST: db
MYSQL_PASSWORD: user
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user
PHP_IDE_CONFIG: null
SECRET_ACQUIA_CLI_KEY: null
SECRET_ACQUIA_CLI_SECRET: null
SECRET_PLATFORMSH_CLI_TOKEN: null
SECRET_SSH_PRIVATE_KEY: null
SECRET_TERMINUS_TOKEN: null
SIMPLETEST_BASE_URL: http://web
SIMPLETEST_DB: mysql://user:user@db/default
SSH_AUTH_SOCK: /.ssh-agent/proxy-socket
VIRTUAL_HOST: drupal-website.docksal
XDEBUG_CONFIG: client_host=192.168.64.1 remote_host=192.168.64.1
XDEBUG_ENABLED: "0"
extends:
file: /home/minsharm/.docksal/stacks/services.yml
service: cli
hostname: cli
healthcheck:
interval: 10s
image: docksal/cli:php8.1-3.2
labels:
io.docksal.shell: bash
io.docksal.user: docker
logging:
options:
max-file: "10"
max-size: 1m
networks:
default: null
volumes:
- type: volume
source: docksal_ssh_agent
target: /.ssh-agent
read_only: true
volume: {}
- type: volume
source: cli_home
target: /home/docker
volume: {}
- type: bind
source: /tmp/.docksal/drupal_website
target: /tmp/.docksal/drupal_website
read_only: true
bind:
create_host_path: true
- type: volume
source: project_root
target: /var/www
volume:
nocopy: true
db:
dns:
- 192.168.64.100
- 8.8.8.8
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: null
MYSQL_DATABASE: default
MYSQL_INITDB_SKIP_TZINFO: null
MYSQL_ONETIME_PASSWORD: null
MYSQL_PASSWORD: user
MYSQL_RANDOM_ROOT_PASSWORD: null
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user
extends:
file: /home/minsharm/.docksal/stacks/services.yml
service: mariadb
hostname: db
healthcheck:
interval: 10s
image: docksal/mariadb:10.6-1.3
logging:
options:
max-file: "10"
max-size: 1m
networks:
default: null
ports:
- mode: ingress
target: 3306
protocol: tcp
volumes:
- type: volume
source: db_data
target: /var/lib/mysql
volume: {}
- type: volume
source: project_root
target: /var/www
read_only: true
volume:
nocopy: true
web:
depends_on:
cli:
condition: service_started
dns:
- 192.168.64.100
- 8.8.8.8
environment:
APACHE_BASIC_AUTH_PASS: null
APACHE_BASIC_AUTH_USER: null
APACHE_DOCUMENTROOT: /var/www/web
APACHE_FCGI_HOST_PORT: cli:9000
extends:
file: /home/minsharm/.docksal/stacks/services.yml
service: apache
hostname: web
healthcheck:
interval: 10s
image: docksal/apache:2.4-2.5
labels:
io.docksal.cert-name: none
io.docksal.permanent: "false"
io.docksal.project-root: /home/minsharm/Downloads/drupal_website
io.docksal.virtual-host: drupal-website.docksal,*.drupal-website.docksal,drupal-website.docksal.*
logging:
options:
max-file: "10"
max-size: 1m
networks:
default: null
volumes:
- type: volume
source: project_root
target: /var/www
read_only: true
volume:
nocopy: true
networks:
default:
name: drupal_website_default
volumes:
cli_home:
name: drupal_website_cli_home
db_data:
name: drupal_website_db_data
docksal_ssh_agent:
name: docksal_ssh_agent
external: true
project_root:
name: drupal_website_project_root
driver: local
driver_opts:
device: /home/minsharm/Downloads/drupal_website
o: bind
type: none
---------------------
███ DOCKSAL
Docksal version: v1.17.0
fin version: 1.110.1
███ OS
Linux Red Hat Enterprise Linux 8.7
Linux minsharm.remote.csb 4.18.0-425.3.1.el8.x86_64 #1 SMP Fri Sep 30 11:45:06 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
███ ENVIRONMENT
MODE : Linux Kernel
DOCKER_HOST :
███ DOCKER
Expected client version: 20.10.12
Expected server version: 20.10.12
Installed versions:
Client: Docker Engine - Community
Cloud integration: v1.0.29
Version: 20.10.22
API version: 1.41
Go version: go1.18.9
Git commit: 3a2c30b
Built: Thu Dec 15 22:28:05 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.22
API version: 1.41 (minimum version 1.12)
Go version: go1.18.9
Git commit: 42c8b31
Built: Thu Dec 15 22:26:16 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.14
GitCommit: 9ba4b250366a5ddde94bb7c9d1def331423aa323
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
███ DOCKER COMPOSE
Expected version: 2.1.0
Installed version: v2.1.0
███ DOCKSAL: PROJECTS
project STATUS virtual host project root
drupal_website Up 6 hours (healthy) drupal-website.docksal,*.drupal-website.docksal,drupal-website.docksal.* /home/minsharm/Downloads/drupal_website
drupal_website1 Exited (0) 20 hours ago drupal-website1.docksal,*.drupal-website1.docksal,drupal-website1.docksal.* /home/minsharm/Documents/drupal_website1
downloads Exited (0) 2 months ago downloads.docksal,*.downloads.docksal,downloads.docksal.* /home/minsharm/Downloads
███ DOCKSAL: VIRTUAL HOSTS
*.drupal-website.docksal
drupal-website.docksal.*
drupal-website.docksal
███ DOCKSAL: NETWORKING
DOCKSAL_IP: 192.168.64.100
DOCKSAL_HOST_IP: 192.168.64.1
DOCKSAL_VHOST_PROXY_IP:
DOCKSAL_DNS_IP:
DOCKSAL_DNS_DISABLED: 0
DOCKSAL_NO_DNS_RESOLVER: 0
DOCKSAL_DNS_UPSTREAM:
DOCKSAL_DNS_DOMAIN: docksal
███ DOCKSAL: CONNECTIVITY
Host to 192.168.64.100: PASS
Container to 192.168.64.100: PASS
Container to 192.168.64.1: PASS
Checking connectivity to http://dns-test.docksal...
Host: PASS
Containers: FAIL
███ DOCKER: RUNNING CONTAINERS
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0537085a4a49 docksal/apache:2.4-2.5 "httpd-foreground" 6 hours ago Up 6 hours (healthy) 80/tcp, 443/tcp drupal_website_web_1
c9bc05669e6d docksal/cli:php8.1-3.2 "/opt/startup.sh sup…" 6 hours ago Up 6 hours (healthy) 22/tcp, 3000/tcp, 9000/tcp drupal_website_cli_1
b41aa71709ca docksal/mariadb:10.6-1.3 "docker-entrypoint.s…" 6 hours ago Up 6 hours (healthy) 0.0.0.0:32769->3306/tcp drupal_website_db_1
9de58b3013cd docksal/ssh-agent:1.4 "docker-entrypoint.s…" 2 weeks ago Up 20 hours (healthy) docksal-ssh-agent
c7ce9926f107 docksal/dns:1.2 "docker-entrypoint.s…" 2 weeks ago Up 20 hours (healthy) 192.168.64.100:53->53/udp docksal-dns
89eb7926a675 docksal/vhost-proxy:1.8 "docker-entrypoint.s…" 2 weeks ago Up 20 hours (healthy) 192.168.64.100:80->80/tcp, 192.168.64.100:443->443/tcp docksal-vhost-proxy
███ DOCKER: NETWORKS
NETWORK ID NAME DRIVER SCOPE
91f4a14850e3 _default bridge local
74fce51de42d bridge bridge local
14aee58d5925 ddev-drush_default bridge local
65b5cbce275d ddev_default bridge local
c6a990a6e748 downloads_default bridge local
ebcacefdd5a2 drupal_website1_default bridge local
4f79bf6091bd drupal_website_default bridge local
5868b632ab1c host host local
5a3b3863626b none null local
███ HDD Usage
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 294M 16G 2% /dev/shm
tmpfs 16G 2.5M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/RHELCSB-Root 49G 24G 26G 49% /
/dev/mapper/RHELCSB-Home 100G 16G 85G 16% /home
/dev/nvme0n1p2 3.0G 461M 2.6G 16% /boot
/dev/nvme0n1p1 200M 19M 182M 10% /boot/efi
tmpfs 3.1G 68K 3.1G 1% /run/user/4210635
I'm experimenting with forcing a container to use more memory than it's allowed, but I can't get it to work. The container is part of a stack defined with Docker Compose and is deployed to Docker in swarm mode.
Docker is allowing the container to go way above the 50M limit I've set. I was expecting Docker to kill the container, throw an error, etc.
Can anyone help me understand why Docker does not enforce the memory limit here?
The container in docker-compose.yml is defined with a memory limit of 50M, and I have set up a very simple PHP test that tries to allocate 200M. The PHP memory limit is set to 128M.
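For reference, an allocation test along these lines can be reproduced from the host with a one-liner. The container name filter below is an assumption based on the task name shown in the docker stats output further down; the original index.php is not included in the question:

```shell
# Sketch: try to allocate ~200M inside the app container while PHP's
# memory_limit is 128M. If the cgroup limit is not enforced, PHP raises
# its own fatal error; if it is enforced, the kernel OOM killer steps in.
docker exec "$(docker ps -q -f name=stackdemo_app)" \
  php -d memory_limit=128M -r '$a = str_repeat("x", 200 * 1024 * 1024);'
```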
This is my docker-compose.yml
version: "3"
services:
nginx:
image: nginx:latest
restart: unless-stopped
volumes:
- ./deploy/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./public:/usr/share/nginx/html
ports:
- "8180:80"
links:
- app
app:
image: 127.0.0.1:5000/wpdemo
build:
context: .
dockerfile: Dockerfile-app
restart: unless-stopped
volumes:
- .:/var/www/html
links:
- mysql
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
mysql:
image: mysql:5.7
restart: unless-stopped
ports:
- "13306:3306"
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
volumes:
- ~/docker/volumes/mysql:/var/lib/mysql
Instead of killing the container, Docker allows it to take as much memory as it wants, and PHP eventually stops the process with the error below:
"PHP message: PHP Fatal error: Allowed memory size of 125829120 bytes exhausted (tried to allocate 67108872 bytes) in /var/www/html/public/index.php on line 4"
I'm using Ubuntu 18.04.
uname -a
Linux 4.18.10-041810-generic #201809260332 SMP Wed Sep 26 07:34:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.17.1, build unknown
docker-py version: 2.5.1
CPython version: 2.7.15rc1
OpenSSL version: OpenSSL 1.1.0g 2 Nov 2017
This is the output of "docker stats" on the app container:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
679c8495ac1d stackdemo_app.1.hr3ufwlskhdafre39aqrshxyu 0.00% 43.81MiB / 50MiB 87.62% 106kB / 389kB 2.05GB / 10.6GB 5
This is the output of "docker info":
Containers: 36
Running: 5
Paused: 0
Stopped: 31
Images: 450
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: wnegv5lp41wfs3epfrua489or
Is Manager: true
ClusterID: hq7o176yffjglxzb9pu3fiomr
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.1.120
Manager Addresses:
192.168.1.120:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.18.10-041810-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.49GiB
Name: rafxps15
ID: QEX7:FEB3:J76L:DCAQ:SO4S:SWVE:4XPI:PI6R:YM4C:MV4I:C3PM:FLOQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
As you said in the comments, swap is enabled on the host, but swap limit support is not enabled in your cgroup configuration (note the "No swap limit support" warning at the end of your docker info output).
According to this, you need to enable swap limit support. Note that a reboot of the system is essential.
Finally, the --memory-swap flag should be set. If you want to prevent your PHP app from accessing swap, you should set it to the same value as --memory. More details are in the memory swap settings documentation.
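On Ubuntu hosts, swap limit support is enabled via kernel boot parameters. A minimal sketch of the procedure (the GRUB flags below follow the Docker documentation for Debian/Ubuntu; a reboot is required):

```shell
# In /etc/default/grub, add swap accounting to the kernel command line:
#   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
sudo update-grub
sudo reboot

# After the reboot, memory-swap limits are enforced. Plain docker run is
# shown for illustration, since --memory-swap cannot be set through a
# swarm stack's deploy.resources section:
docker run --rm --memory=50m --memory-swap=50m php:7-cli \
  php -r '$a = str_repeat("x", 200 * 1024 * 1024);'
```

With --memory-swap equal to --memory, the container gets no swap, so the allocation should be stopped by the kernel's OOM killer rather than by PHP.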
I had a small working Docker swarm on Google Cloud Platform.
There are just two nodes: one with nginx and PHP, the other with MySQL.
Right now it seems that from the master node I can't connect to MySQL on the worker node.
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
The same problem also occurs with ping from a shell inside the container.
I used the --advertise-addr flag when initializing the swarm:
docker swarm init --advertise-addr 10.156.0.3
Then I successfully joined the swarm from the second node:
docker swarm join --token my-token 10.156.0.3:2377
The deploy is also successful:
docker stack deploy --compose-file docker-compose.yml test
Creating network test_default
Creating service test_mysql
Creating service test_web
Creating service test_app
(In docker-compose.yml there is no network definition; I'm using the Docker default.)
Nodes:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
oz1ebgrp1a68brxi0nd1gdr2k mysql-001 Ready Active 18.03.1-ce
ndy11zyxi0wym8mjmgh8op1ni * app-001 Ready Active Leader 18.03.1-ce
docker stack ps test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
9afwjgtpy8lc test_app.1 127.0.0.1:5000/app:latest app-001 Running Running 8 minutes ago
mgajupmcai0t test_web.1 127.0.0.1:5000/web:latest app-001 Running Running 8 minutes ago
s17jvkukahl7 test_mysql.1 mysql:5.7 mysql-001 Running Running 8 minutes ago
docker networks:
NETWORK ID NAME DRIVER SCOPE
9084b39892f4 bridge bridge local
ofqtewx039fl test_default overlay swarm
5cc9d4554bea docker_gwbridge bridge local
97fbd06a23b5 host host local
x8f408klk2ms ingress overlay swarm
ca1b849ea73a none null local
Here is my docker info
Containers: 12
Running: 3
Paused: 0
Stopped: 9
Images: 35
Server Version: 18.03.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: ndy11zyxi0wym8mjmgh8op1ni
Is Manager: true
ClusterID: q23l1v6dav3u4anqqu51nwx0r
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
Total Memory: 14.09GiB
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.156.0.3
Manager Addresses:
10.156.0.3:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.13.0-1019-gcp
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 14.09GiB
Name: app-001
ID: IWKK:NWRJ:HKAQ:3JSQ:7H3L:2WXC:IIJ7:OEKB:4ARR:T7FY:VAWR:HOPL
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
This swarm was working fine a few weeks ago. I didn't need the application for a few weeks, so I turned off all the machines. Meanwhile, swarm-node.crt expired, so today when I turned the machines back on, I had to remove the service and the swarm and recreate them from scratch. The result is that I can't connect from a container on one node to a container on the other node.
Any help will be appreciated.
UPDATE:
Here is my docker-compose.yml:
version: '3'
services:
web:
image: 127.0.0.1:5000/web
build:
context: ./web
volumes:
- ./test:/var/www
ports:
- 80:80
links:
- app
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.hostname == app-001
app:
image: 127.0.0.1:5000/app
build:
context: ./app
volumes:
- ./test:/var/www
depends_on:
- mysql
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.hostname == app-001
mysql:
image: mysql:5.7
volumes:
- /mnt/disks/ssd-001/mysql:/var/lib/mysql
- /mnt/disks/buckets/common-storage-001/backup/mysql:/backup
environment:
- "MYSQL_DATABASE=test"
- "MYSQL_USER=test"
- "MYSQL_PASSWORD=*****"
- "MYSQL_ROOT_PASSWORD=*****"
command: mysqld --key-buffer-size=32M --max-allowed-packet=16M --myisam-recover-options=FORCE,BACKUP --tmp-table-size=32M --query-cache-type=0 --query-cache-size=0 --max-heap-table-size=32M --max-connections=500 --thread-cache-size=50 --innodb-flush-method=O_DIRECT --innodb-log-file-size=512M --innodb-buffer-pool-size=16G --open-files-limit=65535
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.hostname == mysql-001
I am currently working with Docker Swarm.
I have made a 2-node cluster with Docker Swarm on bare-metal servers.
I have tried running individual containers on each node, and they run fine. But when I write a docker-compose.yml file to run replicas, it gives errors. My docker-compose.yml is here:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: brainplow/shopnroar:latest
deploy:
replicas: 2
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "8000:8080"
networks:
- webnet
web-dhaar:
# replace username/repo:tag with your name and image details
image: brainplow/dhaar:latest
deploy:
replicas: 2
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "9090:9090"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8010:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- /home/docker/data:/data
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
When I run docker service ls, it gives the following:
ID NAME MODE REPLICAS IMAGE PORTS
h6irzq0swdat getstartedlab_redis replicated 0/1 redis:latest *:6379->6379/tcp
tuchpcxd159x getstartedlab_visualizer replicated 0/1 dockersamples/visualizer:stable *:8010->8080/tcp
mt5jxxfty0om getstartedlab_web replicated 0/2 brainplow/shopnroar:latest *:8000->8080/tcp
igz1ceqtawkk getstartedlab_web-dhaar replicated 0/2 brainplow/dhaar:latest *:9090->9090/tcp
hdya9obuk7ok redis replicated 0/5 myservice:latest
When I run docker service ps igz1ceqtawkk, it gives me this error:
zg7ostkj1ycn \_ redis.4 myservice:latest Masternode Shutdown Rejected 4 minutes ago "No such image: myservice:late…"
xxscungxgmbv \_ redis.4 myservice:latest Slavenode Shutdown Rejected 4 minutes ago "No such image: myservice:late…"
6i5qq5msn6ig redis.5 myservice:latest Masternode Ready Rejected 2 seconds ago "No such image: myservice:late…"
zsvxwm9nsjj6 \_ redis.5 myservice:latest Masternode Shutdown Rejected 32 seconds ago "No such image: myservice:late…"
yshbkh62eb7x \_ redis.5 myservice:latest Slavenode Shutdown Rejected about a minute ago "No such image: myservice:late…"
zat104nz0evk \_ redis.5 myservice:latest Slavenode Shutdown Rejected 3 minutes ago "No such image: myservice:late…"
zd4rcb9eeqbb \_ redis.5 myservice:latest Slavenode Shutdown Rejected 3 minutes ago "No such image: myservice:late…"
And sometimes this:
zy72uf810mka \_ getstartedlab_web.5 brainplow/shopnroar:latest Masternode Shutdown Failed 21 minutes ago "starting container failed: su…"
zzpe0lwoe7cd \_ getstartedlab_web.5 brainplow/shopnroar:latest Masternode Shutdown Failed 30 minutes ago "starting container failed: su…"
zt3eu0jb2uou \_ getstartedlab_web.5 brainplow/shopnroar:latest Slavenode Shutdown Failed 41 minutes ago "starting container failed: su…"
zxesxvq2vumv \_ getstartedlab_web.5 brainplow/shopnroar:latest Masternode Shutdown Failed 2 hours ago "starting container failed: su…"
Can anybody tell me why this is happening?
Here is my docker info:
Containers: 2430
Running: 0
Paused: 0
Stopped: 2430
Images: 5
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: wf9o88zy9w4xek011esw9oaa5
Is Manager: true
ClusterID: at6z6315v8d8zs43u1u3dqqca
Managers: 1
Nodes: 2
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 158.69.23.109
Manager Addresses:
158.69.23.109:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.33-mod-std-ipv6-64
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.04GiB
Name: Masternode
ID: 3GY6:RP6D:W3SU:FAJV:ZOJP:THFX:TYDF:UZKB:3FJN:WJKC:I23H:MBHO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: brainplow
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
And the output of docker node ls:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
wf9o88zy9w4xek011esw9oaa5 * Masternode Ready Active Leader
n3n38nos3kzcq4g4gl8l0s48c Slavenode Ready Active
Can anybody tell me why my containers are not running in the service? I am stuck on this.
And my docker version:
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:18 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: false
Thanks in advance. I really need help on this.
The problem is solved now. It was due to the kernel version. I was working on a bare-metal cloud server and installed Ubuntu 16.04 on it, but I forgot to install the kernel that comes with this version.
After reinstalling Ubuntu with its own kernel, the problem was solved.
I have been playing around with docker-in-docker (dind) setups and am running into a weird problem.
If I run a docker container separately inside dind and expose a port, I can connect to the port without any problems. For example, using the Docker swarm visualizer inside dind:
/home/dockremap # docker run -d -p 8080:8080 dockersamples/visualizer:stable
/home/dockremap # wget localhost:8080
Connecting to localhost:8080 (127.0.0.1:8080)
index.html 100% |*********************** ....
However, if I run the same thing inside a swarm by deploying from a compose file, it doesn't work.
Here is what my compose file looks like:
version: "3"
services:
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
and the commands I run:
/home/dockremap # docker swarm init
/home/dockremap # docker stack deploy -c compose.yaml test
Now when I do wget, I get a connection refused error:
/home/dockremap # wget localhost:8080
Connecting to localhost:8080 (127.0.0.1:8080)
wget: can't connect to remote host (127.0.0.1): Connection refused
Should this sort of thing work by default in dind, or is there something I need to configure? I am using Docker 17.03.1-ce on Windows, and here is what I get when I run docker info in dind:
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 1
Server Version: 17.05.0-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: wz2r6iuyqztg3ivyk9fwsn976
Is Manager: true
ClusterID: mshadtrs0b1oayva2vrquf67d
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 172.17.0.2
Manager Addresses:
172.17.0.2:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.4.59-boot2docker
Operating System: Alpine Linux v3.5 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 987.1MiB
Name: 7e480e7313ae
ID: EL7P:NI2I:TOR4:I7IW:DPAB:WKYU:6A6J:NCC7:3K3E:6YVH:PYVB:2L2W
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
I created a Docker Swarm cluster on 2 Linux machines, but when I use docker-compose up -d to start the containers, an error occurs.
This is my docker info:
Containers: 4
Running: 4
Paused: 0
Stopped: 0
Images: 28
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 2
ozcluster01: 192.168.168.41:2375
└ ID: CKCO:JGAA:PIOM:F4PL:6TIH:EQFY:KZ6X:B64Q:HRFH:FSTT:MLJT:BJUY
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 3.79 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.13.1.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-11-04T02:05:08Z
└ ServerVersion: 1.10.3
ozcluster02: 192.168.168.42:2375
└ ID: 73GR:6M7W:GMWD:D3DO:UASW:YHJ2:BTH6:DCO5:NJM6:SXPN:PXTY:3NHI
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 64 MiB / 3.79 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.10.0-327.10.1.el7.x86_64, operatingsystem=CentOS Linux 7 (Core), storagedriver=devicemapper
└ UpdatedAt: 2016-11-04T02:05:06Z
└ ServerVersion: 1.10.3
This is my docker-compose.yml
version: '2'
services:
rabbitmq:
image: rabbitmq
ports:
- "5672:5672"
- "15672:15672"
network_mode: "bridge"
config-service:
image: ozms/config-service
ports:
- "8888:8888"
volumes:
- ~/ozms/configs:/var/tmp/
- ~/ozms/log:/log
network_mode: "bridge"
labels:
- "affinity:image==ozms/config-service"
eureka-service:
image: ozms/eureka-service
ports:
- "8761:8761"
volumes:
- ~/ozms/log:/log
links:
- config-service
- rabbitmq
environment:
- SPRING_RABBITMQ_HOST=rabbitmq
network_mode: "bridge"
After I run docker-compose up -d, the services rabbitmq and config-service start up, but eureka-service causes an error:
[dannil#ozcluster01 ozms]$ docker-compose up -d
Creating ozms_config-service_1
Creating ozms_rabbitmq_1
Creating ozms_eureka-service_1
ERROR: Unable to find a node that satisfies the following conditions
[port 8761 (Bridge mode)]
[available container slots]
[--link=ozms_config-service_1:config-service --link=ozms_config-service_1:config-service_1 --link=ozms_config-service_1:ozms_config-service_1 --link=ozms_rabbitmq_1:ozms_rabbitmq_1 --link=ozms_rabbitmq_1:rabbitmq --link=ozms_rabbitmq_1:rabbitmq_1]
And this is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
871afc8e1eb6 rabbitmq "docker-entrypoint.sh" 2 minutes ago Up 2 minutes 4369/tcp, 192.168.168.41:5672->5672/tcp, 5671/tcp, 25672/tcp, 192.168.168.41:15672->15672/tcp ozcluster01/ozms_rabbitmq_1
8ef3f666a7b9 ozms/config-service "java -Djava.security" 2 minutes ago Up 2 minutes 192.168.168.42:8888->8888/tcp ozcluster02/ozms_config-service_1
I find that rabbitmq starts up on machine ozcluster01 and config-service starts up on machine ozcluster02.
When docker-compose starts config-service, there are no links, so it starts up successfully.
But when it starts eureka-service on machine ozcluster02, there are links to rabbitmq and config-service, and since rabbitmq is on machine ozcluster01, the error occurs.
What can I do to resolve the problem?
Is it right to use network_mode: "bridge" in a Docker Swarm cluster?
I resolved the problem myself.
In swarm mode, Docker containers can't contact other containers with network_mode: bridge.
In swarm mode, one must use an overlay network. Overlay is used by default if you are using compose file format version 2.
See more detail:
Setting up a Docker Swarm with network overlay
With overlay mode, the docker-compose.yml file does not need the links config; containers can contact other containers via ${service_name_in_composeFile}.
Example:
I can enter the container config-service and run $ ping eureka-service, and it works fine!
This is my compose-file.yml:
version: '2'
services:
rabbitmq:
image: rabbitmq
ports:
- "5672:5672"
- "15672:15672"
config-service:
image: ozms/config-service
ports:
- "8888:8888"
volumes:
- ~/ozms/configs:/var/tmp/
- ~/ozms/log:/log
labels:
- "affinity:image==ozms/config-service"
eureka-service:
image: ozms/eureka-service
ports:
- "8761:8761"
volumes:
- ~/ozms/log:/log
#links: not needed in overlay mode
# - config-service
# - rabbitmq
environment:
- SPRING_RABBITMQ_HOST=rabbitmq
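The service-name discovery described above can be sanity-checked from inside a running container. A minimal sketch, assuming Compose's default <project>_<service>_1 container naming (the exact names on your host may differ):

```shell
# Find the config-service container and resolve sibling services by name.
# The name filter is an assumption based on Compose's naming convention.
CID=$(docker ps -q -f name=config-service | head -n 1)
docker exec -it "$CID" ping -c 1 eureka-service
docker exec -it "$CID" ping -c 1 rabbitmq
```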