I am working on building a high-availability setup using keepalived, where each server will have its own set of Docker containers that get handled appropriately depending on whether it is in the BACKUP or MASTER state. However, for testing purposes, I don't have two boxes available that I can turn on and off for this. So, is there a good (preferably lightweight) way I can set up multiple Docker containers with the same name on the same machine?
Essentially, I would like it to look something like this:
Physical Server A
------------------------------------------
| Virtual Server A                       |
| -------------------------------------- |
| | keepalived - htmld - accessd - mysql |
| -------------------------------------- |
|                    ^                   |
|                    |                   |
|                    v                   |
| Virtual Server B                       |
| -------------------------------------- |
| | keepalived - htmld - accessd - mysql |
| -------------------------------------- |
------------------------------------------
Thanks
You cannot have multiple containers with the exact same name, but you can use docker-compose files in separate directories to get containers with the same service names (with the small differences I explain below).
You can read more about this in the Docker Docs; the explanation below follows what is described there.
Let us take your setup:
Physical Server A
------------------------------------------
| Virtual Server A                       |
| -------------------------------------- |
| | keepalived - htmld - accessd - mysql |
| -------------------------------------- |
|                    ^                   |
|                    |                   |
|                    v                   |
| Virtual Server B                       |
| -------------------------------------- |
| | keepalived - htmld - accessd - mysql |
| -------------------------------------- |
------------------------------------------
In your case, I would create two directories: vsa and vsb. Now let's go into these two directories. Each of them contains at least the following files (you can have more, per your requirements); a small shell sketch for creating this layout follows the diagram:
------------------------------------------
| /home/vsa/docker-compose.yml           |
| /home/vsa/keepalived/Dockerfile        |
| /home/vsa/htmld/Dockerfile             |
| /home/vsa/accessd/Dockerfile           |
| /home/vsa/mysql/Dockerfile             |
| -------------------------------------- |
|                    ^                   |
|                    |                   |
|                    v                   |
| /home/vsb/docker-compose.yml           |
| /home/vsb/keepalived/Dockerfile        |
| /home/vsb/htmld/Dockerfile             |
| /home/vsb/accessd/Dockerfile           |
| /home/vsb/mysql/Dockerfile             |
| -------------------------------------- |
------------------------------------------
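A minimal shell sketch to create this layout, assuming the /home/vsa and /home/vsb paths shown above (adjust them to your setup):
# create the per-"virtual server" build contexts
mkdir -p /home/vsa/{keepalived,htmld,accessd,mysql}
mkdir -p /home/vsb/{keepalived,htmld,accessd,mysql}
# each directory gets its own compose file and Dockerfiles
touch /home/vsa/docker-compose.yml /home/vsb/docker-compose.yml
touch /home/vsa/{keepalived,htmld,accessd,mysql}/Dockerfile
touch /home/vsb/{keepalived,htmld,accessd,mysql}/Dockerfile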
Note the file names exactly: Dockerfile starts with a capital D.
Let's look at docker-compose.yml:
version: '3.9'
services:
  keepalived:
    build: ./keepalived
    restart: always
  htmld:
    build: ./htmld
    restart: always
  accessd:
    build: ./accessd
    restart: always
  mysql:
    build: ./mysql
    restart: always
networks:
  default:
    external:
      name: some_network
volumes:
  some_name: {}
Let's dig into docker-compose.yml first:
The version key defines which Compose file format version to use. The services section defines the services (containers) you want to create and run.
I've used names like keepalived under services. You can use any name you want there; it's your choice.
Under keepalived, the build keyword specifies the path where a Dockerfile exists. Since the path is /home/vsa/keepalived, we use ., which means "here", followed by the keepalived directory, where Compose looks for the Dockerfile (in the docker-compose.yml for vsb, it looks for this file in /home/vsb/keepalived).
The networks part specifies the external network these containers use, so that when all of the containers from both compose files are running, they are on the same Docker network and can see and talk to each other. The name field holds the name some_network; you can choose any name you want, as long as that network has been created beforehand.
To create a network called some_network on Linux, run docker network create some_network before running the compose files.
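For example, a minimal sketch (some_network is just the placeholder name used above):
# create the shared external network once, before starting either stack
docker network create some_network
# verify that it exists
docker network ls | grep some_network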
The volumes part declares the named volumes available to these services.
And here is an example of the file called Dockerfile in the keepalived directory:
# see the Dockerfile reference in the Docker Docs for more info
FROM ubuntu:latest
# after the FROM instruction, you can use
# other available instructions to reach
# your own goal
Now let's go through the Dockerfile:
The FROM instruction specifies which base image to use. In this case we use ubuntu, so our image is built on top of Ubuntu.
There are other instructions; you can see them all in the Dockerfile reference mentioned above.
After you have finished both the Dockerfiles and the docker-compose.yml files with your own commands and keywords, you can build and run them with these commands:
docker-compose -f /home/vsa/docker-compose.yml up -d
docker-compose -f /home/vsb/docker-compose.yml up -d
Now we'll have eight containers named as follows (Docker names them automatically from the project and service names; otherwise you can name them explicitly yourself). A quick way to verify this follows the list:
vsa_keepalived
vsa_htmld
vsa_accessd
vsa_mysql
vsb_keepalived
vsb_htmld
vsb_accessd
vsb_mysql
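A sketch of how to check that both stacks are up side by side; the vsa_ and vsb_ prefixes come from the directory names, because Compose uses the directory name as the default project name:
# list all running containers and their status
docker ps --format 'table {{.Names}}\t{{.Status}}'
# or list each stack separately
docker-compose -f /home/vsa/docker-compose.yml ps
docker-compose -f /home/vsb/docker-compose.yml ps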
tl;dr? Jump straight to the Question ;)
Context & Architecture
In this application, designed with a micro-service architecture in mind, there are notably two components:
monitor: probes system metrics and reports them via HTTP
controller: reads the metrics reported by monitor and takes actions according to rules defined in a configuration file.
+------------------------------------------------------+
| host /                                               |
+-----/                                                |
|                                                      |
|   +-----------------+       +-------------------+    |
|   | monitor /       |       | controller /      |    |
|   +--------/        |       +-----------/       |    |
|   |  +----------+   |       |  +-------------+  |    |
|   |  | REST :80 |>--+-------+->| application |  |    |
|   |  +----------+   |       |  +-------------+  |    |
|   +-----------------+       +-------------------+    |
|                                                      |
+------------------------------------------------------+
Trouble with Docker
The only way I found for monitor to be able to read network statistics that are not confined to its own Docker network stack was to start its container with --network=host. The following question assumes this is the only solution. If (fortunately) I am mistaken, please do answer with an alternative.
version: "3.2"
services:
  monitor:
    build: monitor
    network_mode: host
  controller:
    build: controller
    network_mode: host
Question
Is there a way for monitor to serve its report on a docker network even though it reads statistics from the host network stack?
Or, is there a way for controller to not be on --network=host even though it connects to monitor which is?
(note: I use docker-compose but a pure docker answer suits me)
I use Articulate for building my chatbot: https://github.com/samtecspg/articulate
I get the error below when I execute docker-compose up:
api_1 |
api_1 | /usr/src/app/server/index.js:33
api_1 | throw err;
api_1 | ^
api_1 | [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; :: {"path":"/document/_mapping/document","query":{},"body":"{\"properties\":{\"document\":{\"type\":\"text\"},\"time_stamp\":{\"type\":\"date\"},\"maximum_saying_score\":{\"type\":\"float\"},\"maximum_category_score\":{\"type\":\"float\"},\"total_elapsed_time_ms\":{\"type\":\"text\"},\"rasa_results\":{\"type\":\"object\"},\"session\":{\"type\":\"text\"},\"agent_id\":{\"type\":\"integer\"},\"agent_model\":{\"type\":\"text\"}}}","statusCode":403,"response":"{\"error\":{\"root_cause\":[{\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"}],\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"},\"status\":403}"}
api_1 | at respond (/usr/src/app/node_modules/elasticsearch/src/lib/transport.js:308:15)
api_1 | at checkRespForFailure (/usr/src/app/node_modules/elasticsearch/src/lib/transport.js:267:7)
api_1 | at HttpConnector.<anonymous> (/usr/src/app/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)
api_1 | at IncomingMessage.wrapper (/usr/src/app/node_modules/lodash/lodash.js:4949:19)
api_1 | at IncomingMessage.emit (events.js:187:15)
api_1 | at IncomingMessage.EventEmitter.emit (domain.js:442:20)
api_1 | at endReadableNT (_stream_readable.js:1081:12)
api_1 | at process._tickCallback (internal/process/next_tick.js:63:19)
api_1 | [nodemon] app crashed - waiting for file changes before starting...
I've cleared 15 GB of disk space, but I still get this error.
The indices have probably become read-only. Elasticsearch automatically applies the read-only / allow-delete block to indices when the node crosses its flood-stage disk watermark, which matches the low-disk-space situation you describe. Once enough disk space is free again, remove the block with the following command:
curl -s -H 'Content-Type: application/json' -XPUT '[IP-server]:9200/_all/_settings?pretty' -d '{
  "index": {
    "blocks": { "read_only_allow_delete": "false" }
  }
}'
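After freeing disk space you can verify that the block is gone and keep an eye on disk usage; a quick sketch, again with [IP-server] as the placeholder:
# inspect the current index block settings (read_only_allow_delete should now be false)
curl -s '[IP-server]:9200/_all/_settings?pretty' | grep read_only_allow_delete
# check disk usage per node; Elasticsearch applies this block when the
# flood-stage disk watermark is exceeded
curl -s '[IP-server]:9200/_cat/allocation?v'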
I did an upgrade on my server for my MariaDB Docker container with:
docker-compose pull
docker-compose up -d
My version before:
Server version: 10.2.14-MariaDB-10.2.14+maria~jessie mariadb.org binary distribution
SHOW VARIABLES LIKE "%version%";
+-------------------------+--------------------------------------+
| Variable_name           | Value                                |
+-------------------------+--------------------------------------+
| innodb_version          | 5.7.21                               |
| protocol_version        | 10                                   |
| slave_type_conversions  |                                      |
| version                 | 10.2.14-MariaDB-10.2.14+maria~jessie |
| version_comment         | mariadb.org binary distribution      |
| version_compile_machine | x86_64                               |
| version_compile_os      | debian-linux-gnu                     |
| version_malloc_library  | system                               |
| version_ssl_library     | OpenSSL 1.0.1t 3 May 2016            |
| wsrep_patch_version     | wsrep_25.23                          |
+-------------------------+--------------------------------------+
My version now:
Server version: 10.3.9-MariaDB-1:10.3.9+maria~bionic mariadb.org binary distribution
+---------------------------------+------------------------------------------+
| Variable_name                   | Value                                    |
+---------------------------------+------------------------------------------+
| innodb_version                  | 10.3.9                                   |
| protocol_version                | 10                                       |
| slave_type_conversions          |                                          |
| system_versioning_alter_history | ERROR                                    |
| system_versioning_asof          | DEFAULT                                  |
| version                         | 10.3.9-MariaDB-1:10.3.9+maria~bionic     |
| version_comment                 | mariadb.org binary distribution          |
| version_compile_machine         | x86_64                                   |
| version_compile_os              | debian-linux-gnu                         |
| version_malloc_library          | system                                   |
| version_source_revision         | ca26f91bcaa21933147974c823852a2e1c2e2bd7 |
| version_ssl_library             | OpenSSL 1.1.0g 2 Nov 2017                |
| wsrep_patch_version             | wsrep_25.23                              |
+---------------------------------+------------------------------------------+
So it seems it was an upgrade from 10.2 to 10.3.
Upgrading from MariaDB 10.2 to MariaDB 10.3
Now I get the following error in docker-compose logs:
2018-09-28 13:03:38 0 [Warning] InnoDB: Table mysql/innodb_table_stats has length mismatch in the column name table_name. Please run mysql_upgrade
2018-09-28 13:03:38 0 [Warning] InnoDB: Table mysql/innodb_index_stats has length mismatch in the column name table_name. Please run mysql_upgrade
The database is working as expected. What can I do to get rid of these warnings?
While I was writing the question I was able to fix it myself. If you are also facing this problem:
Connect to the database container:
docker exec -u 0 -i -t CONTAINER_NAME /bin/bash
Run mysql_upgrade as suggested in the warning message:
mysql_upgrade --user=root --password=xxyy --host=localhost
Then restart the containers with:
docker-compose stop
docker-compose start
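The same fix can also be run in one shot from the host, roughly like this (CONTAINER_NAME and the password are the placeholders from above):
# run mysql_upgrade inside the MariaDB container without opening a shell first
docker exec -u 0 -it CONTAINER_NAME mysql_upgrade --user=root --password=xxyy --host=localhost
# then restart the stack
docker-compose stop && docker-compose start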
I may be asking a very beginner-level question, but I need a way to distinguish processes running under Docker from non-Docker processes on a box. The ps command output gives the impression that every process is simply running on the Linux box, and I cannot confirm whether a given one is running under the hood of Docker.
In the same context, is it possible/feasible for a process under Docker to be started with the Docker root file system?
Is this feasible, or is there any other solution for it?
You can identify Docker processes via the process tree on the Docker host (or on the VM if using Docker for Mac/Windows). For example, if the parent process of 2924 (haproxy) is 2902, and the parent of 2902 (haproxy-start) is 2881, then 2881 will be a docker-containerd process, which is itself managed by the dockerd process.
To view your process listing in a tree format, use ps -ejH or pstree (available in the psmisc package).
To get a quick list of what's running under dockerd:
/ # pstree $(pgrep dockerd)
dockerd-+-docker-containe-+-docker-containe-+-java---17*[{java}]
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-sinopia-+-4*[{V8 WorkerThread}]
        |                 |                 |         |-{node}
        |                 |                 |         `-4*[{sinopia}]
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-node-+-4*[{V8 WorkerThread}]
        |                 |                 |      `-{node}
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-tinydns
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-dnscache
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-apt-cacher-ng
        |                 |                 `-8*[{docker-containe}]
        |                 `-20*[{docker-containe}]
        |-2*[docker-proxy---6*[{docker-proxy}]]
        |-docker-proxy---5*[{docker-proxy}]
        |-2*[docker-proxy---4*[{docker-proxy}]]
        |-docker-proxy---8*[{docker-proxy}]
        `-28*[{dockerd}]
Show the parents of a PID (-s):
/ # pstree -aps 3744
init,1
  `-dockerd,1721 --pidfile=/run/docker.pid -H unix:///var/run/docker.sock --swarm-default-advertise-addr=eth0
      `-docker-containe,1728 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim ...
          `-docker-containe,3711 8d923b3235eb963b735fda847b745d5629904ccef1245d4592cc986b3b9b384a...
              `-java,3744 -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp/zookeeper/bin/../build/cl
                  |-{java},4174
                  |-{java},4175
                  |-{java},4176
                  |-{java},4177
                  |-{java},4190
                  |-{java},4208
                  |-{java},4209
                  |-{java},4327
                  |-{java},4328
                  |-{java},4329
                  |-{java},4330
                  |-{java},4390
                  |-{java},4416
                  |-{java},4617
                  |-{java},4625
                  |-{java},4629
                  `-{java},4632
Show all children of docker, including namespace changes (-S):
/ # pstree -apS $(pgrep dockerd)
dockerd,1721 --pidfile=/run/docker.pid -H unix:///var/run/docker.sock --swarm-default-advertise-addr=eth0
  |-docker-containe,1728 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim ...
  |   |-docker-containe,3711 8d923b3235eb963b735fda847b745d5629904ccef1245d4592cc986b3b9b384a...
  |   |   |-java,3744,ipc,mnt,net,pid,uts -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp/zookeeper/bin/../build/cl
  |   |   |   |-{java},4174
  |   |   |   |-{java},4175
  |   |   |   |-{java},4629
  |   |   |   `-{java},4632
  |   |   |-{docker-containe},3712
  |   |   `-{docker-containe},4152
  |   |-docker-containe,3806 49125f8274242a5ae244ffbca121f354c620355186875617d43876bcde619732...
  |   |   |-sinopia,3841,ipc,mnt,net,pid,uts
  |   |   |   |-{V8 WorkerThread},4063
  |   |   |   |-{V8 WorkerThread},4064
  |   |   |   |-{V8 WorkerThread},4065
  |   |   |   |-{V8 WorkerThread},4066
  |   |   |   |-{node},4062
  |   |   |   |-{sinopia},4333
  |   |   |   |-{sinopia},4334
  |   |   |   |-{sinopia},4335
  |   |   |   `-{sinopia},4336
  |   |   |-{docker-containe},3814
  |   |   `-{docker-containe},4038
  |   |-docker-containe,3846 2a756d94c52d934ba729927b0354014f11da6319eff4d35880a30e72e033c05d...
  |   |   |-node,3910,ipc,mnt,net,pid,uts lib/dnsd.js
  |   |   |   |-{V8 WorkerThread},4204
  |   |   |   |-{V8 WorkerThread},4205
  |   |   |   |-{V8 WorkerThread},4206
  |   |   |   |-{V8 WorkerThread},4207
  |   |   |   `-{node},4203
The commands lxc-ls and lxc-ps may be installable on your Linux distribution. They allow you to list the running LXC containers and the processes running within those containers, respectively. You should be able to link the output of lxc-ls with the output of lxc-ps and get a list of all containerized processes.
The big caveat is that you asked about Docker, and not every Docker instance runs on LXC, nor is it necessarily a localhost process. Docker defines an API that can be used to list remote Docker instances, so this technique will not help with enumerating processes on remote machines either.
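As a complementary check that does not rely on LXC tooling (a general technique, not something from the tools above), you can inspect a process's cgroup membership; processes started by Docker typically show cgroup paths containing docker or containerd:
# PID is the process id you want to check (1234 is just a placeholder)
PID=1234
grep -E 'docker|containerd' /proc/$PID/cgroup && echo "process $PID looks containerized"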
On Windows, Docker behaves a little differently.
Its processes are not run as children of a parent process, but as separate processes on the host.
They can be viewed, for example, with PowerShell:
Get-Process powershell
For example, the process listing on the host while a microsoft/iis container is running will include an additional powershell process (since the microsoft/iis container runs PowerShell as its main executable process).