I've been tasked with converting our standalone DB server that runs the OSTicket database into a MariaDB Galera cluster, to add a level of resiliency to the service. I've got the cluster running and have set the MYSQL_HOST variable to one of the nodes, which works fine; however, I am unable to get it to 'speak' to the whole cluster, so currently it's not really doing the job required.
We have an haproxy docker instance we call swarm-maria, which does other bits for our application and which we have tried to point OSTicket at (swarm-maria_proxy in the MYSQL_HOST variable) on an unused port, but no dice. We then tried to add a 'link' as mentioned on the docker page for OSTicket (swarm-maria_proxy:mysql) and comment out the MYSQL_HOST variable, but then it complains that it doesn't have said variable...
Are there any 'standard practice' ways of doing this?
It's been a few days since I started trying to get a docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MongoDB instead of MySQL.
Of course I started by looking on Docker Hub and trying to compose an image from others, and googled for configs. The simplest one couldn't get past the stage of setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, even though the mentioned variables were set in the yml.
I tried to start with just the centos/mongodb image, install apache, php and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run apache within a docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
But I can't attach to the container and can't reach localhost with the default settings, though the compose stage apparently goes OK.
Does anyone, by chance, have a working set of docker files / docker-compose?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, write a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services into one container but to have a single container for each single service. All the Docker tools are aligned with this.
For your LAMP stack to start, you just have to download docker-compose, create a docker-compose.yml file with three services defined, and run docker-compose up.
Docker Compose is an orchestration tool for containers, suited to a single machine.
It's worth taking at least a small tour of this tool; as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    # ... here goes the apache config ...
  db:
    image: mongo
    # ... here goes the db config ...
  php:
    image: php
    # ... here goes the php config ...
After you start this with docker-compose up, a network is created automatically for you and all services join it. They will see each other under their service names (let's say, to connect to the database from php you would use db as the host name).
To connect to these services from the host PC, you will need to expose ports explicitly.
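Putting those pieces together, a minimal runnable sketch might look like this (the port mappings and the php:fpm tag are assumptions, not taken from the question):

version: '3'
services:
  apache:
    image: bitnami/apache:latest
    ports:
      - "8080:8080"   # expose Apache to the host; the bitnami image listens on 8080 inside the container
  db:
    image: mongo
    # no ports needed: php reaches it inside the compose network as mongodb://db:27017
  php:
    image: php:fpm
    # php-fpm listens on port 9000 inside the network; apache must be configured to proxy to php:9000

With this file, docker-compose up starts all three containers on one network, and only Apache is reachable from the host.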
I have been trying to use docker-compose to spin up a postgres container, with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!) - one container dies or is killed, and another takes its place without losing the previously persisted data.
As I understand it, "named volumes" are supposed to replace "Data Volume Containers".
However, so far either one of two things happens:
The postgres container fails to start up, with error message "ERROR: Container command not found or does not exist."
I achieve persistence only for that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. Which would be fine, if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up, and my data is in the state it was left in by the down command).
In general, a few things:
Don't use the PGDATA environment variable with the official postgres image
If you are using Spring Boot (like I was) with Docker Compose (as I was) and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed to Spring as-is, quotes included, resulting in a non-existent profile being used as the active profile (see the sketch below).
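For illustration, a minimal sketch of the app service's environment section (the service name app, the image myapp, and the profile name docker are all made-up examples):

app:
  image: myapp
  links:
    - db
  environment:
    - SPRING_PROFILES_ACTIVE=docker
    # NOT - SPRING_PROFILES_ACTIVE="docker": the quotes are passed through to Spring,
    # which then looks for a profile literally named "docker", quotes included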
I had some subtle and strange things configured incorrectly initially, but I suspect the killer was point 2 above - it caused my app, when running in a container, to use the in-mem H2 database instead of the linked container database. So everything functioned (almost) perfectly - until container shutdown. And, when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
You need to tell Compose that it should manage creation of the Volume, otherwise it assumes it should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
I created a simple compose config to try Postgres BDR replication.
I expect the containers to get the service names I defined as their hostnames, and I expect one container to be able to resolve and reach another via that hostname. I expect this to work because of this:
https://docs.docker.com/compose/networking/
My config:
version: '2'
services:
  bdr1:
    image: bdr
    volumes:
      - /var/lib/postgresql/data1:/var/lib/postgresql/data
    ports:
      - "5001:5432"
  bdr2:
    image: bdr
    volumes:
      - /var/lib/postgresql/data2:/var/lib/postgresql/data
    ports:
      - "5002:5432"
But in reality both containers get rubbish hostnames and are not reachable by those names:
Creating network "bdr_default" with the default driver
Creating bdr_bdr1_1
Creating bdr_bdr2_1
Attaching to bdr_bdr1_1, bdr_bdr2_1
bdr1_1 | Hostname: 938e0585fee2
bdr2_1 | Hostname: 7153165f4d5b
Is it a bug, or did I do something wrong?
I use Ubuntu 14.04.4 LTS, Docker version 1.10.1, build 9e83765, docker-compose version 1.6.0, build d99cad6
docker-compose gives you the option of scaling services up or down, meaning you can launch multiple instances of the same service. That is at least one reason why the hostnames are not simply the service names. You will notice that if you scale bdr1 to 2 instances, you will have bdr_bdr1_1 and bdr_bdr1_2 containers.
You can work around this inside the containers that were started up by docker-compose in at least two ways:
If a service refers to another service, you can use the links section, for example make bdr1 link to bdr2. In this case, when you are inside bdr1 you can reach the host bdr2 by name. I have not tried what happens when you scale up bdr2 in this case.
You can force a container's internal hostname to the name you want by using the hostname section. For example, if you add hostname: bdr1 to bdr1, then you can internally connect to bdr1, which is itself. (See the sketch after this list.)
You can possibly achieve a similar result with the networks section, but I have not yet used it myself, so I don't know for sure.
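A minimal sketch of the first two workarounds applied to the compose file from the question (only bdr1 links to bdr2, since circular links are not allowed):

version: '2'
services:
  bdr1:
    image: bdr
    hostname: bdr1     # workaround 2: force the internal hostname
    links:
      - bdr2           # workaround 1: bdr1 can now reach bdr2 by name
  bdr2:
    image: bdr
    hostname: bdr2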
The hostname inside the container should be the short container id, so this is correct (note there was a bug with Compose 1.6.0 and the short container id, so you should use at least version 1.6.2). Also, /etc/hosts is no longer used; there is now an embedded DNS server that handles resolving names to container IP addresses.
The container is discoverable by other containers with 3 names: the container name, the container short id, and the service name.
However, the other container may not be available immediately when the first one starts. You can use depends_on to set the startup order.
If you are testing the discovery, try using ping, and make sure to retry, because the name may not resolve immediately.
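For example, a sketch of depends_on with the services from the question (note that depends_on only orders container startup; it does not wait for Postgres inside bdr1 to be ready):

version: '2'
services:
  bdr1:
    image: bdr
  bdr2:
    image: bdr
    depends_on:
      - bdr1     # start bdr1's container before bdr2's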
I have a 2-machine swarm cluster. I have installed the simple Docker compose demo from here on one of the machines. However, when I try to scale the application with the docker-compose scale web=5 command, it only scales on the current machine and does not create any of the new web containers on the other machine in the swarm cluster, as I expected it would.
In every example I've seen from others, the scale command just works, and nothing was mentioned about additional configuration needed to get it to scale across multiple nodes.
Not sure what else to try. I get the same result when running the scale command from either machine.
Please let me know what further information I can provide.
I see now there were two issues causing my scale commands to fail; however, it is still not working even with proper multi-host networking set up.
When scaling a container from a compose application that was linked to another container in the same compose app - this was failing because I was joining the containers with the deprecated(?) "links" functionality rather than the new multi-host networking functionality. Apparently, "links" only works on a single machine and cannot scale across multiple machines. (I'm fairly sure this is the case, but I could be wrong.)
When attempting to scale an unlinked container - this was actually working as expected. I had forgotten I had other containers running on the machine I was expecting Docker to scale my container out to, so the Swarm scheduler just put the newly scaled containers onto the current machine, since it was the least utilized. (This was on a 2-machine swarm cluster.)
EDIT - Actual Solution
Okay, it looks like the final problem was that I cannot scale the part of the compose app that uses build to create its image rather than specifying the image with image.
I suppose this makes sense, because the machine it is trying to scale that container to doesn't have the build files available to create the image, but I had assumed Docker Compose/Swarm would be smart enough to figure that out and somehow copy them across machines.
So the solution is to build that image beforehand with docker build, push it to the public Docker Hub or your own private registry, and have the compose file specify the image with image rather than trying to create it with build.
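As a sketch, assuming a hypothetical image name yourname/web: run docker build -t yourname/web . and docker push yourname/web on the machine that has the build context, then change the compose file like so:

web:
  image: yourname/web   # pre-built and pushed, instead of build: .
  links:
    - redis
  expose:
    - "5000"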
One thing you could do is label the web containers (such as com.mydomain.myapp.category=web) and make a soft anti-affinity rule on the label (such as affinity:com.mydomain.myapp.category!=~web). This tells Swarm to try to schedule each new container with com.mydomain.myapp.category=web onto a host that doesn't already contain such a container first (but to schedule it onto a host that does if there is no alternative).
The modified Docker Compose file in that repository would be something like:
web:
  build: .
  volumes:
    - .:/code
  links:
    - redis
  expose:
    - "5000"
  environment:
    - "affinity:com.mydomain.myapp.category!=~web"
  labels:
    - "com.mydomain.myapp.category=web"
redis:
  image: redis
lb:
  image: tutum/haproxy
  links:
    - web
  ports:
    - "80:80"
  environment:
    - BACKEND_PORT=5000
    - BALANCE=roundrobin
I'm using docker compose to run a MariaDB Galera Cluster, where each node is a docker container, but a MariaDB Galera Cluster needs a master node at start to initialize the database.
I'd like to choose the master container by mounting a file as a volume in the container, with a script at start which checks for this file. So I need docker-compose to mount the file only for the first container launched, and not for the containers created by running docker-compose scale.
Is this possible?
What you want to do is not directly possible; when using docker-compose scale you will get a suite of identical containers. You have several options available for selecting a primary node for your Galera cluster. Here are two; there are undoubtedly others:
Explicit primary
Have the primary be a single-instance container in your docker-compose.yaml file, and only scale the secondary containers.
galera_primary:
  image: myimage
  command: command_to_start_galera_master
galera_secondary:
  image: myimage
  links:
    - galera_primary
  command: command_to_start_galera_worker
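You would then bring the cluster up and scale only the secondaries, for example:

docker-compose up -d
docker-compose scale galera_secondary=3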
Dynamic primary
If you're willing to write some code, you could use etcd to perform master election, for example by taking advantage of its ability to atomically create keys.
I don't have a tested example of this handy, but the process should be relatively simple (a rough sketch follows the steps below):
Each node attempts to create a particular key in etcd
The node that succeeds is the master
Other nodes can query etcd for the address of the master
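A rough, untested sketch of that election using the etcd v2 CLI (the key name /galera/primary and the Galera startup flags are assumptions):

# run on each node at startup; etcdctl mk only succeeds if the key does not exist yet
if etcdctl mk /galera/primary "$(hostname -i)"; then
  # we won the election: bootstrap a new cluster
  exec mysqld --wsrep_cluster_address=gcomm://
else
  # someone else won: join the existing cluster at the primary's address
  PRIMARY=$(etcdctl get /galera/primary)
  exec mysqld --wsrep_cluster_address=gcomm://$PRIMARY
fi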