I've been trying for a few days now to get a Docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MySQL swapped for MongoDB.
Of course I started by looking on Docker Hub and trying to compose an image from others, and Googled for configs. The simplest one couldn't get past the stage of setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, even though the mentioned variables were set in the yml.
I tried to start with just the centos/mongodb image, install Apache, PHP and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run Apache within a Docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
but I can't attach to the container and can't reach localhost with the default settings, though the compose stage apparently goes OK.
Does anyone, by chance, have a working set of Dockerfiles / a docker-compose file?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, write a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services into one container but to have a single container per service. All Docker tools are aligned with this.
To start your LAMP stack, you just have to download docker-compose, create a docker-compose.yml file with three services defined, and run docker-compose up.
Docker Compose is an orchestration tool for containers, suited for a single machine.
You need at least a small tour of this tool; as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    # .. here goes apache config ...
  db:
    image: mongo
    # .. here goes mongo config ...
  php:
    image: php
    # .. here goes php config ...
After you start this with docker-compose up, a network is created automatically for you and all services join it. They will see each other under their service names (say, to connect to the database from PHP you would use db as the host name).
To connect to this stuff from the host PC, you will need to expose ports explicitly.
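For instance, a minimal sketch with a port and credentials filled in (the MONGO_INITDB_* variables are the official mongo image's; bitnami/apache listens on 8080 by default; the php service would need more setup than shown):

version: '3'
services:
  apache:
    image: bitnami/apache:latest
    ports:
      - "8080:8080"   # reachable from the host at http://localhost:8080
  db:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=changeme
  php:
    image: php:fpm
    # PHP code would connect to MongoDB at mongodb://admin:changeme@db:27017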
Related
There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first (basically ignoring depends_on - or interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
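For reference, a minimal sketch of what such a wait script could look like (the host name postgres, port 5432, and the availability of a netcat with -z support are assumptions; tools like docker-compose-wait or wait-for-it.sh handle this more robustly):

#!/bin/sh
# Naive polling loop: block until the postgres port accepts TCP connections.
until nc -z postgres 5432; do
  echo "waiting for postgres..."
  sleep 1
done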
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. While the up --build option will cause the image to be rebuilt, the layer cache will skip all of the steps and produce the same image as before, and whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
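A common manual workaround for the shared-base-image case (a sketch with hypothetical names; Compose still does not guarantee build order between services) is to give the base its own service with both build: and image:, then build it explicitly before the rest:

services:
  base:
    build: ./base
    image: myorg/app-base:latest   # hypothetical tag; dependent Dockerfiles start with FROM myorg/app-base:latest
  app:
    build: ./app

# build the shared base first, then everything else
docker-compose build base
docker-compose build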
I have been researching docker compose a little bit.
From what I understand,
services:
  api:
    build: ./api
  db:
    image: <someimage>
With something like this (adding the other missing options), I should be able to access the db container from the api container using 'db' as the hostname.
This works on my local machine. However, I would like to know if this will still work on something like an ECS cluster.
Do I need to make any further changes in the code itself?
Example -> I might have this as an env variable in my api:
DB_URL=mysql://admin:12345@db/mydb
Do I need to change anything when I deploy it to an ECS cluster or will docker compose take care of it?
I have seen people using links and depends_on, but I am not quite clear on what it all does yet. I understand that depends_on just tells Docker that it has to wait for another container to start up first, but links don't seem to do anything locally.
I have this website which uses angular for the frontend and has a NodeJs backend. The backend serves the angular files and handles client calls.
As it is now, they are both packaged and deployed as one Docker image. Meaning, if I change the frontend, I also need to build the backend in order to create a new image. So it makes sense to separate them.
But if I create an image for the backend and frontend, how can the backend serve files from the frontend container?
Is this the right approach?
I think I would like to have the frontend inside a docker image, so I can do stuff like rollback easily (which is not possible with docker volumes for example)!
Yes! Containerizing them into their own containers is the way to go! This makes deploying/delivering faster and also separates the build pipelines, making the steps clearer to everyone involved.
I wouldn't bother having the backend serve the frontend files. I usually create my frontend image with a web server (e.g. nginx:alpine), since frontend and backend can be deployed separately to different machines or systems. And don't forget to use multi-stage builds to minimize image size.
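For example, a minimal multi-stage Dockerfile sketch for the Angular frontend (the Node version, build command, and output path dist/my-app are assumptions; adjust to your project):

# Stage 1: build the Angular bundle
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumed to emit static files into dist/my-app

# Stage 2: serve only the static files with nginx
FROM nginx:alpine
COPY --from=build /app/dist/my-app /usr/share/nginx/html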
But if you must do that, I guess you can use docker-compose to put them on one network and then forward requests for those static files from the backend to the frontend web server. (Just some hacks - there must be a better way to handle this from more advanced people here :P)
I have something similar: an Ember.js app running in one Docker container that connects to Node.js running in its own container (not to mention the DB that runs in a third container). It all works rather well.
I would recommend that you create your containers using docker-compose, which will automatically create the network so that both containers can talk to each other using <service name>:<port>.
I also set it up so that the code is mapped from a folder on my machine to a folder in the container. This allows me to easily change stuff, work with Git, etc...
Here is a snippet of my docker-compose file as an example:
version: "3"
services:
....
ember_gui:
image: danlynn/ember-cli
container_name: ember_dev
depends_on:
- node_server
volumes:
- ./Ember:/myapp
command: ember server
ports:
- "4200:4200"
- "7020:7020"
- "7357:7357"
Here I create an ember_gui service which creates a container named ember_dev based on an existing image from Docker Hub. Then it tells Docker that this container depends on another container that needs to be started first, which I do not show in the snippet but which is defined in the same docker-compose file (node_server). After that, I map the ./Ember directory to the /myapp folder in the container so that I can share the code. Finally I start the ember server and expose some ports.
I am trying to set up a Docker swarm which connects 3 of my servers together. The swarm is set up, and going to the URL I get the same result, which is perfect and just what I need.
However, for the setup I am working on now I am deploying a global nginx service on every server in order to allow load balancing.
Sitting on the server will be multiple config files which I need in order to map the domain to the correct folder; this is the part I am stuck on / that is not working for me.
I have a really simple docker-compose.yml, as I have shrunk it down in order to debug the issue; it consists of the following...
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - /var/www/nginx/config/:/etc/nginx/conf.d/:ro
    deploy:
      mode: global
The volume comes back with the error "invalid mount config for type "bind": bind source path does not exist". Obviously, when I remove the volumes line it works perfectly; however, I 100% need this line.
Inside the server I can navigate perfectly well to /var/www/nginx/config/ and my config files exist within. Same on the other side: if I run docker exec -it <container> /bin/bash and navigate to /etc/nginx/conf.d, I can get there perfectly fine, which is why I'm posting on here.
I've looked at other posts and done what other people said fixed it, such as:
Adding quotes to the volume
Removing the trailing slash from the path
Restarting the server
Restarting Docker
But nothing seems to be working.
The potential issue could be that not all the nodes in your swarm cluster have the directory (/var/www/nginx/config/) created. Since in swarm a service's tasks can be placed on any of the available nodes (unless you add a placement constraint), you might be seeing this error.
Make sure that this directory is created on all 3 nodes.
Additionally, you can also have a look here for defining configs.
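A minimal sketch of the configs approach (compose file version 3.3+, deployed with docker stack deploy; the file name default.conf is an assumption - configs are distributed by the swarm managers, so the source file only has to exist on the node where you run the deploy):

version: '3.3'
services:
  nginx:
    image: nginx:latest
    configs:
      - source: nginx_conf
        target: /etc/nginx/conf.d/default.conf
    deploy:
      mode: global
configs:
  nginx_conf:
    file: ./default.conf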
I have been trying to use docker-compose to spin up a postgres container, with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!) - one container dies or is killed, another takes it place without losing previously persisted data.
As I understand "named volumes" are supposed to replace "Data Volume Containers".
However, so far either one of two things happen:
The postgres container fails to start up, with the error message "ERROR: Container command not found or does not exist."
I achieve persistence for only that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. Which would be fine, if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up and my data is in the state where it was left with the down command).
In general, a few things:
Don't use the PGDATA environment option with the official postgres image
If using Spring Boot (like I was) with Docker Compose (as I was) and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed as-is to Spring, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things incorrectly configured initially, but I suspect the killer was point 2 above - it caused my app, when running in a container, to use an in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly - until container shutdown. And, when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
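For illustration, a sketch of that gotcha in the compose environment block (the variable name SPRING_PROFILES_ACTIVE and the profile name docker are assumptions):

environment:
  - SPRING_PROFILES_ACTIVE="docker"   # wrong: the quotes become part of the value, so Spring looks for a profile literally named "docker" (quotes included)
  - SPRING_PROFILES_ACTIVE=docker     # right: the bare profile name is passed through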
Live and learn I guess (but I do feel a LOT of egg on my face).
You need to tell Compose that it should manage creation of the volume; otherwise it assumes the volume should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
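With that in place, Compose creates the volume on first up and reuses it afterwards. A quick way to verify (Compose prefixes the volume name with the project name, assumed here to be myapp):

docker-compose up -d
docker volume ls        # shows e.g. myapp_myappdb
docker-compose down     # removes containers and networks, keeps named volumes
docker-compose up -d    # previous data is still there
# docker-compose down -v would also remove the named volume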