Make a Docker image the base for another image - docker

I have built a simple GET API to access this database: https://github.com/ghusta/docker-postgres-world-db
The API takes a country code and returns the full record for that country from the database.
The structure is that the API is in one Docker image and the database is in another.
So when the API's container starts, I need it to start the database's container first and only then start running itself on top of it.
How can I do that?

You can use Docker Compose, specifically the depends_on directive. This makes Docker start all dependencies before starting a container.
Unfortunately, there is no built-in way to make it wait for a dependency to become ready before starting its dependents. You'll have to manage that yourself with a wait script or similar.

The most common solution is to use Docker Compose together with a third-party wait script.
For example, your docker-compose.yml might look like:
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres
Here ./wait-for-it.sh is a third-party script you can get from
https://github.com/vishnubob/wait-for-it
You can also use the script from
https://github.com/Eficode/wait-for
I would recommend tweaking the script to your needs if you want to (I did that).
P.S.:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
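As a sketch of that retry approach (in Python, since the example app above is started with python app.py): the host/port names are assumptions, and a plain TCP probe only proves the port is open, so a real app would wrap the actual database connect call in the same loop.

```python
import socket
import time

def wait_for_service(host: str, port: int, timeout: float = 30.0, delay: float = 0.5) -> bool:
    """Retry a TCP connection until the service accepts it or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful TCP connect only proves the port is open; a real
            # app would retry the actual database handshake the same way.
            with socket.create_connection((host, port), timeout=delay):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

The same loop belongs around the real connection call (for example, the database driver's connect), both at startup and whenever an established connection drops.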

Related

Is it necessary to rebuild the container to change ports, or is stop/start enough?

I have a compose file with four services. I need to open one of them to the outside by setting ports.
After changing the .yml file, do I need to 'rebuild the container' (docker-compose down/up) or do I just need to stop/start (docker-compose stop/start)?
Specifically, what I need to make accessible to the outside is a Postgres server. This is my current postgres service definition in the .yml:
mydb:
  image: postgres:9.4
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I think I just need to change it to:
mydb:
  image: postgres:9.4
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I'm worried about losing the data on the 'db-data' volume, or the connection to the other services, if I use down/up.
Also, there are 3 other services specified in the .yml file. If it is necessary to rebuild (without losing data in db-data, of course), I don't want to touch these other containers. In that case, which would be the steps?
1. First, rebuild the 'mydb' container with 'docker run' (could you provide me the right command, please?)
2. Modify the .yml as stated before, just adding the ports
3. Perform a simple docker-compose stop/start
Could you help me, please?
If you're only changing settings like ports:, it is enough to re-run docker-compose up -d. Compose will figure out which containers differ from their current settings, and destroy and recreate only those specific containers.
If you're changing a Dockerfile or your application code you may specifically need to docker-compose build your application or use docker-compose up -d --build. But you don't specifically need to rebuild the images if you're only changing runtime settings like ports:.
docker-compose down tears down your entire container stack. You don't need it for routine rebuilds or container updates; it's useful when you intentionally want to shut down the whole stack and free up host ports, memory, and other resources.
docker-compose stop leaves the containers in an unusual state of existing but without a running process. You almost never need this. docker-compose start restarts containers in this unusual state, and you also almost never need it.
You have to recreate it.
For that reason, the best practice is to map all mount points and resources externally, so you can recreate the container (with changed parameters) without any loss of data.
In your scenario I see that you put all the data in an external Docker volume, so I think you can recreate it with changed ports safely.

Docker - using labels to influence the start-up sequence

My Django application uses Celery to process tasks on a regular basis. Sadly, this results in three containers (app, Celery worker, Celery beat), each of them having its own startup shell script instead of a Docker entrypoint script.
So my idea was to have a single entrypoint script that can process the labels I set in my docker-compose.yml. Based on the labels, the container should start as an app, Celery beat, or Celery worker instance.
I have never done such an implementation before, and I am wondering whether it is even possible, as I saw something similar in the Traefik load balancer project, see e.g.:
loadbalancer:
  image: traefik:1.7
  command: --docker
  ports:
    - 80:80
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - frontend
    - backend
  labels:
    - "traefik.frontend.passHostHeader=false"
    - "traefik.docker.network=frontend"
  ...
I haven't found any good material on the web about this, on how to implement such a scenario, or whether it's even possible the way I imagine it. Has somebody done it like that before, or should I rather stay with 3 single shell scripts, one for each service?
You can access the labels from within the container, but it does not seem to be as straightforward as other options, and I do not recommend it. See this StackOverflow question.
If your use cases (== entrypoints) are more different than alike, it is probably easier to use three entrypoints or three commands.
If your use cases are more similar, then it is easier and clearer to simply use environment variables.
Another nice alternative that I like to use is to create one entrypoint shell script that accepts arguments: you have one entrypoint, and the arguments are provided using the command definition.
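For that argument-based variant, the compose file could pass the role as a command argument. A minimal sketch, assuming the image and script names from the answer below:

```yaml
services:
  celery-worker:
    image: generic-dev-image:latest
    command: ["./entrypoint.sh", "celery-worker"]
  app:
    image: generic-dev-image:latest
    command: ["./entrypoint.sh", "app"]
```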
Labels are designed to be used by the docker engine and other applications that work at the host or docker-orchestrator level, and not at the container level.
I am not sure how the traefik project is using that implementation. If they use it, it should be totally possible.
However, I would recommend using environment variables instead of Docker labels. Environment variables are the recommended way to pass configuration parameters to a cloud-native app. Labels are more related to service metadata, so you can identify and filter specific services. In your scenario you can have something like this:
version: "3"
services:
  celery-worker:
    image: generic-dev-image:latest
    environment:
      - SERVICE_TYPE=celery-worker
  celery-beat:
    image: generic-dev-image:latest
    environment:
      - SERVICE_TYPE=celery-beat
  app:
    image: generic-dev-image:latest
    environment:
      - SERVICE_TYPE=app
Then you can use the SERVICE_TYPE environment variable in your docker entrypoint to launch the specific service.
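A minimal sketch of such an entrypoint (written to a file here so the dispatch can be inspected and exercised; the celery and manage.py invocations are assumptions about the project layout):

```shell
# Sketch of a single entrypoint that dispatches on SERVICE_TYPE.
# The celery and manage.py commands are assumptions about the project layout.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
set -e
case "$SERVICE_TYPE" in
  celery-worker) exec celery -A myproject worker --loglevel=info ;;
  celery-beat)   exec celery -A myproject beat --loglevel=info ;;
  app)           exec python manage.py runserver 0.0.0.0:8000 ;;
  *) echo "Unknown SERVICE_TYPE: '$SERVICE_TYPE'" >&2; exit 1 ;;
esac
EOF
chmod +x entrypoint.sh
```

Each service in the compose file then sets SERVICE_TYPE as shown above and uses this script as its entrypoint; using exec keeps the service process as PID 1 so it receives stop signals.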
However (again), there is nothing wrong with having 3 different Docker images. In fact, that's the idea of containers (and microservices): you encapsulate the processes in images and instantiate them in containers, each with a different purpose and lifecycle. For development purposes there is nothing wrong with your implementation, but in production I would recommend separating the services into different images. Otherwise you end up with big images, each service using only a third of the functionality, and you tightly couple the lifecycles of the services.

How can I start one container before other?

I need to start backend-container after database-container starts. How can I do that with docker-compose?
Use a depends_on clause on your backend-container. Something like this:
version: "3.7"
services:
  web:
    build: .
    depends_on:
      - db
  db:
    image: postgres
Here is the documentation about it.
Have fun!
You should look into the depends_on configuration for docker compose.
In short, you should be able to do something like:
services:
  database-container:
    # configuration
  backend-container:
    depends_on:
      - database-container
    # configuration
The depends_on field will work with docker-compose, but you will find it is not supported if you upgrade to swarm mode. It also guarantees the database container is created, but not necessarily ready to receive connections.
For that, there are several options:
Let the backend container fail and configure a restart policy. This is ugly, leads to false errors being reported, but it's also the easiest to implement.
Perform a connection from your app with a retry loop, a sleep between retries, and a timeout in case the database doesn't start in a timely fashion. This is usually my preferred method, but it requires a change to your application.
Use an entrypoint script with a command like wait-for-it.sh that waits for a remote resource to become available, and once that command succeeds, launch your app. This doesn't cover all the scenarios as a complete client connection, but can be less intrusive to implement since it only requires changes to an entrypoint script rather than the app itself.
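A variant of the wait-script idea that needs no extra script is a Compose healthcheck combined with a depends_on condition (supported in Compose file format 2.1 and again in the newer Compose Specification, but not in classic v3 swarm mode). The pg_isready check below is a sketch:

```yaml
services:
  backend-container:
    build: .
    depends_on:
      database-container:
        condition: service_healthy   # start only after the healthcheck passes
  database-container:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```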

How can I link an image-created volume with a docker-compose-specified named volume?

I have been trying to use docker-compose to spin up a postgres container with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!): one container dies or is killed, and another takes its place without losing previously persisted data.
As I understand it, "named volumes" are supposed to replace "data volume containers".
However, so far one of two things happens:
1. The postgres container fails to start up, with the error message "ERROR: Container command not found or does not exist."
2. I achieve persistence for only that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. That would be fine if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: allways
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
OK, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up, and my data is in the state it was left in by the down command).
In general, a few things:
Don't use the PGDATA environment option with the official postgres image.
If using Spring Boot (like I was) with Docker Compose (as I was), and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things configured incorrectly at first, but I suspect the killer was point 2 above: it caused my app, when running in a container, to use the in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly until container shutdown. And when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active-profile parameter was correctly set in the IDE launcher (no quotes!).
Live and learn, I guess (but I do feel a LOT of egg on my face).
You need to tell Compose that it should manage creation of the volume; otherwise it assumes the volume already exists on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external

Docker compose and REST config

Suppose I have a Docker Compose service made up of Elasticsearch and a Node.js app that uses Elasticsearch. Something like:
web_app:
  image: my_nodejs
  links:
    - ess
  ...
ess:
  image: elasticsearch
  expose:
    - "9200"
    - "9300"
I need to ensure that a particular index exists in my elasticsearch instance. For whatever reason (don't ask), I have to add this index via a rest call to the running elasticsearch container. What is the best way to do this?
I could run a short-term job just to make a REST call to create the index, but then I have to do monitoring and dependency work that Docker Compose does not support.
I prefer the idea of running the REST calls in a script in the ess image. Is that good practice? Am I missing something?
Running the create index script as part of the entrypoint script for the ess service would work. I like to do that kind of work during the build phase of the image, but it's a little more work to set that up.
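A sketch of the entrypoint variant (written to a file here so it can be exercised): the index name "my_index" is an assumption, and ES_CMD stands in for however the image normally launches Elasticsearch.

```shell
# Sketch of an entrypoint wrapper for the ess image that creates the
# index over REST once Elasticsearch answers. "my_index" is an assumption,
# and ES_CMD stands in for the image's normal Elasticsearch launch command.
cat > ess-entrypoint.sh <<'EOF'
#!/bin/sh
set -e
"${ES_CMD:-elasticsearch}" &                  # start Elasticsearch in the background
es_pid=$!
until curl -sf "http://localhost:9200/" >/dev/null 2>&1; do
  sleep 1                                     # wait for the REST port to answer
done
curl -sf -X PUT "http://localhost:9200/my_index"   # create the index via REST
wait "$es_pid"                                # stay attached to the ES process
EOF
chmod +x ess-entrypoint.sh
```

The wait at the end keeps the container's main process tied to Elasticsearch, so the container exits when Elasticsearch does.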
