Docker compose and REST config

Suppose I have a docker compose setup made up of Elasticsearch and a Node.js app that uses Elasticsearch. Something like:
web_app:
  image: my_nodejs
  links:
    - ess
  ...
ess:
  image: elasticsearch
  expose:
    - "9200"
    - "9300"
I need to ensure that a particular index exists in my Elasticsearch instance. For whatever reason (don't ask), I have to add this index via a REST call to the running Elasticsearch container. What is the best way to do this?
I could run a short-term job just to make a REST call to create the index, but then I have to do monitoring and dependency work that docker compose does not support.
I prefer the idea of running the REST calls in a script in the ess image. Is that good practice? Am I missing something?

Running the create index script as part of the entrypoint script for the ess service would work. I like to do that kind of work during the build phase of the image, but it's a little more work to set that up.
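For example, a minimal wrapper-entrypoint sketch for the ess image (the wrapper file name, the index name my_index, the use of curl, and the stock entrypoint path are all assumptions and may differ between Elasticsearch image versions):

#!/bin/sh
# docker-entrypoint-wrapper.sh -- hypothetical wrapper around the image's stock entrypoint

# Start Elasticsearch in the background via the stock entrypoint (path is an assumption).
/docker-entrypoint.sh elasticsearch &

# Wait until the REST API responds, then create the index.
# Assumes curl is present in the image.
until curl -s -o /dev/null http://localhost:9200; do
  sleep 2
done
curl -s -X PUT http://localhost:9200/my_index

# Hand control back to the Elasticsearch process.
wait

You would build a small image FROM elasticsearch that copies this wrapper in and sets it as the ENTRYPOINT, then point the ess service at that image.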

Related

Question about Docker Compose Networking (Local Vs Production (ECS))

I have been researching a little bit about docker compose.
From what I understand,
services:
  api:
    build: ./api
  db:
    image: <someimage>
With something like this (adding some other missing options here), I should be able to access the db container from the api container using 'db' as the hostname.
This works on my local machine. However, I would like to know if this will still work on something like an ECS cluster.
Do I need to make any further changes in the code itself?
Example -> I might have this as an env variable in my api:
DB_URL=mysql://admin:12345@db/mydb
Do I need to change anything when I deploy it to an ECS cluster or will docker compose take care of it?
I have seen people using links and depends_on, but I am not quite clear on what they do yet. I understand that depends_on just tells Docker that it has to wait for another container to start up first, but links don't seem to do anything locally.
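For reference, the setup described above looks roughly like this when the connection string is injected as an environment variable (the image tag and the extra database credentials are illustrative assumptions):

services:
  api:
    build: ./api
    environment:
      # "db" resolves through the compose network's built-in DNS
      - DB_URL=mysql://admin:12345@db/mydb
  db:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=12345
      - MYSQL_DATABASE=mydb

Keeping the hostname inside an injected variable means the application code does not need to change between environments; only the value you pass in does.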

Docker - using labels to influence the start-up sequence

My Django application uses Celery to process tasks on a regular basis. Sadly this results in having 3 containers (App, Celery Worker, Celery Beat), each of them with its very own startup shell script instead of a Docker entrypoint script.
So my idea was to have a single entrypoint script which is able to process the labels I enter in my docker-compose.yml. Based on the labels, the container should start as an App, Celery Beat or Celery Worker instance.
I have never done such an implementation before, but I am asking myself if this is even possible, as I saw something similar in the traefik load balancer project, see e.g.:
loadbalancer:
  image: traefik:1.7
  command: --docker
  ports:
    - 80:80
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - frontend
    - backend
  labels:
    - "traefik.frontend.passHostHeader=false"
    - "traefik.docker.network=frontend"
  ...
I didn't find any good material on this on the web, on how to implement such a scenario or whether it's even possible the way I imagine it. Has somebody done it like that before, or should I rather stay with 3 separate shell scripts, one for each service?
You can access the labels from within the container, but it does not seem to be as straightforward as other options and I do not recommend it. See this StackOverflow question.
If your use cases (== entrypoints) are more different than alike, it is probably easier to use three entrypoints or three commands.
If your use cases are more similar, then it is easier and clearer to simply use environment variables.
Another nice alternative that I like to use is to create one entrypoint shell script that accepts arguments - so you have one entrypoint, and the arguments are provided using the command definition.
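A minimal sketch of that argument-based approach (the script name, the project name myproject and the exact commands are assumptions):

#!/bin/sh
# entrypoint.sh -- one entrypoint, role chosen by the first argument
case "$1" in
  worker) exec celery -A myproject worker -l info ;;
  beat)   exec celery -A myproject beat -l info ;;
  app)    exec python manage.py runserver 0.0.0.0:8000 ;;
  *)      echo "unknown role: $1" >&2; exit 1 ;;
esac

Each service in docker-compose.yml would then use the same image and differ only in its command, e.g. command: ["worker"].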
Labels are designed to be used by the docker engine and other applications that work at the host or docker-orchestrator level, and not at the container level.
I am not sure how the traefik project implements that. If they use it, it should be entirely possible.
However, I would recommend using environment variables instead of docker labels. Environment variables are the recommended way to pass configuration parameters to a cloud-native app. Labels are more related to service metadata, so you can identify and filter specific services. In your scenario you could have something like this:
version: "3"
services:
celery-worker:
image: generic-dev-image:latest
environment:
- SERVICE_TYPE=celery-worker
celery-beat:
image: generic-dev-image:latest
environment:
- SERVICE_TYPE=celery-beat
app:
image: generic-dev-image:latest
environment:
- SERVICE_TYPE=app
Then you can use the SERVICE_TYPE environment variable in your docker entrypoint to launch the specific service.
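For instance, the entrypoint could branch on that variable, much like the argument-based approach mentioned earlier but keyed on the environment (a sketch; the project name and the exact commands are assumptions):

#!/bin/sh
# entrypoint.sh -- pick the process to run from SERVICE_TYPE
case "$SERVICE_TYPE" in
  celery-worker) exec celery -A myproject worker -l info ;;
  celery-beat)   exec celery -A myproject beat -l info ;;
  app)           exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 ;;
  *)             echo "SERVICE_TYPE not set or unknown: $SERVICE_TYPE" >&2; exit 1 ;;
esac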
However (again), there is nothing wrong with having 3 different docker images. In fact, that's the idea of containers (and microservices). You encapsulate the processes in images and instantiate them in containers. Each one of them will have different purposes and lifecycles. For development purposes, there is nothing wrong with your implementation. But in production, I would recommend separating the services into different images. Otherwise, you have big images, only using a third of the functionality in each service, and hard-coupling the lifecycles of the services.

Make docker image as base for another image

Now I have built a simple GET API to access this database: https://github.com/ghusta/docker-postgres-world-db
This API takes a country code and returns the full record for that country from the database.
The structure is that the API is in a separate docker image, and the database is in another one.
So when the API's container starts, I need the database's container to be started first, and then the API should run against it.
So how to do that?
You can use Docker Compose, specifically the depends_on directive. This will cause Docker to start all dependencies before starting the dependent container.
Unfortunately there is no way to make it wait for the dependency to go live before starting any dependents. You'll have to manage that yourself with a wait script or similar.
The most straightforward solution would be to use docker compose along with a third-party script.
For example, your docker compose file might look like:
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "db"
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres
Where ./wait-for-it.sh is a third party script you can get from
https://github.com/vishnubob/wait-for-it
You can also use this script from
https://github.com/Eficode/wait-for
I would recommend tweaking the script to your needs if you want to (I did that).
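If you would rather not pull in a third-party script, a minimal wait loop along the same lines could look like this (a sketch; it assumes nc is available in the image and reuses the db:5432 address from the example above):

#!/bin/sh
# wait-for-db.sh -- block until the database port accepts connections, then run the given command
until nc -z db 5432; do
  echo "waiting for db..." >&2
  sleep 1
done
exec "$@"

It would be invoked the same way, e.g. command: ["./wait-for-db.sh", "python", "app.py"].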
P.S:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.

centos/apache/php/mongodb - can't get this to work together

It's been a few days since I've been trying to get a docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MySQL replaced by MongoDB.
Of course I started by looking on Docker Hub and trying to compose an image from others. I googled for configs. The simplest one couldn't get past the stage of setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, though the mentioned variables were set in the yml.
I tried to start with just the centos/mongodb image, install apache, php and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run apache within a docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
but I can't attach to the container and can't reach localhost with the default settings, though the compose stage apparently goes OK.
Does anyone, by chance, have a working set of Dockerfiles / docker-compose?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, write a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services in one container but to have a single container for a single service. All Docker tools are aligned with this.
For your LAMP stack to start, you just have to download docker-compose, create a docker-compose.yml file with the 3 services defined, and run docker-compose up.
Docker compose is an orchestrating tool for containers, suited for a single machine.
You should take at least a short tour of this tool; just as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    # .. here goes apache config ...
  db:
    image: mongo
    # .. here goes mongo config ...
  php:
    image: php
    # .. here goes php config ...
After you start this with docker-compose up, a network is created automatically for you and all services join it. They will see each other under their service names (let's say, to connect to the database from php you would use db as the host name).
To connect to this stack from the host PC, you will need to expose ports explicitly.
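For example, extending the sample above with explicit port mappings (the host ports and the container ports are assumptions; check each image's documentation for its actual defaults):

services:
  apache:
    image: bitnami/apache:latest
    ports:
      - "8080:8080"   # host:container
  db:
    image: mongo
    ports:
      - "27017:27017" # expose MongoDB to the host, e.g. for debugging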

docker-compose conditionally build containers

Our team is new to running a micro-service ecosystem and I am curious how one would achieve conditionally starting docker containers from a compose file, or from another variable-based script.
An example use-case would be:
Doing front-end development that depends on a few different services. We will label those DockerA through DockerD.
Dependency Matrix
Feature1 - DockerA
Feature2 - DockerA and DockerB
Feature3 - DockerA and DockerD
I would like to be able to run something like the following
docker-compose --feature1
or
magic-script -service DockerA -service DockerB
Basically, I would like to run the command to conditionally start the APIs that I need.
I am already aware of using various mock servers for UI development, but want to avoid them.
Any thoughts on how to configure this?
You can stop all services after creating them and then selectively start them one by one. E.g.:
version: "3"
services:
web1:
image: nginx
ports:
- "80:80"
web2:
image: nginx
ports:
- "8080:80"
docker-compose up -d
Creating network "composenginx_default" with the default driver
Creating composenginx_web2_1 ... done
Creating composenginx_web1_1 ... done
docker-compose stop
Stopping composenginx_web1_1 ... done
Stopping composenginx_web2_1 ... done
Now any service can be started using, e.g.,
docker-compose start web2
Starting web2 ... done
Also, when using linked services, there's the scale command, which can change the number of running containers for a service (it can add containers without a restart).
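If you want something closer to the magic-script idea from the question, a thin wrapper that maps feature names to service lists and starts only those could be a starting point (the feature-to-service mapping below is hypothetical):

#!/bin/sh
# up-feature.sh -- start only the services a given feature needs
case "$1" in
  feature1) SERVICES="docker_a" ;;
  feature2) SERVICES="docker_a docker_b" ;;
  feature3) SERVICES="docker_a docker_d" ;;
  *) echo "usage: $0 feature1|feature2|feature3" >&2; exit 1 ;;
esac
# docker-compose up accepts service names and starts only those services
# (plus anything they depend on).
exec docker-compose up -d $SERVICES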
