Access container_name in Dockerfile (from docker-compose) - docker

I have set up a docker-compose project which creates multiple images:
cache_server:
  image: current_timezone/full-supervisord-cache-server:1.00
  container_name: renamed-varnish-cache
  networks:
    - network_frontend
  build:
    context: "./all-services/"
    dockerfile: "./cache-server/Dockerfile.cacheserver.varnish"
    args:
      - DOCKER_CONTAINER_USERNAME=username
  ports:
    - "6081:6081"
    - "6082:6082"
When I run docker-compose -f file1.yml -f file2.override.yml up I then get the containers; in the case of the one above, the container will be named renamed-varnish-cache.
In the corresponding Dockerfile (./nginx-proxy/Dockerfile.proxy.nginx) I want to be able to use the container_name property defined in the docker-compose.yml shown above.
When the containers are created I want to update the Varnish configuration inline inside the Dockerfile: RUN sed -i "s|webserver_container_name|renamed-varnish-cache|g" /etc/varnish/default.vcl
For instance:
backend webserver_container_name {
    .host = "webserver_container_name";
    .port = "8080";
}
To (I anticipate I will have to replace the - with _ for the backend name):
backend renamed_varnish_cache {
    .host = "renamed-varnish-cache";
    .port = "8080";
}
Is there a way to receive the docker-compose named items as variables inside Dockerfile?

In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
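For the compose file in the question, the build: block corresponds roughly to this docker build invocation (a sketch; the paths and values are taken from the question, and the dockerfile: path is resolved relative to the context):
docker build \
  --build-arg DOCKER_CONTAINER_USERNAME=username \
  -t current_timezone/full-supervisord-cache-server:1.00 \
  -f ./all-services/cache-server/Dockerfile.cacheserver.varnish \
  ./all-services/
Notice that container_name: appears nowhere in it.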
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
version: '3.8'
services:
  web:
    build: .
    image: my/django-app
  worker:
    image: my/django-app
    command: celery worker ...
Now with this stack you can run docker-compose build to build the one image, and then run docker-compose up to launch both containers from that image. (During the build you can't know what the container names will be, and there will be two container names, so you can't just use one in the Dockerfile.)
At a design level, this means that you often can't include configuration-type settings in the image itself (other containers' hostnames, user IDs for host-shared filesystems). If your application lets you specify these things as environment variables, that's the easiest option. You can use bind mounts (volumes:) to inject whole config files. If neither of these works for you, you can use an entrypoint script to rewrite the config file.
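A minimal sketch of that entrypoint-script approach, applied to the Varnish case from the question (the variable name BACKEND_HOST and the script name are assumptions, not fixed API):
#!/bin/sh
# docker-entrypoint.sh (hypothetical): substitute the backend host name into
# the Varnish config at container start, then hand off to the real command.
set -e
if [ -n "$BACKEND_HOST" ]; then
  sed -i "s|webserver_container_name|$BACKEND_HOST|g" /etc/varnish/default.vcl
fi
exec "$@"
The Dockerfile would COPY this script in and declare it as the ENTRYPOINT, and the compose file would set something like environment: - BACKEND_HOST=renamed-varnish-cache on the service, so the same image works for any container name.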


Can we use the same image for running multiple docker containers?

I am new to Docker and started building containers. I came across docker-compose for building and running multiple containers.
Now, I have a problem with docker-compose containers.
I created two YAML files, producer.yaml and consumer.yaml.
# producer.yaml
version: "3"
services:
  mymongo:
    image: <image-url>
    ports:
      - "6666:6666"
  mynodeapp:
    build:
      context: <Dockerfile path>
    ports:
      - "2222:2222"
# end producer.yaml
# consumer.yaml
version: "3"
services:
  mymongo:
    image: <same-image-url>
    ports:
      - "7777:6666"
  mynodeapp:
    build:
      context: <Dockerfile path>
    ports:
      - "3333:3333"
# end consumer.yaml
Now, when I run docker-compose -f producer.yaml up, the producer containers are up and running. But if I then run docker-compose -f consumer.yaml up, the producer containers are terminated and the consumer containers start running instead. How can I make sure that the image used will be separate for both containers?
To run multiple files you should run all of them together:
docker-compose -f producer.yaml -f consumer.yaml up
When you run individual files, Compose will try to bring up all the services, and it will not kill any existing services as long as there is no service name conflict.
In your case you have the same service names for both your consumer and producer, which is wrong: when you merge the two files, Compose will use only one of the definitions. Your service names in the YAML should be mynodeapp-producer and mynodeapp-consumer if you want them to run in parallel.
If you expect a different MongoDB for each as well, you should configure those service names to be different too.
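A sketch of what the merged file with renamed services could look like (the image URLs and context paths are the question's own placeholders):
# combined.yaml (hypothetical)
version: "3"
services:
  mymongo-producer:
    image: <image-url>
    ports:
      - "6666:6666"
  mymongo-consumer:
    image: <image-url>
    ports:
      - "7777:6666"
  mynodeapp-producer:
    build:
      context: <Dockerfile path>
    ports:
      - "2222:2222"
  mynodeapp-consumer:
    build:
      context: <Dockerfile path>
    ports:
      - "3333:3333"
Then a single docker-compose -f combined.yaml up brings both sets up in parallel.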

docker service with compose file single node and local image

So I need rolling updates with Docker on my single-node server. Until now I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading the web, Docker Swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp_app:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work since it doesn't pick up the config from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Images are heavy to carry around, I don't want my image to be public, and after all, the image is already here, so why should I move it over the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file so it would not be DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers, it is sad that we have to dive into DevOps tools to achieve such common features as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. This image is only visible to your single node, which is beneficial in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image and benefit from image-based features (updates, rollbacks, etc.).
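Putting it together on the node (the stack name here is illustrative):
docker swarm init                                    # once, to turn this node into a single-node swarm
docker stack deploy -c docker-compose.yml myapp-staging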
I do have a side note on your stack file, though: you are using the same env file for both services. Please mind that Swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also as a side note, this solution is only feasible on a single-node cluster. If you scale your cluster, you will have to use a registry, and registries don't have to be public: you can deploy a private registry on your cluster so that only your nodes can access it (or you can make it public); the accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of a prebuilt image, you can use the Dockerfile directly. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The method above should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you trigger a rolling update, Docker Swarm checks whether the image used by the service has changed; if so, it schedules the service update based on the update criteria that have been configured and carries it out.
What happens if there is no change to the image? Docker simply will not apply the rolling update. Technically you can specify the --force flag to force-update the service, but it will just redeploy the service.
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file used for the swarm. You can secure the registry with SSL, user credentials, and firewall restrictions, which is up to you. Refer to this for more details on deploying a Docker registry server.
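A minimal sketch of that (the port and image names are illustrative):
docker run -d -p 5000:5000 --name registry registry:2   # run a local registry container
docker tag my-app:latest localhost:5000/my-app:latest   # tag the image for that registry
docker push localhost:5000/my-app:latest                # push it so the swarm can pull it by name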
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. Since you have included a build instruction, an image name is mandatory, as docker-compose otherwise doesn't know what to name the built image (reference).
A registry server is needed if you are going to deploy the application across multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough, but the private-registry approach is the recommended one.
My recommendation is: don't club all the services into a single docker-compose file. The reason is that when you deploy/destroy using a docker-compose file, all the services are taken down; this is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file in the compose file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also, just an update: a stack is just a way to group services in Swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ARG a
ARG b
ARG c
ENV a=$a b=$b c=$c  # a, b and c must be supplied with --build-arg when building the image
ENTRYPOINT ./file-to-run
And you may need to run:
docker-compose build
docker-compose push   # optional: needed to push the image into the registry, in case one is used
docker stack deploy -c docker-compose.yml <stackname>
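To verify the deployment afterwards (the stack and service names are illustrative):
docker service ls                    # list all services in the swarm
docker service ps <stackname>_app    # show the tasks behind the app service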
NOTE:
Even though you can create the services as mentioned above by @M.Hassan, I've explained the ideal recommended way.

docker compose config for multiple instances of one image with different arguments

I have a CLI application that can run two services based on the input argument:
1- app serve // to run a web server
2- app work // to run a long-running background worker
They share the same code. What do I need for deployment?
A: two separate containers, or
B: two processes in the same container?
And what would the docker-compose config be?
If you want to have one process per container, I would suggest having a generic image (and Dockerfile) which can run as worker or as server.
The Dockerfile should set the entrypoint to the app, e.g. ENTRYPOINT ["/path_to_my_app/myapp"], but not the CMD. When the user invokes the command from the command line, they can start the worker with docker run IMAGENAME work or the server with docker run IMAGENAME serve.
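A minimal sketch of such a Dockerfile (the base image and copy step are assumptions, not from the question):
FROM alpine:3.18
COPY myapp /path_to_my_app/myapp
ENTRYPOINT ["/path_to_my_app/myapp"]
# deliberately no CMD: the argument ("serve" or "work") is supplied at run time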
To define both services in a compose file, you need to override the command field for each service.
version: '3'
services:
  web:
    build: ./docker  # common Dockerfile
    image: IMAGENAME
    ports:
      - "8090:8090"
    command: ["serve"]
  worker:
    build: ./docker  # common Dockerfile
    image: IMAGENAME  # reuse image
    ports:
      - "8091:8091"
    command: ["work"]
The benefit of this solution over a solution with two separate images, is a gain in maintainability. Since there is only one Dockerfile and one image, both services should be always compatible.
After some googling, I found that, as @sauerburger said in the comments, it is better to have one process per container.
But to have multiple containers, each running my main app with a specific argument (i.e. one for the main app and one for the worker), I need multiple Dockerfiles; then in my docker-compose file I can reference them separately.
But how do you have different Dockerfiles for one project?
The preferred solution is to have a docker directory in which each part has its own folder. For my application it looks like this:
- docker
  - web
    - Dockerfile
  - worker
    - Dockerfile
Then in each Dockerfile I have a common entrypoint and a distinct CMD:
- in the web Dockerfile:
  - ENTRYPOINT ["/path_to_my_app/myapp"]
  - CMD ["web"]
- in the worker Dockerfile:
  - ENTRYPOINT ["/path_to_my_app/myapp"]
  - CMD ["worker"]
After doing this, my docker-compose file will reference them like this:
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build: ./docker/web
    ports:
      - "8090:8090"
  worker:
    # will build ./docker/worker/Dockerfile
    build: ./docker/worker
    ports:
      - "8091:8091"

Start particular service from docker-compose

I am new to Docker and have a docker-compose.yml containing many services, and I need to start one particular service. My docker-compose.yml file:
version: '2'
services:
  postgres:
    image: ${ARTIFACTORY_URL}/datahub/postgres:${BUILD_NUMBER}
    restart: "no"
    volumes:
      - /etc/passwd:/etc/passwd
    volumes_from:
      - libs
    depends_on:
      - libs
  setup:
    image: ${ARTIFACTORY_URL}/setup:${B_N}
    restart: "no"
    volumes:
      - ${HOME}:/usr/local/
I am able to run the docker-compose.yml file using the command:
docker-compose -f docker-compose.yml up -d --no-build
But I need to start only the "setup" service in the docker-compose file. How can I do this?
It's very easy:
docker compose up <service-name>
In your case:
docker compose -f docker-compose.yml up -d setup
To stop, you don't need to specify the service name:
docker compose down
will do.
A little side note: if you are in the directory where the docker-compose.yml file is located, docker compose will use it implicitly; there's no need to add it as a parameter.
You need to provide it in the following situations:
the file is not in your current directory
the file name is different from the default one, e.g. myconfig.yml
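Back to starting a single service, a short sketch: docker compose up also supports --no-deps if you don't want the service's depends_on dependencies started along with it.
docker compose up -d setup            # starts "setup" plus anything it depends_on
docker compose up -d --no-deps setup  # starts "setup" alone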
As far as I understand your question, you have multiple services in docker-compose but want to deploy only one.
docker-compose should be used for multi-container Docker applications. From the official docs:
"Compose is a tool for defining and running multi-container Docker applications."
IMHO, you should run your service image separately with the docker run command.
PS: If you are asking about recreating only the container whose image has changed among the multiple services in your docker-compose file, then docker-compose handles that for you.

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a dockerized project's path:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) that saves time and avoids 500-character-long command lines (and also starts multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically, docker-compose build will read your docker-compose.yml, look for all services containing a build: statement, and run a docker build for each one.
Each build: can specify a Dockerfile, a context, and args to pass to docker build.
To conclude with an example docker-compose.yml file :
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service needs an image to be built. The equivalent docker build command would look like:
docker build -t myproject_web -f web/Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a better way to use docker than just a docker command.
If the question here is whether the docker-compose build command will build a single zip-like bundle containing multiple images, which would otherwise have been built separately with the usual Dockerfiles, then that thinking is wrong.
docker-compose build will build individual images, by going through each service entry in docker-compose.yml.
With the docker images command, you can then see all the individual images that were saved as well.
The real magic is docker-compose up.
This one will basically create a network of interconnected containers that can talk to each other using the service name as a hostname.
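As a small sketch of that behaviour (the service names are illustrative), the web container can reach the database container simply at the hostname database:
version: '3'
services:
  web:
    build: .
    environment:
      - DB_HOST=database  # "database" resolves to the database service's container
  database:
    image: mariadb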
Adding to the first answer...
You can give the image name and container name under the service definition.
E.g., for the service called 'web' in the docker-compose example below, you can give the image name and container name explicitly, so that docker does not have to use the defaults.
Otherwise the image name that docker will use will be the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name that will be generated when the docker build command is executed.
e.g.
image: mywebserviceimage
container_name: my-webServiceImage-Container
example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceimage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers:
with docker build, the option is --cache-from <image>
with docker-compose, there is a cache_from tag in the build section.
Unfortunately, up until now, at this level, images made by one are not compatible with the other as a cache of layers (the IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduces an experimental feature, COMPOSE_DOCKER_CLI_BUILD, so that docker-compose uses the native Docker builder (therefore, images made by docker build can be used as a cache of layers for docker-compose build).
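For illustration, a sketch of both cache options plus the opt-in (the image name is a placeholder):
# plain docker build
docker build --cache-from myapp:latest -t myapp:latest .
# docker-compose.yml (compose file format 3.2+)
services:
  web:
    build:
      context: .
      cache_from:
        - myapp:latest
# opting in to the native builder, docker-compose v1.25.0 onward
COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build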
