Referencing services with Docker Compose - docker

I'm having trouble calling tools that are supposed to be provided by some of my Docker Compose services from my "main" (web) service. I have the following docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - postgres-db-volume:/data/postgres
  pdftk:
    image: mnuessler/pdftk
    volumes:
      - /tmp/manager:/work
  ffmpeg:
    image: jrottenberg/ffmpeg
    volumes:
      - /tmp/manager:/files
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - /tmp/manager:/files
    ports:
      - "8000:8000"
    depends_on:
      - db
      - pdftk
      - ffmpeg
volumes:
  postgres-db-volume:
I'm able to use db from web perfectly, but unfortunately not pdftk or ffmpeg (these are just command-line utilities that are undefined when I run web's shell):
manager$ docker-compose run web bash
Starting manager_ffmpeg_1
Starting manager_pdftk_1
root@16e4b755172d:/code# pdftk
bash: pdftk: command not found
root@16e4b755172d:/code# ffmpeg
bash: ffmpeg: command not found
How can I get pdftk and ffmpeg to be defined within the web service? Is depends_on not the appropriate directive? Should I be extending web's Dockerfile to call an entry-point script that installs the content found in the other two services (even though this'd seem counterproductive)?
I tried removing and rebuilding the web service after adding pdftk and ffmpeg, but that didn't solve it.
What can I do?
Thank you!

This looks like a misunderstanding of depends_on. It is only used to set a startup order for containers, for example: start the database before the web server.
If you want access to tools installed in other containers, you would have to open an SSH connection, for example:
ssh pdftk <your command>
But I would rather install the necessary tools into the web container image. Extending the Dockerfile to install both tools should do the trick.
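As an illustration, a minimal sketch of such a Dockerfile extension, assuming the web image is Debian/Ubuntu-based; the python:3 base, the requirements.txt step, and the exact package names are assumptions (e.g. pdftk may be packaged as pdftk-java on newer Debian releases):
# Hypothetical Dockerfile for the web service (Debian-based Python image assumed)
FROM python:3
# install the command-line tools the app shells out to
RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg pdftk && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
After that, running docker-compose build web should make both binaries available inside the web container.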

To access the "tools" you do not need to install SSH, this is most probably pretty complicated and not wanted. The containers are not "merged into one" when using depends_on.
Depends_on is even less then starting order, its more "ruff container start order. E.g. eventhough app depends on DB, it will happen, that the db container process did not yet get fully started while app has already been started - depends_on right now is, in most cases, rather used to notify when a container needs re-initialization when a other container he depends on e.g. does get recreated.
Other then that, start your containers and mount the docker-socket into them. Add this:
services:
  web:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Now, in the web container (where the docker CLI needs to be installed), you can run:
docker exec pdftk <thecommand>
That is the usual way to run commands on other services.
You can of course use HTTP/API-based implementations instead; in that case you do not need to expose any ports or mount the socket, and you can reach the services by their service name:
ping pdftk or ping ffmpeg

Edit: the method described below does not work for the OP's question. I'm still leaving it here as educational information.
Besides the options described by @opHASnoNAME, you could try declaring a container volume for pdftk and ffmpeg and using the binaries directly, like so:
ffmpeg:
  volumes:
    - /usr/local/bin
and mount this on your web container:
web:
  volumes_from:
    - ffmpeg
Please note that this approach has some limitations:
the path /usr/local/bin mounted from ffmpeg should not already exist in web, otherwise you may need to mount only the individual files.
in web, /usr/local/bin must be in your $PATH.
since this is a kind of hotlinking of binaries, it can fail due to different Linux distributions, missing shared libraries, etc., so it really only works for standalone, statically linked binaries.
all containers using volumes and volumes_from have to be deployed on the same host.
But I am still using this here and there, e.g. with the docker or docker-compose binaries.
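Pulling the fragments above together, a complete version 2 sketch of the approach might look like this; as noted in the edit, it will not actually help with ffmpeg or pdftk, because those binaries depend on shared libraries that are not shared along with the volume, and the path where a given image keeps its binaries may differ:
# Sketch only: shares ffmpeg's /usr/local/bin with web via volumes_from
version: '2'
services:
  ffmpeg:
    image: jrottenberg/ffmpeg
    volumes:
      - /usr/local/bin   # anonymous container volume holding the binaries (path assumed)
  web:
    build: .
    volumes_from:
      - ffmpeg           # mounts ffmpeg's /usr/local/bin into web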

Related

How do I switch from docker-compose to Dockerfile?

I'm trying to run Magento 2 locally, so I built a Docker LAMP stack using docker-compose. This is the docker-compose.yml portion related to the php container:
php:
  image: 'docker.io/bitnami/php-fpm:7.4-debian-10'
  ports:
    - 9001:9000
  volumes:
    - './docker/php/php.ini:/opt/bitnami/php/etc/conf.d/custom.ini'
  networks:
    - 'web'
and it works great. The point is that the bitnami image doesn't seem to have cron pre-installed, and for the sake of simplicity it's probably easier to have it directly in the php container (so I can reach it through the Magento CLI functionality, e.g. bin/magento cron:install).
So I changed the php entry in the docker-compose file from image to build and added a separate Dockerfile written like this:
FROM docker.io/bitnami/php-fpm:7.4-debian-10
I didn't even add the RUN instructions for installing packages yet. What I'm expecting at the moment is that running docker-compose up -d --remove-orphans gives basically the same result. But it doesn't, because refreshing the homepage now gives me a 503 error that doesn't seem to leave any trace in the log files, so I'm a bit stuck.
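For reference, the change described above might look like the sketch below; the Dockerfile path is an assumption, and if the base image drops root privileges you may additionally need a USER root line before installing packages:
# docker-compose.yml (php service switched from image to build; path assumed)
php:
  build:
    context: .
    dockerfile: ./docker/php/Dockerfile
  ports:
    - 9001:9000
  volumes:
    - './docker/php/php.ini:/opt/bitnami/php/etc/conf.d/custom.ini'
  networks:
    - 'web'

# ./docker/php/Dockerfile (hypothetical location)
FROM docker.io/bitnami/php-fpm:7.4-debian-10
# USER root   # may be required if the base image runs as a non-root user
RUN apt-get update && \
    apt-get install -y --no-install-recommends cron && \
    rm -rf /var/lib/apt/lists/*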

Networking in Docker Compose file

I am writing a Docker Compose file for my web app. If I use links to connect services with each other, do I also need to include ports? And is depends_on an alternative to links? What is the best way to connect the services in a Compose file to one another?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

docker service with compose file single node and local image

So I need rolling-updates with docker on my single node server. Until now, I was using docker-compose but unfortunately, I can't achieve what I need with it. Reading the web, docker-swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the configuration from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Images are heavy to carry around, I don't want my image to be public, and after all, the image is already here, so why should I move it over the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file so it would not be DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers, it is sad that we have to dive into DevOps tools to achieve such common features as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory containing your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. This image is only visible to your single node, which is fine in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image and benefit from image-based deployments (updates, rollbacks, etc.).
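A minimal sketch of the resulting workflow on the single node (the stack name is illustrative):
# build the image locally, then deploy the stack that references it
docker build . -t my-app:latest
docker stack deploy -c docker-compose.yml myapp-staging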
I do have a side note on your stack file, though. You are using the same env file for both services; please mind that swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional please revise the location of your env files.
Also, as a side note, this solution is only feasible on a single-node cluster. If you scale your cluster you will have to use a registry, and registries don't have to be public: you can deploy a private registry on your cluster that only your nodes can access (or you can make it public); the accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of an image, you can use a Dockerfile directly there; please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in docker swarm. I would also like to clarify that you can host a private registry and use it instead of the public one.
Detailed Explanation:
When you try a rolling update, docker swarm checks whether there is a change in the image used for the service; if so, it schedules a service update based on the update criteria you have configured and carries it out.
Let's say there is no change to the image; then what happens? Docker simply will not apply the rolling update. Technically you can specify the --force flag to force an update of the service, but that will just redeploy the service.
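For example, a forced redeploy of an unchanged service looks like this (the service name is illustrative):
# redeploys the service's tasks even though the image has not changed
docker service update --force myapp-staging_app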
Hence, create a local registry, store the images in it, and use that image name in the docker-compose file used for the swarm. You can secure the registry using SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a Docker registry server.
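A rough sketch of such a local registry, assuming an insecure setup on the node itself (host, port, and tags are illustrative; a real setup should add TLS and authentication):
# run a registry container, then tag the image against it and push
docker run -d -p 5000:5000 --name registry registry:2
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
# the compose file would then use image: "localhost:5000/my-app:latest"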
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. Because you have a build instruction, the image name is mandatory as well, so that docker-compose knows what to name the built image.
A registry server is needed if you are going to deploy the application across multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough, but the private-registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file, because when you deploy/destroy using that file, all the services are taken down together. This is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases make sure the DB service is brought up before the other services.
Instead of specifying the env file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV variables to define the values.
Also just an update:
A stack is just a way to group services in swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ARG a
ARG b
ARG c
ENV a=$a
ENV b=$b
ENV c=$c
# a, b and c have to be supplied as build args when building the image, e.g. docker build --build-arg a="$a" ...
ENTRYPOINT ./file-to-run
And you may need to run:
docker-compose build
docker-compose push (optional; needed to push the image into the registry in case a registry is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned above by @M.Hassan, I've explained the ideal recommended way.

How do I keep a copy of the options I use to run an image in Docker?

I just figured out how to get swagger-ui up and running with Docker with my own openapi.json file using the following command:
docker run -p 80:8080 -e SWAGGER_JSON=/foo/openapi.json -v ~/source:/foo swaggerapi/swagger-ui
The openapi.json file is in source control and this could be run in lots of places.
Is there any way to make this command easy to rerun other than just putting it in a README? Can I use a Dockerfile for this? Or could I use docker-compose? The most important part is just to make it easy, and then later to make it easy to change/add options.
I also know I could use a bash script that I could just change, but I'm wondering if there's any Docker way to do it, and not a hack.
docker-compose is your perfect solution:
# docker-compose.yml
version: '3.7'
services:
  swagger:
    image: swaggerapi/swagger-ui
    ports:
      - "80:8080"
    environment:
      - SWAGGER_JSON=/foo/openapi.json
    volumes:
      - "~/source:/foo"
To run it, just run docker-compose up and you are all set.
I prefer using docker-compose for more complicated runs, to keep all the options in a YAML file; then all you need to start the container is docker-compose up.
For further options inside the application you can use an .env file.
I think it's the clearest way to keep containers running, and it doesn't require any special knowledge from the next users/developers of this environment.
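As one possible illustration of the .env suggestion (the variable name and value are taken from the example above; Compose reads an .env file placed next to docker-compose.yml and substitutes the variables):
# .env
SWAGGER_JSON=/foo/openapi.json

# docker-compose.yml then references the variable instead of hard-coding it
    environment:
      - SWAGGER_JSON=${SWAGGER_JSON}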

Using docker-compose (formerly fig) to link a cron image

I'm running a simple Rails app in Docker using docker-compose (formerly fig) like this:
docker-compose.yml
db:
  image: postgres
  volumes:
    - pgdata:/var/lib/postgresql/data
web:
  build: .
  command: bundle exec rails s -b 0.0.0.0
  volumes:
    - .:/usr/src/app
  ports:
    - "3011:3000"
  links:
    - db
Dockerfile
FROM rails:onbuild
I need to run some periodic maintenance scripts, such as database backups, pinging sitemaps to search engines, etc.
I'd prefer not to use cron on my host machine, since I prefer to keep the application portable, and my idea is to use docker-compose to link an image such as https://registry.hub.docker.com/u/hamiltont/docker-cron/.
The official rails image does not have SSH enabled, so I cannot just have the cron container SSH into the web container and run the scripts.
Does docker-compose have a way for a container to gain a shell into a linked container to execute some commands?
What exactly would you like to do with your containers? If you need to access some objects from a container's file system, you should just mount the volume into the ancillary container (consider the --volumes-from option).
Any SSH interaction between containers is considered bad practice (at least since Docker 1.3, when docker exec was implemented). Running more than one process inside a container (i.e. anything besides postgres or rails in your case) results in significant overhead: in order to have sshd alongside rails you would have to deploy something like supervisord.
But if you really need to provide some kind of nonstandard interaction between the containers, and you are sure that you really need it, I would suggest using one of the full-featured Docker client libraries (like docker-py). That will allow you to launch docker exec in a programmable way.
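As a minimal sketch of the docker exec route mentioned above, run from anything that can reach the Docker daemon (the host, or a container with the Docker socket mounted); the container name and rake task are illustrative:
# run a one-off maintenance task inside the already-running web container
docker exec myapp_web_1 bundle exec rake db:backup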

Resources