On Docker for Windows, I have a simple SQL Server container based on microsoft/mssql-server-windows-developer that is launched with docker-compose up via a simple docker-compose.yaml file.
Is there a way to allocate more than 1GB of memory to this container? I can do it when running the image directly or when I build my image with -m 4GB, but I can't figure out how to do this when using Docker Compose. This container needs more than 1GB of RAM to run properly and all of my research has revealed nothing helpful thus far.
I've looked into the resources configuration option, but that only applies when running under Docker Swarm, which I don't need.
In Compose file format version 2.x you can use the mem_limit option, as below:
version: '2.4'
services:
  my-svc:
    image: microsoft/mssql-server-windows-developer
    mem_limit: 4G
In Compose file format version 3 it is replaced by the resources option, which requires Docker Swarm:
version: '3'
services:
  my-svc:
    image: microsoft/mssql-server-windows-developer
    deploy:
      resources:
        limits:
          memory: 4G
There is a compatibility flag that can be used to translate the deploy section into equivalent version 2 parameters when running docker-compose --compatibility up. However, this is not recommended for production deployments.
From the documentation:
docker-compose 1.20.0 introduces a new --compatibility flag designed
to help developers transition to version 3 more easily. When enabled,
docker-compose reads the deploy section of each service’s definition
and attempts to translate it into the equivalent version 2 parameter.
Currently, the following deploy keys are translated:
resources limits and memory reservations
replicas
restart_policy condition and max_attempts
All other keys are ignored and produce a warning if present. You can review the configuration that will be used to deploy by using the --compatibility flag with the config command.
We recommend against using --compatibility mode in production. Because the resulting configuration is only an approximate using non-Swarm mode properties, it may produce unexpected results.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the deploy key and swarm mode. If you want to set resource constraints on non swarm deployments, use Compose file format version 2 CPU, memory, and other resource options. If you have further questions, refer to the discussion on the GitHub issue docker/compose/4513.
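For example, a quick way to preview the translation and then run with it on a non-Swarm host (a sketch; both commands use the flag exactly as described in the quoted docs):
docker-compose --compatibility config   # show the translated version 2 configuration
docker-compose --compatibility up -d    # start the services with the translated mem_limit applied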
You can use a docker-compose file with version 2 instead of version 3, and use mem_limit (available in version 2) to set the memory limit. So you can use a docker-compose file like this:
version: "2.4"
services:
sql-server:
image: microsoft/mssql-server-windows-developer
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=t3st&Pa55word
mem_limit: 4GB
You can check the memory limit using docker stats.
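For instance, a one-shot check might look like this (the exact output columns may vary by Docker version):
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"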
I was also trying to set this up via docker-compose. I had a hard time figuring out why SQL Server worked on a new machine but no longer on my older one. I finally recalled that I had reduced the amount of memory Docker Desktop was allowed to allocate. You can find this via the settings button, under Resources/Advanced. Setting Memory to 2GB resolved the issue for me.
Related
There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first (basically ignoring depends_on, or interpreting depends_on as a start dependency rather than a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script runs.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
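For reference, here is a sketch of how such a bootstrap container might be wired into the compose file (the service names, the Dockerfile name, and the WAIT_HOSTS variable of the docker-compose-wait helper are assumptions from my setup):
version: "3.5"
services:
  postgres:
    image: postgres:10
  bootstrap:
    build:
      context: .
      dockerfile: Dockerfile.bootstrap   # produces the image with ./wait and ./bootstrap.sh
    environment:
      - WAIT_HOSTS=postgres:5432         # tells the wait helper which host/port to poll
    depends_on:
      - postgres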
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. While the up --build option will cause the image to be rebuilt, the build sequence will skip all of the steps (they are already in the layer cache) and produce the same image as originally, and whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
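If the shared-base-image case is what you're after, one common workaround is to build the base outside of Compose first (the image name and Dockerfile name here are placeholders):
docker build -t my-base:latest -f Dockerfile.base .   # build the shared base image first
docker-compose build                                  # dependent Dockerfiles can now start FROM my-base:latest
docker-compose up -d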
I have a docker-compose file that looks like the following:
version: "3.9"
services:
api:
build: .
ports:
- "5000"
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
count: 1
When I run docker-compose up, this runs as intended, using the first GPU on the machine.
However, if I run docker-compose up --scale api=2, I would expect each docker container to reserve one GPU on the host.
The actual behaviour is that both containers receive the same GPU, meaning that they compete for resources. Additionally, I also get this behaviour if I have two containers specified in the docker-compose.yml, both with count: 1. If I manually specify device_ids for each container, it works.
How can I make it so that each docker container reserves exclusive access to 1 GPU? Is this a bug or intended behaviour?
The behavior of docker-compose when a scale is requested is to create additional containers as per the exact specification provided by the service.
There are very few specification parameters that vary during the creation of the additional containers, and the devices (which are part of the host_config set of parameters) are copied without modification.
docker-compose is a Python project, so if this is an important feature for you, you can try to implement it yourself. The logic that drives the lifecycle of the services (creation, scaling, etc.) resides in compose/services.py.
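As the question notes, pinning device_ids manually does work. A sketch of that workaround with two explicit services, assuming NVIDIA GPUs 0 and 1 exist on the host:
version: "3.9"
services:
  api-gpu0:
    build: .
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']     # pin this service to the first GPU
              capabilities: [gpu]
  api-gpu1:
    build: .
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['1']     # pin this service to the second GPU
              capabilities: [gpu]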
Given compose file
version: '3.8'
services:
  whoami1:
    image: containous/whoami
    depends_on:
      - whoami2
  whoami2:
    image: containous/whoami
when deployed to Docker Swarm with docker stack deploy -c docker-compose.yaml test,
services whoami1 and whoami2 seem to start in random order and ignore the depends_on condition.
docker stack deploy -c docker-compose.yaml test
Creating network test_default
Creating service test_whoami1
Creating service test_whoami2
Does docker swarm support service startup sequencing via dependencies?
No, at least not built in.
Even with depends_on, whoami2 may not yet be ready to interact with whoami1, because it may need time to boot itself:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
https://docs.docker.com/compose/startup-order/
They hint at two possibilities to check whether whoami2 is ready.
Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
And depends_on is indeed ignored for docker swarm:
There are several things to be aware of when using depends_on:
(...)
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
https://docs.docker.com/compose/compose-file/#depends_on
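As a sketch of the wrapper-script idea from the docs (wait-for-it.sh and a shell would have to be baked into the image, and the port is an assumption based on whoami's default):
version: '3.8'
services:
  whoami2:
    image: containous/whoami
  whoami1:
    image: containous/whoami
    depends_on:
      - whoami2
    # block until whoami2 accepts TCP connections, then start the real process
    command: ["./wait-for-it.sh", "whoami2:80", "--", "/whoami"]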
There are a lot of applications which I launch on my workstation using docker-compose up.
Reasons:
They don't have an installer, or I don't want to use it
They require a dedicated storage engine to be present
They require a build process step
They are created by me and I want them to be easily launched on any workstation
e.t.c
So what I usually end up with the following file-structure:
myAppDir
- docker-compose.yml
- Dockerfile (not always)
- someConfigFile
And my docker-compose.yml is something like this:
(It can contain 2 or 3 services, but I provide the simplest form that I use)
version: '3.7'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    volumes:
      - ./mysqld.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    environment:
      - MYSQL_ROOT_PASSWORD=xyz
    ports:
      - 3306:3306
Then when I need to launch the application I just perform:
docker-compose up # (or with --build)
Recently I tried to add:
deploy:
  resources:
    limits:
      cpus: '0.50'
      memory: 200M
and got a message:
Some services (mysql) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
So I tried:
docker stack deploy mystack --compose-file docker-compose.yml
and got message:
Ignoring unsupported options: restart
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
This seems more complex than docker-compose up.
I saw that I can use --compatibility flag e.g.
docker-compose --compatibility up
But the word compatibility means to me that I should soon switch to a new way of launching my apps locally.
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
If you want to specify memory limits and similar constraints for local containers, you need to use a version 2 Compose file. This is called out in the documentation for the deploy: resources: section. docker/compose#4513 has some reasonably clear statements that Compose file version 2 is more targeted at local setups and version 3 more at Swarm installations, and that Docker intends to keep supporting both file versions.
Docker has put many options and functions specific to their Swarm cluster-installation mode into the core product. Anything that mentions a "stack", for example, is specific to a Swarm setup. One consequence of Swarm and plain-Docker things being combined together is that the deploy: Docker Compose options only have an effect in Swarm mode. The documentation for the deploy: key notes:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
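For the mysql service above, a rough version 2 equivalent of those limits might look like this (values copied from the question; the cpus option requires file format 2.2 or later):
version: '2.4'
services:
  mysql:
    image: mysql:5.7.29
    restart: always
    mem_limit: 200M
    cpus: 0.5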
My question is: What is the new procedure that I should follow for launching apps on my workstation using a docker and a descriptor file, in order to support options present in Compose file v3?
Compose file format V3 is meant to be used with Docker Swarm deployments, so you either need to run Docker in Swarm mode, or keep using V2 and its simpler interface for localhost development.
For example, restart is ignored because that responsibility now belongs to Docker Swarm, not to Docker itself.
Using the --compatibility flag essentially converts your V3 compose file into a V2 compose file at runtime.
So, in short, use V3 if you want to run Docker in Swarm mode and take advantage of all its new features; it is roughly a Kubernetes in Docker land.
I am working on building automated CI/CD pipeline for LAMP application using docker.
I want the image to be spun up into 5 containers, so that 5 different developers can work on their code. Can this be attained? I tried it using replicas, but it didn't work out.
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 4
The error which I get:
#!/bin/bash -eo pipefail
docker-compose up
ERROR: The Compose file './docker-compose.yml' is invalid because: Additional properties are not allowed ('jobs' was unexpected)
You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1. For more on the Compose file format versions, see docs.docker.com/compose/compose-file
Exited with code 1
Also, can developers push, pull, and commit to git from their different containers? Will work done in one container be lost if the image is rebuilt or re-run?
What things should I actually take care of while building this pipeline?
First of all, build your image separately using a Dockerfile with docker build -t <image name>:<version/tag> ., then use the following compose file with docker stack deploy to deploy your stack.
version: '3'
services:
  web:
    image: <image name>:<version/tag>
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 4
The deploy attribute should be inside a service because it describes the number of replicas that service must have; it is not a global attribute like services. That seems to be the only problem in your compose file, and docker-compose up is complaining about it when run from the pipeline.
Update
You cannot run multiple replicas with a single docker-compose command. To run multiple replicas from a compose.yml, create a swarm by executing docker swarm init on your machine.
Afterward, simply replace docker-compose up with docker stack deploy -c docker-compose.yml <stack name>. docker-compose simply ignores the deploy attribute.
For details on differences between docker-compose up and docker stack deploy <stack name> refer to this article: https://nickjanetakis.com/blog/docker-tip-23-docker-compose-vs-docker-stack
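Putting it together, the workflow might look like this (the image and stack names are placeholders):
docker swarm init                                   # one-time: make this node a swarm manager
docker build -t myapp:latest .                      # build and tag the image referenced by the compose file
docker stack deploy -c docker-compose.yml mystack   # replicas in the deploy section are honored here
docker stack services mystack                       # check that the 4 replicas are running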