While I was trying to convert a Docker Compose file with container-transform, I got the following error:
Container "container-name" is missing required parameter 'image'.
Services that have the image parameter work fine. However, the ones that use the build parameter instead of image cause the error. I want to build some of the images from a Dockerfile via the build parameter, and I don't need an image parameter in the Docker Compose file at all. What would be the most effective solution here?
Here is an example:
Successful transformation for the db service:
docker-compose.yml:
db:
  image: postgres
Dockerrun.aws.json:
"containerDefinitions": [
{
"essential": true,
"image": "postgres",
"memory": 128,
"mountPoints": [
{
"containerPath": "/var/lib/postgresql/data/",
"sourceVolume": "Postgresql"
}
],
"name": "db"
}
Unsuccessful transformation for the web service, since build is used instead of the image parameter:
docker-compose.yml:
web:
  build:
    context: .
    dockerfile: Dockerfile
The issue is that an AWS ECS (Elastic Container Service) task definition cannot depend on a Dockerfile to build the image. The image has to already be built before it can be used in a task definition. For this reason the "image" key is required in a task definition JSON file, and so it also has to be present in the docker-compose file you are converting from.
The image for the task definition can come from Docker Hub (like the postgres image does), or you can build your own images and push them to AWS ECR (Elastic Container Registry).
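For example, a minimal sketch of that workflow (the repository name my-web, the region, and the account ID are placeholders you would replace with your own):
docker build -t my-web -f Dockerfile .
aws ecr create-repository --repository-name my-web
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-web:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web:latest
Once the image is pushed, the web service can reference it with image: instead of build:, and the transform succeeds:
web:
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web:latest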
I need to deploy on Heroku a Docker image I have from a public registry.
The image needs some parameters to run, including a certificate file.
Locally, I can use Docker Compose, specifying all the env vars and volumes in the docker-compose.yml file.
# === docker-compose.yml ===
services:
  my_service:
    image: public.images.repo/my-service
    container_name: my_service
    volumes:
      - /Users/me/public.pem:/public.pem
    environment:
      - CERT_PATH=/public.pem
Unfortunately, I've just seen that Heroku doesn't support Docker Compose.
I see that it supports the heroku.yml file, but that requires a Dockerfile, which I don't have and can't modify since I only have the image. And, apparently, there is no volume field.
# === heroku.yml ===
build:
  docker:
    web: Dockerfile
    worker: worker/Dockerfile
release:
  image: worker
How can I deploy a docker container, with volumes to import certificate files?
Heroku does not support Docker volumes at all:
Unsupported Dockerfile commands
VOLUME - Volume mounting is not supported. The filesystem of the dyno is ephemeral.
You could create a custom image based on the public image that gets the certificate file some other way.
Without more information about the container you're trying to run, it's hard to say much more.
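One rough sketch of that approach, assuming the certificate can be baked into the image at build time (the base image name comes from the question; everything else is a placeholder):
# Dockerfile
FROM public.images.repo/my-service
# copy the certificate into the image instead of mounting it as a volume
COPY public.pem /public.pem
ENV CERT_PATH=/public.pem
# === heroku.yml ===
build:
  docker:
    web: Dockerfile
Keep in mind that anything copied in this way becomes part of the image; for a secret you may prefer to pass the certificate contents through a Heroku config var and write it to a file when the container starts.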
I have a very simple docker-compose.yml:
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
    {
        "Type": "bind",
        "Source": "/Users/crummy/code/.../config",
        "Destination": "/root/config",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file a little less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?
No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy and objective way for docker-compose to figure out how to expand . in this context, as there's no way to know what . should mean on the ssh host (the home directory? the same folder?).
Even though the docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since the variable expansion would also happen on the machine where you run the docker-compose command.
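One workaround under those constraints is to make the host side of the mount an explicit variable and set it to the remote machine's absolute path yourself. This is only a sketch, and CONFIG_DIR is a made-up variable name:
# docker-compose.yml
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ${CONFIG_DIR:-./config}:/root/config
# run locally, but point the variable at the path as it exists on the staging host
CONFIG_DIR=/home/ubuntu/config docker-compose --context staging up -d
The expansion still happens locally, but because you supply the remote path explicitly, the bind mount ends up where you expect on the staging host.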
I am trying to come up with a CI system where I validate the Dockerfile and docker-compose.yaml files that are used to build our images.
I found Google's container-structure-test, which can be used to verify the structure of Docker images after they are built. This works if the Docker images are built from a Dockerfile.
Is there a way that I can verify the docker images with all the configurations that are added to the images by Docker-compose?
EDIT:
Maybe I didn't put all my details into the question.
Let's say I have a docker-compose file with the following structure:
version: "3"
services:
image-a:
build:
context: .
dockerfile: Dockerfile-a
image-b:
build:
context: .
dockerfile: Dockerfile-b
ports:
- '8983:8983'
volumes:
- '${DEV_ENV_ROOT}/solr/cores:/var/data/solr'
- '${DEV_ENV_SOLR_ROOT}/nginx:/var/lib/nginx'
Now the images would be built from Dockerfile-a and Dockerfile-b, and there would be configurations (ports, volumes) applied on top of image-b. How can I validate those configurations without building a container from image-b? Would that even be possible?
Assuming you have the following docker-compose.yml file:
version: "3"
services:
image-a:
build:
context: .
dockerfile: Dockerfile-a
image-b:
build:
context: .
dockerfile: Dockerfile-b
Build your images by running the command docker-compose --project-name foo build. This will make all image names start with the prefix foo_. So you would end up with the following image names:
foo_image-a
foo_image-b
The trick is to use a unique id (such as your CI job id) instead of foo so you can identify the very images that were just built.
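For example, a sketch using a GitLab-style CI_JOB_ID variable (substitute whatever unique identifier your CI system provides):
docker-compose --project-name "ci-${CI_JOB_ID}" build
The images then come out named ci-<job id>_image-a and ci-<job id>_image-b.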
Now that you know the names of your images, you can use:
container-structure-test test --image foo_image-a --config config.yaml
container-structure-test test --image foo_image-b --config config.yaml
If you want to make a generic job that does not know the Compose service names, you can use the following command to get the list of images starting with that foo_ prefix:
docker image list --filter "reference=foo_*"
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
foo_image-a   latest   0c5e1cf8c1dc   16 minutes ago   4.15MB
foo_image-b   latest   d4e384157afb   16 minutes ago   4.15MB
and if you want a script to iterate over this result, add the --quiet option to obtain just the image IDs:
docker image list --filter "reference=foo_*" --quiet
0c5e1cf8c1dc
d4e384157afb
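Putting it together, a sketch of a CI step that runs container-structure-test against every image that was just built (config.yaml stands in for whatever test configuration you already have):
for image_id in $(docker image list --filter "reference=foo_*" --quiet); do
  container-structure-test test --image "$image_id" --config config.yaml
done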
What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a Dockerized project's path:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation, in Python, as said in the comments) that saves you time and 500-character-long command lines (and also lets you start multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service needs an image to be built. The equivalent docker build command would look like:
docker build -t myproject_web -f web/Dockerfile-alpine ./web
(assuming the project directory is named myproject; by default Compose names the image <project>_<service>).
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a better way to use docker than just a docker command.
If the question here is whether the docker-compose build command builds a single bundle containing multiple images, which would otherwise have been built separately with the usual Dockerfiles, then that assumption is wrong.
docker-compose build builds an individual image for each service entry in docker-compose.yml.
With the docker images command, you can see all the individual images saved as well.
The real magic is docker-compose up.
That one basically creates a network of interconnected containers that can talk to each other using the container name, similar to a hostname.
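A quick sketch of what that looks like with the compose file above (this assumes the web image has ping available, which it may not):
docker-compose up -d
# from inside the web container, "database" resolves like a hostname
docker-compose exec web ping -c 1 database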
Adding to the first answer...
You can give the image name and container name under the service definition.
e.g. for the service called web in the docker-compose example below, you can give the image name and container name explicitly, so that Docker does not have to use the defaults.
Otherwise the image name that Docker will use is the concatenation of the project folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name (which must be lowercase) that will be used when the build is executed.
e.g.
image: mywebserviceimage
container_name: my-webServiceImage-Container
example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceimage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers.
With docker build, the option is --cache-from <image>.
With docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable by the other as a layer cache (the layer IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduced an experimental COMPOSE_DOCKER_CLI_BUILD setting that makes docker-compose use the native Docker builder (therefore, images made by docker build can be used as a layer cache for docker-compose build).
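A sketch of what that combination can look like (the registry and image name are placeholders):
# docker-compose.yml
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile-alpine
      cache_from:
        - registry.example.com/myproject_web:latest
# build with the native Docker builder so docker build and docker-compose build can share layer caches
COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build web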
I would like to pull a remote image from a private registry. The image's Dockerfile contains some build args whose values would be populated via docker-compose.yml. For example:
version: '3.0'
services:
  api:
    image: remoteApiImage
  web:
    image: remoteWebImage
    build:
      args:
        baseurl: http://remoteApiImage:80
Currently, this does not work, as build requires a context. However, if I set a context, it expects a local Dockerfile. Even setting it to . without a local Dockerfile will pull the remote image, but the build args are not passed properly.
Is this possible?
Note: I am using Windows Server 2016 containers. Not sure that is relevant to the issue.
As johnharris85's comment suggests, what you are trying to do is not possible. You cannot pull an image and then rebuild it without its Dockerfile.
If you are trying to pass values to a pre-built image, do this via environment variables. Otherwise, if you have the source Dockerfile for the image, you can use the ARG instruction to specify arguments while rebuilding it yourself.
Also note that in the docker-compose.yml context, build args only take effect while an image is actually being built from a Dockerfile; they do nothing for a service that simply pulls an image.
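A sketch of the environment-variable approach for the compose file from the question (it assumes the application inside remoteWebImage reads its base URL from an environment variable at runtime, which is an assumption about that image, and BASEURL is a made-up variable name):
version: '3.0'
services:
  api:
    image: remoteApiImage
  web:
    image: remoteWebImage
    environment:
      # hypothetical variable; the app must read it at startup
      - BASEURL=http://api:80
Inside the Compose network, the api service is reachable by its service name, so api is used as the hostname here rather than the image name.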