Docker compose won't push to ECR - docker

I'm trying to set up a deployment where a single docker compose up command builds the necessary images, pushes them to ECR and launches everything on ECS.
What I've done so far:
Created and switched to aws context:
docker context create ecs aws
docker context use aws
Created docker-compose.yml file:
version: '3.9'
services:
  frontend:
    build: ./frontend
    image: <my_aws_id>.dkr.ecr.<region>.amazonaws.com/frontend:latest
  backend:
    build: ./backend
    image: <my_aws_id>.dkr.ecr.<region>.amazonaws.com/backend:latest
  nginx:
    build: ./nginx
    image: <my_aws_id>.dkr.ecr.<region>.amazonaws.com/nginx:latest
    ports:
      - 80:80
After reading a bunch of manuals, my understanding is that docker compose up should:
Build images
Push them to ECR
Create cluster and necessary tasks on ECS
Launch containers
Instead I'm getting an error telling me that those images are not found. Yes, my ECR repo is empty, but I expected docker compose up to build and push the images to that repo, not just try to pull them from ECR.
What am I doing wrong?

The purpose of the command is to bring the services up locally:
❯ docker-compose --help
...
create Create services
push Push service images
start Start services
up Create and start containers
Depending on the context it may also build and pull images, create or update containers, and of course start services, but it does not push. If you intend to use docker-compose to push images, I suggest you run the build and push commands manually:
docker-compose build
docker-compose push

The ecs context in docker compose does not allow you to build or push. As of today you need to dance slightly between contexts to achieve what you are trying to do. This is an example (embedded into a completely different project) that shows the process.
In a nutshell (and for posterity):
export ACCOUNTNUMBER=<your account>
export MYREGION=<your region>
aws ecr get-login-password --region $MYREGION | docker login --username AWS --password-stdin $ACCOUNTNUMBER.dkr.ecr.$MYREGION.amazonaws.com
docker context use default
docker compose build
docker compose push
docker context use myecscontext
docker compose up
[the above assumes the myecscontext context has already been created with docker context create ecs myecscontext]

Related

Docker compose ecs fails to deploy (fails when using docker compose up)

I am trying to determine why the CloudFormation build of the application fails when trying to create resources for BackgroundjobsService (Create failed in CloudFormation). The only main differences from the other services I have built are that it has no exposed ports and I am using the ubuntu image instead of php-apache.
Dockerfile (I made it super simple; it basically does nothing):
# Pulling Ubuntu image
FROM ubuntu:20.04
docker-compose.yml
services:
  background_jobs:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler
    restart: always
    env_file: ../.env.${ENV}
    build:
      context: "."
How I deploy (I verified the env files exist in the parent directory of job-scheduler):
cd job-scheduler
ENV=dev docker --context default compose build
docker push 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler:latest
ENV=dev docker --context tcetra-dev compose up
I don't know how to find any sort of error logs, but the task definition gets created and all my env vars are in there.
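One way to surface the actual failure reason is to ask CloudFormation for the stack events; the stack name below is an assumption (Compose names the stack after the project, typically the directory name), so adjust it to whatever docker compose up reports:
# show which resource failed and why
aws cloudformation describe-stack-events \
  --stack-name job-scheduler \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table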

Push docker image to another container registry with compose

I am running Locust (the official image from Docker Hub) locally using a docker-compose file, as below:
version: '3'
services:
  locust:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H https://my-host-url.com
I have done the stress testing locally with docker-compose up. The next step is to push this setup to another registry. I am following the steps given in the Docker Hub documentation. However, I need some help in getting the necessary locustfile.py into my other registry as well (let's say Artifactory).
To upload an image to your custom registry it has to be properly tagged (named), and it is not necessary to use docker build for that. You can do it with docker tag:
# pull the image
docker pull locustio/locust
# rename it for your registry
docker tag locustio/locust:latest my-registry.com:5000/locust:latest
# push it to your registry using its new name
docker push my-registry.com:5000/locust:latest
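Note that a registry stores images, not loose files, so if you want locustfile.py to travel along, you have to bake it into an image of your own. A minimal sketch, assuming the example name my-registry.com:5000/locust-custom:
# Dockerfile: extend the official image and copy the test script into it
FROM locustio/locust
COPY locustfile.py /mnt/locust/locustfile.py
Build, tag and push that image exactly as above, then point the compose command: at the baked-in path instead of the bind-mounted one.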

Docker: Pull from docker hub, doesn't download any files

It's my second day playing with Docker. I'm trying to make a simple Django web server with Docker. So basically I created Dockerfile and docker-compose.yml files in my directory, and my docker-compose.yml is set to:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8080
    volumes:
      - .:/app
    ports:
      - "8080:8080"
    env_file:
      - ./env.dev
What I'm trying to achieve is to push these files to Docker Hub (a repository), as I understand it, and then pull them from the repo. So basically I open the terminal and run these commands:
docker images
docker tag ID docker_username/repo_name:firsttry
docker push docker_username/repo_name
After pushing I can see that I have a repository in hub with some type of image history, so now I'm trying to pull the data to my local pc.
My commands:
cd some_directory
docker pull dziugasiot/wintekaiot:firsttry
And the response I get is:
firsttry: Pulling from dziugasiot/wintekaiot
Digest: sha256:477a0bb335f841875d43f0f5717c0416a500989f280112c36b613aa97d82157e
Status: Image is up to date for dziugasiot/wintekaiot:firsttry
docker.io/dziugasiot/wintekaiot:firsttry
The directory is empty. What am I doing wrong?
You are not doing anything wrong, but you are thinking about it wrong.
docker push sends the image to Docker Hub.
docker pull downloads the image into Docker's storage on your system, not into the directory you are in. You can see your pulled images with the command docker images (it shows you a list).
Once the pulled image has been downloaded, you can use it from any directory by running docker run -it dziugasiot/wintekaiot:firsttry bash (which creates a container).
All "files" (layers) needed to create a container for that image are already downloaded, see the message "Status: Image is up to date for dziugasiot/wintekaiot:firsttry"
Docker uses copy-on-write mechanism, so in short: once you download an image, you don't have to download it again (for the same version).
For beginners with Docker I don't recommend to do anything in Docker root folder, but for a complete answer: you can find your image files (layers) in Docker root folder. The place and the format of it depends on your actual configuration, but you can look it up by issuing docker info command and looking for Docker Root.
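For example, assuming a stock installation, the following prints just the storage location (commonly /var/lib/docker on Linux, but yours may differ):
# print only the Docker root directory reported by the daemon
docker info --format '{{.DockerRootDir}}'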

How to deploy a project using docker with gitlab-ci

I'm fairly new to docker and gitlab-ci with the docker runner.
The docker runner works and I'm fine with it, except for one thing: it seems the docker runner cannot see locally available images, which means I may have to create a custom registry unless there's a way to make the docker command use the host's Docker daemon.
What I'm trying to achieve is this:
Build a Dockerfile and fetch a few other git repositories.
Create a new docker image based on the Dockerfile.
Start a new docker container on the host's Docker daemon which will remain alive even after the job is done.
In other words, I'm trying to generate a docker image and start/replace an existing service in the host's dockerd service.
Right now this is what I came up with, but it doesn't work, as data isn't passed from one job to the other. And even if the build job worked, I doubt the docker service I created would be accessible from the outside world.
stages:
  - test
  - prepare
  - build

# Build the Dockerfile
prepare_script:
  stage: prepare
  image: debian:stretch
  script:
    - apt-get update
    - apt-get install -y git python3
    - python3 prepare_project.py

# Build and deploy the docker image
build:
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker build -t my-project .
    - docker run --add-host db:172.17.42.1 -d --name my-project-inst --restart always -p 8069:8069 myproject
How can I use gitlab-ci to automatically deploy docker images in the host docker service?
The problem I'm trying to solve is to generate the Dockerfile so that fetching git repositories and submodules can be done dynamically without having to hand-modify Dockerfiles.
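As for making the docker command inside a job use the host docker instead of docker:dind, one commonly used approach is to mount the host's Docker socket into job containers through the runner's config.toml. This is a sketch of that runner-side configuration, not a drop-in file (the rest of your registration settings stay as they are):
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    # hand jobs the host daemon, so images built in a job land on the host
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
With the socket mounted you can drop the DOCKER_HOST variable and the docker:dind service from the build job, and a container started with docker run ... --restart always keeps running on the host after the job finishes.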

Docker Compose does not allow to use local images

The following command fails, trying to pull image from the Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY    TAG        IMAGE ID       CREATED        SIZE
web-server    staging    b94573990687   7 hours ago    365MB
Why Docker doesn't search among locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. I tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image: to build: and reference the Dockerfile in its relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
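If you prefer a single step, docker-compose also accepts the --build flag on up, which rebuilds the images before starting the containers:
# rebuild the images and start the system in one command
docker-compose up --build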
In your docker-compose.yml, you can specify build: . instead of image: <username>/<repo> for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you can use paths relative to the docker-compose file for multiple services.
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March-09-2020 EDIT:
(docker version 18.09.9-ce, build 039a7df;
docker-compose version 1.24.0, build 0aa59064)
I found that to just create a docker container, you can simply run docker-compose up -d after tagging the image with a fake local registry server tag (localhost:5000/{image}).
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run the local registry server, but you do need to change the image URL in the docker-compose YAML file to the fake local registry server URL:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name}   # changed from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and just up -d
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) on your local machine.
Adding to @Tom Saleeba's response:
I still got errors after tagging the image with a "/"
(for example: victor-dombrovsky/docker-image:latest).
It kept looking for the image on the remote docker.io server.
registry_address/docker-image
It seems the part of the URL before the "/" is the registry address and the part after the "/" is the image name, and without a "/" provided, docker-compose by default looks for the image on the remote docker.io.
I guess it's a known bug with docker-compose.
I finally got it working by running the local registry, pushing the image to the local registry with the registry tag, and pulling the image from the local registry.
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
and then change the image URL in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name}   # change here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker tags: "one part" tags are interpreted as official images coming from Docker Hub.
For me, putting build: . did the trick. My working docker-compose file looks like this:
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image stored there, nor is the image available on Docker hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However, if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository at Docker Hub, or you can host your own repository somewhere, and use docker push to push the image there, then make sure your docker-compose.yml references the correct repository.
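For instance, assuming a hypothetical Docker Hub account named victor, the flow would look roughly like this:
# tag the local image with the Docker Hub namespace and push it
docker tag web-server:staging victor/web-server:staging
docker push victor/web-server:staging
Then set image: victor/web-server:staging for that service in docker-compose.yml.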
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
My issue when hitting this was that I had built using docker without sudo but ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, and sudo docker compose up only had access to the latter.
