Docker Compose ECS fails to deploy (fails when using docker compose up)

I am trying to determine why the CloudFormation build of my application fails when creating resources for BackgroundjobsService (Create failed in CloudFormation). The only major differences from the other services I have built are that it exposes no ports and uses the ubuntu image instead of php-apache.
Dockerfile (I made it super simple; it basically does nothing):
# Pulling Ubuntu image
FROM ubuntu:20.04
docker-compose.yml
services:
  background_jobs:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler
    restart: always
    env_file: ../.env.${ENV}
    build:
      context: "."
How I deploy (I verified the .env files exist in the parent directory of job-scheduler):
cd job-scheduler
ENV=dev docker --context default compose build
docker push 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler:latest
ENV=dev docker --context tcetra-dev compose up
I don't know how to find any sort of error logs, but the task definition gets created and all my env vars are in there.
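One way to surface the underlying failure reason (assuming the AWS CLI is configured for the same account and region, and that the CloudFormation stack is named after the Compose project) is to list the failed stack events:
aws cloudformation describe-stack-events --stack-name job-scheduler \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table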

Related

Deploy a Docker image on Heroku (with volumes), without having the source code

I need to deploy on Heroku a Docker image I have from a public registry.
The image needs some parameters to run, including a certificate file.
Locally, I can use docker compose, specifying all the env vars and volumes in the docker-compose.yml file.
# === docker-compose.yml ===
services:
  my_service:
    image: public.images.repo/my-service
    container_name: my_service
    volumes:
      - /Users/me/public.pem:/public.pem
    environment:
      - CERT_PATH=/public.pem
Unfortunately, I've just seen that Heroku doesn't support docker compose.
I see that it supports the heroku.yml file, but that requires a Dockerfile, which I don't have and can't modify since I only have the image. And, apparently, there is no volume field.
# === heroku.yml ===
build:
  docker:
    web: Dockerfile
    worker: worker/Dockerfile
release:
  image: worker
How can I deploy a docker container, with volumes to import certificate files?
Heroku does not support Docker volumes at all:
Unsupported Dockerfile commands
VOLUME - Volume mounting is not supported. The filesystem of the dyno is ephemeral.
You could create a custom image based on the public image that gets the certificate file some other way.
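For example, a minimal wrapper Dockerfile (a sketch, assuming the certificate can safely be baked into the image rather than mounted) could be:
# Custom image on top of the public one; add the certificate at build time
FROM public.images.repo/my-service
COPY public.pem /public.pem
ENV CERT_PATH=/public.pem
You could then point heroku.yml at this Dockerfile instead of the one you don't have.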
Without more information about the container you're trying to run it's hard to say much more.

How to deploy a container using docker-compose to Google Cloud?

I'm quite new to GCP and have mostly been using AWS. I am currently trying to play around with GCP and want to deploy a container using docker-compose.
I set up a very basic docker-compose.yml file as follows:
# docker-compose.yml
version: '3.3'
services:
  git:
    image: alpine/git
    volumes:
      - ${PWD}:/git
    command: "clone https://github.com/PHP-DI/demo.git"
  composer:
    image: composer
    volumes:
      - ${PWD}/demo:/app
    command: "composer install"
    depends_on:
      - git
  web:
    image: php:7.4-apache
    ports:
      - "8080:${PORT:-80}"
      - "8000:${PORT:-8000}"
    volumes:
      - ${PWD}/demo:/var/www/html
    command: php -S 0.0.0.0:8000 -t /var/www/html
    depends_on:
      - composer
So the containers will get the code from git, then install the dependencies using composer, and finally the app will be available on port 8000.
On my machine, running docker-compose up does everything. However, how can I push this docker-compose setup to Google Cloud?
I have tried building a container using the docker/compose image and a Dockerfile as follows:
FROM docker/compose
WORKDIR /opt
COPY docker-compose.yml .
WORKDIR /app
CMD docker-compose -f /opt/docker-compose.yml up web
Then I pushed the container to the registry and tried deploying to:
- Cloud Run: did not work, as I could not find a way to specify a mounted volume for /var/run/docker.sock
- Kubernetes: I mounted docker.sock, but I keep getting an error in the logs that /app from the git service is read-only
- Compute Engine: same error as above
I don't want to make a container by copying all the local files into it and then uploading it, as the dependencies could be really big, making a heavy container to push.
I have a working docker-compose setup and just want to use it on GCP. What's the easiest way?
This can be done by creating a cloudbuild.yaml file in your project root directory.
Add the following step to cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
On Google Cloud Platform > Cloud Build, configure the build configuration file type as Cloud Build configuration file (yaml or json) and enter the file location: cloudbuild.yaml.
If the repository event that invokes the trigger is set to "push to a branch", then Cloud Build will run your docker-compose.yml to build your containers.
Take a look at Kompose. It can help you convert the docker-compose instructions into Kubernetes-specific deployments and services. You can then apply the Kubernetes files against your GKE clusters. Note that you will have to build the containers and store them in Container Registry first, and update the image tags in the service definitions accordingly.
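For instance, a typical conversion (a sketch, assuming Kompose and kubectl are installed and pointed at your GKE cluster) looks like:
# Convert the Compose file into Kubernetes manifests, then apply them
kompose convert -f docker-compose.yml
kubectl apply -f .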
If you are trying to set up the same thing as an on-premises VM in GCE, you can install Docker and Docker Compose on the instance and run it there. Ref: https://dev.to/calvinqc/the-easiest-docker-docker-compose-setup-on-compute-engine-1op1

Push image built with docker-compose to dockerhub

I have a Golang script which interacts with Postgres. I created a service in docker-compose.yml for both Golang and Postgres. When I run it locally with docker-compose up it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just docker run. What is the correct way of doing that?
The image created by docker-compose up --build launches with no error with docker run, but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: #some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: #some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up) or builds and then runs (with up --build) Docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1 - based on the go configuration
2 - based on the db configuration
To see which containers exist and their status, use the command:
docker ps -a
For more info, see the docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both Golang and Postgres, you can take a Postgres base image and install Golang on it, or the other way around: take a Golang-based image and install Postgres on it.
The installation steps can be done inside the Dockerfile; please refer to:
- the official postgres Dockerfile
- the official golang Dockerfile
and combine them to get both, as sketched below.
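A rough sketch of such a combined image (an assumption, not a production recommendation; it installs the Debian PostgreSQL packages on top of the official golang image):
FROM golang:latest
# Install a PostgreSQL server alongside the Go toolchain
RUN apt-get update && apt-get install -y postgresql && rm -rf /var/lib/apt/lists/*
WORKDIR $GOPATH/src/workflow/project
# An entrypoint script would still be needed to start postgres before running the Go app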
Edit (DigitalOcean deployment):
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large-scale/high-traffic applications, more advanced solutions are used, such as:
- Docker Swarm
- Kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.

docker-compose vs docker run to load local docker images across machines

I saved the images created on one machine (using docker save -o [images.tar]) and copied them to a different machine.
After copying, the images were loaded using docker load on the second machine, roughly as shown below.
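A sketch of that save/load workflow (image name assumed from the Compose file below):
# on the source machine
docker save -o images.tar datastore-mongodb:latest
# on the second machine
docker load -i images.tar
docker images   # confirm the image and tag are present locally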
When I try to run these images using a docker-compose file (i.e. the yml file) and execute docker-compose up -d --no-recreate, I get the error below:
Pulling mongodb (datastore-mongodb:latest)...
ERROR: pull access denied for datastore-mongodb, repository does not exist or may require 'docker login'
But the image is already present in the local Docker image store, which I verified using the docker images command.
However, using docker run I am able to run the same image.
But I have a multi-container application, so I can't simply replace the yml file with individual docker run commands.
Why is it not working with docker compose?
Below is the yml file:
version: "2.2"
services:
mongodb:
image: datastore-mongodb
container_name: datastore-mongodb
network_mode: "bridge"
volumes:
- /opt/mongodb:/data/db
ports:
- 192.168.80.240:27017:27017

Docker Compose does not allow using local images

The following command fails, trying to pull image from the Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
web-server staging b94573990687 7 hours ago 365MB
Why doesn't Docker search among locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. I tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image: to build: and reference the Dockerfile in the relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
In your docker-compose.yml, you can specify build: . instead of image: <username>/<repo> for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you may be able to use paths relative to the docker-compose file for multiple services.
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March-09-2020 EDIT:
(Docker version 18.09.9-ce, build 039a7df; docker-compose version 1.24.0, build 0aa59064)
I found that to just create a Docker container, you can run docker-compose up -d after tagging the image with a fake local registry server tag (localhost:5000/{image}).
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run the local registry server, but you do need to change the image URL in the docker-compose yaml file to the fake local registry server URL:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name} # changed from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and just run up -d:
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) on your local machine.
Adding to Tom Saleeba's response:
I still got errors after tagging the image with a "/"
(for example: victor-dombrovsky/docker-image:latest).
It kept looking for the image on the remote docker.io server.
registry_address/docker-image
It seems the part before the "/" is the registry address and the part after the "/" is the image name, and without a "/" provided, docker-compose by default looks for the image on the remote docker.io registry.
I guess it's a known bug with docker-compose.
I finally got it working by running a local registry, pushing the image to the local registry with the registry tag, and pulling the image from the local registry.
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
and then change the image URL in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name} ##### change here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker tags, and "one-part" tags are interpreted as official images coming from Docker Hub.
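A sketch of the retag, reusing the names from this thread (the prefix is just an example):
docker tag chat-server:staging victor-dombrovsky/chat-server:staging
After retagging, point image: in docker-compose.yml at the new two-part name.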
For me, putting build: . did the trick. My working docker-compose file looks like this:
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image stored there, nor is the image available on Docker hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However, if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository at Docker Hub, or you can host your own repository somewhere, and use docker push to push the image there, then make sure your docker-compose.yml references the correct repository, roughly as sketched below.
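A sketch with a hypothetical Docker Hub account name:
docker tag web-server:staging your-dockerhub-user/web-server:staging
docker push your-dockerhub-user/web-server:staging
Then set image: your-dockerhub-user/web-server:staging in docker-compose.yml so the remote host can pull it.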
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
My issue when getting this was that I had built using docker without sudo, and ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, where sudo docker compose up gave me access only to the latter.
