docker-compose tries to pull already existing images

I have a docker-compose.yml file which defines a service and its image.
service:
  image: my_image
Now, when I run docker-compose up, I get the following message:
$ docker-compose up
Pulling service (my_image:latest)...
Pulling repository docker.io/library/my_image
ERROR: Error: image library/my_image:latest not found
It is correct that my_image in this case is not on Docker Hub. But I built it earlier with docker build -t my_image . (from a different Dockerfile) and it is listed in docker images.
Is there anything I'm missing to tell docker-compose not to look for the image on the docker.io registry/hub?
[edit] The docker client and server version is 1.9.1; the docker-compose version is 1.5.2.
I'm running docker-compose (as well as docker) through the HTTP API against a remote machine; I don't know if this makes any difference.

If you have the image locally, or anywhere other than Docker Hub, you need to use build with a path or URL to the Dockerfile. So basically, when we work off Docker Hub, we change image to a build path!
ubuntu:
  container_name: ubuntu
  build: /compose/build/ubuntu
  links:
    - db:mysql
  ports:
    - 80:80
In this example I am using my own Ubuntu Dockerfile, placed in the build path. The file should be named Dockerfile as usual; you just specify the path to the folder that contains it.
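Applied to the question above, a minimal sketch could look like this (assuming the Dockerfile for my_image lives in ./service — my assumption, not something stated in the question):
service:
  build: ./service
Then run docker-compose build followed by docker-compose up, and Compose builds the image locally instead of trying to pull my_image from Docker Hub.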

Related

Docker compose ecs fails to deploy (fails when using docker compose up)

I am trying to determine why the CloudFormation build of the application fails when trying to create resources for BackgroundjobsService (Create failed in CloudFormation). The only main differences from the other services I have built are that it has no exposed ports and I am using the ubuntu image instead of php-apache.
Dockerfile (I made it super simple; it basically does nothing):
# Pulling Ubuntu image
FROM ubuntu:20.04
docker-compose.yml
services:
  background_jobs:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler
    restart: always
    env_file: ../.env.${ENV}
    build:
      context: "."
How I deploy (I verified the .env files exist in the parent directory of job-scheduler):
cd job-scheduler
ENV=dev docker --context default compose build
docker push 000.dkr.ecr.us-east-1.amazonaws.com/company/job-scheduler:latest
ENV=dev docker --context tcetra-dev compose up
I don't know how to find any sort of error logs, but the task definition gets created and all my env vars are in there.

Push docker image to another container registry with compose

I am running Locust (the official image from Docker Hub) locally using a docker-compose file as below:
version: '3'
services:
  locust:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H https://my-host-url.com
I have done the stress testing locally with docker-compose up. The next step is to push this setup to another registry. I am following the steps given in the Docker Hub documentation. However, I just need some help with also getting the necessary locustfile.py into my other registry (let's say Artifactory).
To upload an image to your custom registry it has to be properly tagged (named), and it is not necessary to use docker build for that. You can do it with docker tag:
# pull the image
docker pull locustio/locust
# rename it for your registry
docker tag locustio/locust:latest my-registry.com:5000/locust:latest
# push it to your registry using its new name
docker push my-registry.com:5000/locust:latest
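If you also need locustfile.py to travel with the image rather than relying on the bind mount, one option is to bake it into a derived image; this is a sketch under that assumption, not a required step:
# hypothetical Dockerfile: extend the official image and copy the script in
FROM locustio/locust
COPY locustfile.py /mnt/locust/locustfile.py
Build it with docker build -t my-registry.com:5000/locust:latest . and push as shown above; the compose file on the other side then no longer needs the volumes: entry.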

Docker: Pull from docker hub, doesn't download any files

It's my second day playing with Docker, and I'm trying to make a simple Django web server with it. I created a Dockerfile and a docker-compose.yml in my directory, and my docker-compose.yml is set to:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8080
    volumes:
      - .:/app
    ports:
      - "8080:8080"
    env_file:
      - ./env.dev
What I'm trying to achieve is to push these files to a Docker Hub repository, as I understand it, and then pull them from the repo. So I open the terminal and run these commands:
docker images
docker tag ID docker_username/repo_name:firsttry
docker push docker_username/repo_name
After pushing I can see that I have a repository on the Hub with some kind of image history, so now I'm trying to pull the data to my local PC.
My commands:
cd some_directory
docker pull dziugasiot/wintekaiot:firsttry
And the response I get is:
firsttry: Pulling from dziugasiot/wintekaiot
Digest: sha256:477a0bb335f841875d43f0f5717c0416a500989f280112c36b613aa97d82157e
Status: Image is up to date for dziugasiot/wintekaiot:firsttry
docker.io/dziugasiot/wintekaiot:firsttry
The directory is empty. What am I doing wrong?
You are not doing anything wrong; you are thinking about it wrong.
docker push sends the image to Docker Hub.
docker pull downloads the image into Docker's own storage on your system, not into the directory you are in. You can see your pulled images with the command docker images (it shows a list).
Once the image is downloaded, you can use it from any directory by running docker run -it dziugasiot/wintekaiot:firsttry bash (which creates a container).
All "files" (layers) needed to create a container for that image are already downloaded, see the message "Status: Image is up to date for dziugasiot/wintekaiot:firsttry"
Docker uses copy-on-write mechanism, so in short: once you download an image, you don't have to download it again (for the same version).
For beginners with Docker I don't recommend to do anything in Docker root folder, but for a complete answer: you can find your image files (layers) in Docker root folder. The place and the format of it depends on your actual configuration, but you can look it up by issuing docker info command and looking for Docker Root.
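For example, you can print that directory directly (the exact path varies by installation; /var/lib/docker is typical on Linux):
docker info --format '{{.DockerRootDir}}'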

docker-compose vs docker run to load local docker images across machines

I copied images created on one machine onto a different machine. (The images were saved using the docker save -o [images.tar] command.)
After copying, the images were loaded with the docker load command on the second machine.
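Concretely, the transfer looked roughly like this (the image name is illustrative):
# on the first machine
docker save -o images.tar datastore-mongodb:latest
# on the second machine, after copying images.tar across
docker load -i images.tar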
When I try to run these images using a docker-compose file (i.e. a yml file), executing docker-compose up -d --no-recreate gives me the error below:
Pulling mongodb (datastore-mongodb:latest)...
ERROR: pull access denied for datastore-mongodb, repository does not exist or may require 'docker login'
But the image is already present in the local Docker image store, as verified with the docker images command.
However, I am able to run the same image with the docker run command.
But I have a multi-container application, so I can't replace the yml file with individual docker run commands.
Why is it not working with docker-compose?
Below is the yml file:
version: "2.2"
services:
mongodb:
image: datastore-mongodb
container_name: datastore-mongodb
network_mode: "bridge"
volumes:
- /opt/mongodb:/data/db
ports:
- 192.168.80.240:27017:27017

Docker Compose does not allow using local images

The following command fails, trying to pull image from the Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY    TAG        IMAGE ID        CREATED        SIZE
web-server    staging    b94573990687    7 hours ago    365MB
Why doesn't Docker search among the locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. I tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image to build: and reference the Dockerfile in the relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
In your docker-compose.yml, you can specify build: . instead of image: <username>/<repo> for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you may be able to use relative paths for multiple services, resolved relative to the docker-compose file:
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March 09, 2020 EDIT:
(docker version 18.09.9-ce, build 039a7df;
docker-compose version 1.24.0, build 0aa59064)
I found that to just create a docker container, you can run docker-compose up -d after tagging the image with a fake local registry server tag (localhost:5000/{image}).
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run the local registry server, but you do need to change the image URL in the docker-compose yaml file to the fake local registry server URL:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name} # change from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and then just run up -d:
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) in your local machine.
Adding to Tom Saleeba's response:
I still got errors after tagging the image with a "/"
(for example: victor-dombrovsky/docker-image:latest)
It kept looking for the image on the remote docker.io server.
registry_address/docker-image
It seems that when the part before the "/" looks like a registry address, Docker pulls from that registry; without such a prefix, docker-compose by default looks for the image on the remote docker.io.
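For example, only a first component that looks like a host (it contains a dot or a colon, or is localhost) is treated as a registry address; anything else is treated as a namespace on docker.io:
# resolves to the Docker Hub namespace "victor-dombrovsky"
docker pull victor-dombrovsky/docker-image:latest
# resolves to a private registry, because "localhost:5000" looks like a host
docker pull localhost:5000/docker-image:latest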
I guess it's a known bug with docker-compose.
I finally got it working by running a local registry, pushing the image to it with the registry tag, and pulling the image back from the local registry.
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
and then change the image URL in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name} ##### change here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker image names: "one part" names are interpreted as official images coming from Docker Hub.
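For example, a one-part name is expanded to the official library namespace on Docker Hub, so these two commands refer to the same image:
docker pull ubuntu:20.04
docker pull docker.io/library/ubuntu:20.04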
For me, putting build: . did the trick. My working docker-compose file looks like this:
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image, nor is the image available on Docker Hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However, if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository on Docker Hub, or you can host your own registry somewhere, and use docker push to push the image there; then make sure your docker-compose.yml references the correct repository.
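A sketch of that flow, with <username> as a placeholder:
# tag the local image for your Docker Hub repository and push it
docker tag web-server:staging <username>/web-server:staging
docker push <username>/web-server:staging
Afterwards, reference image: <username>/web-server:staging in docker-compose.yml so the remote host can pull it.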
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
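In context that looks like this (service name borrowed from the question above):
services:
  web-server:
    image: web-server:staging
    pull_policy: if_not_present # use the local copy when it already exists
Note that pull_policy comes from the Compose specification; old docker-compose 1.x releases may not support it.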
My issue when getting this was that I had built with docker without sudo and ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, and sudo docker compose up only had access to the latter.
