I am running Locust (the official image from Docker Hub) locally using a docker-compose file like the one below:
version: '3'
services:
  locust:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H https://my-host-url.com
I have done the stress testing locally with docker-compose up. The next step is to push this setup to another registry. I am following the steps given in the Docker Hub documentation. However, I need some help with copying the necessary locustfile.py to my other registry as well (let's say Artifactory).
To upload an image to your custom registry it has to be properly tagged (named), and it is not necessary to use docker build for that. You can do it with docker tag:
# pull the image
docker pull locustio/locust
# rename it for your registry
docker tag locustio/locust:latest my-registry.com:5000/locust:latest
# push it to your registry using its new name
docker push my-registry.com:5000/locust:latest
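If you also need the locustfile.py itself to live in the registry rather than being bind-mounted from ./ at run time, one option is to build a thin custom image on top of locustio/locust. A rough sketch, assuming the Dockerfile sits next to locustfile.py and reusing the same placeholder registry name as above:
# Dockerfile
FROM locustio/locust
# bake the locustfile into the image at the path the compose command already points to
COPY locustfile.py /mnt/locust/locustfile.py
Then build and push it under the registry name:
docker build -t my-registry.com:5000/locust:latest .
docker push my-registry.com:5000/locust:latest
The compose file can then drop the volumes: entry, since the locustfile travels inside the image.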
I am new to docker-compose. I have built a simple web application using Flask and Redis, and it works fine on my localhost. My question is how to push this web app, including the Python and Redis images, to Docker Hub and then pull it from a different machine.
I usually do docker-compose build followed by docker push.
version: '3'
services:
  web:
    build: .
    image: "alhaffar/flask_redis_app:2.0"
    ports:
      - "8088:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
The Dockerfile:
FROM python:3.7
# CHANGE WORKING DIR AND COPY FILES
WORKDIR /code
COPY . /code
# INSTALL REQUIRED PACKAGES
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# RUN THE APP
CMD ["python", "./main.py"]
When I try to pull the image onto a different machine and issue docker run, it runs only the Python image without the Redis image.
How can I run all the images?
Docker Hub and other Docker registries work with images. Docker-compose is just an abstraction that helps set up a bunch of images that can work together, using one configuration file (docker-compose.yml). There is no such thing as a docker-compose registry. If you have your docker-compose file on the other machine, you just run docker-compose up and the images will be pulled, assuming they are published to some registry (public or private). The image with your app has to be published by you, and Redis will be taken from the Docker Hub registry if you are using the official Redis image.
Docker-compose is helpful when you are doing some local development and want to set up your working environment quickly. If you want to set up this environment on another machine, you have to share the docker-compose file with it and have Docker and docker-compose installed on that machine.
If your docker-compose is configured to build some image on start, you can still push this image using the docker-compose push command.
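For the compose file above, that could look roughly like the following sketch (it assumes you are logged in to a Docker Hub account that is allowed to push alhaffar/flask_redis_app):
# build the services that have a build: section (here: web)
docker-compose build
# push the web service's image (alhaffar/flask_redis_app:2.0) to Docker Hub
docker-compose push web
On the other machine you then copy the docker-compose.yml over and run docker-compose pull followed by docker-compose up -d.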
With your Docker Compose script you do two things:
Build your Flask app —> Image 1
Pull and run Redis —> Image 2
If you push Image 1 to Docker Hub and pull it on another machine, you are missing the second image.
What you want to do is run the Docker Compose script on the second machine without the build line, as in the sketch below.
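A sketch of what the compose file on the second machine could look like, using the images from the question and simply dropping build::
version: '3'
services:
  web:
    image: "alhaffar/flask_redis_app:2.0"
    ports:
      - "8088:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
With this file, docker-compose up pulls both images from their registries instead of building anything locally, provided the app image has been pushed first.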
I have copied the images created on one machine to a different machine (the images are saved using the docker save -o [images.tar] command).
After copying, the images are loaded with the docker load command on the second machine.
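For reference, that save/load round trip looks roughly like this (a sketch, assuming the archive is named images.tar and using the image name from the compose file below):
# on the first machine
docker save -o images.tar datastore-mongodb:latest
# on the second machine, after copying images.tar across
docker load -i images.tar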
When I try to run these images using the docker-compose file (i.e. the yml file) and execute docker-compose up -d --no-recreate, I get the error below:
Pulling mongodb (datastore-mongodb:latest)...
ERROR: pull access denied for datastore-mongodb, repository does not exist or may require 'docker login'
But the image is already present in the local Docker repo, as verified with the docker images command.
However, using the docker run command I am able to run the same image.
But I have a multi-container application, so I cannot just replace the yml file with individual docker run commands.
Why is it not working with docker-compose?
Below is the yml file:
version: "2.2"
services:
mongodb:
image: datastore-mongodb
container_name: datastore-mongodb
network_mode: "bridge"
volumes:
- /opt/mongodb:/data/db
ports:
- 192.168.80.240:27017:27017
The following command fails, trying to pull the image from Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY      TAG        IMAGE ID       CREATED       SIZE
web-server      staging    b94573990687   7 hours ago   365MB
Why doesn't Docker search among the locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. I tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image: to build: and reference the Dockerfile in the relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
In your docker-compose.yml, you can specify build: . instead of build: <username>/<repo> for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you may be able to use paths relative to the docker-compose file for multiple services.
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March-09-2020 EDIT:
(Docker version 18.09.9-ce, build 039a7df;
docker-compose version 1.24.0, build 0aa59064)
I found that to just create a Docker container, you can simply run docker-compose up -d after tagging the image with a fake local registry server tag (localhost:5000/{image}).
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run the local registry server, but you do need to change the image URL in the docker-compose yaml file to the fake local registry server URL:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name}   # change from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and then just run up -d:
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) in your local machine.
Adding to Tom Saleeba's response:
I still got errors after tagging the image with a "/" (for example: victor-dombrovsky/docker-image:latest). It kept looking for the image on the remote docker.io server.
registry_address/docker-image
It seems the part before the "/" is the registry address and the part after it is the image name. Without a "/" provided, docker-compose by default looks for the image on the remote docker.io.
I guess it's a known bug with docker-compose.
I finally got it working by running the local registry, pushing the image to the local registry with the registry tag, and pulling the image from the local registry.
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
and then change the image URL in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name}   ##### change here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker tags: a reference without a registry host defaults to Docker Hub, and a "one part" name is interpreted there as an official library image.
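A sketch of how that rename can be applied with docker tag, using the names from this answer:
# give the locally built image a two-part name
docker tag chat-server:staging victor-dombrovsky/chat-server:staging
Then point image: in docker-compose.yml at victor-dombrovsky/chat-server:staging.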
For me putting "build: ." did the trick. My working docker compose file looks like this,
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image stored there, nor is the image available on Docker hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However, if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository at Docker Hub, or you can host your own repository somewhere, and use docker push to push the image there, then make sure your docker-compose.yml references the correct repository.
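For example, a sketch with a hypothetical Docker Hub account named yourhubuser (replace it with your own):
# rename the local image so it belongs to your Docker Hub account (yourhubuser is a placeholder)
docker tag web-server:staging yourhubuser/web-server:staging
docker push yourhubuser/web-server:staging
and then change image: web-server:staging to image: yourhubuser/web-server:staging in docker-compose.yml.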
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
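In context, a sketch for the web-server service from the question; note that pull_policy is only understood by Compose versions that implement the Compose Specification:
services:
  web-server:
    image: web-server:staging
    pull_policy: if_not_present
    ports:
      - "80:80"
With if_not_present (an alias for missing), Compose only pulls the image when it does not already exist locally.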
My issue when getting this was that I had built using docker without sudo, and ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, where sudo docker compose up gave me access only to the latter.
I have a docker-compose.yml file, which defines a service and its image.
service:
  image: my_image
Now, when I run docker-compose up, I get the following message:
$ docker-compose up
Pulling service (my_image:latest)...
Pulling repository docker.io/library/my_image
ERROR: Error: image library/my_image:latest not found
It is correct that my_image in this case is not on Docker Hub. But I've created it before with docker build -t my_image . (in a different file) and it is listed in docker images.
Is there anything I'm missing to tell docker-compose not to look for the image in the docker.io registry/hub?
[edit] The docker client and server version is 1.9.1, the docker-compose version is 1.5.2.
I'm running docker-compose (as well as docker) through the HTTP API on a remote machine; I don't know if this makes any difference.
If you have the image locally, or anywhere except Docker Hub, you need to use build with a path or URL to the Dockerfile. So basically, when we work off Docker Hub, we change image to a path!
ubuntu:
  container_name: ubuntu
  build: /compose/build/ubuntu
  links:
    - db:mysql
  ports:
    - "80:80"
In this example I am using my own Ubuntu Dockerfile that is placed in the build path. The file should be named Dockerfile as usual, and you just specify the path to the folder where it is.
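So the folder referenced by build: would look something like this:
/compose/build/ubuntu
└── Dockerfile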