I am new to docker-compose. I have built a simple web application using Flask and Redis, and it works fine on my localhost. My question is: how do I push this web app, including the Python and Redis images, to Docker Hub, and then pull that image from a different machine?
I usually do docker-compose build, then docker push.
version: '3'
services:
  web:
    build: .
    image: "alhaffar/flask_redis_app:2.0"
    ports:
      - "8088:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
The Dockerfile:
FROM python:3.7
# CHANGE WORKING DIR AND COPY FILES
WORKDIR /code
COPY . /code
# INSTALL REQUIRED PACKAGES
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# RUN THE APP
CMD ["python", "./main.py"]
When I pull the image on a different machine and issue docker run, it runs only the Python image, without the Redis image.
How can I run all the images?
Docker Hub and other Docker registries work with images. Docker Compose is just an abstraction that helps to set up a bunch of images that can work together, using one configuration file - the docker-compose file. There is no such thing as a docker-compose registry. If you have your docker-compose file on the other machine, you just run docker-compose up and the images will be pulled - assuming they are published to some registry (public/private). The image with your app has to be published by you; Redis will be taken from the Docker Hub registry if you are using the official Redis image.
Docker Compose is helpful when you are doing some local development and want to set up your working environment quickly. If you want to set up this environment on another machine, you have to share the docker-compose file with it and have Docker and Docker Compose installed on that other machine.
If your docker-compose file is configured to build some image on start, you can still push this image using the docker-compose push command.
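A minimal sketch of that workflow, assuming the image: name in your compose file points to a repository you own on Docker Hub:

docker-compose build
docker-compose push web      # pushes alhaffar/flask_redis_app:2.0
# on the other machine, with the same docker-compose.yml present:
docker-compose pull
docker-compose up -d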
With your Docker Compose script you do two things:
Build your Flask app —> Image 1
Pull and run Redis —> Image 2
If you push Image 1 to Docker Hub and pull it on another machine, you are missing the second image.
What you want to do is run the Docker Compose script on the second machine without the build line, as shown below.
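For example, the compose file on the second machine could look roughly like this (the question's file with the build line removed):

version: '3'
services:
  web:
    image: "alhaffar/flask_redis_app:2.0"
    ports:
      - "8088:5000"
    depends_on:
      - redis
  redis:
    image: "redis:alpine"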
Related
It's my second day playing with Docker, and I'm trying to make a simple Django web server with it. So basically I created Dockerfile and docker-compose.yml files in my directory, and I have my docker-compose.yml set to:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8080
    volumes:
      - .:/app
    ports:
      - "8080:8080"
    env_file:
      - ./env.dev
What I'm trying to achieve is to push these files to a Docker Hub repository, as I understand it, and then to pull them from the repo. So basically I open the terminal and launch these commands:
docker images
docker tag ID docker_username/repo_name:firsttry
docker push docker_username/repo_name
After pushing I can see that I have a repository on the hub with some image history, so now I'm trying to pull the data to my local PC.
My commands:
cd some_directory
docker pull dziugasiot/wintekaiot:firsttry
And the response I get is:
firsttry: Pulling from dziugasiot/wintekaiot
Digest: sha256:477a0bb335f841875d43f0f5717c0416a500989f280112c36b613aa97d82157e
Status: Image is up to date for dziugasiot/wintekaiot:firsttry
docker.io/dziugasiot/wintekaiot:firsttry
The directory is empty. What am I doing wrong?
You are not doing anything wrong, but you are thinking about it wrong.
docker push sends the image to Docker Hub.
docker pull downloads your image to a Docker folder on your system, not to the directory you are in. You can see your pulled images with the command docker images (it shows you a list).
Once the pulled image is downloaded, you can use it from any directory by running the command docker run -it dziugasiot/wintekaiot:firsttry bash (it creates a container).
All "files" (layers) needed to create a container for that image are already downloaded, see the message "Status: Image is up to date for dziugasiot/wintekaiot:firsttry"
Docker uses copy-on-write mechanism, so in short: once you download an image, you don't have to download it again (for the same version).
For beginners with Docker I don't recommend to do anything in Docker root folder, but for a complete answer: you can find your image files (layers) in Docker root folder. The place and the format of it depends on your actual configuration, but you can look it up by issuing docker info command and looking for Docker Root.
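For example, something along these lines should show it (a quick sketch):

docker info | grep -i 'docker root'
# or, using the format flag:
docker info --format '{{ .DockerRootDir }}'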
I'm quite new to GCP and have been using mostly AWS. I am currently trying to play around with GCP and want to deploy a container using docker-compose.
I set up a very basic docker-compose.yml file as follows:
# docker-compose.yml
version: '3.3'
services:
  git:
    image: alpine/git
    volumes:
      - ${PWD}:/git
    command: "clone https://github.com/PHP-DI/demo.git"
  composer:
    image: composer
    volumes:
      - ${PWD}/demo:/app
    command: "composer install"
    depends_on:
      - git
  web:
    image: php:7.4-apache
    ports:
      - "8080:${PORT:-80}"
      - "8000:${PORT:-8000}"
    volumes:
      - ${PWD}/demo:/var/www/html
    command: php -S 0.0.0.0:8000 -t /var/www/html
    depends_on:
      - composer
So the containers get the code from git, install the dependencies using composer, and the app is finally available on port 8000.
On my machine, running docker-compose up does everything. However, how can I push this docker-compose setup to Google Cloud?
I have tried building a container using the docker/compose image and a Dockerfile as follows:
FROM docker/compose
WORKDIR /opt
COPY docker-compose.yml .
WORKDIR /app
CMD docker-compose -f /opt/docker-compose.yml up web
Then I push the container to the registry, and from there I tried deploying to:
Cloud Run - did not work, as I could not find a way to specify a mounted volume for /var/run/docker.sock
Kubernetes - I mounted the docker.sock, but I keep getting an error in the logs that /app from the git service is read-only
Compute Engine - same error as above
I don't want to make a container by copying all the local files into it and then upload it, as the dependencies could be really big, thus making a heavy container to push.
I have a working docker-compose and just want to use it on GCP. What's the easiest way?
This can be done by creating a cloudbuild.yaml file in your project root directory.
Add the following step to cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
On Google Cloud Platform > Cloud Build: configure the file type of your build configuration as "Cloud Build configuration file (yaml or json)" and enter the file location: cloudbuild.yaml
If the repository event that invokes the trigger is set to "push to a branch", then Cloud Build will run docker-compose.yml to build and start your containers.
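If you prefer to run the same build manually instead of through a trigger, something along these lines should work (assuming the gcloud CLI is installed and authenticated):

gcloud builds submit --config cloudbuild.yaml .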
Take a look at Kompose. It can help you convert the docker-compose instructions into Kubernetes-specific deployments and services. You can then apply the Kubernetes files against your GKE clusters. Note that you will have to build the containers and store them in Container Registry first, and update the image tags in the service definitions accordingly.
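A rough sketch of that flow, assuming kompose and kubectl are installed and configured for your cluster:

kompose convert -f docker-compose.yml   # generates Kubernetes deployment/service manifests
kubectl apply -f .                      # applies the generated manifests to the current cluster context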
If you are trying to set up the same environment as an on-premise VM, you can install these tools on a GCE instance and run it there. Ref: https://dev.to/calvinqc/the-easiest-docker-docker-compose-setup-on-compute-engine-1op1
I have a golang script which interacts with Postgres. I created a service in docker-compose.yml for both golang and Postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just "docker run ". What is the correct way of doing it?
The image created by "docker-compose up --build" launches with no error with "docker run " but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: #some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: #some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up) or builds and then runs (with up --build) Docker containers based on the images defined in the yml file.
More info are in the official docs
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1 - based on the go configurations
2 - based on the db configurations
To see all your containers (running or stopped), use the command:
docker ps -a
For more info see the Docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around: take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile, please refer to:
- postgres official Dockerfile
- golang official Dockerfile
Combine them to get both; a rough sketch follows below.
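This is only a sketch, assuming the Debian-based golang image and the distribution's postgresql packages (package names and the project layout are assumptions, and a single-container setup is still not recommended):

FROM golang:latest
# the golang image is Debian-based, so Postgres can be installed from apt
RUN apt-get update && \
    apt-get install -y postgresql postgresql-contrib && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /go/src/workflow/project
COPY . .
# start the Postgres service, then run the Go app in the foreground
CMD service postgresql start && go run ./src/main.go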
Edit: (digital ocean deployment)
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
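One way to do that copy, sketched under the assumption that you have SSH access to the droplet and that the locally built image is named project_go (check docker images for the real names):

docker save -o project-images.tar project_go postgres   # image names are assumptions
scp project-images.tar docker-compose.yml root@your-droplet:/opt/project/
ssh root@your-droplet 'cd /opt/project && docker load -i project-images.tar && docker-compose up -d'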
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large scale/traffic applications, more advanced solutions are used such as:
- docker swarm
- kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.
I have a docker-compose.yml file which defines a service and its image.
service:
  image: my_image
Now, when I run docker-compose up I get the following message:
$ docker-compose up
Pulling service (my_image:latest)...
Pulling repository docker.io/library/my_image
ERROR: Error: image library/my_image:latest not found
It is correct that my_image in this case is not on Docker Hub. But I've created it before with docker build -t my_image . (from a different file), and it is listed in docker images.
Is there anything I'm missing to tell docker-compose not to look for the image in the docker.io registry/hub?
[edit] Docker client and server version is 1.9.1, docker-compose version is 1.5.2.
I'm running docker-compose (as well as docker) through the HTTP API on a remote machine; I don't know if this makes any difference.
If you have an image locally, or anywhere except Docker Hub, you need to use build with a path or URL to the Dockerfile. So basically, when we work outside Docker Hub, we change image to a path.
ubuntu:
  container_name: ubuntu
  build: /compose/build/ubuntu
  links:
    - db:mysql
  ports:
    - 80:80
In this example I am using my own Ubuntu Dockerfile that is placed in the build path. The file should be named Dockerfile as usual, and you just specify the path to the folder where it is.
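With that configuration in place, something like this should build and start it (a sketch):

docker-compose build ubuntu   # builds the image from /compose/build/ubuntu/Dockerfile
docker-compose up -d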
I'm a newbie to docker.
I want to create an image with my web application. I need some application server, e.g. wlp, then I need some database, e.g. postgres.
There is a Docker image for wlp and there is a Docker image for postgres.
So I created following simple Dockerfile.
FROM websphere-liberty:javaee7
FROM postgres:latest
Now, maybe it's lame, but when I build this image
docker build -t wlp-db .
run container
docker run -it --name wlp-db-test wlp-db
and check it
docker exec -it wlp-db-test /bin/bash
only postgres is running and wlp is not even there. Directory /opt is empty.
What am I missing?
You need to use a docker-compose file. It lets you bind together two different containers running two different images: one holding your server and the other the database service.
Here is an example of a Node.js server container working with a MongoDB container.
First of all, I write the Dockerfile to configure the main container:
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
Then I create the docker-compose file to configure both containers and link them:
version: '3' # docker-compose version
services: # services are your different containers
  node_server: # first container, containing the Node.js server
    build: . # saying that all of my source files are at the root path
    volumes: # volumes are for hot reload, for example
      - "./app:/src/app"
    ports: # binding the host port with the container port
      - "3030:3000"
    links: # linking the first service with the named mongo service (see below)
      - "mongo:mongo"
  mongo: # declaration of the mongodb container
    image: mongo # using the mongo image
    ports: # port binding for mongodb is required
      - "27017:27017"
I hope this helped.
Each service should have its own image/dockerfile. You start multiple containers and connect them over a network to be able to communicate.
If you wish to compose multiple containers in one file, check out docker-compose, which is made for just that!
You can't use FROM multiple times in one file and expect both processes to run.
That creates each layer from the images, but there is only one entry point for the process, which is Postgres, because it comes second.
This pattern is typically only done when you have some "setup" docker image, then a "runtime" image on top of it.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
Also, what you're trying to do does not really adhere to "microservices". Run the database separately from your application. Docker Compose can assist you with that, and almost all the examples on Docker's website use Postgres with some web app; see the sketch below.
Plus, you're starting an empty database and server. You need to copy at least a WAR, for example, to run your server code.
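A minimal compose sketch for that split, assuming a locally built WAR (the WAR path, port, and password here are assumptions, not taken from the question):

version: '3'
services:
  app:
    image: websphere-liberty:javaee7
    ports:
      - "9080:9080"
    volumes:
      - ./target/myapp.war:/config/dropins/myapp.war   # drop the WAR into Liberty's dropins directory
    depends_on:
      - db
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example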