Assuming I have created a Django project using docker-compose as per the example given in https://docs.docker.com/compose/django/
Below is a simple docker-compose.yml file for understanding's sake
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    image: python:3
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Here I am using two images, python:3 (the service called "web") and postgres (the service called "db"), which are automatically pulled from hub.docker.com and used to bring the services up. I also want the web container to depend on the db container. That is just a recap of what is in the docker-compose.yml above.
Once everything is set up I run docker-compose up, and I can see the two containers running and the Django project serving on my local machine.
Now that I have worked on my Django application, I want to deploy it to the production server.
So how do I copy the images to the production server, so that I am working with exactly the same Docker images there as well?
Because if I write a docker-compose.yml file on the production server, there is a chance that the db image and the web image will differ.
For example:
When I pull the postgres image on my development computer, say I get PostgreSQL version 9.5.
But if I pull the postgres image again on the production server, I may get PostgreSQL version 10.1 instead.
So I would not be working in the same environment: maybe the same OS, but not the same versions of the packages.
So how do I guard against this when I move things to production?
Partially solved:
As per the answer of #Yogesh_D,
if I am using prebuilt images from Docker Hub, I can easily get the same environment on the production server by pinning the version number, e.g. postgres:9.5.1 or python:3.
Partially unsolved:
But what if I created an image of my own, using my own Dockerfile, and tagged it while building? Now I want to use that same image in production; how do I do that, given that it is not on Docker Hub and I may not be interested in putting it on Docker Hub?
So is manually copying my image to the production server a good idea, or should I just copy the Dockerfile and build the image again on the production server?
This should be fairly simple, as the Docker Compose image directive lets you specify the exact tag for an image.
So in your case it would be something like this:
version: '3'
services:
  db:
    image: postgres:9.5.14
  web:
    image: python:3.6.6
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
The above file ensures that you get version 9.5.14 of postgres and version 3.6.6 of python.
And no matter where you deploy, this is exactly what you get.
Look at https://hub.docker.com/_/python for all the tags/versions available in the python image, and at https://hub.docker.com/_/postgres for all the versions/tags available for the postgres image.
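If you want to verify on either machine that the pinned versions are really what is running, you can check from the running containers themselves; a quick sanity check (assuming the db and web service names from the file above) might look like:

docker-compose images                      # shows the exact image and tag each service uses
docker-compose exec db postgres --version
docker-compose exec web python3 --version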
Edit:
To solve the problem of custom images you have a few options:
a. Depending on where you deploy (your own datacenter vs. a public cloud provider), you can run your own Docker image registry, and there are a lot of options here.
b. If you are running in one of the popular cloud providers like AWS, GCP, or Azure, most of them provide their own registries.
c. Or you can use Docker Hub to set up private repositories.
And each of them supports tags, so you can still use tags to ensure your own custom images are deployed exactly like the public ones.
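For custom images the workflow is then the same as for the public ones, just against your own registry; a minimal sketch, where registry.example.com and myteam/myapp are placeholder names:

# build, tag and push from the development machine
docker build -t registry.example.com/myteam/myapp:1.0 .
docker push registry.example.com/myteam/myapp:1.0
# pull the identical image on the production server
docker pull registry.example.com/myteam/myapp:1.0

And if you really don't want any registry at all, you can copy the image manually with docker save / docker load, e.g. docker save -o myapp_1.0.tar registry.example.com/myteam/myapp:1.0 on one side and docker load -i myapp_1.0.tar on the other.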
Related
I am using the official Postgres 12 image, which I'm pulling inside the docker-compose.yml. Everything is working fine.
services:
  db:
    container_name: db
    image: postgres:12
    volumes:
      - ...
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=...
Now, when I run docker-compose up, I see that image in the docker images output.
My question is: is there a way in which I can rename the image inside docker-compose.yml? I know there is a command for that, but I would like everything to stay inside the file if possible.
Thanks!
In a Compose file, there's no direct way to run docker tag or any other command that modifies some existing resource.
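For reference, the command the question alludes to is docker tag, run by hand outside of Compose; a minimal sketch, with my-project/postgres as a made-up target name:

docker pull postgres:12
docker tag postgres:12 my-project/postgres:12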
If you're trying to optionally point Compose at a local mirror of Docker Hub, you can take advantage of knowing the default repository is docker.io and use an optional environment variable:
image: ${REGISTRY:-docker.io}/postgres:latest
REGISTRY=docker-mirror.example.com docker-compose up
Another possible approach is to build a trivial image that doesn't actually change the base postgres image at all:
build:
  context: .
  dockerfile: Dockerfile.postgres
image: local-images.example.com/my-project/postgres
# Dockerfile.postgres
FROM postgres:latest
# End of file
There's not really any benefit to doing this beyond the cosmetic appearance in the docker images output. Having it be clear that you're using a standard Docker Hub image could be slightly preferable; its behavior is better understood than something you built locally, and if you have multiple projects running at once they can more obviously share the same single image.
I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances, and my end goal is to deploy the docker-compose.yml that lives in my GitHub repo:
version: "3"
services:
db:
image: postgres
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- ./tmp/db:/var/lib/postgresql/data
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub, but what is the point of that?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with its different services you still need to run the containers with docker-compose, and you wouldn't have the compose file unless you pull it from the GitHub repo again or recreate it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run the docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
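As a sketch of that fully manual path (the network name is a placeholder, the environment values come from the Compose file in the question, and me/web:latest stands in for whatever your built application image is called):

docker network create myapp
docker run -d --name db --network myapp \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password \
  -v "$PWD/tmp/db:/var/lib/postgresql/data" \
  postgres
docker run -d --name web --network myapp \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_HOST=db \
  -p 3000:3000 \
  me/web:latest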
For this setup it's important to delete the volumes: that overwrite the image's content with a copy of the application code. You also shouldn't need to override command:. For the deployment you'd need to replace build: with image:.
version: "3.8"
services:
db: *from-the-question
web:
image: registry.example.com/me/web:${WEB_TAG:-latest}
ports:
- "3000:3000"
depends_on:
- db
environment: *web-environment-from-the-question
# no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
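A minimal sketch of that override file, assuming it lives next to the main docker-compose.yml on the development machine only:

# docker-compose.override.yml (not copied to the deployment system)
version: "3.8"
services:
  web:
    build: .

Compose reads docker-compose.yml and docker-compose.override.yml together by default, so docker-compose build still works locally while the deployed file only ever references the image.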
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each one a date-stamped tag, e.g. registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
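The build side of that scheme is just a matter of tagging each image before it is pushed; a sketch, reusing the same placeholder registry name:

TAG=$(date +%Y%m%d)
docker build -t registry.example.com/me/web:"$TAG" -t registry.example.com/me/web:latest .
docker push registry.example.com/me/web:"$TAG"
docker push registry.example.com/me/web:latest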
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
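As a sketch of what that self-contained image could look like for the Rails app in the question (the Ruby version and file layout are assumptions, not taken from your project):

# Dockerfile
FROM ruby:3.1
WORKDIR /myapp
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
# the code and a sensible default command are baked into the image
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]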
Finally remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby on to the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.
I'm trying to write a docker-compose file that will build and push a versioned (1.0, 1.1...) and latest build of my image to my local v2 docker registry. However when I run docker-compose build I get the following error:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
I found a lot of people complaining about this error for many different reasons; in my case it has nothing to do with permissions or whether or not the Docker service is running. I narrowed it down to my image name having a URL in it (the URL of my local registry). I know that because if I name my image normally (like '/app:latest'), then the command runs fine. So how can I have a URL in the image name?
Here is what I'm trying to do (docker-compose.yaml):
version: "3.8"
x-marvin-backend: &default-marvin-backend
container_name: marvin_backend
build: ./marvin-api
image: "http://my_registry_url:5000/marvin/backend:latest"
ports:
- "3000:3000"
networks:
- backend
x-marvin-frontend: &default-marvin-frontend
container_name: marvin_frontend
image: http://my_registry_url:5000/marvin/frontend:latest
build:
context: ./marvin-front
args:
- REACT_APP_SERVICES_HOST=http://marvin_backend:3000/
ports:
- "80:80"
networks:
- backend
depends_on:
- backend
services:
backend: *default-marvin-backend
backend_versioned:
<< : *default-marvin-backend
image: http://my_registry_url:5000/marvin/backend:1.0
frontend: *default-marvin-frontend
frontend_versioned:
<< : *default-marvin-frontend
image: http://my_registry_url:5000/marvin/frontend:1.0
networks:
backend:
I'm new to Docker in general. My main goal here is to have a simple, preferably single command (e.g. docker-compose build) that will build and tag both my frontend and backend images, so that I can just execute docker-compose push to push those newly created images to my registry running on AWS. With that I also want to be able to overwrite the latest version of those images in the registry while also adding a versioned image for backup purposes, in case I want to revisit any of those versions in the future.
Then on the AWS EC2 machine I have another docker-compose.yaml file that just fetches the latest versions of both images and runs their containers.
So to summarize: I would develop the application on my local machine, then add the new version manually to the versioned services in the local docker-compose.yaml file, then run docker-compose build followed by docker-compose push; then SSH into my AWS machine and run docker-compose up to fetch the latest, newly updated images and run them.
This could later evolve into a CI/CD pipeline, but right now I'm taking baby steps and just trying to get my image name to have a URL in it.
Thank you.
Edit
I tried using a .env file with REGISTRY=http://my_registry_url:5000/marvin and then using image: "${REGISTRY}/frontend:latest" or image: "$${REGISTRY}/frontend:latest", but that also didn't work.
Just remove the http:// part from your image names.
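That is, an image reference is just registry-host[:port]/name:tag with no scheme in front of it; a corrected fragment of the file above would look like:

x-marvin-backend: &default-marvin-backend
  container_name: marvin_backend
  build: ./marvin-api
  image: my_registry_url:5000/marvin/backend:latest
  ...

backend_versioned:
  <<: *default-marvin-backend
  image: my_registry_url:5000/marvin/backend:1.0

With that change, docker-compose build followed by docker-compose push should tag and upload both the latest and the versioned images.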
This is my second day working with Docker. Can you help me with a solution for this typical case?
Currently, our application is a combination of a Java Netty server, Tomcat, a Python Flask service, and MariaDB.
Now we want to use Docker to make deployment easier.
My first idea is to create one Docker image for the environment (CentOS + Java 8 + Python 3), another image for MariaDB, and one image for the application.
So the docker-compose.yml should be like this
version: '2'
services:
  centos7:
    build:
      context: ./
      dockerfile: centos7_env
    image: centos7_env
    container_name: centos7_env
    tty: true
  mariadb:
    image: mariadb/server:10.3
    container_name: mariadb10.3
    ports:
      - "3306:3306"
    tty: true
  app:
    build:
      context: ./
      dockerfile: app_docker
    image: app:1.0
    container_name: app1.0
    depends_on:
      - centos7
      - mariadb
    ports:
      - "8081:8080"
    volumes:
      - /home/app:/home/app
    tty: true
The app Dockerfile (app_docker) will be like this:
FROM centos7_env
WORKDIR /home/app
COPY docker_entrypoint.sh ./docker_entrypoint.sh
ENTRYPOINT ["./docker_entrypoint.sh"]
In docker_entrypoint.sh there would be a couple of commands like:
#!/bin/bash
sh /home/app/server/Server.sh start
sh /home/app/web/Web.sh start
python /home/app/analyze/server.py
I have some questions:
1- Is this design good, or is there a better approach?
2- Should we separate the database into its own image like this, or could we install the database in the OS image and then commit it?
3- If I run docker-compose up, will Docker create 2 containers, one for the OS image and one for the app image which is based on the OS image? Is there any way to create a container only for the app (which already runs on CentOS)?
4- If the app Dockerfile is not based on the OS image but uses FROM scratch, will it still run as expected?
Sorry for the long question. Thank you all in advance!
One thing to understand is that a Docker container is not a VM - containers are much more lightweight, so you can run many of them on a single machine.
What I usually do is run each service in its own container. This allows me to package only stuff related to that particular service and update each container individually when needed.
With your example I would run the following containers:
MariaDB
Container running /home/app/server/Server.sh start
Container running /home/app/web/Web.sh start
Python container running python /home/app/analyze/server.py
You don't really need to run the centos7 container - it is just a base image which you used to build another image on top of. You would have to build it manually first so that you can build the other image from it - I guess this is what you are trying to achieve here, but it makes docker-compose.yml a bit confusing.
There's really no need to create a huge base image which contains everything. A better practice in my opinion is to use more specialized images. For example, in your case, for Python you could have an image which contains Python only, and for Java your preferred JDK.
My personal preference is Alpine-based images, and you can find many official images based on it: python:<version>-alpine, node:<version>-alpine, openjdk:<version>-alpine (though I'm not quite sure about all versions), postgres:<version>-alpine, etc.
Hope this helps. Let me know if you have other questions and I will try to address them here.
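A rough sketch of what that per-service layout could look like in docker-compose.yml (the directory names, Dockerfiles and dependencies are made up for illustration):

version: '2'
services:
  mariadb:
    image: mariadb/server:10.3
    ports:
      - "3306:3306"
  server:
    build: ./server        # JDK image + the Netty server, runs Server.sh
    depends_on:
      - mariadb
  web:
    build: ./web           # Tomcat image + the web app, runs Web.sh
    depends_on:
      - mariadb
  analyze:
    build: ./analyze       # python:3-alpine based image running server.py
    depends_on:
      - mariadb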
I have a Golang script which interacts with Postgres. I created a service in docker-compose.yml for both Golang and Postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub, so that it can be pulled and run with just "docker run". What is the correct way of doing this?
The image created by "docker-compose up --build" launches with no errors with "docker run", but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: #some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: #some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up) or builds and then runs (with up --build) Docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1 - one based on the go configuration
2 - one based on the db configuration
To see which containers are actually running, use the command:
docker ps -a
For more info see the docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around: take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile; please refer to:
- the official postgres Dockerfile
- the official golang Dockerfile
Combine them to get both.
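As a very rough sketch of the first option (a Debian-based postgres image with Go installed on top of it; the version is arbitrary and this is still not the recommended layout):

# Dockerfile - sketch only, one process per container remains the better practice
FROM postgres:15
RUN apt-get update \
 && apt-get install -y --no-install-recommends golang \
 && rm -rf /var/lib/apt/lists/*
# the Go sources would be copied in and built here, and a wrapper
# script would be needed to start both postgres and the Go binary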
Edit (DigitalOcean deployment):
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
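A sketch of what "copying everything" could look like in practice (the droplet address, target path and image names are placeholders):

# on the local machine
docker-compose build
docker save -o project-images.tar project_go postgres
scp project-images.tar docker-compose.yml root@your-droplet:/srv/app/
# on the droplet
docker load -i /srv/app/project-images.tar
cd /srv/app && docker-compose up -d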
In production, usually for large-scale/high-traffic applications, more advanced solutions are used, such as:
- Docker Swarm
- Kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.