I have two docker images in Nexus repo
1. sql-db
2. main-svc
The main-svc need sql-db in order to run completely.
Can I define this in gitlab-ci.yml?
Build:
  image: test.com/main-svc:1.5
  services:
    - test.com/sql-db:1.0
  script:
    - docker pull test.com/main-svc:1.5
You can do this if you use docker-in-docker, but you don't need to. Simply define two service containers, as such:
build:
  image: alpine
  services:
    - postgres:latest
    - redis:latest
  script:
    - echo "hello world"
I didn't edit your example because I'm a bit confused about why you're using the same image both as the job image and in the docker pull inside it. My guess is that you're trying to run integration tests: you're testing a container that needs to connect to a database, as two different services. If not, it would be helpful if you stated what problem you're trying to solve so we can help more.
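If the goal is to run your own sql-db image from your Nexus registry as a service, GitLab CI also accepts an expanded form with name: and alias: (the registry paths below follow the test.com examples from the question; the script entry is a hypothetical placeholder):

```yaml
build:
  image: test.com/main-svc:1.5
  services:
    - name: test.com/sql-db:1.0
      alias: sql-db   # the job can reach the database at hostname "sql-db"
  script:
    - ./run-tests.sh  # hypothetical test entrypoint
```

The alias: is what the main-svc container uses as the database hostname, so it plays the role a Compose service name would play locally.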
I am using the official Postgres 12 image, pulled inside docker-compose.yml. Everything is working fine.
services:
  db:
    container_name: db
    image: postgres:12
    volumes:
      - ...
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=...
Now, when I run docker-compose up, I get this image
My question is: is there a way to rename the image inside docker-compose.yml? I know there is a command for this, but I'd like everything to be inside the file if possible.
Thanks!
In a Compose file, there's no direct way to run docker tag or any other command that modifies some existing resource.
If you're trying to optionally point Compose at a local mirror of Docker Hub, you can take advantage of knowing the default repository is docker.io and use an optional environment variable:
image: ${REGISTRY:-docker.io}/postgres:latest
REGISTRY=docker-mirror.example.com docker-compose up
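For completeness, the manual retagging mentioned above happens outside the Compose file entirely (the second name here is a hypothetical example):

```shell
# pull the official image, then give it a second local name;
# both names point at the same image ID
docker pull postgres:12
docker tag postgres:12 my-company/postgres:12
```

This only adds an alias in your local image store; it doesn't change what Compose runs unless you also update the image: line.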
Another possible approach is to build a trivial image that doesn't actually extend the base postgres image at all:
build:
  context: .
  dockerfile: Dockerfile.postgres
image: local-images.example.com/my-project/postgres
# Dockerfile.postgres
FROM postgres:latest
# End of file
There's not really any benefit to doing this beyond the cosmetic appearance in the docker images output. Keeping it clear that you're using a standard Docker Hub image could even be slightly preferable: its behavior is better understood than something you built locally, and if you have multiple projects running at once they can more obviously share the same single image.
I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances and my end goal is to deploy the docker-compose.yml that my repo on GitHub has:
version: "3"
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub, but what is the point of that?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with its different services you still need to run the containers using docker-compose, and you wouldn't have the compose file unless you pulled it from the GitHub repo again or created it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
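A sketch of that fully manual approach, using the service names and settings from the question (the web image name is an assumption, matching the registry example used further down):

```shell
# create a shared network so the containers can resolve each other by name
docker network create myapp

docker run -d --net myapp --name db \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password \
  -v "$PWD/tmp/db:/var/lib/postgresql/data" \
  postgres

docker run -d --net myapp --name web -p 3000:3000 \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_HOST=db \
  registry.example.com/me/web:latest
```

The docker-compose.yml is essentially a declarative version of exactly these commands.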
For this setup it's important to delete the volumes: block that overwrites the image's content with a local copy of the application code. You also shouldn't need to override command:. For the deployment you'd replace build: with image:.
version: "3.8"
services:
  db: *from-the-question
  web:
    image: registry.example.com/me/web:${WEB_TAG:-latest}
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment: *web-environment-from-the-question
    # no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each a date-stamped tag: registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
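A minimal Dockerfile following those rules, sketched for the Rails app in the question (the Ruby version is an assumption):

```dockerfile
FROM ruby:3.1
WORKDIR /myapp
# install gems first so the dependency layer caches separately from the code
COPY Gemfile Gemfile.lock ./
RUN bundle install
# bake the application code into the image; no volumes: needed at runtime
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "s", "-p", "3000", "-b", "0.0.0.0"]
```

With a CMD like this in the image, the compose file no longer needs a command: override at all.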
Finally remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby on to the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.
This is my second day working with Docker; can you help me with a solution for this typical case?
Currently, our application is a combination of a Java Netty server, Tomcat, a Python Flask app, and MariaDB.
Now we want to use Docker to make deployment easier.
My first idea is to create one Docker image for the environment (CentOS + Java 8 + Python 3), another image for MariaDB, and one image for the application.
So the docker-compose.yml would look like this:
version: '2'
services:
  centos7:
    build:
      context: ./
      dockerfile: centos7_env
    image: centos7_env
    container_name: centos7_env
    tty: true
  mariadb:
    image: mariadb/server:10.3
    container_name: mariadb10.3
    ports:
      - "3306:3306"
    tty: true
  app:
    build:
      context: ./
      dockerfile: app_docker
    image: app:1.0
    container_name: app1.0
    depends_on:
      - centos7
      - mariadb
    ports:
      - "8081:8080"
    volumes:
      - /home/app:/home/app
    tty: true
The app Dockerfile (app_docker) will be like this:
FROM centos7_env
WORKDIR /home/app
COPY docker_entrypoint.sh ./docker_entrypoint.sh
RUN chmod +x ./docker_entrypoint.sh
ENTRYPOINT ["./docker_entrypoint.sh"]
In docker_entrypoint.sh there would be a couple of commands like:
#!/bin/bash
sh /home/app/server/Server.sh start
sh /home/app/web/Web.sh start
python /home/app/analyze/server.py
I have some questions:
1- Is this design good? Any better ideas?
2- Should we have a separate image for the database like this, or could we install the database on the OS image and then commit it?
3- If I run docker-compose up, will Docker create two containers, one for the OS image and one for the app image based on it? Is there any way to create a container only for the app (which is already based on CentOS)?
4- If the app Dockerfile is not based on the OS image but uses FROM scratch, will it run as expected?
Sorry for the long question. Thank you all in advance!
One thing to understand is that a Docker container is not a VM; containers are much more lightweight, so you can run many of them on a single machine.
What I usually do is run each service in its own container. This allows me to package only stuff related to that particular service and update each container individually when needed.
With your example I would run the following containers:
MariaDB
Container running /home/app/server/Server.sh start
Container running /home/app/web/Web.sh start
Python container running python /home/app/analyze/server.py
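That layout could be sketched as a docker-compose.yml like this (the per-service Dockerfile names are hypothetical, and each would be based on a small specialized image rather than a shared CentOS base):

```yaml
version: '2'
services:
  mariadb:
    image: mariadb/server:10.3
    ports:
      - "3306:3306"
  server:
    build:
      context: ./
      dockerfile: Dockerfile.server   # e.g. FROM openjdk:8, runs Server.sh
    depends_on:
      - mariadb
  web:
    build:
      context: ./
      dockerfile: Dockerfile.web      # e.g. Tomcat, runs Web.sh
  analyze:
    build:
      context: ./
      dockerfile: Dockerfile.analyze  # e.g. FROM python:3-alpine, runs server.py
    depends_on:
      - mariadb
```

Each service then has its own small image, and you can rebuild or restart one without touching the others.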
You don't really need to run the centos7 container: it's just a base image which you used to build another image on top of. You would have to build it manually first so that you can build the other image from it; I guess this is what you are trying to achieve here, but it makes docker-compose.yml a bit confusing.
There's really no need to create a huge base container which contains everything. A better practice in my opinion is to use more specialized containers. For example, in your case you could have a container that contains only Python, and for Java, your preferred JDK.
My personal preference is Alpine-based images, and you can find many official images based on Alpine: python:<version>-alpine, node:<version>-alpine, openjdk:<version>-alpine (though I'm not quite sure about all versions), postgres:<version>-alpine, etc.
Hope this helps. Let me know if you have other questions and I will try to address them here.
I have a Go program which interacts with Postgres. I created a service in docker-compose.yml for both Go and Postgres. When I run it locally with docker-compose up it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just docker run. What is the correct way of doing this?
The image created by docker-compose up --build launches with no error under docker run but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: # some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: # some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up) or builds and then runs (with up --build) containers based on the images defined in the YAML file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
so, in your example, docker-compose will run two containers:
1 - based on the go configuration
2 - based on the db configuration
To see which containers exist (including stopped ones), use the command:
docker ps -a
for more info see docker docs
It is always recommended to run each service in a separate container, but if you insist on making an image which has both golang and postgres, you can take a postgres base image and install Go on it, or the other way around: take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile, please refer to:
- postgres official Dockerfile
- golang official Dockerfile
combine them to get both.
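A rough sketch of the first option, a postgres base image with the Go toolchain added (the versions and paths here are assumptions, not taken from the question):

```dockerfile
FROM postgres:13
# the official postgres images are Debian-based, so Go is available via apt
RUN apt-get update && \
    apt-get install -y --no-install-recommends golang && \
    rm -rf /var/lib/apt/lists/*
# copy the application source and build it into the image
COPY . /app
RUN cd /app && go build -o /usr/local/bin/project ./src
```

You would still need an entrypoint script that starts both postgres and your binary, which is part of why one-service-per-container is the recommended layout.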
Edit: (digital ocean deployment)
Well, if you copy everything (the Docker images and the YAML file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large scale/traffic applications, more advanced solutions are used such as:
- docker swarm
- kubernetes
For more info on Kubernetes on digital ocean, please refer to the official docs
hope this helps you find your way.
I would like to use Ansible to deploy one of my projects (let's call it project-to-deploy).
project-to-deploy can be run locally using a docker-compose.yml file which, among other things, mounts the following volumes inside the container.
version: "2"
services:
  database:
    image: mysql:5.6
    volumes:
      - ./docker/mysql.init.d:/docker-entrypoint-initdb.d
  messages:
    image: private.repo/project-to-deploy:latest
Nothing else of interest here. To run the project: docker-compose up.
I have created a Docker image of the project (into which I copy all of the project's files) and uploaded it to private.repo/project-to-deploy:latest.
Now comes the Ansible part.
For the project to run, I need:
The docker image
A MySQL instance (see part of my docker-compose.yml below)
In my docker-compose.yml (above), it is quite easy to do so: I just declare the two services (database and project-to-deploy) and link them to each other.
How can I perform such action in Ansible?
The first thing I did was fetch the image:
- name: Docker - pull project image
  docker:
    image: "private.repo/project-to-deploy:latest"
    state: restarted
    pull: always
Then, how can I link the MySQL container to this, knowing that the MySQL container needs files from project-to-deploy?
If you think of another way to do it, feel free to make suggestions !
A slight correction: the docker module is for running containers; in your example you are not just fetching the image. You're actually pulling the image, creating a container, and running it.
I would typically accomplish this by using ansible to template each container's config files with the needed IP addresses, ports, credentials, etc. providing them all they need to know to communicate with each other.
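A sketch of that templating approach as an Ansible task (the template name, destination path, and variables here are all hypothetical, just to show the shape):

```yaml
- name: Template the app configuration with connection details
  template:
    src: app-config.j2          # hypothetical Jinja2 template in the role
    dest: /etc/project/config.yml
  vars:
    db_host: database           # matches the MySQL container name below
    db_port: 3306
```

The containers then read their connection settings from the rendered file instead of relying on Docker-level wiring.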
Since your example only involves a few connections, you could set the links option in your Ansible task. You should only need to set it on the "messages" container's side.
- name: Docker - start MySQL container
  docker:
    name: database
    image: "mysql:5.6"
    state: restarted
    volumes:
      - /path/to/docker/mysql.init.d:/docker-entrypoint-initdb.d
    pull: always

- name: Docker - start project container
  docker:
    name: messages
    image: "private.repo/project-to-deploy:latest"
    state: restarted
    pull: always
    links:
      - database