Sir,
May I ask a question? If I want to set up multiple separate repos for different environments, such as dev and prod, so that unstable images can't be used in the prod version, does that mean I have to use different ports for the different repos?
Such as:
Dev Hosted:8083
Dev Group:8082
PRD Hosted:8183
PRD Group:8182
If so, if we would like to create many of them, does that mean we have to use many ports?
Source workflows usually differ from company to company, but generally I recommend a single repo per service and a multiple-branch approach, so you can easily merge features into the master (e.g. prod) branch from your feature branches, which might be dedicated per environment.
Regarding static configuration, I recommend creating generic, non-environment-specific container images that pick up all environment-specific configuration from environment variables at startup and runtime.
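For illustration, a minimal Dockerfile sketch of such a generic image (the nginx base image and the defaults shown are assumptions for the example, not part of the original setup):

# Generic image: nothing environment-specific is baked in.
FROM nginx:alpine
COPY ./app /usr/share/nginx/html
# Defaults only; the real values are injected per environment at
# composition time (see the compose files below).
ENV ENVIRONMENT_NAME=dev \
    DEBUG=false
EXPOSE 82 83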
On the port mapping: within the container you should always use the same ports (e.g. build your image with ports 82 and 83), and only change those when you expose them to the host during composition.
When you build your Docker images, you can use tags to mark which one is the dev or prod revision of an image, so you can target them more easily with imagename:tag.
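For example (a sketch; the build context and tag names are illustrative):

# Build the image once and tag it as the dev revision:
docker build -t webapp:dev .
# Once validated, retag the same image for prod:
docker tag webapp:dev webapp:prod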
With this you can specify a separate Docker composition per environment by creating the following files:
docker-compose.dev.yml:
version: '3'
services:
  web:
    image: "webapp:dev"
    ports:
      - "8082:82"
      - "8083:83"
    environment:
      - DEBUG=true
      - ENVIRONMENT_NAME=dev
docker-compose.prod.yml:
version: '3'
services:
  web:
    image: "webapp:prod"
    ports:
      - "8182:82"
      - "8183:83"
    environment:
      - DEBUG=false
      - ENVIRONMENT_NAME=prod
With this configuration you can create your service compositions based on the same or similar images by running docker-compose:
# To start the DEV service composition
docker-compose -f ./docker-compose.dev.yml up
# To start the PROD service composition
docker-compose -f ./docker-compose.prod.yml up
See more info about these:
docker-compose reference: https://docs.docker.com/compose/reference/overview/
github-flow branching strategy: https://guides.github.com/introduction/flow/
I have set up a docker-compose project which creates multiple images:
cache_server:
  image: current_timezone/full-supervisord-cache-server:1.00
  container_name: renamed-varnish-cache
  networks:
    - network_frontend
  build:
    context: "./all-services/"
    dockerfile: "./cache-server/Dockerfile.cacheserver.varnish"
    args:
      - DOCKER_CONTAINER_USERNAME=username
  ports:
    - "6081:6081"
    - "6082:6082"
When I use docker-compose -f file1.yml -f file2.override.yml up I will then get the containers; in the case above, the container will be named renamed-varnish-cache.
In the corresponding Dockerfile (./nginx-proxy/Dockerfile.proxy.nginx) I want to be able to use the container_name property defined in the docker-compose.yml shown above.
When the containers are created I want to update the Varnish configuration inline inside the Dockerfile: RUN sed -i "s|webserver_container_name|renamed-varnish-cache|g" /etc/varnish/default.vcl
For instance:
backend webserver_container_name {
    .host = "webserver_container_name";
    .port = "8080";
}
To (I anticipate I will have to replace the - with _ for the backend name):
backend renamed_varnish_cache {
    .host = "renamed-varnish-cache";
    .port = "8080";
}
Is there a way to receive the docker-compose named items as variables inside Dockerfile?
In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
version: '3.8'
services:
  web:
    build: .
    image: my/django-app
  worker:
    image: my/django-app
    command: celery worker ...
Now with this stack you can docker-compose build to build the one image, and then run docker-compose up to launch both containers from that image. (During the build you can't know what the container names will be, and there will be two container names so you can't just use one in the Dockerfile.)
At a design level, this means that you often can't include configuration-type settings in the image itself (other containers' hostnames, user IDs for host-shared filesystems). If your application lets you specify these things as environment variables, that's the easiest option. You can use bind mounts (volumes:) to inject whole config files. If neither of those works for you, you can use an entrypoint script to rewrite the config file.
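A rough sketch of such an entrypoint script (the placeholder token, template path, and BACKEND_HOST variable are all hypothetical):

#!/bin/sh
# Rewrite the config file from environment variables, then hand
# control to the container's main command.
set -e
sed "s|__BACKEND_HOST__|${BACKEND_HOST:-localhost}|g" \
    /etc/app/config.tmpl > /etc/app/config
exec "$@"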
Let's imagine I have 3 compose files (only focus on the mysql service):
docker-compose.yml
docker-compose.staging.yml
docker-compose.prod.yml
In my docker-compose.yml I have my basic mysql stuff with dev as the build target:
version: "3.4"
services:
mysql:
build:
target: dev
...
And start it with
docker-compose up -d
In my staging environment I would like to expose port 3306, but I also want another build target, so I would create docker-compose.staging.yml with the following content:
version: "3.4"
services:
mysql:
build
target: prod
ports:
- 3306:3306
And combine it with
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So the build target is overridden and port 3306 is now exposed to the outside.
Now I want the same in docker-compose.prod.yml, just without having port 3306 exposed to the outside... How can I override the ports directive so that no ports are exposed?
I tried to put an empty array in the prod.yml, without success (the port is still exposed):
version: "3.4"
services:
mysql:
ports: []
In the end I would like to stack the up command like this:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml -f docker-compose.prod.yml up -d
I also know the docs say:
For the multi-value options ports, expose, external_links, dns, dns_search, and tmpfs, Compose concatenates both sets of values
But how can I reach my goal anyway, without duplicating configuration?
Sure, I could omit the docker-compose.staging.yml, but the staging.yml defines build steps which should also be used for the prod stage, so that there are no differences between the built containers.
So duplicating things isn't really an option.
Thanks
I would actually strongly suggest not using the target option in your compose files at all. I find it extremely beneficial to build a single image for local/staging/production: build once, test it, and deploy it in each environment. In that case, you change behavior using environment variables or mounted secrets/config files.
Further, using Compose to build the images is... fragile. I would recommend building the images in a CI system, pushing them to a registry, and then using the image version tags in your compose file; it is a much more reproducible system.
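A minimal sketch of that flow (the registry address and tag scheme are assumptions):

# In CI: build once, tag immutably, push to a registry.
docker build -t registry.example.com/mysql-custom:$GIT_SHA .
docker push registry.example.com/mysql-custom:$GIT_SHA
# The compose files then pin that tag instead of carrying a build: block:
#   image: registry.example.com/mysql-custom:<sha>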
You might consider using the extends key in your compose files, like this:
mysql:
  extends:
    file: docker-compose.yml
    service: mysql
  ports:
    - 3306:3306
  # other definitions
Although you'd have to change your compose version from 3.4 to below 3 (like 2.3), because v3 doesn't support this feature (see the ref); there has been an open feature request for it for a long time now.
An important note here is that you shouldn't expose any ports in your base docker-compose.yml file, only in the environment-specific compose files.
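A sketch of the resulting base file under that constraint (the build context is assumed):

# docker-compose.yml (base, v2.3): deliberately no ports here.
version: "2.3"
services:
  mysql:
    build:
      context: .
      target: dev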
Official docs ref for extends
Edit:
The target clause is not supported in v2.0, so I've adjusted the answer to satisfy both the extends and target requirements. That means compose v2.3.
Edit from comments:
As there is a deploy keyword requirement, there is also a compose v3 requirement. And as of now, there is no possibility to extend compose files in v3. I've read in some official doc (can't find it now for ref) that they encourage us to use flat, environment-specific compose files so that it's always clear what runs where. Docker also states that this is hard to implement in v3 (ref in the above issue) and it's not going to be implemented any time soon. You have to use separate compose files per environment.
I have a question about Docker. I have many containers:
nginx
php-fpm
mysql
nodejs
composer
...
And I want to set them up with Docker Compose on Windows 10 with the "Docker for Windows" application, but I would like to bring them all into one container such as "Ubuntu 16.04". How can I do that?
Thanks so much, guys!
To do that, create a Dockerfile based on the "Ubuntu" image and set everything up manually, the same as you would install it on a "normal" machine.
But this is against the purpose Docker was created for. An image, or more specifically a container based on the image, is a specialized, lightweight environment intended to handle one specific service; search for the term "microservices".
You should be using docker-compose to create and manage multiple services. Ideally there will be a container for each of your components: nginx, php-fpm, node, mysql. The containers can be linked together and are accessible to each other over the network.
You can create multiple Dockerfiles, like one for Node.js, another for Angular, and another one for the database, and then you can tie them all together by creating one docker-compose file like this:
version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: gamification-frontend # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding

  express: # name of the second service
    build: gamification-backend # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
    links:
      - database # link this service to the database service

  database: # name of the third service
    image: redis # specify an image to build the container from
So these are three different containers (angular, express, database) which are linked together. To run these containers, use:
docker-compose up --build
I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
it does not understand the build path, links, container names...
My file is located here:
https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up, it works, but with stack deploy it does not.
How do I define the path to the Dockerfile?
docker stack deploy works only with images, not with builds.
This means that you will have to push your images (created with the build process) to an image registry; later, docker stack deploy will download the images and execute them.
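A rough sketch of that workflow for one of the services in the question (the registry address is a placeholder):

# Build and push the image to a registry all swarm nodes can reach:
docker build -t registry.example.com/vertx-node:1.0 ./vertx-node
docker push registry.example.com/vertx-node:1.0
# Point the image: key in docker-compose.yml at the pushed tag, then:
docker stack deploy --compose-file=docker-compose.yml vertx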
Here you have an example of how it was done for a PHP application.
You have to pay attention to parts 1, 3 and 4.
The articles are about PHP, but they can easily be applied to any other language.
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers as you do with "docker run", and it is assumed that you will be doing this in a distributed environment more often than not.
I'll break down the answer by the specific things you listed.
It does not understand build path, links, container names...
Links
The link option has been deprecated for quite some time in favor of the network service discovery introduced alongside the "docker network" feature. You no longer need to specify explicit links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and then they can discover each other by container name or "network alias".
docker-compose will put all your containers onto the same network by default, and it sets up the compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
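A minimal illustration of that behavior (image and service names are just examples):

version: '3'
services:
  postgres-node:
    image: postgres
  app:
    image: alpine
    # 'postgres-node' resolves via the built-in DNS on the default network:
    command: ping postgres-node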
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for network service discovery purposes, add a different alias or use the service name alias that you get when using docker-compose or docker stack deploy.
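For reference, a network alias in a compose file looks roughly like this (the alias db is just an example):

services:
  postgres-node:
    image: postgres-node
    networks:
      vertx-network:
        aliases:
          - db   # reachable as both 'postgres-node' and 'db'
networks:
  vertx-network: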
Build Path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
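On a single-node swarm, that two-step approach looks something like this:

# Build the images locally with Compose, then deploy the stack from them:
docker-compose build
docker stack deploy --compose-file docker-compose.yml vertx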
I'm currently struggling with the deployment of my services, and I wanted to ask what the proper way is when you have to deal with multiple repositories. The repositories are independent, but to run in production, everything needs to be launched.
My Setup:
Git Repository Backend:
Backend Project Rails
docker-compose: backend (expose 3000), db and redis
Git Repository Frontend
Express.js server
docker-compose: (expose 4200)
Both can be run independently, and tests can be executed by CI.
Git Repository Nginx for Production
Needs to connect to the other two services (same docker network)
forwards requests to the right service
I have already tried to include the two services as submodules in the Nginx repository and use the docker-compose of the nginx repo, but I'm not really happy with it.
You can have your CI build and push images for each service you want to run, and have the production environment run all 3 containers.
Then, your production docker-compose.yml would look like this:
lb:
  image: nginx
  depends_on:
    - rails
    - express
  ports:
    - "80:80"
rails:
  image: yourorg/railsapp
express:
  image: yourorg/expressapp
Note that docker-compose isn't recommended for production environments; you should be looking at using Distributed Application Bundles (still an experimental feature at the time of writing, to be released to core in version 1.13).
Alternatively, you can orchestrate your containers with a tool like ansible or a bash script; just make sure you create a docker network and attach all three containers to it so they can find each other.
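A bash sketch of that alternative (image names match the compose file above; the network name is arbitrary):

# Create a shared network and attach all three containers to it:
docker network create appnet
docker run -d --name rails --network appnet yourorg/railsapp
docker run -d --name express --network appnet yourorg/expressapp
docker run -d --name lb -p 80:80 --network appnet nginx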
Edit: since Docker v17 and the deprecation of DABs in favour of the Compose file v3, it seems that for single-host environments, docker-compose is a valid way of running multi-service applications. For multi-host/HA/clustered scenarios you may want to look into either Docker Swarm for a self-managed solution, or Docker Cloud for a more PaaS-like approach. In any case, I'd advise you to try it out in Play-with-Docker, the official online sandbox where you can spin up multiple hosts and play around with a swarm cluster without needing to set up your own boxes.