I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a mongodb database,
a nodejs application,
an nginx front end serving the static files of the nodejs application.
If I scale the nodejs application, I would like nginx to automatically load-balance across the scaled nodejs containers (e.g. all three of them).
Until now I have been using the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
This is based on the fact that some environment variables are created on link, and change when new servers are present.
But now, with version 2 of the docker-compose file format and Docker's new link system, those environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
Documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe that you could just use your service names in that nginx conf file, like:
upstream myservice {
  server yourservice1;
  server yourservice2;
}
as they would be exported as host entries in /etc/hosts for each container.
But if you really want to have that host:port information as environment variables, you could write a script that parses the docker-compose.yml and defines an .env file, or do it manually.
UPDATE:
You can get that port information from outside the container; this will return the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like ZooKeeper.
There's a long feature-request thread in Docker's repo about that.
One workaround solution caught my attention. You could try building your own nginx image based on that.
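For illustration, a minimal sketch of what such a custom nginx config could contain (this is my own assumption, not the content of that gist): it relies on the compose service still being named nodejs and listening on port 3000, and on 127.0.0.11 being Docker's embedded DNS on the compose network:
server {
    listen 80;
    # Docker's embedded DNS; a short TTL lets nginx pick up newly scaled containers.
    resolver 127.0.0.11 valid=10s;
    # Using a variable forces nginx to re-resolve the service name per request
    # instead of caching the IPs once at startup.
    set $backend http://nodejs:3000;
    location / {
        proxy_pass $backend;
    }
}
With round-robin DNS on the compose network, containers added by docker-compose scale nodejs=3 are then picked up without editing an upstream block by hand.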
Related
I am using docker compose to set up application environments. There are two distinct environments, test and production.
In a test environment, I need to expose additional ports (for debugging). These ports should remain closed in a production environment.
I would also like to use the same image and docker-compose.yml file. Using the same image is no problem, but I am struggling with the compose file. In it, I would like to open or close a port based on an environment variable.
The current setup is pretty much the standard, like this:
# ...
ports:
  - "8080:8080" # HTTP Server port
  - "9301:9301" # debug port
# ...
In this example, both ports are always exposed. Is it possible to expose the port 9301 only if a certain environment variable, say EXPOSE_DEBUG, is set?
You can use profiles or a second compose file.
services:
  app-prod: &app
    image: busybox
    profiles:
      - production
    ports:
      - 8080:8080
  app-dev:
    <<: *app
    profiles:
      - development
    ports:
      - 8080:8080
      - 9090:9090
Then you can use the command below, or the COMPOSE_PROFILES environment variable, to select the profile:
docker compose --profile <profile-name> up
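For example, with the profile names from the snippet above, either of these should work:
docker compose --profile development up
COMPOSE_PROFILES=development docker compose up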
Alternatively, you can use a second compose file and override the ports.
# compose.yaml
services:
  app:
    image: busybox
    ports:
      - 8080:8080
# compose.dev.yaml
services:
  app:
    ports:
      - 8080:8080
      - 9090:9090
Then you can pass the second file after the main file to patch it:
docker compose -f compose.yaml -f compose.dev.yaml up
The file(s) to use can also be controlled with an environment variable, COMPOSE_FILE.
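For example (on Linux/macOS the list separator is a colon):
export COMPOSE_FILE=compose.yaml:compose.dev.yaml
docker compose up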
If you name the file compose.override.yaml, docker will automatically use it, so you don't have to point to it with the -f flag. Be careful not to ship this file to your production system if you choose to do this.
You could also bind the debug port to the loopback interface so that you can only access it locally.
ports:
  - 8080:8080
  - 127.0.0.1:9090:9090
The solution I usually use in my projects is to make a bash script that writes the docker-compose.yml based on the value of the environment variable. But you could write it with any other programming language as well.
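A minimal sketch of that approach (the script, service and image names are placeholders of my own): the debug port mapping is only written into the generated file when EXPOSE_DEBUG is set.
#!/bin/sh
# generate docker-compose.yml, adding the debug port only when EXPOSE_DEBUG is set
DEBUG_PORT=""
if [ -n "${EXPOSE_DEBUG:-}" ]; then
  DEBUG_PORT='      - "9301:9301" # debug port'
fi
cat > docker-compose.yml <<EOF
services:
  app:
    image: my-app:latest
    ports:
      - "8080:8080"
${DEBUG_PORT}
EOF
docker compose up -d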
Conditional statements (if/else) are not supported in docker compose. Your options are:
1. Use additional software like jinja-compose to add Jinja2 logic to docker-compose
2. Use two different files (dc-dev.yml and dc-prod.yml) and pass the right one as an argument (docker compose -f)
3. Generate docker-compose.yml programmatically yourself
4. Use profiles (was too slow for me; see the answer by the fool)
For just maintaining dev/prod environments, solution 2 is in my opinion the most efficient in terms of effort.
To follow your approach:
You can set the port mappings via environment variables, defined in an .env file or exported in the shell before running docker compose up:
PORT1="8080:8080"
PORT2="9301:9301"
docker-compose.yml:
services:
  container1:
    ports:
      - ${PORT1}
      - ${PORT2}
But AFAIK there is no way to omit one of them.
I am writing a docker compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services with one another in a compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
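If you do need "ready to serve" semantics, newer Compose versions also support a long form of depends_on that waits for a health check; a minimal sketch of the same example (the healthcheck command is illustrative and assumes curl exists in the server image):
services:
  server:
    image: nginx
    healthcheck:
      # Illustrative readiness probe; swap in whatever check fits your image.
      test: ["CMD", "curl", "-fsS", "http://localhost/"]
      interval: 5s
      retries: 10
  client:
    image: busybox
    command: wget -O- http://server/
    depends_on:
      server:
        condition: service_healthy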
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.
So I need rolling-updates with docker on my single node server. Until now, I was using docker-compose but unfortunately, I can't achieve what I need with it. Reading the web, docker-swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp_app:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work since it doesn't get the config from the docker-compose.yml file: .env file, command entry etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there so that it works. I understand this need in the case of a multi-node architecture, but in my case I don't want to do that. (Carrying images around is heavy, I don't want my image to be public, and after all, the image is already here, so why should I move it to the internet?)
How can I set up my docker service using local image and config written in docker-compose.yml?
I could probably manage my way using docker service create options, but that wouldn't use my docker-compose.yml file so it would not be DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve such a common feature as rolling updates. This whole swarm architecture seems too complicated for my needs at this stage.
You don't have to use registries in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local docker image on your node. This image is only visible to your single node, which works for your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
And now you can run your stack from this node; it will use your local app image, and you get the benefits of image-based deployment (updates, rollbacks, etc.).
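For example (a sketch using the stack name from the question; --force triggers a rolling restart of the service even when the image tag is unchanged):
docker stack deploy -c docker-compose.yml myapp-staging
# after rebuilding my-app:latest on this node:
docker service update --force myapp-staging_app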
I do have a side note on your stack file, though. You are using the same env file for both services; please mind that swarm will look for the ".env" file next to the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also note that this solution is only feasible on a single-node cluster; if you scale your cluster you will have to use a registry. Registries don't have to be public: you can deploy a private registry on your cluster so that only your nodes can access it, or you can make it public. The accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of a pre-built docker image, you can use the Dockerfile directly there. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use docker images in order to make rolling updates work in docker swarm. I would also like to clarify that you can host a private registry and use it instead of a public one.
Detailed Explanation:
When you try a rolling update, docker swarm checks whether there is a change in the image used for the service; if so, it schedules a service update based on the update criteria you have set up and carries it out.
Let us say there is no change to the image; then what happens? Docker will simply not apply the rolling update. Technically you can specify the --force flag to force an update of the service, but it will just redeploy the service.
Hence, create a local registry, store the images in it, and use that image name in the docker-compose file used for the swarm. You can secure the registry with SSL, user credentials, or firewall restrictions; that is up to you. Refer to this for more details on deploying a docker registry server.
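As a sketch of that local-registry route (the image name and tag are just examples, and this throwaway registry has no TLS or auth):
# run a local registry, then tag and push the locally built image into it
docker run -d -p 5000:5000 --name registry registry:2
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
The compose file would then reference image: "localhost:5000/my-app:latest".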
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. When you have a build instruction, an image name is mandatory, because docker-compose otherwise doesn't know what to name the built image. Reference.
A registry server is needed if you are going to deploy the application across multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough. But the private-registry approach is the recommended one.
My recommendation is not to club all the services into a single docker-compose file. The reason is that when you deploy/destroy using that docker-compose file, all the services are taken down; this is a kind of tight coupling. Of course, I understand that the other services depend on the DB; in such cases make sure the DB service is brought up before the other services.
Instead of specifying the env file in the compose file, make it part of the Dockerfile: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also just an update:
A stack is just a way to group services in swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT . /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
# a, b and c must be passed as build args when building the image,
# e.g. docker build --build-arg a=$a --build-arg b=$b --build-arg c=$c .
ARG a
ARG b
ARG c
ENV a=$a
ENV b=$b
ENV c=$c
ENTRYPOINT ./file-to-run
And you may need to run
docker-compose build
docker-compose push (optional; needed to push the image to the registry, in case a registry is used)
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned by @M.Hassan above, I've explained the ideal recommended way.
I want to use docker-compose to maintain containers; there is a cluster of API servers.
They are built from the same image. I know that docker-compose scale app=5 will start 5 containers, but they are all the same, including the port settings.
I want to run multiple containers like this:
services:
  # wx_service_cluster
  wx_service_51011:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "51011:8080"
  wx_service_51012:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "51012:8080"
  wx_service_...:
    ....
There are almost 100 services that would need to be written. Can anyone help me make this simpler?
Can I make it simpler, for example with a shell loop like:
for each_port in $( seq 51011 51040 )
{
  wx_service_${each_port}:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "${each_port}:8080"
}
The simple answer to your actual question: use ENV variables, probably combined with dotenv: https://docs.docker.com/compose/environment-variables/
services:
  foo_${instance1}:
    ports:
      - "${instance1}:8080"
  foo_${instance2}:
    ports:
      - "${instance2}:8080"
But this will not help you with "generating a docker-compose file with X service entries for WX"; you seem to be planning some kind of "hosting".
Alternatives:
You should step back and rather use random port assignment, then use docker inspect to find the port; see an example here: https://github.com/EugenMayer/docker-sync/blob/master/lib/docker-sync/sync_strategy/unison.rb#L199. So basically you either use a template system to generate your docker-compose.yml file, e.g. https://github.com/markround/tiller, generating services with a static prefix like wx_service_, and then later use a different script (for your nginx / haproxy) to configure an upstream for each of those, finding the name and port dynamically (using inspect). A minimal generation sketch follows below.
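A minimal generation sketch following the loop idea from the question (the file name and port range are only examples):
#!/bin/sh
# write one wx_service_<port> entry per port into docker-compose.yml
{
  echo 'services:'
  for each_port in $(seq 51011 51040); do
    cat <<EOF
  wx_service_${each_port}:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "${each_port}:8080"
EOF
  done
} > docker-compose.yml
docker-compose up -d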
If I am right and you really are going for some kind of hosting scenario, and you do it commercially, you might even rethink this and add consul to the game. Let every wx service register itself as a service in consul, and then use an additional proxy like nginx / haproxy that reconfigures itself, adding a backend+frontend / upstream+server entry per service, using tiller + consul watch.
The last one is next-level stuff, but if you do this "commercially" you should not do what you asked for initially. Nevertheless, if you choose to, use dotenv as outlined above.
I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my mac I can just run: curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
It needs to redirect the user in the browser to the $SERVICE_A_URL
It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work because the service-b container
has no notion of service-a.here in its DNS.
I tried adding service-a.here using the add_hosts configuration variable, but I'm not sure what to set it to; localhost will not work, of course.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to have access to the DNS resolving on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?
You can use the 'links' instruction in your docker-compose.yml file to automatically resolve the address from your service-b container.
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
service-a 172.17.0.X
Also note that service-a will be created before service-b when composing your app. I'm not sure how you would specify a particular IP after that, but Docker's documentation is pretty good. Hope that's what you were looking for.
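A quick way to verify the resolution from inside service-b (assuming both services are up in the same compose project and the image has a shell and wget):
docker-compose exec service-b cat /etc/hosts
docker-compose exec service-b wget -qO- http://service-a/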