Docker compose - external links fail after successful restart

The situation is this:
I have three different docker-compose files for three different projects: a frontend, a middleware, and a backend. The FE is Ember, the middleware and backend are Spring (Boot), which should not matter here though. The middleware uses an external_link to the backend, and the frontend (UI) uses an external_link to the middleware.
When I start with a clean docker (docker stop $(docker ps -aq), docker rm $(docker ps -aq)), everything works fine: I start the backend with docker-compose up, then the middleware, then the frontend. Everything is nice, all external links do work (I also run Cypress e2e tests on this setup - works fine).
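For reference, this is roughly the clean-start sequence described above as a shell snippet; the per-project directory names are made up for illustration:
# wipe all containers first
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
# then bring each stack up in order; -d is used here so the snippet does not block,
# while the description above runs each "docker-compose up" in its own terminal
(cd backend && docker-compose up -d)      # hypothetical directory names
(cd middleware && docker-compose up -d)
(cd frontend && docker-compose up -d)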
Now, when I change something in the middleware, rebuild the image, stop the container (control+c) and restart it using docker-compose up, and then try to restart the frontend (control + c and then docker-compose up), docker will tell me:
Starting UI ... error
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: Encountered errors while bringing up the project.
Now what irritates me:
where is the "32f2db8e96a1" coming from? The middleware container name is set to "middleware", which is also used in the external link of the UI, and which works fine for every clean startup (meaning, after docker rm-ing all containers first). Also, docker ps shows me that a container for the middleware is actually running.
Unfortunately, I cannot post the compose files here, but I am willing to add any info needed.
Running on Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
I would like to restart any of these containers without a broken external link. How do I achieve this?
Since you guys are taking time for me, I took the time to clean up two of the compose files. This is the UI/frontend one:
version: '2.1'
services:
  ui:
    container_name: x-ui
    build:
      dockerfile: Dockerfile
      context: .
    image: "xxx/ui:latest"
    external_links:
      - "middleware:backend"
    ports:
      - "127.0.0.1:4200:80"
    network_mode: bridge
This is the middleware:
version: '2.1'
services:
  middleware:
    container_name: x-middleware
    image: xxx/middleware:latest
    build:
      dockerfile: src/main/docker/middleware/Dockerfile
      context: .
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9003:9000"
    external_links:
      - "api"
    network_mode: "bridge"
The "api" one is essentially the same as middleware.
Please note: I removed volumes and environment. I also renamed things, so the names in the error message will not match perfectly. The naming scheme is the same though: the service name is e.g. "middleware", the container name uses a prefix, e.g. "x-middleware".

Related

Networking in Docker Compose file

I am writing a docker compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services with one another in a compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is listening on. ports: are optional; if you do have ports:, cross-container calls always connect to the second port number (the container port) and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
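As a sketch of that last point, here is the same nginx/busybox example with a port mapping added; the host port 12345 is arbitrary:
version: '3'
services:
  server:
    image: nginx
    ports:
      - '12345:80'   # host port 12345 -> container port 80
  client:
    image: busybox
    # other containers still talk to port 80; the 12345 mapping only affects
    # callers outside Docker (e.g. curl http://localhost:12345/ on the host)
    command: sh -c 'sleep 1; wget -O- http://server/'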
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example above. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to begin starting before client begins starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.
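To make that concrete, here is a sketch of a service definition carrying those options, with comments on what each one actually buys you (all names are made up):
version: '3'
services:
  server:
    image: nginx
    container_name: my-server   # only names the container for plain "docker" commands; Compose picks a usable name anyway
    hostname: fancy-prompt      # only the host name seen inside this container (e.g. shell prompt); other containers don't care
    networks: [mynet]           # Compose already attaches every service to a default network
networks:
  mynet: {}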

Push image built with docker-compose to dockerhub

I have a golang script which interacts with postgres. I created a service in docker-compose.yml for both golang and postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my dockerhub so it can be pulled and run with just "docker run ". What is the correct way of doing it?
The image created by "docker-compose up --build" launches with no error with "docker run ", but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: #some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: #some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with docker so any comments on how to do things idiomatically are appreciated
docker-compose does not combine docker images into one; it runs (with up) or builds and then runs (with up --build) docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
so, in your example, docker-compose will run two containers:
1 - based on the go configurations
2 - based on the db configurations
to see what containers are actually running, use the command:
docker ps -a
For more info see the docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around: take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile, please refer to:
- postgres official Dockerfile
- golang official Dockerfile
combine them to get both.
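If you really do go down that road, a rough sketch could look like the following. This is not taken from either official Dockerfile: it assumes the golang image is Debian-based, installs the distribution's postgres packages, and reuses the paths and command from your compose file; the startup line is an assumption and not production-grade:
FROM golang:latest
# install postgres from the distribution packages (assumption: Debian-based base image)
RUN apt-get update && \
    apt-get install -y postgresql postgresql-contrib && \
    rm -rf /var/lib/apt/lists/*
WORKDIR $GOPATH/src/workflow/project
# copy the sources into the image instead of mounting them as volumes
COPY . .
# start postgres in the background, then run the app
CMD service postgresql start && go run ./src/main.go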
Edit: (digital ocean deployment)
Well, if you copy everything (docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large scale/traffic applications, more advanced solutions are used such as:
- docker swarm
- kubernetes
For more info on Kubernetes on digital ocean, please refer to the official docs
hope this helps you find your way.

Docker: docker compose file for "docker stack deploy"

I have a docker-compose.yml file which works with docker-compose up --build. My app works and everything is fine.
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: ./myapp
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - myapp
But when I use docker stack deploy -c docker-compose.yml myapp, I get the following error:
Ignoring unsupported options: build, restart
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network myapp_default
Creating service myapp_myapp
failed to create service myapp_myapp: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
Any hints on how I should "translate" the docker-compose.yml file to make it compatible with docker stack deploy?
To run containers in swarm mode, you do not build them on each swarm node individually. Instead you build the image once, typically on a CI server, push to a registry server (often locally hosted, or you can use docker hub), and specify the image name inside your compose file with an "image" section for each service.
Doing that will get rid of the hard error. You'll likely remove the build section of the compose file since it no longer applies.
Specifying "container_name" is unsupported because it would break the ability to scale or perform updates (a container name must be unique within the docker engine). Let swarm name the containers and reference your app on the docker network by its service name.
Specifying "depends_on" is not supported because containers may be started on different nodes, and rolling updates/failure recovery may remove some containers providing a service after the app started. Docker can retry the failing app until the other service starts up, or preferably you configure an entrypoint that waits for the dependencies to become available with some kind of ping for a minute or two.
Without seeing your Dockerfile, I'd also recommend setting up a healthcheck on each image. Swarm mode uses this to control rolling updates and recover from application failures.
Lastly, consider adding a "deploy" section to your compose file. This tells swarm mode how to deploy and update your service, including how many replicas, constraints on where to run, memory and CPU limits and requirements, and how fast to update the service. You can define a restart policy here as well but I recommend against it since I've seen docker engines restarting containers that conflict with swarm mode deploying containers on other nodes, or even a new container on the same node.
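Putting those points together, a sketch of how the question's file might look for docker stack deploy could be something like this; the image names and tags are placeholders for whatever you actually built and pushed to your registry:
version: '3'
services:
  myapp:
    image: registry.example.com/myapp:1.0   # placeholder: the image you built and pushed
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
    deploy:
      replicas: 2
  nginx:
    image: registry.example.com/nginx:1.0   # placeholder
    ports:
      - "80:80"
    deploy:
      replicas: 1
Note that build:, restart:, container_name: and depends_on: from the original file are all gone; the containers find each other by service name on the stack's network, and nginx simply retries myapp until it is up.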
You can see the full compose file documentation with all of these options here: https://docs.docker.com/compose/compose-file/

How to run protractor on docker?

I'm a newb with docker & protractor so please bear with me.
I have an app that uses python and django for its backend API, and angular.js for its frontend and e2e test with protractor. So this is how I think I should proceed:
I must set up a docker container for my backend, which is Python-Django, then expose this API through some port.
Create another container (or a layer, not sure which) for the angular.js frontend.
Download an image for protractor and build the container.
Connect all of these containers through a docker network?
Alternative
Run backend on local machine.
Create a docker container for protractor and somehow point the e2e tests to that container?
Please help me review the steps to achieve this. This video gives some insight, but I'm not sure where to start.
Your initial idea is just about right. When setting this up, I typically use a docker-compose file like so...
#docker-compose.yml
version: '2'
services:
  backend:
    build: ./backend
    command: <your django startup command>
  db:
    image: <postgres or whatever>
  frontend:
    build: ./frontend
    command: <npm start or equivalent>
    ports:
      - "80:80"
Then, I would run my tests with
docker-compose run --rm frontend <MY TESTING COMMAND HERE>
Docker-compose handles the docker networking stuff for you - in that case your frontend would be able to access your backend at http://backend:<port>. Protractor and npm and all that fun stuff are installed in your frontend container.
The one major pain point that you haven't thought of yet is that protractor requires a display to work - it won't work with a headless browser like phantomjs - and your docker containers will usually not provide a display. This repo is an example of how to install a real browser and provide it a fake display so that it will work in a container: https://github.com/mark-adams/docker-chromium-xvfb - basically, replace the chrome startup script with a shell script that starts an xvfb instance and attaches the browser to it.
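As an illustration of that last idea, a wrapper along these lines could be baked into the frontend image; this is a sketch, not the script from that repo, and the browser binary, flags, and display number are assumptions:
#!/bin/sh
# chrome-xvfb.sh - give the browser a fake display so protractor can drive it
Xvfb :99 -screen 0 1280x1024x24 &    # start a virtual framebuffer on display :99
export DISPLAY=:99                   # point X clients (the browser) at it
exec google-chrome --no-sandbox "$@" # hand over to the real browser binary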

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main subject of your question I won't expand on it any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at Using Supervisor with Docker from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
- having your neo4j container expose the port its service listens on
- running your node.js container with the --link <neo4j container name>:<alias> option
- within the node.js application configuration, setting the neo4j host to the <alias> hostname; docker will take care of forwarding that connection to the IP it assigned to the neo4j container (a sketch of these commands follows)
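Concretely, with legacy links that could look like the commands below; the node.js image name and the "db" alias are just examples:
# run neo4j first; the official image already exposes the ports its service listens on
docker run --detach --name myneo4j neo4j
# then run the app container with a link, aliasing the neo4j container as "db"
docker run --detach --name mynodejs --link myneo4j:db my-nodejs-image
# inside the node.js app, configure the neo4j host as "db"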
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
- be based on the official node docker image
- clone the nodeJS example project from Github
- change the working directory to the directory containing the git clone
- run the npm install command to download and install the nodeJS app dependencies
- instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. And setting this variable to none will disable the authentication all together.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at docker compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container, setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on the docker host's port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is cooked and you can point your web browser at http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, docker compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.
