My aim is to access a container via URL from another container using docker-compose.
So, suppose I have the following docker-compose.yml file:
version: "3.8"
services:
  web:
    build: web
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    ports:
      - "8001:5432"
and a Dockerfile in the folder web:
FROM alpine:3.7
RUN ping postgres://db:5432
Running docker-compose build returns
db uses an image, skipping
Building web
Step 1/2 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/2 : RUN ping postgres://db:5432
---> Running in afbfcd27b340
ping: bad address 'postgres://db:5432'
Service 'web' failed to build : The command '/bin/sh -c ping postgres://db:5432' returned a non-zero code: 1
The docs for networking in Docker Compose (https://docs.docker.com/compose/networking/#links) state:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
What is the correct URL to connect to the container created from the db service?
During the web image build, your db container does not exist, so using RUN is incorrect here.
One option would be to include a CMD instruction in the Dockerfile, which will instruct the web container to run the ping command every time the container starts.
Also, I've adjusted the argument being passed to the ping command.
So, the web Dockerfile would be:
FROM alpine:3.7
CMD ["ping", "db"]
Now, after docker-compose build and docker-compose up, you will see that the web container pings the db container by its service hostname and receives a response (ping checks host reachability via ICMP, so no port is involved).
docker-compose starts a bridge network and adds all of the containers to this network so they can communicate with each other. Each container's hostname is the same as its service name in the docker-compose file. The hostnames are resolved by an internal DNS service.
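If you want to verify that name resolution by hand, here is a minimal sketch using the service names from the compose file above (docker-compose run starts a one-off container attached to the same project network and, because of depends_on, starts db first):
docker-compose run --rm web ping -c 1 db
From application code, the connection URL is exactly the one quoted from the docs, postgres://db:5432: the hostname is the service name, and the port is the one the process listens on inside the container (5432), not the remapped host port 8001.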
Related
I am running Cypress version 10.9 inside Docker on macOS. I set my base URL as localhost:80. As a simple example, I am running an Apache server on localhost:80, which, if I go to a web browser, shows the 'It works!' page, so it is indeed up. I can also reach localhost:80 from the same terminal in which I execute my Docker Cypress container.
But I get this error every time when attempting to run my Cypress container:
Cypress could not verify that this server is running:
> http://localhost
We are verifying this server because it has been configured as your baseUrl.
I do see there are some Stack Overflow posts (e.g. https://stackoverflow.com/questions/53959995/cypress-could-not-verify-that-the-server-set-as-your-baseurl-is-running) that talk about this error. However, the application under test in those posts is inside another Docker container. The Apache page is not in a container.
This is my docker-compose.yml:
version: '3'
services:
  # Docker entry point for the whole repo
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      CYPRESS_BASE_URL: $CYPRESS_BASE_URL
      CYPRESS_USERNAME: $CYPRESS_USERNAME
      CYPRESS_PASSWORD: $CYPRESS_PASSWORD
    volumes:
      - ./:/e2e
I pass 'http://localhost' in via the CYPRESS_BASE_URL environment variable.
This is the docker command I use to build my image:
docker compose up --build
And then to run the Cypress container:
docker compose run --rm e2e cypress run
Some other posts suggest running docker run with --network to make sure my Cypress container runs on the same network as the Compose network (ref: Why Cypress is unable to determine if server is running?), but I am executing 'docker compose run', which does not have a --network argument.
I also verified that my /etc/hosts has an entry of 127.0.0.1 localhost as other posts have suggested. Any suggestions? Thanks.
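One detail worth checking (a general Docker Desktop behaviour, not something from the posts above): inside a Linux container on Docker Desktop for Mac, localhost refers to the container itself, not to the macOS host where Apache is listening, so http://localhost will never reach the host's port 80. Docker Desktop exposes the host under the special name host.docker.internal, so a hedged sketch of the same run with only the base URL changed would be:
CYPRESS_BASE_URL=http://host.docker.internal docker compose run --rm e2e cypress run
This works because the compose file above substitutes $CYPRESS_BASE_URL from the shell environment at run time.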
I'm having a couple of issues running docker-compose.
docker-compose up already works in starting the webservice (stuffapi) and I can hit the endpoint with http://localhost:8080/stuff.
I have a small Go app that I would like to run with docker-compose using a local Dockerfile. When built locally, the Dockerfile cannot call the stuffapi service on localhost. I have tried using the service name, i.e. http://stuffapi:8080, however this gives the error lookup stuffapi on 192.168.65.1:53: no such host.
I'm guessing this has something to do with the default network setup?
After the stuffapi service has started, I would like my service to be built (stuffsdk in the Dockerfile) and then execute a command to run the Go app, which calls the stuff (web) service. docker-compose tries to build the local Dockerfile first, but when it runs its last command, RUN ./main, it fails because stuffapi hasn't been started yet. In my service I have a depends_on on the stuffapi service, so I thought that would start first?
docker-compose.yaml
version: '3'
services:
  stuffapi:
    image: XXX
    ports:
      - 8080:8080
  stuffsdk:
    depends_on:
      - stuffapi
    build: .
dockerfile:
FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
RUN ./main
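This is the same RUN-versus-CMD issue as in the first answer above: RUN ./main executes while the image is being built, before Compose has created the network or started stuffapi, so the call to http://stuffapi:8080 cannot succeed at that point. A hedged sketch of the Dockerfile with the run step moved to container start-up (paths and the BASE_URL value are taken from the question):
FROM golang:1.15
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
# Build the binary at image-build time...
RUN go build
# ...but only run the app when the container starts, once the
# stuffapi service is reachable on the Compose network.
CMD ["./main"]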
I have 2 hosts, a web unit (WU) and a computing unit (CU). On the WU, I have my website. On the CU, I have a redis server and a (C++) app that does some computing.
The user enters input data in the website, and then I want to enqueue a job from the WU to the Redis server on the CU. I have then a worker on the CU which performs a task.
Now, I am able to enqueue a job from the WU (outside of any docker image) to the CU from the terminal (using the python rq module). However, my website is in a docker image, and I can't get it working. From within the docker container, I try to connect to 172.17.0.1:6379 (172.17.0.1 is the IP of the gateway between the container and the docker host). The error I get is connection refused. Then I thought I might have to map the ports in my docker-compose file: 6379:6379. However, then I got an error saying the port is already in use. And indeed, it is used by the stunnel4 service which allows me to enqueue jobs from the WU to the redis server on the CU.
Should I run the stunnel4 service in the docker image or something? And if so, how could I do that? Or should I tackle my problem in a different way?
Network structure
WU and CU are 2 (virtual) machines. My redis server is on CU and not in a docker container. I am able to connect to the redis server from WU to CU by means of the python redis module (but not from within a docker container). I had to set up a stunnel4.service for that (redis-client on WU and redis-server on CU).
Finally I managed to build a stunnel service in a docker container on the WU. I can now simply connect with python redis to that stunnel service, and the end of the tunnel points to the CU.
Here is what I did on the WU:
Dockerfile
FROM alpine:3.12
RUN apk add --no-cache stunnel
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
COPY ./ca_file.crt /etc/stunnel/ca_file.crt
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
cd /etc/stunnel
cat > stunnel.conf <<_EOF_
foreground = yes
[stunnel-client]
client = yes
accept = ${ACCEPT}
connect = ${CONNECT}
CAfile = ca_file.crt
verify = 4
_EOF_
exec stunnel "$@"
The ACCEPT and CONNECT values are specified in an environment file:
.env.stunnel
ACCEPT=6379
CONNECT=10.110.0.3:6379
where 10.110.0.3 is the IP address of my redis host.
docker-compose
stunnel-client:
  container_name: stunnel-client
  build:
    context: ./stunnel
    dockerfile: Dockerfile
  restart: always
  volumes:
    - stunnel_volume:/etc/stunnel
  env_file:
    - ./.env.stunnel
  networks:
    - stunnel-net
  ports:
    - "6379:6379"
The stunnel-net network is also attached to my web service, so I can connect from there to the stunnel-client service by means of python redis.
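For completeness, a sketch of what the web-service side of that might look like (the service name, build path, and REDIS_URL variable are illustrative; the essential parts are joining stunnel-net and pointing the Redis client at the stunnel-client hostname on port 6379):
web:
  build: ./web
  environment:
    REDIS_URL: redis://stunnel-client:6379
  networks:
    - stunnel-net

networks:
  stunnel-net:
    driver: bridge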
I am writing a Docker Compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services with one another in a Compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example, as in the sketch just below. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to begin starting before client begins starting, but it is not a guarantee that server is up, running, and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
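As a sketch, the minimal example's client service with that addition would look like (everything else unchanged):
  client:
    image: busybox
    command: wget -O- http://server/
    depends_on: [server]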
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt, for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason not to use it.
I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't extend this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article on the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname; docker will take care of forwarding that connection to the IP it assigned to the neo4j container
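Concretely, that might look like this (a sketch with placeholder names; the official neo4j image already declares the ports its service listens on):
docker run --detach --name myneo4j neo4j
docker run --detach --name mynodejs --link myneo4j:db <your nodejs image>
The node.js application would then be configured to use db as the neo4j hostname.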
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
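For example, if the application reads its connection URL from an environment variable (like the NEO4J_URL variable used in the Compose example later in this answer), the last command could become something along these lines:
docker run --detach --name mynodejs --net mynet --env NEO4J_URL=http://myneo4j:7474 <your nodejs image>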
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none will disable authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container, setting the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container, setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, docker compose will instruct the Docker engine to expose the web container's port 3000 on docker host port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is cooked and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
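Put together, the lifecycle commands from this section are:
docker-compose --x-networking up -d    # build if needed, start in the background
docker-compose logs                    # display the logs
docker-compose stop                    # stop the services
docker-compose rm                      # delete the containers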
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, Docker Compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.