docker-compose simple networking demo

I am new to docker and docker-compose and I'm trying to understand networking in docker. I have the following docker-compose.yml file
version: '3'
services:
  app0:
    build:
      context: ./
      dockerfile: Dockerfile0
  app1:
    build:
      context: ./
      dockerfile: Dockerfile1
And the Dockerfiles look like:
FROM python:latest
I'm using a python image because that's what I want for my actual use-case.
I run
docker-compose build
docker-compose up
output:
Building app0
Step 1/1 : FROM python:latest
---> 3624d01978a1
Successfully built 3624d01978a1
Successfully tagged docker_test_app0:latest
Building app1
Step 1/1 : FROM python:latest
---> 3624d01978a1
Successfully built 3624d01978a1
Successfully tagged docker_test_app1:latest
Starting docker_test_app0_1 ... done
Starting docker_test_app1_1 ... done
Attaching to docker_test_app0_1, docker_test_app1_1
docker_test_app0_1 exited with code 0
docker_test_app1_1 exited with code 0
From what I've read, docker-compose will create a default network and both containers will be attached to that network and should be able to communicate. I want to come up with a very simple demonstration of this, for example using ping like this:
docker-compose run app0 ping app1
output:
ping: app1: Name or service not known
Am I misunderstanding how docker-compose networking works? Should I be able to ping app1 from app0 and vice versa?
Running on Amazon Linux.
docker-compose version 1.23.2, build 1110ad01

You need to add something (a script, via CMD) to those Python containers that keeps them running: something listening on a port, or a simple loop.
Right now they terminate immediately after starting, so there is nothing to ping. (The whole container shuts down when its command finishes.)
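For example, a minimal sketch (sleep infinity is only a placeholder to keep the containers alive; any long-running command will do):
# Dockerfile0 and Dockerfile1
FROM python:latest
CMD ["sleep", "infinity"]
After rebuilding, docker-compose run app0 ping app1 should resolve, because Compose registers each service name as a DNS alias on the default network.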

Defining services in the docker-compose.yaml file may not be enough: if one service is down, the other one won't have any information about its IP address.
You can, however, create a dependency between them, which will, for example, automatically start the app1 service when you start app0.
Set the following configuration:
version: '3'
services:
  app0:
    build:
      context: ./
      dockerfile: Dockerfile0
    depends_on:
      - "app1"
  app1:
    build:
      context: ./
      dockerfile: Dockerfile1
This is a good practice in case you want services to communicate with each other.
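With that in place, starting one service also starts its dependencies. For example:
# app1 is started automatically because app0 depends on it
docker-compose up app0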

Related

How to access a website running in a container when you're using network_mode: host

I have a tricky situation because I need to access a private DB in AWS. In order to connect to this DB, I first need to create a tunnel like this:
ssh -L 127.0.0.1:LOCAL_PORT:DB_URL:PORT -N -J ACCOUNT@EMAIL.DOMAIN -i ~/KEY_LOCATION/KEY_NAME.pem PC_USER@PC_ADDRESS
Via 127.0.0.1:LOCAL_PORT I can connect to the DB in my Java app. Let's say the port is 9991 in this case.
My Docker files look more or less like this:
docker-compose.yml
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Dockerfile
FROM openjdk:11
RUN mkdir /home/app/
WORKDIR /home/app/
RUN mkdir logs
COPY ./target/MY_JAVA_APP.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "MY_JAVA_APP.jar"]
The image runs properly. However, if I try:
using localhost:8080/MY_APP, it fails
using 127.0.0.1/MY_APP, it fails
getting the container's IP and using it later, it fails
using host.docker.internal/MY_APP, it fails
I'm wondering how I can test my app. I know it's running because I get a successful message in the console and the new data was added to the DB, but I don't know how I can test it or access it. Any idea of the proper way to do it? Thanks.
P.S.:
I'm running my images in Docker Desktop for Windows.
I have another case using Tomcat 9 and running CMD ["catalina.sh", "run"], and I know it's working because I get this message in the console:
INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [9905] milliseconds
But, again, I cannot access it.
I'm not really sure what the issue is based on the above information since I cannot replicate the system on my own machine.
However, these are some places to look:
you might be running into an issue similar to this one: https://github.com/docker/for-mac/issues/1031, because of the networking magic you are doing with ssh and the AWS DB
you should try specifying either a build/Dockerfile or an image, and avoid specifying both:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01  # choose using an image
    build:                         # or building from a Dockerfile
      context: .                   # but not both
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Hope that helps 🤞🏻 and good luck 🍀
I guess you need to bind the port of your container.
Try adding the ports property to your docker-compose file:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    ports:
      - "8080:8080"
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Have a look at https://docs.docker.com/compose/compose-file/compose-file-v3/#endpoint_mode
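If the mapping takes effect, a quick check from the host could look like this (the MY_APP path is taken from the question):
curl -i http://localhost:8080/MY_APP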

Issues in docker-compose when running up: cannot find localhost and services starting in wrong order

I'm having a couple of issues running docker-compose.
docker-compose up already works in starting the webservice (stuffapi) and I can hit the endpoint with http://localhost:8080/stuff.
I have a small Go app that I would like to run with docker-compose using a local Dockerfile. When built locally, the Dockerfile cannot call the stuffapi service on localhost. I have tried using the service name, i.e. http://stuffapi:8080, but this gives the error: lookup stuffapi on 192.168.65.1:53: no such host.
I'm guessing this has something to do with the default network setup?
After the stuffapi service has started, I would like my service to be built (stuffsdk in the Dockerfile) and then execute a command to run the Go app, which calls the stuff (web) service. docker-compose tries to build the local Dockerfile first, but when it runs its last command, RUN ./main, it fails because stuffapi hasn't been started yet. In my service I have a depends_on for the stuffapi service, so I thought that would start first?
docker-compose.yaml
version: '3'
services:
  stuffapi:
    image: XXX
    ports:
      - 8080:8080
  stuffsdk:
    depends_on:
      - stuffapi
    build: .
Dockerfile
FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
RUN ./main
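For what it's worth, a sketch of the usual fix: RUN executes at image build time, when the Compose network and the other services don't exist yet, so the app should be started with CMD instead, at container run time (assuming the same layout as above):
FROM golang:1.15
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
# run the binary when the container starts, not while the image is building
CMD ["./main"]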

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in the Dockerfile, it rebuilds all the images before bringing up the stack. It can be used in a shell script if needed:
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
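To follow the rebuilt container's output afterwards, something like this works (web being the service name from the question):
docker-compose logs -f web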
When a Docker image is built, the artifacts are already copied into it, and no new change will be reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach your code (and new changes) will appear on both the host and the running container.
Also, you will need to restart the server on every change; for this you can run your app using nodemon (as it watches for changes in code and restarts the server).
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
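A sketch of the nodemon part, overriding the container's command (this assumes nodemon is available in the image and src/index.js is the entry point; both are assumptions):
services:
  web:
    ...
    command: npx nodemon --legacy-watch src/index.js
The --legacy-watch flag enables polling, which is often needed for file watching to work across a bind mount.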
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically, you issue docker-compose up once (with a shell script, maybe), and once you have your containers running, you can create a Jenkinsfile or configure a CI/CD pipeline to pull the updated image and apply it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.
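A rough sketch of that flow (stack and image names are placeholders):
# one-time setup: turn the compose file into a Swarm stack
docker swarm init
docker stack deploy -c docker-compose.yml mystack
# later: roll out a new image version to the running service
docker service update --image my/app:new-tag mystack_web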

Docker BaseX DBA

I use the following docker compose file to start the basexhttp server and the dba:
version: '3'
services:
  basexhttp:
    image: basex/basexhttp
    ports:
      - "1984:1984"
      - "8984:8984"
  dba:
    image: basex/dba:8.5.4
    ports:
      - "11984:1984"
      - "18984:8984"
      - "18985:8985"
According to the documentation, I should get the DBA page at:
http://<host>:18984/dba
This returns: No function found that matches the request.
How do I get this to work?
Hi bergtwvd, I am sorry, but your example is slightly outdated: we no longer maintain a separate basex/dba image, mostly because our DBA no longer supports connecting to remote BaseX instances.
I think the best approach is building your own image based on our "official" basexhttp image, that contains the DBA code:
Download BaseX.zip from http://files.basex.org/releases/
Create an empty folder for building your docker image.
Create a Dockerfile inside that folder with the following contents:
# Dockerfile
FROM basex/basexhttp:9.1
MAINTAINER BaseX Team
ADD ./webapp /srv/basex/webapp
Copy the webapp folder contained in BaseX.zip into the same folder your Dockerfile is in
Run docker build:
# docker build
docker build -t mydba .
Sending build context to Docker daemon 685.6kB
Step 1/3 : FROM basex/basexhttp:latest
---> c9efb2903a40
Step 2/3 : MAINTAINER BaseX Team
---> Using cache
---> 11228f6d7b17
Step 3/3 : COPY webapp /srv/basex/
---> Using cache
---> d209f033d6d9
Successfully built d209f033d6d9
Successfully tagged mydba:latest
You can also use this technique with docker-compose:
#docker-compose.yml
version: '3'
services:
  dba:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8984:8984"
You should now be able to open http://localhost:8984 and access the DBA.
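To check from the command line (assuming the DBA app answers at /dba, as in the question):
curl -i http://localhost:8984/dba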
Hope this helps.

Docker: run multiple instances of an image with different parameters

I am new to Docker, so this may sound like a basic question.
I have a VS .NET Core 2 console application that can take some command-line parameters and provide different services. So in a normal command prompt I can run something like:
c:>dotnet myapplication.dll 5000 .\mydb1.db
c:>dotnet myapplication.dll 5001 .\mydb2.db
which creates 2 instances of this application listening on ports 5000 and 5001.
I now want to create one Docker image for this application, run multiple instances of that image, and be able to pass these parameters on the command line to the docker run command. However, I cannot see how to configure this in either the docker-compose.yml or the Dockerfile.
Dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
# ignoring some of the code here
ENTRYPOINT ["dotnet", "myapplication.dll"]
docker-compose.yml
version: '3.4'
services:
  my.app:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5000:80
    build:
      context: .
      dockerfile: dir/Dockerfile
I am trying to avoid creating multiple image one per each combination of commandline arguments. so is it possible to achieve what I am looking for?
Docker containers are started with an entrypoint and a command; when the container actually starts they are simply concatenated together. If the ENTRYPOINT in the Dockerfile is structured like a single command then the CMD in the Dockerfile or command: in the docker-compose.yml contains arguments to it.
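For instance, the same thing with plain docker run (a sketch; everything after the image name becomes the command and is appended to the ENTRYPOINT):
docker run -p 5000:80 my/app 80 db1.db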
This means you should be able to set up your docker-compose.yml as:
services:
  my.app1:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5000:80
    command: [80, db1.db]
  my.app2:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5001:80
    command: [80, db2.db]
(As a side note: if one of the options to the program is the port to listen on, it needs to match the second port in the ports: specification. In my example I've chosen to have both containers listen on the "normal" HTTP port and remap it on the host using the ports: setting. One container could reach the other, if it needed to, as http://my.app2/ on the default HTTP port.)
