Having trouble communicating between docker-compose services

I have the following docker-compose file:
version: "3"
services:
scraper-api:
build: ./ATPScraper
volumes:
- ./ATPScraper:/usr/src/app
ports:
- "5000:80"
test-app:
build: ./test-app
volumes:
- "./test-app:/app"
- "/app/node_modules"
ports:
- "3001:3000"
environment:
- NODE_ENV=development
depends_on:
- scraper-api
Which builds the following Dockerfiles:
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test React application for the API):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts#3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL:
http://scraper-api:80/api/test_api. None of them have worked for me.
I've been scouring the internet and I can't really find a solution.

The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know anything about the Docker Compose networking setup. A browser app that happens to be hosted out of Docker needs to be configured to use the host's DNS name or IP address, plus the published port of the back-end service.
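For example, with the "5000:80" mapping above, and assuming the browser runs on the same machine as the Docker host, the call from the React code would look something like:
// sketch: the browser must use the host's published port,
// not the compose service name (localhost assumes browser == Docker host)
fetch("http://localhost:5000/api/top_10")
  .then((res) => res.json())
  .then((data) => console.log(data));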
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
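A minimal sketch of such a proxy, assuming nginx sits in front of both compose services (the /api/ prefix matches the endpoint above; everything here is illustrative):
server {
    listen 80;

    # everything else goes to the React app
    location / {
        proxy_pass http://test-app:3000;
    }

    # relative /api/... requests go to the Flask API over the compose network
    location /api/ {
        proxy_pass http://scraper-api:80;
    }
}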

As a side note: when no network is specified inside docker-compose.yml, a default network is created for you, named [directory containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names, so the scraper-api host name should resolve to the right container.
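You can confirm this from the host:
# list the networks Compose created, then see which containers are attached
docker network ls
docker network inspect app_default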
It could be that you are using the wrong endpoint URL. In the question, you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you confused the order of the ports in docker-compose.yml for the scraper-api service:
ports:
  - "5000:80"
5000 is the port published on the host where Docker is running; 80 is the internal app port. Flask apps normally listen on 5000, so I thought you might have meant to say:
ports:
  - "80:5000"
In which case, between containers you have to use :5000 as the destination port in URLs: http://scraper-api:5000, for example (+ endpoint suffix, of course).
To check connectivity, you might want to open a shell inside the client container and see if things are connecting (the node alpine image ships sh rather than bash):
docker-compose exec test-app sh
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.

ASP.NET Core Webapi startup url - Docker-Compose vs. Docker build

I am encountering an interesting difference in startup behaviour when running a simple net6.0 web API built with docker-compose, compared to one built with docker build. The application itself runs in a Kubernetes cluster.
Environment
Minikube v1.26.1
Docker Desktop v4.12
Docker Compose v2.10.2
Building with docker-compose
docker-compose.yml
version: "3.8"
services:
web.api:
build:
context: ./../
dockerfile: ./web.API/Dockerfile
The context is set to the parent directory because some files there are needed for the build.
Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
ENV ASPNETCORE_URLS=http://+:80
COPY Directory.Build.props ./Directory.Build.props
COPY .editorconfig ./.editorconfig
COPY ["webapi/web.API", "web.API/"]
RUN dotnet build "web.API/web.API.csproj" -c Release --self-contained true --runtime alpine-x64
RUN dotnet publish "webapi/web.API.csproj" -c Release -o /app/publish \
    --no-restore \
    --runtime alpine-x64 \
    --self-contained true \
    /p:PublishSingleFile=true
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-alpine
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["./web.API"]
This results in the app starting up within the kubernetes cluster with the following logs:
Now listening on: http://[::]:80
Building with docker build
Using the same Dockerfile mentioned earlier, with the same build context you can see in the docker-compose.yml, a deployment to k8s results in the following log:
Now listening on: http://localhost:5000
Running the image locally
Running the exact same image from the k8s cluster locally however results in
Now listening on: http://[::]:80
Already tried
As suggested in many posts, I tried setting the environment variable ASPNETCORE_URLS via the Dockerfile or the k8s deployment.yml, neither of which had an impact on the startup URL.
I can't seem to figure out why there is a difference between those 2 ways of building an image.
Update
The only thing that seems to work is to add
builder.WebHost.ConfigureKestrel(option => {
    option.ListenAnyIP(80);
});
to the Program.cs.
Still not sure about the reason behind the behaviour though.
A few things:
I assume that the container already running and working on port 80 (docker run) was stopped before attempting to run docker-compose?
Environment variables can be used in the docker-compose.yml file.
Ports most likely need to be published correctly, which, judging from the Dockerfile and docker-compose.yml, seems not to be the case?
Environment Variables
First off, before ENV ASPNETCORE_URLS=http://+:80 is going to be of any use, your docker-compose file needs to define which ports to use; the docker-compose.yml shown (even if trimmed) does not declare any ports.
Perhaps because the ports aren't published, the app tries to bind to 80, fails (already in use/not available), and somehow ends up on 5000 instead.
Alternatively, and more likely: it does not really see your ENV ASPNETCORE_URLS.
You can try environment variables directly in your docker-compose file:
my-service:
  image: ${IMAGE_NAME}
  environment:
    MY_SECRET_KEY: ${MY_SECRET_KEY}
Publishing ports
In the docker-compose file you need this to publish ports:
ports:
  - "80"
  - "443"
... or
ports:
  - "80:80"    # host-port:container-port
  - "443:1234"
Additional information
The keyword EXPOSE/expose in a Dockerfile/docker-compose.yml is just informative (comments in a sense); functionally it does not do anything. A port needs to be published to be reachable from outside.
So those EXPOSE 443 and EXPOSE 80 lines are not telling Docker to publish anything. You would have to run your container, for example, like this, which publishes port 80 and makes it available:
docker run -p 127.0.0.1:80:80/tcp image command
In short, use ports keyword in docker-compose.yml.
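For instance, a sketch of the web.api service from the question with a ports entry added (the 8080 host port is an assumption):
services:
  web.api:
    build:
      context: ./../
      dockerfile: ./web.API/Dockerfile
    ports:
      - "8080:80"   # host-port:container-port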
EDIT:
I read your comment above:
But the app is not accessible in k8s when listening to localhost:5000 even with correct service and container configuration
This is what I am trying to say about ports being published or not: your port 5000 is not published either, because nothing in your configuration shows that it is.

Is it even possible to convert my docker-compose.yml to heroku.yml?

So I'm trying to deploy my app to Heroku.
Here is my docker-compose.yml
version: '3'
# Define services
services:
  # Back-end Spring Boot application
  entaurais:
    # The Dockerfile in scrum-app builds the jar and provides the Docker image with the following name.
    build: ./entauraIS
    container_name: backend
    # Environment variables for the Spring Boot application.
    ports:
      - 8080:8080  # Forward the exposed port 8080 on the container to port 8080 on the host machine
    depends_on:
      - postgresql
  postgresql:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=root
      - POSTGRES_USER=postgres
      - POSTGRES_DB=entauracars
    ports:
      - "5433:5433"
    expose:
      - "5433"
  entaura-front:
    build: ./entaura-front
    container_name: frontend
    ports:
      - "4200:4200"
    volumes:
      - /usr/src/app/node_modules
My frontend Dockerfile:
FROM node:14.15.0
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 4200
CMD [ "npm", "start" ]
My backend Dockerfile:
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM openjdk:11-jre-slim
COPY --from=build /usr/src/app/target/entauraIS.jar /usr/app/entauraIS.jar
ENTRYPOINT ["java","-jar","/usr/app/entauraIS.jar"]
As far as I'm aware, Heroku needs its own heroku.yml file, but with the examples I've seen I have no idea how to convert it to my situation. Any help is appreciated; I am completely lost with Heroku.
One of the examples of heroku.yml that I looked at:
build:
  docker:
    web: Dockerfile
run:
  web: npm run start
release:
  image: web
  command:
    - npm run migrate up
docker-compose.yml to heroku.yml
docker-compose.yml has some fields similar to heroku.yml's, so you can create one manually.
It would be awesome if someone created an npm module to convert docker-compose.yml to heroku.yml: you just need to read the docker-compose.yml, pick some values, and write out a heroku.yml. Check this to know how to read and write yml files.
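A minimal sketch of such a converter, assuming the js-yaml package and the compose file from this question:
// sketch: read docker-compose.yml, pick a few values, write heroku.yml
const fs = require("fs");
const yaml = require("js-yaml"); // assumed dependency

const compose = yaml.load(fs.readFileSync("docker-compose.yml", "utf8"));
const heroku = {
  build: {
    // e.g. "./entauraIS" becomes "./entauraIS/Dockerfile"
    docker: { web: `${compose.services.entaurais.build}/Dockerfile` },
  },
};
fs.writeFileSync("heroku.yml", yaml.dump(heroku));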
Docker is not required on Heroku
If you are looking for a platform to deploy your apps and avoid infrastructure nightmares, Heroku is an option for you.
Even more so if your applications are standard (Java and Node.js), don't need exotic configuration to build, and are self-contained (no private libraries): then you don't need Docker at all :D
If your Node.js package.json has the standard start and build scripts, it will run on Heroku; just git push to Heroku, no Dockerfile needed. Heroku will detect Node.js and its version, and your app will start.
If your Java app uses the standard Spring Boot configuration, it's the same: just push your code to Heroku. In this case, before the push, add the Postgres add-on manually and use environment variables in your application.properties JDBC URL.
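For example, a hypothetical application.properties fragment (this assumes Heroku's JDBC_DATABASE_URL convention for the Postgres add-on):
# sketch: read the datasource from environment variables instead of hard-coding it
spring.datasource.url=${JDBC_DATABASE_URL}
spring.datasource.username=${JDBC_DATABASE_USERNAME}
spring.datasource.password=${JDBC_DATABASE_PASSWORD}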
One process per app on Heroku
If you have an API + front end, you will need two apps on Heroku. Also, your API will need the Postgres add-on.
Heroku does not work like docker-compose, that is, one host with all of your apps: front + API + db.
Docker
If you want to use Docker, just add the Dockerfile and git push. Heroku will detect that Docker is required and will perform the standard commands (docker build ..., docker run ...), so no extra configuration is required.
heroku.yml
If Docker is mandatory for your apps, and the standard docker build ... and docker run ... are not enough for them, you will need heroku.yml.
You will need one heroku.yml for each app on Heroku.
One advantage of this is that manually adding the Postgres add-on is no longer required, because it can be defined in heroku.yml.
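A minimal sketch of a heroku.yml for the backend app, assuming the Dockerfile path from the question and the heroku-postgresql add-on:
setup:
  addons:
    - plan: heroku-postgresql  # replaces the manual add-on step
build:
  docker:
    web: entauraIS/Dockerfile  # path taken from the compose build context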

issues in docker-compose when running up, cannot find localhost and services starting in wrong order

I'm having a couple of issues running docker-compose.
docker-compose up already works for starting the web service (stuffapi), and I can hit the endpoint with http://localhost:8080/stuff.
I have a small Go app that I would like to run with docker-compose using a local Dockerfile. When built locally, it cannot call the stuffapi service on localhost. I have tried using the service name, i.e. http://stuffapi:8080, however this gives the error lookup stuffapi on 192.168.65.1:53: no such host.
I'm guessing this has something to do with the default network setup?
After the stuffapi service has started I would like my service to be built (stuffsdk in the Dockerfile), and then to execute a command to run the Go app which calls the stuff (web) service. docker-compose tries to build the local Dockerfile first, but when it runs its last command, RUN ./main, it fails because stuffapi hasn't been started yet. In my service I have a depends_on for the stuffapi service, so I thought that would start first?
docker-compose.yaml
version: '3'
services:
  stuffapi:
    image: XXX
    ports:
      - 8080:8080
  stuffsdk:
    depends_on:
      - stuffapi
    build: .
Dockerfile:
FROM golang:1.15
RUN mkdir /stuffsdk
RUN mkdir /main
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
RUN go build
RUN ./main
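A likely fix, sketched under the assumption that main is meant to run when the container starts: RUN executes at image build time, before the compose network (and stuffapi) exists, so the last step should be a CMD instead:
FROM golang:1.15
# ADD creates the target directories, so the mkdir steps are not needed
ADD ./stuffsdk /stuffsdk
ADD ./main /main
ENV BASE_URL=http://stuffapi:8080
WORKDIR /main
# build at image build time ...
RUN go build
# ... but run at container start, when stuffapi is reachable
CMD ["./main"]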

Connection refused in Dockerfile but not when execed in

I have a command that calls a local docker container server.
I use docker-compose run name_of_service /bin/bash to get a shell in the image, and from there the command below works as expected.
pip install --trusted-host pypi --extra-index-url http://pypi:8000 -r requirements.txt
But running virtually the same command in a Dockerfile results in a Retrying error:
RUN pip install --trusted-host pypi --extra-index-url http://pypi:8000 -r requirements.txt --user
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPConnection object at 0x7f54bac2dad0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /custom-utils/
Both services are in one docker-compose.yml
YAML
service:
  image: service:20.10.1
  build:
    context: platform
    dockerfile: service/Dockerfile
  depends_on:
    - api
    - pypi
  environment:
    PORT: "8088"
  ports:
    - "8088:8088"
  volumes:
    - some_location_of_source
  restart: always
pypi:
  image: pypi:20.10.1
  build:
    context: services/pypi
    dockerfile: Dockerfile
  environment:
    PORT: "8000"
  expose:
    - "8000"
  ports:
    - "8000:8000"
  volumes:
    - some_location_of_source
Dockerfile RUN instructions can never make network calls to other services, even in the same docker-compose.yml file. You need to arrange for the package server to run "somewhere else" (even in Docker but launched separately might work).
At a technical level there are two issues. Compose broadly gets to assume all image builds happen before any containers are launched, so there's no way to require the pypi service to start before the service image is built (depends_on: doesn't affect the build stage). Image builds also aren't attached to the Docker network that Compose creates, so they can't do things like resolve container hostnames; that will lead to the specific error you're getting.
It might work to split this into two separate Compose YAML files, one for the package server and one for the main service. You can launch the package server; then docker-compose build the main service; then stop the package server. Since you have published ports: you can reach the package server through one of the host's IP addresses; or if you're on a macOS or Windows host, the special host name host.docker.internal; or otherwise use one of the techniques described in From inside of a Docker container, how do I connect to the localhost of the machine?.
RUN pip install \
    --trusted-host host.docker.internal \
    --extra-index-url http://host.docker.internal:8000 \
    -r requirements.txt
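The build sequence might then look like this (the compose file split and service names here are hypothetical):
# start only the package server
docker-compose -f docker-compose.pypi.yml up -d pypi
# build the main service image while the package server is reachable
docker-compose build service
# stop the package server again
docker-compose -f docker-compose.pypi.yml down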
(Depending on what exactly is in this package server, you may not need it at all. If you python setup.py bdist_wheel or pip wheel the dependencies you keep there, you can COPY the resulting .whl files into your image and install them directly. If it's all from the same source tree then a multi-stage build where earlier stages just build libraries could work too.)
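A sketch of that wheel-based variant, with the stage layout and Python version assumed:
# stage 1: build wheels for everything in requirements.txt
FROM python:3.7 AS wheels
COPY requirements.txt .
RUN pip wheel -r requirements.txt --wheel-dir /wheels

# stage 2: install from the local wheels; no package server needed
FROM python:3.7
COPY --from=wheels /wheels /wheels
COPY requirements.txt .
RUN pip install --no-index --find-links /wheels -r requirements.txt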

Docker composed services can't communicate by service name

tldr: I can't communicate with a docker-composed service by its service name in order to make requests to an API running in networked containers.
I have a single-page application that makes requests to a JSON API. Its Dockerfile looks like this:
FROM nginx:alpine
COPY dist /usr/share/nginx/html
EXPOSE 80
A build process does its thing and puts all the static assets in a dist directory, which is then copied to the html directory of the nginx web server.
I have a mock JSON API powered by json-server. Its Dockerfile looks like this:
FROM node:7.10.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
I have a docker-compose file that looks like this:
version: '2'
services:
  badass-ui:
    image: mydocker-hub/badass-ui
    container_name: badass-ui
    ports:
      - "80:80"
  badderer-api:
    image: mydocker-hub/badderer-api
    container_name: badderer-api
    ports:
      - "3000:3000"
I'm able to build both containers successfully, and am able to run docker-compose up with both containers running smoothly. Fetch requests from badass-ui to badderer-api:3000/users return "net::ERR_NAME_NOT_RESOLVED". Fetch requests to http://192.168.99.100:3000/users (or whatever the container IP may be) work fine. I thought that by using Docker Compose I would be able to reference the name of a service defined in docker-compose.yml as a domain name, and that this would enable communication between the containers via domain name. This doesn't seem to work. Is there something wrong with my docker-compose.yml?
I'm on Windows 10 Home edition, using the tools that come with the Docker Quickstart terminal for Windows: docker-compose version 1.13.0, docker version 17.05.0-ce, docker-machine version 0.11.0 and VirtualBox 5.1.20.
Since you are using docker-compose.yml version 2, links should not be necessary. Containers within a compose network should be able to resolve other compose containers by service name.
Reading the comments on your question, it seems like networking and host name resolution work, so the problem seems to be in your web UI. I don't see you passing any type of configuration to the UI application saying where to find the API. Maybe there is a hard-coded URL to the API in your UI causing the error?
Edit:
Is your UI a client-side/JavaScript app? Are you sure the app isn't actually making the call from your browser? Your browser, running on your local machine and not in Docker, will not be able to resolve the badderer-api hostname.
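If so, one common fix is to proxy API calls through the same nginx that serves the UI, so the browser only ever talks to one origin; a hypothetical snippet for the badass-ui nginx config:
server {
    listen 80;
    root /usr/share/nginx/html;

    # forward browser requests for /users to the API container over the
    # compose network; the browser never needs to resolve the service name
    location /users {
        proxy_pass http://badderer-api:3000;
    }
}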
