Docker Compose service making a request to another service by IP

I have a development server where I am using Docker Compose to run all the related services. The docker-compose file looks roughly like:
services:
  webserver:
    ports:
      - "${HOST_PORT}:${CONTAINER_PORT}"
  redis:
    ports:
      - "${REDIS_PORT}:${REDIS_PORT}"
  apiserver:
    ports:
      - "8383:8383"
(not the complete compose file - I've only included what I believe are the relevant bits)
webserver spins up a BullMQ queue, which talks to redis. In one of the jobs that I have configured, I make a request to apiserver via http://apiserver:8383/endpoint. If I ssh into the redis container and view the job that gets queued, I see that it has failed with the following: connect ECONNREFUSED 172.25.0.3:8383.
This works just fine when I run docker compose locally. Any idea why this might not be working?
Update:
Attempting a wget from within the web server instance:
wget http://apiserver:8383
results in:
--2022-12-24 23:58:28-- http://apiserver:8383/
Resolving apiserver (apiserver)... 172.26.0.4
Connecting to apiserver (apiserver)|172.26.0.4|:8383... failed: Connection refused.

Docker Compose service names have to follow the standard domain-name format.
The underscore "_" is not allowed in domain names.
Rename your service to api-server instead and you'll be good.
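For reference, a minimal sketch of what such a rename looks like in the compose file, using the hyphenated name suggested above:

services:
  api-server:          # hyphen instead of underscore keeps the name a valid DNS hostname
    ports:
      - "8383:8383"

Other services on the same network would then reach it at http://api-server:8383.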

Thank you for the responses. It turns out I wasn't paying close enough attention to the startup messages, and the apiserver was being Killed by Docker due to using too much memory. So the connection was refused because the server wasn't actually running.
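For anyone who hits the same symptom: docker inspect reports an OOMKilled flag in the container state, and if a memory limit in the compose file is what triggers the kill, raising or removing it is a small change. A rough sketch, with an illustrative limit value rather than anything from the original setup:

services:
  apiserver:
    ports:
      - "8383:8383"
    mem_limit: 512m      # illustrative value; raise or remove it if the process is being OOM-killed
    # with swarm / the v3 deploy syntax, the equivalent is:
    # deploy:
    #   resources:
    #     limits:
    #       memory: 512m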

Related

Migrate docker-compose to a single node docker-swarm cluster

At the moment I have implemented a flask application, connected to a mysql database, and the entire implementation is running on a single webserver.
In order to avoid exposing my app publicly, I am running it on the localhost interface of the server, and I am only exposing the public interface (port 443) via haproxy, which redirects the traffic to the localhost interface.
The configuration of docker-compose and haproxy can be found below
docker-compose:
version: '3.1'
services:
  db:
    image: mysql:latest
    volumes:
      - mysql-volume:/var/lib/mysql
    container_name: mysql
    ports:
      - 127.0.0.1:3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: xxxxxx
  app:
    #environment:
    #  - ENVIRONMENT=stage
    #  - BUILD_DATETIME=$(date +'%Y-%m-%d %H:%M:%S')
    build:
      context: .
      dockerfile: Dockerfile
    #labels:
    #  - "build_datetime=${BUILD_DATETIME}"
    container_name: stage_backend
    ports:
      - 127.0.0.1:5000:5000
volumes:
  mysql-volume:
    driver: local
sample haproxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 30s
    timeout server 30s
frontend test
    bind *:80
    bind *:443 ssl crt /etc/letsencrypt/live/testdomain.com/haproxy.pem alpn h2,http/1.1
    redirect scheme https if !{ ssl_fc }
    mode http
    acl domain_testdomain hdr_beg(host) -i testdomain.com
    use_backend web_servers if domain_testdomain
backend web_servers
    timeout connect 10s
    timeout server 100s
    balance roundrobin
    mode http
    server test_server 127.0.0.1:5000 check
So haproxy is running on the public interface as a service via systemd (not containerized) and containers are running on localhost.
This is going to become a production setup soon, so I want to deploy a single-node docker swarm cluster within that server only, as docker swarm is safer for a production environment.
My question is how I can deploy this on docker swarm.
Does it make sense to leave haproxy as a systemd service and somehow make it forward requests to the docker swarm cluster?
Or is it an easier/better implementation to also containerize haproxy and put it inside the cluster as a docker-compose service?
If I follow the second approach, how can I make it run on a different interface than the application (haproxy --> public, flask & db --> localhost)?
Again, I am talking about a single server here, so this is why I am trying to separate the network interfaces and only expose haproxy on 443 on the public interface.
Ideally I didn't want to change from haproxy to an nginx reverse proxy, as I am familiar with haproxy and with how ssl termination works there, but I am open to hearing any other implementation that makes more sense.
You seem to be overthinking things, and in the process throwing away security features that docker offers.
First off, docker gives you private networking out of the box in both compose and swarm modes. An implicit network called <stack>_default is created, services are attached to it, and DNS resolution is set up in each container to resolve each service name.
So, assuming your app and db don't explicitly declare any networks, the following implicit declarations apply, and your app can connect to the db using the connection string mysql://db:3306 directly.
The db container does not need to publish, or try to protect, access to this port; only other containers attached to the [stack_]default network will have access.
networks:
  default: # implicit
services:
  app:
    networks:
      default: # implicit
    environment:
      MYSQL: mysql://db:3306
  db:
    networks:
      default: # implicit
At this point, it's your choice whether to run HAProxy as a service or not. Personally I would (and do). It is handy in swarm to have a single service that handles :80 and :443 ingress, does offloading, and then uses docker networks to direct traffic to other services on whatever service:ports handle those connections.
I use Traefik rather than HAProxy as it can use service labels to route traffic dynamically, but either way, having HAProxy as a service means that, if you continue to use it, you can more easily deploy HAProxy config updates.
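To give a rough idea of the containerized option, here is a sketch of a stack file where the proxy is the only service that publishes ports; the image tags and network name are illustrative, not taken from the original setup:

version: "3.8"
networks:
  web:                       # becomes an overlay network when deployed with docker stack
services:
  haproxy:
    image: haproxy:latest
    ports:
      - "80:80"
      - "443:443"            # only the proxy is published on the public interface
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
    networks:
      - web
  app:
    image: stage_backend     # swarm ignores build:, so the app image must be built and tagged beforehand
    networks:
      - web                  # reachable from haproxy as app:5000; nothing published to the host
  db:
    image: mysql:latest
    networks:
      - web

In the HAProxy backend, the server line then points at app:5000 instead of 127.0.0.1:5000.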

Understanding network access to docker containers

I am currently learning docker to be able to host a webpage in a container using nginx. The webpage accesses another container which runs flask. I think I have already solved my problem, but I am not sure why my solution works and would appreciate some feedback.
After setting up the containers, I tried to access the webpage from a firefox browser running on the host, which was successful. The browser reported CORS problems as soon as a service of the web page tried to contact flask.
However, after some hours of trying to fix the problem, I used the chrome browser which responded with another error message indicating that the flask address couldn't be reached.
The containers are started with docker-compose with the following yaml:
version: "3.8"
services:
webbuilder:
build: /var/www/html/Dashboard/
container_name: webbuilder
ports:
- 4998:80
watcher:
build: /home/dev/watcher/
container_name: watcher
ports:
- 5001:5001
expose:
- 5001
volumes:
- /var/www/services/venv_worker:/var/www/services/venv_worker
- /var/www/services/Worker:/var/www/services/Worker
- /home/dev/watcher/conf.d:/etc/supervisor.conf.d/
command: ["/usr/bin/supervisord"]
webbuilder is the nginx server hosting the web page. watcher is the flask server serving on 0.0.0.0:5001. I exposed this port and mapped it to the host for testing purposes.
I know that the containers generated with docker-compose are connected in a network and can be contacted using their container names instead of an actual ip address. I tested this with another network and it worked without problems.
The webpage running on webbuilder starts the service contacting watcher (where the flask server is). Because the container names can be resolved, the web page used the following address for http requests in my first approach:
export const environment = {
  production: true,
  apiURL: 'http://watcher:5001'
};
In this first attempt, there was no ports section in the docker-compose.yml, as I thought that the webpage inside the container could contact the watcher container running flask directly. This led to the CORS error message described above.
In a desperate attempt to solve the problem, I replaced the container name in apiURL with the concrete IP address of the container and also mapped the flask port 5001 to the host port 5001.
Confusingly, this works. The following images show what happens, in my opinion.
The first picture shows my initial understanding of the situation. As this did not work, I am sure that it is wrong that the http request is executed by webbuilder. Instead, webbuilder only serves the homepage, and the service is executed from the host as depicted in image 2:
Is the situation described in image 2 correct? I think so, but it would be good if someone could confirm.
Thanks in advance.
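For what it's worth, the image-2 interpretation matches how the addresses have to be chosen: the environment file is bundled into the page and evaluated by the browser on the host, so apiURL has to be an address the host can reach (a published port), while only code running inside the webbuilder container could use the watcher service name. A sketch of that distinction, assuming the browser reaches the host as localhost:

services:
  webbuilder:
    ports:
      - 4998:80       # the browser on the host loads the page via http://localhost:4998
  watcher:
    ports:
      - 5001:5001     # published, so the browser can call http://localhost:5001
    # expose: 5001 on its own only makes the port reachable from other containers,
    # i.e. http://watcher:5001 works from code running inside webbuilder, not from the browser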

azurite container not exposing ssl

I'm trying to set up azurite using these instructions, and I see a number of others have been successful with this. I need to configure SSL (and eventually OAuth) for my client app testing. The azurite container works fine without SSL, but when SSL is activated my client can't connect because the container isn't exposing the certificate.
I used mkcert to create the certificate. This is my docker-compose file. I'm mounting /certs and /data from my host.
version: '3.9'
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: "azurite"
    hostname: azurite
    restart: always
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    command: "azurite --oauth basic --cert /certs/127.0.0.1.pem --key /certs/127.0.0.1-key.pem --debug /logs/azurite-debug.log"
    volumes:
      - ./azurite-store:/data
      - ./certs:/certs
      - ./azurite-logs:/logs
Using openssl inside the container shows the cert I expect from mkcert, mounted as per the compose file.
From my laptop, however, openssl shows no certificate.
And there ends the fun! Why is the cert visible on the URL inside the container, but not from the outside? I can't see anything in the compose file that would control whether a cert is exposed - I'm reasonably sure docker doesn't work like that - it's only exposing the tcp/ip layer to my laptop.
If I stop the container, port 10000 isn't reachable; start it and it opens, so I don't think it's another process that I'm connecting to by mistake. Also, the fact that I get a connection means that it's not a connectivity issue.
Anyone got any thoughts on this one - it's weird! "Cert filtering" if I can call it that is certainly a new one!?
A little time away from the keyboard always helps
Looks like this is an application binding issue. Node had bound to loopback (with the cert), and while docker had mapped the port out to my host, at the application layer node wasn't listening with a cert on that port. Changing blobHost to 0.0.0.0 allowed node to bind to all IP addresses in the container, which in turn meant the cert was visible on the mapped port.
The docker-compose "command" becomes;
command: "azurite --oauth basic -l /data --cert /certs/127.0.0.1.pem --key /certs/127.0.0.1-key.pem --debug /logs/azurite-debug.log --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0"
I also found that despite mapping /data to a local volume I was losing the blob containers on a container restart. Adding "-l /data" solved that one too.
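Putting both fixes together, the azurite service from the compose file above ends up roughly like this (same image, ports and volumes as before; only the command changes):

services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: "azurite"
    hostname: azurite
    restart: always
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    # -l /data persists data to the mounted volume; --blobHost/--queueHost/--tableHost 0.0.0.0
    # make node listen on all container addresses instead of loopback
    command: "azurite --oauth basic -l /data --cert /certs/127.0.0.1.pem --key /certs/127.0.0.1-key.pem --debug /logs/azurite-debug.log --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0"
    volumes:
      - ./azurite-store:/data
      - ./certs:/certs
      - ./azurite-logs:/logs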

GRPC service working fine running locally, but failing to connect when run from a Docker container - How to debug?

I have a GRPC service that I've implemented with TypeScript, and I'm interactively testing with BloomRPC. I've set it up (both client and server) with insecure connections to get things up and running. When I run the service locally (on port 3333), I'm able to interact with the service perfectly using BloomRPC - make requests, get responses.
However, when I put the service into a Docker container and expose the same ports to the local machine, BloomRPC returns an error:
{
"error": "2 UNKNOWN: Stream removed"
}
I've double checked the ports, and they're open. I've enabled the additional GRPC debugging output logging, and tried tracing the network connections. I see a network connection through to the service on Docker, but then it terminates immediately. When I looked at tcpdump traces, I could see the connection coming in, but no response is provided from my service back out.
I've found other references to 2 UNKNOWN: Stream removed which appear to primarily be related to SSL/TLS setup, but as I'm trying to connect this in an insecure fashion, I'm uncertain what's happening in the course of this failure. I have also verified the service is actively running and logging in the docker container, and it responds perfectly well to HTTP requests on another port from the same process.
I'm at a loss as to what's causing the error and how to further debug it. I'm running the container using docker-compose, alongside a Postgres database.
My docker-compose.yaml looks akin to:
services:
  sampleservice:
    image: myserviceimage
    environment:
      NODE_ENV: development
      GRPC_PORT: 3333
      HTTP_PORT: 8080
      GRPC_VERBOSITY: DEBUG
      GRPC_TRACE: all
    ports:
      - 8080:8080
      - 3333:3333
  db:
    image: postgres
    ports:
      - 5432:5432
Any suggestions on how I could further debug this, or that might explain what's happening so that I can run this service reliably within a container and interact with it from outside the container?

How to connect to a remote service from a Docker Container

My dotnet core app running in a docker container needs to connect to some external services via their IPs, one of which is an sql database running separately on a remote server hosted on google cloud. The app runs without issue when not running with docker; however, with docker it fails with
An error occurred using the connection to database 'PartnersDb' on server '30.xx.xx.xx,39876'.
fail: Microsoft.EntityFrameworkCore.Update[10000]
      An exception occurred in the database while saving changes for context type 'Partners.Api.Infrastructure.Persistence.MoneyTransferDbContext'.
      System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
My Docker compose file looks like this
version: "3.5"
networks:
my-network:
name: my_network
services:
partners:
image: partners.api:latest
container_name: partners
build:
context: ./
restart: always
ports:
- "8081:80"
environment:
- ASPNETCORE_ENVIRONMENT=Docker
- ConnectionStrings:DefaultConnection=Server=30.xx.xx.xx,39876;Database=PartnersDb;User Id=don;Password=Passwor123$$
networks:
- my-network
volumes:
- /Users/mine/Desktop/logsā©:/logs
I have bash'ed into the running container and I'm able to ping the remote sql database server. I've also been able to telnet to the remote sql database server on the database port.
However, the problem arises when I do docker-compose up - then I get the error above.
Docker version 19.03.5, build 633a0ea is running on MacOS Mojave 10.14.6
I really do not know what to do at this stage.
There are two separate networks: inside docker is one network, and outside docker, on the host machine, is a different network. If you are accessing the service by localhost or IP address, it won't work the way you are expecting.
docker network ls
will show you output similar to the below:
NETWORK ID     NAME          DRIVER    SCOPE
58a4dd9893e9   133_default   bridge    local
424817227b42   bridge        bridge    local
739297b8107e   host          host      local
b9c4fb3ed4ba   none          null      local
You need to add a host entry for the remote service. Try running a command like the one below:
For Service:
docker run --add-host remoteservice:<ip address of remote service> <your image>
Hopefully, this will fix it.
More here: https://docs.docker.com/engine/reference/run/#managing-etchosts
For PartnersDb Database:
If PartnersDb is a SQL Server database, you'll have to configure SQL Server to listen on a specific port, through SQL Server Configuration Manager > SQL Server Network Configuration > TCP/IP Properties.
More here: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/configure-a-server-to-listen-on-a-specific-tcp-port?view=sql-server-ver15
There are similar settings for MySQL as well.
After that, run the container with the --add-host switch:
docker run --add-host PartnersDb:<ip address of PartnersDb database> <your image>
You can also update the hosts file with these entries, but I'd prefer doing it through the command line.
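Since the question uses docker-compose rather than docker run, the compose-file equivalent of --add-host is extra_hosts; a sketch against the partners service from above, with the hostname/IP pair as placeholders:

services:
  partners:
    image: partners.api:latest
    extra_hosts:
      - "PartnersDb:30.xx.xx.xx"   # adds an /etc/hosts entry inside the container
    networks:
      - my-network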
