Redis docker not available on localhost of docker-compose

I've run into a problem I've been trying to solve for more than a week, and I'm getting nowhere. I hope you can point me in the right direction.
Initial Situation
Description
The project I am building is a NestJS app that connects to some APIs. Internally, it uses bullmq as a message queue, which itself uses ioredis to connect to a Redis database. I've connected my self-written server component as well as Redis (which runs in Docker) via docker-compose up with the following configuration:
version: '3'
services:
  server:
    image: myserver:1.4.0
    container_name: myserver
    depends_on:
      - db
    ports:
      - 3001:3000
    environment:
      - REDIS_HOST=redis
  db:
    image: redis:6.0.8
    container_name: redis
    ports:
      - 6379:6379
Versions
Workstation
Docker version 19.03.13, build 4484c46d9d
docker-compose version 1.27.4, build 40524192
Server Component
bullmq 1.9.0
ioredis 4.17.3
redis docker
6.0.8
Problem
The problem is that my server component tries to connect to the Redis instance at the given REDIS_HOST on port 6379 using the following code:
readonly connection = new Redis(
  +(process.env.REDIS_PORT ?? this.configService.get('redis_port')),
  process.env.REDIS_HOST ?? this.configService.get('redis_host'),
);
but throws the following error:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379
I expected it to simply see the Redis instance at the exposed port.
So it doesn't find a Redis instance at 127.0.0.1, but shouldn't it use the given IP in the first place?
Things I checked
The server code is correct: REDIS_HOST is correctly passed into the ioredis call. Digging further into ioredis, I found this issue. Given all the hints there, the instance should be reachable; locally on my workstation I use 0.0.0.0:6379 to connect and it works just fine.
Docker Compose creates a network bridge automatically. Using netcat from the server container's console, I checked port 6379 on the IP of the Redis container (as well as on the aliases redis and db): the Redis instance is reachable.
I then explicitly set the subnet via the docker-compose network configuration and gave the containers static IPs, but as already described: the IP is resolved correctly.
I also found issue 204 on the Docker GitHub. I think this is exactly the problem I am facing, but how does one solve it?
tl;dr: ioredis tries to connect to the correctly resolved IP of the Redis instance but fails, as the instance is not available on the local IP of the server component.
What my current state is
I sob uncontrollably.
I am currently out of ideas for getting the "myserver" container to connect to the Redis instance via ioredis. My view is that the problem has to be connected to the way Docker on Windows resolves IPs to 127.0.0.1.
Is my point of view right?
What other approaches can you suggest?
Best regards & thanks in advance.
Edit (2020-11-27): After some digging and further investigation of the suggestions by Jeffrey Mixon, I'm unfortunately no closer to a solution. My last steps included:
updating all dependencies (among others, bullmq to v1.11 and ioredis to 4.19.2). This did not change anything.
I then found a relatively new post on the bullmq issue board about a similar problem, and I switched from reusing the connection object shown above to always creating a new connection, as explained in the bullmq docs. But this did not help either.
new Queue(name, {
  connection: {
    host: this.redisHost,
    port: this.redisPort,
  },
})
I then switched from using the 'Redis' object of the ioredis library to 'IORedis', but still nothing changed in the behavior of my application container. Even though the Redis connection is called with the correct Redis host, it still tries to connect to 127.0.0.1:6379 as shown above.
Lastly, there is strange behavior: if I choose an unresolvable host URL, the application container correctly tries to connect to that unresolvable host. But as soon as the host is available in the docker-compose network, it uses 127.0.0.1.
Edit (2020-12-01):
In the meantime, I checked on a clean Linux machine whether the problem occurs only on Docker for Windows, but it happens on Linux as well.
I did not solve the problem itself, but I bypassed it by putting everything inside one container. As my application is more of a proof of concept, that is no big pain. I will leave the question open in case there is a solution in the future or more people have the same question.
For those wondering: my Dockerfile now stacks another layer on top of a Redis image. I'm adding the parts from the ng-cli-e2e image I used before. At the beginning of my existing Dockerfile I added:
FROM redis:6.0.9-buster
RUN apt update
RUN apt install nodejs -y
RUN apt install npm -y
Finally, I created a small wrapper script that just starts the Redis server as well as my application. I'm also exposing two ports now in case I want to access everything from my machine.
EXPOSE 3000 6379
CMD ./docker-start-wrapper.sh
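The wrapper script itself isn't shown above; as a minimal sketch (assuming the redis-server binary from the base image and npm start for the app, which is what the Dockerfile implies), docker-start-wrapper.sh could look like this:

```shell
#!/bin/sh
# Hypothetical docker-start-wrapper.sh:
# start Redis in the background, then run the Node app in the foreground
# so the container stays alive as long as the app does.
redis-server --daemonize yes
exec npm start
```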
It's not the most beautiful solution, but it does work for the moment.

The problem is that your application container is using localhost as the hostname for connecting to the redis container. It should be using the hostname redis in this case.
Consider the following demonstration:
version: '3.7'
services:
  server:
    image: busybox
    container_name: myserver
    entrypoint: /bin/sh
    # if redis is replaced by localhost or 127.0.0.1 below, this container will fail
    command: "-c \"sleep 2 && nc -v redis 6379\""
    depends_on:
      - db
    ports:
      - 3001:3000
    environment:
      - REDIS_HOST=redis
  db:
    image: busybox
    container_name: redis
    entrypoint: /bin/nc
    command: "-l -p 6379 -v"
    # it is not necessary to publish these ports
    #ports:
    #  - 6379:6379
$ docker-compose -f scratch_2.yml up
Creating network "scratches_default" with the default driver
Creating redis ... done
Creating myserver ... done
Attaching to redis, myserver
redis | listening on [::]:6379 ...
myserver | redis (172.25.0.2:6379) open
redis | connect to [::ffff:172.25.0.2]:6379 from [::ffff:172.25.0.3]:34821 ([::ffff:172.25.0.3]:34821)
myserver exited with code 0
redis exited with code 0
When you publish ports, they are for use from outside the containers, on the host. By attempting to connect your myserver container to 127.0.0.1, the container is simply attempting to connect to itself.

The problem is that with docker-compose, Redis is not on localhost; it is on its own network instead. By default, all the containers in a docker-compose file share the same default network, so your Redis container should be reachable by all the other containers in that docker-compose under its service name (in your case db) or its container name (redis).
Another point to note: if you are using bullmq, not only the Queue options need a custom connection; any Worker or QueueScheduler that you use needs one too, so you should pass the custom connection options to them as well.
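As a sketch, one way to ensure every bullmq component gets the same, correctly-hosted connection is to build the options object once from the compose environment and pass it everywhere; the queue name and the Queue/Worker/QueueScheduler calls in the comments are illustrative, following the bullmq API used above:

```javascript
// Build one connection config from the docker-compose environment.
// Inside the compose network, the host must be a name that resolves
// there ("redis" per REDIS_HOST=redis above), never "localhost".
const connection = {
  host: process.env.REDIS_HOST ?? 'redis',
  port: +(process.env.REDIS_PORT ?? 6379),
};

// Pass the same options to every bullmq component, e.g. (assumed usage):
//   const { Queue, Worker, QueueScheduler } = require('bullmq');
//   new Queue('jobs', { connection });
//   new Worker('jobs', processor, { connection });
//   new QueueScheduler('jobs', { connection });
```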

Related

Hasura Console error: [...] No connection could be made because the target machine actively refused it

I have the Docker toolbox version installed to be able to run Hasura locally. Docker is fully functional (I think) and up-to-date and can successfully pull images and spin up containers. I think the Hasura CLI is successfully installed as well, as some commands like hasura version or --help can be successfully executed; however, when I try to run the hasura console command in the terminal, this error is returned:
time="2020-09-15T09:28:16-05:00" level=fatal msg="version check: failed to get version from server: failed making version api call: Get http://localhost:8080/v1/version: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it."
I entirely disabled my antivirus and all firewalls I could find, and added a PATH environment variable for hasura as the documentation suggests, then ran the command again, but that didn't fix the issue.
Does anyone know what might be causing this? I apologize if the question is vague, I'm very new to both Docker and Hasura. Please let me know if any further info is needed! Thank you!!
I believe the reason might be that the graphql-engine wasn't running. Try this:
Run hasura init
In the root directory, create a docker-compose.yaml file with the following content:
version: '3.6'
services:
  postgres:
    image: postgres:13.0
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgres
  graphql-engine:
    image: hasura/graphql-engine:latest
    ports:
      - "8080:8080"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgres@postgres:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "false"
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
volumes:
  db_data:
Also see this: https://hasura.io/docs/1.0/graphql/core/getting-started/docker-simple.html#docker-simple
Start docker services with docker-compose up
In another terminal run hasura console --admin-secret myadminsecretkey. You should see the network address of the console in the terminal output.
I'm a complete noob with hasura, but encountered this problem myself.
This is the real solution:
Run the command in PowerShell as admin.
Make sure the config.yaml points to the localhost:PORT that your graphql-engine instance is running on.
I.e.:
Follow the Docker setup process and run graphql-engine (on Docker).
Next: hasura init -> edit config.yaml to point it to where the engine is running -> hasura console
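For reference, a config.yaml matching the compose file above might look like the following sketch (key names as in the Hasura CLI docs; the admin secret is the one from the compose file):

```yaml
# Hypothetical config.yaml for the setup above
version: 2
endpoint: http://localhost:8080
admin_secret: myadminsecretkey
```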

mediasoup v3 with Docker

I'm trying to run 2 WebRTC examples (using mediasoup) in Docker.
I want to run two servers as I am working on video calling across a set of instances!
My error (have you seen it before?):
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it's something to do with setting the docker network?
docker-compose.yml
version: "3"
services:
  db:
    image: mysql
    restart: always
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      - "2000-2020"
      - "80:8080"
    depends_on:
      - db
  app2:
    image: app
    build: .
    ports:
      - "1441:443"
      - "2000-2020"
      - "81:8080"
    depends_on:
      - db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start
It says it couldn't bind the address, so it could be the IP or the port that causes the problem.
The IP seems to be the IP of the Docker instance. Although the Docker instances are on two different machines, it should be the IP of the server, not of the Docker instance (in the mediasoup settings).
There are also RTP/RTCP ports that have to be opened in the Docker instance. They are normally set in the mediasoup config file, usually a range of a few hundred ports.
You should set your RTC min and max ports to 2000 and 2020 for testing purposes. Also, you are not forwarding these ports, I guess. In docker-compose, use 2000-2020:2000-2020. Also make sure to set your listenIps properly.
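In compose terms, that means publishing the RTC range 1:1 for each app service; a sketch (the range must match the rtcMinPort/rtcMaxPort configured in mediasoup, and a matching /tcp line would be needed if TCP transport is enabled):

```yaml
services:
  app:
    ports:
      - "1440:443"
      - "2000-2020:2000-2020/udp"   # publish the RTC port range 1:1
      - "80:8080"
```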
If you are running mediasoup in Docker, the container where mediasoup is installed should run in host network mode.
This is explained here:
How to use host network for docker compose?
and official docs
https://docs.docker.com/network/host/
Also, pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp: they tell the client on which IP address your mediasoup server is listening.
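A sketch of what those settings could look like; the announcedIp value is a placeholder that must be replaced with the machine's externally reachable address:

```javascript
// Hypothetical mediasoup transport options: bind on all interfaces inside
// the container, but announce the host's real IP to WebRTC clients.
const webRtcTransportOptions = {
  listenIps: [
    { ip: '0.0.0.0', announcedIp: '203.0.113.10' }, // placeholder public IP
  ],
};

const plainRtpTransportOptions = {
  listenIp: { ip: '0.0.0.0', announcedIp: '203.0.113.10' }, // same placeholder
};
```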

Name a service in docker compose to be used as a hostname

I want to spin up a database container (e.g: MongoDb) with docker-compose so that I can run some tests against the database.
This is my mongodb.yml docker-compose file.
version: '3.7'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example
  mongo-express:
    image: mongo-express:latest
    restart: always
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=example
    depends_on:
      - mongo
When I run it with docker-compose -f mongodb.yml up I can successfully connect to MongoDb on localhost. In other words, the following connection string is valid: "mongodb://root:example@localhost:27017/admin?ssl=false"
I want to use the equivalent of alias so that, instead of localhost, MongoDB is accessible through the hostname potato.
In GitLab CI/CD, with a Docker runner, I can spin up a mongo container and provide an alias without any issue. Like this:
my_ci_cd_job:
  stage: my_stage
  services:
    - name: mongo:latest
      alias: potato
  variables:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
This allows me to connect within the GitLab CI/CD with "mongodb://root:example@potato:27017/admin?ssl=false"
I need the same locally, so that I can reuse my connection string.
I've added under image: mongo:latest the following
container_name: potato
But I cannot connect to the server potato. I've tried a few combinations with network, alias, etc. No luck. I don't even understand what I am doing anymore. Isn't there a simple equivalent to give an alias to a container so that my C# app or MongoDB client can access it?
Even the documentation, at the section https://docs.docker.com/compose/compose-file/#external_links is useless in my opinion. It mentions random samples not defined elsewhere.
Any help is much appreciated!
I've tried..
I've tried the following: How do I set hostname in docker-compose? without success
I have spent a few hours reading the Docker Compose documentation and it's extremely confusing. The fact that most of the questions are out of context, without specific examples, does not help either, because it requires some deeper knowledge.
SOLUTION
Thanks to the replies, it's clear this is a hacky thing that is not really recommended.
I went with the recommendations and I have now the connection string as:
mongodb://root:example@potato:27017/admin?ssl=false
by default. That way, I don't need to change anything for my GitLab CI/CD pipeline that has the alias potato for mongo db.
When running the MongoDB container locally with docker-compose, it does so on localhost, but I edited my hosts file (e.g. C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux) to make any request to potato go to localhost, i.e. 127.0.0.1:
127.0.0.1 potato
And that way I can connect to MongoDb from my code or Robo3T as if it was running on a host called potato.
Feel free to comment any better alternative if there is. Ta.
If I understand you correctly, you want to bind a hostname (e.g., potato) to an ip address on your host machine.
Afaik this is not possible, but there are workarounds[1].
Every time you start your docker-compose, a network is created for those containers, and there is no way for you to be sure which IP addresses they will get. These could be 172.17.0.0/24 or 172.14.0.0/24 or anything else.
The only thing you know for sure is that your host will have a service running at port 27017. So you could say that the hostname potato points to localhost on your hostmachine by adding 127.0.0.1 potato to your /etc/hosts file on your host.
That way the connection string "mongodb://root:example@potato:27017/admin?ssl=false" will point to the published port from the perspective of your host machine, while your other docker-compose services reach the container via its service name.
I do have to say that I find this a hacky approach, and as @DavidMaze said, it's normal to need different connection strings depending on the context you use them in.
[1] https://github.com/dlemphers/docker-local-hosts
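For completeness: if the goal were only container-to-container resolution (mirroring the GitLab alias), a compose network alias would do it; it does not, however, make potato resolvable from the host, which is why the hosts-file workaround is still needed there. A sketch:

```yaml
services:
  mongo:
    image: mongo:latest
    networks:
      default:
        aliases:
          - potato   # other containers on this network can use "potato"
```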

HttpException: -404 Failed to connect to remote server on mac while running Docker

I am getting the error HttpException: -404 Failed to connect to remote server while running a jar file in Docker, executing the command docker exec -it Test_docker java -jar TestDocker.jar.
Note: I created the containers on Windows, where my docker-machine IP is 192.168.99.100 and my docker exec command runs successfully. On Windows I access the SPARQL endpoint using the URL http://192.168.99.100:8890/sparql, and this works perfectly. But when I use the same on Mac, it gives me the error mentioned above. I have also tried changing the SPARQL endpoint in my code to http://localhost:8890/sparql, but that does not work either: it works fine in the Chrome browser on Mac, but executing it through the command still gives me the error.
Here my docker-compose file,
version: "3"
services:
  jardemo_test:
    container_name: Test_docker
    image: "java:latest"
    working_dir: /usr/src/myapp
    volumes:
      - /docker/test:/usr/src/myapp
    tty: true
    depends_on:
      - virtuoso
  virtuoso:
    container_name: virtuoso_docker
    image: openlink/virtuoso_opensource
    ports:
      - "8890:8890"
      - "1111:1111"
    environment:
      DB_USER_NAME: dba
      DBA_PASSWORD: dba
      DEFAULT_GRAPH: http://localhost:8890/test
    volumes:
      - /docker/virtuoso-test/:/data
Note: I have tried setting the default graph URL environment variable in the docker-compose file with all the IP address combinations listed below, but I still get the same error:
DEFAULT_GRAPH: http://localhost:8890/test
DEFAULT_GRAPH: http://127.0.0.1:8890/test
DEFAULT_GRAPH: http://0.0.0.0:8890/test
below is my docker-compose ps result,
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------
Test_docker /bin/bash Up
virtuoso_docker /opt/virtuoso-opensource/b ... Up 0.0.0.0:1111->1111/tcp, 0.0.0.0:8890->8890/tcp
Below is my code which I am using,
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://localhost:8890/sparql", queryString);
ResultSet results1 = qexec.execSelect();
Info: after the containers start successfully, I can access http://localhost:8890/sparql in the browser on the Mac without problems.
Can anybody please help me solve this issue? Suggestions and thoughts are also welcome. Thanks in advance for your help and time.
As my colleague suggested, the problem is that the code inside the container sees the container itself as localhost. The IP address 192.168.99.100 is also not known, because the Mac doesn't have it. For connections between containers, Docker uses its own network, and the docker-compose service names are used as references. So instead of http://localhost:8890/sparql, you should use http://virtuoso:8890/sparql, as virtuoso is the service name.
I tried this and it solved my problem.

Access host machine dns from a docker container

I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my mac I can just run: curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
It needs to redirect the user in the browser to the $SERVICE_A_URL
It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work, because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to; localhost will not work, of course.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to have access to the DNS resolving on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?
You can use the links instruction in your docker-compose.yml file to automatically resolve the address from your service-b container:
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b saying:
service-a 172.17.0.X
Also note that service-a will be created before service-b when composing your app. I'm not sure how to specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
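If service-b only needs the name to resolve to the host machine (where nginx proxies service-a.here to port 4000), a hedged alternative is an extra_hosts entry; the special host-gateway value requires a reasonably recent Docker (20.10+), so treat this as a sketch rather than something guaranteed to work on the Docker beta mentioned above:

```yaml
service-b:
  image: blabla
  extra_hosts:
    - "service-a.here:host-gateway"   # resolves to the Docker host
```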
