Issue with Redis and Docker

I currently have a very strange error with Docker, more precisely with Redis.
My backend runs on Node.js with TypeScript:
code
const redisPubSubOptions: any = {
  host: process.env.REDIS_HOST || "127.0.0.1",
  port: process.env.REDIS_PORT || 6379,
  connectTimeout: 10000,
  retryStrategy: (times: any) => Math.min(times * 50, 2000),
};

export const pubsub: RedisPubSub = new RedisPubSub({
  publisher: new Redis(redisPubSubOptions),
  subscriber: new Redis(redisPubSubOptions),
});
Dockerfile
FROM node:14-alpine as tsc-builder
WORKDIR /usr/src/app
COPY . .
RUN yarn install
EXPOSE 4000
CMD yarn run dev
docker-compose
version: "3.8"
services:
backend:
build: .
container_name: backend
ports:
- 4242:4242
depends_on:
- redis
env_file:
- ./docker/env/.env.dev
environment:
- ENVIRONMENT=development
- REDIS_PORT=6379
- REDIS_HOST=redis
redis:
image: redis:6.0.12-alpine
command: redis-server --maxclients 100000 --appendonly yes
hostname: redis
ports:
- "6379:6379"
restart: always
When I start my server, the backend comes up, and then the Redis error follows:
Error: connect ECONNREFUSED 127.0.0.1:6379

Redis and your backend run in different containers, so they have different IP addresses on the Docker network. You are trying to connect to 127.0.0.1, which is the loopback address of the backend container itself.
Method 1:
Since you are using docker-compose (which automatically creates a network between the services), you can use the service name instead of 127.0.0.1. For example:
const redisPubSubOptions: any = {
  host: process.env.REDIS_HOST || "redis",
  port: process.env.REDIS_PORT || 6379,
  connectTimeout: 10000,
  retryStrategy: (times: any) => Math.min(times * 50, 2000),
};

export const pubsub: RedisPubSub = new RedisPubSub({
  publisher: new Redis(redisPubSubOptions),
  subscriber: new Redis(redisPubSubOptions),
});
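To double-check which host the clients actually end up using, a small connectivity probe can help. This is a minimal sketch assuming ioredis (the Redis class used above) and the redis service name from the compose file; the probe client and log messages are purely illustrative:

import Redis from "ioredis";

// Quick connectivity check (illustrative only; "redis" is the compose service name).
const host = process.env.REDIS_HOST || "redis";
const port = Number(process.env.REDIS_PORT || 6379);
const probe = new Redis({ host, port, connectTimeout: 10000 });

probe.on("connect", () => console.log(`Connected to Redis at ${host}:${port}`));
probe.on("error", (err) => console.error("Redis connection error:", err.message));

// PING resolves to "PONG" once the server is reachable.
probe.ping().then((reply) => console.log("Redis replied:", reply));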
Method 2:
The other option is to publish the Redis port on the IP address of the Docker bridge interface on the host machine. Most of the time that is 172.17.0.1, but with ip -o a (if you are on Linux) you can see the Docker interface and its IP address.
So you need to change the redis service like this:
redis:
  image: redis:6.0.12-alpine
  command: redis-server --maxclients 100000 --appendonly yes
  hostname: redis
  ports:
    - "172.17.0.1:6379:6379"
  restart: always
With this, Redis is published on 172.17.0.1:6379 (or whichever IP address the Docker interface has on the host), and you can simply use that address in the application.
Note: You can handle these values with environment variables, which is a better and more standard solution.
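As a rough sketch of that idea, assuming ioredis again (environment variables always arrive as strings, so the port is parsed explicitly; the publisher/subscriber names are just illustrative):

import Redis, { RedisOptions } from "ioredis";

// Build the connection options from the environment, with sensible fallbacks.
const redisOptions: RedisOptions = {
  host: process.env.REDIS_HOST || "redis",       // compose service name by default
  port: Number(process.env.REDIS_PORT || 6379),  // env vars are strings, so parse the port
  connectTimeout: 10000,
  retryStrategy: (times) => Math.min(times * 50, 2000),
};

export const publisher = new Redis(redisOptions);
export const subscriber = new Redis(redisOptions);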

Related

Go backend to redis connection refused after docker compose up

I'm currently trying to introduce Docker Compose to my project. It includes a Golang backend using the Redis in-memory database.
version: "3.9"
services:
frontend:
...
backend:
build:
context: ./backend
ports:
- "8080:8080"
environment:
- NODE_ENV=production
env_file:
- ./backend/.env
redis:
image: "redis"
ports:
- "6379:6379"
FROM golang:1.16-alpine
RUN mkdir -p /usr/src/app
ENV PORT 8080
WORKDIR /usr/src/app
COPY go.mod /usr/src/app
COPY . /usr/src/app
RUN go build -o main .
EXPOSE 8080
CMD [ "./main" ]
The build runs successfully, but after starting the services, the Go backend immediately exits, throwing the following error:
Error trying to ping redis: dial tcp 127.0.0.1:6379: connect: connection refused
The error is caught here:
_, err = client.Ping(ctx).Result()
if err != nil {
    log.Fatalf("Error trying to ping redis: %v", err)
}
Why isn't the backend Docker service able to connect to Redis? Important note: when the Redis service is running and I start my backend manually using go run *.go, there's no error and the backend starts successfully.
When you run your Go application inside a Docker container, the localhost IP 127.0.0.1 refers to that container itself. You should use the hostname of your Redis container to connect from your Go container, so your connection string would be:
redis://redis
I found I was having this same issue. Simply changing Addr: "localhost:6379" to Addr: "redis:6379" (in redis.NewClient(&redis.Options{...})) worked.
I faced a similar issue with Golang and Redis.
version: '3.0'
services:
  redisdb:
    image: redis:6.0
    restart: always
    ports:
      - "6379:6379"
    container_name: redisdb-container
    command: ["redis-server", "--bind", "redisdb", "--port", "6379"]
  urlshortnerservice:
    depends_on:
      - redisdb
    ports:
      - "7777:7777"
    restart: always
    container_name: url-shortner-container
    image: url-shortner-service
In the Redis client configuration, use:
redisClient := redis.NewClient(&redis.Options{
    Addr:     "redisdb:6379",
    Password: "",
    DB:       0,
})

Cannot connect to Redis from Laravel Application

I have to configure Redis with Socket.IO in my Laravel application. However, whatever I have tried so far, I get the same error:
Connection refused [tcp://127.0.0.1:6379]
I can go into the container with docker exec -it id sh, and when I ping the server I get the PONG message. The client is already set to 'predis' in my database.php file, and the package is installed as well.
.env
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
docker-compose.yml
version: "2"
services:
api:
build: .
ports:
- 9000:9000
volumes:
- .:/app
- /app/vendor
depends_on:
- postgres
- redis
environment:
DATABASE_URL: postgres://xx#postgres/xx
postgres:
image: postgres:latest
environment:
POSTGRES_USER: xx
POSTGRES_DB: xx
POSTGRES_PASSWORD: xx
volumes:
- .Data:/var/lib/postgresql/data
ports:
- 3306:5432
redis:
build: ./Redis/
ports:
- 6003:6379
volumes:
- ../RedisData/data:/data
command: redis-server --appendonly yes
Dockerfile (redis)
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
The error says it cannot connect to 127.0.0.1 on port 6379, so make sure both the host and the port are right:
host 127.0.0.1 is only correct if you run PHP on the same host as Redis, or on the Docker host machine; in that case the port has to be 6003, the published port.
port 6379 is only correct from inside the Docker network; in that case the host is wrong, and you must use the Docker service hostname: redis.
Also make sure the configuration cache is up to date.
Set your REDIS_HOST to redis, like this: REDIS_HOST=redis. The reason is that in your docker-compose file you specified redis as the name of your Redis service.
I had the same issue.
I also updated the following in redis.conf, changing
bind 127.0.0.1
to
bind redis
since redis is the host now.

Connect to Redis Docker container from Vagrant machine

We're making the move to Docker from Vagrant.
Our first aim is to move some services out first. In this case, I'm trying to host a Redis server in a Docker container and connect to it from my Vagrant machine.
On the Vagrant machine there is an Apache2 web server hosting a Laravel app.
It's the connection part I'm struggling with. Currently I have:
Dockerfile.redis
FROM redis:3.2.12
RUN redis-server
docker-compose.yml (concatenated)
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
I've tried various ways to connect to this:
Attempt 1
Using the host IP 10.0.2.2 in the Laravel config. This results in a "Connection refused".
Attempt 2
Setting up a network in the docker-compose file:
redis:
  build:
    context: .
    dockerfile: Dockerfile.redis
  working_dir: /opt
  networks:
    app_net:
      ipv4_address: 172.16.238.10
  ports:
    - "6379:6379"
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
This instead results in timeouts. Most solutions seem to require a gateway configured on the network, but this isn't configurable in docker compose 3. Is there maybe a way around this?
If anyone can give any guidance, that would be great; most guides talk about connecting to Docker containers running inside a Vagrant machine rather than from one.
FYI - this is using Docker for Mac and version 3 of docker compose
We were able to get this going using purely docker-compose, without having a Dockerfile for Redis at all:
redis:
  image: redis
  container_name: redis
  working_dir: /opt
  ports:
    - "6379:6379"
Once set up like this, we were able to connect to Redis from within the Vagrant machine using:
redis-cli -h 10.0.2.2
Or with the following in Laravel (although we're using environment variables to set these):
'redis' => [
    'client' => 'phpredis',
    'default' => [
        'host' => '10.0.2.2',
        'password' => null,
        'port' => 6379,
        'database' => 0,
    ]
]
Your Attempt 1 should actually work. When you create a service without defining a network, docker-compose automatically creates a bridge network for it. For example, when you run docker-compose up on this:
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
docker-compose creates a bridge network named <project name>_default, which is docker_compose_test_default in my case, as shown below:
me#myshell:~/docker_compose_test $ docker network ls
NETWORK ID NAME DRIVER SCOPE
6748b1ea4b85 bridge bridge local
4601c6ea30c3 docker_compose_test_default bridge local
80033acaa6e4 host host local
When you inspect your container, you can see that an IP has already been assigned to it:
docker inspect e6b196f952af
...
"Networks": {
"bridge": {
...
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
You can then use this IP to connect from the host or your vagrant box:
me#myshell:~/docker_compose_test $ redis-cli -h 172.18.0.2 -p 6379
172.18.0.2:6379> ping
PONG

Connecting to MySQL from Flask Application using docker-compose

I have an application using Flask and MySQL. The Flask application cannot connect to the MySQL container, but the database can be accessed using Sequel Pro with the same credentials.
Docker Compose File
version: '2'
services:
  web:
    build: flask-app
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  mysql:
    build: mysql-server
    environment:
      MYSQL_DATABASE: test
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: 0.0.0.0
      MYSQL_USER: testing
      MYSQL_PASSWORD: testing
    ports:
      - "3306:3306"
Dockerfile for MySQL
The Dockerfile for MySQL adds the schema from the test.sql dump file.
FROM mysql/mysql-server
ADD test.sql /docker-entrypoint-initdb.d
Dockerfile for Flask
FROM python:latest
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Starting point app.py
from flask import Flask, request, jsonify, Response
import json
import mysql.connector
from flask_cors import CORS, cross_origin

app = Flask(__name__)

def getMysqlConnection():
    return mysql.connector.connect(user='testing', host='0.0.0.0', port='3306', password='testing', database='test')

@app.route("/")
def hello():
    return "Flask inside Docker!!"

@app.route('/api/getMonths', methods=['GET'])
@cross_origin()  # allow all origins all methods.
def get_months():
    db = getMysqlConnection()
    print(db)
    try:
        sqlstr = "SELECT * from retail_table"
        print(sqlstr)
        cur = db.cursor()
        cur.execute(sqlstr)
        output_json = cur.fetchall()
    except Exception as e:
        print("Error in SQL:\n", e)
    finally:
        db.close()
    return jsonify(results=output_json)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
When I do a GET request on http://localhost:5000/ using REST Client I get a valid response.
A GET request on http://localhost:5000/api/getMonths gives error message:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on '0.0.0.0:3306' (111 Connection refused)
When the same credentials were used on Sequel Pro, I was able to access the database.
Please advise me on how to connect to the MySQL container from the Flask application. This is my first time using Docker, so do forgive me if this is a silly mistake on my part.
Change this
return mysql.connector.connect(user='testing', host='0.0.0.0', port='3306', password='testing', database='test')
to
return mysql.connector.connect(user='testing', host='mysql', port='3306', password='testing', database='test')
Your code is running inside the container and not on your host, so you need to give it an address it can reach within the container network. With docker-compose, each service is reachable by its name. In your case that is mysql, as that is the name you used for the service.
For others who encounter a similar issue: if you are mapping different host and container ports for the MySQL service, make sure that the container that needs to connect to the MySQL service uses the container port, not the host port.
Here is an example of a docker-compose file. You can see that my application (which runs in a container) uses port 3306 to connect to the MySQL service (which also runs in a container, listening on port 3306). Anything connecting to this MySQL service from outside the "backend" network, which is basically anything that does not run in a container attached to that network, will need to use port 3308 instead.
version: "3"
services:
redis:
image: redis:alpine
command: redis-server --requirepass imroot
ports:
- "6379:6379"
networks:
- frontend
mysql:
image: mariadb:10.5
command: --default-authentication-plugin=mysql_native_password
ports:
- "3308:3306"
volumes:
- mysql-data:/var/lib/mysql/data
networks:
- backend
environment:
MYSQL_ROOT_PASSWORD: imroot
MYSQL_DATABASE: test_junkie_hq
MYSQL_HOST: 127.0.0.1
test-junkie-hq:
depends_on:
- mysql
- redis
image: test-junkie-hq:latest
ports:
- "80:5000"
networks:
- backend
- frontend
environment:
TJ_MYSQL_PASSWORD: imroot
TJ_MYSQL_HOST: mysql
TJ_MYSQL_DATABASE: test_junkie_hq
TJ_MYSQL_PORT: 3306
TJ_APPLICATION_PORT: 5000
TJ_APPLICATION_HOST: 0.0.0.0
networks:
backend:
frontend:
volumes:
mysql-data:
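Purely as an illustration of that inside-vs-outside distinction, here is a small TCP probe sketched in Node/TypeScript (the stack from the question at the top of this page); the hostnames and ports come from the compose file above, and the probe helper itself is hypothetical:

import * as net from "net";

// Hypothetical helper: try to open a TCP connection and report the outcome.
function probe(host: string, port: number): void {
  const socket = net.connect({ host, port });
  socket.setTimeout(3000);
  socket.on("connect", () => { console.log(`reachable: ${host}:${port}`); socket.end(); });
  socket.on("timeout", () => { console.log(`timed out: ${host}:${port}`); socket.destroy(); });
  socket.on("error", (err) => console.log(`not reachable: ${host}:${port} (${err.message})`));
}

// From a container on the "backend" network: use the service name and the container port.
probe("mysql", 3306);

// From the host, outside the compose network, you would instead use localhost and the published host port:
// probe("127.0.0.1", 3308);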

Docker-compose port forwarding

I have a website hosted on shared hosting in production. The website connects to the database via localhost in the code. In my docker-compose setup I have a php:5.6-apache and a mysql:5.6 instance.
Is there any way to tell docker-compose to forward port 3306 on the web container to port 3306 on the db container, so that when the web container tries to connect to localhost on 3306 it gets sent to db on 3306, while still publishing port 80 of the web container to the outside world?
Current docker-compose.yml:
version: "3"
services:
web:
build: .
#image: php:5.6-apache
ports:
- "8080:80"
environment:
- "APP_LOG=php://stderr"
- "LOG_LEVEL=debug"
volumes:
- .:/var/www/html
network_mode: service:db # See https://stackoverflow.com/a/45458460/95195
# networks:
# - internal
working_dir: /var/www
db:
image: mysql:5.6
ports:
- "3306:3306"
environment:
- "MYSQL_XXXXX=*****"
volumes:
- ./provision/mysql/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
# networks:
# - internal
networks:
internal:
driver: bridge
Current error:
ERROR: for web Cannot create container for service web: conflicting options: port publishing and the container type network mode
Yes, it is possible. You need to use the network_mode option. See the example below:
version: '2'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "80:80"
      - "3306:3306"
  app:
    image: ubuntu:16.04
    command: bash -c "apt update && apt install -y telnet && sleep 10 && telnet localhost 3306"
    network_mode: service:db
This outputs:
app_1 | Trying 127.0.0.1...
app_1 | Connected to localhost.
app_1 | Escape character is '^]'.
app_1 | Connection closed by foreign host.
network_mode: service:db instructs Docker not to give the app service its own private network, but to let it join the network of the db service instead. So any port mapping that you need to do has to happen on the db service itself.
The way I usually use it is slightly different: I create a base service that runs an infinite loop, and both the db and app services are launched on the base service's network. All port mappings then need to happen on the base service.