Cannot connect to Redis from Laravel Application - docker

I have to configure Redis with Socket.io in my Laravel application. However, whatever I have tried so far, I get the same error:
Connection refused [tcp://127.0.0.1:6379]
I can go into the container with docker exec -it id sh, and when I ping the Redis server I get the PONG message. The client is already set to 'predis' in my database.php file, and the predis package is installed.
.env
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
docker-compose.yml
version: "2"
services:
  api:
    build: .
    ports:
      - 9000:9000
    volumes:
      - .:/app
      - /app/vendor
    depends_on:
      - postgres
      - redis
    environment:
      DATABASE_URL: postgres://xx@postgres/xx
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: xx
      POSTGRES_DB: xx
      POSTGRES_PASSWORD: xx
    volumes:
      - .Data:/var/lib/postgresql/data
    ports:
      - 3306:5432
  redis:
    build: ./Redis/
    ports:
      - 6003:6379
    volumes:
      - ../RedisData/data:/data
    command: redis-server --appendonly yes
Dockerfile (redis)
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

The error says it cannot connect to 127.0.0.1 on port 6379, so make sure the host and port are right:
host 127.0.0.1 is ok: this works if you run PHP on the same host as Redis, or if you run PHP on the Docker host machine, but in that case the port will be 6003
port 6379 is ok: then the host is not good; you must specify the Docker service hostname: redis
make sure the configuration cache is ok
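A minimal sketch of the corrected setup, assuming the compose service is named redis as above (inside the Docker network you keep the container port 6379; the published port 6003 only matters from the host):
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
Then clear Laravel's cached configuration so the new value is actually picked up:
php artisan config:clear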

Set your REDIS_HOST to redis, like this: REDIS_HOST=redis. The reason is that in your docker-compose file you named your Redis service redis.

I had the same issue.
I also updated the following in redis.conf:
bind 127.0.0.1
to
bind redis
since redis is the hostname of the Redis container now.

Related

Can't connect to Docker Container on Windows

I have a docker-compose file that looks like this
version: "3"
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: michael
      POSTGRES_PASSWORD: pass123
  admin:
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    ports:
      - "5050:5050"
I run docker-compose up -d and I can see my apps running in Docker Desktop. However, I cannot connect to my pgadmin instance at port 5050 using localhost. Any ideas?
The pgAdmin Docker container runs on port 80 by default, as per the documentation here: https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html
You are exposing port 5050 through the mapping. Either add an environment variable PGADMIN_LISTEN_PORT to the docker-compose file to make pgAdmin run on port 5050,
OR
change the port mapping to 5050:80 for the pgAdmin service.
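For example, either variant should work; this is a sketch of the admin service only (PGADMIN_LISTEN_PORT is documented for the dpage/pgadmin4 image):
admin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_LISTEN_PORT: 5050  # option 1: pgAdmin listens on 5050 inside the container
  ports:
    - "5050:5050"
With the second option you drop the environment variable and map "5050:80" instead, so host port 5050 points at the container's default port 80.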
Check the docker inspect or docker ps output to ensure that your port is exposed correctly.
Try to connect to it using the public IP.

Access an FTP service from another Docker container

I have a Golang app, and it is supposed to connect to an FTP server.
Now, both the Golang app and the FTP server are dockerized, but I don't know how to connect to the FTP server from the Golang app.
Here is my docker-compose.yml
version: '2'
services:
  myappgo:
    image: myappgo:exp
    volumes:
      - ./volume:/go
    networks:
      myappgo_network:
    env_file:
      - test.env
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "30000-30009:30000-30000"
    environment:
      PUBLICHOST: "localhost"
      FTP_USER_NAME: "test"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/test"
    restart: on-failure
    networks:
      myappgo_network:
networks:
  myappgo_network:
When I run docker-compose, all services are up.
I can get the IP of the FTP container with:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ftpd-server
And then I installed an FTP client for Alpine, lftp, in my golang container:
docker exec -it my_app_go sh
apk add lftp
lftp -d ftp://test:test@172.19.0.2 # -d for debug
lftp test@172.19.0.2:~> ls
---- Connecting to 172.19.0.2 (172.19.0.2) port 21
`ls' at 0 [Connecting...]
What am I missing?
At a minimum, you need 21/TCP for commands and 20/TCP for data on the ftp-server:
ports:
  - "21:21"
  - "20:20"
  - "30000-30009:30000-30009"
I changed your compose-file a little bit:
version: '2'
services:
  myappgo:
    image: alpine:3.8
    tty: true
    networks:
      swarm_default:
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "20:20"
      - "30000-30009:30000-30009"
    environment:
      PUBLICHOST: "localhost"
      FTP_USER_NAME: "test"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/test"
    restart: on-failure
    networks:
      swarm_default:
networks:
  swarm_default:
Then I created the file /home/test/1 on the FTP server, and I can see it from the myappgo container:
/ # lftp ftp://test:test@172.19.0.2
lftp test@172.19.0.2:/> dir
-rw-r--r-- 1 0 0 0 Jan 22 14:18 1
First, simplify your compose file:
version: '3' # i assume you can migrate to version 3, yes?
services:
  myappgo:
    image: myappgo:exp
    volumes:
      - ./volume:/go
    env_file:
      - test.env
  ftpd-server:
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "test"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/test"
    restart: on-failure
Second, the default network is created by docker-compose; there is no need to declare it explicitly. All services get connected to it under their names, so you access them not by IP but by name, like ftpd-server.
Third, you don't need to publish ports if you only access them from inside the network; you publish them only if you need access from outside.
Next, launch the FTP server bound to 0.0.0.0: binding any TCP service to localhost or 127.0.0.1 makes it accessible only locally.
Last, use service names to connect. Forget about IP addresses and docker inspect. Your connection from myappgo to the FTP server will look like ftp://ftpd-server/foo/bar
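For instance, from inside the myappgo container the earlier lftp check becomes (a sketch, reusing the test credentials from the compose file):
apk add lftp
lftp ftp://test:test@ftpd-server
lftp test@ftpd-server:/> dir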

Connecting to MySQL from Flask Application using docker-compose

I have an application using Flask and MySQL. The Flask application cannot connect to the MySQL container, but the database can be accessed using Sequel Pro with the same credentials.
Docker Compose File
version: '2'
services:
  web:
    build: flask-app
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  mysql:
    build: mysql-server
    environment:
      MYSQL_DATABASE: test
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: 0.0.0.0
      MYSQL_USER: testing
      MYSQL_PASSWORD: testing
    ports:
      - "3306:3306"
Dockerfile for MySQL
The Dockerfile for MySQL adds the schema from the test.sql dump file.
FROM mysql/mysql-server
ADD test.sql /docker-entrypoint-initdb.d
Dockerfile for Flask
FROM python:latest
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
Starting point app.py
from flask import Flask, request, jsonify, Response
import json
import mysql.connector
from flask_cors import CORS, cross_origin

app = Flask(__name__)

def getMysqlConnection():
    return mysql.connector.connect(user='testing', host='0.0.0.0', port='3306', password='testing', database='test')

@app.route("/")
def hello():
    return "Flask inside Docker!!"

@app.route('/api/getMonths', methods=['GET'])
@cross_origin()  # allow all origins all methods.
def get_months():
    db = getMysqlConnection()
    print(db)
    try:
        sqlstr = "SELECT * from retail_table"
        print(sqlstr)
        cur = db.cursor()
        cur.execute(sqlstr)
        output_json = cur.fetchall()
    except Exception as e:
        print("Error in SQL:\n", e)
    finally:
        db.close()
    return jsonify(results=output_json)

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
When I do a GET request on http://localhost:5000/ using REST Client I get a valid response.
A GET request on http://localhost:5000/api/getMonths gives the error message:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on '0.0.0.0:3306' (111 Connection refused)
When the same credentials were used on Sequel Pro, I was able to access the database.
Please advise me on how to connect to the MySQL container from the Flask application. This is my first time using Docker, so do forgive me if this is a silly mistake on my part.
Change this
return mysql.connector.connect(user='testing', host='0.0.0.0', port='3306', password='testing', database='test')
to
return mysql.connector.connect(user='testing', host='mysql', port='3306', password='testing', database='test')
Your code is running inside the container, not on your host, so you need to give it an address it can reach within the container network. With docker-compose, each service is reachable by its name. In your case that is mysql, as that is the name you gave the service.
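A small sketch of the same idea with the host read from the environment instead of hard-coded (MYSQL_HOST is a made-up variable name for this example, defaulting to the compose service name):
import os
import mysql.connector

def getMysqlConnection():
    # inside the compose network, the service name "mysql" resolves to the MySQL container
    host = os.environ.get('MYSQL_HOST', 'mysql')
    return mysql.connector.connect(user='testing', host=host, port=3306,
                                   password='testing', database='test')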
For others who encounter a similar issue: if you are mapping different host and container ports for the MySQL service, make sure that a container connecting to the MySQL service uses the container port, not the host port.
Here is an example docker-compose file. My application (which runs in a container) uses port 3306 to connect to the MySQL service (which also runs in a container, on port 3306). Anything connecting to this MySQL service from outside the "backend" network, which is basically anything not running in a container on the same network, will need to use port 3308 instead.
version: "3"
services:
  redis:
    image: redis:alpine
    command: redis-server --requirepass imroot
    ports:
      - "6379:6379"
    networks:
      - frontend
  mysql:
    image: mariadb:10.5
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "3308:3306"
    volumes:
      - mysql-data:/var/lib/mysql/data
    networks:
      - backend
    environment:
      MYSQL_ROOT_PASSWORD: imroot
      MYSQL_DATABASE: test_junkie_hq
      MYSQL_HOST: 127.0.0.1
  test-junkie-hq:
    depends_on:
      - mysql
      - redis
    image: test-junkie-hq:latest
    ports:
      - "80:5000"
    networks:
      - backend
      - frontend
    environment:
      TJ_MYSQL_PASSWORD: imroot
      TJ_MYSQL_HOST: mysql
      TJ_MYSQL_DATABASE: test_junkie_hq
      TJ_MYSQL_PORT: 3306
      TJ_APPLICATION_PORT: 5000
      TJ_APPLICATION_HOST: 0.0.0.0
networks:
  backend:
  frontend:
volumes:
  mysql-data:

consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused

I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, airflow scheduler, airflow worker and airflow flower. The airflow.cfg file is used to configure Airflow.
There I use broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker compose file is as follows
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried both 127.0.0.1 and localhost.
What am I doing wrong?
From within your airflow containers, you should be able to reach the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
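The relevant airflow.cfg lines would then look like this (a sketch using the credentials defined in the compose file above):
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/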
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 and 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.

connect to mysql database from docker container

I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I tried to run the database as a separate container, my PHP application was still pointing to localhost. When I connect to the "web" container, I am not able to reach the "mysql1" container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL in another container?
This is similar to the question asked here:
Connect to mysql in a docker container from the host
However, I do not want to connect to MySQL from the host machine; I need to connect from another container.
First, you shouldn't publish MySQL's port 3306 if you don't want to call it from the host machine. Second, links are deprecated now; you can use networks instead. I am not sure about compose v1, but in v2 all containers in a common docker-compose file are on one network (see the networks documentation) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With such a configuration you can resolve the MySQL container by the name mysql1 from inside the web container.
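To verify the resolution, a quick check from inside the web container (a sketch; it assumes getent is available in the image):
docker-compose exec web getent hosts mysql1
The PHP application should then use mysql1, not localhost, as its MySQL host.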
For me, the name resolution never happens. Here is my compose file; I was hoping to connect from the app container to mysql, where the service name is mysql and is passed as an environment variable to the other container: DB_HOST=mysql
version: "2"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: crossblogs
    environment:
      - DB_HOST=mysql
      - DB_PORT=3306
    ports:
      - 8080:8080
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=crossblogs
    ports:
      - 3306:3306
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
