How to use redis username with password in docker-compose?

Currently I have the below service configured in my docker-compose, which works correctly with a Redis password. However, I would also like to use a Redis username together with the password. Is there a command similar to requirepass, or something else, to enable a username?
version: '3.9'
volumes:
  redis_data: {}
networks:
  ee-net:
    driver: bridge
services:
  redis:
    image: 'redis:latest'
    container_name: redis
    hostname: redis
    networks:
      - ee-net
    ports:
      - '6379:6379'
    command: '--requirepass redisPassword'
    volumes:
      - redis_data:/data

You can specify a config file
$ cat redis.conf
requirepass password
#aclfile /etc/redis/users.acl
Then add the following to your docker-compose file:
version: '3'
services:
  redis:
    image: redis:latest
    command: ["redis-server", "/etc/redis/redis.conf"]
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
    ports:
      - "6379:6379"
Then the password requirement will be enforced:
redis-cli
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> AUTH password
OK
127.0.0.1:6379> ping
PONG
You may want to look into the commented-out ACL line there if you require more fine-grained control.
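For an actual username (the original question), Redis 6+ ACLs are the mechanism behind that commented-out line. A minimal sketch, with placeholder user myuser and password myUserPassword: uncomment the aclfile line in redis.conf, mount a users.acl alongside it (e.g. ./users.acl:/etc/redis/users.acl in the volumes list), and define the user:
$ cat users.acl
user default off
user myuser on >myUserPassword ~* +@all
After restarting, clients authenticate with both values:
127.0.0.1:6379> AUTH myuser myUserPassword
OK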

Related

How to Secure Redis on Pionion Server

I want to secure Redis on a Pionion server. I tried to secure it using the --requirepass command, but it didn't work as expected.
My current docker-compose.yml file:
redis:
  image: redis:6.0.9
  command: --requirepass "password"
  ports:
    - 6379:6379
  networks:
    - ionnet

Docker network configuration endpoints

I want to know how to correctly configure the backend endpoint.
I have a Docker setup that runs different containers:
Backend
Frontend
Nginx for backend
DB
From my understanding, since all containers are running on the same machine, I should be able to reach the backend with "host.docker.internal".
Indeed, I can successfully do so on the local machine where Docker is running.
However, the frontend is not able to resolve the endpoint "host.docker.internal" when I make a request from another machine. Please note that I am able to reach the frontend from another machine; it's just a matter of endpoint configuration.
Note that "192.168.1.11" is the IP of the machine where Docker is running, and "8888" is the port where the frontend is served.
Obviously I can successfully make requests from other machines too if I put the static IP address instead of "host.docker.internal". But the question is: since the React frontend application is served from Docker itself, shouldn't it be able to resolve the "host.docker.internal" endpoint?
Just for reference, here is my docker compose:
version: "3.8"
services:
db: #mysqldb
image: mysql:5.7
container_name: ${DB_SERVICE_NAME}
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
ports:
- $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
volumes:
- ./docker-compose/mysql:/docker-entrypoint-initdb.d
networks:
- backend
mrmfrontend:
build:
context: ./mrmfrontend
args:
- REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
- REACT_APP_BACKEND_ENDPOINT=$REACT_APP_BACKEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT=$REACT_APP_FRONTEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT_ERROR=$REACT_APP_FRONTEND_ENDPOINT_ERROR
- REACT_APP_CUSTOMER=$REACT_APP_CUSTOMER
- REACT_APP_NAME=$REACT_APP_NAME
- REACT_APP_OWNER=""
ports:
- $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
networks:
- frontend
volumes:
- ./docker-compose/nginx/frontend:/etc/nginx/conf.d/
app:
build:
args:
user: admin
uid: 1000
context: ./MRMBackend
dockerfile: Dockerfile
image: backend
container_name: backend-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./MRMBackend:/var/www
networks:
- backend
nginx:
image: nginx:alpine
container_name: backend-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./MRMBackend:/var/www
- ./docker-compose/nginx/backend:/etc/nginx/conf.d/
networks:
- backend
- frontend
volumes:
db:
networks:
frontend:
driver: bridge
backend:
driver: bridge
The endpoint is configured in this way in the .env:
REACT_APP_BACKEND_ENDPOINT="http://host.docker.internal:8000"
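For reference, the variant the question already confirms works from other machines points the endpoint at the Docker host's LAN IP instead (both values are taken from the question itself):
REACT_APP_BACKEND_ENDPOINT="http://192.168.1.11:8000"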

docker-compose: Why do the agent & app services fail due to hostname?

Below is the working docker-compose file in v2 spec:
version: '2'
volumes:
  webroot:
    driver: local
services:
  app: # Launch uwsgi application server
    build:
      context: ../../
      dockerfile: docker/release/Dockerfile
    links:
      - dbc
    volumes:
      - webroot:/var/www/someapp
    environment:
      DJANGO_SETTINGS_MODULE: someapp.settings.release
      MYSQL_HOST: dbc
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
    command:
      - uwsgi
      - "--socket /var/www/someapp/someapp.sock"
      - "--chmod-socket=666"
      - "--module someapp.wsgi"
      - "--master"
      - "--die-on-term"
  test: # Run acceptance test cases
    image: shamdockerhub/someapp-specs
    links:
      - nginx
    environment:
      URL: http://nginx:8000/todos
      JUNIT_REPORT_PATH: /reports/acceptance.xml
      JUNIT_REPORT_STACK: 1
    command: --reporter mocha-jenkins-reporter
  nginx: # Start nginx web server that forwards https packets to uwsgi server
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "8000:8000"
    links:
      - app
    volumes:
      - webroot:/var/www/someapp
  dbc: # Launch MySQL server
    image: mysql:5.6
    hostname: dbr
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: someapp
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
      MYSQL_ROOT_PASSWORD: passwd
  agent: # Ensure DB server is running
    image: shamdockerhub/ansible
    links:
      - dbc
    environment:
      PROBE_HOST: "dbc"
      PROBE_PORT: "3306"
    command: ["probe.yml"]
where the entries
MYSQL_HOST: dbc
PROBE_HOST: "dbc"
do not look intuitive, because the hostname is set to dbr in the dbc service.
1) The app service fails with the error below when using MYSQL_HOST: dbr:
django.db.utils.OperationalError: (2005, "Unknown MySQL server host 'dbr' (0)")
2) The agent service also fails in the Ansible code below when PROBE_HOST: "dbr":
- set_fact:
    probe_host: "{{ lookup('env', 'PROBE_HOST') }}"
- local_action: >
    wait_for host={{ probe_host }}
1) Why are these two services failing with the value dbr?
2) How can I make these two services work with MYSQL_HOST: dbr and PROBE_HOST: "dbr"?
That is how Docker works. The hostname is not unique, and giving two containers the same hostname would lead to problems, so Compose always uses the service name for DNS resolution.
Setting hostname: is equivalent to the hostname(8) command on plain Linux: it changes what the container thinks its own hostname is, but doesn't affect anything outside the container that might try to reach it. On plain Linux running hostname dbr won't change an external DNS server or other machines' /etc/hosts files, for example. Setting the hostname might affect a shell prompt, in the unusual case of getting an interactive shell inside a container; it has no effect on networking.
Within a single Docker Compose file, if you have no special configuration for networks:, any container can reach any other container using the name of its block in the YAML file. In your file, app, nginx, test, dbc, and agent are valid hostnames. If you manually specify a container_name: I believe that will also be reachable; network aliases as suggested in #asolanki's answer give yet another name; and the deprecated links: option would give still another. All of these are in addition to the standard name Compose gives you.
Networking in Compose has some reasonable explanations of all of this.
In your example, dbr is not a valid hostname. dbc is the Compose service name of the container, but nothing in the previous listing causes a hostname dbr to exist. It happens to be the name you'll see in the prompt if you docker-compose exec dbc sh, but nobody else thinks that container has that name.
As a specific corollary to "links: is deprecated", the form of links: you have does absolutely nothing. links: [dbc] makes the container that would otherwise be visible under the name dbc visible to that specific container as that same name. You could use it to give an alternate name to a container from the point of view of a client, but I wouldn't.
Your docker-compose.yml file doesn't have any networks: blocks, and so Compose will create a default network and attach all of the containers to it. This is totally fine and I would not recommend changing it. If you do declare multiple networks, the other requirement here is that the client and server need to be on the same network to reach each other. (Containers without a networks: block implicitly have networks: [default].)
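As a quick illustration of the name resolution described above (a sketch; it assumes getent is available in the app image):
$ docker-compose exec app getent hosts dbc   # resolves: dbc is a Compose service name
$ docker-compose exec app getent hosts dbr   # fails: dbr exists only inside the dbc container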
If you want to reference the service by another name, you can use a network alias.
Modified compose file using a network alias:
version: '2'
volumes:
  webroot:
    driver: local
services:
  app: # Launch uwsgi application server
    build:
      context: ../../
      dockerfile: docker/release/Dockerfile
    links:
      - dbc
    volumes:
      - webroot:/var/www/someapp
    environment:
      DJANGO_SETTINGS_MODULE: someapp.settings.release
      MYSQL_HOST: dbc
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
    command:
      - uwsgi
      - "--socket /var/www/someapp/someapp.sock"
      - "--chmod-socket=666"
      - "--module someapp.wsgi"
      - "--master"
      - "--die-on-term"
    networks:
      new:
        aliases:
          - myapp
  test: # Run acceptance test cases
    image: shamdockerhub/someapp-specs
    links:
      - nginx
    environment:
      URL: http://nginx:8000/todos
      JUNIT_REPORT_PATH: /reports/acceptance.xml
      JUNIT_REPORT_STACK: 1
    command: --reporter mocha-jenkins-reporter
    networks:
      - new
  nginx: # Start nginx web server that forwards https packets to uwsgi server
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "8000:8000"
    links:
      - app
    volumes:
      - webroot:/var/www/someapp
    networks:
      - new
  dbc: # Launch MySQL server
    image: mysql:5.6
    hostname: dbr
    expose:
      - "3306"
    environment:
      MYSQL_DATABASE: someapp
      MYSQL_USER: todo
      MYSQL_PASSWORD: passwd
      MYSQL_ROOT_PASSWORD: passwd
    networks:
      new:
        aliases:
          - dbr
  agent: # Ensure DB server is running
    image: shamdockerhub/ansible
    links:
      - dbc
    environment:
      PROBE_HOST: "dbc"
      PROBE_PORT: "3306"
    command: ["probe.yml"]
    networks:
      - new
networks:
  new:
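With these aliases in place, MYSQL_HOST: dbr and PROBE_HOST: "dbr" should resolve as well, because the dbr alias on the dbc service is visible to every container attached to the new network.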

ConnectionError: Error -3 connecting to redis:6379. Try again

I cannot connect a Redis client to a Redis container that uses a custom redis.conf file. Even if I remove the code that connects with the custom redis.conf file, Docker will still attempt to use the custom config file.
docker-compose.yml
version: '2'
services:
  data:
    environment:
      - RHOST=redis
    command: echo true
    networks:
      - redis-net
    depends_on:
      - redis
  redis:
    image: redis:latest
    build:
      context: .
      dockerfile: Dockerfile_redis
    ports:
      - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
networks:
  redis-net:
volumes:
  redis-data:
Dockerfile_redis
FROM redis:latest
COPY redis.conf /etc/redis/redis.conf
CMD [ "redis-server", "/etc/redis/redis.conf" ]
This is where I connect to Redis. I use requirepass in the redis.conf file.
redis_client = redis.Redis(host='redis',password='password1')
Is there a way to find the original redis.conf file that Docker uses, so I could just change the password to make Redis secure? I normally use the stock redis.conf that comes with installing Redis on a server ("apt install redis") and then change requirepass.
I finally fixed this issue with the help of https://github.com/sameersbn/docker-redis.
There is no need to use a Dockerfile for Redis in this case.
docker-compose.yml:
version: '2'
services:
  data:
    command: echo true
    environment:
      - RHOST=Redis
    depends_on:
      - Redis
  Redis:
    image: sameersbn/redis:latest
    ports:
      - "6379:6379"
    environment:
      - REDIS_PASSWORD=changeit
    volumes:
      - /srv/docker/redis:/var/lib/redis
    restart: always
redis_connect.py
redis_client = redis.Redis(host='Redis',port=6379,password='changeit')
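A quick way to verify the password from the host (a sketch; it assumes redis-cli is installed on the host and relies on the published 6379 port from the compose file above):
$ redis-cli -a changeit ping
PONG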

Can't connect to postgres container

I define a postgres server in docker-compose.yml:
db:
  image: postgres:9.5
  expose:
    - 5432
Then, from another docker container, I tried to connect to this postgres container, but it fails with this warning:
Is the server running on host "db" (172.22.0.2) and accepting
data-service_1 | TCP/IP connections on port 5432?
Why can't one container connect to the other using the provided information (host="db" and port=5432)?
PS
Full docker-compose.yml:
version: "2"
services:
data-service:
build: .
depends_on:
- db
ports:
- "50051:50051"
db:
image: postgres:9.5
depends_on:
- data-volume
environment:
- POSTGRES_USER=cobrain
- POSTGRES_PASSWORD=a
- POSTGRES_DB=datasets
ports:
- "8000:5432"
expose:
- 5432
volumes_from:
- data-volume
# - container:postgres9.5-data
restart: always
data-volume:
image: busybox
command: echo "I'm data container"
volumes:
- /var/lib/postgresql/data
Solution #1. Same file.
To be able to access the db container, you have to define your other containers in the context of the same docker-compose.yml. When the containers are started, each one can reach all the others by service name.
Just do
version: '2'
services:
  web:
    image: your/image
  db:
    image: postgres:9.5
If you do not wish to put your other containers into the same docker-compose.yml, there are other solutions:
Solution #2. IP
Do docker inspect <name of your db container> and look for the IPAddress field in the result. Use that IP address as the host to connect to.
Solution #3. Networks
Make your containers join the same network. For that, under each service, define:
services:
  db:
    networks:
      - myNetwork
and declare the network once at the top level:
networks:
  myNetwork:
Repeat the networks: entry (changing db) for each container you are starting.
I usually go with the first solution during development. I use apache+php as one container and pgsql as another, with a separate DB for every project. I do not run more than one docker-compose.yml setup at a time, so in this case defining both containers in one .yml config is perfect.
The depends_on alone is not correct here. I would try other parameters such as links: and environment::
version: "2"
services:
data-service:
build: .
links:
- db
ports:
- "50051:50051"
volumes_from: ["db"]
environment:
DATABASE_HOST: db
db:
image: postgres:9.5
environment:
- POSTGRES_USER=cobrain
- POSTGRES_PASSWORD=a
- POSTGRES_DB=datasets
ports:
- "8000:5432"
expose:
- 5432
#volumes_from:
#- data-volume
# - container:postgres9.5-data
restart: always
data-volume:
image: busybox
command: echo "I'm data container"
volumes:
- /var/lib/postgresql/data
This one works for me (with MySQL rather than Postgres).
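One gotcha worth checking either way: inside the Compose network the server listens on 5432, not the published 8000 (that mapping only applies from the host). A quick sanity check that the database itself is up (a sketch; psql ships in the postgres image):
$ docker-compose exec db psql -U cobrain -d datasets -c 'SELECT 1'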
