I want to secure Redis on a Pion Ion server. I tried to secure it with the --requirepass option, but it did not work as expected.
Here is my current docker-compose.yml file:
redis:
  image: redis:6.0.9
  command: --requirepass "password"
  ports:
    - 6379:6379
  networks:
    - ionnet
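In case it helps with debugging, here is a minimal check of whether the password actually took effect, run from the host (this assumes redis-cli is installed locally and the container's port 6379 is published as shown above):
redis-cli -h 127.0.0.1 -p 6379 ping
# expected: (error) NOAUTH Authentication required.
redis-cli -h 127.0.0.1 -p 6379 -a password ping
# expected: PONG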
Here is my docker-compose.yml file:
version: '3'
volumes:
  portainer_data:
services:
  jupyter:
    image: catecb/bdp2_midterm_review
    ports:
      - 8888:8888
    volumes:
      - /home/enigma/review/work:/home/jovyan/work
    user: root
    environment:
      - JUPYTER_TOKEN=bdp_psw
      - CHOWN_HOME=yes
      - GEN_CERT=yes
  portainer:
    image: catecb/portainer
    restart: always
    ports:
      - 9000:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
This works perfectly without GEN_CERT=yes, but when I add that option I get this:
Your connection is not private
Attackers might be trying to steal your information from <IP Address> (for example, passwords, messages, or credit cards). Learn more
I am trying to turn on HTTPS access, but this flag does not seem to help at all.
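If GEN_CERT=yes works the way the jupyter/docker-stacks option of the same name does, it only generates a self-signed certificate, so the browser warning is expected until a certificate signed by a trusted CA is installed. A quick, hedged way to check that HTTPS itself is being served (the IP and port are placeholders matching the setup above):
curl -k https://<IP Address>:8888/
# -k tells curl to accept the self-signed certificate; if this returns the
# Jupyter login page, HTTPS is working and only the certificate is untrusted.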
Currently I have the service below configured in my docker-compose file, and it works correctly with a Redis password. However, I would also like to use a Redis username together with the password. Is there a command similar to requirepass, or something else, to enable a username?
version: '3.9'
volumes:
  redis_data: {}
networks:
  ee-net:
    driver: bridge
services:
  redis:
    image: 'redis:latest'
    container_name: redis
    hostname: redis
    networks:
      - ee-net
    ports:
      - '6379:6379'
    command: '--requirepass redisPassword'
    volumes:
      - redis_data:/data
You can specify a config file:
$ cat redis.conf
requirepass password
#aclfile /etc/redis/users.acl
Then add the following to your docker-compose file:
version: '3'
services:
  redis:
    image: redis:latest
    command: ["redis-server", "/etc/redis/redis.conf"]
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
    ports:
      - "6379:6379"
Then the password requirement will be enforced:
redis-cli
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> AUTH password
OK
127.0.0.1:6379> ping
PONG
You may want to look into the aclfile line commented out there if you require more fine-grained control, such as named users.
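As a sketch of that ACL route (this assumes Redis 6 or later; the user name app_user and the file path are hypothetical), users.acl could look like the following, with the aclfile line in redis.conf uncommented and the file mounted into the container alongside redis.conf:
# /etc/redis/users.acl (hypothetical example)
user default off
user app_user on >redisPassword ~* +@all
A client would then authenticate with both a user name and a password, e.g. AUTH app_user redisPassword in redis-cli, or redis.Redis(host='redis', username='app_user', password='redisPassword') with redis-py.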
I am running multiple Docker containers. I want to invoke a Hasura GraphQL API running in one container from a Node.js application running in another container. I am unable to use the same URL (http:///v1/graphql) that I use to access the Hasura API when calling it from the Node.js application.
I tried http://localhost/v1/graphql, but that is not working either.
The following is the docker-compose file for the Hasura GraphQL setup:
version: '3.6'
services:
  postgres:
    image: postgis/postgis:12-master
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: <postgrespassword>
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    depends_on:
      - postgres
    ports:
      - 5050:80
    ## you can change pgAdmin default username/password with below environment variables
    environment:
      PGADMIN_DEFAULT_EMAIL: <email>
      PGADMIN_DEFAULT_PASSWORD: <pass>
  graphql-engine:
    image: hasura/graphql-engine:v1.3.0-beta.3
    depends_on:
      - "postgres"
    restart: always
    environment:
      # database url to connect
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@postgres:5432/postgres
      # enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set "false" to disable console
      ## uncomment next line to set an admin secret key
      HASURA_GRAPHQL_ADMIN_SECRET: <secret>
      HASURA_GRAPHQL_UNAUTHORIZED_ROLE: anonymous
      HASURA_GRAPHQL_JWT_SECRET: '{ some secret }'
    command:
      - graphql-engine
      - serve
  caddy:
    image: abiosoft/caddy:0.11.0
    depends_on:
      - "graphql-engine"
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/Caddyfile
      - caddy_certs:/root/.caddy
volumes:
  db_data:
  caddy_certs:
The Caddyfile has the following configuration:
# replace :80 with your domain name to get automatic https via LetsEncrypt
:80 {
  proxy / graphql-engine:8080 {
    websocket
  }
}
What API endpoint should I use from another Docker container (not present in this docker-compose file) to access the Hasura API? From the browser I use http://<IP address>/v1/graphql.
What does the Caddy configuration actually do here?
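Not a definitive answer, but the usual pattern for the first question: a container outside this compose file cannot resolve the service name graphql-engine unless it is attached to the same Docker network, so one option is to join that network explicitly and call Hasura by its service name. A sketch for the Node.js app's compose file (the network name hasura_default is a placeholder; docker network ls shows the real name, which is derived from the Hasura project's directory):
version: '3.6'
services:
  node-app:
    image: node:16
    networks:
      - hasura-net
networks:
  hasura-net:
    external: true
    name: hasura_default
From inside that network the endpoint would be http://graphql-engine:8080/v1/graphql, since Hasura listens on port 8080 inside the container. As for the second question: the Caddy block proxies incoming requests on port 80 to graphql-engine:8080 (upgrading websockets for subscriptions), which is why the API is reachable from the browser at http://<IP address>/v1/graphql.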
I cannot connect a Redis client to Redis running in a Docker container with a custom redis.conf file. Even if I remove the code that connects Redis with the custom redis.conf file, Docker still attempts to use the custom redis.conf file.
docker-compose.yml
version: '2'
services:
  data:
    environment:
      - RHOST=redis
    command: echo true
    networks:
      - redis-net
    depends_on:
      - redis
  redis:
    image: redis:latest
    build:
      context: .
      dockerfile: Dockerfile_redis
    ports:
      - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
networks:
  redis-net:
volumes:
  redis-data:
Dockerfile_redis
FROM redis:latest
COPY redis.conf /etc/redis/redis.conf
CMD [ "redis-server", "/etc/redis/redis.conf" ]
This is where I connect to Redis. I use requirepass in the redis.conf file.
redis_client = redis.Redis(host='redis', password='password1')
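For reference, a self-contained version of that connection snippet (a sketch: it assumes the client runs in a container attached to the same network as the redis service and that requirepass in redis.conf is set to password1):
import redis

# The compose service name doubles as the hostname; it only resolves from
# containers attached to the same Docker network as the redis service.
redis_client = redis.Redis(host='redis', port=6379, password='password1')

# ping() returns True when authentication succeeds and raises
# redis.exceptions.AuthenticationError when the password is wrong or missing.
print(redis_client.ping())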
Is there a way to find the original redis.conf file that Docker uses, so that I could just change the password to make Redis secure? I normally use the original redis.conf file that comes with installing Redis on a server with "apt install redis" and then change requirepass.
I finally fixed this issue with the help of https://github.com/sameersbn/docker-redis.
There is no need to use a Dockerfile for Redis in this case.
docker-compose.yml:
version: '2'
services:
  data:
    command: echo true
    environment:
      - RHOST=Redis
    depends_on:
      - Redis
  Redis:
    image: sameersbn/redis:latest
    ports:
      - "6379:6379"
    environment:
      - REDIS_PASSWORD=changeit
    volumes:
      - /srv/docker/redis:/var/lib/redis
    restart: always
redis_connect.py
redis_client = redis.Redis(host='Redis',port=6379,password='changeit')
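A quick hedged check that REDIS_PASSWORD took effect, run from the directory containing the compose file (this assumes the sameersbn/redis image ships redis-cli, as the stock Redis images do):
docker-compose exec Redis redis-cli -a changeit ping
# expected: PONG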
First of all, thank you for your time.
I was trying my hand at Docker when I saw this article:
http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
Please have a look at my docker-compose.yml file; I am using the images below:
jwilder/nginx-proxy:latest
grafana/grafana:4.6.2
version: "2"
services:
proxy:
build: ./proxy
container_name: proxy
restart: always
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- 80:80
- 443:443
grafana:
build: ./grafana
container_name: grafana
volumes:
- grafana-data:/var/lib/grafana
environment:
VIRTUAL_HOST: grafana.localhost
GF_SECURITY_ADMIN_PASSWORD: password
depends_on:
- proxy
volumes:
grafana-data:
So when I do docker-compose up -d on my local system, I am able to access the Grafana container.
Now that I have deployed this Docker app on AWS, how do I access the Grafana container on EC2 with VIRTUAL_HOST?
Any help or idea on how to do this will be appreciated. Thanks!
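Not an authoritative answer, but the way jwilder/nginx-proxy works is that it routes each incoming request to the container whose VIRTUAL_HOST matches the request's Host header. On EC2 that means opening ports 80/443 in the security group and sending requests whose Host header matches VIRTUAL_HOST. A hedged way to test before setting up DNS (the EC2 address is a placeholder):
curl -H "Host: grafana.localhost" http://<ec2-public-ip>/
For browser access you would point a real DNS name (or a local /etc/hosts entry) at the instance's public IP and use that name as VIRTUAL_HOST.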