Docker redis container instantly exits

I'm new to Docker.
This is my docker-compose.yml. The PHP and nginx containers are running, but Redis exits instantly.
version: "3.3"
services:
  #### PHP-FPM ##############################################
  php-fpm:
    build:
      context: ./docker/php
      dockerfile: php-fpm.docker
    volumes:
      - ${APP_CODE_PATH_HOST}/backend/api:/var/www/html
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - REDIS_HOST=${REDIS_HOST}
      - REDIS_PORT=${REDIS_PORT}
    depends_on:
      - redis
    links:
      - redis
    expose:
      - 9000
  ### REDIS ##############################################
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - ${REDIS_PORT}:6379
    volumes:
      - ${DOCKER_STORAGE_REDIS}:/bitnami/redis/data
My yml file looks like the example for bitnami/redis:latest; can you please tell me what's missing?
And this is the .env file content:
### APP ################################################
APP_CODE_PATH_HOST=./src
### Redis #############################################
REDIS_PORT=63800
REDIS_HOST=redis
### DOCKER storage ####################################
DOCKER_STORAGE_REDIS=./storage/redis

It turned out the problem was permissions: Redis couldn't access the folder on my machine. chmod 777 ./storage/redis solved it.
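For anyone hitting the same instant exit: read the container's logs first, and note there is a narrower fix than chmod 777. The Bitnami Redis image runs as a non-root user (UID 1001 by default, per Bitnami's documentation), so handing the host directory to that UID should be enough. A sketch:

# see why the container exited
docker-compose logs redis

# give the data directory to the UID the Bitnami image runs as,
# instead of opening it to everyone with chmod 777
sudo chown -R 1001:1001 ./storage/redis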

Related

How to allow access to Docker container in MERN project only through specific URL while blocking ports?

I have a MERN project set up with Docker. Development environment is fine; it's production I'm having trouble with.
This is the behavior that I desire:
In Development:
This is the state of the containers:
Nodemon runs in the node express image (from node:alpine) container with port 9000 open to the host. (this is the backend/api)
MongoDB runs in its own container based on the official image with port 27017 open to the host. (this is the database)
React runs with hot reload in its image (from node:alpine) container with port 3000 open to the host. (this is the frontend)
In Production:
This is the state of the containers:
Node runs in the node express image (from node:alpine) container with no ports open to the host.
MongoDB runs in its own container based on the official image with no ports open to the host.
React runs in its image (from nginx:alpine) container with port 80 open to the host.
The backend/api refers to the database using the container name, and the frontend/react container refers to the backend using the container name.
I put proxy: localhost:9000 in the React package.json file. In production, I put the following in the nginx.conf file:
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri /index.html;
}
location /api {
    proxy_pass http://localhost:9000;
}
In the production docker-compose.yml file, I removed the expose: "9000" and ports: "9000:9000" entries that were present in the development docker-compose.yml file. I run docker-compose -f docker-compose.yml -f docker-compose.production.yml up.
My problem is that the ports "localhost:9000" and "localhost:27017" are still exposed in production for some reason. I want all routes, except for "example.com/api", to go through React. Only "example.com/api" must go directly to the backend.
Also, I'm not sure if this is related, but is there a way to make sure "example.com/api" goes to the backend without having to write require("express")().get('/api'...? That is, so that just doing require("express")().get('/'... handles calls to "example.com/api" by default.
Note: I used networks, not links, in order to connect containers together. Backend is connected to both React and MongoDB, while React and MongoDB are not connected to each other.
Here is my docker-compose.yml:
version: "3.7"
services:
  ##############################
  # Back-End Container
  ##############################
  backend:
    container_name: backend
    build:
      context: ./backend/
      target: development
    restart: always
    expose:
      - "9000"
    environment:
      - MONGO_URI=mongodb://db:27017/db
      - PORT=9000
      - NODE_ENV=development
      - DEBUG=app
      - JWT_SECRET=secretsecret
      - JWT_EXPIRY=30d
    ports:
      - "9000:9000"
      - "9229:9229"
    volumes:
      - "./backend/:/home/node/app/"
      - /home/node/app/node_modules/
    depends_on:
      - db
    networks:
      - client
      - server
  ##############################
  # Front-End Container
  ##############################
  frontend:
    container_name: frontend
    build:
      context: ./frontend/
      target: development
    restart: always
    expose:
      - "3000"
      - "35729"
    environment:
      - NODE_ENV=development
      - REACT_APP_PORT=3000
      - CHOKIDAR_USEPOLLING=true
    ports:
      - "3000:3000"
      - "35729:35729"
    volumes:
      - "./frontend/:/home/node/app/"
      - /home/node/app/node_modules/
    networks:
      - client
  ##############################
  # MongoDB Container
  ##############################
  db:
    container_name: db
    image: mongo
    restart: always
    volumes:
      - dbdata:/data/db/
    ports:
      - "27017:27017"
    networks:
      - server
networks:
  client:
  server:
volumes:
  dbdata:
Here is my .env file:
MONGO_URI=db:27017/somedb?authSource=admin
PORT=9000
MONGO_PORT=27017
MONGO_INITDB_ROOT_USERNAME=mongoadmin
MONGO_INITDB_ROOT_PASSWORD=secret
MONGO_INITDB_DATABASE=somedb
NODE_ENV=production
Here is my docker-compose.production.yml:
version: "3.7"
services:
  ##############################
  # Back-End Container
  ##############################
  backend:
    container_name: backend
    init: true
    environment:
      - MONGO_URI=mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}@${MONGO_URI}
      - PORT=${PORT}
      - NODE_ENV=${NODE_ENV}
    build:
      context: ./backend/
      target: production
    restart: always
    depends_on:
      - db
    networks:
      - client
      - server
  ##############################
  # Front-End Container
  ##############################
  frontend:
    container_name: frontend
    build:
      context: ./frontend/
      target: production
    restart: always
    environment:
      - NODE_ENV=${NODE_ENV}
    expose:
      - "80"
    ports:
      - "80:80"
    networks:
      - client
  ##############################
  # MongoDB Container
  ##############################
  db:
    container_name: db
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
    volumes:
      - dbdata:/data/db/
    ports:
      - "27017:27017"
    networks:
      - server
networks:
  client:
  server:
volumes:
  dbdata:
Dockerfiles only have FROM, WORKDIR, RUN, COPY, and CMD.
I run docker-compose -f docker-compose.yml -f docker-compose.production.yml up.
Why did I include the development docker-compose.yml at all? All I had to do was drop it and run docker-compose.production.yml directly; no overriding was needed. (When you pass two files with -f, Compose merges them, and multi-value options like ports are concatenated rather than replaced, which is why 9000 and 27017 were still published.)
The solution was to run:
docker-compose -f docker-compose.production.yml up
For those wondering why the proxy didn't lead to the root of the backend: I hadn't included a trailing slash in the proxy URL in nginx.conf.
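For reference, here is a sketch of what that location block might look like with the trailing slash (my illustration, assuming the backend is addressed by its Compose service name backend; inside the nginx container, localhost would not reach the backend container). With a URI part in proxy_pass, nginx replaces the matched /api prefix, so Express can route on '/' directly:

location /api {
    # the trailing slash makes nginx strip the /api prefix before proxying
    proxy_pass http://backend:9000/;
}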

Collect tomcat logs from tomcat docker container to Filebeat docker container

I have a Tomcat Docker container and a Filebeat Docker container, both up and running.
My objective: I need to collect Tomcat logs from the running Tomcat container into the Filebeat container.
Issue: I have no idea how to get the collected log files out of the Tomcat container.
What I have tried so far: I tried to create a Docker volume, add the Tomcat logs to that volume, and access that volume from the Filebeat container, but ended up with no success.
Structure: I have two project roots, each with its own docker-compose.yml file. Logstash (one project root) is meant to bring up the Elasticsearch, Logstash, Filebeat and Kibana containers from one configuration file; docker-containers (the other project root) is meant to bring up the Tomcat, Nginx and Postgres containers from one configuration file.
Logstash: contains 4 main subdirectories (Filebeat, Logstash, Elasticsearch and Kibana), an ENV file and a docker-compose.yml file. Each subdirectory contains a Dockerfile to pull the image and build the container.
docker-containers: contains 3 main subdirectories (Tomcat, Nginx and Postgres), an ENV file and a docker-compose.yml file. Each subdirectory contains a separate Dockerfile to pull the Docker image and build the container.
Note: I think this basic structure may be helpful for understanding my requirements.
docker-compose.yml files
Logstash docker-compose.yml file:
version: '2'
services:
  elasticsearch:
    container_name: OTP-Elasticsearch
    build:
      context: ./elasticsearch
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  filebeat:
    container_name: OTP-Filebeat
    command:
      - "-e"
      - "--strict.perms=false"
    user: root
    build:
      context: ./filebeat
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
  logstash:
    container_name: OTP-Logstash
    build:
      context: ./logstash
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    expose:
      - 5044/tcp
    ports:
      - "9600:9600"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    container_name: OTP-Kibana
    build:
      context: ./kibana
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - logstash
      - filebeat
networks:
  elk:
    driver: bridge
docker-containers docker-compose.yml file:
version: '2'
services:
  # Nginx
  nginx:
    container_name: OTP-Nginx
    restart: always
    build:
      context: ./nginx
      args:
        - comapanycode=${COMPANY_CODE}
        - dbtype=${DB_TYPE}
        - dbip=${DB_IP}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
        - webdirectory=${WEB_DIRECTORY}
    ports:
      - "80:80"
    links:
      - db:db
    volumes:
      - ./log/nginx:/var/log/nginx
    depends_on:
      - db
  # Postgres
  db:
    container_name: OTP-Postgres
    restart: always
    ports:
      - "5430:5430"
    build:
      context: ./postgres
      args:
        - food_db_version=${FOOD_DB_VERSION}
        - dbtype=${DB_TYPE}
        - retail_db_version=${RETAIL_DB_VERSION}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    volumes:
      - .data/db:/octopus_docker/postgresql/data
  # Tomcat
  tomcat:
    container_name: OTP-Tomcat
    restart: always
    build:
      context: ./tomcat
      args:
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    links:
      - db:db
    volumes:
      - ./tomcat/${WARNAME}.war:/usr/local/tomcat/webapps/${WARNAME}.war
    ports:
      - "8080:8080"
    depends_on:
      - db
      - nginx
Additional files:
filebeat.yml (configuration file inside Logstash/Filebeat/config/):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/*.log
output.logstash:
  hosts: ["logstash:5044"]
Additional info:
The system I am using is Ubuntu 18.04.
My goal is to collect Tomcat logs from the running Tomcat container, forward them to Logstash for filtering, then send them on to Elasticsearch and finally to Kibana for visualization.
For now I can collect local machine (host) logs from /var/log/ and visualize them in Kibana.
My problem:
I need to know the proper way to get the collected Tomcat logs out of the Tomcat container and forward them to the Logstash container via the Filebeat container.
Any discussion, answer, or help towards understanding a way to do this is much appreciated.
Thanks.
So loooong... Create a shared volume among all the containers and set up your Tomcat to save its log files into that folder. If you can put all services into one docker-compose.yml, just set up the volume internally:
docker-compose.yml
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
If you need several docker-compose.yml files, create the volume globally in advance with docker volume create logs and map it into both compose files:
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
    external: true
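Applied to this question, a minimal sketch (my own illustration, not from the original answer): mount the shared volume over Tomcat's log directory in one compose file, mount it read-only into Filebeat in the other, and point filebeat.yml at that path. Run docker volume create logs once before bringing either stack up.

# in docker-containers/docker-compose.yml
  tomcat:
    volumes:
      - logs:/usr/local/tomcat/logs

# in Logstash/docker-compose.yml
  filebeat:
    volumes:
      - logs:/usr/local/tomcat/logs:ro

# declared at the top level of both files
volumes:
  logs:
    external: true

With that in place, the paths entry in filebeat.yml stays /usr/local/tomcat/logs/*.log.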

ConnectionError: Error -3 connecting to redis:6379. Try again

I cannot connect a Redis client to Redis running in a Docker container with a custom redis.conf file. Also, even if I remove the code that points Redis at the custom redis.conf file, Docker will still attempt to use the custom config.
docker-compose.yml
version: '2'
services:
  data:
    environment:
      - RHOST=redis
    command: echo true
    networks:
      - redis-net
    depends_on:
      - redis
  redis:
    image: redis:latest
    build:
      context: .
      dockerfile: Dockerfile_redis
    ports:
      - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
networks:
  redis-net:
volumes:
  redis-data:
Dockerfile_redis
FROM redis:latest
COPY redis.conf /etc/redis/redis.conf
CMD [ "redis-server", "/etc/redis/redis.conf" ]
This is where I connect to Redis. I use requirepass in the redis.conf file:
redis_client = redis.Redis(host='redis',password='password1')
Is there a way to find out which original redis.conf file Docker uses, so I could just change the password there to make Redis secure? I normally use the original redis.conf file that comes with installing Redis on a server via apt install redis, and then change requirepass.
I finally fixed this issue with the help of https://github.com/sameersbn/docker-redis.
There is no need to use a Dockerfile for Redis in this case.
docker-compose.yml:
version: '2'
services:
  data:
    command: echo true
    environment:
      - RHOST=Redis
    depends_on:
      - Redis
  Redis:
    image: sameersbn/redis:latest
    ports:
      - "6379:6379"
    environment:
      - REDIS_PASSWORD=changeit
    volumes:
      - /srv/docker/redis:/var/lib/redis
    restart: always
redis_connect.py
redis_client = redis.Redis(host='Redis',port=6379,password='changeit')
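Two side notes of mine, not from the original thread. First, Error -3 is a DNS (name resolution) failure, and in the compose file from the question the redis service was never attached to the redis-net network that data joins, which by itself would prevent the hostname redis from resolving. Second, as far as I know the official redis image starts redis-server without any config file (built-in defaults), so if all you need is a password, you can skip the custom redis.conf entirely. Roughly:

version: '2'
services:
  redis:
    image: redis:latest
    # set requirepass on the command line; no custom redis.conf needed
    command: redis-server --requirepass password1
    ports:
      - "6379:6379"

The client side then stays redis_client = redis.Redis(host='redis', port=6379, password='password1').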

Why doesn't my Dockerfile run multiple commands?

I want to use Docker to run my project (React + Node.js + MongoDB).
Dockerfile:
FROM node:8.9-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
CMD nohup sh -c 'npm start && node ./server/server.js'
docker-compose.yml:
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
I run docker-compose up --build; localhost:3000 works, but localhost:8080 dies.
I would suggest creating a separate container for the server and keeping it apart from the "chat" container. It's best to have each container do one thing and one thing only (much like the philosophy behind Unix commands). Note also that your CMD chains the two processes with &&, so node ./server/server.js only runs after npm start exits; since npm start keeps running, the second server never starts, which is why port 8080 is dead.
In any case, here are some modifications I would make to the compose file.
version: '2.1'
services:
  chat:
    image: chat
    container_name: chat
    build: .
    environment:
      NODE_ENV: production
    ports:
      - "3000:3000"
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    # You don't need to expose this port to the outside world. Because you linked
    # the two containers, the chat app will be able to connect to MongoDB using
    # the hostname mongo inside the container network.
    # ports:
    #   - "27017:27017"
Btw, what happens if you run:
$ docker-compose down
and then
$ docker-compose up
$ docker ps
Can you see the ports exposed in the docker ps output?
Your chat service depends on mongo, so you also need this in your chat service:
depends_on:
  - mongo
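If you do want to keep both processes in one container anyway, the && in the CMD is the thing to change: it runs the commands sequentially, not in parallel. One option (a sketch, not the recommended pattern) is to background the first process:

# start the API server in the background and keep npm start in the
# foreground, so the container lives as long as the React server does
CMD ["sh", "-c", "node ./server/server.js & npm start"]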
This docker-compose file works for me. Note that I am saving the database data to a local directory; you should add this directory to .gitignore.
version: "3.2"
services:
  mongo:
    container_name: mongo
    image: mongo:latest
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
      - NODE_ENV=production
    ports:
      - "28017:27017"
    expose:
      - 28017 # you can connect to this mongodb with studio3t
    volumes:
      - ./mongodb-data:/data/db
    restart: always
    networks:
      - docker-network
  express:
    container_name: express
    environment:
      - NODE_ENV=development
    restart: always
    build:
      context: .
      args:
        buildno: 1
    expose:
      - 3000
    ports:
      - "3000:3000"
    links:
      - mongo # link this service to the database service
    depends_on:
      - mongo
    command: "npm start" # override the default command to use nodemon in dev
    networks:
      - docker-network
networks:
  docker-network:
    driver: bridge
You may also find that, with Node, you have to wait for the MongoDB container to be ready before you can connect to the database.
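A common workaround (my sketch, assuming an alpine-based image where BusyBox nc is available) is to poll the database port before starting the app, for example by overriding the service command:

  express:
    # wait until mongo accepts TCP connections, then start the app
    command: sh -c 'until nc -z mongo 27017; do sleep 1; done; npm start'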

Connect to a MySQL database from a Docker container

I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I tried to run the database as a separate container, my PHP application was still pointing to localhost. When I connect to the "web" container, I am not able to connect to the "mysql1" container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL from another container?
This is similar to the question asked here...
Connect to mysql in a docker container from the host
I do not want to connect to MySQL from the host machine; I need to connect from another container.
First of all, you shouldn't expose MySQL's port 3306 if you don't want to reach it from the host machine. Second, links are deprecated now; you can use networks instead. I'm not sure about compose v1, but in v2 all containers in a common docker-compose file share one network (more about networks) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With such a configuration you can resolve the mysql container by the name mysql1 from inside the web container.
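To verify the name resolution from inside the web container, a quick check like this should work (my addition; service names as above):

$ docker-compose exec web getent hosts mysql1

The PHP application should then use mysql1 (not localhost or 127.0.0.1) as its MySQL host, since localhost inside the web container refers to the web container itself.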
For me, the name resolution never happens. Here is my compose file; I was hoping to connect from the app container to mysql, where the name is mysql and is passed as an env variable to the other container: DB_HOST=mysql
version: "2"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: crossblogs
    environment:
      - DB_HOST=mysql
      - DB_PORT=3306
    ports:
      - 8080:8080
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=crossblogs
    ports:
      - 3306:3306
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
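Two things worth checking in a setup like this (my observations, not from the original answers): depends_on only waits for the container to start, not for mysqld to accept connections, so the app may simply be connecting too early; and newer versions of the official mysql image reject MYSQL_USER=root outright (root is configured via MYSQL_ROOT_PASSWORD or MYSQL_ALLOW_EMPTY_PASSWORD instead). With compose file format 2.1 you can gate startup on a healthcheck, roughly:

version: "2.1"
services:
  app:
    image: crossblogs
    environment:
      - DB_HOST=mysql
      - DB_PORT=3306
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mysql:5.7.20
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=crossblogs
    healthcheck:
      # report healthy once mysqld answers a ping
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 20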
