I have an ELK docker-compose file that I think cannot run on my server because the docker-compose version is too old...
$ docker-compose -version
docker-compose version 1.6.2, build 4d72027
And here is my docker-compose file...
version: '2'
services:
  elasticsearch:
    image: elasticsearch:5
    command: elasticsearch
    environment:
      # This helps ES out with memory usage
      - ES_JAVA_OPTS=-Xmx1g -Xms1g
    volumes:
      # Persist elasticsearch data to a volume
      - elasticsearch:/usr/share/elasticsearch/data
      # Extra ES configuration options
      - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    image: logstash:5
    command: logstash -w 4 -f /etc/logstash/conf.d/logstash.conf
    environment:
      # This helps Logstash out if it gets too busy
      - LS_HEAP_SIZE=2048m
    volumes:
      # volume mount the logstash config
      - ./logstash/logstash.conf:/etc/logstash/conf.d/logstash.conf
      - /Users/rriviere/workspace/parks-dpe/activities-api-v1/app-logs/:/tmp/app-logs
    ports:
      # GELF port for Docker logs
      - "12201:12201/udp"
      # UDP port for syslogs
      - "5000:5000/udp"
      # Default TCP port
      - "5001:5001"
    links:
      - elasticsearch
  kibana:
    image: kibana:5
    environment:
      # Point Kibana to the elasticsearch container
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  kopf:
    image: rancher/kopf:v0.4.0
    ports:
      - "8080:80"
    environment:
      KOPF_ES_SERVERS: "elasticsearch:9200"
    links:
      - elasticsearch
volumes:
  elasticsearch:
While I'm not looking for an exact answer here, can someone help me with what is required to create a docker-compose v1 file that would do the same things?
Thanks
You'd better check the docs for more details, but just looking at your file, I think if you remove the following two lines (and un-indent everything that was under services: by one level) then you are good to go:
version: '2'
services:
One caveat: v1 also has no top-level volumes: section, so the named elasticsearch volume would need to become a host path instead. Version 2 has some additional features which I don't see used in your case, so you are pretty much compatible with v1.
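As a minimal sketch of the v1 form for the first service (assuming a ./esdata host directory for the data, since named volumes are a v2 feature):

elasticsearch:
  image: elasticsearch:5
  command: elasticsearch
  environment:
    - ES_JAVA_OPTS=-Xmx1g -Xms1g
  volumes:
    - ./esdata:/usr/share/elasticsearch/data
    - ./es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
  ports:
    - "9200:9200"
    - "9300:9300"

The other services follow the same pattern: each service name moves to the top level, and links, ports and environment keep working as they do in v2.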
Related
I ran a docker-compose file to set up Elasticsearch and Kibana on Ubuntu 18.04 LTS. The Kibana container is up and running just fine, but Elasticsearch goes down after about 10 secs. I have restarted the containers and the Docker service several times and still got the same result. Been on this all day and hoping I can get some help.
Docker-compose file:
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
environment:
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- "discovery.type=single-node"
- ELASTIC_PASSWORD=secretpassword
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:7.16.3
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=secretpassword
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
I also checked the logs on the es-container and it displayed:
Created elasticsearch keystore in
/usr/share/elasticsearch/config/elasticsearch.keystore
Audit logging can only be enabled with a paid ES subscription (or a trial license), and you don't provide any license info to your container.
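A minimal sketch of the corrected environment block, assuming you simply drop the audit setting (alternatively, adding xpack.license.self_generated.type=trial would start a trial license that includes the paid features):

environment:
  - xpack.security.enabled=true
  - "discovery.type=single-node"
  - ELASTIC_PASSWORD=secretpassword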
I am using Docker with https://github.com/markshust/docker-magento . When I follow the steps and try to import the db, I get the error ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115).
I tried this solution: ERROR 2002 (HY000): Can't connect to MySQL server on 'db' (115).
My yml file:
## Mark Shust's Docker Configuration for Magento
## (https://github.com/markshust/docker-magento)
##
## Version 41.0.2

## To use SSH, see https://github.com/markshust/docker-magento#ssh
## Linux users, see https://github.com/markshust/docker-magento#linux
## If you changed the default Docker network, you may need to replace
## 172.17.0.1 in this file with the result of:
## docker network inspect bridge --format='{{(index .IPAM.Config 0).Gateway}}'

version: "3"
services:
  app:
    image: markoshust/magento-nginx:1.18-5
    ports:
      - "80:8000"
      - "443:8443"
    volumes: &appvolumes
      - ~/.composer:/var/www/.composer:cached
      - ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
      - ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
      - appdata:/var/www/html
      - sockdata:/sock
      - ssldata:/etc/nginx/certs
    extra_hosts: &appextrahosts
      ## M1 Mac support to fix Docker delay, see #566
      - "app:172.17.0.1"
      - "phpfpm:172.17.0.1"
      - "db:172.17.0.1"
      - "redis:172.17.0.1"
      - "elasticsearch:172.17.0.1"
      - "rabbitmq:172.17.0.1"
      ## Selenium support, replace "magento.test" with URL of your site
      - "magento.test:172.17.0.1"
  phpfpm:
    image: markoshust/magento-php:7.2-fpm
    volumes: *appvolumes
    env_file: env/phpfpm.env
  db:
    image: mariadb:10.1
    command: mysqld --innodb_force_recovery=6 --lower_case_table_names=1 --skip-ssl --character_set_server=utf8mb4 --explicit_defaults_for_timestamp
    ports:
      - "3306:3306"
    env_file: env/db.env
    volumes:
      - dbdata:/var/lib/mysql
  redis:
    image: redis:6.0-alpine
    ports:
      - "6379:6379"
  elasticsearch:
    image: markoshust/magento-elasticsearch:7.9.3-1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "discovery.type=single-node"
      ## Set custom heap size to avoid memory errors
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      ## Avoid test failures due to small disks
      ## More info at https://github.com/markshust/docker-magento/issues/488
      - "cluster.routing.allocation.disk.threshold_enabled=false"
      - "index.blocks.read_only_allow_delete"
  rabbitmq:
    image: rabbitmq:3.8.22-management-alpine
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - rabbitmqdata:/var/lib/rabbitmq
    env_file: env/rabbitmq.env
  mailcatcher:
    image: sj26/mailcatcher
    ports:
      - "1080:1080"
  ## Selenium support, uncomment to enable
  #selenium:
  #  image: selenium/standalone-chrome-debug:3.8.1
  #  ports:
  #    - "5900:5900"
  #  extra_hosts: *appextrahosts
volumes:
  appdata:
  dbdata:
  rabbitmqdata:
  sockdata:
  ssldata:
Remember that you will need to connect to the running Docker container, so you probably want to use TCP instead of a Unix socket. Check the output of the docker ps command and look for a running MySQL container. If you find one, connect with the mysql client like this: mysql -h 127.0.0.1 -P <mysql_port> (you will find the port in the docker ps output). If you can't find any running MySQL container in the docker ps output, try docker images to find the MySQL image name, and run it with something like docker run -d -p 3306:3306 tutum/mysql, where "tutum/mysql" is the image name found in docker images.
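For example (a sketch; the port, user and database names here are assumptions, take the real values from your docker ps output and env/db.env):

$ docker ps --format '{{.Names}}: {{.Ports}}'
$ mysql -h 127.0.0.1 -P 3306 -u magento -p magento < /path/to/dump.sql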
So, I've set up several container apps that use MariaDB as their db backend, using docker-compose.
Containers are set up as needed, and therefore MariaDB gets installed each time in every container that uses the db.
For example, I have some containers (PHPMyAdmin, NGiNX-PM, etc.) that use MariaDB, and they, in turn, have a version of it installed within their container. I also have a separate container (MariaDB) that I would rather have shared amongst the other containered apps and, thereby, I'd only have to maintain one version of the db.
I've searched for a solution, but no luck. Needless to say, I'm a noob at docker.
The only thing I can come up with is that all the apps need to be installed through the same docker-compose.yaml file to use the same db? That would make for a very long file if I had many containers running, and I'd prefer to have a directory per app and all the app's contents available in this one location.
I'm sure there is a way, I just haven't been able to figure it out.
This is what I've tried, but I am unable to get it to work:
(/docker/apps/mariadb/mariadb.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # MariaDB  (docker-compose -f mariadb.yml up -d)                                            #
  #############################################################################################
  mariadb:
    image: jsurf/rpi-mariadb:latest
    restart: unless-stopped
    environment:
      - TZ=${TIMEZONE}
      - MYSQL_DATABASE=dockerApps
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - $HOME/docker/apps/mariadb/db:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - NET
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager  (docker-compose -f nginxpm.yml up -d)                                #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    depends_on:
      - mariadb
(/docker/apps/phpmyadmin/phpmyadmin.yml)
version: "3.9"
networks:
NET:
external: true
services:
#############################################################################################
# phpMyAdmin (docker-compose up -d -OR- docker-compose -f phpmyadmin.yml up -d) #
#############################################################################################
phpmyadmin:
image: phpmyadmin:latest
container_name: phpMyAdmin
restart: unless-stopped
environment:
PMA_HOST: mariadb
PMA_USER: root
PMA_PASSWORD: ${MYSQL_PASSWORD}
volumes:
# Must add ServerName directive to end of file "ServerName 127.0.0.1"
- $HOME/docker/apps/phpmyadmin/apache2.conf:/etc/apache2/apache2.conf
ports:
- '8004:80'
networks:
- NET
Any help in this matter is greatly appreciated.
Ok, so after some more reading and testing, I've found the answer to my issue. I was assuming that depends_on was supposed to connect the containers somehow. Not true!
I found that external_links is the correct way of connecting them.
So, my final docker-compose file looks like this:
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager  (docker-compose -f nginxpm.yml up -d)                                #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    external_links:
      - mariadb
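Note that for this to work, both compose files must attach to the same pre-created network, since NET is declared with external: true:

$ docker network create NET

Compose then joins the mariadb and nginxpm containers to NET instead of creating a separate default network per file, which is what makes the service name mariadb resolvable from the nginxpm container.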
There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine; however, when I try to open it in a remote container, it throws the error message
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services: redis, mongodb, db and rails.
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name, not to a port on localhost. For example, if you were connecting to Redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
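For example, the connection settings in .env/development.env should name the services (a sketch; the variable names are assumptions, use whatever your app actually reads):

MONGODB_HOST=mongodb
MONGODB_PORT=27017
DATABASE_URL=postgres://postgres@db:5432/proj_development
REDIS_URL=redis://redis:6379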
If this is still not helping, you can try to add links between the containers so that the names given to them as services can be resolved. The file will then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
The links allow hostnames to be defined in the containers.
The link flag is legacy, and in new versions of docker-engine it's not required for user-defined networks; links are also ignored in a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose around, this is one thing to try when troubleshooting.
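As a quick check (not part of the fix), you can verify name resolution from inside the rails container before re-running the rake task:

$ docker-compose exec rails getent hosts mongodb
$ docker-compose exec rails bundle exec rake aws:restore_db

If getent cannot resolve mongodb, the containers are not sharing a network and the compose file needs the changes above.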
I have a Tomcat Docker container and a Filebeat Docker container; both are up and running.
My objective: I need to collect Tomcat logs from the running Tomcat container into the Filebeat container.
Issue: I have no idea how to get the collected log files from the Tomcat container.
What I have tried so far: I have tried to create a Docker volume, add the Tomcat logs to that volume and access that volume from the Filebeat container, but ended up with no success.
Structure: I wrote a docker-compose.yml file under the project Logstash (root directory of the project); here I want to bring up the Elasticsearch, Logstash, Filebeat and Kibana containers from one configuration file. There is also docker-containers (root directory of the other project); here I want to bring up the Tomcat, Nginx and Postgres containers from one configuration file.
Logstash: contains 4 main subdirectories (Filebeat, Logstash, Elasticsearch and Kibana), an ENV file and a docker-compose.yml file. Each subdirectory contains a Dockerfile to pull the image and build the container.
docker-containers: contains 3 main subdirectories (Tomcat, Nginx and Postgres), an ENV file and a docker-compose.yml file. Each subdirectory contains a separate Dockerfile to pull the Docker image and build the container.
Note: I think this basic structure may be helpful to understand my requirements.
docker-compose.yml files
Logstash.docker-compose.yml file
version: '2'
services:
  elasticsearch:
    container_name: OTP-Elasticsearch
    build:
      context: ./elasticsearch
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  filebeat:
    container_name: OTP-Filebeat
    command:
      - "-e"
      - "--strict.perms=false"
    user: root
    build:
      context: ./filebeat
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
  logstash:
    container_name: OTP-Logstash
    build:
      context: ./logstash
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    expose:
      - 5044/tcp
    ports:
      - "9600:9600"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    container_name: OTP-Kibana
    build:
      context: ./kibana
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - logstash
      - filebeat
networks:
  elk:
    driver: bridge
docker-containers.docker-compose.yml file
version: '2'
services:
  # Nginx
  nginx:
    container_name: OTP-Nginx
    restart: always
    build:
      context: ./nginx
      args:
        - comapanycode=${COMPANY_CODE}
        - dbtype=${DB_TYPE}
        - dbip=${DB_IP}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
        - webdirectory=${WEB_DIRECTORY}
    ports:
      - "80:80"
    links:
      - db:db
    volumes:
      - ./log/nginx:/var/log/nginx
    depends_on:
      - db
  # Postgres
  db:
    container_name: OTP-Postgres
    restart: always
    ports:
      - "5430:5430"
    build:
      context: ./postgres
      args:
        - food_db_version=${FOOD_DB_VERSION}
        - dbtype=${DB_TYPE}
        - retail_db_version=${RETAIL_DB_VERSION}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    volumes:
      - .data/db:/octopus_docker/postgresql/data
  # Tomcat
  tomcat:
    container_name: OTP-Tomcat
    restart: always
    build:
      context: ./tomcat
      args:
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    links:
      - db:db
    volumes:
      - ./tomcat/${WARNAME}.war:/usr/local/tomcat/webapps/${WARNAME}.war
    ports:
      - "8080:8080"
    depends_on:
      - db
      - nginx
Additional files:
filebeat.yml (configuration file inside Logstash/Filebeat/config/)
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/*.log
output.logstash:
  hosts: ["logstash:5044"]
Additional info:
The system I am using is Ubuntu 18.04.
My goal is to collect Tomcat logs from the running Tomcat container and forward them to Logstash, filter them, and forward them to Elasticsearch and finally to Kibana for visualization purposes.
For now I can collect local machine (host) logs (/var/log/) and visualize them in Kibana.
My problem:
I need to know the proper way to get the collected Tomcat logs from the Tomcat container and forward them to the Logstash container via the Filebeat container.
Any discussion, answer or help to understand a way to do this is highly appreciated.
Thanks.
So loooong... Create a shared volume among all containers and set up your Tomcat to save its log files into that folder. If you can put all services into one docker-compose.yml, just set up the volume internally:
docker-compose.yml
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
If you need several docker-compose.yml files, create the volume globally in advance with docker volume create logs and map it into both compose files:
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
    external: true
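Applied to your setup, a sketch (assuming Tomcat writes its logs to /usr/local/tomcat/logs and the volume was pre-created with docker volume create logs): mount the shared volume over Tomcat's log directory in the docker-containers compose file, and mount it read-only into the Filebeat container in the Logstash compose file:

# in docker-containers/docker-compose.yml
  tomcat:
    volumes:
      - logs:/usr/local/tomcat/logs

# in Logstash/docker-compose.yml
  filebeat:
    volumes:
      - logs:/usr/local/tomcat/logs:ro

# in both files, at the top level
volumes:
  logs:
    external: true

With that in place, the /usr/local/tomcat/logs/*.log path in your filebeat.yml should pick up the Tomcat log files.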