Docker Compose is not generating a log file for a Spring Boot application

We have developed a project with multiple microservices built on Spring Boot. We are using Docker containers and Docker Compose. We are facing an issue generating the application log file. We have written the following configuration in the application.yml file:
logging:
  file: /data/test/run/logs/x.log
After building the image, if we start a container separately (using docker run imageName), the log file is generated inside the container. But when we bring up our containers with Docker Compose (docker-compose up), using the same images, the log file is not generated in the container.
docker-compose.yml
version: '2'
services:
  lb:
    image: dockercloud/haproxy
    links:
      - x-service
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "1936:1936"
  eureka-service:
    image: x.y.com/registration-server:0.0.2
    ports:
      - "2323:2323"
    environment:
      - APPBINARY=registration-server.jar
    entrypoint:
      - /usr/bin/jarrun.sh
      - QA
  x-service:
    image: x.y.com/x-service:0.2.7
    ports:
      - "4444"
    links:
      - eureka-service
    environment:
      - JAVA_OPTS=-Xms512M -Xmx1024M
      - VIRTUAL_HOST=*/x/*
      - "SPRING_PROFILES_ACTIVE=qa"
      - APPBINARY=x-service.jar
      - environment=qa
    extra_hosts:
      - "service1.test.com:111.11.1.111"
    entrypoint:
      - /usr/bin/jarrun.sh
      - QA
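One way to see whether the file is written at all under Compose (a sketch; the host path ./logs below is an assumption for illustration, not part of the original setup) is to bind-mount the container's log directory to the host and watch it while the stack runs:

  x-service:
    image: x.y.com/x-service:0.2.7
    volumes:
      # Hypothetical bind mount: the container's log directory appears on the
      # host under ./logs, so you can check whether x.log is created when the
      # service is started by docker-compose rather than by docker run.
      - ./logs:/data/test/run/logs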

Related

How to deploy a Rails webpack_dev_server Docker service on AWS?

I have a simple Rails/React app that works with Docker, with 3 services:
- 'database' for Postgres
- 'web' for Rails
- 'webpack_dev_server' for React
In AWS I have:
- built a custom image for nginx,
- set up S3 to hold the ECS configs,
- created a production cluster,
- created private repositories for 'web' and nginx, tagged both images, and pushed them to the repositories,
- created 4 EC2 instances, 2 for the web and 2 for React.
Now I'm ready to create task definitions but I'm not sure how to handle webpack_dev_server (React).
Can we build the image with the same Dockerfile as the web?
For the task definition, should it look like the web's as well?
Here's the docker-compose.yml file that works.
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    env_file:
      - .env/development/database
      - .env/development/web
    environment:
      - WEBPACKER_DEV_SERVER_HOST=webpack_dev_server
      - DOCKERIZED=true
  webpack_dev_server:
    build: .
    command: ./bin/webpack-dev-server
    ports:
      - 3035:3035
    volumes:
      - .:/usr/src/app
      - gem_cache:/gems
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACK_DEV_SERVER=0.0.0.0
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
  gem_cache:
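In production you generally would not run webpack-dev-server at all; the assets are compiled once at image build time and served by Rails or nginx, so there is no webpack_dev_server task to define. A minimal sketch of that idea, assuming a Webpacker-era Rails app and the ruby:2.6 base image (both assumptions, not taken from the question):

  # Hypothetical production Dockerfile: compile webpack assets at build time
  # instead of running the webpack_dev_server service.
  FROM ruby:2.6
  # Webpacker needs Node.js and Yarn to compile the assets
  RUN curl -fsSL https://deb.nodesource.com/setup_12.x | bash - \
      && apt-get install -y nodejs \
      && npm install -g yarn
  WORKDIR /usr/src/app
  COPY . .
  # SECRET_KEY_BASE is a dummy value commonly needed just to run the rake task
  RUN bundle install \
      && RAILS_ENV=production SECRET_KEY_BASE=dummy bundle exec rails assets:precompile
  CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

With the assets precompiled into the web image, the ECS task definitions only need the web and database containers.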

How do I build and run an api-platform image in a production Docker container?

I have followed the api-platform tutorial and successfully built and started the application using Docker on my local machine.
I have a production server running Ubuntu 16.04.5 LTS and a newly installed Docker version 18.06.1-ce.
How would I build this code on my local machine and run it on the Docker server?
I have also looked at the Deploying API Platform Applications documentation, but I am not sure how to use it.
I am struggling to understand how to get api-platform from my local machine onto the server.
This is the docker-compose.yml file; try running docker-compose up -d with it:
version: '3.4'
services:
  php:
    image: ${CONTAINER_REGISTRY_BASE}/php
    build:
      context: ./api
      target: api_platform_php
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - db
    # Comment out these volumes in production
    volumes:
      - ./api:/srv/api:rw,cached
      # If you develop on Linux, uncomment the following line to use a bind-mounted host directory instead
      # - ./api/var:/srv/api/var:rw
  api:
    image: ${CONTAINER_REGISTRY_BASE}/nginx
    build:
      context: ./api
      target: api_platform_nginx
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - php
    # Comment out this volume in production
    volumes:
      - ./api/public:/srv/api/public:ro
    ports:
      - "8080:80"
  cache-proxy:
    image: ${CONTAINER_REGISTRY_BASE}/varnish
    build:
      context: ./api
      target: api_platform_varnish
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/php
        - ${CONTAINER_REGISTRY_BASE}/nginx
        - ${CONTAINER_REGISTRY_BASE}/varnish
    depends_on:
      - api
    volumes:
      - ./api/docker/varnish/conf:/usr/local/etc/varnish:ro
    tmpfs:
      - /usr/local/var/varnish:exec
    ports:
      - "8081:80"
  db:
    # In production, you may want to use a managed database service
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_USER=api-platform
      # You should definitely change the password in production
      - POSTGRES_PASSWORD=!ChangeMe!
    volumes:
      - db-data:/var/lib/postgresql/data:rw
      # You may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./docker/db/data:/var/lib/postgresql/data:rw
    ports:
      - "5432:5432"
  client:
    # Use a static website hosting service in production
    # See https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#deployment
    image: ${CONTAINER_REGISTRY_BASE}/client
    build:
      context: ./client
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/client
    env_file:
      - ./client/.env
    volumes:
      - ./client:/usr/src/client:rw,cached
      - /usr/src/client/node_modules
    ports:
      - "80:3000"
  admin:
    # Use a static website hosting service in production
    # See https://github.com/facebookincubator/create-react-app/blob/master/packages/react-scripts/template/README.md#deployment
    image: ${CONTAINER_REGISTRY_BASE}/admin
    build:
      context: ./admin
      cache_from:
        - ${CONTAINER_REGISTRY_BASE}/admin
    volumes:
      - ./admin:/usr/src/admin:rw,cached
      - /usr/src/admin/node_modules
    ports:
      - "81:3000"
  h2-proxy:
    # Don't use this proxy in prod
    build:
      context: ./h2-proxy
    depends_on:
      - client
      - admin
      - api
      - cache-proxy
    ports:
      - "443:443"
      - "444:444"
      - "8443:8443"
      - "8444:8444"
volumes:
  db-data: {}
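Since every image is tagged with ${CONTAINER_REGISTRY_BASE}, a common workflow (a sketch, not from the api-platform docs; the registry address below is a placeholder assumption) is to build and push the images from the local machine, then pull and start them on the server:

  # On the local machine: build every image referenced by the compose file
  # and push them to a registry both machines can reach
  export CONTAINER_REGISTRY_BASE=registry.example.com/myproject   # assumption: your registry
  docker-compose build
  docker-compose push

  # On the production server, with the same compose file and variable set:
  docker-compose pull
  docker-compose up -d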

Collect Tomcat logs from a Tomcat Docker container into a Filebeat Docker container

I have a Tomcat Docker container and a Filebeat Docker container; both are up and running.
My objective: I need to collect Tomcat logs from the running Tomcat container in the Filebeat container.
Issue: I have no idea how to get the collected log files out of the Tomcat container.
What I have tried so far: I have tried to create a Docker volume, add the Tomcat logs to that volume, and access that volume from the Filebeat container, but ended up with no success.
Structure: I have written a docker-compose.yml file under the project Logstash (the root directory of that project); here I want to bring up the Elasticsearch, Logstash, Filebeat, and Kibana containers from one configuration file. A second root directory, docker-containers, has its own structure; there I want to bring up the Tomcat, Nginx, and Postgres containers from one configuration file.
Logstash: contains 4 main subdirectories (Filebeat, Logstash, Elasticsearch, and Kibana), an ENV file, and a docker-compose.yml file. Each subdirectory contains a Dockerfile to pull the image and build the container.
docker-containers: contains 3 main subdirectories (Tomcat, Nginx, and Postgres), an ENV file, and a docker-compose.yml file. Each subdirectory contains a separate Dockerfile to pull the Docker image and build the container.
Note: I think this basic structure may be helpful for understanding my requirements.
docker-compose.yml files
Logstash docker-compose.yml file:
version: '2'
services:
  elasticsearch:
    container_name: OTP-Elasticsearch
    build:
      context: ./elasticsearch
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  filebeat:
    container_name: OTP-Filebeat
    command:
      - "-e"
      - "--strict.perms=false"
    user: root
    build:
      context: ./filebeat
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
  logstash:
    container_name: OTP-Logstash
    build:
      context: ./logstash
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    expose:
      - 5044/tcp
    ports:
      - "9600:9600"
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    container_name: OTP-Kibana
    build:
      context: ./kibana
      args:
        - ELK_VERSION=${ELK_VERSION}
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - logstash
      - filebeat
networks:
  elk:
    driver: bridge
docker-containers docker-compose.yml file:
version: '2'
services:
  # Nginx
  nginx:
    container_name: OTP-Nginx
    restart: always
    build:
      context: ./nginx
      args:
        - comapanycode=${COMPANY_CODE}
        - dbtype=${DB_TYPE}
        - dbip=${DB_IP}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
        - webdirectory=${WEB_DIRECTORY}
    ports:
      - "80:80"
    links:
      - db:db
    volumes:
      - ./log/nginx:/var/log/nginx
    depends_on:
      - db
  # Postgres
  db:
    container_name: OTP-Postgres
    restart: always
    ports:
      - "5430:5430"
    build:
      context: ./postgres
      args:
        - food_db_version=${FOOD_DB_VERSION}
        - dbtype=${DB_TYPE}
        - retail_db_version=${RETAIL_DB_VERSION}
        - dbname=${DB_NAME}
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    volumes:
      - .data/db:/octopus_docker/postgresql/data
  # Tomcat
  tomcat:
    container_name: OTP-Tomcat
    restart: always
    build:
      context: ./tomcat
      args:
        - dbuser=${DB_USER}
        - dbpassword=${DB_PASSWORD}
    links:
      - db:db
    volumes:
      - ./tomcat/${WARNAME}.war:/usr/local/tomcat/webapps/${WARNAME}.war
    ports:
      - "8080:8080"
    depends_on:
      - db
      - nginx
Additional files:
filebeat.yml (configuration file inside Logstash/Filebeat/config/):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/tomcat/logs/.*log
output.logstash:
  hosts: ["logstash:5044"]
Additional Info:
The system I am using is Ubuntu 18.04.
My goal is to collect Tomcat logs from the running Tomcat container, forward them to Logstash for filtering, then pass the filtered logs on to Elasticsearch, and finally to Kibana for visualization.
For now I can collect local machine (host) logs from /var/log/ and visualize them in Kibana.
My Problem:
I need to know the proper way to get the collected Tomcat logs out of the Tomcat container and forward them to the Logstash container via the Filebeat container.
Any discussion, answer, or help in understanding a way to do this is highly appreciated.
Thanks.
Create a shared volume among the containers and set up Tomcat to write its log files into that folder. If you can put all services into one docker-compose.yml, just set up the volume internally:
docker-compose.yml
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
If you need several docker-compose.yml files, create the volume globally in advance with docker volume create logs and map it into both compose files:
version: '3'
services:
  one:
    ...
    volumes:
      - logs:/var/log/shared
  two:
    ...
    volumes:
      - logs:/var/log/shared
volumes:
  logs:
    external: true
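Applied to this question (a sketch; only the relevant lines are shown, and the volume name logs is an assumption), the Tomcat service in docker-containers and the Filebeat service in Logstash would both mount the pre-created volume over Tomcat's log directory:

  # created once on the host: docker volume create logs

  # docker-containers/docker-compose.yml
  tomcat:
    volumes:
      # Tomcat writes its log files into the shared volume
      - logs:/usr/local/tomcat/logs

  # Logstash/docker-compose.yml
  filebeat:
    volumes:
      # Filebeat reads the same files, read-only
      - logs:/usr/local/tomcat/logs:ro

with the external logs volume declared at the bottom of both files, as in the answer above.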

Deploy an existing PrestaShop store to a server using Docker

I've created a PrestaShop store on a server. Is there any way to use Docker for my store and migrate it to another server? I know I'll need docker-compose, but to be honest I don't know what to do with the files on the current server.
OK, so I dug into the problem, and the solution to my question is below. What I did was pull the original PrestaShop image and copy my files into it.
The next step was to use the mariadb image. I had a backup.sql file exported from the previous store's phpMyAdmin:
version: '2'
services:
  prestashop:
    image: prestashop
    ports:
      - 80:80
    links:
      - mariadb:mariadb
    depends_on:
      - mariadb
    volumes:
      - ./src:/var/www/html
      - ./src/modules:/var/www/html/modules
      - ./src/themes:/var/www/html/themes
      - ./src/override:/var/www/html/override
    environment:
      - PS_DEV_MODE=1
      - DB_SERVER=mariadb
      - DB_USER=root
      - DB_PASSWD=root
      - DB_NAME=prestashop
      - PS_INSTALL_AUTO=0
  mariadb:
    image: mariadb
    volumes:
      # Mount the dump into the image's init directory so it is imported on first start
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mariadb
    ports:
      - 81:80
    environment:
      - PMA_HOST=mariadb
      - PMA_USER=root
      - PMA_PASSWORD=root
The biggest issue is the IP when using docker-machine. Keep in mind that if you are using Docker Toolbox the IP is 192.168.99.100, but in Docker for Windows you reach the containers on localhost.
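On Docker Toolbox you can check the VM's address directly (the machine name default is the Toolbox default, an assumption here):

  # prints the VM's IP, typically 192.168.99.100
  docker-machine ip default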
You can use this docker-compose.yml:
version: "3"
services:
prestashop:
image: prestashop/prestashop
networks:
mycustomnetwork:
ports:
- 82:80
links:
- mariadb:mariadb
depends_on:
- mariadb
volumes:
- ./src:/var/www/html
- ./src/modules:/var/www/html/modules
- ./src/themes:/var/www/html/themes
- ./src/override:/var/www/html/override
environment:
- PS_DEV_MODE=1
- DB_SERVER=mariadb
- DB_USER=root
- DB_PASSWD=mycustompassword
- DB_NAME=prestashop
- PS_INSTALL_AUTO=0
mariadb:
image: mariadb
networks:
mycustomnetwork:
volumes:
- presta_db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=mycustompassword
- MYSQL_DATABASE=prestashop
phpmyadmin:
image: phpmyadmin/phpmyadmin
networks:
mycustomnetwork:
links:
- mariadb:mariadb
ports:
- 1235:80
depends_on:
- mariadb
environment:
- PMA_HOST=mariadb
- PMA_USER=root
- PMA_PASSWORD=mycustompassword
volumes:
presta_db:
networks:
mycustomnetwork:
external: true
Replace mycustomnetwork and mycustompassword with your own values.
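Because the network is declared with external: true, Compose will not create it for you; create it once beforehand:

  docker network create mycustomnetwork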
Then run docker-compose up.
Web URL: localhost:82
phpMyAdmin URL: localhost:1235
You can follow this tutorial to set up PrestaShop in a Docker environment:
https://hub.docker.com/r/prestashop/prestashop/
You will need to add your current files to the PrestaShop container and most likely import your database into a MySQL container. Docker Compose will be used to launch those containers together. Once this is done, you will be able to deploy the whole thing anywhere.
You should also include a bridge network in your compose file; some examples that might work are at https://runnable.com/docker/docker-compose-networking.
This way the db can be configured so that it is accessed only by PrestaShop on the local Docker network, without being exposed outside. The PrestaShop database host can also point at the service name of the running container, in case the IP changes. All you would leave exposed is port 80 on the app.
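A minimal sketch of that idea (service names and image tags assumed, not taken from the answers above): omit the ports: section on the database service so it is reachable only by service name on the Compose bridge network, and publish only the shop itself:

  version: '3'
  services:
    prestashop:
      image: prestashop/prestashop
      ports:
        - "80:80"              # only the app is published on the host
      environment:
        - DB_SERVER=mariadb    # point at the service name, not an IP
    mariadb:
      image: mariadb
      environment:
        - MYSQL_ROOT_PASSWORD=mycustompassword
      # no ports: entry, so the database is not exposed outside the network
  networks:
    default:
      driver: bridge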

Docker Compose and rabbitmq docker image plugins

I would like to have a customized image based on rabbitmq, and I would like to create that customized image with docker-compose. I want the management plugin started.
If I use a Compose file such as:
rabbitmq: # https://registry.hub.docker.com/_/rabbitmq/
  image: rabbitmq:3-management
  ports:
    - 5672:5672
    - 15672:15672
    - 8080:8080
it does bring up the management plugin.
But if I use this Compose file:
version: '2'
services:
  # Rabbit service. See https://hub.docker.com/_/rabbitmq/
  rabbit:
    container_name: dev-rabbit
    image: rabbitmq-our:3-management
    build: ./rabbitmq-our
    environment:
      - RABBITMQ_DEFAULT_USER=rabbit
      - RABBITMQ_DEFAULT_PASS=mq
      - RABBITMQ_DEFAULT_VHOST=my_vhost
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "15672:15672"
      - "8080:8080"
and a Dockerfile in the rabbitmq-our/ folder such as:
FROM rabbitmq
then no plugins are started and I do not get the management console.
How can I specify that the "3-management" plugin should run when my custom image starts?
My compose file looks like this, and the RabbitMQ admin plugin works:
rabbit:
  container_name: dev_rabbit
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=user
    - RABBITMQ_DEFAULT_PASS=user
  ports:
    - "5672:5672"
    - "15672:15672"
I took it from the official Docker Hub page.
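If you do want to build your own image rather than using the stock one, a minimal sketch of the Dockerfile (assuming the official rabbitmq base image; the plugin-enable command is documented on its Docker Hub page) is:

  # rabbitmq-our/Dockerfile
  # Base the custom image on the management variant so the plugin ships enabled
  FROM rabbitmq:3-management

  # Alternatively, start from the plain image and enable the plugin at build time:
  # FROM rabbitmq:3
  # RUN rabbitmq-plugins enable --offline rabbitmq_management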
