Liferay hangs in Docker - docker

I am trying to set up Liferay in docker using docker-compose for the first time. I have to admit, I am new to Liferay.
I used the Liferay Gradle task ./gradlew buildDockerImage to generate a docker folder (with all required build files) under build, which has everything needed to set up a working container. I then wrote a docker-compose file that uses that image and the generated files to build a self-contained Liferay project containing MySQL, Elasticsearch, Kibana, and Liferay. My docker-compose file looks like this:
version: '3.8'
services:
  mysql:
    container_name: mysql
    image: mysql:${MYSQL_VERSION}
    deploy:
      resources:
        limits:
          memory: 1G
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - ${MYSQL_PORT}:3306
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mysqldata:/var/lib/mysql
      - ./scripts/data.sql:/docker-entrypoint-initdb.d/data.sql
    networks:
      - lfnet
    restart: unless-stopped
  es01:
    container_name: es1
    depends_on:
      - mysql
    image: docker.elastic.co/elasticsearch/elasticsearch:${ES_STACK_VERSION}
    deploy:
      resources:
        limits:
          memory: 1G
    networks:
      - lfnet
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - bootstrap.system_call_filter=false
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - ES_SKIP_SET_KERNEL_PARAMETERS=true
      - discovery.zen.ping.unicast.hosts=es2,es3
      - discovery.zen.minimum_master_nodes=2
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - xpack.monitoring.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es02:
    container_name: es2
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${ES_STACK_VERSION}
    deploy:
      resources:
        limits:
          memory: 1G
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - lfnet
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - bootstrap.system_call_filter=false
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - ES_SKIP_SET_KERNEL_PARAMETERS=true
      - discovery.zen.ping.unicast.hosts=es1,es3
      - discovery.zen.minimum_master_nodes=2
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - xpack.monitoring.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es03:
    container_name: es3
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${ES_STACK_VERSION}
    deploy:
      resources:
        limits:
          memory: 1G
    networks:
      - lfnet
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - bootstrap.system_call_filter=false
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - ES_SKIP_SET_KERNEL_PARAMETERS=true
      - discovery.zen.ping.unicast.hosts=es1,es2
      - discovery.zen.minimum_master_nodes=2
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - xpack.monitoring.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    container_name: kibana
    depends_on:
      - es01
      - es02
      - es03
    image: docker.elastic.co/kibana/kibana:${ES_STACK_VERSION}
    deploy:
      resources:
        limits:
          memory: 1G
    networks:
      - lfnet
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_URL=http://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
  liferay:
    container_name: liferay
    deploy:
      resources:
        limits:
          memory: 1G
    build:
      context: ../build/docker
      dockerfile: ../../docker/Dockerfile
    depends_on:
      - es01
      - es02
      - es03
      - mysql
    environment:
      LIFERAY_RETRY_PERIOD_JDBC_PERIOD_ON_PERIOD_STARTUP_PERIOD_DELAY: 10
      LIFERAY_RETRY_PERIOD_JDBC_PERIOD_ON_PERIOD_STARTUP_PERIOD_MAX_PERIOD_RETRIES: 10
      LIFERAY_JVM_OPTS: "-Xms512m -Xmx512m"
    ports:
      - ${LF_PORT}:8080
      - ${LF_GOGO_PORT}:11311
    networks:
      - lfnet
# Volumes
volumes:
  liferay-data:
  liferay-osgi-configs:
  liferay-osgi-marketplace:
  liferay-osgi-modules:
  liferay-osgi-war:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
  mysqldata:
    driver: local
# Networks
networks:
  lfnet:
    driver: bridge
So far all the containers seem to work fine except Liferay. For some reason it gets stuck here and never reaches a point where it is accessible:
[LIFERAY] To SSH into this container, run: "docker exec -it 255c0c98fb63 /bin/bash".
[LIFERAY] Executing scripts in /usr/local/liferay/scripts/pre-configure:
[LIFERAY] Executing 100_liferay_image_setup.sh.
[LIFERAY] Copying /home/liferay/configs/local config files:
/home/liferay/configs/local
└── portal-ext.properties
[LIFERAY] ... into /opt/liferay.
[LIFERAY] The directory /mnt/liferay/files does not exist. Create the directory $(pwd)/xyz123/files on the host operating system to create the directory /mnt/liferay/files on the container. Files in /mnt/liferay/files will be copied to /opt/liferay before Liferay Portal starts.
[LIFERAY] Executing scripts in /mnt/liferay/scripts:
[LIFERAY] The directory /mnt/liferay/deploy is ready. Copy files to $(pwd)/xyz123/deploy on the host operating system to deploy modules to Liferay Portal at runtime.
[LIFERAY] Starting Liferay Portal. To stop the container with CTRL-C, run this container with the option "-it".
My Liferay Dockerfile looks like this:
FROM liferay/portal:7.2.0-ga1
ENV LIFERAY_WORKSPACE_ENVIRONMENT=dev
COPY --chown=liferay:liferay deploy /mnt/liferay/deploy
COPY --chown=liferay:liferay patching /mnt/liferay/patching
COPY --chown=liferay:liferay scripts /mnt/liferay/scripts
COPY --chown=liferay:liferay configs /home/liferay/configs
COPY --chown=liferay:liferay 100_liferay_image_setup.sh /usr/local/liferay/scripts/pre-configure/100_liferay_image_setup.sh
I am confused: I do not understand why the Liferay container hangs at "Starting Liferay Portal. To stop the container with CTRL-C, run this container with the option '-it'."
I need help.
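To narrow down where it is stuck, a few commands worth trying (a debugging sketch, not a confirmed fix; the tomcat-* path below is a guess at the bundled Tomcat directory inside the liferay/portal image):

docker logs -f liferay        # watch for a stack trace after the last [LIFERAY] line
docker stats liferay          # see whether the container is pressing against its 1G limit
docker exec -it liferay /bin/bash
tail -f /opt/liferay/tomcat-*/logs/catalina.out   # Tomcat's own startup log

Two possible causes of this symptom would be the JVM being OOM-killed under a tight memory limit (Tomcat plus -Xmx512m may exceed 1G once metaspace is counted) or Liferay silently retrying the JDBC connection to MySQL, which the LIFERAY_RETRY_PERIOD_JDBC_* variables above suggest is anticipated.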

Related

Kibana server is not ready yet (Mac M1)

I am trying to run Elasticsearch and Kibana in Docker containers, but I keep getting the message "Kibana server is not ready yet" when I connect to the web UI (http://localhost:5601/).
My system:
Mac M1 OS
Docker version: 20.10.22, build 3a2c30b
Below is my YAML file:
version: '3.6'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    platform: linux/amd64
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:7.16.1
    platform: linux/amd64
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=["http://es01:9200"]
    depends_on:
      - es01
    networks:
      - elastic
volumes:
  data01:
    driver: local
networks:
  elastic:
    driver: bridge
docker ps -a looks fine.
I have no idea what is wrong. Please let me know.
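One thing that stands out (a possible cause, not a verified fix): the file defines only es01, yet its discovery settings point at es02 and es03 and list three initial master nodes, so the node may wait forever for a cluster that cannot form, and Kibana keeps reporting it is not ready. If a single node is intended, a minimal sketch of the es01 environment would be:

  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    platform: linux/amd64
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      # single-node discovery: no other master-eligible nodes to wait for
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"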

Access a Localstack Elasticsearch domain in a SAM local Lambda

I created an Elasticsearch domain in Localstack and can access the endpoint (from Postman, i.e. a non-dockerized application). But when I call the same ES URL from the SAM Lambda application using Axios,
http://host.docker.internal:4566/es/us-east-1/idx/_all/_search
it returns:
"code":"ERR_BAD_REQUEST","status":404
Yet when I check the Localstack health using
http://host.docker.internal:4566/health
it returns the running AWS services.
Here is my docker-compose file for Localstack:
version: "3.9"
services:
elasticsearch:
container_name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
network_mode: bridge
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
network_mode: bridge
ports:
- "4566:4566"
depends_on:
- elasticsearch
environment:
- ES_CUSTOM_BACKEND=http://elasticsearch:9200
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
- ES_ENDPOINT_STRATEGY=path
# - LOCALSTACK_HOSTNAME=localstack
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
links:
- elasticsearch
volumes:
data01:
driver: local
Do I need to modify the network in Docker? Please help me resolve this error.
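One detail that may matter here (a suggestion, not a verified fix): sam local runs each Lambda in its own container, so name resolution differs from what the compose services see. The SAM CLI accepts a --docker-network flag to attach the Lambda container to an existing Docker network; a sketch, where MyFunction is a hypothetical function name and the network name would need to match your setup (note that container-name DNS only works on user-defined networks, not the default bridge):

# attach the local Lambda container to the same Docker network as Localstack
sam local invoke MyFunction --docker-network bridge

# from a shared user-defined network, the path-strategy endpoint could then be
# addressed via the Localstack container rather than host.docker.internal
# (illustrative URL shape):
# http://localstack_main:4566/es/us-east-1/idx/_all/_search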

How do I change the default URL for Logstash running under docker?

I am attempting to run an Elasticsearch cluster with Kibana and Logstash using docker-compose.
The problem I'm running into is that Logstash keeps looking for the Elasticsearch DB under the hostname http://elasticsearch:9200. Here's an example of the Logstash output:
logstash | [2021-08-23T15:30:03,534][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash | [2021-08-23T15:30:03,540][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
I'm also attaching my docker-compose.yml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.14.0
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
    networks:
      - elastic
  logstash:
    image: logstash:7.14.0
    container_name: logstash
    hostname: localhost
    environment:
      ELASTICSEARCH_HOST: localhost
      LS_JAVA_OPTS: "-Xmx1g -Xms1g"
    ports:
      - 9600:9600
      - 8089:8089
    volumes:
      - ./logstash/logstash.yml
      - ./logstash/pipelines.yml
      - ./logstash/data
    command: --config.reload.automatic
    links:
      - es01:es01
    depends_on:
      - es01
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
For some reason, putting the host into the docker-compose YAML file doesn't seem to work. Where should I go to point Logstash to localhost rather than 'elasticsearch'?
Thanks
I don't think Logstash has any such variable as ELASTICSEARCH_HOST... Plus, localhost would refer to the Logstash container itself, not something else. And don't set hostname: localhost for a container...
You have no container/service named elasticsearch; you have es01-es03, which is why it is unable to connect (notice that Kibana uses the correct addresses). You would modify that address in the Logstash config/pipeline files.
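To make the second comment concrete, a sketch of a Logstash pipeline output pointing at the compose service name (the input block is illustrative; the real pipeline lives in whatever ./logstash/pipelines.yml points at):

# pipeline .conf (illustrative)
input {
  beats { port => 5044 }
}
output {
  elasticsearch {
    # use the compose service name, not "elasticsearch" or "localhost"
    hosts => ["http://es01:9200"]
  }
}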

Can not share volume between services defined in docker-compose

I'm running Docker for Mac version 17.12.0-ce-mac55.
I have a docker-compose file that I'm converting from docker-compose version 3 to version 2 to work better with OpenShift.
---
version: '2'
services:
  fpm:
    build:
      context: .
      dockerfile: Dockerfile.openshift
      args:
        TIMEZONE: America/Chicago
        APACHE_DOCUMENT_ROOT: /usr/local/apache2/htdocs
    image: widget-fpm
    restart: always
    depends_on:
      - es
      - db
    environment:
      # taken from sample.env
      - TIMEZONE=${TIMEZONE}
      - APACHE_DOCUMENT_ROOT=/usr/local/apache2/htdocs
      - GET_HOSTS_FROM=dns
      - SYMFONY__DATABASE__HOST=db
      - SYMFONY__DATABASE__PORT=5432
      - SYMFONY__DATABASE__NAME=widget
      - SYMFONY__DATABASE__USER=widget
      - SYMFONY__DATABASE__PASSWORD=widget
      - SYMFONY__DATABASE__SCHEMA=widget
      - SYMFONY__DATABASE__DRIVER=pdo_pgsql
      - SYMFONY_ENV=prod
      - SYMFONY__ELASTICSEARCH__HOST=es:9200
      - SYMFONY__SECRET=dsakfhakjhsdfjkhajhjds
      - SYMFONY__LOCALE=en
      - SYMFONY__RBAC__HOST=rbac
      - SYMFONY__RBAC__PROTOCOL=http
      - SYMFONY__RBAC__CONNECT__PATH=v1/connect
      - SYMFONY__PROJECT_URL=http://localhost
      - SYMFONY__APP__NAME=widget
      - SYMFONY__CURRENT__API__VERSION=1
    volumes:
      # use docroot env to change this directory
      - src:/usr/local/apache2/htdocs
      - symfony-cache:/usr/local/apache2/htdocs/app/cache
      - symfony-log:/usr/local/apache2/htdocs/app/logs
    expose:
      - "9000"
    networks:
      - client-network
      - data-network
    labels:
      kompose.service.expose: "false"
  webserver:
    build: ./provisioning/webserver/apache
    image: widget_web
    restart: "no"
    ports:
      - "80"
      - "443"
    volumes_from:
      - fpm:ro
    depends_on:
      - fpm
    networks:
      - client-network
    labels:
      com.singlehop.description: "Widget Service Web Server"
      com.singlehop.development: "false"
      kompose.service.expose: "true"
      kompose.service.type: "nodeport"
  db:
    build: ./provisioning/database/postgres
    image: widget_postgres
    restart: always
    volumes:
      - data-volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: widget
      POSTGRES_PASSWORD: widget
    expose:
      - "5432"
    networks:
      - data-network
    labels:
      com.singlehop.description: "Widget Service Postgres Database Server"
      com.singlehop.development: "false"
      io.openshift.non-scalable: "true"
      kompose.service.expose: "false"
      kompose.volume.size: 100Mi
  es:
    image: elasticsearch:5.6
    restart: always
    environment:
      #- cluster.name=docker-cluster
      #- bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    command: ["-Ecluster.name=docker-cluster", "-Ebootstrap.memory_lock=true"]
    ulimits:
      memlock:
        soft: -1
        hard: -1
    labels:
      com.singlehop.description: "Generic Elasticsearch5 DB"
      com.singlehop.development: "false"
      kompose.service.expose: "false"
      kompose.volume.size: 100Mi
    volumes:
      - es-data:/usr/share/elasticsearch/data
    expose:
      - "9200-9300"
    networks:
      - data-network
  migration:
    # #todo can we use the exact same build/image I created above?
    image: singlehop/widget-fpm
    environment:
      # taken from sample.env
      - TIMEZONE=America/Chicago
      - APACHE_DOCUMENT_ROOT=/usr/local/apache2/htdocs
      - GET_HOSTS_FROM=dns
      - SYMFONY__DATABASE__HOST=db
      - SYMFONY__DATABASE__PORT=5432
      - SYMFONY__DATABASE__NAME=widget
      - SYMFONY__DATABASE__USER=widget
      - SYMFONY__DATABASE__PASSWORD=widget
      - SYMFONY__DATABASE__SCHEMA=widget
      - SYMFONY__DATABASE__DRIVER=pdo_pgsql
      - SYMFONY_ENV=prod
      - SYMFONY__ELASTICSEARCH__HOST=es:9200
      - SYMFONY__SECRET=dsakfhakjhsdfjkhajhjds
      - SYMFONY__LOCALE=en
      - SYMFONY__PROJECT_URL=http://localhost
      - SYMFONY__APP__NAME=widget
      - SYMFONY__CURRENT__API__VERSION=1
    entrypoint: ["/usr/local/bin/php", "app/console", "--no-interaction"]
    command: doctrine:migrations:migrate
    volumes:
      - src:/usr/local/apache2/htdocs
    depends_on:
      - db
    networks:
      - data-network
    labels:
      com.singlehop.description: "Widget Automated Symfony Migration"
      com.singlehop.development: "false"
volumes:
  src: {}
  data-volume: {}
  es-data: {}
  symfony-cache: {}
  symfony-log: {}
networks:
  client-network:
  data-network:
I'm using the fpm service to act like a data container and share PHP code with the webserver service. For some reason the named volume src is not being shared with the webserver service/container. I've tried both setting volumes directly and using volumes_from.
I'm assuming this is possible, and I feel it would be bad practice to keep another copy of the source code in the widget_web Dockerfile.
Update: the depends_on in the fpm service was breaking the named volume src. When I removed the depends_on declaration, it worked as I assumed it would. I can't tell if this is a bug or working as designed.
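For reference, a sketch of the alternative that avoids volumes_from entirely: mount the same named volume in both services (compose v2 syntax, read-only on the consumer side):

  webserver:
    build: ./provisioning/webserver/apache
    volumes:
      # the same named volume the fpm service populates
      - src:/usr/local/apache2/htdocs:ro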

Docker Rails Elasticsearch

I am trying to implement Elasticsearch in my Rails web app. I am using Docker. I used this link for reference. My docker-compose.yml file is:
mysql:
  image: mysql:5.6.34
  ports:
    - "3002:3002"
  volumes_from:
    - dbdata
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=dev
dbdata:
  image: tianon/true
  volumes:
    - /var/lib/mysql
web:
  build: .
  environment:
    RAILS_ENV: development
  ports:
    - '3000:3000'
  volumes_from:
    - appdata
  links:
    - "mysql"
    - elasticsearch
appdata:
  image: tianon/true
  volumes:
    - ".:/workspace"
elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  mem_limit: 1g
  cap_add:
    - IPC_LOCK
  volumes:
    - /usr/share/elasticsearch/data
When I try to run Student.__elasticsearch__.create_index! force: true as indicated in the link above, I get an error.

You need to set the ELASTICSEARCH_URL ENV variable to the correct value:
ELASTICSEARCH_URL="http://<ip-of-your-docker-container>:9200"
As you have a linked network, you can provide it as below:
ELASTICSEARCH_URL="http://elasticsearch:9200"
Links allow you to define extra aliases by which a service is reachable from another service. Any service can reach any other service at that service's name.
If no ENV variable is set, your Rails app will use http://localhost:9200.
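A sketch of wiring that into the compose file above (v1 syntax to match; assuming the app reads ELASTICSEARCH_URL from the environment):

web:
  build: .
  environment:
    RAILS_ENV: development
    # point the Rails Elasticsearch client at the linked container
    ELASTICSEARCH_URL: "http://elasticsearch:9200"
  links:
    - elasticsearch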