Running docker-compose up exits with error code 0 - docker

docker-compose.yml
version: "3.3"
services:
sonarqube:
container_name: sonarqube
image: sonarqube:7.9.2-community
ports:
- "9000:9000"
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
- SONARQUBE_JDBC_USERNAME=sonar
- SONARQUBE_JDBC_PASSWORD=sonar
networks:
- sonarnet
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
container_name: sonardb
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
sonarscanner:
container_name: sonarscanner
image: newtmitch/sonar-scanner
networks:
- sonarnet
volumes:
- sonarvol:/usr/src
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_logs:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
sonarvol:

There are two containers exiting, each with its own cause:
sonarqube initially fails with "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]". To solve this, run the following command on your host machine:
sysctl -w vm.max_map_count=262144
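To make the change survive a reboot (a sketch; the drop-in file name is my own choice and distros vary in where they keep sysctl configuration), persist it in a sysctl configuration file:
# persist the setting across reboots (hypothetical file name)
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-sonarqube.conf
# reload settings from all sysctl configuration files
sudo sysctl --system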
Once this is solved, there is a second problem:
the sonarscanner container is launched before sonarqube is up, causing this error: ERROR: SonarQube server [http://sonarqube:9000] can not be reached. To solve this, add an on-failure restart policy under your scanner in the compose file:
sonarscanner:
  [...]
  restart: on-failure
Once this problem is also solved, the scanner exits 0 (success) and the run completes, after printing (which seems to be the normal behavior):
INFO: ANALYSIS SUCCESSFUL, you can browse http://sonarqube:9000/dashboard?id=MyProjectKey
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
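A more deterministic alternative to retry loops (my sketch, not part of the original answer) is to gate the scanner on a SonarQube health check. This assumes curl is available inside the sonarqube image and a Compose version that supports depends_on conditions:
services:
  sonarqube:
    healthcheck:
      # poll SonarQube's status API until it reports ready
      test: ["CMD-SHELL", "curl -sf http://localhost:9000/api/system/status || exit 1"]
      interval: 10s
      retries: 30
  sonarscanner:
    depends_on:
      sonarqube:
        condition: service_healthy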

Related

Elasticsearch Docker Container Stops 10 Seconds After Start in Ubuntu

I ran a docker-compose file to set up Elasticsearch and Kibana on Ubuntu 18.04 LTS. The Kibana container is up and running just fine, but Elasticsearch goes down after about 10 seconds. I have restarted the containers and the Docker service several times and still get the same result. I've been on this all day and am hoping to get some help.
Docker-Compose file.
version: "3.0"
services:
elasticsearch:
container_name: es-container
image: docker.elastic.co/elasticsearch/elasticsearch:7.16.3
environment:
- xpack.security.enabled=true
- xpack.security.audit.enabled=true
- "discovery.type=single-node"
- ELASTIC_PASSWORD=secretpassword
networks:
- es-net
ports:
- 9200:9200
kibana:
container_name: kb-container
image: docker.elastic.co/kibana/kibana:7.16.3
environment:
- ELASTICSEARCH_HOSTS=http://es-container:9200
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=secretpassword
networks:
- es-net
depends_on:
- elasticsearch
ports:
- 5601:5601
networks:
es-net:
driver: bridge
I also checked the logs on the es-container, and they displayed:
Created elasticsearch keystore in
/usr/share/elasticsearch/config/elasticsearch.keystore
Audit logging can only be enabled with a paid ES subscription, and you don't provide any license info to your container.
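A minimal sketch of the fix, per the log message above: drop the audit setting (it needs a paid license) and keep the rest of the environment unchanged:
elasticsearch:
  environment:
    - xpack.security.enabled=true
    # xpack.security.audit.enabled removed: requires a paid subscription
    - "discovery.type=single-node"
    - ELASTIC_PASSWORD=secretpassword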

Docker uses an undefined network

Hi Stack Overflow fellows,
I am facing an issue while running docker-compose up, which runs Jenkins locally. The complete compose file is as follows:
version: '2.3'
services:
  jenkins:
    container_name: jenkins
    build: ./master
    image: jenkins_casc
    environment:
      - CASC_JENKINS_CONFIG=/var/jenkins_casc/jenkins.yaml
      - SECRETS=/var/jenkins_casc/secrets
    ports:
      - "8080:8080"
    volumes:
      - jenkins_master_home:/var/jenkins_home
  jenkins_slave_docker:
    container_name: jenkins_agent_docker
    build: ./agent
    image: jenkins_agent_docker
    init: true
    environment:
      - JENKINS_AGENT_SSH_PUBKEY=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0xJ5n9MY0PFBR/aCHSb8JBQgbIUo0C/bPlaxM9v0uCT2CQJvNyrHUfJKaM9wJsdT7wdKBUIvhODdfoE7kc59j0WpO5TQ5Q2MeG7fpQAalM0ATwv/o7hCTvWev5gpJPSsIg9N/+VusO2R4V1H7LpZm65hHL/0lt9SmvtZzQBR+lt5IhrliEMZpo1UdNql/ueR6Em3mFW/tJvprBD445xTa0kxACGXdMI3nF2+SF49oXhTPjNFKSJilWDsoWzf9swyIf1vbH6zr3slMm7jUvOSCC3gGcqNrSG9Y3wkBzqUDe20CjbeAHMq490xlkGQeg9BAByTvn9uOU7ym3mMUnkKR
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_TLS_VERIFY=1
    restart: on-failure
    depends_on:
      - jenkins
    volumes:
      - jenkins-docker-certs:/certs/client:ro
      - jenkins_slave_docker_workdir:/home/jenkins:z
      - jenkins_slave_docker:/home/jenkins/.jenkins
  docker:
    container_name: docker
    networks:
      - harbor
    image: docker:dind
    command: ["--insecure-registry=proxy:8080"]
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - jenkins-docker-certs:/certs/client
      - jenkins_slave_docker_workdir:/home/jenkins:z
    privileged: true
volumes:
  jenkins_master_home:
  jenkins_slave_docker:
  jenkins-docker-certs:
  jenkins_slave_docker_workdir:
The error is as follows:
ERROR: Service "docker" uses an undefined network "harbor"
Everything is correct!
You need to define the harbor network in your docker-compose file. It may be just a simple bridge, which docker-compose will create automatically on your behalf, or you can define it as an "external" network in case it already exists:
networks:
  harbor:
    external:
      name: harbor
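For completeness, a minimal sketch of the bridge variant mentioned above, where Compose creates the network itself:
networks:
  harbor:
    driver: bridge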

Getting Segmentation Fault on running hyperledger/explorer in docker container

I am getting a segmentation fault, and Docker exits with code 139, when running the hyperledger-explorer Docker image.
The docker-compose file for creating explorer-db:
version: "2.1"
volumes:
data:
walletstore:
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorerdb.mynetwork.com:
image: hyperledger/explorer-db:V1.0.0
container_name: explorerdb.mynetwork.com
hostname: explorerdb.mynetwork.com
restart: always
ports:
- 54320:5432
environment:
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWORD=password
healthcheck:
test: "pg_isready -h localhost -p 5432 -q -U postgres"
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/var/lib/postgresql/data
networks:
mynetwork.com:
aliases:
- postgresdb
pgadmin:
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: user#domain.com
PGADMIN_DEFAULT_PASSWORD: SuperSecret
PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION: "True"
# PGADMIN_CONFIG_LOGIN_BANNER: "Authorized Users Only!"
PGADMIN_CONFIG_CONSOLE_LOG_LEVEL: 10
volumes:
- "pgadmin_4:/var/lib/pgadmin"
ports:
- 8080:80
networks:
- mynetwork.com
The docker-compose-explorer file:
version: "2.1"
volumes:
data:
walletstore:
external: true
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorer.mynetwork.com:
image: hyperledger/explorer:V1.0.0
container_name: explorer.mynetwork.com
hostname: explorer.mynetwork.com
# restart: always
environment:
- DATABASE_HOST=xx.xxx.xxx.xxx
#Host is VM IP address with ports exposed for postgres. No issues here
- DATABASE_PORT=54320
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWD=password
- LOG_LEVEL_APP=debug
- LOG_LEVEL_DB=debug
- LOG_LEVEL_CONSOLE=info
# - LOG_CONSOLE_STDOUT=true
- DISCOVERY_AS_LOCALHOST=false
volumes:
- ./config.json:/opt/explorer/app/platform/fabric/config.json
- ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
- ./examples/net1/crypto:/tmp/crypto
- walletstore:/opt/wallet
- ./crypto-config/:/etc/data
command: sh -c "node /opt/explorer/main.js && tail -f /dev/null"
ports:
- 6060:6060
networks:
- mynetwork.com
error
Attaching to explorer.mynetwork.com
explorer.mynetwork.com | Segmentation fault
explorer.mynetwork.com exited with code 139
Postgres is working fine. Docker is updated to the latest version.
Fabric network being used is generated inside IBM Blockchain VS Code extension.
I too faced the same problem with the Docker images, but I had success with a manual start.sh, just not with the Docker image. After some exploration, I came to learn that this is related to the architecture the image was built for: there seems to be a segmentation fault issue in the latest v1.0.0 container image.
This has been fixed on the latest master branch, but the fix is not yet released on Docker Hub.
Please build the Explorer container image yourself using build_docker_image.sh locally for the time being.
(from the HLF forum)
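For reference, a hypothetical sketch of that local build, assuming the upstream hyperledger/blockchain-explorer repository contains the script at its root:
# clone the Explorer sources and build the image locally
git clone https://github.com/hyperledger/blockchain-explorer.git
cd blockchain-explorer
./build_docker_image.sh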
Okay!! So I did some testing and found that if Docker is set to run on Windows login, Explorer throws the segmentation fault error, but if I manually start Docker after Windows login, it works well. Strange!!

docker-compose: zipkin cannot connect to elasticsearch

I am trying to set up Zipkin, Elasticsearch, Prometheus, and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is this one:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this information:
105 ^[[35mdependencies_zipkin |^[[0m 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one
^[[31mzipkin |^[[0m java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port: I was using port 9200.
The error between Zipkin and ES disappears, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
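A minimal sketch of the corrected service, per that answer:
zipkin:
  image: openzipkin/zipkin
  environment:
    - STORAGE_TYPE=elasticsearch
    - ES_HOSTS=http://storage:9200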
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the usage of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose.
To work this out, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
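A plausible explanation (my reading, not confirmed in the thread): the openzipkin/zipkin-dependencies image is a one-shot batch job that aggregates a day of traces and then exits, so without crond -f the container stops as soon as the job finishes, or fails outright if storage isn't ready yet. Running crond in the foreground keeps the container alive and lets cron re-run the aggregation on a schedule, which appears to be how the upstream docker-zipkin examples run it.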

ElasticSearch container won't start up in Docker

I'm attempting to run this script on Windows 10 to configure everything.
All containers except the elastic container are initialized correctly;
Elastic times out and then exits with code 124.
https://imgur.com/a/FO8ckwc (some log outputs)
I'm running this script without having touched anything except the Windows-side ports (you can see the comments):
https://pastebin.com/7Z8Gnenr
version: '3.1'
# Generated on 23-04-2018
services:
  alfresco:
    image: openmbeeguest/mms-repo:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseConcMarkSweepGC"
    depends_on:
      - postgresql
      - activemq
      - elastic
    networks:
      - internal
    ports:
      - 8080:8080
    volumes:
      - alf_logs:/usr/local/tomcat/logs
      - alf_data:/opt/alf_data
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  solr:
    image: openmbeeguest/mms-solr:3.2.4-SNAPSHOT
    environment:
      CATALINA_OPTS: "-Xmx1G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"
    depends_on:
      - alfresco
    networks:
      - internal
    volumes:
      - solr_logs:/usr/local/tomcat/logs/
      - solr_content_store:/opt/solr/ContentStore
    tmpfs:
      - /tmp
      - /usr/local/tomcat/temp/
      - /usr/local/tomcat/work/
  activemq:
    image: openmbeeguest/mms-activemq:3.2.4-SNAPSHOT
    ports:
      # I changed these Windows-side ports
      - 61615:61616
      - 61617:61614
      - 8162:8161
      # ORIGINAL
      #- 61616:61616
      #- 61614:61614
      #- 8161:8161
    volumes:
      - activemq-data-volume:/data/activemq
      - activemq-log-volume:/var/log/activemq
      - activemq-conf-volume:/opt/activemq/conf
    environment:
      - ACTIVEMQ_ADMIN_LOGIN=admin
      - ACTIVEMQ_ADMIN_PASSWORD=admin
    networks:
      - internal
  elastic:
    image: openmbeeguest/mms-elastic:3.2.4-SNAPSHOT
    environment:
      CLEAN: 'false'
    ports:
      - 9200:9200
    volumes:
      - elastic-data-volume:/usr/share/elasticsearch/data
    networks:
      - internal
  postgresql:
    image: openmbeeguest/mms-postgres:3.2.4-SNAPSHOT
    volumes:
      - pgsql_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=alfresco
      - POSTGRES_PASSWORD=alfresco
      - POSTGRES_DB=alfresco
    networks:
      - internal
volumes:
  alf_logs:
  alf_data:
  solr_logs:
  solr_content_store:
  pgsql_data:
  activemq-data-volume:
  activemq-log-volume:
  activemq-conf-volume:
  elastic-data-volume:
  nginx-external-volume:
networks:
  internal:
Any help would be greatly appreciated!
Do you have the logs from the elasticsearch container to share? Without that it's hard to tell why it's exiting.
One thing that's tripped me up repeatedly, though, is the vm.max_map_count setting: the default in Docker is too low for Elasticsearch to function, so it's a good first thing to check.
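A quick sketch of both checks (the service is named elastic in the compose file; note that on Docker Desktop for Windows the sysctl must be applied inside Docker's Linux VM, not in Windows itself):
# dump the exited container's logs via its service name
docker-compose logs elastic
# check the host's current limit; Elasticsearch needs at least 262144
sysctl vm.max_map_count
sudo sysctl -w vm.max_map_count=262144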
