I am getting a segmentation fault and Docker exits with code 139 when running the hyperledger-explorer Docker image.
docker-compose file for creating explorer-db:
version: "2.1"
volumes:
data:
walletstore:
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorerdb.mynetwork.com:
image: hyperledger/explorer-db:V1.0.0
container_name: explorerdb.mynetwork.com
hostname: explorerdb.mynetwork.com
restart: always
ports:
- 54320:5432
environment:
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWORD=password
healthcheck:
test: "pg_isready -h localhost -p 5432 -q -U postgres"
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/var/lib/postgresql/data
networks:
mynetwork.com:
aliases:
- postgresdb
pgadmin:
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: user#domain.com
PGADMIN_DEFAULT_PASSWORD: SuperSecret
PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION: "True"
# PGADMIN_CONFIG_LOGIN_BANNER: "Authorized Users Only!"
PGADMIN_CONFIG_CONSOLE_LOG_LEVEL: 10
volumes:
- "pgadmin_4:/var/lib/pgadmin"
ports:
- 8080:80
networks:
- mynetwork.com
docker-compose-explorer file:
version: "2.1"
volumes:
data:
walletstore:
external: true
pgadmin_4:
external: true
networks:
mynetwork.com:
external:
name: bikeblockchain_network
services:
explorer.mynetwork.com:
image: hyperledger/explorer:V1.0.0
container_name: explorer.mynetwork.com
hostname: explorer.mynetwork.com
# restart: always
environment:
- DATABASE_HOST=xx.xxx.xxx.xxx
#Host is VM IP address with ports exposed for postgres. No issues here
- DATABASE_PORT=54320
- DATABASE_DATABASE=fabricexplorer
- DATABASE_USERNAME=hppoc
- DATABASE_PASSWD=password
- LOG_LEVEL_APP=debug
- LOG_LEVEL_DB=debug
- LOG_LEVEL_CONSOLE=info
# - LOG_CONSOLE_STDOUT=true
- DISCOVERY_AS_LOCALHOST=false
volumes:
- ./config.json:/opt/explorer/app/platform/fabric/config.json
- ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
- ./examples/net1/crypto:/tmp/crypto
- walletstore:/opt/wallet
- ./crypto-config/:/etc/data
command: sh -c "node /opt/explorer/main.js && tail -f /dev/null"
ports:
- 6060:6060
networks:
- mynetwork.com
error
Attaching to explorer.mynetwork.com
explorer.mynetwork.com | Segmentation fault
explorer.mynetwork.com exited with code 139
Postgres is working fine. Docker is updated to the latest version.
The Fabric network being used was generated with the IBM Blockchain VS Code extension.
I faced the same problem with the Docker images, though a manual start.sh worked for me; only the Docker image failed. After some exploration, I learned this is related to the architecture the image was built for: there seems to be a segmentation fault issue in the latest v1.0.0 container image.
This is fixed on the latest master branch, but not yet released on Docker Hub.
Please build the Explorer container image yourself using build_docker_image.sh locally for the time being.
(from the HLF forum)
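For reference, building the image locally can look something like this (a sketch; the repository URL is an assumption based on the Hyperledger Explorer project, and build_docker_image.sh is the script named above):

# hypothetical local build of the Explorer image from master
git clone https://github.com/hyperledger/blockchain-explorer.git
cd blockchain-explorer
./build_docker_image.sh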
Okay!! So I did some testing and found that if Docker is set to run on Windows login, Explorer throws the segmentation fault error, but if I manually start Docker after logging in to Windows, it works well. Strange!!
Related
[docker-compose question]
Hello all! I've been stuck on this for a while, so hopefully we can debug together.
I'm using docker-compose to bring up three separate services.
Everything builds and comes up great. The health check for the app passes, and the services make contact with each other, but I can't seem to curl my app from the host.
I've tried the following values for app.ports:
"127.0.0.1:3000:3000"
"3000:3000"
"0.0.0.0:3000:3000"
I've also tried running this with a "host" network, but that didn't seem to work either, and I'd rather avoid it because it is apparently not supported on Docker for Mac, and my local development environment is macOS. The prod server is Ubuntu.
I've also tried defining the default bridge network explicitly:
networks:
  default:
    driver: bridge
Here is my docker-compose.yml
version: "2.4"
services:
rabbitmq:
image: rabbitmq
volumes:
- ${ML_FILE_PATH}/taskqueue/config/:/etc/rabbitmq/
environment:
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
celery-worker:
image: ${ML_IMAGE_NAME}
entrypoint: "celery --broker='amqp://<user>:<password>#rabbitmq:5672//' -A taskqueue.celeryapp worker --uid 1111"
runtime: ${RUNTIME} ## either "runc" if running locally on debug mode or "nvidia" on production with multi processors
volumes:
- ${ML_FILE_PATH}:/host
depends_on:
- rabbitmq
- app
environment:
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
MPLCONFIGDIR: /host/tmp
volumes:
- ${ML_FILE_PATH}:/host
celery-beat:
image: ${ML_IMAGE_NAME}
entrypoint: "celery --broker='amqp://<user>:<password>#rabbitmq:5672//' -A taskqueue.celeryapp beat --uid 1111"
runtime: ${RUNTIME} ## either "runc" if running locally on debug mode or "nvidia" on production with multi processors
depends_on:
- rabbitmq
- app
environment:
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
MPLCONFIGDIR: /host/tmp
volumes:
- ${ML_FILE_PATH}:/host
app:
build: .
entrypoint: ${ML_ENTRYPOINT} # just starts a flask app
image: ${ML_IMAGE_NAME}
ports:
- "3000:3000"
expose:
- "3000"
volumes:
- ${ML_FILE_PATH}:/host
restart: always
runtime: ${RUNTIME}
healthcheck:
test: ["CMD", "curl", "http:/localhost:3000/?requestType=health-check"]
start_period: 30s
interval: 30s
timeout: 5s
environment:
SCHEDULER: "off"
TZ: "UTC"
LC_ALL: "C.UTF-8"
LANG: "C.UTF-8"
I can hit the service from within the container as expected.
I'm not sure what I'm missing. Thanks so much for any help!
I'm not sure, but I don't think you can route traffic from the host to containers on macOS.
https://docs.docker.com/desktop/mac/networking/
This ended up being mostly unrelated to docker-compose.
My Flask app was binding to 127.0.0.1; I needed to start it as an externally visible server.
I just had to add --host=0.0.0.0 to my start script.
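For example, if the start script launches the app with the Flask CLI, the change might look like this (a sketch; the exact start command is an assumption):

# before: Flask binds to 127.0.0.1, unreachable through the published port
flask run --port=3000
# after: bind to all interfaces so the host can reach it via -p 3000:3000
flask run --host=0.0.0.0 --port=3000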
I am launching containers via docker-compose, but 2 out of 3 containers fail with: "exec user process caused "exec format error"".
The error occurs while executing a file located at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is that the container exits almost immediately, so how do I access this file to make the necessary changes?
Below is the docker-compose.yml I am using:
version: '3'

volumes:
  whatsappMedia:
    driver: local
  postgresData:
    driver: local

services:
  db:
    image: postgres:10.6
    command: "-p 3306 -N 500"
    restart: always
    environment:
      POSTGRES_PASSWORD: testpass
      POSTGRES_USER: root
    expose:
      - "33060"
    ports:
      - "33060:3306"
    volumes:
      - postgresData:/var/lib/postgresql/data
    network_mode: bridge

  wacore:
    image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
    network_mode: bridge
    links:
      - db

  waweb:
    image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    ports:
      - "9090:443"
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      WACORE_HOSTNAME: wacore
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
      - "wacore"
    links:
      - db
      - wacore
    network_mode: bridge
The problem was resolved by using a 64-bit guest OS image.
I was running the containers on 32-bit CentOS, which was causing the error.
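A quick way to check whether the host and an image agree on architecture (a sketch; the coreapp tag uses the example version from the compose file above):

# host architecture: x86_64 means 64-bit, i686/i386 means 32-bit
uname -m
# platform the image was built for
docker image inspect docker.whatsapp.biz/coreapp:v2.31.4 --format '{{.Os}}/{{.Architecture}}'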
Hi Stack Overflow fellows,
I am facing an issue while running docker-compose up, which brings up Jenkins locally. The complete docker-compose file is as follows.
version: '2.3'

services:
  jenkins:
    container_name: jenkins
    build: ./master
    image: jenkins_casc
    environment:
      - CASC_JENKINS_CONFIG=/var/jenkins_casc/jenkins.yaml
      - SECRETS=/var/jenkins_casc/secrets
    ports:
      - "8080:8080"
    volumes:
      - jenkins_master_home:/var/jenkins_home

  jenkins_slave_docker:
    container_name: jenkins_agent_docker
    build: ./agent
    image: jenkins_agent_docker
    init: true
    environment:
      - JENKINS_AGENT_SSH_PUBKEY=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0xJ5n9MY0PFBR/aCHSb8JBQgbIUo0C/bPlaxM9v0uCT2CQJvNyrHUfJKaM9wJsdT7wdKBUIvhODdfoE7kc59j0WpO5TQ5Q2MeG7fpQAalM0ATwv/o7hCTvWev5gpJPSsIg9N/+VusO2R4V1H7LpZm65hHL/0lt9SmvtZzQBR+lt5IhrliEMZpo1UdNql/ueR6Em3mFW/tJvprBD445xTa0kxACGXdMI3nF2+SF49oXhTPjNFKSJilWDsoWzf9swyIf1vbH6zr3slMm7jUvOSCC3gGcqNrSG9Y3wkBzqUDe20CjbeAHMq490xlkGQeg9BAByTvn9uOU7ym3mMUnkKR
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_TLS_VERIFY=1
    restart: on-failure
    depends_on:
      - jenkins
    volumes:
      - jenkins-docker-certs:/certs/client:ro
      - jenkins_slave_docker_workdir:/home/jenkins:z
      - jenkins_slave_docker:/home/jenkins/.jenkins

  docker:
    container_name: docker
    networks:
      - harbor
    image: docker:dind
    command: ["--insecure-registry=proxy:8080"]
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - jenkins-docker-certs:/certs/client
      - jenkins_slave_docker_workdir:/home/jenkins:z
    privileged: true

volumes:
  jenkins_master_home:
  jenkins_slave_docker:
  jenkins-docker-certs:
  jenkins_slave_docker_workdir:
The error is as follows:
ERROR: Service "docker" uses an undefined network "harbor"
Everything is correct!
You need to define the harbor network in your docker-compose file. It may be just a simple bridge network, which docker-compose will create automatically on your behalf, or you can define it as an external network if it already exists:
networks:
  harbor:
    external:
      name: harbor
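Alternatively, if the network does not exist yet and docker-compose should create it for you, a plain bridge definition is enough (a sketch):

networks:
  harbor:
    driver: bridge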
I'm seeing a weird issue where basic docker-compose commands like ps and down time out.
$ docker-compose ps
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
There's no reason why this process should take anywhere near 60 seconds; normally it takes less than ten.
I found a Stack Overflow post where docker ps hangs forever after a server restart, but docker ps seems to work just fine for me, so I think it's specifically related to docker-compose. I also found some other instances of the same error on Docker Mall, but without a solution, and on Medium, where the only advice is to increase the timeout, which doesn't help me.
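For reference, the timeout mentioned in the error message is controlled by an environment variable, so raising it looks like this (a sketch; it only masks the slowness rather than fixing it):

# one-off
COMPOSE_HTTP_TIMEOUT=120 docker-compose ps
# or for the whole shell session
export COMPOSE_HTTP_TIMEOUT=120
docker-compose ps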
Here's my docker-compose file:
---
version: '3.7'

services:
  assets:
    build:
      context: .
      args:
        NPM_TOKEN: "${NPM_TOKEN}"
        IS_LCL: "TRUE"
    container_name: foobar_assets
    volumes:
      - .:/app:delegated
      - /app/node_modules
      - "${MESSAGING_PATH:-./node_modules/@foobar/baz}:/app/local_modules/@foobar/baz"
    ports:
      - "4005:4005"
      # - "8888:8888"
    env_file:
      - .env
      - .env-overrides
    healthcheck:
      test: curl -f http://localhost:4005/assets-manifest.json && echo 'assets are ready!'
      interval: 2s
      timeout: 1s
      retries: 100
    entrypoint: ['./rsync-entrypoint.sh']
    command: ['/usr/local/bin/npm', 'run', 'dev:assets']
    init: true

  bff:
    build:
      context: .
      args:
        NPM_TOKEN: "${NPM_TOKEN}"
        IS_LCL: "TRUE"
    volumes:
      - .:/app:delegated
      - /app/node_modules
    container_name: foobar_bff
    links:
      - redis
    ports:
      - "4010:4010"
      - "9231:9231"
    env_file:
      - .env
      - .env-overrides
    depends_on:
      - assets
    entrypoint: ['./rsync-entrypoint.sh']
    command: ['/usr/local/bin/npm', 'run', 'dev:server']
    init: true

  redis:
    image: redis:2.8
    container_name: foobar_redis

networks:
  default:
    external:
      name: lcl.foobar.io
I'm running Docker for Mac 2.1.0.3 and have tried restarting it, as well as my entire Mac. This solves the problem temporarily, but then it recurs.
I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have some services, containing for example elasticsearch, rabbitmq, and a webapp.
My problem is that a service cannot access another service by its hostname, because docker-compose does not put all the service hosts in the /etc/hosts file. I don't know their IPs, because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not put all the services' hostnames into each service node's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
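A quick way to verify name resolution from inside one of the containers is something like this (a sketch; it assumes the my_webapp image ships getent or ping):

# does the peer's service name resolve inside the webapp container?
docker-compose exec my_webapp getent hosts elasticsearch1
# or, if getent is unavailable in the image
docker-compose exec my_webapp ping -c 1 elasticsearch1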
Here is my docker-compose.yml:
version: '2'

services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp

  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp

  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp

  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp

  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp

  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp

  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp

networks:
  webapp:
    driver: bridge
This is how I know they can't communicate with each other:
I get this error during elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
If the container expects the hostname to be available immediately when it starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ...
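A minimal sketch of such an entrypoint, assuming the image has a POSIX shell and getent (the script name is hypothetical):

#!/bin/sh
# wait-for-peers.sh: block until the peer hostnames resolve, then start elasticsearch
for host in elasticsearch1 elasticsearch2 elasticsearch3; do
  until getent hosts "$host" > /dev/null 2>&1; do
    echo "waiting for $host to resolve..."
    sleep 2
  done
done
exec elasticsearch "$@"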