I'm trying to set up a CI/CD build environment with Docker Compose.
I have a Jenkins container, a SonarQube container and an Archiva container. The problem is that Jenkins cannot connect to SonarQube or Archiva.
I tried linking the containers together and joining them to the same network, but still no success.
In Jenkins, I get the following error:
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8081 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
This is my docker-compose file.
version: '2'
volumes:
data-jenkins:
driver: 'local'
data-postgres:
driver: 'local'
data-sonarqube-conf:
driver: 'local'
data-sonarqube-data:
driver: 'local'
data-archiva:
driver: 'local'
services:
jenkins:
image: 'jenkins'
ports:
- '8080:8080'
restart: 'always'
volumes:
- 'data-jenkins:/var/jenkins_home'
links:
- 'sonarqube:sonarqube'
postgres:
image: 'postgres:9.6.1'
environment:
- 'POSTGRES_USER=postgres'
- 'POSTGRES_PASSWORD=postgres'
ports:
- '5432:5432'
restart: 'always'
volumes:
- 'data-postgres:/var/lib/postgresql/data'
sonarqube:
image: 'sonarqube'
depends_on:
- 'postgres'
ports:
- '9000:9000'
links:
- 'postgres:postgres'
environment:
- 'SONARQUBE_JDBC_URL=jdbc:postgresql://postgres:5432/'
- 'SONARQUBE_JDBC_USERNAME=postgres'
- 'SONARQUBE_JDBC_PASSWORD=postgres'
volumes:
- 'data-sonarqube-data:/var/lib/sonarqube/data'
- 'data-sonarqube-conf:/var/lib/sonarqube/conf'
archiva:
image: 'xetusoss/archiva'
ports:
- '8081:8080'
volumes:
- 'data-archiva:/var/archiva'
environment:
- 'SSL_ENABLED=false'
It seems the Jenkins container is living in a separate environment. Does anyone know how I can join all the environments together? I've been struggling with this problem for almost a week now.
To reference your sonarqube container from Jenkins, use sonarqube:9000; Docker will resolve the service name sonarqube to that container's IP.
I would also recommend using user-defined networks rather than links to connect your containers.
Your stack trace shows Jenkins connecting to localhost:8081, but inside a container localhost refers to the container itself, so the request never leaves Jenkins; addressing the service name routes the request to the right container.
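For example, a minimal sketch of the networks approach (the network name ci-net is my own placeholder, and your volumes/ports are omitted for brevity):
version: '2'
networks:
  ci-net:
    driver: bridge
services:
  jenkins:
    image: 'jenkins'
    networks:
      - ci-net
  sonarqube:
    image: 'sonarqube'
    networks:
      - ci-net
  archiva:
    image: 'xetusoss/archiva'
    networks:
      - ci-net
With this, Jenkins reaches SonarQube at http://sonarqube:9000 and Archiva at http://archiva:8080 (the container port, not the host-mapped 8081).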
I am trying to set up a Learning Locker server within Docker (on Windows 10, Docker using WSL for emulation) using the repo from michzimney. This service is composed of several Docker containers (Mongo, Redis, NGINX, etc) networked together. Using the provided docker-compose.yml file I have been able to set up the service and access it from localhost, but I cannot access the server from any machine on the rest of my home network.
This is a specific case, but some guidance will be valuable, as I am very new to Docker and will need to build many such environments in the future, for now on Windows but later in Docker on Synology, where the services can be accessed from the network and the internet.
My research has led me to user-defined bridging using docker -p [hostip]:80:80, but this didn't work for me. I have also turned off the Windows firewall, since that seems to cause a host of issues for some, but still no effect. I tried to bridge my virtual switch manager for WSL using the Windows 10 Hyper-V manager, and bridging the WSL connector to the LAN using basic Windows 10 networking, but neither worked, and the latter forced me to reset my network.
So the first question is: is this a Windows networking issue or a Docker configuration issue?
The second question, assuming it's a Docker configuration issue, is: how can I modify the following YAML file to make the service accessible to the outside network?
version: '2'
services:
mongo:
image: mongo:3.4
restart: unless-stopped
volumes:
- "${DATA_LOCATION}/mongo:/data/db"
redis:
image: redis:4-alpine
restart: unless-stopped
xapi:
image: learninglocker/xapi-service:2.1.10
restart: unless-stopped
environment:
- MONGO_URL=mongodb://mongo:27017/learninglocker_v2
- MONGO_DB=learninglocker_v2
- REDIS_URL=redis://redis:6379/0
depends_on:
- mongo
- redis
volumes:
- "${DATA_LOCATION}/xapi-storage:/usr/src/app/storage"
api:
image: michzimny/learninglocker2-app:${DOCKER_TAG}
environment:
- DOMAIN_NAME
- APP_SECRET
- SMTP_HOST
- SMTP_PORT
- SMTP_SECURED
- SMTP_USER
- SMTP_PASS
command: "node api/dist/server"
restart: unless-stopped
depends_on:
- mongo
- redis
volumes:
- "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
ui:
image: michzimny/learninglocker2-app:${DOCKER_TAG}
environment:
- DOMAIN_NAME
- APP_SECRET
- SMTP_HOST
- SMTP_PORT
- SMTP_SECURED
- SMTP_USER
- SMTP_PASS
command: "./entrypoint-ui.sh"
restart: unless-stopped
depends_on:
- mongo
- redis
- api
volumes:
- "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
- "${DATA_LOCATION}/ui-logs:/opt/learninglocker/logs"
worker:
image: michzimny/learninglocker2-app:${DOCKER_TAG}
environment:
- DOMAIN_NAME
- APP_SECRET
- SMTP_HOST
- SMTP_PORT
- SMTP_SECURED
- SMTP_USER
- SMTP_PASS
command: "node worker/dist/server"
restart: unless-stopped
depends_on:
- mongo
- redis
volumes:
- "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
nginx:
image: michzimny/learninglocker2-nginx:${DOCKER_TAG}
environment:
- DOMAIN_NAME
restart: unless-stopped
depends_on:
- ui
- xapi
ports:
- "443:443"
- "80:80"
So far I have attempted to change the ports option to the following:
ports:
- "192.168.1.102:443:443"
- "192.168.1.102:80:80"
But then the container wasn't even accessible from the host machine anymore. I also tried adding network_mode: host under the nginx service, but the build failed, saying it was not compatible with port mapping. Do I need to set network_mode: host for every service, or is the problem something else entirely?
Any help is appreciated.
By the looks of your docker-compose.yml, you are exposing ports 80 & 443 to your host (the Windows machine). So, if your Windows IP is 192.168.1.102, you should be able to reach http://192.168.1.102 and https://192.168.1.102 on your LAN if nothing is blocking them (firewall, etc.).
You can confirm that you are indeed listening on those ports by running netstat -a and checking whether they show up as LISTENING.
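For example (a sketch; filtering with findstr is just one way to narrow the output on Windows):
netstat -an | findstr ":80"
netstat -an | findstr ":443"
Lines such as TCP 0.0.0.0:80 ... LISTENING indicate Docker is forwarding the port on all host interfaces.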
I am launching containers via docker-compose, but 2 out of 3 containers are failing with: "exec user process caused "exec format error""
The error occurs while executing the file at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is, the container exits almost immediately, so how can I access this file to make the necessary changes?
Below is the docker-compose.yml I am using:
version: '3'
volumes:
whatsappMedia:
driver: local
postgresData:
driver: local
services:
db:
image: postgres:10.6
command: "-p 3306 -N 500"
restart: always
environment:
POSTGRES_PASSWORD: testpass
POSTGRES_USER: root
expose:
- "33060"
ports:
- "33060:3306"
volumes:
- postgresData:/var/lib/postgresql/data
network_mode: bridge
wacore:
image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
volumes:
- whatsappMedia:/usr/local/wamedia
env_file:
- db.env
environment:
# This is the version of the docker templates being used to run WhatsApp Business API
WA_RUNNING_ENV_VERSION: v2.2.3
ORCHESTRATION: DOCKER-COMPOSE
depends_on:
- "db"
network_mode: bridge
links:
- db
waweb:
image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
ports:
- "9090:443"
volumes:
- whatsappMedia:/usr/local/wamedia
env_file:
- db.env
environment:
WACORE_HOSTNAME: wacore
# This is the version of the docker templates being used to run WhatsApp Business API
WA_RUNNING_ENV_VERSION: v2.2.3
ORCHESTRATION: DOCKER-COMPOSE
depends_on:
- "db"
- "wacore"
links:
- db
- wacore
network_mode: bridge
The problem got resolved by using a 64-bit guest OS image.
I was running this container on 32-bit CentOS, which was causing the error.
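If you need to confirm such a mismatch, a quick sketch (the image tag below is the example version from the compose file, not necessarily yours):
# host architecture: i686/i386 means 32-bit, x86_64 means 64-bit
uname -m
# architecture the image was built for, e.g. amd64
docker image inspect --format '{{.Architecture}}' docker.whatsapp.biz/coreapp:v2.31.4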
Using Spring Cloud Stream 2.1.4 with Spring Boot 2.1.10, I'm trying to target a local instance of Kafka.
This is an extract of my project configuration so far:
spring.kafka.bootstrap-servers=PLAINTEXT://localhost:9092
spring.kafka.streams.bootstrap-servers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.binder.zkNodes=localhost:2181
spring.cloud.stream.kafka.streams.binder.brokers=PLAINTEXT://localhost:9092
spring.cloud.stream.kafka.streams.binder.zkNodes=localhost:2181
But the binder keeps calling the wrong target:
java.io.IOException: Can't resolve address: kafka.example.com:9092
How can I specify the target if those properties won't do the trick?
Also, I deploy the Kafka instance through a Bitnami Docker image, and I'd prefer not to use an SSL configuration (hence the PLAINTEXT protocol), but I can't find properties for a basic credentials login. Does anyone know if this is hopeless?
This is my docker-compose.yml
version: '3'
services:
zookeeper:
image: bitnami/zookeeper:latest
container_name: zookeeper
environment:
- ZOO_ENABLE_AUTH=yes
- ZOO_SERVER_USERS=kafka
- ZOO_SERVER_PASSWORDS=kafka_password
networks:
- kafka-net
kafka:
image: bitnami/kafka:latest
container_name: kafka
hostname: kafka.example.com
depends_on:
- zookeeper
ports:
- 9092:9092
environment:
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_ZOOKEEPER_USER=kafka
- KAFKA_ZOOKEEPER_PASSWORD=kafka_password
networks:
- kafka-net
networks:
kafka-net:
driver: bridge
Thanks in advance
The hostname isn't the issue; rather, it's the advertised listeners protocol://host:port mapping, which causes the hostname to be advertised by default. You should change that, rather than the hostname.
kafka:
image: bitnami/kafka:latest
container_name: kafka
hostname: kafka.example.com # <--- Here's what you are getting in the request
...
environment:
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://:9092 # <--- This returns the hostname to the clients
If you plan on running your code outside of another container, you should advertise localhost in addition to, or instead of, the container hostname.
One year later, my comment still hasn't been merged into the Bitnami README, but I was able to get it working with the following vars (changed to match your deployment):
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_CFG_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka.example.com:29092,PLAINTEXT_HOST://localhost:9092
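With that listener split, an app running on the Docker host points at the PLAINTEXT_HOST listener, for example (a sketch reusing the binder property from the question):
spring.cloud.stream.kafka.binder.brokers=localhost:9092
while other containers on the same Docker network would use kafka.example.com:29092.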
All right: I got this to work by looking twice at the "dockerfile" (thanks to cricket_007):
kafka:
...
hostname: localhost
For the record: I could get rid of all the properties above, since Kafka's default is localhost:9092.
I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'
services:
peer-1:
image: ipfs/ipfs-cluster:latest
ports:
- 8080:8080
- 4001:4001
- 5001:5001
volumes:
- ./cluster/peer1/config:/data/ipfs-cluster
peer-2:
image: ipfs/ipfs-cluster:latest
ports:
- 8081:8080
- 4002:4001
- 5002:5001
volumes:
- ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers, I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with the problem.
Is there any proper documentation about how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer (i.e. ipfs/go-ipfs) inside it. We need to run that separately.
So, in order to run a multi-node (2-node in this case) IPFS cluster in a Docker environment, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one corresponding to each peer.
So your docker-compose file will look as follows:
version: '3'
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
services:
ipfs0:
container_name: ipfs0
image: ipfs/go-ipfs
ports:
- "4001:4001"
- "5001:5001"
- "8081:8080"
volumes:
- ./var/ipfs0-docker-data:/data/ipfs/
- ./var/ipfs0-docker-staging:/export
networks:
vpcbr:
ipv4_address: 10.5.0.5
ipfs1:
container_name: ipfs1
image: ipfs/go-ipfs
ports:
- "4101:4001"
- "5101:5001"
- "8181:8080"
volumes:
- ./var/ipfs1-docker-data:/data/ipfs/
- ./var/ipfs1-docker-staging:/export
networks:
vpcbr:
ipv4_address: 10.5.0.7
ipfs-cluster0:
container_name: ipfs-cluster0
image: ipfs/ipfs-cluster
depends_on:
- ipfs0
environment:
CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
IPFS_API: /ip4/10.5.0.5/tcp/5001
ports:
- "9094:9094"
- "9095:9095"
- "9096:9096"
volumes:
- ./var/ipfs-cluster0:/data/ipfs-cluster/
networks:
vpcbr:
ipv4_address: 10.5.0.6
ipfs-cluster1:
container_name: ipfs-cluster1
image: ipfs/ipfs-cluster
depends_on:
- ipfs1
- ipfs-cluster0
environment:
CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
IPFS_API: /ip4/10.5.0.7/tcp/5001
ports:
- "9194:9094"
- "9195:9095"
- "9196:9096"
volumes:
- ./var/ipfs-cluster1:/data/ipfs-cluster/
networks:
vpcbr:
ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide the IPFS_API to ipfs-cluster as an environment variable so that each cluster container knows its corresponding peer, and both ipfs-cluster containers need the same CLUSTER_SECRET.
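One common way to generate such a 32-byte hex secret (a sketch; the ipfs-cluster docs show a similar command, so verify there):
# generate a random 32-byte hex string for CLUSTER_SECRET
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')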
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v :/data/ipfs-cluster to docker run.
If in fact you need to connect to another service within the docker-compose setup, you can simply refer to it by its service name: hostname entries are created in all the containers in the docker-compose project, so services can talk to each other by name instead of by IP.
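For example, an untested sketch (assuming ipfs-cluster accepts a /dns4 multiaddr) that replaces the static address with the service name:
ipfs-cluster0:
  environment:
    # service-name DNS instead of the static 10.5.0.5 address
    IPFS_API: /dns4/ipfs0/tcp/5001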
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port mapping): https://docs.docker.com/compose/compose-file/#network_mode
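A minimal sketch of that option applied to one of the services above:
services:
  ipfs0:
    image: ipfs/go-ipfs
    network_mode: "host"
    # note: no "ports:" section, since host networking is incompatible with port mappings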
I have a simple Spring Cloud project, and it contains 4 services:
config:8888
registry(eureka):8761
gateway(zuul):8080
service-1:9527
The project has no problem when deployed on localhost;
I can successfully fetch service-1's API through Zuul without Docker:
http://localhost:8080/service-1/test
But when I deploy it with Docker,
it throws this error: Caused by: java.lang.RuntimeException: org.apache.http.conn.HttpHostConnectException: Connect to registry:9527 [registry/172.21.0.4] failed: Connection refused (Connection refused)
I can only fetch service-1's API directly:
http://localhost:9527/test
PS: the two services (gateway, service-1) have been successfully registered with Eureka.
Here is my docker-compose.yml:
version: '3'
services:
config:
build: ./config
ports:
- "8888:8888"
registry:
build: ./registry
ports:
- "8761:8761"
depends_on:
- config
environment:
- SPRING_PROFILES_ACTIVE=prd
gateway:
build: ./gateway
depends_on:
- config
links:
- registry
- service-1
ports:
- "8080:8080"
environment:
- SPRING_PROFILES_ACTIVE=prd
service-1:
build: ./service-1
ports:
- "9527:9527"
depends_on:
- config
links:
- registry
environment:
- SPRING_PROFILES_ACTIVE=prd
Can anyone help me?
I have solved this issue: I forgot to add @EnableDiscoveryClient on the gateway's main class, and to override the Eureka instance hostname.
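For anyone hitting the same thing, a minimal sketch of that fix (the class name and property values are my assumptions, not from the original post):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient  // without this, the gateway never consults Eureka for service locations
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
For the hostname part, the prd profile would typically set eureka.instance.prefer-ip-address=true (or an explicit eureka.instance.hostname) so that Eureka advertises an address other containers can actually reach.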