How to set Zookeeper dataDir in Docker (fig.yml) - docker

I've configured Zookeeper and Kafka containers in a fig.yml file for Docker. Both containers start fine. But after sending a number of messages, my application /zk-client hangs. On checking zookeeper logs, I see the error:
Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
My fig.yml is as follows:
zookeeper:
  image: wurstmeister/zookeeper
  ports:
    - "2181:2181"
  environment:
    ZK_ADVERTISED_HOST_NAME: xx.xx.x.xxx
    ZK_CONNECTION_TIMEOUT_MS: 6000
    ZK_SYNC_TIME_MS: 2000
    ZK_DATADIR: /path/to/data/zk/data/dir
kafka:
  image: wurstmeister/kafka:0.8.2.0
  ports:
    - "xx.xx.x.xxx:9092:9092"
  links:
    - zookeeper:zk
  environment:
    KAFKA_ADVERTISED_HOST_NAME: xx.xx.x.xxx
    KAFKA_LOG_DIRS: /home/svc_cis4/dl
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I've searched for quite a while now, but I haven't got a solution yet. I've also tried setting the data directory in fig.yml using ZK_DATADIR: '/path/to/zk/data/dir' but it doesn't seem to help. Any assistance will be appreciated.
UPDATE
Content of /opt/kafka_2.10-0.8.2.0/config/server.properties:
broker.id=0
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000

The problem you are having is not related to Zookeeper's data directory. The error Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers means your application cannot find any broker znode in Zookeeper's data, most likely because the Kafka container is not connecting correctly to Zookeeper. Looking at wurstmeister's images, I suspect the variable KAFKA_ADVERTISED_HOST_NAME is wrong. I don't know if there is a reason to set that value through an environment variable that has to be passed in, but from my point of view this is not a good approach. There are multiple ways to configure Kafka (in fact there is no need to set advertised.host.name at all: you can leave it commented out and Kafka will take the default hostname, which can be set with Docker). A fast solution along those lines is to edit start-kafka.sh and rebuild the image:
#!/bin/bash
if [[ -z "$KAFKA_ADVERTISED_PORT" ]]; then
    export KAFKA_ADVERTISED_PORT=$(docker port `hostname` 9092 | sed -r "s/.*:(.*)/\1/g")
fi
if [[ -z "$KAFKA_BROKER_ID" ]]; then
    export KAFKA_BROKER_ID=$KAFKA_ADVERTISED_PORT
fi
if [[ -z "$KAFKA_LOG_DIRS" ]]; then
    export KAFKA_LOG_DIRS="/kafka/kafka-logs-$KAFKA_BROKER_ID"
fi
if [[ -z "$KAFKA_ZOOKEEPER_CONNECT" ]]; then
    export KAFKA_ZOOKEEPER_CONNECT=$(env | grep ZK.*PORT_2181_TCP= | sed -e 's|.*tcp://||' | paste -sd ,)
fi
if [[ -n "$KAFKA_HEAP_OPTS" ]]; then
    sed -r -i "s/^(export KAFKA_HEAP_OPTS)=\"(.*)\"/\1=\"$KAFKA_HEAP_OPTS\"/g" $KAFKA_HOME/bin/kafka-server-start.sh
    unset KAFKA_HEAP_OPTS
fi
for VAR in `env`
do
    if [[ $VAR =~ ^KAFKA_ && ! $VAR =~ ^KAFKA_HOME ]]; then
        kafka_name=`echo "$VAR" | sed -r "s/KAFKA_(.*)=.*/\1/g" | tr '[:upper:]' '[:lower:]' | tr _ .`
        env_var=`echo "$VAR" | sed -r "s/(.*)=.*/\1/g"`
        if egrep -q "(^|^#)$kafka_name=" $KAFKA_HOME/config/server.properties; then
            sed -r -i "s#(^|^#)($kafka_name)=(.*)#\2=${!env_var}#g" $KAFKA_HOME/config/server.properties # note that no config values may contain an '#' char
        else
            echo "$kafka_name=${!env_var}" >> $KAFKA_HOME/config/server.properties
        fi
    fi
done
###NEW###
IP=$(hostname --ip-address)
sed -i -e "s/^advertised.host.name.*/advertised.host.name=$IP/" $KAFKA_HOME/config/server.properties
###END###
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
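If you prefer not to patch the script, another route (a sketch, untested with fig) is to leave advertised.host.name unset and give the container a fixed hostname instead, which Kafka then falls back to:
kafka:
  image: wurstmeister/kafka:0.8.2.0
  hostname: kafka    # with advertised.host.name unset, Kafka advertises the container hostname
  links:
    - zookeeper:zk
  # note: clients outside Docker must be able to resolve "kafka" for this to work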
If this doesn't solve your problem, you can get more information by starting a session inside the containers (e.g. docker exec -it kafkadocker_kafka_1 /bin/bash for Kafka's and docker exec -it kafkadocker_zookeeper_1 /bin/bash for Zookeeper's), and there check the Kafka logs or the Zookeeper console (/opt/zookeeper-3.4.6/bin/zkCli.sh).
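For example, from inside the Zookeeper container you can check whether any broker actually registered (the znode paths below are the Kafka defaults):
/opt/zookeeper-3.4.6/bin/zkCli.sh -server localhost:2181
# then, inside the zkCli shell:
ls /brokers/ids      # should list at least one broker id, e.g. [0]
get /brokers/ids/0   # shows the host/port the broker advertised
If ls /brokers/ids returns an empty list, the broker never connected, which matches the NoNode error above.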

The configuration that's been working for me without any issues for the last two days involves specifying host addresses for both Zookeeper and Kafka. My fig.yml content is:
zookeeper:
  image: wurstmeister/zookeeper
  ports:
    - "xx.xx.x.xxx:2181:2181"
kafka:
  image: wurstmeister/kafka:0.8.2.0
  ports:
    - "9092:9092"
  links:
    - zookeeper:zk
  environment:
    KAFKA_ADVERTISED_HOST_NAME: xx.xx.x.xxx
    KAFKA_NUM_REPLICA_FETCHERS: 4
    ...other env variables...
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
validator:
  build: .
  volumes:
    - .:/host
  entrypoint: /bin/bash
  command: -c 'java -jar /host/app1.jar'
  links:
    - zookeeper:zk
    - kafka
analytics:
  build: .
  volumes:
    - .:/host
  entrypoint: /bin/bash
  command: -c 'java -jar /host/app2.jar'
  links:
    - zookeeper:zk
    - kafka
loader:
  build: .
  volumes:
    - .:/host
  entrypoint: /bin/bash
  command: -c 'java -jar /host/app3.jar'
  links:
    - zookeeper:zk
    - kafka
And the accompanying Dockerfile content:
FROM ubuntu:trusty
MAINTAINER Wurstmeister
RUN apt-get update; apt-get install -y unzip openjdk-7-jdk wget git docker.io
RUN wget -q http://apache.mirrors.lucidnetworks.net/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz -O /tmp/kafka_2.10-0.8.2.0.tgz
RUN tar xfz /tmp/kafka_2.10-0.8.2.0.tgz -C /opt
VOLUME ["/kafka"]
ENV KAFKA_HOME /opt/kafka_2.10-0.8.2.0
ADD start-kafka.sh /usr/bin/start-kafka.sh
ADD broker-list.sh /usr/bin/broker-list.sh
CMD start-kafka.sh
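With the links above, the app containers reach the services through the link aliases (zk and kafka) rather than hard-coded IPs. A quick way to confirm the aliases resolve from inside one of the app containers (a sketch; the exact container name depends on your project directory, "myproject" here is hypothetical):
docker exec -it myproject_validator_1 /bin/bash -c 'getent hosts zk kafka'
# prints one line per alias with the linked container's IP, e.g.:
# 172.17.0.2    zk
# 172.17.0.3    kafka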

Related

How to launch Docker Compose and run curl together from dockerfile

I have a problem writing a Dockerfile for my application. My code is below:
# define the base image
FROM ubuntu:latest
# define the image owner
LABEL maintainer="MyCompany"
# update the image packages
RUN apt-get update && apt-get upgrade -y
# Expose port 80
EXPOSE 8089
# Command to start my docker compose file
CMD ["docker-compose -f compose.yaml up -d"]
# Command to link KafkaConnect with MySql (images in docker compose file)
CMD ["curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json"
localhost:8083/connectors/ -d "
{ \"name\": \"inventory-connector\",
\"config\": {
\"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\",
\"tasks.max\": \"1\",
\"database.hostname\": \"mysql\",
\"database.port\": \"3306\",
\"database.user\": \"debezium\",
\"database.password\": \"dbz\",
\"database.server.id\": \"184054\",
\"database.server.name\": \"dbserver1\",
\"database.include.list\": \"inventory\",
\"database.history.kafka.bootstrap.servers\": \"kafka:9092\",
\"database.history.kafka.topic\": \"dbhistory.inventory\"
}
}"]
I know there can only be one CMD inside a Dockerfile.
How do I run my compose file and then make a cURL call?
You need to use the RUN command for this. Check this answer for the difference between RUN and CMD.
If your second CMD is the final command inside the Dockerfile, then just change the following line:
# Command to start my docker compose file
RUN docker-compose -f compose.yaml up -d
If you have more commands to run after the CMDs you have now, try the below:
# Command to start my docker compose file
RUN docker-compose -f compose.yaml up -d
# Command to link KafkaConnect with MySql (images in docker compose file)
RUN curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
    localhost:8083/connectors/ -d "\
    { \"name\": \"inventory-connector\", \
    \"config\": { \
    \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\", \
    \"tasks.max\": \"1\", \
    \"database.hostname\": \"mysql\", \
    \"database.port\": \"3306\", \
    \"database.user\": \"debezium\", \
    \"database.password\": \"dbz\", \
    \"database.server.id\": \"184054\", \
    \"database.server.name\": \"dbserver1\", \
    \"database.include.list\": \"inventory\", \
    \"database.history.kafka.bootstrap.servers\": \"kafka:9092\", \
    \"database.history.kafka.topic\": \"dbhistory.inventory\" \
    } \
    }"
# To set your ENTRYPOINT at the end of the file, uncomment the following line
# ENTRYPOINT ["some-other-command-you-need", "arg1", "arg2"]
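As an aside, if you do want both steps to happen when the container starts (CMD runs at container start, not at build time), the usual workaround for the one-CMD limit is chaining everything into a single shell command. A sketch (the /connector.json payload file and the fixed 30-second wait are illustrative assumptions):
# assumes the JSON payload was baked in with: COPY connector.json /connector.json
CMD ["bash", "-c", "docker-compose -f compose.yaml up -d && sleep 30 && curl -i -X POST -H 'Accept:application/json' -H 'Content-Type:application/json' localhost:8083/connectors/ -d @/connector.json"]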
Here's another option - run the curl from within the Kafka Connect container that you're creating. It looks something like this:
kafka-connect:
  image: confluentinc/cp-kafka-connect-base:6.2.0
  container_name: kafka-connect
  depends_on:
    - broker
  ports:
    - 8083:8083
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
    CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: _kafka-connect-configs
    CONNECT_OFFSET_STORAGE_TOPIC: _kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: _kafka-connect-status
    CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
    CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
    CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
  command:
    - bash
    - -c
    - |
      #
      echo "Installing connector plugins"
      confluent-hub install --no-prompt debezium/debezium-connector-mysql:1.5.0
      #
      echo "Launching Kafka Connect worker"
      /etc/confluent/docker/run &
      #
      echo "Waiting for Kafka Connect to start listening on localhost ⏳"
      while : ; do
        curl_status=$$(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors)
        echo -e $$(date) " Kafka Connect listener HTTP state: " $$curl_status " (waiting for 200)"
        if [ $$curl_status -eq 200 ] ; then
          break
        fi
        sleep 5
      done
      echo -e "\n--\n+> Creating Data Generator source"
      curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/connectors/inventory-connector/config \
        -d '{
          "connector.class": "io.debezium.connector.mysql.MySqlConnector",
          "tasks.max": "1",
          "database.hostname": "mysql",
          "database.port": "3306",
          "database.user": "debezium",
          "database.password": "dbz",
          "database.server.id": "184054",
          "database.server.name": "dbserver1",
          "database.include.list": "inventory",
          "database.history.kafka.bootstrap.servers": "kafka:9092",
          "database.history.kafka.topic": "dbhistory.inventory"
        }'
      sleep infinity
You can see the full Docker Compose here
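Once the worker is up, you can confirm the connector was created and is healthy through the same REST API, e.g.:
curl -s http://localhost:8083/connectors/inventory-connector/status
# look for "state": "RUNNING" in both the connector and the task entries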

Find url/ip of container running in docker-compose in gitlab ci

I have an application that runs in docker-compose (for acceptance testing). The acceptance tests work locally, but they require the host (or ip) of the webservice container running in docker-compose in order to send requests to it. This works fine locally, but I cannot find the ip of the container when it is running in a gitlab ci server. I've tried the following few solutions (all of which work when running locally, but none of which work in gitlab ci) to find the url of the container running in docker-compose in gitlab ci server:
use "docker" as the host. This works for an application running in docker, but not docker-compose
use docker-inspect to find the ip of the container (docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' reading-comprehension)
assign a static ip to the container using a network in docker-compose.yml (latest attempt).
The gitlab ci file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/.gitlab-ci.yml
image: connorbutch/gradle-and-java-11:alpha
variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: "overlay2"
before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
services:
  - docker:stable-dind
stages:
  - build
  - docker_build
  - acceptance_test
unit_test:
  stage: build
  script: ./gradlew check
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle
build:
  stage: build
  script:
    - ./gradlew clean quarkusBuild
    - ./gradlew clean build -Dquarkus.package.type=native -Dquarkus.native.container-build=true
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle
  artifacts:
    paths:
      - reading-comprehension-server-quarkus-impl/build/
docker_build:
  stage: docker_build
  script:
    - cd reading-comprehension-server-quarkus-impl
    - docker build -f infrastructure/Dockerfile -t registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA .
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA
acceptance_test:
  stage: acceptance_test
  only:
    - merge_requests
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd reading-comprehension-server-quarkus-impl/infrastructure
    - export IMAGE_TAG=$CI_COMMIT_SHORT_SHA
    - docker-compose up -d & ../../wait-for-it-2.sh
    - cd ../..
    - ./gradlew -DBASE_URL='192.168.0.8' acceptanceTest
  artifacts:
    paths:
      - reading-comprehension/reading-comprehension-server-quarkus-impl/build/
The docker-compose file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/reading-comprehension-server-quarkus-impl/infrastructure/docker-compose.yml
Find the output of one of the failed jobs here:
https://gitlab.com/connorbutch/reading-comprehension/-/jobs/734771859
#This file is NOT ever intended for use in production. Docker-compose is a great tool for running
#database with our application for acceptance testing.
version: '3.3'
networks:
  network:
    ipam:
      driver: default
      config:
        - subnet: 192.168.0.0/24
services:
  db:
    image: mysql:5.7.10
    container_name: "db"
    restart: always
    environment:
      MYSQL_DATABASE: "rc"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "password"
      MYSQL_ROOT_PASSWORD: "password"
      MYSQL_ROOT_HOST: "%"
    networks:
      network:
        ipv4_address: 192.168.0.4
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - db:/var/lib/mysql
  reading-comprehension-ws:
    image: "registry.gitlab.com/connorbutch/reading-comprehension:${IMAGE_TAG}"
    container_name: "reading-comprehension"
    restart: on-failure
    environment:
      WAIT_HOSTS: "db:3306"
      DB_USER: "user"
      DB_PASSWORD: "password"
      DB_JDBC_URL: "jdbc:mysql://192.168.0.4:3306/rc"
    networks:
      network:
        ipv4_address: 192.168.0.8
    ports:
      - 8080:8080
    expose:
      - 8080
volumes:
  db:
Does anyone have any idea on how to access the ip of the container running in docker-compose on gitlab ci server? Any suggestions are welcome.
Thanks,
Connor
This is a little bit tricky; just a few days ago I had a similar problem, but with a VPN from CI to a client :)
EDIT: Solution for on-premise gitlab instances
Create a custom network for GitLab runners:
docker network create --subnet=172.16.0.0/28 \
--opt com.docker.network.bridge.name=gitlab-runners \
--opt com.docker.network.bridge.enable_icc=true \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
--opt com.docker.network.bridge.host_binding_ipv4=0.0.0.0 \
--opt com.docker.network.driver.mtu=9001 gitlab-runners
Attach the new network to the gitlab-runners:
# /etc/gitlab-runner/config.toml
[[runners]]
  ....
  [runners.docker]
    ....
    network_mode = "gitlab-runners"
Restart the runners.
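On a typical package-based install (an assumption about your runner host), that is just:
sudo gitlab-runner restart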
And finally the gitlab-ci.yml:
start-vpn:
  stage: prepare-deploy
  image: docker:stable
  cache: {}
  variables:
    GIT_STRATEGY: none
  script:
    - >
      docker run -it -d --rm
      --name vpn-branch-$CI_COMMIT_REF_NAME
      --privileged
      --net gitlab-runners
      -e VPNADDR=$VPN_SERVER
      -e VPNUSER=$VPN_USER
      -e VPNPASS=$VPN_PASSWORD
      auchandirect/forticlient || true && sleep 2
    - >
      docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
      vpn-branch-$CI_COMMIT_REF_NAME > vpn_container_ip
  artifacts:
    paths:
      - vpn_container_ip
And in the next step you can use something like:
before_script:
  - ip route add 10.230.163.0/24 via $(cat vpn_container_ip) # prod/dev
  - ip route add 10.230.164.0/24 via $(cat vpn_container_ip) # test
EDIT: Solution for gitlab.com
Based on a gitlab issue answer, port mapping in DinD is a bit different from a non-DinD gitlab-runner, and for exposed ports you should use the hostname 'docker'.
Example:
services:
  - docker:stable-dind
variables:
  DOCKER_HOST: "tcp://docker:2375"
stages:
  - test
test env:
  image: tmaier/docker-compose:latest
  stage: test
  script:
    # containous/whoami with exposed port 80:80
    - docker-compose up -d
    - apk --no-cache add curl
    - curl docker:80 # <-------
    - docker-compose down
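For completeness, the docker-compose.yml the comment above refers to would look something like this (a sketch; the original file isn't shown in the answer):
version: "3"
services:
  whoami:
    image: containous/whoami
    ports:
      - "80:80"   # the port must be published (not just exposed) to be reachable via the "docker" hostname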
I'm using docker and not docker-compose, and the solution above doesn't work for me.
I am using my own image based on node in which I install docker & buildx like this:
ARG NODE_VER=lts-alpine
FROM node:${NODE_VER}
ARG BUILDX_VERSION=0.5.1
ARG DOCKER_VERSION=20.10.6
ARG BUILDX_ARCH=linux-arm-v7
RUN apk --no-cache add curl
# install docker
RUN curl -SL "https://download.docker.com/linux/static/stable/armhf/docker-${DOCKER_VERSION}.tgz" | \
tar -xz --strip-components 1 --directory /usr/local/bin/
COPY docker/modprobe.sh /usr/local/bin/modprobe
# replace node entrypoint by docker one /!\
COPY docker/docker-entrypoint.sh /usr/local/bin/
ENV DOCKER_TLS_CERTDIR=/certs
RUN mkdir /certs /certs/client && chmod 1777 /certs /certs/client
# download buildx
RUN mkdir -p /usr/lib/docker/cli-plugins \
&& curl -L \
--output /usr/lib/docker/cli-plugins/docker-buildx \
"https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.${BUILDX_ARCH}"
RUN chmod a+x /usr/lib/docker/cli-plugins/docker-buildx
RUN mkdir -p /etc/docker && echo '{"experimental": true}' > /usr/lib/docker/config.json
My gitlab-ci.yml contains:
image: myimageabove
variables:
  DOCKER_DRIVER: overlay2
  PLATFORMS: linux/arm/v7
  IMAGE_NAME: ${CI_PROJECT_NAME}
  TAG: ${CI_COMMIT_BRANCH}-latest
  REGISTRY: registry.gitlab.com
  REGISTRY_ROOT: mygroup
  WEBSOCKETD_VER: 0.4.1
  # DOCKER_GATEWAY_HOST: 172.17.0.1
  DOCKER_GATEWAY_HOST: docker
services:
  - docker:dind
before_script:
  - docker info
build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" ${REGISTRY}
    - docker buildx create --use
    - docker buildx build --platform $PLATFORMS --tag "${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}" --push .
test:
  stage: test
  variables:
    WSD_DIR: /tmp/websocketd
    WSD_FILE: /tmp/websocketd/websocketd
  cache:
    key: websocketd
    paths:
      - ${WSD_DIR}
  before_script:
    # download websocketd and put in cache if needed
    - if [ ! -f ${WSD_FILE} ]; then
        mkdir -p ${WSD_DIR};
        curl -o ${WSD_FILE}.zip -L "https://github.com/joewalnes/websocketd/releases/download/v${WEBSOCKETD_VER}/websocketd-${WEBSOCKETD_VER}-linux_arm.zip";
        unzip -o ${WSD_FILE}.zip websocketd -d ${WSD_DIR};
        chmod 755 ${WSD_FILE};
      fi;
    - mkdir /home/pier
    - cp -R ./test/resources/* /home/pier
    # get websocketd from cache
    - cp ${WSD_FILE} /home/pier/Admin/websocketd
    # setup envt variables
    - JWT_KEY=$(cat /home/pier/Services/Secrets/WEBSOCKETD_KEY)
    # - DOCKER_GATEWAY_HOST=$(/sbin/ip route|awk '/default/ { print $3 }')
    # - DOCKER_GATEWAY_HOST=$(hostname)
    - ENVT="-e BASE_URL=/ -e JWT_KEY=$JWT_KEY -e WEBSOCKETD_KEY=$JWT_KEY -e WEBSOCKET_URL=ws://${DOCKER_GATEWAY_HOST:-host.docker.internal}:8088 -e SERVICES_DIR=/home/pier/Services"
    - VOLUMES='-v /tmp:/config -v /home/pier/Services:/services -v /etc/wireguard:/etc/wireguard'
  script:
    # start websocketd
    - /home/pier/start.sh &
    # start docker pier admin
    - docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
    # run postman tests
    - newman run https://api.getpostman.com/collections/${POSTMAN_COLLECTION_UID}?apikey=${POSTMAN_API_KEY}
deploy:
  stage: deploy
  script:
    # just push to docker hub
    - docker login -u "$DOCKERHUB_REGISTRY_USER" -p "$DOCKERHUB_REGISTRY_PASSWORD" ${DOCKERHUB}
    - docker buildx build --platform $PLATFORMS --tag "${DOCKERHUB}/mygroup/${IMAGE}:${TAG}" --push .
When I run this, the build job works alright, then the test "before_script" works but when the script starts, I get the following trace:
# <= this starts the websocketd server locally on port 8088 =>
$ /home/pier/start.sh &
# <= this starts the image I just built which should connect to the above websocketd server =>
$ docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
# <= trace of the websocketd server start with url ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/ =>
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Serving using application : ./websocket-server.py
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Starting WebSocket server : ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/
# <= trace of the image start saying it tries to connect to the websocketd server =>
Websocket connecting to ws://docker:8088 ...
Listen on port 4000
# <= trace with ENOTFOUND on "docker" address =>
websocket connection failed: Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
/pier/storage/websocket-client.js:52
throw err;
^
Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I tried other ways like:
websocket connection failed: Error: getaddrinfo ENOTFOUND host.docker.internal
websocket connection failed: Error: connect ETIMEDOUT 172.17.0.1:8088 # <= same error when trying $(/sbin/ip route|awk '/default/ { print $3 }') =>
websocket connection failed: Error: getaddrinfo ENOTFOUND runner-meuessxe-project-22314059-concurrent-0 # using $(hostname)
Out of new ideas...
Would greatly appreciate any help on that one.

Container Scanning feature does not work for multiple images

I've successfully set up the Container Scanning feature from GitLab for a single Docker image. Now I'd like to scan yet another image using the same CI/CD configuration in .gitlab-ci.yml.
Problem
It looks like it is not possible to have multiple Container Scanning reports on the Merge Request detail page.
The following screenshot shows the result of both Container Scanning jobs in the configuration below.
We scan two Docker images, both of which have CVEs to be reported:
iojs:1.6.3-slim (355 vulnerabilities)
golang:1.3 (1139 vulnerabilities)
Expected result
The Container Scanning report would show a total of 1494 vulnerabilities (355 + 1139). Currently it looks like only the results for the golang image are being included.
Relevant parts of the configuration
container_scanning_first_image:
  script:
    - docker pull golang:1.3
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-first-image.json
container_scanning_second_image:
  script:
    - docker pull iojs:1.6.3-slim
    - ./clair-scanner -c http://docker:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
Full configuration for reference
image: docker:stable
stages:
  - scan
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
container_scanning_first_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull golang:1.3
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-first-image.json -l clair.log golang:1.3 || true
  artifacts:
    paths:
      - gl-container-scanning-report-first-image.json
    reports:
      container_scanning: gl-container-scanning-report-first-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED
container_scanning_second_image:
  stage: scan
  variables:
    GIT_STRATEGY: none
    DOCKER_SERVICE: docker
    DOCKER_HOST: tcp://${DOCKER_SERVICE}:2375/
    CLAIR_LOCAL_SCAN_VERSION: v2.0.8_fe9b059d930314b54c78f75afe265955faf4fdc1
    NO_PROXY: ${DOCKER_SERVICE},localhost
  allow_failure: true
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run -d --name db arminc/clair-db:latest
    - docker run -p 6060:6060 --link db:postgres -d --name clair --restart on-failure arminc/clair-local-scan:${CLAIR_LOCAL_SCAN_VERSION}
    - apk add -U wget ca-certificates
    - docker pull iojs:1.6.3-slim
    - wget https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    - mv clair-scanner_linux_amd64 clair-scanner
    - chmod +x clair-scanner
    - touch clair-whitelist.yml
    - retries=0
    - echo "Waiting for clair daemon to start"
    - while( ! wget -T 10 -q -O /dev/null http://${DOCKER_SERVICE}:6060/v1/namespaces ) ; do sleep 1 ; echo -n "." ; if [ $retries -eq 10 ] ; then echo " Timeout, aborting." ; exit 1 ; fi ; retries=$(($retries+1)) ; done
    - ./clair-scanner -c http://${DOCKER_SERVICE}:6060 --ip $(hostname -i) -r gl-container-scanning-report-second-image.json -l clair.log iojs:1.6.3-slim || true
  artifacts:
    paths:
      - gl-container-scanning-report-second-image.json
    reports:
      container_scanning: gl-container-scanning-report-second-image.json
  dependencies: []
  only:
    refs:
      - branches
    variables:
      - $GITLAB_FEATURES =~ /\bcontainer_scanning\b/
  except:
    variables:
      - $CONTAINER_SCANNING_DISABLED
Question
How should the GitLab Container Scanning feature be configured in order to be able to report the results of two Docker images?

Cross-OS compatible way to map user in docker

Introduction
I am setting up a project where we try to use docker for everything.
It's a PHP (Symfony) + npm project. We have a working and battle-tested docker-compose.yaml (we have been using this setup for more than a year on several projects).
But to make it more friendly for developers, I came up with a bin-docker folder which, using direnv, is placed first in the user's PATH:
/.envrc:
export PATH="$(pwd)/bin-docker:$PATH"
The folder contains files that are supposed to shadow the host bin files with in-docker ones:
❯ tree bin-docker
bin-docker
├── _tty.sh
├── composer
├── npm
├── php
└── php-xdebug
E.g. the php file contains:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
if [ $(docker-compose ps php | grep Up | wc -l) -gt 0 ]; then
    docker_compose_exec \
        --workdir=/src${PWD:${#PROJECT_ROOT}} \
        php php "$@"
else
    docker_compose_run \
        --entrypoint=/usr/local/bin/php \
        --workdir=/src${PWD:${#PROJECT_ROOT}} \
        php "$@"
fi
npm:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
docker_run --init \
    --entrypoint=npm \
    -v "$PROJECT_ROOT":"$PROJECT_ROOT" \
    -w "$(pwd)" \
    -u "$(id -u "$USER"):$(id -g "$USER")" \
    mangoweb/mango-cli:v2.3.2 "$@"
It works great: you can simply use Symfony's bin/console and it will "magically" run in the Docker container.
The problem
The only problem, and my question, is how to properly map the host user to the container's user, and to do so for all major OSes (macOS, Windows (WSL), Linux), because our developers use all of them. I will talk about npm, because it uses a public image anyone can download.
When I do not map the user at all, on Linux the files created in the mounted volume are owned by root, and users have to chmod them afterwards. Not ideal at all.
When I use -u "$(id -u "$USER"):$(id -g "$USER")", it breaks because the in-container user no longer has the rights to create the cache folder inside the container; also, on macOS the standard UID is 501, which breaks everything.
What is the proper way to map the user, or is there any other better way to do any part of this setup?
Attachments:
docker-compose.yaml (shortened to drop sensitive or unimportant details):
version: '2.4'
x-php-service-base: &php-service-base
  restart: on-failure
  depends_on:
    - redis
    - elasticsearch
  working_dir: /src
  volumes:
    - .:/src:cached
  environment:
    APP_ENV: dev
    SESSION_STORE_URI: tcp://redis:6379
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    environment:
      discovery.type: single-node
      xpack.security.enabled: "false"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    environment:
      SERVER_NAME: localhost
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch
  redis:
    image: redis:4.0.8-alpine
  php:
    <<: *php-service-base
    image: custom-php-image:7.2
  php-xdebug:
    <<: *php-service-base
    image: custom-php-image-with-xdebug:7.2
  nginx:
    image: custom-nginx-image
    restart: on-failure
    depends_on:
      - php
      - php-xdebug
_tty.sh: Only to properly pass tty status to docker run
if [ -t 1 ]; then
    DC_INTERACTIVITY=""
else
    DC_INTERACTIVITY="-T"
fi
function docker_run {
    if [ -t 1 ]; then
        docker run --rm --interactive --tty=true "$@"
    else
        docker run --rm --interactive --tty=false "$@"
    fi
}
function docker_compose_run {
    docker-compose run --rm $DC_INTERACTIVITY "$@"
}
function docker_compose_exec {
    docker-compose exec $DC_INTERACTIVITY "$@"
}
This may answer your problem.
I came across a tutorial on how to set up user namespaces in Ubuntu. Note the use case in the tutorial is using nvidia-docker and restricting permissions. In particular, Dr. Kinghorn states in his post:
The main idea of a user-namespace is that a process's UID (user ID) and GID (group ID) can be different inside and outside of a container's namespace. The significant consequence of this is that a container can have its root process mapped to a non-privileged user ID on the host.
Which sounds like what you're looking for. Hope this helps.
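For reference, turning on user-namespace remapping on a Linux Docker host comes down to two pieces (a minimal sketch of the stock Docker feature; names and ranges shown are the defaults, adjust to your setup):
# /etc/docker/daemon.json -- tell the daemon to remap container users
{
  "userns-remap": "default"
}
# "default" creates and uses a "dockremap" user; its subordinate ID ranges
# live in /etc/subuid and /etc/subgid, e.g.:
#   dockremap:100000:65536
# then restart the daemon:
sudo systemctl restart docker
With this enabled, root (UID 0) inside a container maps to an unprivileged host UID (100000 above), so files written to bind mounts are no longer owned by the host's real root.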

docker compose Logstash - specify config file and install plugin?

I'm trying to copy my Logstash config and install a plugin at the same time. I've tried multiple methods thus far to no avail; Logstash exits with errors every time.
this fails:
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  command: bash -c bin/logstash-plugin install logstash-filter-translate
this fails:
command: logstash -f /etc/logstash/conf.d/logstash.conf bash -c bin/logstash-plugin install logstash-filter-translate
this fails:
command: logstash -f /etc/logstash/conf.d/logstash.conf && bash -c bin/logstash-plugin install logstash-filter-translate
this also fails
command: bash -c logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate
I'm having no luck here, and I bet the answer is simple... can anyone point me in the right direction?
Thanks
I use the image I have locally with the config below, and then it works fine. Hope it helps.
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
Sample output
logstash_1 | [2017-12-06T15:27:29,120][WARN ][logstash.agent ] stopping pipeline {:id=>".monitoring-logstash"}
logstash_1 | Validating logstash-filter-translate
logstash_1 | Installing logstash-filter-translate
Try this if it's an Ubuntu image:
command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
If it's an Alpine image, use:
command: sh -c "command to run"
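A cleaner alternative (a sketch, assuming the stock Logstash image layout) is to install the plugin at build time in a small custom image, so the container only has to start Logstash:
# Dockerfile
FROM docker.elastic.co/logstash/logstash:5.6.3
RUN bin/logstash-plugin install logstash-filter-translate
and in docker-compose:
logstash:
  build: .
  command: logstash -f /etc/logstash/conf.d/logstash.conf
This way the plugin survives container restarts and the command line stays simple.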
