docker compose Logstash - specify config file and install plugin?

I'm trying to copy my Logstash config and install a plugin at the same time. I've tried multiple methods so far to no avail; Logstash exits with errors every time.
This fails:

logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  command: bash -c bin/logstash-plugin install logstash-filter-translate

This fails:

command: logstash -f /etc/logstash/conf.d/logstash.conf bash -c bin/logstash-plugin install logstash-filter-translate

This fails:

command: logstash -f /etc/logstash/conf.d/logstash.conf && bash -c bin/logstash-plugin install logstash-filter-translate

This also fails:

command: bash -c logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate
I'm having no luck here and I bet the answer is simple... can anyone point me in the right direction?
Thanks

I use the image I have locally with the config below, and it works fine. Hope it helps.
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"
Sample output
logstash_1 | [2017-12-06T15:27:29,120][WARN ][logstash.agent ] stopping pipeline {:id=>".monitoring-logstash"}
logstash_1 | Validating logstash-filter-translate
logstash_1 | Installing logstash-filter-translate
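Note that with && the plugin only installs after Logstash exits, which matches the output above (the pipeline stops, then the install runs). If the pipeline itself needs the translate filter at startup, a variation that installs the plugin before starting Logstash may be what you actually want. A sketch, not tested against this exact image:

version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.6.3
    command: bash -c "bin/logstash-plugin install logstash-filter-translate && logstash -f /etc/logstash/conf.d/logstash.conf"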

Try this if it's an Ubuntu-based image:

command: bash -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"

If it's an Alpine-based image, use sh instead:

command: sh -c "command to run"
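For example, the same one-liner on an Alpine-based image would be (a sketch, using the same config path and plugin as above):

command: sh -c "logstash -f /etc/logstash/conf.d/logstash.conf && bin/logstash-plugin install logstash-filter-translate"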

Related

Gitlab CI job with specific user

I am trying to run a GitLab CI job that uses Anchore Engine to scan a Docker image. A command in the script section fails with a permission denied error; I found out the command requires root permissions. sudo is not installed in the Docker image I'm using as the runner image, and only the non-sudo user anchore exists in the container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" == "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section:

name: anchore/anchore-engine:latest
entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']

but it did not allow me to create a user. I have also tried running the container on Linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash, and it ran without any problem. How can I get the same behaviour in a GitLab CI job?
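Not from the original thread, but one workaround worth sketching (an assumption based on the Docker executor taking the container user from the image itself): publish a thin wrapper image that switches back to root, and point the job's image: at it.

# Hypothetical wrapper Dockerfile: the upstream image drops to the
# 'anchore' user, so switch back to root for CI use
FROM anchore/anchore-engine:latest
USER root

Build and push this (e.g. as my-registry/anchore-engine-root, a hypothetical name) and reference it in the job's image: section instead of the upstream image.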

How to pass arguments for a configuration file in JupyterHub's deployment?

I want to install envkey in my Docker image, which requires a key-value pair. I have the key-value pair with me, but I am unable to figure out how to install it in my Docker image using those arguments and then deploy the image on JupyterHub.
I tried reading other deployments of mine that use envkey. Here is how it goes:
1. I have a Makefile and I run the command sudo make dev config=aviral.cfg
2. The dev command in the Makefile is as follows:
dev:
	docker build -t $(IMAGE) -f Dockerfile.dev . && docker tag $(IMAGE) $(ALIAS)
	#echo "\nCreate docker container.."
	CONFIG=$(config) IMAGE=$(IMAGE) docker-compose -f docker-compose.yml up -d --scale test=0 --scale airflow_worker=0
	#echo "\n$(GREEN)Done.$(NO_COLOR)\n"
	#echo "Try airflow at http://localhost:8080."
	#echo "and flower at http://localhost:5555."
The docker-compose file is:
airflow_worker:
  image: ${IMAGE}:latest
  restart: always
  depends_on:
    - airflow_scheduler
  # ports:
  #   - 8793:8793
  # environment:
  #   - GOOGLE_APPLICATION_CREDENTIALS=/gcloud/cloud.json
  env_file:
    - ${CONFIG}
  command: worker
As you can see, the env_file is passed in.
I am unable to work out how to do the same in JupyterHub.
The Helm chart is here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz. And my config is:
proxy:
  secretToken: "yada_yada"
singleuser:
  image:
    name: yada_yada.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: yada_yada.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral@yada_yada.com
    password: yada_yada
In my config file, I pass variables as:

ENVKEY=my_personal_envkey

I expect my configs to be passed into the Docker image, or perhaps I need to write a proper Makefile for this stuff. As of now, I am facing this error:
Step 19/32 : RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
---> Running in 35bc1cf0e1c8
envkey-source 1.2.9 Quick Install
Copyright (c) 2019 Envkey Inc. - MIT License
https://github.com/envkey/envkey-source
Downloading envkey-source binary for linux-amd64
Downloading tarball from https://github.com/envkey/envkey-source/releases/download/v1.2.9/envkey-source_1.2.9_linux_amd64.tar.gz
envkey-source is installed in /usr/local/bin
Installation complete. Info:
bash: line 97: 29 Segmentation fault envkey-source -h
The command '/bin/sh -c curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash' returned a non-zero code: 139
This question alone should be enough to give you the picture, but for the sake of context, here are some related questions:
1. How do I make JupyterHub access my private Docker image repository?
2. Unable to run a lifecycle command from config.yaml while deploying JupyterHub
3. How to have a file written automatically in the startup folder when a new user signs up/in on JupyterHub?
You probably get this error because the install.sh script tries to put the envkey-source binary under the /usr/local/bin directory and then tries to run envkey-source -h, which fails. Check whether the user (if non-root) has permission to do that, and whether the /usr/local/bin directory exists in the container image.
Hope it helps!
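A quick way to check both things from outside (a sketch; the image name and tag are the ones from the config above):

docker run --rm --entrypoint /bin/sh yada_yada.dkr.ecr.ap-south-1.amazonaws.com/demo:12h \
  -c 'id && ls -ld /usr/local/bin'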

Build FAILED but job status is SUCCESS in Gitlab

My Dockerfile:
FROM mm_php:7.1
ADD ./docker/test/source/entrypoint.sh /work/entrypoint.sh
ADD ./docker/wait-for-it.sh /work/wait-for-it.sh
RUN chmod 755 /work/entrypoint.sh \
&& chmod 755 /work/wait-for-it.sh
ENTRYPOINT ["/work/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash -e
/work/wait-for-it.sh db:5432 -- echo "PostgreSQL started"
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests
docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
      args:
        ssh_prv_key: ${ssh_prv_key}
        application_env: ${application_env}
      dockerfile: docker/test/source/Dockerfile
    links:
      - db
  db:
    build:
      context: .
      dockerfile: docker/test/postgres/Dockerfile
    environment:
      PGDATA: /tmp
.gitlab-ci.yml:
image: docker:latest
services:
  - name: docker:dind
    command: ["--insecure-registry=my.domain:5000 --registry-mirror=http://my.domain"]
before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
All works well. But if the tests fail, the status of the job in GitLab is SUCCESS instead of FAILED.
How do I get a FAILED status when the tests fail?
UPD
If I run docker-compose up locally, it returns no error code:
$ export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Building db
Step 1/2 : FROM mm_postgres:9.6
...
test_1 | FAILURES!
test_1 | Tests: 1, Assertions: 1, Failures: 1.
test_1 | Success: 2 Fail: 2 Error: 0 Skip: 2 Incomplete: 0
mmadmin_test_1 exited with code 1
$ echo $?
0
It looks to me like it's reporting the failure in the test output without necessarily reporting failure in the return value of the docker-compose call. Have you tried capturing the return value of docker-compose when the tests fail locally?
In order to get docker-compose to return the exit code from a specific service, try this:
docker-compose up --exit-code-from=service
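As far as I know, --exit-code-from also implies --abort-on-container-exit, so docker-compose stops and returns as soon as that service exits. Applied to the job above, the script line would become something like:

- export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build --exit-code-from test test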
When GitLab CI runs something, your build fails if the executed process returns a non-zero exit code.
In your case, you are running docker-compose, and this program returns zero when the container finishes, which is correct.
You are trying to surface phpunit's failure.
I think it is better to split your build into steps and not use docker-compose in this case:
gitlab.yml:
stages:
  - build
  - test

build:
  image: docker:latest
  stage: build
  script:
    - docker build -t ${NAME_OF_IMAGE} .
    - docker push ${NAME_OF_IMAGE}

test:
  image: ${NAME_OF_IMAGE}
  stage: test
  script:
    - ./execute_your.sh

Docker check if file exists in healthcheck

How do I wait until a file is created in Docker? I'm trying the code below, but it doesn't work. If I execute bash -c [ -f /tmp/asdasdasd ] in a shell outside Docker, it gives me the correct result.
Dockerfiletest:
FROM alpine:3.6
RUN apk update && apk add bash
docker-compose.yml:
version: '2.1'
services:
  testserv:
    build:
      context: .
      dockerfile: ./Dockerfiletest
    command:
      bash -c "rm /tmp/a && sleep 5 && touch /tmp/a && sleep 100"
    healthcheck:
      # I tried adding '&& exit 1', '|| exit 1'; it doesn't work.
      test: bash -c [ -f /tmp/a ]
      timeout: 1s
      retries: 20
docker-compose up + wait 10s + docker ps:
$ docker ps
STATUS
Up About a minute (health: starting)
I believe you are missing quotes on the command to run. bash -c only accepts one parameter, not a list, so you need to quote the rest of that line to pass it as a single parameter:
bash -c "[ -f /tmp/a ]"
To see the results of your healthcheck, you can run:
docker inspect $container_id -f '{{ json .State.Health.Log }}' | jq .
It turns out that besides the missing quotes, I was also checking for a socket's existence with -f when I should have used:
bash -c '[ -S /tmp/uwsgi.sock ]'
Furthermore, interval: 5s can be used to shorten the default 30s interval between checks.
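Putting both fixes together, the healthcheck block would look something like this (a sketch; the socket path is the one from the comment above):

healthcheck:
  test: bash -c '[ -S /tmp/uwsgi.sock ]'
  interval: 5s
  timeout: 1s
  retries: 20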

How to set Zookeeper dataDir in Docker (fig.yml)

I've configured Zookeeper and Kafka containers in a fig.yml file for Docker. Both containers start fine. But after sending a number of messages, my application /zk-client hangs. On checking zookeeper logs, I see the error:
Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
My fig.yml is as follows:
zookeeper:
  image: wurstmeister/zookeeper
  ports:
    - "2181:2181"
  environment:
    ZK_ADVERTISED_HOST_NAME: xx.xx.x.xxx
    ZK_CONNECTION_TIMEOUT_MS: 6000
    ZK_SYNC_TIME_MS: 2000
    ZK_DATADIR: /path/to/data/zk/data/dir
kafka:
  image: wurstmeister/kafka:0.8.2.0
  ports:
    - "xx.xx.x.xxx:9092:9092"
  links:
    - zookeeper:zk
  environment:
    KAFKA_ADVERTISED_HOST_NAME: xx.xx.x.xxx
    KAFKA_LOG_DIRS: /home/svc_cis4/dl
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I've searched for quite a while now, but I haven't found a solution yet. I've also tried setting the data directory in fig.yml using ZK_DATADIR: '/path/to/zk/data/dir' but it doesn't seem to help. Any assistance will be appreciated.
UPDATE
Content of /opt/kafka_2.10-0.8.2.0/config/server.properties:
broker.id=0
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
The problems you are having are not related to Zookeeper's data directory. The error Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers means your application cannot find any broker znode in Zookeeper's data. This is probably happening because the Kafka container is not connecting correctly to Zookeeper, and looking at wurstmeister's images I think the problem may be that the variable KAFKA_ADVERTISED_HOST_NAME is set wrongly. I don't know if there is a reason to assign that variable through an env variable that has to be passed in, but from my point of view this is not a good approach. There are multiple ways to configure Kafka (in fact there is no need to set advertised.host.name at all; you can leave it commented out and Kafka will take the default hostname, which can be set with Docker), but a quick solution along these lines would be editing start-kafka.sh and rebuilding the image:
#!/bin/bash

if [[ -z "$KAFKA_ADVERTISED_PORT" ]]; then
    export KAFKA_ADVERTISED_PORT=$(docker port `hostname` 9092 | sed -r "s/.*:(.*)/\1/g")
fi
if [[ -z "$KAFKA_BROKER_ID" ]]; then
    export KAFKA_BROKER_ID=$KAFKA_ADVERTISED_PORT
fi
if [[ -z "$KAFKA_LOG_DIRS" ]]; then
    export KAFKA_LOG_DIRS="/kafka/kafka-logs-$KAFKA_BROKER_ID"
fi
if [[ -z "$KAFKA_ZOOKEEPER_CONNECT" ]]; then
    export KAFKA_ZOOKEEPER_CONNECT=$(env | grep ZK.*PORT_2181_TCP= | sed -e 's|.*tcp://||' | paste -sd ,)
fi

if [[ -n "$KAFKA_HEAP_OPTS" ]]; then
    sed -r -i "s/^(export KAFKA_HEAP_OPTS)=\"(.*)\"/\1=\"$KAFKA_HEAP_OPTS\"/g" $KAFKA_HOME/bin/kafka-server-start.sh
    unset KAFKA_HEAP_OPTS
fi

for VAR in `env`
do
    if [[ $VAR =~ ^KAFKA_ && ! $VAR =~ ^KAFKA_HOME ]]; then
        kafka_name=`echo "$VAR" | sed -r "s/KAFKA_(.*)=.*/\1/g" | tr '[:upper:]' '[:lower:]' | tr _ .`
        env_var=`echo "$VAR" | sed -r "s/(.*)=.*/\1/g"`
        if egrep -q "(^|^#)$kafka_name=" $KAFKA_HOME/config/server.properties; then
            sed -r -i "s#(^|^#)($kafka_name)=(.*)#\2=${!env_var}#g" $KAFKA_HOME/config/server.properties #note that no config values may contain an '#' char
        else
            echo "$kafka_name=${!env_var}" >> $KAFKA_HOME/config/server.properties
        fi
    fi
done

###NEW###
IP=$(hostname --ip-address)
sed -i -e "s/^advertised.host.name.*/advertised.host.name=$IP/" $KAFKA_HOME/config/server.properties
###END###

$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
If this doesn't solve your problem, you can get more information by starting a session inside the containers (i.e. docker exec -it kafkadocker_kafka_1 /bin/bash for Kafka and docker exec -it kafkadocker_zookeeper_1 /bin/bash for Zookeeper), and there check the Kafka logs or the Zookeeper console (/opt/zookeeper-3.4.6/bin/zkCli.sh).
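For example, to verify from the Zookeeper side that the broker actually registered, you could open the console and list the broker ids (a sketch using the container name from above):

docker exec -it kafkadocker_zookeeper_1 /opt/zookeeper-3.4.6/bin/zkCli.sh
ls /brokers/ids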
The configuration that's been working for me without any issues for the last two days involves specifying host addresses for both Zookeeper and Kafka. My fig.yml content is:
zookeeper:
  image: wurstmeister/zookeeper
  ports:
    - "xx.xx.x.xxx:2181:2181"
kafka:
  image: wurstmeister/kafka:0.8.2.0
  ports:
    - "9092:9092"
  links:
    - zookeeper:zk
  environment:
    KAFKA_ADVERTISED_HOST_NAME: xx.xx.x.xxx
    KAFKA_NUM_REPLICA_FETCHERS: 4
    ...other env variables...
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
validator:
  build: .
  volumes:
    - .:/host
  entrypoint: /bin/bash
  command: -c 'java -jar /host/app1.jar'
  links:
    - zookeeper:zk
    - kafka
analytics:
  build: .
  volumes:
    - .:/host
  entrypoint: /bin/bash
  command: -c 'java -jar /host/app2.jar'
  links:
    - zookeeper:zk
    - kafka
loader:
  build: .
  volumes:
    - .:/host
  entrypoint: /bin/bash
  command: -c 'java -jar /host/app3.jar'
  links:
    - zookeeper:zk
    - kafka
And the accompanying Dockerfile content:
FROM ubuntu:trusty
MAINTAINER Wurstmeister
RUN apt-get update; apt-get install -y unzip openjdk-7-jdk wget git docker.io
RUN wget -q http://apache.mirrors.lucidnetworks.net/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz -O /tmp/kafka_2.10-0.8.2.0.tgz
RUN tar xfz /tmp/kafka_2.10-0.8.2.0.tgz -C /opt
VOLUME ["/kafka"]
ENV KAFKA_HOME /opt/kafka_2.10-0.8.2.0
ADD start-kafka.sh /usr/bin/start-kafka.sh
ADD broker-list.sh /usr/bin/broker-list.sh
CMD start-kafka.sh
