Cloning environment in elastic beanstalk - ruby-on-rails

I want to update my Rails EB Linux platform from 1.11.8 to 2.12.2, so I cloned the environment and deployed to it, but I am getting this error:
PG::ConnectionBad (could not connect to server: Connection timed out
Is the server running on host "example.ccexample.us-east-1.rds.amazonaws.com" (111.11.21.22) and accepting
TCP/IP connections on port 5432?
Another error, likely caused by the same issue:
/opt/elasticbeanstalk/hooks/appdeploy/pre/12_db_migration.sh failed.
My environment variables are all correct, so shouldn't the database simply connect?
This is the error log:
[2020-12-31T22:05:28.834Z] INFO [5012] - [Application update app-example/AppDeployStage0/AppDeployPreHook/12_db_migration.sh] : Starting activity...
[2020-12-31T22:07:45.564Z] INFO [5012] - [Application update example/AppDeployStage0/AppDeployPreHook/12_db_migration.sh] : Activity execution failed, because: ++ /opt/elasticbeanstalk/bin/get-config container -k script_dir
+ EB_SCRIPT_DIR=/opt/elasticbeanstalk/support/scripts
++ /opt/elasticbeanstalk/bin/get-config container -k app_staging_dir
+ EB_APP_STAGING_DIR=/var/app/ondeck
++ /opt/elasticbeanstalk/bin/get-config container -k app_user
+ EB_APP_USER=webapp
++ /opt/elasticbeanstalk/bin/get-config container -k support_dir
+ EB_SUPPORT_DIR=/opt/elasticbeanstalk/support
+ . /opt/elasticbeanstalk/support/envvars-wrapper.sh
+++ /opt/elasticbeanstalk/bin/get-config container -k support_dir
++ EB_SUPPORT_DIR=/opt/elasticbeanstalk/support
++ set +x
+ RAKE_TASK=db:migrate
+ . /opt/elasticbeanstalk/support/scripts/use-app-ruby.sh
++ . /usr/local/share/chruby/chruby.sh
+++ CHRUBY_VERSION=0.3.9
+++ RUBIES=()
+++ for dir in '"$PREFIX/opt/rubies"' '"$HOME/.rubies"'
+++ [[ -d /opt/rubies ]]
++++ ls -A /opt/rubies
+++ [[ -n ruby-2.4.10
ruby-2.5.8
ruby-2.6.6
ruby-current ]]
+++ RUBIES+=("$dir"/*)
+++ for dir in '"$PREFIX/opt/rubies"' '"$HOME/.rubies"'
+++ [[ -d /.rubies ]]
+++ unset dir
+++ cat /etc/elasticbeanstalk/.ruby_version
++ chruby 2.5.8
++ case "$1" in
++ local dir match
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.4.10
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.5.8
++ case "${dir##*/}" in
++ match=/opt/rubies/ruby-2.5.8
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.6.6
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-current
++ case "${dir##*/}" in
++ [[ -z /opt/rubies/ruby-2.5.8 ]]
++ shift
++ chruby_use /opt/rubies/ruby-2.5.8 ''
++ [[ ! -x /opt/rubies/ruby-2.5.8/bin/ruby ]]
++ [[ -n '' ]]
++ export RUBY_ROOT=/opt/rubies/ruby-2.5.8
++ RUBY_ROOT=/opt/rubies/ruby-2.5.8
++ export RUBYOPT=
++ RUBYOPT=
++ export PATH=/opt/rubies/ruby-2.5.8/bin:/opt/elasticbeanstalk/lib/ruby/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
++ PATH=/opt/rubies/ruby-2.5.8/bin:/opt/elasticbeanstalk/lib/ruby/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
+++ /opt/rubies/ruby-2.5.8/bin/ruby -
++ eval 'export RUBY_ENGINE=ruby;
export RUBY_VERSION=2.5.8;
export GEM_ROOT="/opt/rubies/ruby-2.5.8/lib/ruby/gems/2.5.0";'
+++ export RUBY_ENGINE=ruby
+++ RUBY_ENGINE=ruby
+++ export RUBY_VERSION=2.5.8
+++ RUBY_VERSION=2.5.8
+++ export GEM_ROOT=/opt/rubies/ruby-2.5.8/lib/ruby/gems/2.5.0
+++ GEM_ROOT=/opt/rubies/ruby-2.5.8/lib/ruby/gems/2.5.0
++ (( 0 != 0 ))
+ cd /var/app/ondeck
+ su -s /bin/bash -c 'bundle exec /opt/elasticbeanstalk/support/scripts/check-for-rake-task.rb db:migrate' webapp
I also updated the config.yml with the new environment names:
branch-defaults:
  master:
    environment: NewName
environment-defaults:
  NewName:
    branch: null
    repository: null
  RevoltVendor-env:
    branch: null
    repository: null
global:
  application_name: App Name
  default_ec2_keyname: null
  default_platform: Puma with Ruby 2.5 running on 64bit Amazon Linux
  default_region: us-east-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: eb-cli
  sc: git
  workspace_type: Application
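For reference, with a standard EB CLI setup the cloned environment can be selected and checked like this (environment name as above):
eb use NewName      # point the current branch at the cloned environment
eb status NewName   # confirm the environment's platform version and health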
Any help would be appreciated.

Based on the comments.
The issue was caused by wrong security group (SG) inbound rules on the RDS side. The EB cloning operation created a new SG, which was not reflected in the inbound rules of the RDS's SG.
The solution was to update the SG of the RDS instance and add an inbound rule for the SG associated with the cloned EB environment.
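For example, with the AWS CLI (both security group IDs below are placeholders: sg-11111111 stands for the RDS instance's SG, sg-22222222 for the SG created for the cloned environment):
# Allow the cloned environment's SG to reach Postgres on the RDS SG
aws ec2 authorize-security-group-ingress \
  --group-id sg-11111111 \
  --protocol tcp \
  --port 5432 \
  --source-group sg-22222222
Afterwards you can verify connectivity from an instance in the cloned environment (e.g. via eb ssh) with nc -zv example.ccexample.us-east-1.rds.amazonaws.com 5432.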

Related

Rails app fails to deploy when upgrading elastic beanstalk stack

I am upgrading my Elastic Beanstalk platform version to Puma with Ruby 2.6 running on 64bit Amazon Linux/2.11.8 via EB's UI. When I do so, I get this error. It works if I revert back to platform version 2.11.4.
Initialization failed at 2020-08-07T04:41:35Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/preinit/22_gems.sh failed.
++ /opt/elasticbeanstalk/bin/get-config container -k script_dir
+ EB_SCRIPT_DIR=/opt/elasticbeanstalk/support/scripts
++ /opt/elasticbeanstalk/bin/get-config container -k gem_dir
+ EB_GEM_DIR=/opt/elasticbeanstalk/support/gems/puma
+ . /opt/elasticbeanstalk/support/scripts/use-app-ruby.sh
++ . /usr/local/share/chruby/chruby.sh
+++ CHRUBY_VERSION=0.3.9
+++ RUBIES=()
+++ for dir in '"$PREFIX/opt/rubies"' '"$HOME/.rubies"'
+++ [[ -d /opt/rubies ]]
++++ ls -A /opt/rubies
+++ [[ -n ruby-1.9.3-p551
ruby-2.0.0-p648
ruby-2.1.10
ruby-2.2.10
ruby-2.3.8
ruby-2.4.9
ruby-2.5.7
ruby-2.6.5
ruby-current ]]
+++ RUBIES+=("$dir"/*)
+++ for dir in '"$PREFIX/opt/rubies"' '"$HOME/.rubies"'
+++ [[ -d /.rubies ]]
+++ unset dir
+++ cat /etc/elasticbeanstalk/.ruby_version
++ chruby 2.6.6
++ case "$1" in
++ local dir match
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-1.9.3-p551
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.0.0-p648
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.1.10
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.2.10
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.3.8
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.4.9
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.5.7
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-2.6.5
++ case "${dir##*/}" in
++ for dir in '"${RUBIES[@]}"'
++ dir=/opt/rubies/ruby-current
++ case "${dir##*/}" in
++ [[ -z '' ]]
++ echo 'chruby: unknown Ruby: 2.6.6'
chruby: unknown Ruby: 2.6.6
++ return 1
Process default has been unhealthy for 34 minutes (Target.FailedHealthChecks).
How do I go about debugging this? The Rails app is running on Ruby 2.6.6.
The trace shows that this platform version only ships Rubies up to 2.6.5, so chruby 2.6.6 fails with "unknown Ruby". You probably need to use a Ruby that is available on the platform (like 2.6.5) or figure out how to make other Rubies available.
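If you can SSH to an instance, you can confirm what the platform actually ships and which version it will select (both paths appear in the trace above):
ls /opt/rubies                            # Rubies installed on this platform version
cat /etc/elasticbeanstalk/.ruby_version   # the version the deploy hooks pass to chruby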

Jenkins inconsistency (file changes everytime)

I am new to Jenkins and still trying to understand how it actually works.
What I am trying to do is pretty simple: I trigger the build whenever I push to my GitHub repo.
Then, I try to ssh into a server.
My pipeline looks like this:
pipeline {
    agent any
    stages {
        stage('SSH into the server') {
            steps {
                withCredentials([sshUserPrivateKey(
                        credentialsId: '<id>',
                        keyFileVariable: 'KEY_FILE')]) {
                    sh '''
                        cd ~/.ssh
                        ls
                        cat ${KEY_FILE} > ./deployer_key.key
                        eval $(ssh-agent -s)
                        chmod 600 ./deployer_key.key
                        ssh-add ./deployer_key.key
                        ssh root@<my-server> ps -a
                        ssh-agent -k
                    '''
                }
            }
        }
    }
}
It's literally a simple ssh task.
However, I am getting inconsistent results.
When I check the log,
Failed Case
Masking supported pattern matches of $KEY_FILE
[Pipeline] {
[Pipeline] sh
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51703;' export 'SSH_AGENT_PID;' echo Agent pid '51703;'
++ SSH_AUTH_SOCK=/tmp/ssh-hb6yX48CJPQA/agent.51702
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51703
++ export SSH_AGENT_PID
++ echo Agent pid 51703
Agent pid 51703
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Host key verification failed.
When I ls inside the .ssh directory, it has those files.
In the success case,
Success Case
+ cd /bms/home/pdsint/.ssh
+ ls
authorized_keys
authorized_keys.bak <----------
known_hosts
known_hosts.old
+ cat ****
++ ssh-agent -s
+ eval 'SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565;' export 'SSH_AUTH_SOCK;' 'SSH_AGENT_PID=51566;' export 'SSH_AGENT_PID;' echo Agent pid '51566;'
++ SSH_AUTH_SOCK=/tmp/ssh-yDNVe51565/agent.51565
++ export SSH_AUTH_SOCK
++ SSH_AGENT_PID=51566
++ export SSH_AGENT_PID
++ echo Agent pid 51566
Agent pid 51566
+ chmod 600 ./deployer_key.key
+ ssh-add ./deployer_key.key
Identity added: ./deployer_key.key (./deployer_key.key)
+ ssh root@<my-server> docker ps -a
Warning: Permanently added '<my-server>' (RSA) to the list of known hosts.
It has the authorized_keys.bak file.
I don't really think that file makes the difference, but all success logs have that file while all failure logs do not. Also, I really don't get why each build has different files in it. Aren't they supposed to be independent of each other? Isn't that the point of Jenkins (trying to build/test/deploy in a new environment)?
Any help would be appreciated. Thanks.

Kafka not able to connect with Zookeeper in Linux machine

I've been trying to create a producer and consumer in Kafka on a Linux machine.
I've started an instance of both Zookeeper and Kafka with the following commands.
docker run -d \
--name zookeeper \
-p 32181:32181 \
-e ZOOKEEPER_CLIENT_PORT=32181 \
confluentinc/cp-zookeeper:4.1.0
docker run -d \
--name kafka \
--link zookeeper \
-p 39092:39092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
and Kafka is not able to connect to Zookeeper.
The above scenario works fine on a Mac machine but not on Linux.
However, when I start an instance of both Zookeeper and Kafka with host networking (given below)
docker run -d --name zookeeper --network=host -e ZOOKEEPER_CLIENT_PORT=32181 confluentinc/cp-zookeeper:4.1.0
docker run -d --name kafka --network=host -e KAFKA_ZOOKEEPER_CONNECT=zookeeper1:32181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:4.1.0
The instances are up and running and Kafka is able to connect to Zookeeper.
But I do not wish to use host networking. Can anyone please share a possible solution for the above scenario?
Below are the complete Docker logs for Zookeeper and Kafka.
docker logs kafka
# Set environment values if they exist as arguments
if [ $# -ne 0 ]; then
echo "===> Overriding env params with args ..."
for var in "$@"
do
export "$var"
done
fi
+ '[' 0 -ne 0 ']'
echo "===> ENV Variables ..."
+ echo '===> ENV Variables ...'
env | sort
===> ENV Variables ...
+ env
+ sort
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=4
CONFLUENT_MINOR_VERSION=1
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=0
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=4.1.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=df9a2616ba03
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_VERSION=1.1.0
KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SHLVL=1
ZOOKEEPER_ENV_ALLOW_UNSIGNED=false
ZOOKEEPER_ENV_COMPONENT=zookeeper
ZOOKEEPER_ENV_CONFLUENT_DEB_VERSION=1
ZOOKEEPER_ENV_CONFLUENT_MAJOR_VERSION=4
ZOOKEEPER_ENV_CONFLUENT_MINOR_VERSION=1
ZOOKEEPER_ENV_CONFLUENT_MVN_LABEL=
ZOOKEEPER_ENV_CONFLUENT_PATCH_VERSION=0
ZOOKEEPER_ENV_CONFLUENT_PLATFORM_LABEL=
ZOOKEEPER_ENV_CONFLUENT_VERSION=4.1.0
ZOOKEEPER_ENV_CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
ZOOKEEPER_ENV_KAFKA_VERSION=1.1.0
ZOOKEEPER_ENV_LANG=C.UTF-8
ZOOKEEPER_ENV_PYTHON_PIP_VERSION=8.1.2
ZOOKEEPER_ENV_PYTHON_VERSION=2.7.9-1
ZOOKEEPER_ENV_SCALA_VERSION=2.11
ZOOKEEPER_ENV_ZOOKEEPER_CLIENT_PORT=32181
ZOOKEEPER_ENV_ZULU_OPENJDK_VERSION=8=8.17.0.3
ZOOKEEPER_NAME=/kafka/zookeeper
ZOOKEEPER_PORT=tcp://172.17.0.2:2181
ZOOKEEPER_PORT_2181_TCP=tcp://172.17.0.2:2181
ZOOKEEPER_PORT_2181_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_2181_TCP_PORT=2181
ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_PORT_2888_TCP=tcp://172.17.0.2:2888
ZOOKEEPER_PORT_2888_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_2888_TCP_PORT=2888
ZOOKEEPER_PORT_2888_TCP_PROTO=tcp
ZOOKEEPER_PORT_32181_TCP=tcp://172.17.0.2:32181
ZOOKEEPER_PORT_32181_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_32181_TCP_PORT=32181
ZOOKEEPER_PORT_32181_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP=tcp://172.17.0.2:3888
ZOOKEEPER_PORT_3888_TCP_ADDR=172.17.0.2
ZOOKEEPER_PORT_3888_TCP_PORT=3888
ZOOKEEPER_PORT_3888_TCP_PROTO=tcp
ZULU_OPENJDK_VERSION=8=8.17.0.3
_=/usr/bin/env
echo "===> User"
+ echo '===> User'
===> User
id
+ id
uid=0(root) gid=0(root) groups=0(root)
echo "===> Configuring ..."
+ echo '===> Configuring ...'
/etc/confluent/docker/configure
===> Configuring ...
+ /etc/confluent/docker/configure
dub ensure KAFKA_ZOOKEEPER_CONNECT
+ dub ensure KAFKA_ZOOKEEPER_CONNECT
dub ensure KAFKA_ADVERTISED_LISTENERS
+ dub ensure KAFKA_ADVERTISED_LISTENERS
# By default, LISTENERS is derived from ADVERTISED_LISTENERS by replacing
# hosts with 0.0.0.0. This is good default as it ensures that the broker
# process listens on all ports.
if [[ -z "${KAFKA_LISTENERS-}" ]]
then
export KAFKA_LISTENERS
KAFKA_LISTENERS=$(cub listeners "$KAFKA_ADVERTISED_LISTENERS")
fi
+ [[ -z '' ]]
+ export KAFKA_LISTENERS
cub listeners "$KAFKA_ADVERTISED_LISTENERS"
++ cub listeners PLAINTEXT://localhost:39092
+ KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:39092
dub path /etc/kafka/ writable
+ dub path /etc/kafka/ writable
if [[ -z "${KAFKA_LOG_DIRS-}" ]]
then
export KAFKA_LOG_DIRS
KAFKA_LOG_DIRS="/var/lib/kafka/data"
fi
+ [[ -z '' ]]
+ export KAFKA_LOG_DIRS
+ KAFKA_LOG_DIRS=/var/lib/kafka/data
# advertised.host, advertised.port, host and port are deprecated. Exit if these properties are set.
if [[ -n "${KAFKA_ADVERTISED_PORT-}" ]]
then
echo "advertised.port is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_ADVERTISED_HOST-}" ]]
then
echo "advertised.host is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_HOST-}" ]]
then
echo "host is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
exit 1
fi
+ [[ -n '' ]]
if [[ -n "${KAFKA_PORT-}" ]]
then
echo "port is deprecated. Please use KAFKA_ADVERTISED_LISTENERS instead."
exit 1
fi
+ [[ -n '' ]]
# Set if ADVERTISED_LISTENERS has SSL:// or SASL_SSL:// endpoints.
if [[ $KAFKA_ADVERTISED_LISTENERS == *"SSL://"* ]]
then
echo "SSL is enabled."
dub ensure KAFKA_SSL_KEYSTORE_FILENAME
export KAFKA_SSL_KEYSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_FILENAME"
dub path "$KAFKA_SSL_KEYSTORE_LOCATION" exists
dub ensure KAFKA_SSL_KEY_CREDENTIALS
KAFKA_SSL_KEY_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEY_CREDENTIALS"
dub path "$KAFKA_SSL_KEY_CREDENTIALS_LOCATION" exists
export KAFKA_SSL_KEY_PASSWORD
KAFKA_SSL_KEY_PASSWORD=$(cat "$KAFKA_SSL_KEY_CREDENTIALS_LOCATION")
dub ensure KAFKA_SSL_KEYSTORE_CREDENTIALS
KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_KEYSTORE_CREDENTIALS"
dub path "$KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION" exists
export KAFKA_SSL_KEYSTORE_PASSWORD
KAFKA_SSL_KEYSTORE_PASSWORD=$(cat "$KAFKA_SSL_KEYSTORE_CREDENTIALS_LOCATION")
if [[ -n "${KAFKA_SSL_CLIENT_AUTH-}" ]] && ( [[ $KAFKA_SSL_CLIENT_AUTH == *"required"* ]] || [[ $KAFKA_SSL_CLIENT_AUTH == *"requested"* ]] )
then
dub ensure KAFKA_SSL_TRUSTSTORE_FILENAME
export KAFKA_SSL_TRUSTSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_FILENAME"
dub path "$KAFKA_SSL_TRUSTSTORE_LOCATION" exists
dub ensure KAFKA_SSL_TRUSTSTORE_CREDENTIALS
KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS"
dub path "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION" exists
export KAFKA_SSL_TRUSTSTORE_PASSWORD
KAFKA_SSL_TRUSTSTORE_PASSWORD=$(cat "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION")
fi
fi
+ [[ PLAINTEXT://localhost:39092 == *\S\S\L\:\/\/* ]]
# Set if KAFKA_ADVERTISED_LISTENERS has SASL_PLAINTEXT:// or SASL_SSL:// endpoints.
if [[ $KAFKA_ADVERTISED_LISTENERS =~ .*SASL_.*://.* ]]
then
echo "SASL" is enabled.
dub ensure KAFKA_OPTS
if [[ ! $KAFKA_OPTS == *"java.security.auth.login.config"* ]]
then
echo "KAFKA_OPTS should contain 'java.security.auth.login.config' property."
fi
fi
+ [[ PLAINTEXT://localhost:39092 =~ .*SASL_.*://.* ]]
if [[ -n "${KAFKA_JMX_OPTS-}" ]]
then
if [[ ! $KAFKA_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"* ]]
then
echo "KAFKA_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
fi
fi
+ [[ -n '' ]]
dub template "/etc/confluent/docker/${COMPONENT}.properties.template" "/etc/${COMPONENT}/${COMPONENT}.properties"
+ dub template /etc/confluent/docker/kafka.properties.template /etc/kafka/kafka.properties
dub template "/etc/confluent/docker/log4j.properties.template" "/etc/${COMPONENT}/log4j.properties"
+ dub template /etc/confluent/docker/log4j.properties.template /etc/kafka/log4j.properties
dub template "/etc/confluent/docker/tools-log4j.properties.template" "/etc/${COMPONENT}/tools-log4j.properties"
+ dub template /etc/confluent/docker/tools-log4j.properties.template /etc/kafka/tools-log4j.properties
echo "===> Running preflight checks ... "
+ echo '===> Running preflight checks ... '
/etc/confluent/docker/ensure
+ /etc/confluent/docker/ensure
===> Running preflight checks ...
===> Check if /var/lib/kafka/data is writable ...
export KAFKA_DATA_DIRS=${KAFKA_DATA_DIRS:-"/var/lib/kafka/data"}
+ export KAFKA_DATA_DIRS=/var/lib/kafka/data
+ KAFKA_DATA_DIRS=/var/lib/kafka/data
echo "===> Check if $KAFKA_DATA_DIRS is writable ..."
+ echo '===> Check if /var/lib/kafka/data is writable ...'
dub path "$KAFKA_DATA_DIRS" writable
+ dub path /var/lib/kafka/data writable
===> Check if Zookeeper is healthy ...
echo "===> Check if Zookeeper is healthy ..."
+ echo '===> Check if Zookeeper is healthy ...'
cub zk-ready "$KAFKA_ZOOKEEPER_CONNECT" "${KAFKA_CUB_ZK_TIMEOUT:-40}"
+ cub zk-ready zookeeper:32181 40
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=df9a2616ba03
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_102
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-46-generic
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
[main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
[main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zookeeper:32181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@1ddc4ec2
[main-SendThread(zookeeper:32181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.17.0.2:32181. Will not attempt to authenticate using SASL (unknown error)
Using the above commands, Kafka should have connected to Zookeeper, as it does on a Mac machine, but it does not on Linux.
If you do not want to use --network=host, you need to create and use a Docker bridge network: https://docs.docker.com/network/bridge/#manage-a-user-defined-bridge.
Here is how you could do it:
docker network create kafka-network
docker run -d --name zookeeper --network=kafka-network -e ZOOKEEPER_CLIENT_PORT=32181 confluentinc/cp-zookeeper:4.1.0
docker run -d --name kafka --network=kafka-network -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:39092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka:4.1.0
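To sanity-check the setup, you can run a one-off container on the same network and list topics through the service name (this works because Docker's embedded DNS resolves container names on a user-defined bridge, unlike the default bridge where --link is needed):
docker run --rm --network=kafka-network confluentinc/cp-kafka:4.1.0 \
  kafka-topics --zookeeper zookeeper:32181 --list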

docker-compose wurstmeister/kafka failing to parse KAFKA_OPTS

I have a basic docker-compose file for wurstmeister/kafka.
I'm trying to configure it to use SASL PLAIN with SSL.
However, I keep getting this error no matter how many ways I try to specify my JAAS file.
This is the error I get:
[2018-04-11 10:34:34,545] FATAL [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'sasl_ssl.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
These are the vars I have. The last one is where I specify my JAAS file:
environment:
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  KAFKA_HOST_NAME: 10.10.10.1
  KAFKA_PORT: 9092
  KAFKA_ADVERTISED_PORT: 9093
  KAFKA_ADVERTISED_HOST_NAME: 10.10.10.1
  KAFKA_LISTENERS: PLAINTEXT://:9092,SASL_SSL://:9093
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.10.10.1:9092,SASL_SSL://10.10.10.1:9093
  KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL
  KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
  SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
  KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
  KAFKA_SSL_TRUSTSTORE_LOCATION: /kafka.server.truststore.jks
  KAFKA_SSL_TRUSTSTORE_PASSWORD: password
  KAFKA_SSL_KEYSTORE_LOCATION: /kafka.server.keystore.jks
  KAFKA_SSL_KEYSTORE_PASSWORD: password
  KAFKA_SSL_KEY_PASSWORD: password
  KAFKA_OPTS: '-Djava.security.auth.login.config=/path/kafka_server_jaas.conf'
Also, when I check the Docker logs I see:
/usr/bin/start-kafka.sh: line 96: KAFKA_OPTS=-Djava.security.auth.login.config: bad substitution
Any help is greatly appreciated!
The equals sign '=' inside the last value is causing this issue.
KAFKA_OPTS: '-Djava.security.auth.login.config=/path/kafka_server_jaas.conf'
This is what I have got after debugging.
+ for VAR in $(env)
+ [[ KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf =~ ^KAFKA_ ]]
+ [[ ! KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf =~ ^KAFKA_HOME ]]
++ echo KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf
++ sed -r 's/KAFKA_(.*)=.*/\1/g'
++ tr '[:upper:]' '[:lower:]'
++ tr _ .
+ kafka_name=opts=-djava.security.auth.login.config
++ echo KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf
++ sed -r 's/(.*)=.*/\1/g'
+ env_var=KAFKA_OPTS=-Djava.security.auth.login.config
+ grep -E -q '(^|^#)opts=-djava.security.auth.login.config=' /opt/kafka/config/server.properties
start-kafka.sh: line 96: KAFKA_OPTS=-Djava.security.auth.login.config: bad substitution
And this is the piece of code that performs this operation:
88 for VAR in $(env)
89 do
90   if [[ $VAR =~ ^KAFKA_ && ! $VAR =~ ^KAFKA_HOME ]]; then
91     kafka_name=$(echo "$VAR" | sed -r 's/KAFKA_(.*)=.*/\1/g' | tr '[:upper:]' '[:lower:]' | tr _ .)
92     env_var=$(echo "$VAR" | sed -r 's/(.*)=.*/\1/g')
93     if grep -E -q '(^|^#)'"$kafka_name=" "$KAFKA_HOME/config/server.properties"; then
94       sed -r -i 's@(^|^#)('"$kafka_name"')=(.*)@\2='"${!env_var}"'@g' "$KAFKA_HOME/config/server.properties" # note that no config values may contain an '@' char
95     else
96       echo "$kafka_name=${!env_var}" >> "$KAFKA_HOME/config/server.properties"
97     fi
98   fi
99
100   if [[ $VAR =~ ^LOG4J_ ]]; then
101     log4j_name=$(echo "$VAR" | sed -r 's/(LOG4J_.*)=.*/\1/g' | tr '[:upper:]' '[:lower:]' | tr _ .)
102     log4j_env=$(echo "$VAR" | sed -r 's/(.*)=.*/\1/g')
103     if grep -E -q '(^|^#)'"$log4j_name=" "$KAFKA_HOME/config/log4j.properties"; then
104       sed -r -i 's@(^|^#)('"$log4j_name"')=(.*)@\2='"${!log4j_env}"'@g' "$KAFKA_HOME/config/log4j.properties" # note that no config values may contain an '@' char
105     else
106       echo "$log4j_name=${!log4j_env}" >> "$KAFKA_HOME/config/log4j.properties"
107     fi
108   fi
109 done
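To see the failure mode in isolation, here is a minimal reproduction you can run in any bash shell (the value is the one from the question; this snippet is only illustrative, not part of start-kafka.sh):
VAR='KAFKA_OPTS=-Djava.security.auth.login.config=/path/kafka_server_jaas.conf'
# The greedy (.*) matches up to the LAST '=', so the extracted "name" still contains '=':
env_var=$(echo "$VAR" | sed -r 's/(.*)=.*/\1/g')
echo "$env_var"      # KAFKA_OPTS=-Djava.security.auth.login.config
# Indirect expansion ${!env_var} requires a valid variable name, so bash reports:
echo "${!env_var}"   # bash: ${!env_var}: bad substitution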
Update: They have fixed it and it is merged now!
https://github.com/wurstmeister/kafka-docker/pull/321
There's a bug open now with wurstmeister/kafka, but they have gotten back to me with a workaround as follows:
I believe this is part of a larger namespace collision problem that affects multiple elements such as Kubernetes deployments etc. (as well as other KAFKA_ service settings).
Given you are referencing an external file /kafka_server_jaas.conf, I'm assuming you're OK adding/mounting extra files through; a work-around is to specify a CUSTOM_INIT_SCRIPT environment var, which should be a script similar to:
#!/bin/bash
export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka_server_jaas.conf"
This is executed after the substitution part that is failing.
This could have been done inline, however there is currently a bug in how we process the environment, where we need to specify the input separator to make this work correctly.
Hopefully this works!

Configure Jenkins with bitbucket for running Test

I am planning to run builds and test cases and deploy using Jenkins. I have installed Jenkins and created a job.
I have a Bitbucket repository using Mercurial, so I configured Mercurial in the build to clone the repository and do nothing else. Now I am writing the shell commands for this purpose:
source ~/.profile                # load profile; works fine
mkvirtualenv test_build          # create a virtual environment using virtualenvwrapper; this fails with the trace provided below
cd my_project                    # move to the project directory
pip install -r requirements.txt  # install packages using pip
Here is the trace from the build console on Jenkins.
[workspace] $ /usr/local/bin/bash -xe /tmp/hudson3781010986042746968.sh
+ source /usr/local/jenkins/.profile
++ PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/usr/local/jenkins/bin
++ export PATH
++ BLOCKSIZE=K
++ export BLOCKSIZE
++ EDITOR=vi
++ export EDITOR
++ PAGER=more
++ export PAGER
++ ENV=/usr/local/jenkins/.shrc
++ export ENV
++ '[' -x /usr/games/fortune ']'
++ '[' -e /usr/local/bin/virtualenvwrapper.sh ']'
++ export WORKON_HOME=/usr/local/jenkins/virtualenvs
++ WORKON_HOME=/usr/local/jenkins/virtualenvs
++ source /usr/local/bin/virtualenvwrapper.sh
+++ '[' '' = '' ']'
++++ command which python
++++ which python
+++ VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python
+++ '[' '' = '' ']'
+++ VIRTUALENVWRAPPER_VIRTUALENV=virtualenv
+++ '[' '' = '' ']'
+++ VIRTUALENVWRAPPER_VIRTUALENV_CLONE=virtualenv-clone
+++ VIRTUALENVWRAPPER_ENV_BIN_DIR=bin
+++ '[' '' = Windows_NT ']'
+++ '[' .project = '' ']'
+++ virtualenvwrapper_initialize
++++ virtualenvwrapper_derive_workon_home
++++ typeset workon_home_dir=/usr/local/jenkins/virtualenvs
++++ '[' /usr/local/jenkins/virtualenvs = '' ']'
++++ echo /usr/local/jenkins/virtualenvs
++++ unset GREP_OPTIONS
++++ command grep '^[^/~]'
++++ grep '^[^/~]'
++++ echo /usr/local/jenkins/virtualenvs
++++ unset GREP_OPTIONS
++++ command egrep '([\$~]|//)'
++++ egrep '([\$~]|//)'
++++ echo /usr/local/jenkins/virtualenvs
++++ return 0
+++ export WORKON_HOME=/usr/local/jenkins/virtualenvs
+++ WORKON_HOME=/usr/local/jenkins/virtualenvs
+++ virtualenvwrapper_verify_workon_home -q
+++ RC=0
+++ '[' '!' -d /usr/local/jenkins/virtualenvs/ ']'
+++ return 0
+++ '[' /usr/local/jenkins/virtualenvs = '' ']'
+++ '[' /usr/local/jenkins/virtualenvs = '' ']'
+++ virtualenvwrapper_run_hook initialize
+++ typeset hook_script
+++ typeset result
++++ virtualenvwrapper_tempfile initialize-hook
++++ typeset suffix=initialize-hook
++++ typeset file
+++++ virtualenvwrapper_mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
+++++ command mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
+++++ mktemp -t virtualenvwrapper-initialize-hook-XXXXXXXXXX
++++ file=/tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1
++++ '[' 0 -ne 0 ']'
++++ '[' -z /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1 ']'
++++ '[' '!' -f /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1 ']'
++++ echo /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1
++++ return 0
+++ hook_script=/tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1
+++ '[' -z /usr/local/jenkins/virtualenvs ']'
+++ /usr/local/bin/python -c 'from virtualenvwrapper.hook_loader import main; main()' --script /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1 initialize
+++ result=0
+++ '[' 0 -eq 0 ']'
+++ '[' '!' -f /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1 ']'
+++ source /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1
++++ '[' -f /usr/local/jenkins/virtualenvs/initialize ']'
++++ source /usr/local/jenkins/virtualenvs/initialize
+++ command rm -f /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1
+++ rm -f /tmp/virtualenvwrapper-initialize-hook-XXXXXXXXXX.jKEvY7Y1
+++ return 0
+++ virtualenvwrapper_setup_tab_completion
+++ '[' -n /usr/local/bin/bash ']'
+++ complete -o nospace -F _cdvirtualenv_complete -S/ cdvirtualenv
+++ complete -o nospace -F _cdsitepackages_complete -S/ cdsitepackages
+++ complete -o default -o nospace -F _virtualenvs workon
+++ complete -o default -o nospace -F _virtualenvs rmvirtualenv
+++ complete -o default -o nospace -F _virtualenvs cpvirtualenv
+++ complete -o default -o nospace -F _virtualenvs showvirtualenv
+++ return 0
+ mkvirtualenv test_build
+ typeset -a in_args
+ typeset -a out_args
+ typeset -i i
+ typeset tst
+ typeset a
+ typeset envname
+ typeset requirements
+ typeset packages
+ in_args=("$@")
+ '[' -n '' ']'
+ i=0
+ tst=-lt
+ '[' 0 -lt 1 ']'
+ a=test_build
+ case "$a" in
+ '[' 0 -gt 0 ']'
+ out_args=("$a")
+ i=1
+ '[' 1 -lt 1 ']'
+ set -- test_build
+ eval 'envname=$1'
++ envname=test_build
+ virtualenvwrapper_verify_workon_home
+ RC=0
+ '[' '!' -d /usr/local/jenkins/virtualenvs/ ']'
+ return 0
+ virtualenvwrapper_verify_virtualenv
+ virtualenvwrapper_verify_resource virtualenv
++ command which virtualenv
++ which virtualenv
++ unset GREP_OPTIONS
++ command grep -v 'not found'
++ grep -v 'not found'
+ typeset exe_path=/usr/local/bin/virtualenv
+ '[' /usr/local/bin/virtualenv = '' ']'
+ '[' '!' -e /usr/local/bin/virtualenv ']'
+ return 0
+ '[' -n '' ']'
+ virtualenvwrapper_cd /usr/local/jenkins/virtualenvs
+ '[' -n /usr/local/bin/bash ']'
+ builtin cd /usr/local/jenkins/virtualenvs
+ virtualenv test_build
New python executable in test_build/bin/python2.7
Also creating executable in test_build/bin/python
Installing Setuptools..............................................................................................................................................................................................................................done.
Installing Pip.....................................................................................................................................................................................................................................................................................................................................done.
+ '[' -d /usr/local/jenkins/virtualenvs/test_build ']'
+ virtualenvwrapper_run_hook pre_mkvirtualenv test_build
+ typeset hook_script
+ typeset result
++ virtualenvwrapper_tempfile pre_mkvirtualenv-hook
++ typeset suffix=pre_mkvirtualenv-hook
++ typeset file
+++ virtualenvwrapper_mktemp -t virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX
+++ command mktemp -t virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX
+++ mktemp -t virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX
++ file=/tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4
++ '[' 0 -ne 0 ']'
++ '[' -z /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4 ']'
++ '[' '!' -f /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4 ']'
++ echo /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4
++ return 0
+ hook_script=/tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4
+ '[' -z /usr/local/jenkins/virtualenvs ']'
+ /usr/local/bin/python -c 'from virtualenvwrapper.hook_loader import main; main()' --script /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4 pre_mkvirtualenv test_build
virtualenvwrapper.user_scripts creating /usr/local/jenkins/virtualenvs/test_build/bin/predeactivate
virtualenvwrapper.user_scripts creating /usr/local/jenkins/virtualenvs/test_build/bin/postdeactivate
virtualenvwrapper.user_scripts creating /usr/local/jenkins/virtualenvs/test_build/bin/preactivate
virtualenvwrapper.user_scripts creating /usr/local/jenkins/virtualenvs/test_build/bin/postactivate
virtualenvwrapper.user_scripts creating /usr/local/jenkins/virtualenvs/test_build/bin/get_env_details
+ result=0
+ '[' 0 -eq 0 ']'
+ '[' '!' -f /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4 ']'
+ source /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4
+ command rm -f /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4
+ rm -f /tmp/virtualenvwrapper-pre_mkvirtualenv-hook-XXXXXXXXXX.rPgLzOe4
+ return 0
+ typeset RC=0
+ '[' 0 -ne 0 ']'
+ '[' '!' -d /usr/local/jenkins/virtualenvs/test_build ']'
+ '[' '!' -z '' ']'
+ workon test_build
+ typeset env_name=test_build
+ '[' test_build = '' ']'
+ virtualenvwrapper_verify_workon_home
+ RC=0
+ '[' '!' -d /usr/local/jenkins/virtualenvs/ ']'
+ return 0
+ virtualenvwrapper_verify_workon_environment test_build
+ typeset env_name=test_build
+ '[' '!' -d /usr/local/jenkins/virtualenvs/test_build ']'
+ return 0
+ activate=/usr/local/jenkins/virtualenvs/test_build/bin/activate
+ '[' '!' -f /usr/local/jenkins/virtualenvs/test_build/bin/activate ']'
+ type deactivate
Build step 'Execute shell' marked build as failure
Finished: FAILURE
