I am trying to create a keytab file when I build the image.
Here is what I am running on one of our Red Hat boxes:
ktutil
ktutil: add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts
Password for $user_id@DOMAIN.COM:
ktutil: wkt $user_id.keytab
ktutil: quit
and it generates the keytab.
I am trying to do the same in Docker, and I am running:
RUN ktutil && echo "add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts" && echo "$user_pass" && echo "wkt $user_id.keytab" && echo "quit"
It's doing this:
Step 22/27 : RUN ktutil && echo "add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts" && echo "$user_pass" && echo "wkt $user_id.keytab" && echo "quit"
---> Running in b186efb561fc
ktutil: add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts
$user_pass
wkt $user_id.keytab
quit
So it runs the first command and then exits ktutil? How should I format the RUN command? Is there a trick to getting it to stay in ktutil?
This question is not really Docker-specific. It is about how to run ktutil in non-interactive mode, and I found an existing question which covers that: Script Kerberos Ktutil to make keytabs.
We can apply the ideas from that answer to create the keytab file in Docker:
FROM centos
# These variables are just for demonstration here;
# ideally they should be passed in a more secure way,
# since build-arg values end up in the image history
ARG user_id
ARG user_pass
# You should check here whether the above arguments
# have actually been passed to the build
# Install dependencies
# Add new entry to keytab file and list all entries afterwards
RUN yum install -y krb5-workstation.x86_64 \
&& echo -e "add_entry -password -p $user_id#DOMAIN.COM -k 1 -e aes256-cts\n$user_pass\nwkt $user_id.keytab" | ktutil \
&& echo -e "read_kt $user_id.keytab\nlist" | ktutil
wkt $user_id.keytab" | ktutil \
&& echo -e ""
When I run the build for the above Dockerfile with this command:
docker build -t ktutil --build-arg user_id=test --build-arg user_pass=test_pass .
I can see the following results:
ktutil: add_entry -password -p test@DOMAIN.COM -k 1 -e aes256-cts
Password for test@DOMAIN.COM:
ktutil: wkt test.keytab
ktutil: ktutil: read_kt test.keytab
ktutil: list
slot KVNO Principal
---- ---- ---------------------------------------------------------------------
   1    1 test@DOMAIN.COM
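As a quick extra check, the same krb5-workstation package also provides klist, which lists the keytab without starting another ktutil session (an optional verification step, not required by the build):
RUN klist -kte $user_id.keytab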
Try:
ktutil: add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts
Password for $user_id@DOMAIN.COM:
ktutil: add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts
Password for $user_id@DOMAIN.COM:
ktutil:
We managed to fix it like so:
RUN printf "add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts\n$user_pass\nwkt $user_id.keytab\n" | ktutil
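As the comment in the Dockerfile above hints, the password does not have to travel through a build argument at all. Here is a hedged sketch using a BuildKit build secret instead (the secret id krb_pass and the file name are placeholders of mine, not from the original posts):
RUN --mount=type=secret,id=krb_pass \
    { echo "add_entry -password -p $user_id@DOMAIN.COM -k 1 -e aes256-cts"; \
      cat /run/secrets/krb_pass; \
      echo "wkt $user_id.keytab"; } | ktutil
The corresponding build command would then look something like: docker build --secret id=krb_pass,src=./krb_pass.txt --build-arg user_id=test -t ktutil .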
Related
I am trying to build the Dockerfile below. What I want is: when I run the container, it should ask me which user to log in as. Currently I only get the "Enter the Name: " prompt, and after that it does not log in with the username I provide.
Please help!
FROM ubuntu:14.04
RUN apt-get -y update && apt-get -y install sudo
ENV VES=14.04
RUN userdel -r Harsh && userdel -r Piyush && userdel -r Sajid
LABEL Name "Harsh Arora"
RUN useradd -ms /bin/bash Harsh && passwd "Harsharora:Harsharora" | chpasswd && adduser Harsh sudo
RUN useradd -m Piyush && passwd "Piyush:Piyush" | chpasswd
RUN useradd -m Sajid && passwd "Sajid:Sajid" | chpasswd
COPY test.sh /tmp/test.sh
ENTRYPOINT ["/tmp/test.sh"]
Please find the "test.sh" script below for reference.
#!/bin/bash
#This is test script writing for entrypoint example in docker file.
echo -e " Enter The Name: \c" ; read Name
case $Name in
Love)
echo -e " *********************************************************************************************************************"
echo -e "\t\tTHIS CONTAINER RUNNING BY USER $Name"
echo -e " ********************************************************************************************************************* "
;;
Lucky) echo -e " *********************************************************************************************************************"
echo -e "\t\tTHIS CONTAINER RUNNING BY USER $Name"
echo -e " ********************************************************************************************************************* "
;;
Naveen) echo -e " *********************************************************************************************************************"
echo -e "\t\tTHIS CONTAINER RUNNING BY USER $Name"
echo -e " ********************************************************************************************************************* "
;;
*) echo -e " *********************************************************************************************************************"
echo -e "\t\tTHIS CONTAINER RUNNING BY USER $Name"
echo -e " ********************************************************************************************************************* "
;;
esac
$@
su - $Name <<< echo "$Name"
I am following the steps in the "Standalone Instance, Two-Way SSL" section of https://hub.docker.com/r/apache/nifi. However, when I visit the NiFi page, my user has insufficient permissions. Below is the process I am using:
Generate self-signed certificates
mkdir conf
docker exec \
-ti toolkit \
/opt/nifi/nifi-toolkit-current/bin/tls-toolkit.sh \
standalone \
-n 'nifi1.bluejay.local' \
-C 'CN=admin,OU=NIFI'
docker cp toolkit:/opt/nifi/nifi-current/nifi-cert.pem conf
docker cp toolkit:/opt/nifi/nifi-current/nifi-key.key conf
docker cp toolkit:/opt/nifi/nifi-current/nifi1.bluejay.local conf
docker cp toolkit:/opt/nifi/nifi-current/CN=admin_OU=NIFI.p12 conf
docker cp toolkit:/opt/nifi/nifi-current/CN=admin_OU=NIFI.password conf
docker stop toolkit
Import client certificate to browser
Import the .p12 file into your browser.
Update /etc/hosts
Add "127.0.0.1 nifi1.bluejay.local" to the end of your /etc/hosts file.
Define a NiFi network
docker network create --subnet=10.18.0.0/16 nifi
Run NiFi in a container
docker run -d \
-e AUTH=tls \
-e KEYSTORE_PATH=/opt/certs/keystore.jks \
-e KEYSTORE_TYPE=JKS \
-e KEYSTORE_PASSWORD=$(grep keystorePasswd conf/nifi1.bluejay.local/nifi.properties | cut -d'=' -f2) \
-e TRUSTSTORE_PATH=/opt/certs/truststore.jks \
-e TRUSTSTORE_PASSWORD=$(grep truststorePasswd conf/nifi1.bluejay.local/nifi.properties | cut -d'=' -f2) \
-e TRUSTSTORE_TYPE=JKS \
-e INITIAL_ADMIN_IDENTITY="CN=admin,OU=NIFI" \
-e NIFI_WEB_PROXY_CONTEXT_PATH=/nifi \
-e NIFI_WEB_PROXY_HOST=nifi1.bluejay.local \
--hostname nifi1.bluejay.local \
--ip 10.18.0.10 \
--name nifi \
--net nifi \
-p 8443:8443 \
-v $(pwd)/conf/nifi1.bluejay.local:/opt/certs:ro \
-v /data/projects/nifi-shared:/opt/nifi/nifi-current/ls-target \
apache/nifi
Visit Page
When you visit https://localhost:8443/nifi, you'll be asked to select a certificate. Select the certificate (e.g. admin) that you imported.
At this point, I am seeing:
Insufficient Permissions
Unknown user with identity 'CN=admin, OU=NIFI'. Contact the system administrator.
In the examples I am seeing, there is no mention of this issue or how to resolve it.
How are permissions assigned to the Initial Admin Identity?
You are missing a space in this line:
-e INITIAL_ADMIN_IDENTITY="CN=admin,OU=NIFI" \
See the error message: NiFi reports the unknown user as 'CN=admin, OU=NIFI', with a space after the comma, so the Initial Admin Identity you configured does not match the identity presented by your client certificate.
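Pass the identity exactly as NiFi reports it:
-e INITIAL_ADMIN_IDENTITY="CN=admin, OU=NIFI" \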
I tried to add a crontab inside the docker image "jenkinsci/blueocean", but after that Jenkins does not start. Where could the problem be?
Many thanks in advance for any help.
<Dockerfile>
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
CMD ["supercronic", "/etc/crontab"]
<crontab>
# Run every minute
*/1 * * * * echo "hello world"
Commands:
$ docker build -t jenkins_test .
$ docker run -it -p 8080:8080 --name=container_jenkins jenkins_test
If you run docker inspect jenkinsci/blueocean:1.17.0, you will see that its entrypoint is:
"Entrypoint": [
"/sbin/tini",
"--",
"/usr/local/bin/jenkins.sh"
],
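To print just the entrypoint, you can also use a format string, for example:
docker inspect --format '{{json .Config.Entrypoint}}' jenkinsci/blueocean:1.17.0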
So when the container starts, it will first execute the following script, /usr/local/bin/jenkins.sh:
#! /bin/bash -e
: "${JENKINS_WAR:="/usr/share/jenkins/jenkins.war"}"
: "${JENKINS_HOME:="/var/jenkins_home"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find /usr/share/jenkins/ref/ \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
# if `docker run` first argument start with `--` the user is passing jenkins launcher arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
# read JAVA_OPTS and JENKINS_OPTS into arrays to avoid need for eval (and associated vulnerabilities)
java_opts_array=()
while IFS= read -r -d '' item; do
java_opts_array+=( "$item" )
done < <([[ $JAVA_OPTS ]] && xargs printf '%s\0' <<<"$JAVA_OPTS")
readonly agent_port_property='jenkins.model.Jenkins.slaveAgentPort'
if [ -n "${JENKINS_SLAVE_AGENT_PORT:-}" ] && [[ "${JAVA_OPTS:-}" != *"${agent_port_property}"* ]]; then
java_opts_array+=( "-D${agent_port_property}=${JENKINS_SLAVE_AGENT_PORT}" )
fi
if [[ "$DEBUG" ]] ; then
java_opts_array+=( \
'-Xdebug' \
'-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=y' \
)
fi
jenkins_opts_array=( )
while IFS= read -r -d '' item; do
jenkins_opts_array+=( "$item" )
done < <([[ $JENKINS_OPTS ]] && xargs printf '%s\0' <<<"$JENKINS_OPTS")
exec java -Duser.home="$JENKINS_HOME" "${java_opts_array[@]}" -jar ${JENKINS_WAR} "${jenkins_opts_array[@]}" "$@"
fi
# As argument is not jenkins, assume user want to run his own process, for example a `bash` shell to explore this image
exec "$#"
From the above script you can see that if you add CMD ["supercronic", "/etc/crontab"] to your own Dockerfile, then when your container starts it is equivalent to executing:
/usr/local/bin/jenkins.sh "supercronic" "/etc/crontab"
Since if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then does not match, the script falls straight through to the exec "$@" on the last line, so the Jenkins start-up code never runs.
To fix it, you have to use your own docker-entrypoint.sh to override the default entrypoint:
docker-entrypoint.sh:
#!/bin/bash
# Start supercronic in the background, then hand over to the stock Jenkins launcher
supercronic /etc/crontab &
exec /usr/local/bin/jenkins.sh
Dockerfile:
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/sbin/tini", "--", "/docker-entrypoint.sh"]
I have the following Dockerfile to create a container running NGINX and MySQL. I am trying to mount /var/lib/mysql to the local Docker host, in order to keep the MySQL databases after a container is destroyed.
FROM ubuntu:16.04
MAINTAINER - ******
## Install php nginx mysql supervisor ###
########################################
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
php-fpm \
php-cli \
php-gd \
php-mcrypt \
php-mysql \
php-curl \
php-xml \
php-json \
nginx \
curl \
unzip \
mysql-server \
supervisor
### Nginx & PHP-FPM ###
########################
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy configuration files from the current directory
ADD files/nginx.conf /etc/nginx/nginx.conf
ADD files/php-fpm.conf /etc/php/7.0/fpm/
### MYSQL ###
############
ENV ROOT_PWD test
### Supervisor.conf ###
######################
ADD files/supervisord.conf /etc/supervisor/supervisord.conf
### Container configuration ###
###############################
EXPOSE 80
VOLUME /DATA
VOLUME /var/lib/mysql
# Set the default command to execute
# when creating a new container
ADD start.sh /
RUN chmod u+x /start.sh
CMD /start.sh
Below is my entry point script start.sh:
#!/bin/sh
# 1.MYSQL SETUP
#
# ###########
MYSQL_CHARSET=${MYSQL_CHARSET:-"utf8"}
MYSQL_COLLATION=${MYSQL_COLLATION:-"utf8_unicode_ci"}
create_data_dir() {
mkdir -p /var/lib/mysql
chmod -R 0700 /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
}
create_run_dir() {
mkdir -p /run/mysqld
chmod -R 0755 /run/mysqld
chown -R mysql:root /run/mysqld
}
create_log_dir() {
mkdir -p /var/log/mysql
chmod -R 0755 /var/log/mysql
chown -R mysql:mysql /var/log/mysql
}
mysql_default_install() {
/usr/bin/mysql_install_db --datadir=/var/lib/mysql
}
create_modx_database() {
# start mysql server.
/usr/bin/mysqld_safe >/dev/null 2>&1 &
# wait for mysql server to start (max 30 seconds).
timeout=30
echo -n "Waiting for database server to accept connections"
while ! /usr/bin/mysqladmin -u root status >/dev/null 2>&1
do
timeout=$(($timeout - 1))
if [ $timeout -eq 0 ]; then
echo -e "\nCould not connect to database server. Aborting..."
exit 1
fi
echo -n "."
sleep 1
done
echo
# create database and assign user permissions.
if [ -n "${DB_NAME}" -a -n "${DB_USER}" -a -n "${DB_PASS}" ]; then
echo "Creating database \"${DB_NAME}\" and granting access to \"${DB_USER}\" database."
mysql -uroot -e "CREATE DATABASE ${DB_NAME};"
mysql -uroot -e "GRANT USAGE ON *.* TO ${DB_USER}#localhost IDENTIFIED BY '${DB_PASS}';"
mysql -uroot -e "GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO ${DB_USER}#localhost;"
fi
}
set_mysql_root_pw() {
# set root password for mysql.
echo "Setting root password"
/usr/bin/mysqladmin -u root password "${ROOT_PWD}"
# shut down mysql ready for supervisor to start mysql.
/usr/bin/mysqladmin -u root --password=${ROOT_PWD} shutdown
}
# 2.NGINX & PHP-FPM
#
# ################
create_www_dir() {
# Create LOG directoties for NGINX & PHP-FPM
echo "Creating www directories"
mkdir -p /DATA/logs/php-fpm
mkdir -p /DATA/logs/nginx
mkdir -p /DATA/www
}
apply_www_permissions(){
echo "Applying www permissions"
chown -R www-data:www-data /DATA/www /DATA/logs
}
# Running all script functions
create_data_dir
create_run_dir
create_log_dir
mysql_default_install
create_modx_database
set_mysql_root_pw
create_www_dir
apply_www_permissions
# Start Supervisor
echo "Starting Supervisor"
/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
I can successfully run the container without mounting the /var/lib/mysql folder to local docker host using:
docker run --name modx.test --expose 80 -d -e 'VIRTUAL_HOST=modx.test' -e 'DB_NAME=modx' -e 'DB_USER=modx' -e 'DB_PASS=test' -v /data/sites/test:/DATA modx
If I try and mount /var/lib/mysql using the following:
docker run --name modx.test --expose 80 -d -e 'VIRTUAL_HOST=modx.test' -e 'DB_NAME=modx' -e 'DB_USER=modx' -e 'DB_PASS=test' -v /data/sites/test:/DATA -v /data/sites/test/mysql:/var/lib/mysql modx
the following error occurs: 2017-08-24 07:47:22 [ERROR] The data directory '/var/lib/mysql' already exist and is not empty.
Waiting for database server to accept connections.............................-e
Could not connect to database server. Aborting...
I managed to resolve the problem by using /usr/sbin/mysqld --initialize-insecure --datadir=/var/lib/mysql and by adding an if clause to check whether the initial databases have already been created:
mysql_default_install() {
if [ ! -d "/var/lib/mysql/mysql" ]; then
echo "Creating the default database"
/usr/sbin/mysqld --initialize-insecure --datadir=/var/lib/mysql
else
echo "MySQL database already initialiazed"
fi
}
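To confirm that the bind mount really persists data across containers, a quick manual check might look like this (assuming the container name modx.test and the ROOT_PWD=test default from the Dockerfile above):
docker exec modx.test mysql -uroot -ptest -e "CREATE DATABASE persist_check;"
docker rm -f modx.test
# re-run the same docker run command with -v /data/sites/test/mysql:/var/lib/mysql, then:
docker exec modx.test mysql -uroot -ptest -e "SHOW DATABASES LIKE 'persist_check';"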
I am attempting to create a cluster of Hyperledger validating peers, each running on a different host, but it does not appear to be functioning properly.
After starting the root node and 3 peer nodes, this is the output running peer network list on the root node, vp0:
{"peers":[{"ID":{"name":"vp1"},"address":"172.17.0.2:30303","type":1},{"ID":{"name":"vp2"},"address":"172.17.0.2:30303","type":1},{"ID":{"name":"vp3"},"address":"172.17.0.2:30303","type":1}]}
This is the output from the same command on one of the peers, vp3:
{"peers":[{"ID":{"name":"vp0"},"address":"172.17.0.2:30303","type":1},{"ID":{"name":"vp3"},"address":"172.17.0.2:30303","type":1}]}
All of the peers only list themselves and the root, vp0, in their lists.
This is the log output from the root node, vp0: https://gist.github.com/mikezaccardo/f139eaf8004540cdfd24da5a892716cc
This is the log output from one of the peer nodes, vp3: https://gist.github.com/mikezaccardo/7379584ca4f67bce553c288541e3c58e
This is the command I'm running to create the root node:
nohup sudo docker run --name=$HYPERLEDGER_PEER_ID \
--restart=unless-stopped \
-i \
-p 5000:5000 \
-p 30303:30303 \
-p 30304:30304 \
-p 31315:31315 \
-e CORE_VM_ENDPOINT=http://172.17.0.1:4243 \
-e CORE_PEER_ID=$HYPERLEDGER_PEER_ID \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_PEER_NETWORKID=dev \
-e CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft \
-e CORE_PBFT_GENERAL_MODE=classic \
-e CORE_PBFT_GENERAL_N=$HYPERLEDGER_CLUSTER_SIZE \
-e CORE_PBFT_GENERAL_TIMEOUT_REQUEST=10s \
joequant/hyperledger /bin/bash -c "rm config.yaml; cp /usr/share/go-1.6/src/github.com/hyperledger/fabric/consensus/obcpbft/config.yaml .; peer node start" > $HYPERLEDGER_PEER_ID.log 2>&1&
And this is the command I'm running to create each of the other peer nodes:
nohup sudo docker run --name=$HYPERLEDGER_PEER_ID \
--restart=unless-stopped \
-i \
-p 30303:30303 \
-p 30304:30304 \
-p 31315:31315 \
-e CORE_VM_ENDPOINT=http://172.17.0.1:4243 \
-e CORE_PEER_ID=$HYPERLEDGER_PEER_ID \
-e CORE_PEER_DISCOVERY_ROOTNODE=$HYPERLEDGER_ROOT_NODE_ADDRESS:30303 \
-e CORE_PEER_ADDRESSAUTODETECT=true \
-e CORE_PEER_NETWORKID=dev \
-e CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft \
-e CORE_PBFT_GENERAL_MODE=classic \
-e CORE_PBFT_GENERAL_N=$HYPERLEDGER_CLUSTER_SIZE \
-e CORE_PBFT_GENERAL_TIMEOUT_REQUEST=10s \
joequant/hyperledger /bin/bash -c "rm config.yaml; cp /usr/share/go-1.6/src/github.com/hyperledger/fabric/consensus/obcpbft/config.yaml .; peer node start" > $HYPERLEDGER_PEER_ID.log 2>&1&
HYPERLEDGER_PEER_ID is vp0 for the root node and vp1, vp2, ... for the peer nodes, HYPERLEDGER_ROOT_NODE_ADDRESS is the public IP address of the root node, and HYPERLEDGER_CLUSTER_SIZE is 4.
This is the Docker image that I am using: github.com/joequant/hyperledger
Is there anything obviously wrong with my commands? Should the actual public IP addresses of the peers be showing up as opposed to just 172.17.0.2? Are my logs helpful / is any additional information needed?
Any help or insight would be greatly appreciated, thanks!
I've managed to get a noops cluster working in which all nodes discover each other and chaincodes successfully deploy.
I made a few fixes since my post above:
I now use the mikezaccardo/hyperledger-peer image, a fork of yeasy/hyperledger-peer, instead of joequant/hyperledger.
I changed:
-e CORE_PEER_ADDRESSAUTODETECT=true \
to:
-e CORE_PEER_ADDRESS=$HOST_ADDRESS:30303 \
-e CORE_PEER_ADDRESSAUTODETECT=false \
so that each peer would advertise its public IP, not private.
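HOST_ADDRESS is not defined in the commands above; it just needs to hold each host's public IP. One way to set it (an example of mine, any method that yields the public address works):
HOST_ADDRESS=$(curl -s https://checkip.amazonaws.com)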
And I properly tag my image as the official base image:
sudo docker tag mikezaccardo/hyperledger:latest hyperledger/fabric-baseimage:latest
Finally, for context, this is all related to my development of a blueprint for Apache Brooklyn which deploys a Hyperledger Fabric cluster. That repository, which contains all of the code mentioned in this post and answer, can be found here: https://github.com/cloudsoft/brooklyn-hyperledger.