I am having an issue with data persistence with my Elasticsearch Docker container on my Linux AWS EC2 machine.
I am launching the container like so:
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
-v $PWD/elasticsearch/data:/usr/share/elasticsearch/data \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
The issue is with the -v $PWD/elasticsearch/data:/usr/share/elasticsearch/data line. On Mac everything works correctly and I can persist my data after bringing down the container, but on the Linux machine I get permission errors on the /usr/share/elasticsearch/data directory in the container.
Error (line 3 is the critical part):
[2018-07-06T00:39:35,479][INFO ][o.e.n.Node ] [] initializing ...
[2018-07-06T00:39:35,503][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:244) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data/nodes/0
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:223) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes/0/node.lock
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[?:?]
at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[?:1.8.0_161]
at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[?:1.8.0_161]
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:209) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
What do I need to add to make this work on Linux?
This will work.
Set the permissions first:
sudo mkdir -p $PWD/elasticsearch/data
sudo chmod 777 -R $PWD/elasticsearch/data
Then:
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
-v $PWD/elasticsearch/data:/usr/share/elasticsearch/data \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
I found a solution by investigating the ownership of the folders from inside and outside the Docker container. It started working after I ran sudo chown -R 1000:root $PWD/elasticsearch/data before launching the container, so that, inside the container, Elasticsearch owned the directory.
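If you want to see the mismatch yourself, compare the numeric owner of the bind-mounted directory on the host with the UID of the elasticsearch user inside the container (a quick check; it assumes the container is named elasticsearch as above):

# On the host: show the numeric owner UID/GID of the data directory
ls -ln $PWD/elasticsearch/data
# Inside the container: show the UID/GID Elasticsearch runs as
docker exec elasticsearch id elasticsearch
# prints something like: uid=1000(elasticsearch) gid=1000(elasticsearch) groups=1000(elasticsearch)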
Why does the same docker run command produce two different results on two different machines? Isn't the point of Docker to be one size fits all?
This works for now, but I would like a better solution, because I am not sure that 1000 will always be the UID of the Elasticsearch user inside the container.
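One way to avoid hardcoding 1000 is to run the container as a user you control and give that user ownership of the host directory. This is the docker run equivalent of the docker-compose user: approach in the next answer (a sketch, not verified on every image version):

mkdir -p "$PWD/elasticsearch/data"
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
--user "$(id -u):$(id -g)" \
-v "$PWD/elasticsearch/data":/usr/share/elasticsearch/data \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4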
In your docker-compose.yml, you can specify the user with user: $USER, like this:
elasticsearch:
  user: $USER
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.5
  volumes:
    - /srv/graylog/es_data:/usr/share/elasticsearch/data
This also solves the problem, and you don't need to run the chown command.
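One caveat: $USER expands to a user name, and Docker can only resolve names that exist in the container's /etc/passwd. If startup fails with an "unable to find user" error, pass a numeric UID instead (a sketch; it assumes the UID that owns the data directory is 1000):

elasticsearch:
  user: "1000"
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.5
  volumes:
    - /srv/graylog/es_data:/usr/share/elasticsearch/data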
Why does this error happen?
Elasticsearch can't be started as the root user; this is probably an extra security measure from the development team. Therefore, the Docker image for Elasticsearch checks whether the current container user is root and, if it is, changes the user to elasticsearch:elasticsearch, i.e. 1000:1000.
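You can confirm what a given image does by inspecting its configured user (a quick check with the stock Docker CLI; note this may print an empty string for images that start as root and drop privileges in their entrypoint):

docker inspect --format '{{.Config.User}}' docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4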
Possible solution
You can change the directory owner to 1000:1000, and it will work. But what if you can't change the owner? I had this problem recently: I was trying to map an NFS directory into Docker and couldn't change the owner on the NFS side.
Final solution
The docker-entrypoint.sh script located at /usr/local/bin/docker-entrypoint.sh is responsible for checking for the root user and switching to elasticsearch. It can be overridden with a simple custom image like the following:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.11.0
COPY ./docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
Below is an example of a modified docker-entrypoint.sh currently working for Elasticsearch 7.11.0. You can always get the original docker-entrypoint.sh by starting a test container and using something like
docker cp <elasticsearch-container>:/usr/local/bin/docker-entrypoint.sh ./docker-entrypoint.sh
Modified docker-entrypoint.sh
#!/bin/bash
set -e
# Files created by Elasticsearch should always be group writable too
umask 0002
run_as_other_user_if_needed() {
  if [[ "$(id -u)" == "0" ]]; then
    # If running as root, drop to specified UID and run command
    exec chroot --userspec=<uid>:<gid> / "${@}"
  else
    # Either we are running in Openshift with random uid and are a member of the root group
    # or with a custom --user
    exec "${@}"
  fi
}
# Allow user specify custom CMD, maybe bin/elasticsearch itself
# for example to directly specify `-E` style parameters for elasticsearch on k8s
# or simply to run /bin/bash to check the image
if [[ "$1" != "eswrapper" ]]; then
if [[ "$(id -u)" == "0" && $(basename "$1") == "elasticsearch" ]]; then
# centos:7 chroot doesn't have the `--skip-chdir` option and
# changes our CWD.
# Rewrite CMD args to replace $1 with `elasticsearch` explicitly,
# so that we are backwards compatible with the docs
# from the previous Elasticsearch versions<6
# and configuration option D:
# https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html#_d_override_the_image_8217_s_default_ulink_url_https_docs_docker_com_engine_reference_run_cmd_default_command_or_options_cmd_ulink
# Without this, user could specify `elasticsearch -E x.y=z` but
# `bin/elasticsearch -E x.y=z` would not work.
set -- "elasticsearch" "${#:2}"
# Use chroot to switch to UID 1000 / GID 0
exec chroot --userspec=<uid>:<gid> / "$#"
else
# User probably wants to run something else, like /bin/bash, with another uid forced (Openshift?)
exec "$#"
fi
fi
# Allow environment variables to be set by creating a file with the
# contents, and setting an environment variable with the suffix _FILE to
# point to it. This can be used to provide secrets to a container, without
# the values being specified explicitly when running the container.
#
# This is also sourced in elasticsearch-env, and is only needed here
# as well because we use ELASTIC_PASSWORD below. Sourcing this script
# is idempotent.
source /usr/share/elasticsearch/bin/elasticsearch-env-from-file
if [[ -f bin/elasticsearch-users ]]; then
  # Check for the ELASTIC_PASSWORD environment variable to set the
  # bootstrap password for Security.
  #
  # This is only required for the first node in a cluster with Security
  # enabled, but we have no way of knowing which node we are yet. We'll just
  # honor the variable if it's present.
  if [[ -n "$ELASTIC_PASSWORD" ]]; then
    [[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (run_as_other_user_if_needed elasticsearch-keystore create)
    if ! (run_as_other_user_if_needed elasticsearch-keystore has-passwd --silent) ; then
      # keystore is unencrypted
      if ! (run_as_other_user_if_needed elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
        (run_as_other_user_if_needed echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
      fi
    else
      # keystore requires password
      if ! (run_as_other_user_if_needed echo "$KEYSTORE_PASSWORD" \
          | elasticsearch-keystore list | grep -q '^bootstrap.password$') ; then
        COMMANDS="$(printf "%s\n%s" "$KEYSTORE_PASSWORD" "$ELASTIC_PASSWORD")"
        (run_as_other_user_if_needed echo "$COMMANDS" | elasticsearch-keystore add -x 'bootstrap.password')
      fi
    fi
  fi
fi
if [[ "$(id -u)" == "0" ]]; then
# If requested and running as root, mutate the ownership of bind-mounts
if [[ -n "$TAKE_FILE_OWNERSHIP" ]]; then
chown -R <uid>:<gid> /usr/share/elasticsearch/{data,logs}
fi
fi
if [[ -n "$ES_LOG_STYLE" ]]; then
case "$ES_LOG_STYLE" in
console)
# This is the default. Nothing to do.
;;
file)
# Overwrite the default config with the stack config
mv /usr/share/elasticsearch/config/log4j2.file.properties /usr/share/elasticsearch/config/log4j2.properties
;;
*)
echo "ERROR: ES_LOG_STYLE set to [$ES_LOG_STYLE]. Expected [console] or [file]" >&2
exit 1 ;;
esac
fi
# Signal forwarding and child reaping is handled by `tini`, which is the
# actual entrypoint of the container
run_as_other_user_if_needed /usr/share/elasticsearch/bin/elasticsearch <<<"$KEYSTORE_PASSWORD"
Notice that the comments in the code are the original ones; it may be confusing that they say the code will change the user to elasticsearch.
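To try the custom entrypoint, build the image and run it with the same data mount as before (a sketch; the my-elasticsearch tag and the /mnt/nfs/es_data path are hypothetical):

docker build -t my-elasticsearch .
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
-v /mnt/nfs/es_data:/usr/share/elasticsearch/data \
-e "discovery.type=single-node" \
my-elasticsearch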
This solved it for me:
sudo chown 1000:1000 -R $PWD/elasticsearch/data
The reason is that ES runs as user 1000 and needs the directory to be writable by user 1000.
In case somebody lands here with the same problem as me:
java.lang.IllegalStateException: failed to obtain node locks, tried
[[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations
are not writable or multiple nodes were started without increasing
[node.max_local_storage_nodes] (was [1])?
If you created two or more containers with docker-compose.yml, check that the data volumes are different for each container. That was the cause in my case.
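For example, with two nodes in one docker-compose.yml, each service needs its own data volume, along these lines (service and volume names are hypothetical):

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    volumes:
      - es01_data:/usr/share/elasticsearch/data
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    volumes:
      - es02_data:/usr/share/elasticsearch/data
volumes:
  es01_data:
  es02_data: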
Related
I've been trying to mount a Google Cloud bucket inside a pod in our (on-prem) cluster in order to share that mounted volume, using NFS, with other pods and PersistentVolumes.
Here are the configurations:
#!/bin/bash

_start_nfs() {
  exportfs -a
  rpcbind
  rpc.statd
  rpc.nfsd
  rpc.mountd
  GOOGLE_APPLICATION_CREDENTIALS=/accounts/key.json gcsfuse -o allow_other --dir-mode 777 --uid 1500 --gid 1500 ${BUCKET} /exports
}

_nfs_server_mounts() {
  IFS=':' read -r -a MNT_SERVER_ARRAY <<< "$NFS_SERVER_DIRS"
  for server_mnt in "${MNT_SERVER_ARRAY[@]}"; do
    if [[ ! -d $server_mnt ]]; then
      mkdir -p $server_mnt
    fi
    chmod -R 777 $server_mnt
    cat >> /etc/exports <<EOF
${server_mnt} *(rw,sync,no_subtree_check,all_squash,anonuid=1500,anongid=1500,fsid=$(( ( RANDOM % 100 ) + 200 )))
EOF
  done
  cat /etc/exports
}
_sysconfig_nfs() {
  cat > /etc/sysconfig/nfs <<EOF
#
#
# To set lockd kernel module parameters please see
# /etc/modprobe.d/lockd.conf
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
RPCNFSDARGS=""
# Number of nfs server processes to be started.
# The default is 8.
RPCNFSDCOUNT=${RPCNFSDCOUNT}
#
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
# Set V4 lease period in seconds
#NFSD_V4_LEASE=90
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
#MOUNTD_PORT=892
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
STATDARG=""
# Port rpc.statd should listen on.
#STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
#STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to sm-notify. See sm-notify(8)
SMNOTIFYARGS=""
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
RPCIDMAPDARGS=""
#
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
# Note: The rpc-gssd service will not start unless the
# file /etc/krb5.keytab exists. If an alternate
# keytab is needed, that separate keytab file
# location may be defined in the rpc-gssd.service's
# systemd unit file under the ConditionPathExists
# parameter
RPCGSSDARGS=""
#
# Enable usage of gssproxy. See gssproxy-mech(8).
GSS_USE_PROXY="yes"
#
# Optional arguments passed to blkmapd. See blkmapd(8)
BLKMAPDARGS=""
EOF
}
### main ###
_sysconfig_nfs
_nfs_server_mounts
_start_nfs
rpcinfo -p
showmount -e
tail -f /dev/null
The Dockerfile:
FROM centos:7
RUN yum -y install /usr/bin/ps nfs-utils nfs4-acl-tools curl portmap fuse && yum clean all
RUN mkdir -p /exports
ENV RPCNFSDCOUNT=8 \
NFS_SERVER_DIRS='/usr'
RUN chown nfsnobody:nfsnobody /exports
RUN chmod 777 /exports
ADD setup.sh /usr/local/bin/run_nfs.sh
RUN chmod +x /usr/local/bin/run_nfs.sh
RUN useradd -u 1500 orenes
ADD gcsfuse-0.41.6-1.x86_64.rpm /data/gcsfuse-0.41.6-1.x86_64.rpm
RUN yum install -y /data/gcsfuse-0.41.6-1.x86_64.rpm
# Expose volume
VOLUME ["/exports"]
# expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
ENTRYPOINT ["/usr/local/bin/run_nfs.sh"]
Let's assume that there's a PersistentVolume mounted through a Service pointing to the NFS server I described before.
So far I've gotten access denied, unable to receive, etc...
Should I move to SMB/CIFS sharing? Is there any solution to this? Thanks in advance.
It seems that FUSE and NFS don't get along well. I ended up using SMB and it works flawlessly, but you must use the Service ClusterIP instead of Kubernetes's DNS.
The comment from @AviD about using a CSI driver is very helpful if you need a quick solution.
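For reference, a PersistentVolume using the SMB CSI driver could look roughly like this (a sketch only: it assumes csi-driver-smb is installed, the share is reachable at the Service ClusterIP 10.96.0.50, and a secret smb-creds holds the credentials; all names are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-share
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smb-share
    volumeAttributes:
      source: //10.96.0.50/exports
    nodeStageSecretRef:
      name: smb-creds
      namespace: default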
Again thanks to all.
I am trying to import a pipeline into StreamSets during container startup by using the Docker CMD command in the Dockerfile. The image builds, and creating the container shows no error, but it exits with code 0, so it never comes up. Here is what I did:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
EXPOSE 18630
ENTRYPOINT ["/bin/sh"]
CMD ["/opt/streamsets-datacollector-3.18.1/bin/streamsets","cli","-U", "http://localhost:18630", \
"-u", \
"admin", \
"-p", \
"admin", \
"store", \
"import", \
"-n", \
"myPipeline", \
"--stack", \
"-f", \
"/pipelinejsonlocation/myPipeline.json"]
Build image:
docker build -t cmp/sdc .
Run image:
docker run -p 18630:18630 -d --name sdc cmp/sdc
This outputs the container ID, but the container is in Exited status, as shown below.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
537adb1b05ab cmp/sdc "/bin/sh /opt/stream…" 5 seconds ago Exited (0) 3 seconds ago sdc
When I do not specify the CMD command in the Dockerfile, the StreamSets container spins up, and when I then run the streamsets import command in a shell inside the running container, it works. But how do I get this done during provisioning itself? Is there something I am missing in the Dockerfile?
In your Dockerfile you overwrite the default CMD and ENTRYPOINT from the StreamSets Data Collector Dockerfile. So the container only executes your command during startup and exits without errors afterwards. This is the reason why your container is in Exited (0) status.
In general this is good and expected behavior. If you want to keep your container alive, you need to execute another command in the foreground which never ends. But unfortunately, you cannot run multiple CMDs in your Dockerfile.
I dug a little deeper. The default entry point of the image is ENTRYPOINT ["/docker-entrypoint.sh"]. This script sets up a few things and starts the Data Collector.
It is required that the Data Collector is running before the pipeline is imported. So a solution could be to copy the default docker-entrypoint.sh and modify it to start the Data Collector and import the pipeline afterwards. You could do it like this:
Dockerfile:
FROM streamsets/datacollector:3.18.1
COPY myPipeline.json /pipelinejsonlocation/
# Replace docker-entrypoint.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
EXPOSE 18630
docker-entrypoint.sh (https://github.com/streamsets/datacollector-docker/blob/master/docker-entrypoint.sh):
#!/bin/bash
#
# Copyright 2017 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
set -e
# We translate environment variables to sdc.properties and rewrite them.
set_conf() {
  if [ $# -ne 2 ]; then
    echo "set_conf requires two arguments: <key> <value>"
    exit 1
  fi

  if [ -z "$SDC_CONF" ]; then
    echo "SDC_CONF is not set."
    exit 1
  fi

  grep -q "^$1" ${SDC_CONF}/sdc.properties && sed 's|^#\?\('"$1"'=\).*|\1'"$2"'|' -i ${SDC_CONF}/sdc.properties || echo -e "\n$1=$2" >> ${SDC_CONF}/sdc.properties
}
# support arbitrary user IDs
# ref: https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${SDC_USER:-sdc}:x:$(id -u):0:${SDC_USER:-sdc} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
# In some environments such as Marathon $HOST and $PORT0 can be used to
# determine the correct external URL to reach SDC.
if [ ! -z "$HOST" ] && [ ! -z "$PORT0" ] && [ -z "$SDC_CONF_SDC_BASE_HTTP_URL" ]; then
export SDC_CONF_SDC_BASE_HTTP_URL="http://${HOST}:${PORT0}"
fi
for e in $(env); do
  key=${e%=*}
  value=${e#*=}
  if [[ $key == SDC_CONF_* ]]; then
    lowercase=$(echo $key | tr '[:upper:]' '[:lower:]')
    key=$(echo ${lowercase#*sdc_conf_} | sed 's|_|.|g')
    set_conf $key $value
  fi
done
# MODIFICATIONS:
#exec "${SDC_DIST}/bin/streamsets" "$#"
check_data_collector_status () {
  watch -n 1 ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 ping | grep -q 'version' && echo "Data Collector has started!" && import_pipeline
}

function import_pipeline () {
  sleep 1
  echo "Start to import pipeline"
  ${SDC_DIST}/bin/streamsets cli -U http://localhost:18630 -u admin -p admin store import -n myPipeline --stack -f /pipelinejsonlocation/myPipeline.json
  echo "Finished importing pipeline"
}
# Start checking if Data Collector is up (in background) and start Data Collector
check_data_collector_status & ${SDC_DIST}/bin/streamsets $#
I commented out the last line exec "${SDC_DIST}/bin/streamsets" "$@" of the default docker-entrypoint.sh and added two functions. check_data_collector_status () pings the Data Collector service until it is available. import_pipeline () imports your pipeline.
check_data_collector_status () runs in the background and ${SDC_DIST}/bin/streamsets $@ is started in the foreground as before. So the pipeline is imported after the Data Collector service has started.
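With the modified entrypoint in place, the build and run commands from the question stay the same, and you can follow the import in the container logs (a quick check):

docker build -t cmp/sdc .
docker run -p 18630:18630 -d --name sdc cmp/sdc
# watch for "Data Collector has started!" and "Finished importing pipeline"
docker logs -f sdc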
Run this image with the sleep command:
docker run -p 18630:18630 -d --name sdc cmp/sdc sleep 300
300 is the time to sleep in seconds.
Then exec into the Docker container and run your script manually to find out what's wrong.
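For example, with the container kept alive by sleep, you can open a shell and run the import by hand to read the error directly (a sketch using the paths from the question):

docker exec -it sdc /bin/sh
# inside the container:
/opt/streamsets-datacollector-3.18.1/bin/streamsets cli -U http://localhost:18630 \
  -u admin -p admin store import -n myPipeline --stack -f /pipelinejsonlocation/myPipeline.json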
I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on... but I don't want to build an image with the SSH key baked in. I want to inject the SSH key into an existing image by using docker compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
  mkdir /root/.ssh
  ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the command parameter
bash -c "/root/init.sh && python3 /root/my_python.py", then SSH authentication to the appropriate remote host ($MANAGED_HOST) does not work.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log in to the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one on the command line, then SSH authentication to the appropriate remote host ($MANAGED_HOST) works?!?
Any idea, how I can get it working by using the docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
  mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
  ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. That will get passed the CMD as arguments, and the commented exec "$@" line at the end of the file runs that as a command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when these run as a subprocess they don't get propagated back out to the host process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to cause those environment variables to be set in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. This will also wind up being the parent process of ssh-agent, which could potentially surprise your process if it happens to exit.
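Putting that together with the compose file from the question, only the command line needs to change (a sketch):

services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa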
I'm fairly new to Node and nginx. I have the task of building a simple webserver which hosts dynamic content. A crucial part of the webserver is to take inputs from the user about the ports to be used, any custom domain to be used (in place of localhost), SSL certificates, etc. from an installer (it's supposed to be built for Docker). But I have no idea how to execute a script such that it passes the variables entered by the user (like $SERVER_URI) to nginx.conf and the Node file, overwriting the current data.
I suggest creating a config file and reading the values from it, so everything is dynamic.
Here is how you can set the SSL certificate paths, other environment variables, and ports dynamically; the Docker container name and image name are also passed in.
Create a file docker.config which contains the ports, environment variables, path mappings, host values, and links (if you wish to link containers). Leave a section blank
if you do not need it, and remove the host_port:container_port entry; it is
only there as an example.
docker.config
START_PORT_MAPPINGS
host_port:container_port
8080:80
END_PORT_MAPPINGS
START_PATH_MAPPINGS
/path_to_code/:/var/www/htlm/test
/path_to_nginx_config1:/etc/nginx/nginx.conf
/path_to_ssl_certs:/container_path_to_Certs
END_PATH_MAPPINGS
START_LINKING
db:db-server
END_LINKING
START_HOST_MAPPINGS
test.com:192.168.1.23
test2.com:192.168.1.23
END_HOST_MAPPINGS
START_ENV_VARS
MYSQL_ROOT_PASSWORD=1234
OTHER_ENV_VAR=value
END_ENV_VARS
Create start.sh, which will read the values from docker.config and run the command for your Docker container.
It needs two arguments: 1st the container name and 2nd the image name.
function read_connfig() {
  docker_name="${1}"
  input="docker.config"
  option_key=$(echo "${2}" | cut -d':' -f1)
  config_name=$(echo "${2}" | cut -d':' -f2)
  post_fix=$(echo "${2}" | cut -d':' -f3)
  while IFS=$' \t\n\r' read -r line; do
    if [[ $line == END_"${config_name}" ]] ; then
      read_prop="no"
    fi
    if [[ $read_prop == "yes" ]] ; then
      echo -n "${option_key}${line}${post_fix} "
    fi
    if [[ $line == START_"${config_name}" ]] ; then
      read_prop="yes"
    fi
  done < "$input"
}
function get_run_configs() {
  docker_name=${1}
  declare -a configs=("-p :PORT_MAPPINGS:" "-v :PATH_MAPPINGS:" "--add-host=:HOST_MAPPINGS:" "-e :ENV_VARS:" "--link :LINKING:")
  local run_command=""
  for config in "${configs[@]}"
  do
    config_vals=($(read_connfig "${docker_name}" "${config}"))
    if [ ! -z "${config_vals}" ];
    then
      for config_val in "${config_vals[@]}"
      do
        run_command="${run_command} ${config_val}"
      done
    else
      echo >&2 "No config found for ${config}"
    fi
  done
  echo "${run_command}"
}
container_name=$1;
image_name=$2
docker_command=$(get_run_configs $container_name)
echo $docker_command
docker run --name $container_name $docker_command -dit $image_name
Running ./start.sh test test results in the following command:
docker run --name test -p host_port:container_port -p 8080:80 -v /path_to_code/:/var/www/htlm/test -v /path_to_nginx_config1:/etc/nginx/nginx.conf -v /path_to_ssl_certs:/container_path_to_Certs --add-host=test.com:192.168.1.23 --add-host=test2.com:192.168.1.23 -e MYSQL_ROOT_PASSWORD=1234 -e OTHER_ENV_VAR=value --link db:db-server -dit test
I have created a database.sql file and added it to a folder inside the container so that it can be used by the application. I want my data to remain persistent even when my container is removed. I tried using a volume, but how do I add the .sql file and keep the data persistent?
sudo docker run -v /datadir sqldb
Here, sqldb is the database image name and /datadir is the mount folder.
Dockerfile of sqldb:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-client mysql-server curl
ADD ./my.cnf /etc/mysql/my.cnf
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
ADD database.sql /var/db/database.sql
ENV user root
ENV password password
ENV url file:/var/db/database.sql
ENV right WRITE
ADD ./start-database.sh /usr/local/bin/start-database.sh
RUN chmod +x /usr/local/bin/start-database.sh
EXPOSE 3306
CMD ["/usr/local/bin/start-database.sh"]
start-database.sh file
#!/bin/bash
# This script starts the database server.
echo "Creating user $user for databases loaded from $url"
# Import database if provided via 'docker run --env url="http:/ex.org/db.sql"'
echo "Adding data into MySQL"
/usr/sbin/mysqld &
sleep 5
curl $url -o import.sql
# Fixing some phpmysqladmin export problems
sed -ri.bak 's/-- Database: (.*?)/CREATE DATABASE \1;\nUSE \1;/g' import.sql
# Fixing some mysqldump export problems (when run without --databases switch)
# This is not tested so far
# if grep -q "CREATE DATABASE" import.sql; then :; else sed -ri.bak 's/-- MySQL dump/CREATE DATABASE `database_1`;\nUSE `database_1`;\n-- MySQL dump/g' import.sql; fi
mysql --default-character-set=utf8 < import.sql
rm import.sql
mysqladmin shutdown
echo "finished"
# Now the provided user credentials are added
/usr/sbin/mysqld &
sleep 5
echo "Creating user"
echo "CREATE USER '$user' IDENTIFIED BY '$password'" | mysql --default-character-set=utf8
echo "REVOKE ALL PRIVILEGES ON *.* FROM '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "GRANT SELECT ON *.* TO '$user'#'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "finished"
if [ "$right" = "WRITE" ]; then
echo "adding write access"
echo "GRANT ALL PRIVILEGES ON *.* TO '$user'#'%' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
fi
# And we restart the server to go operational
mysqladmin shutdown
cp /var/db/database.sql /var/lib/docker/volumes/mysqlvol/database.sql
echo "Starting MySQL Server"
/usr/sbin/mysqld
To keep data persistent even when the container is removed, please use volumes.
For more info, look at:
SO: How to deal with persistent storage (e.g. databases) in docker
Docker Webinar Q&A: Persistent Storage & Docker
Manage data in containers
Compose and volumes
Volume plugins
I hope that helps. If not please ask.
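As a minimal sketch of the volumes approach (assuming the image is called sqldb and MySQL keeps its data in the default /var/lib/mysql), a named volume survives container removal:

# create a named volume once
docker volume create mysqlvol
# run the database with the volume mounted at MySQL's data directory
docker run -d --name sqldb -v mysqlvol:/var/lib/mysql sqldb
# removing and recreating the container keeps the data
docker rm -f sqldb
docker run -d --name sqldb -v mysqlvol:/var/lib/mysql sqldb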