I use Jenkins to build and run my containers.
After the run via Jenkins, the container status changes to Exited (0).
I think there is something wrong in the shell script that runs the application inside the container, but I cannot find where the problem is.
This is the command I use to run the container:
sh "podman run -d --name igfsbase -p 36360:36360 -p 46460:46460 --restart=always igfsbase:5.5.0.17"
These are the instructions at the end of my Dockerfile that define how the image starts:
ENTRYPOINT ["./startdockerigfsbase.sh"]
CMD [""]
And this is the very simple shell script startdockerigfsbase.sh, where I just start two JVMs:
#!/bin/bash
echo " start IGFS_ONLINE.." &
java -DAPPL=IGFS_ONLINE -Djava.rmi.server.hostname=127.0.0.1 -Djava.rmi.server.useLocalHostname=true -Xms256M -Xmx256M -Xss256K -XX:+UseParallelGC -jar lib/jtmsStarter.jar ./cfg/online/jtms.properties &
sleep 5 &
echo " start IGFS_BATCH.." &
java -DAPPL=IGFS_BATCH -Djava.rmi.server.hostname=127.0.0.1 -Djava.rmi.server.useLocalHostname=true -Xms256M -Xmx256M -Xss256K -XX:+UseParallelGC -jar lib/jtmsStarter.jar ./cfg/batch/batch.properties &
sleep 5 &
echo "Task Completed" &
tail -f /dev/null &
echo "End process"
These are the container logs:
2022-08-09T13:43:43.302395979+02:00 stdout F start IGFS_ONLINE..
2022-08-09T13:43:43.304333937+02:00 stdout F start IGFS_BATCH..
2022-08-09T13:43:43.307389201+02:00 stdout F Task Completed
2022-08-09T13:43:43.308149080+02:00 stdout F End process
After that, the container status is Exited (0).
I really don't know where the problem is.
Could you please tell me if there is something wrong or something to change in the shell script?
Thanks in advance.
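For reference, every command in the script above (including tail -f /dev/null) is sent to the background with &, so the script reaches its last line immediately and exits, and with it the container's main process exits with status 0. Below is a minimal sketch of the same script with only the JVMs in the background and tail -f kept in the foreground (assuming tail -f /dev/null is meant to keep the container alive):
#!/bin/bash
echo " start IGFS_ONLINE.."
java -DAPPL=IGFS_ONLINE -Djava.rmi.server.hostname=127.0.0.1 -Djava.rmi.server.useLocalHostname=true -Xms256M -Xmx256M -Xss256K -XX:+UseParallelGC -jar lib/jtmsStarter.jar ./cfg/online/jtms.properties &
sleep 5
echo " start IGFS_BATCH.."
java -DAPPL=IGFS_BATCH -Djava.rmi.server.hostname=127.0.0.1 -Djava.rmi.server.useLocalHostname=true -Xms256M -Xmx256M -Xss256K -XX:+UseParallelGC -jar lib/jtmsStarter.jar ./cfg/batch/batch.properties &
sleep 5
echo "Task Completed"
# Runs in the foreground: as long as this tail is alive, PID 1 does not exit
# and the container stays in the Running state.
tail -f /dev/null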
I have created this Dockerfile:
FROM couchdb:latest
EXPOSE 5984
COPY local.ini /opt/couchdb/etc/
But even though I specified [admins] inside local.ini, I still get this error at launch:
[error] 2022-11-06T17:55:49.799365Z nonode@nohost emulator -------- Error in process <0.15793.0> with exit value:
{database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,400}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,375}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,404}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,97}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,39}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,198}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,145}]}]}
What do I need to do in order to avoid this error?
Dockerfile
FROM couchdb:latest
EXPOSE 5984
COPY setup.sh setup.sh
RUN sh setup.sh
setup.sh
#!/bin/sh -xe
cat >/opt/couchdb/etc/local.ini <<EOF
[couchdb]
single_node=true
[admins]
dbadmin = $(base32 /dev/random |head -1|cut -c-24)
EOF
nohup bash -c "/docker-entrypoint.sh /opt/couchdb/bin/couchdb &"
sleep 15
curl -X PUT http://127.0.0.1:5984/_users
curl -X PUT http://127.0.0.1:5984/_replicator
My Dockerfile:
FROM cassandra:4.0
MAINTAINER me
EXPOSE 9042
I want to run something like the following when the Cassandra image is fetched and the superuser is created inside the container:
create keyspace IF NOT EXISTS XYZ WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
I have also tried adding a shell script, but it never connects to Cassandra. My modified Dockerfile is:
FROM cassandra:4.0
MAINTAINER me
ADD entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod 755 /usr/local/bin/entrypoint.sh
RUN mkdir scripts
COPY alter.cql scripts/
RUN chmod 755 scripts/alter.cql
EXPOSE 9042
CMD ["entrypoint.sh"]
My entrypoint looks like this
#!/bin/bash
export CQLVERSION=${CQLVERSION:-"4.0"}
export CQLSH_HOST=${CQLSH_HOST:-"localhost"}
export CQLSH_PORT=${CQLSH_PORT:-"9042"}
cqlsh=( cqlsh --cqlversion ${CQLVERSION} )
# test connection to cassandra
echo "Checking connection to cassandra..."
for i in {1..30}; do
    if "${cqlsh[@]}" -e "show host;" 2> /dev/null; then
        break
    fi
    echo "Can't establish connection, will retry again in $i seconds"
    sleep $i
done
if [ "$i" = 30 ]; then
    echo >&2 "Failed to connect to cassandra at ${CQLSH_HOST}:${CQLSH_PORT}"
    exit 1
fi
# iterate over the cql files in /scripts folder and execute each one
for file in /scripts/*.cql; do
    [ -e "$file" ] || continue
    echo "Executing $file..."
    "${cqlsh[@]}" -f "$file"
done
echo "Done."
exit 0
This never connects to my Cassandra instance.
Any ideas? Please help.
Thanks.
Your problem is that you never call the original entrypoint to start Cassandra: you overwrote it with your own code, which only runs cqlsh without ever starting Cassandra.
You need to modify your code to start Cassandra using the original entrypoint script (source) that is installed as /usr/local/bin/docker-entrypoint.sh, then execute your script, and then wait for a termination signal (you can't just exit from your script, because that will stop the container).
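A minimal sketch of such a wrapper entrypoint, assuming the stock cassandra:4.0 image and CQL files in /scripts (the wait logic and paths here are illustrative, not part of the original answer):
#!/bin/bash
set -e
# Start Cassandra in the background through the image's original entrypoint.
/usr/local/bin/docker-entrypoint.sh cassandra -f &
# Wait until cqlsh can connect before running the init scripts.
until cqlsh -e "DESCRIBE KEYSPACES" > /dev/null 2>&1; do
    echo "Waiting for Cassandra to come up..."
    sleep 5
done
# Run every CQL file once.
for file in /scripts/*.cql; do
    [ -e "$file" ] || continue
    echo "Executing $file..."
    cqlsh -f "$file"
done
# Do not exit: wait on the background Cassandra process so the container keeps running.
wait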
I have a Jenkins stage as:
stage("Deploy on Server") {
steps {
script {
sh 'sshpass -p "password" ssh -o "StrictHostKeyChecking=no" username#server "cd ../../to/app/path; sh redeploy.sh && exit;"'
}
}
}
and some scripts on my server (CentOS):
redeploy.sh:
declare -i result=0
...
sh restart.sh
result+=$?
echo "Step 6: result = " $result
# 7. if restart fail, restart /versions/*.jar "sh restart-previous.sh"
if [ $result != "0" ]
then
    sh restart-previous.sh
    result+=$?
fi
echo "Deploy done. Final result: " $result
restart.sh
nohup java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar &
Because I execute the redeploy.sh script from Jenkins, the problem is that it stays attached to the Jenkins console and logs all application events there, instead of creating a nohup file in the path where my app is deployed.
In some examples I found it recommended to use nohup directly in the ssh command, but I can't do this because I need to execute a script (with all its steps, which nohup can't do) and not a single command.
The exit command is ignored because the previous command never terminates.
Thanks.
Finally, I found the solution. One problem was in restart.sh: the command has to redirect its output to a log file explicitly. So nohup is ignored/unused, and the command becomes:
java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar </dev/null >> logfile.log 2>&1 &
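For clarity, the whole restart.sh then looks roughly like this (a sketch reusing the jar name from above; the log file name is an assumption):
#!/bin/bash
# Start the application detached from the SSH session: stdin comes from /dev/null,
# stdout and stderr are appended to logfile.log, and the process runs in the background,
# so nothing is streamed back to the Jenkins console.
java -Xms8g -Xmx8g -jar app-name-1.0-allinone.jar </dev/null >> logfile.log 2>&1 &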
Another problem was with killing the previous jar process. Be very careful: if you use the project name as part of the path in the Jenkins script, this creates a new process for your user that will be accidentally killed when you want to stop your application:
def statusCode = sh returnStatus: true, script: 'sshpass -p "password" ssh -o "StrictHostKeyChecking=no" username@server "cd ../../to/app/path/app-folder; sh redeploy.sh;"'
if (statusCode != 0) {
    currentBuild.result = 'FAILURE'
    echo "FAILURE"
}
stop.sh
if pgrep -u username -f app-name
then
    pkill -u username -f app-name
fi
# (app-name is a string, some words from the running command that starts the application)
Because app-folder in the Jenkins script and app-name in stop.sh are equal (or app-folder even contains the app-name value), when you try to kill the app-name process you accidentally kill the ssh connection as well; Jenkins gets a 255 status code, but the redeploy.sh script on the server still finishes successfully, because it runs independently.
The solution is simple, but hard to discover: make sure the pattern you give to the search command matches only the process id of your application and nothing else.
Finally, stop.sh must be:
if pgrep -u username -f my-app-v1.0.jar
then
    pkill -u username -f my-app-v1.0.jar
fi
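To double-check the pattern before relying on it, you can list the full command lines it matches (a hedged example; adjust the user and jar name to your setup):
# Print the matching processes with their full command lines, so you can
# confirm that only the application's java process is selected.
pgrep -u username -af my-app-v1.0.jar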
I am trying to find a "global" solution for injecting an SSH key into a container. I know that there are several solutions, including Docker BuildKit and so on... but I don't want to build an image and inject the SSH key. I want to inject the SSH key into an existing image by using docker compose.
I use the following docker compose file:
version: '3.1'
services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa
secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa
The init.sh is as follows:
#!/bin/bash
eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
    mkdir /root/.ssh
    ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa
If I run docker compose with the command parameter
bash -c "/root/init.sh && python3 /root/my_python.py", then SSH authentication to the remote host ($MANAGED_HOST) does not work.
An agent process is running:
root 8 1 0 12:50 ? 00:00:00 ssh-agent -s
known_hosts is OK:
root#c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....
and the agent is running, but the private key is not added:
root#c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.
Now, if I log into the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one from the command line, SSH authentication to the remote host ($MANAGED_HOST) works?!
Any idea how I can get this working with docker compose?
It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.
#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
    mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
    ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
    cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
    chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"
A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. The entrypoint gets passed the CMD as arguments, and the commented-out exec "$@" line at the end of the file, once uncommented, runs that command. You'd set this up in your image's Dockerfile like:
FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# Can use any Dockerfile syntax
CMD ["python3", "/root/my_python.py"]
In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but when they are set in a subprocess they don't get propagated back out to the calling process. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to make those environment variables take effect in the context of the parent shell:
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
The exec replaces the shell wrapper with the Python script, which you generally want. This will also wind up being the parent process of ssh-agent, which could potentially surprise your process if it happens to exit.
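In the compose file from the question, that change only affects the command line; everything else stays the same (a sketch against the original file, top-level secrets section unchanged):
services:
  server1:
    image: XXXXXXX
    container_name: server1
    # Source init.sh in the current shell so its environment changes survive,
    # then exec the Python script as the container's main process.
    command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa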
I have built a Docker image and afterwards run a container using Docker Compose. The following command does the job for me:
docker-compose up -d
I have restarted the PC and now I want to start the container that I created before. So I have tried the following command:
$ docker-compose start
Starting php-apache ... done
Apparently it works, but it doesn't, as the output of the following command shows:
$ docker-compose ps
Name                        Command                          State    Ports
---------------------------------------------------------------------------
php55devwork_php-apache_1   /bin/sh -c bash -C '/usr/l ...   Exit 0
For sure something is wrong and I am trying to find out what.
How do I find why the command is failing?
Is there any place where I could see a log file or something that helps me identify and fix the error?
Here is the repository if you want to give it a try.
Update
If I remove the container (docker rm <container-id>) and recreate it by running docker-compose up -d --build, it works again.
Update #1
I am not able to see such weird characters.
This is what helped me resolve this issue:
Under one of your services in the docker-compose YAML file, add the following:
tty: true, so it will look like:
version: '3'
services:
  web:
    tty: true
Hopefully this helps someone; thumbs up if it helps you :)
I took a look at your Docker GitHub repo and at setup_php_settings.
On line 27 there is source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND,
and that runs apache2 in the foreground, so it shouldn't exit with status code 0.
But it seems to me that your setup_php_settings contains some weird characters (when I run your image with compose); the original is the one on the right side of the screenshot.
I have changed them to proper new lines and it worked for me. Let us know if it helped.
If you want to debug your Docker container you can run it without the entrypoint, like:
docker run -it --entrypoint bash yourImage
After some investigation:
There were still some errors when I restarted the Docker container, like in your case (a stopped container started again after a reboot). The problems were: the symbolic links already exist, and Apache gets grumpy about a pre-existing PID file, so we need to do something like the official PHP Docker image does.
This is the full setup_php_settings that worked for me after a container restart.
#!/bin/bash -x
set -e
PHP_ERROR_REPORTING=${PHP_ERROR_REPORTING:-"E_ALL & ~E_DEPRECATED & ~E_NOTICE"}
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/apache2/php.ini
sed -ri 's/^display_errors\s*=\s*Off/display_errors = On/g' /etc/php5/cli/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/apache2/php.ini
sed -ri "s/^error_reporting\s*=.*$//g" /etc/php5/cli/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/apache2/php.ini
echo "error_reporting = $PHP_ERROR_REPORTING" >> /etc/php5/cli/php.ini
mkdir -p /data/tmp/php/uploads
mkdir -p /data/tmp/php/sessions
mkdir -p /data/tmp/php/xdebug
chown -R www-data:www-data /data/tmp/php*
ln -sf /etc/php5/mods-available/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
ln -sf /etc/php5/mods-available/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini
# Add symbolic link to get Zend out of the current install dir
ln -sf /usr/share/php/libzend-framework-php/Zend/ /usr/share/php/Zend
a2enmod rewrite
php5enmod mcrypt
# Apache gets grumpy about PID files pre-existing
: "${APACHE_PID_FILE:=${APACHE_RUN_DIR:=/var/run/apache2}/apache2.pid}"
rm -f "$APACHE_PID_FILE"
source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"
You can check the logs with docker-compose logs.
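For example, to see the most recent output of the failing service (the service name here is taken from the docker-compose ps output above; adjust it to your project):
# Show the last 100 log lines of the php-apache service and keep following them.
docker-compose logs --tail=100 -f php-apache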
Looking through your repo, you have
ENTRYPOINT bash -C '/usr/local/bin/setup_php_settings';'bash'
which, without an interactive session, makes bash exit immediately (with exit code 0) after it reads end-of-file on stdin.
Normally, getting an exit 0 would be a reason to celebrate, as it indicates that your command ended successfully (http://www.tldp.org/LDP/abs/html/exit-status.html).
Having had a look at your Dockerfile, it looks like you're just invoking bash in your entrypoint, which will certainly exit (as it is non-blocking). In order to serve some data, you should rather be calling php, which is a blocking operation that keeps the container up, as is done in the official Docker files for PHP (see the CMD ["php", "-a"] at https://github.com/docker-library/php/blob/1c56325a69718a3e3cf76179e75d070b7e23da62/5.6/Dockerfile).
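As a sketch of the relevant Dockerfile lines for a blocking setup (assuming the fixed setup_php_settings from the earlier answer, which ends by exec'ing Apache in the foreground; the paths are illustrative):
# Copy the setup script into the image and make it the entrypoint.
COPY setup_php_settings /usr/local/bin/setup_php_settings
RUN chmod +x /usr/local/bin/setup_php_settings
# The script ends with "exec /usr/sbin/apache2 -DFOREGROUND", a blocking
# process, so the container keeps running instead of exiting with status 0.
ENTRYPOINT ["/usr/local/bin/setup_php_settings"]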