504 timeout on wget command when executed from shell script

I have this wget command:
wget -P /home/storyplayr/storyplayr-web-playr -O sitemap.xml https://preprod.storyplayr.com/sitemap.xml?snapshots=true&onlyBlog=true --no-check-certificate
When executed from the command line, it works fine.
When executed from a shell script (.sh), I get a 504 Gateway Time-out.
I have tried setting the timeout on the wget command, but it does nothing. It seems clearly related to the sh command...
Any idea why? Any idea how to avoid this?
Thanks !!!
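One thing worth checking (an assumption based on the command shown, not something the question confirms): without quotes, the shell treats the & in the URL as its background operator, so onlyBlog=true is parsed as a separate command and wget only receives the URL up to snapshots=true. Quoting keeps the whole URL as one argument:

```shell
#!/bin/sh
# Demonstration: with quotes, the URL (including everything after "&")
# is passed as a single argument.
url='https://preprod.storyplayr.com/sitemap.xml?snapshots=true&onlyBlog=true'
set -- "$url"               # simulate passing "$url" to a command
echo "$# argument(s): $1"   # the "&onlyBlog=true" part survives
```

The wget invocation would then be `wget --no-check-certificate -P /home/storyplayr/storyplayr-web-playr -O sitemap.xml "$url"`.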

Related

Execute xvfb-run as docker custom command

Is there any reason why xvfb-run will not execute as an overridden Docker command?
I have an image built from this Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y xvfb
Built with:
docker build -f Dockerfile.xvfb -t xvfb-test .
If I execute a custom docker command with xvfb-run:
docker run xvfb-test bash -x /usr/bin/xvfb-run echo
it gets stuck and never ends.
But if I enter the image with docker run --rm -it xvfb-test bash and execute the same command, xvfb-run echo, it finishes immediately (meaning that the Xvfb server started and was able to execute the command).
This is an excerpt of xvfb-run script:
...
trap : USR1
(trap '' USR1; exec Xvfb ":$SERVERNUM" $XVFBARGS $LISTENTCP -auth $AUTHFILE >>"$ERRORFILE" 2>&1) &
XVFBPID=$!
wait || :
...
Executing with bash -x, we can see which line was the last to be executed:
+ XAUTHORITY=/tmp/xvfb-run.YwmHlq/Xauthority
+ xauth source -
+ trap : USR1
+ XVFBPID=16
+ wait
+ trap '' USR1
+ exec Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.YwmHlq/Xauthority
From this link:
The docker --init option in the run command basically sets ENTRYPOINT to tini and passes the CMD to it or whatever you specify on the commandline. Without init, CMD becomes pid 1. In this case, /bin/bash
It looks like running the command without the --init parameter, as PID 1, does not handle the USR1 signal correctly.
Looking at the xvfb-run script and where it gets stuck, it seems that the USR1 signal (which the Xvfb process sends) never gets propagated to the wait statement.
A way to force signal propagation is to add the --init flag to the docker run command.
From the documentation, that is exactly what it does:
Run an init inside the container that forwards signals and reaps processes
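A minimal sketch of the fix, using the image name from the question (a CLI fragment, not verified here since it needs the image built above):

```shell
# tini runs as PID 1, forwards signals (including USR1) to children,
# and reaps zombies, so xvfb-run's wait can return.
docker run --init xvfb-test bash -x /usr/bin/xvfb-run echo
```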

Infinite while loop in sh does not work as expected on Docker startup

I have sh code (DashBoardImport.sh) like the one below. It checks an API response in an infinite loop to import a Kibana dashboard; if it gets a success response, it breaks the loop:
#!/bin/sh
# use while loop to check if kibana is running
while true
do
response=$(curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson | grep -oE "^\{\"success")
#curl -X GET elk:9200/git-demo-topic | grep -oE "^\{\"git" > /dev/null
#match=$?
echo "$response"
if [ '{"success' = "$response" ]
then
echo "Running import dashboard.."
#curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson
break
else
echo "Kibana is not running yet"
sleep 5
fi
done
I run DashBoardImport.sh via the Dockerfile:
ADD ./CityCountDashBoard.ndjson /etc/elasticsearch/CityCountDashBoard.ndjson
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
#ENTRYPOINT /etc/elasticsearch/DashBoardImport.sh &
USER root
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
#RUN /etc/elasticsearch/DashBoardImport.sh &
RUN nohup bash -c "/etc/elasticsearch/DashBoardImport.sh" >/dev/null 2>&1 &
I tried many options, as you can see from the commented-out lines. The script works perfectly when I run it manually in the Docker container: I kill the Kibana service, run the script, and after I start Kibana the script successfully works as expected and imports the dashboard. But it does not work when it starts automatically with the container.
Do you have any idea?
Thanks a lot in advance :)
A RUN step executes in a temporary container until the command returns and then docker captures the changes to the filesystem as a new layer in your image. Nothing else remains, no environment variables, running processes, etc, only the filesystem changes.
So when you RUN nohup ... & that process immediately returns since it's in the background (nohup ... & explicitly does that), and so the container exits, killing any processes that were running in the container, and captures the filesystem changes made, if any, to your image.
If you want something to run when you start the container, add it to your ENTRYPOINT or CMD.
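A minimal sketch of that approach (the wrapper script and its name are assumptions; the importer path is the question's). The Dockerfile would COPY this wrapper, mark it executable, and set it as ENTRYPOINT; it starts the import loop in the background and then execs the image's original command, so that command remains PID 1:

```shell
#!/bin/sh
# docker-entrypoint.sh (hypothetical wrapper)
# Start the dashboard import loop in the background...
/etc/elasticsearch/DashBoardImport.sh &
# ...then replace this shell with the container's main command ("$@"),
# so it runs as PID 1 and receives signals normally.
exec "$@"
```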

Multiple health check curls in docker health check

In order to check the health of my container, I need to perform test calls to multiple URLs.
curl -f http://example.com and curl -f http://example2.com
Is it possible to perform multiple curl calls for a docker health check?
You can set a script as the healthcheck command that contains more complex logic for performing the healthcheck. That way you can do multiple requests via curl and have the script return success only if all requests succeed.
# example dockerfile
COPY files/healthcheck.sh /healthcheck.sh
RUN chmod +x /healthcheck.sh
HEALTHCHECK --interval=60s --timeout=10s --start-period=10s \
CMD /healthcheck.sh
Although I cannot test it, I think you can use the following:
HEALTHCHECK CMD (curl --fail http://example.com && curl --fail http://example2.com) || exit 1
If you want to check this command manually first (without the exit part), you can inspect the last exit code with
echo $? on Linux
and
echo %errorlevel% on Windows.

Curl command does not work after running the Dockerfile

I have a shell script named job.sh that contains the following curl command:
curl http://httpbin.org/user-agent
When I execute the script in my Ubuntu terminal, everything works fine.
Using the docker build command, I build my Dockerfile, and the build works fine as well.
But when I run the container, the curl command is not executed and I get the following error: "curl: (3) URL using bad/illegal format or missing URL".
My Dockerfile is like this:
FROM alpine
COPY ./job.sh /
RUN chmod +x /job.sh
RUN apk update \
&& apk add curl
ENTRYPOINT ["sh","/job.sh"]
CMD [""]
My script job.sh is like this:
#!/bin/sh
curl http://httpbin.org/user-agent
Thanks a lot for the help.
Just use quotes for the URL:
#!/bin/sh
curl "http://httpbin.org/user-agent"

oh-my-zsh installation returns non zero code

I'm trying to install oh-my-zsh as part of a Docker build (using a Dockerfile). Here's the Dockerfile line in question:
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh
and the error I get is:
The command [/bin/sh -c wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh] returned a non-zero code: 1
Full error gist here
To debug, I've run the command manually and it works. Has anyone had luck installing oh-my-zsh as part of a Docker build? Any idea why it behaves differently when run this way?
The build is failing because install.sh returns a non-zero code. When you execute the script manually, you are ignoring the return code, but Docker fails the build. Usually a non-zero return code indicates an error, but if everything is actually OK in this case, you can ignore it:
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
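Note that || true also hides a failed download. If that is too broad, a narrower sketch (same URL; /tmp/install.sh is an assumed scratch path) is to split the steps, so a wget failure still fails the build while only the installer's exit code is ignored:

```dockerfile
# Download first: a wget failure still fails the build at this step.
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O /tmp/install.sh
# Then run the installer, ignoring only its exit code.
RUN zsh /tmp/install.sh || true
```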
