Curl command does not work when running the Docker container - docker

I have a shell script named job.sh that contains the following curl command:
curl http://httpbin.org/user-agent
When I execute the script in my Ubuntu terminal, everything works fine.
Using the "docker build" command I build my Dockerfile, and the build works fine as well.
But when I run the container, the curl command is not executed and I get the following error: "curl: (3) URL using bad/illegal format or missing URL".
My Dockerfile looks like this:
FROM alpine
COPY ./job.sh /
RUN chmod +x /job.sh
RUN apk update \
&& apk add curl
ENTRYPOINT ["sh","/job.sh"]
CMD [""]
My script job.sh looks like this:
#!/bin/sh
curl http://httpbin.org/user-agent
Thanks a lot for the help.

Just quote the URL:
#!/bin/sh
curl "http://httpbin.org/user-agent"

Related

noVNC Docker with Jmeter - start button error, Could not create script recorder - keytool error: java.security.ProviderException

I am running JMeter in noVNC; I'm able to run JMeter in noVNC, but of course in the default small window.
But when I create the HTTP(S) script recorder and click on the Start button, I get this error:
"Could not create script recorder - see log for details: >> keytool error: java.security.ProviderException: Could not initialize NSS << command failed code:1
'keytool -genkeypair -alias:root_ca: -dname"CN=_Jmeter Root CA for recording(INSTALL ONLY IF IT IS YOURS).......FULL ERROR in SCREENSHOT"'"
I tried creating the HTTP(S) script recorder with and without a proxy setup in my Chrome browser, and I get the same error.
(The full error is visible on the right-hand side of the screenshot.)
Below is my Dockerfile:
FROM uphy/novnc-alpine
RUN \
apk add --no-cache curl openjdk8-jre bash \
&& apk add --no-cache nss \
&& curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.1.tgz > /tmp/jmeter.tgz \
&& mkdir -p /opt \
&& tar -xvf /tmp/jmeter.tgz -C /opt \
&& rm /tmp/jmeter.tgz \
&& cd /etc/supervisor/conf.d \
&& echo '[program:jmeter]' >> supervisord.conf \
&& echo 'command=/opt/apache-jmeter-5.4.1/bin/./jmeter' >> supervisord.conf \
&& echo 'autorestart=true' >> supervisord.conf
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk/
RUN export JAVA_HOME
This is how I am running it (related to Use Jmeter desktop application as web app):
creating a Docker image with noVNC and running JMeter inside noVNC (Dockerfile provided above)
exposing it on a port and accessing it in the browser
docker build -t jmeter .
docker run -it --rm -p 8080:8080 jmeter
I also checked my Docker container: the JDK is present at /usr/lib/jvm/java-1.8-openjdk/ and JMeter is present at /opt/apache-jmeter-5.4.1.
I am not sure whether I should pass more options or arguments to the docker run command.
I am wondering how JMeter will create the certificate inside my bin directory when I click the Start button, since JMeter is running inside the noVNC container.
Is there any other way to automatically create/integrate this certificate without importing it or clicking the Start button?
How can the proxy be configured if JMeter is running inside the noVNC container?
I think you need to install the nss package.
Change this line:
apk add --no-cache curl openjdk8-jre bash \
to this one:
apk add --no-cache curl openjdk8-jre bash nss \
Once you re-build the image the HTTP(S) Test Script Recorder should launch normally.
With regards to the certificate: it will be stored in JMeter's "bin" folder in the container, so if you want to use it in a browser inside the container, you will have to install that browser there as well.
If you want to use the browser on your local machine, you will need to copy the certificate out of the container and expose another port for JMeter's HTTP(S) Test Script Recorder.
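A minimal sketch of that route (the container name "jmeter" and the recorder's default proxy port 8888 are assumptions; recent JMeter versions write the CA certificate as ApacheJMeterTemporaryRootCA.crt in the bin folder):
# copy the recorder's generated CA certificate out of the running container
docker cp jmeter:/opt/apache-jmeter-5.4.1/bin/ApacheJMeterTemporaryRootCA.crt .
# publish the recorder's proxy port alongside noVNC when starting the container
docker run -it --rm --name jmeter -p 8080:8080 -p 8888:8888 jmeter
You can then import the certificate into your local browser and point its proxy at localhost:8888.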
Just in case be aware that you can also record JMeter test scripts using JMeter Chrome Extension, in this case you won't have to worry about proxies, certificates and ports.

Issue Running an Initialization Script with Nginx Docker Container

I am creating an Nginx Docker image that I'll be using as a reverse proxy component in ECS/Fargate in AWS. I'm using the official Nginx image as the base image (1.17.5).
When the container starts, I'm trying to run a bash script from an ENTRYPOINT that goes out to the AWS Parameter Store and retrieves certificate info. This works fine; however, when I try to add a parameter to pass to the bash script (e.g. ENTRYPOINT ["installcerts.sh", "AppName"]), it executes the script but the container terminates without error.
I want the container to continue on to start Nginx after the parameterized bash script runs.
Here is my Dockerfile:
FROM nginx:1.17.5
# Install AWS CLI/BOTO3, JQ
RUN apt-get update && apt-get install -y && apt-get install awscli -y && apt-get install jq -y
# Copy Nginx config to etc/nginx
COPY proxy_ssl.conf /etc/nginx/conf.d/
VOLUME ["/etc/nginx/conf/d"]
# Copy entrypoint bash script to install certs from the AWS Parameter Store
COPY installcerts.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/installcerts.sh
#Pull certs from Parameter Store
ENTRYPOINT ["/usr/local/bin/installcerts.sh", "AppName"]
CMD ["nginx", "-g", "daemon off;"]
And here is my "installcerts.sh" script showing utilizing the parameter passed in from ENTRYPOINT.
#!/usr/bin/env bash
set -e
echo Installing certs...
aws ssm get-parameters --name /Certificate/$1/CRT | jq '.Parameters[0].Value' -r > /etc/nginx/conf.d/app.crt
aws ssm get-parameters --name /Certificate/$1/KEY | jq '.Parameters[0].Value' -r > /etc/nginx/conf.d/app.key
echo Exiting script.
exec "$#"
The "exec "$#" in the bash script is needed, but honestly, I don't completely understand how or why this works even after hours of trying to track it down.
The short story is:
If I use this, the container does what I want it to do, but I can't send a parameter to the bash script:
ENTRYPOINT ["/usr/local/bin/installcerts.sh"]
But if I use this, the script runs WITH the parameter successfully, but the container exits and Nginx doesn't start up:
ENTRYPOINT ["/usr/local/bin/installcerts.sh", "AppName"]
What am I doing wrong?
When you set an ENTRYPOINT on your image, Docker passes that script the value of CMD (or whatever you pass on the command line after the image name). For example, if you have:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/myprogram"]
Then Docker effectively runs:
/entrypoint.sh /usr/bin/myprogram
That is, Docker itself never runs /usr/bin/myprogram: it is entirely up to the ENTRYPOINT script to do that. That is what the exec "$@" is for. "$@" is a shell parameter that is documented as:
Expands to the positional parameters, starting from one.
(bash(1) man page, in the "Special Parameters" section)
In our example, this would evaluate to:
exec /usr/bin/myprogram
...which replaces the current script with /usr/bin/myprogram. But if we were to set ENTRYPOINT as you have in your question:
ENTRYPOINT ["/entrypoint.sh", "appName"]
Then exec "$#" will in fact evaluate to:
exec appName /usr/bin/myprogram
And since appName isn't a valid command, the container will simply fail.
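You can reproduce this outside Docker. A minimal sketch (demo.sh is a hypothetical script):
#!/bin/sh
# demo.sh: print the positional parameters, then replace this shell with them
echo "about to exec: $@"
exec "$@"
Running ./demo.sh /bin/echo hello prints the message and then "hello"; running ./demo.sh appName /bin/echo hello fails, because exec tries to run appName as a command.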
There are a few ways of dealing with this:
Do you really need to pass parameters to your ENTRYPOINT script? What about using environment variables instead?
If you always pass parameters to your script, you can use the shift shell command to drop them from the positional parameters before using "$@". For example, for a script that expects two parameters:
param1=$1
param2=$2
shift 2
...do stuff here...
exec "$#"
...but this only works if you always pass two parameters.
You can implement command line option processing in your script using e.g. the getopts command:
while getopts a:b: ch; do
    case $ch in
        (a) param1=$OPTARG
            ;;
        (b) param2=$OPTARG
            ;;
    esac
done
shift $(( $OPTIND - 1 ))
...do stuff here...
exec "$@"
Having said that, I would opt for option 1 (using environment variables) as the simplest solution:
docker run -e PARAM1="some value" ...
And then in your ENTRYPOINT script you can just use the $PARAM1 variable where you need it.
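Applied to your script, the environment-variable version might look like this (a sketch; here PARAM1 carries what used to be the AppName argument):
#!/usr/bin/env bash
set -e
echo "Installing certs for ${PARAM1}..."
# PARAM1 replaces the old $1 positional parameter, supplied via `docker run -e PARAM1=...`
aws ssm get-parameters --name "/Certificate/${PARAM1}/CRT" | jq -r '.Parameters[0].Value' > /etc/nginx/conf.d/app.crt
aws ssm get-parameters --name "/Certificate/${PARAM1}/KEY" | jq -r '.Parameters[0].Value' > /etc/nginx/conf.d/app.key
# CMD is untouched, so "$@" still holds ["nginx", "-g", "daemon off;"]
exec "$@"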

Is it possible to add an installer, run it and delete it during one build step in Docker?

I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
./binary-${INSTALLER_VERSION}.bin && \
rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
		pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts an HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
		EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
		--network host \
		--build-arg SERVER_PORT=${SERVER_PORT} \
		-t ${IMAGE_NAME}:latest \
		.
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
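Usage would then look like this (assuming the Makefile above; setting NO_CACHE in the environment is what the recipe's $$NO_CACHE check picks up):
# normal build
make build
# bypass the Docker build cache
NO_CACHE=1 make build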
I think the best way is to download the binary from a web server, run it, and delete it within the same RUN step:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way the image layer will not contain the binary you downloaded.
I didn't test it thoroughly, but wouldn't an approach like the following be viable? (Besides LinPy's answer, which is way easier if you have the option to just do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm --name foo-1 -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
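If you only need to swap the entrypoint, docker commit can do that at commit time via --change, so no second Dockerfile is required (a sketch; the new entrypoint shown is an assumption):
docker commit --change 'ENTRYPOINT ["/bin/sh"]' foo-1 foo-2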
Cf. docker commit

Running docker-compose on a docker gitlab-ci-multi-runner

I have a project running on Docker with docker-compose for the dev environment.
I want to get it running on GitLab CI with a gitlab-ci-multi-runner "Docker mode" instance.
Here is my .gitlab-ci.yml file:
image: soullivaneuh/docker-bash
before_script:
- apk add --update bash curl
- curl --silent --location https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
- chmod +x /usr/local/bin/docker-compose
- ./configure
- docker-compose up -d
Note that the soullivaneuh/docker-bash image is just a Docker image with bash installed.
The script fails on the docker-compose up -d command:
gitlab-ci-multi-runner 0.7.2 (998cf5d)
Using Docker executor with image soullivaneuh/docker-bash ...
Pulling docker image soullivaneuh/docker-bash:latest ...
Running on runner-1ee5079f-project-3-concurrent-1 via sd-59984...
Fetching changes...
Removing app/config/parameters.yml
Removing docker-compose.env
HEAD is now at 5c5e7ff remove docker service
From https://git.dummy.net/project/project
5c5e7ff..45e643d docker-ci -> origin/docker-ci
Checking out 45e643dd as docker-ci...
Previous HEAD position was 5c5e7ff... remove docker service
HEAD is now at 45e643d... Remove docker info commands
$ apk add --update bash curl
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
OK: 10 MiB in 28 packages
$ curl --silent --location https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ ./configure
$ docker-compose up -d
bash: line 30: /usr/local/bin/docker-compose: No such file or directory
ERROR: Build failed with: exit code 1
I have absolutely no idea why this is failing.
Thanks for the help.
The "No such file or directory" error is misleading. I've received it many times while trying to run dynamically linked binaries on Alpine Linux (which it appears you are using).
The problem (as I understand it) is that the binary was compiled and linked against glibc, but Alpine uses musl, not glibc.
You could use ldd /usr/local/bin/docker-compose to tell you which libraries are missing (or run it with strace if all else fails).
To get it working, it might be easier to install Compose via pip (https://docs.docker.com/compose/install/#install-using-pip), which is what the official compose image does (https://github.com/docker/compose/blob/master/Dockerfile.run).
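A sketch of the pip route in .gitlab-ci.yml (the apk package names are assumptions for a current Alpine; adjust to your base image):
before_script:
- apk add --update bash python3 py3-pip
- pip3 install docker-compose
- ./configure
- docker-compose up -d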
Or you could use an image built on Debian or some other distro that uses glibc.

Docker can't find a script that I COPY to an image when I try to invoke it twice via /script.sh && /script.sh (but once works!)

A simple example of what I'm trying to do involves a Dockerfile like this:
FROM ubuntu
COPY script.sh /script.sh
RUN chmod a+x /script.sh
And a script file, /script.sh, like this:
#!/bin/bash
echo hi `date`
sleep 1
echo hi `date`
I build and run like this and everything is fine and dandy:
docker build -t client .
docker run client /script.sh
When I do the above I see 'hi' twice with the date.
Now, if I want to be told 'hi' four times, I would think I could do this:
docker run client /script.sh && /script.sh
But that fails with the error:
bash: /script.sh: No such file or directory
Very odd, since I am providing the full path to /script.sh; I wonder why bash can't find it.
For built-in commands I can 'chain' using the '&&' operator. For example this works fine:
docker run client /script.sh && echo it works
If anyone could enlighten me, I'd be very grateful!
Your command is parsed on the host into docker run client /script.sh followed by && /script.sh, so the second /script.sh runs on the host, not in the container, with obvious results. You might want to rephrase it as docker run client /bin/bash -c "/script.sh && /script.sh".
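To make the difference concrete (same image and script as above):
# the second command runs on the HOST, after `docker run` exits successfully:
docker run client /script.sh && /script.sh
# this chains both invocations inside the container:
docker run client /bin/bash -c "/script.sh && /script.sh"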
