My application requires a license file to run. I don't want to include this in the Docker image I'm going to distribute. I was hoping users would be able to provide their license file by mounting it as a volume during docker run.
What I'm after is a way to transiently provide a license file to the Docker build during the build phase, since I can't mount volumes at that point.
Is there a standard pattern for performing this kind of thing?
In short, I need to provide a dependent transient file during build, so that the build completes but that file isn't included in the final image.
What you want is an image that, when built, doesn't include the license file; it needs to be added each time the container starts.
I can think of several ways of doing this. The most obvious is to look at startup for a mounted volume of a fixed name containing the license file, and exit with a message if it's not found.
This Dockerfile illustrates the idea:
FROM ubuntu:14.04
RUN echo "#!/bin/bash" >> /bin/startup.sh
RUN echo "if [ ! -e /license/license.txt ] " >> /bin/startup.sh
RUN echo "then " >> /bin/startup.sh
RUN echo " echo 'missing license'" >> /bin/startup.sh
RUN echo "exit 1 " >> /bin/startup.sh
RUN echo "fi " >> /bin/startup.sh
RUN echo "top " >> /bin/startup.sh
RUN chmod +x /bin/startup.sh
ENTRYPOINT ["/bin/startup.sh"]
Running this without a license directory will cause it not to start.
Running with a license will run top forever.
> docker run -it test
missing license
> docker run -v `pwd`/license:/license -it test
[works]
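As an aside, the same image is easier to maintain if the startup script lives in the build context and is COPYed in, rather than echoed line by line. A minimal sketch, assuming a startup.sh with the contents above sits next to the Dockerfile:
FROM ubuntu:14.04
COPY startup.sh /bin/startup.sh
RUN chmod +x /bin/startup.sh
ENTRYPOINT ["/bin/startup.sh"]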
In your Dockerfile's build steps, you would have to copy in the transient file and then delete it. It's important that both happen in the same RUN command; otherwise the file survives in an intermediate image layer of the union FS, which would probably be against the licensing, if that's what said transient file is.
Something like this:
RUN curl url/to/transient_file -o /container/path/to/transient_file \
    && your_build_steps \
    && rm -f /container/path/to/transient_file
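If your Docker version supports BuildKit, a secret mount achieves the same thing without the file ever being written into a layer. A minimal sketch, where license is an arbitrary secret id and your_build_steps stands in for whatever commands need the file (the syntax line goes at the very top of the Dockerfile):
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=license,target=/run/secrets/license.txt \
    your_build_steps
The file is only visible during that single RUN; on the host you supply it with docker build --secret id=license,src=./license.txt .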
You could also supply the license in an environment variable file at run time.
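For example, if the license is a short key rather than a whole file (license.env and LICENSE_KEY are hypothetical names here):
docker run --env-file ./license.env my-image
where license.env contains plain assignments such as LICENSE_KEY=abc123.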
I have an entrypoint script in my Docker image which does get executed. However, it just doesn't seem to run the source command that loads a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried two versions of the entrypoint script. Neither of them is working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 of the above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Please can someone help me? Please note I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. That's expected, and it doesn't mean the variables are missing for the main process.
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file; set -a exports everything it defines
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
  set -a
  . "$ENV_FILE"
  set +a
fi
# Run the main container command
exec "$@"
And then in the Dockerfile, put the node invocation as the main command (note that Dockerfile comments must start their own line):
# ENTRYPOINT must be JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# CMD could be shell-command syntax
CMD ["node", "."]
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, passing env as the command instead of node .. The container still goes through the steps in the entrypoint script, including setting up the environment, but then prints out the environment and exits immediately, letting you observe the changes.
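For completeness, with the set -a approach the env file only needs plain KEY=value lines. A hypothetical example, just to illustrate the format:
APP_MODE=production
DB_HOST=db.internal
Sourcing that between set -a and set +a exports both variables, so the node process will see them.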
I am trying to fork this docker image so that if anything changes on the original it won't affect me.
I have forked the repo corresponding to that image to my own repo.
I have cloned the repo and am trying to build it:
docker build . -t davcal/gcc-cross-x86_64-elf
I am getting this error:
+ cd /usr/local/src
+ ./build-binutils.sh 2.31.1
/bin/sh: 1: ./build-binutils.sh: not found
The command '/bin/sh -c set -x && cd /usr/local/src && ./build-binutils.sh ${BINUTILS_VERSION} && ./build-gcc.sh ${GCC_VERSION}' returned a non-zero code: 127
What makes no sense to me is that if I use the original image, it builds successfully:
FROM randomdude/gcc-cross-x86_64-elf
...
Maybe Docker Hub stores a pre-built image?
How do I fix this?
Note: I am using Windows. This shouldn't make a difference since the error originates within the container.
Edit
I tried patching the Dockerfile to chmod the sh files executable, in case that was causing problems on Windows. Unfortunately, the exact same error occurs.
RUN set -x \
&& chmod +x /usr/local/src/build-binutils.sh \
&& chmod +x /usr/local/src/build-gcc.sh \
&& cd /usr/local/src \
&& ./build-binutils.sh ${BINUTILS_VERSION} \
&& ./build-gcc.sh ${GCC_VERSION}
Edit 2
Following this method, I inspected the container to see if the sh files actually exist.
I ran docker run --rm -it c53693f11514 bash, using the ID of the intermediate image from the previous successful step of the Dockerfile.
This is the output, showing that the files do exist:
root@9b8a64ac2090:/# cd usr/local/src
root@9b8a64ac2090:/usr/local/src# ls
binutils-2.31.1 build-binutils.sh build-gcc.sh gcc-8.2.0
From the described symptoms (the file exists, is a shell script, and works on other machines), the "not found" error is most likely from Windows linefeeds being added to the file. When the Linux kernel processes a shell script, it looks at the first line, the #!/bin/sh or similar, and then finds that interpreter to run the shell script. If that interpreter isn't found, you'll get a "file not found" error.
In this case, the file it's looking for won't be /bin/sh, but instead /bin/sh\r or /bin/sh^M, depending on how you want to represent the carriage return character. You can fix single files with a tool like dos2unix, but in general you'll want to fix the git settings themselves, since there are likely other files that have had their linefeeds corrupted. For details on adjusting the behavior of git, see this post.
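As a quick check and repair, the standard tools are enough (the paths are the ones from the question):
# show whether the scripts have CRLF line endings
file /usr/local/src/build-binutils.sh
# strip the carriage returns in place
dos2unix /usr/local/src/build-binutils.sh /usr/local/src/build-gcc.sh
# or tell git to keep LF endings on checkout, then re-clone
git config --global core.autocrlf input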
I'm building a Docker image from a supplied Docker image. I need to install new dependencies which require administrative privileges. The default user of the supplied image does not have administrative privileges. After I'm done, I want to change the default user back to the previous default user.
Is there a way to generically preserve the user of the base image without knowing it beforehand?
Note: I'm not asking how to inspect a docker image and find the default user outside the Dockerfile. I want to know if there is a way to do it within the file itself.
See example Dockerfile below
FROM suppliedImage
USER root
... Perform administrative task
USER <Default user from supplied image>
Well... there's no way to do it the way you'd like, but you can always use a workaround with multistage builds:
FROM solr:alpine as build
RUN whoami
FROM build as prepare_dependencies
USER root
RUN echo 'I am root' > /the_root_important_message
FROM build
COPY --from=prepare_dependencies /the_root_important_message /the_root_vital_message
RUN echo "now I'm $( whoami )"
RUN cat /the_root_vital_message
CMD echo 'this is a pain'
Somehow I'm fairly sure this is not what you're looking for...
Well, since this topic is pretty fun, I decided to attempt a different approach:
FROM solr:alpine as build
RUN whoami
FROM build as prepare_dependencies
USER root
RUN apk --no-cache --update add sudo \
&& echo 'ALL ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers
RUN echo "this is secret" > /secretfile
RUN chmod 400 /secretfile
FROM build
COPY --from=prepare_dependencies /usr/bin/sudo /usr/bin/sudo
COPY --from=prepare_dependencies /usr/lib/sudo /usr/lib/sudo
COPY --from=prepare_dependencies /etc/sudoers /etc/sudoers
COPY --from=prepare_dependencies /secretfile /secretfile
RUN sudo -l
RUN sudo cat /secretfile
RUN echo "now I'm $( whoami )"
RUN echo "cleanup" \
&& sudo rm -rf \
/etc/sudoers /secretfile \
/usr/lib/sudo /usr/bin/sudo
CMD echo 'this is a pain'
I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
    chmod +x binary-${INSTALLER_VERSION}.bin && \
    ./binary-${INSTALLER_VERSION}.bin && \
    rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
	    pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts a HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
	    EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
	    --network host \
	    --build-arg SERVER_PORT=${SERVER_PORT} \
	    -t ${IMAGE_NAME}:latest \
	    .
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
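One detail this relies on: for ${SERVER_PORT} to be visible inside the Dockerfile's RUN instructions, the build argument has to be declared before its first use:
ARG SERVER_PORT
Without the ARG line the --build-arg value never reaches the build, and Docker warns that it was not consumed.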
I think the best way is to download the bin from a website and then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin \
    && chmod +x /tmp/huge-installer.bin \
    && /tmp/huge-installer.bin \
    && rm /tmp/huge-installer.bin
This way your image layer will not contain the binary you download, since it is deleted in the same RUN step.
I didn't test it thoroughly, but wouldn't such an approach be viable? (Besides LinPy's answer, which is way easier if you have the possibility to just do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm -d --name foo-1 -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
Cf. docker commit
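If you'd rather not write another Dockerfile just to change the entrypoint, docker commit can rewrite it in the same step. A sketch, where my-installed-app is a hypothetical command your installer provides:
docker commit --change='ENTRYPOINT ["my-installed-app"]' foo-1 foo-2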
A simple example of what I'm trying to do involves a Dockerfile like this:
FROM ubuntu
COPY script.sh /script.sh
RUN chmod a+x /script.sh
And a script file like this:
/script.sh
#!/bin/bash
echo hi `date`
sleep 1
echo hi `date`
I build and run like this and everything is fine and dandy:
docker build -t client .
docker run client /script.sh
When I do the above I see 'hi' twice with the date.
Now, if I want to be told 'hi' four times, I would think I could do this:
docker run client /script.sh && /script.sh
But that fails with the error:
bash: /script.sh: No such file or directory
Very odd... since I am providing the full path to /script.sh. I wonder why bash can't find it.
For built-in commands I can 'chain' using the '&&' operator. For example this works fine:
docker run client /script.sh && echo it works
If anyone could enlighten me, I'd be very grateful !
Your command is parsed on the host into "docker run client /script.sh" && "/script.sh", so the second /script.sh is run by your host's shell, where it doesn't exist, with the obvious result. You might want to rephrase it so the whole chain runs inside the container:
docker run client /bin/bash -c "/script.sh && /script.sh"