I have a Dockerfile, and in one of the last steps I download a WAR file from Artifactory so I can use it in the container's webapps/ directory.
The thing is, I don't want to expose the user and password of the curl -u command. How can I hide both the user and the password in the following command? Is there a way in Docker to hide/encrypt passwords?
RUN curl -u user:pass -O "https://artifactory.xxxx.com:443/artifactory/api/api-0.0.1-SNAPSHOT.war"
You can use a multi-stage build to get a lightweight image, but you have to use one single docker build instead of two, like this single Dockerfile:
FROM maven as build
(... Your app build....)
FROM tomcat
COPY --from=build artifact.war /dest/dir
Everything before the second FROM is discarded from the resulting image, so it will contain Tomcat (not Maven) plus your copied artifact.
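Fleshed out a bit, and assuming the WAR name from the question plus the default Maven and Tomcat layouts (these paths are assumptions, not taken from your build), it could look like:
FROM maven:3 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package

FROM tomcat:9
COPY --from=build /build/target/api-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/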
You can't hide the contents of RUN commands from docker history.
You should download the artifact outside the Dockerfile. A script that did the curl command and then ran docker build might simplify things for you. In addition to not exposing the credentials via docker history, this also avoids committing them to source control, makes it possible to build the image even if your Artifactory is unreachable, and simplifies a developer-oriented sequence where they need to build a temporary image out of something they've built themselves.
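A minimal sketch of such a wrapper, assuming the credentials are passed in through environment variables and the image name is made up for the example:
#!/bin/sh
# build.sh: download the WAR outside the image, then build it in
set -e
curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_PASS" -O \
  "https://artifactory.xxxx.com:443/artifactory/api/api-0.0.1-SNAPSHOT.war"
docker build -t my-api .
The Dockerfile then only needs a plain COPY of the downloaded file:
COPY api-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/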
According to the documentation, this can be done with the BuildKit feature RUN --mount=type=secret:
# syntax=docker/dockerfile:1.2
FROM ubuntu:22.04
RUN --mount=type=secret,id=creds,target=/tmp/artifactory.credentials \
    curl -K /tmp/artifactory.credentials -O <URL>
# rest of the Dockerfile
Then you can build this image like this:
docker build --secret id=creds,src=$HOME/creds/artifactory.curl .
Content of the $HOME/creds/artifactory.curl:
-u USER:PASS
For additional info on -K, please follow the official documentation.
Note that the flag is the uppercase -K (read options from a config file), not the lowercase -k (which disables TLS verification).
Here is my use case: I want to download and extract all files from a particular website and allow users to specify which workweek to pull from. Imagine running one docker command and specifying only the variable that tells the container where to go, then downloading and extracting the files.
The problem is that I want to allow a user to set the variable that refers to a particular workweek.
For now this is just an idea; I am not sure whether I am thinking about it the right way before I start to design my Dockerfile.
Dockerfile:
...
ENV TARGET="$WW_DIR"
...
Now you can imagine that the first user wants to download files from WW17 so he can type:
docker container run -e TARGET=WW17 <image_name>
The second one wants to download files from WW25:
docker container run -e TARGET=WW25 <image_name>
Etc.
Under the hood, the container knows that it must go to the WW17 directory (in the first scenario) or the WW25 directory (in the second scenario). My idea is that a new container is created and then, using for example curl, the files are downloaded from an external server and extracted.
Can you recommend the best way to solve this, with some examples? Should I use a bash script inside the container?
Thanks.
There is no Dockerfile involved at docker container run time; it just runs the command. So write a command that does what you want, or add the data to the image when building it with the Dockerfile.
# Dockerfile
FROM your_favourite_image
COPY your_script /
RUN chmod +x /your_script
CMD /your_script
# your_script
#!/usr/bin/env your_favourite_language_like_python_or_bash_or_perl
# download the $TARGET or whatever you want to do
And then
docker build -t image .
docker run -e TARGET=WW17 image
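A rough sketch of what your_script might contain, assuming bash; the download URL and the extraction directory are placeholders, not from the question:
#!/usr/bin/env bash
set -e
# $TARGET is supplied at run time, e.g. docker run -e TARGET=WW17 image
mkdir -p /data
curl -fSL "https://example.com/downloads/${TARGET}.tar.gz" -o /tmp/files.tar.gz
tar -xzf /tmp/files.tar.gz -C /data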
Reading:
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
https://docs.docker.com/get-started/overview/
https://btholt.github.io/complete-intro-to-containers/dockerfile
I created a Dockerfile for my website development (Jekyll in this case, but I do not think that matters much).
In case this information is helpful, I code locally using Visual Studio Code and the Remote Containers extension. This extension allows me to manage my code locally while keeping it in sync with the container.
To publish my website, I run a GitHub Action that creates a container from my Dockerfile and then runs all the build code from an entrypoint.sh file. Here is the pertinent code from my Dockerfile:
FROM ruby:alpine as jekyll
ENV env_workspace_directory=$workspace_directory
... more irrelevant code ...
RUN echo "#################################################"
RUN echo "Copy the GitHub repo to the Docker container"
RUN echo "COPY . ${env_workspace_directory}"
COPY . ${env_workspace_directory}
RUN echo "#################################################"
RUN echo "Run the entrypoint "
ENTRYPOINT ["/entrypoint.sh"]
Because I am using the Remote Containers VS Code extension, I do not want the Dockerfile to contain the COPY . ${env_workspace_directory} code. Instead, I only want that code to run when used as a GitHub Action.
So I guess I have two questions, with the first being ideal:
Is it possible to write equivalent code that copies the contents of the currently checked-out branch (or at least the main branch), including all files and subfolders, into the Docker container from the entrypoint.sh file instead? If so, what would that entrypoint.sh code look like?
Is it possible to leave the COPY command in the Dockerfile and make it conditional? For example "Only run the COPY command if running from a GitHub Action"?
For #1 above, I reviewed this Stack Overflow article that says you can use the docker cp command, but I am unsure (a) whether that is correct and (b) how to be sure I am using the $workspace_directory.
I am very new to Dockerfiles, writing shell commands, and GitHub Actions, so apologies if this question is an easy one or if more clarifications are required.
Here is the Development repo if that is useful.
A Docker volume mount happens after the image is built but before the main container process is run. So if you do something like
docker run -v "$PWD/site:/site" your-image
then the entrypoint.sh script will see the host content in the container's /site directory, even if something different had been COPYed in the Dockerfile.
There are no conditionals or flow control in Dockerfiles, beyond shell syntax within individual RUN instructions. You could in principle access a Git repository in your container process, but managing the repository location, ssh credentials, branches, uncommitted files, etc. can get complex.
Depending on how you're using the image, I could suggest two approaches here.
If you have a deploy-time action that uses this image in its entirety to build the site, then just leave your Dockerfile as-is. In development use a bind mount to inject your host content; don't especially worry about skipping the image COPY here.
Another approach is to build the image containing the Jekyll tool, but to treat the site itself as data. In that case you'd always run the image with a docker run -v or Compose volumes: option to inject the data. In the Dockerfile you might create an empty directory to be safe
RUN mkdir "${env_workspace_directory}" # consider a fixed path
and in your entrypoint script you can verify the site exists
if [ ! -f "$env_workspace_directory/_config.yml" ]; then
cat >&2 <<EOF
There does not seem to be a Jekyll site in $env_workspace_directory.
Please re-run this container with the site mounted.
EOF
exit 1
fi
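For the Compose option mentioned above, the equivalent bind mount might look like this (the service name, image name, and paths are assumptions):
# docker-compose.yml
services:
  site:
    image: your-jekyll-image
    volumes:
      - ./:/site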
I'm new to docker and am trying to dockerize an app I have. Here is the dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or my dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using my local copies. I will be moving to Go 1.11, which supports Go modules, to fix this.
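For reference, a rough sketch of what that module-based Dockerfile might look like (the module setup and paths are assumptions on my part):
FROM golang:1.11
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
CMD ["go", "run", "cmd/pkg/main.go"]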
I see several things:
According to your Dockerfile
Maybe you need a dep init before dep ensure
Probably you need to check if main.go path is correct.
According to docker philosophy
In my humble opinion, you should create the image with docker build -t <your_image_name> . (executed where your Dockerfile is), but without the CMD line.
Then pass the command when you start the container: docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a and furthermore check logs with docker logs <your_CONTAINER_name/id>
Another way to check the logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla
Is there a way to use external volumes during the docker image build?
I have a situation where I would like to use a configuration inside an external volume during docker image build time. Is that possible?
(edited to reflect current Docker CLI behavior)
If by 'docker image build' you mean running a single 'docker build ...' command: no, there is no way to do that (at least, not in the most recent documentation I have read). However, nothing prevents you from performing the step that needs the external volume using direct docker commands and then committing the container and tagging it just as 'docker build' would. Assuming this is the last step in your build, put all the other commands (those that don't need the volume) into a Dockerfile and then do this:
tmp_img=`docker build -q .`
tmp_container=`docker run -d -v $my_ext_volume:$my_mount_path --entrypoint=(your volume-dependent build command here) $tmp_img`
docker wait "$tmp_container"
docker commit $tmp_container my_repo/image_tag:latest
docker rm "$tmp_container"
This does the same as having a RUN command in the Dockerfile, but with the added volume mount. The commit command in the example also tags the image.
It is a bit more complex if you need to have other Dockerfile commands after the volume-dependent one, but in most cases you can combine run commands and re-arrange your install in a way that leaves the manual run-with-volume command last, to keep things simple.
You can copy the file into the Docker image (with ADD or COPY) and rm it as one of the last steps, though note that the file will still be present in the earlier image layers unless you squash them or use a multi-stage build.
Podman is an alternative to Docker with a largely compatible CLI that also supports mounting volumes at build time.
I use this to load data into test databases without having to copy the data into the image first.
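For example (the paths and image name here are made up for illustration):
podman build --volume /home/me/testdata:/testdata:ro -t my-test-image .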
You can use ADD combined with ARG (build time parameters) to access files or directories during the build without having to hardcode their location.
ARG MAVEN_SETTINGS=settings.xml
ADD $MAVEN_SETTINGS ./
And now you can change the file location during the build with:
docker build --build-arg MAVEN_SETTINGS=someotherfile.xml .
We are not restricted to Docker to build OCI images.
With buildah it's possible to mount volumes from the host that won't be persisted in the final image. Useful for configuration and secrets.
buildah bud --volume /home/test:/myvol:ro -t imageName .
I want to build a production-ready image for clients to use, and I am wondering if there is a way to prevent access to my code within the image.
My current approach is storing my code in /root/ and creating a "customer" user that only has a startup script in their home dir.
My Dockerfile looks like this
FROM node:8.11.3-alpine
# Tools
RUN apk update && apk add alpine-sdk
# Create customer user
RUN adduser -s /bin/ash -D customer
# Add code
COPY ./code /root/code
COPY ./start.sh /home/customer/
# Set execution permissions
RUN chown root:root /home/customer/start.sh
RUN chmod 4755 /home/customer/start.sh
# Allow customer to execute start.sh
RUN echo 'customer ALL=(ALL) NOPASSWD: /home/customer/start.sh' | EDITOR='tee -a' visudo
# Default to use customer
USER customer
ENTRYPOINT ["sudo","/home/customer/start.sh"]
This approach works as expected: if I enter the container I can't see the codebase, but I can still start up the services.
The final step in my Dockerfile would be to either set a password for the root user or remove the root user entirely.
I am wondering if this is a correct production flow, or am I attempting to use Docker for something it is not meant for?
If this is correct, what other things should I lock down?
Any tips appreciated!
Anybody who has your image can always do
docker run -u root imagename sh
Anybody who can run Docker commands at all has root access to their system (or can trivially give it to themselves via docker run -v /etc:/hostetc ...) and so can freely poke around in /var/lib/docker to see what's there. That directory will have all of the contents of all of the images, albeit scattered across directories in a system-specific way.
If your source code is actually secret, you should make sure you're using a compiled language (C, Go, Java kind of) and that your build process doesn't accidentally leak the source code into the built image, and it will be as secure as anything else where you're distributing binaries to end users. If you're using a scripting language (Python, JavaScript, Ruby) then intrinsically the end user has to have the code to be able to run the program.
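For example, with a compiled language a multi-stage build keeps only the binary in the image you ship; here is a sketch assuming a Go program (names and paths are illustrative):
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server ./cmd/server

FROM alpine:3.19
# only the compiled binary is copied into the final image; no source code
COPY --from=build /server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]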
Something else to consider is the use of docker container export. This would allow anyone to export the container's file system and therefore have access to the code files.
I believe this bypasses removing sh/bash and any user permission changes that others have mentioned.
You can protect your source code, even when it has no build or compile step, by removing bash and sh from your base image.
With this approach you prevent users from getting a shell inside your container or image through commands such as docker exec -it <container id> bash/sh or docker run -it <image> bash/sh.
To apply this approach, add the following command at the end of your build stage, after all the other build steps.
RUN rm -rf /bin/bash /bin/sh
You can also look into Google's distroless images, which follow the same approach.
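A sketch of the distroless variant for a Node app (the base image tag and the entry file are assumptions):
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# the distroless image has no shell, so docker exec/run ... sh or bash cannot work
FROM gcr.io/distroless/nodejs18-debian11
COPY --from=build /app /app
WORKDIR /app
CMD ["server.js"]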
You can also remove users from the docker group and instead create sudo rules that allow only docker start and docker stop.