Docker build with external file

I have a Dockerfile which does a simple dotnet restore inside a container, essentially like this:
FROM microsoft/dotnet:2.1-sdk-alpine as build-env
COPY . .
RUN dotnet restore
Now, sometimes the dotnet restore requires external package sources, i.e. an external nuget.config file.
This file can be found at various locations depending on whether you are using Windows, Mac, etc., but the operator is always expected to know where it is.
Moreover, this file contains sensitive information and therefore cannot live in the repository. All I want is that when I run docker build . I can pass that file into the container. This file will not be part of the . context and will reside somewhere else.
Conceptually I want to do docker build -file "c:\XXX\Nuget.Config" and then have it available inside docker. An option could also be if I could somehow mount that file/directory into the container at build time.
Any help will be appreciated.
PS: I have contemplated docker-compose, so I might be open to solutions using that, although for now I just want to keep it simple and use plain docker.

Two easy options I can think of, depending on how sensitive the information is:
If it's super-sensitive you could build your binaries outside of docker and create your final image by copying in the binaries.
If you can permit sensitive information in a transient build container (but not in the final shipped image), then a multi-stage build like the one sketched below would work. (Of course, you'd have to ensure nuget.config is within the . context before the build.)
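A minimal sketch of that multi-stage approach, assuming nuget.config has been copied into the . context; the runtime tag and MyApp.dll are placeholders for your own values:
FROM microsoft/dotnet:2.1-sdk-alpine AS build-env
WORKDIR /app
COPY . .
# nuget.config only ever exists in this transient build stage
RUN dotnet restore --configfile nuget.config
RUN dotnet publish -c Release -o /out
FROM microsoft/dotnet:2.1-runtime-alpine
WORKDIR /app
COPY --from=build-env /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
Only the published output is copied into the final image, so the credentials in nuget.config never ship.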

Related

Share Docker Image with a collaborator

I have to share a Docker image (a Spring Boot application) with a collaborator who works outside my company. I need to prevent access to the source code.
I was trying to share it as a .tar file that contains the jar in the Docker image, but as far as I know this won't prevent source code access. The second solution is to push it to Docker Hub as a private repo and grant access only to him, but I think the source code can be accessed that way as well.
Are there any other solutions that I can use for this situation?
Thanks.
It doesn't matter whether the image is on Docker Hub as a private image or in a tar file: in both cases, if the image can be pulled, it can be exported again using docker save on the machine that pulled it.
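For example, anyone you grant pull access can simply run (the image name is a placeholder):
docker pull yourcompany/app:1.0
docker save yourcompany/app:1.0 -o app.tar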
The most you can do is use a multi-stage build (if you are building the jar file with Docker as well), so that once the jar file is generated, a new image containing only the JRE and the JAR is produced.
Something like this, though it will vary heavily with your implementation specifics:
FROM openjdk:latest AS builder
WORKDIR /build
# build your jar here: COPY the sources, RUN your build tool, etc.
FROM openjdk:11-jre
COPY --from=builder /build/app.jar ./
# runtime environment, CMD, etc.
This will not prevent a third party from extracting the JAR file from the Docker image and decompiling it, but it will prevent them from reading the original, clean source code.
To make that harder as well, you will have to turn to a Java obfuscator; there are plenty of products, both free and commercial, available online for that purpose.
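To see why the JAR itself is still exposed, note that anyone with the image can copy files out of it with standard commands (the image name and JAR path are placeholders):
docker create --name tmp yourcompany/app:1.0
docker cp tmp:/app.jar ./app.jar
docker rm tmp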
Only the things you COPY into a Docker image are present in it. I see a lot of Java-based Dockerfiles like
FROM openjdk:17
COPY target/app.jar /app.jar
CMD java -jar /app.jar
and these only contain some base Linux distribution, the JDK, and the jar file; they do not contain any application source since it is not COPYed in.
As @MarcSances notes in their answer, this is as secure as otherwise distributing the jar file; it is straightforward to decompile it, and you will get relatively readable results, but this is not "distributing the source" per se. (Compare with JavaScript, Python, PHP, or Ruby scripted applications, where the only way to run the application is to actually have its source code; and also compare with C++, Go, or Rust, where you have a standalone runnable binary which is even harder to decompile.)

Building Docker containers for Go applications without calling go build

I'm building my first Dockerfile for a Go application and I don't understand why go build or go install are considered a necessary part of the Docker container.
I know this can be avoided using multi-stage builds, but I don't know why it was ever put in the container image in the first place.
What I would expect:
I have a go application 'go-awesome'
I can build it locally from cmd/go-awesome
My Dockerfile contains not much more than
COPY go-awesome .
CMD ["go-awesome"]
What is the downside of this configuration? What do I gain by instead doing
COPY . .
RUN go get ./...
RUN go install ./...
Links to posts showing building go applications as part of the Dockerfile
https://www.callicoder.com/docker-golang-image-container-example/
https://blog.codeship.com/building-minimal-docker-containers-for-go-applications/
https://www.cloudreach.com/blog/containerize-this-golang-dockerfiles/
https://medium.com/travis-on-docker/how-to-dockerize-your-go-golang-app-542af15c27a2
You are correct that you can compile your application locally and simply copy the executable into a suitable docker image.
However, there are benefits to compiling the application inside the docker build, particularly for larger projects with multiple collaborators. Specifically the following reasons come to mind:
There are no local dependencies (aside from Docker) required to build the application source. Someone wouldn't even need to have Go installed. This is especially valuable for projects in which multiple languages are in use. Consider someone who might want to edit an HTML template inside of a Go project and see what that looks like in the container runtime.
The build environment (version of Go, dependency management, file paths, ...) is constant. Any external dependencies can be safely managed and maintained via the Dockerfile.
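The usual way to get these benefits while still shipping a small image is a multi-stage build. A minimal sketch, assuming a standard module layout with the main package under cmd/go-awesome:
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/go-awesome ./cmd/go-awesome
FROM alpine:latest
COPY --from=build /out/go-awesome /usr/local/bin/go-awesome
CMD ["go-awesome"]
Only the compiled binary ends up in the final image, yet nobody needs Go installed locally to build it.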

Install ansible application to docker container

I have an application which can be installed with Ansible. Now I want to create a Docker image that includes the installed application.
My idea is to bring up a Docker container from some base image, then run the installation from an external machine against that container, and after that create an image from the container.
I am just starting with Docker. Could you please advise whether this is a good idea, and how I can do it?
This isn’t the standard way to create a Docker image and it isn’t what I’d do, but it will work. Consider looking at a tool like Hashicorp’s Packer that can automate this sequence.
Ignoring the specific details of the tools, the important thing about the docker build sequence is that you have some file checked into source control that an automated process can use to build a Docker image. An Ansible playbook coupled with a Packer JSON template would meet this same basic requirement.
The important thing here though is that there are some key differences between the Docker runtime environment and a bare-metal system or VM that you’d typically configure with Ansible: it’s unlikely you’ll be able to use your existing playbook unmodified. For example, if your playbook tries to configure system daemons, install a systemd unit file, add ssh users, or other standard system administrative tasks, these generally aren’t relevant or useful in Docker.
I’d suggest making at least one attempt to package your application using a standard Dockerfile to actually understand the ecosystem. Don’t expect to be able to use an Ansible playbook unmodified in a Docker environment; but if your organization has a lot of Ansible experience and you can easily separate “install the application” from “configure the server”, the path you’re suggesting is technically fine.
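For reference, the manual sequence you're describing would look roughly like this, using Ansible's docker connection plugin (the container, image, and playbook names are placeholders):
docker run -d --name target centos:7 sleep infinity
ansible-playbook -c docker -i 'target,' playbook.yaml
docker commit target myapp:latest
docker rm -f target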
You can use multi-stage builds in Docker, which might be a nice solution:
FROM ansible/centos7-ansible:stable as builder
COPY playbook.yaml .
RUN ansible-playbook playbook.yaml
# Use whatever base image your application needs
FROM alpine:latest
# Add required setup for your app
# Copy the files your playbook produced in the builder stage
# (adjust the paths to wherever it installs the application)
COPY --from=builder /app /app
CMD ["<command to run your app>"]
Hopefully the example is clear enough for you to create your own Dockerfile.

Override a volume when Building docker image from another docker image

Sorry if the question is basic, but would it be possible to build a Docker image from another one, with a different volume in the new image? My use case is the following:
start from the image library/odoo (cf. https://hub.docker.com/_/odoo/)
upload folders into the volume /mnt/extra-addons
build a new image, tag it, then put it in our internal image repo
How can we achieve that? I would like to avoid putting extra folders into the host filesystem.
Thanks a lot.
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
FROM percona:5.7.24 as dbdata
MAINTAINER monkey@blackmirror.org
FROM centos:7
USER root
COPY --from=dbdata / /
Do whatever you want from there. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to automatically do this :)
You have a few options, without involving the host OS that runs the container.
Make your own Dockerfile, inherit from the library/odoo Docker image using a FROM instruction, and COPY files into the /mnt/extra-addons directory. This still involves your host OS somewhat, but may be acceptable since you wouldn't necessarily be building the Docker image on the same host you were running it.
Make your own Dockerfile, as in (1), but use an entrypoint script to download the contents of /mnt/extra-addons at runtime. This would increase your container startup time, since the download would need to take place before running your service, but no host directories would need to be involved.
Personally I would opt for (1) if your build pipeline supports it. That would bake the addons right into the image, so the image itself would be a complete, ready-to-go build artifact.
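A minimal sketch of option (1), assuming your addons live in ./extra-addons next to the Dockerfile:
FROM odoo:latest
COPY ./extra-addons /mnt/extra-addons
Then build, tag, and push it to your internal repo as usual (names are placeholders):
docker build -t registry.internal/odoo-custom:1.0 .
docker push registry.internal/odoo-custom:1.0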

How to place files on shared volume from within Dockerfile?

I have a Dockerfile which builds an image that provides a complicated tool-chain environment for compiling a project on a volume mounted from the host machine's file system. Another reason for this setup is that I don't have a lot of space in the image.
The Dockerfile builds my tool-chain into the OS image, and then prepares the source by downloading packages to be placed on the host's shared volume. Normally from there I'd log into the container and execute commands to build. And this is the problem: I can download the source in the Dockerfile, but how would I then get it onto the shared volume?
Basically I have ...
ADD http://.../file mydir
VOLUME /home/me/mydir
But then of course I get the error "cannot mount volume over existing file ..."
Am I going about this wrong?
You're going about it wrong, but you already suspected that.
If you want the source files to reside on the host filesystem, get rid of the VOLUME directive in your Dockerfile, and don't try to download the source files at build time. This is something you want to do at run time. You probably want to provision your image with a pair of scripts:
One that downloads the files to a specific location, say /build.
Another that actually runs the build process.
With these in place, you could first download the source files to a location on the host filesystem, as in:
docker run -v /path/on/my/host:/build myimage fetch-sources
And then you can build them by running:
docker run -v /path/on/my/host:/build myimage build-sources
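A minimal sketch of how the image itself might provide those two scripts; the base image is an assumption and the tool-chain installation is elided:
FROM ubuntu:20.04
# ... install your tool-chain here ...
COPY fetch-sources build-sources /usr/local/bin/
RUN chmod +x /usr/local/bin/fetch-sources /usr/local/bin/build-sources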
With this model:
You're no longer trying to muck about with volumes during the image build process. That is almost never what you want, since data stored in a volume is explicitly excluded from the image, and the build process doesn't permit you to conveniently mount host directories inside the container.
You are able to download the files into a persistent location on the host, where they will be available to you for editing, or re-building, or whatever.
You can run the build process multiple times without needing to re-download the source files every time.
I think this will do pretty much what you want, but if it doesn't meet your needs, or if something is unclear, let me know.
