Renaming a file with Dockerfile instructions

I'm trying to build a Docker image that clones a public repository and builds a library, which is then used by the main application. My local machine runs macOS and the image is based on a Linux distro, so I can't simply compile locally and copy the file in. The library needs to be renamed (mandatory: the build output is a .dylib, but to use it from Python it must become a .so) and moved (optional).
ADD and COPY use my local machine as the reference for the source path. The relevant part of the Dockerfile is:
RUN git clone https://gitlab.com/somelib/somelib.git
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN cargo build --release --manifest-path=somelib/Cargo.toml
RUN cp somelib/target/release/libsomelib.dylib app/main/util/somelib.so #<-- ERROR HERE
But this doesn't work, because cp fails to find libsomelib.dylib:
cp: cannot stat 'somelib/target/release/libsomelib.dylib': No such file or directory
Is this possible, or is Docker not meant for this operation?
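For reference, a quick way to see what the Linux build actually produced is to list the target directory before the cp (a diagnostic sketch, not part of the original Dockerfile; note that on Linux cargo names a cdylib libsomelib.so, while .dylib is the macOS naming):

# list what cargo actually emitted; on Linux a cdylib is named .so
RUN ls -l somelib/target/release/
# if the file there is already libsomelib.so, only the destination rename is needed
RUN cp somelib/target/release/libsomelib.so app/main/util/somelib.so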

Related

Is there a way I can include a .deb package in a Dockerfile

I created a xyz.deb package which, after installation, provides an application. I am trying to create a Docker container with FROM ubuntu:20.04.
How do I add my xyz.deb package to the Dockerfile and install it so that the container comes ready with the application xyz?
The COPY instruction in a Dockerfile lets you copy files from the build context into the image. You can then install the .deb file with a RUN command, as you would on your local system.
Simple example:
COPY ./xyz.deb /
RUN dpkg -i /xyz.deb
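Note that dpkg -i by itself will fail if xyz.deb has unmet dependencies. A slightly fuller sketch (assuming an apt version with local-file support, which ubuntu:20.04 has) lets apt resolve them:

FROM ubuntu:20.04
COPY ./xyz.deb /tmp/xyz.deb
# apt resolves the package's dependencies automatically, unlike plain dpkg -i;
# the path must contain a slash so apt treats it as a local file
RUN apt-get update \
 && apt-get install -y /tmp/xyz.deb \
 && rm -rf /var/lib/apt/lists/* /tmp/xyz.deb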

In docker with buildkit and run --mount, why is cabal install Downloading cached packages?

I am in the process of creating a Dockerfile that can build a haskell program. The Dockerfile uses ubuntu focal as a base image, installs ghcup, and then builds a haskell program. There are multiple reasons why I am doing this; it can support a low-configuration CI environment, and it can help new developers who are trying to build a complicated project.
In order to speed up build times, I am using docker v20 with buildkit. I have a sequence of events like this (it's quite a long file, but this excerpt is the relevant part):
# installs haskell
WORKDIR $HOME
RUN git clone https://github.com/haskell/ghcup-hs.git
WORKDIR ghcup-hs
RUN BOOTSTRAP_HASKELL_NONINTERACTIVE=NO ./bootstrap-haskell
#RUN source ~/.ghcup/env # Uh-oh: can't do this.
# We recreate the contents of ~/.ghcup/env
ENV PATH=$HOME/.cabal/bin:$HOME/.ghcup/bin:$PATH
# builds application
COPY application $HOME/application
WORKDIR $HOME/application
RUN mkdir -p logs
RUN --mount=type=cache,target=$HOME/.cabal \
    --mount=type=cache,target=$HOME/.ghcup \
    --mount=type=cache,target=$HOME/application/dist-newstyle \
    cabal build |& tee logs/configure.log
But when I change some non-code files (README.md, for example) in application and build my Docker image ...
DOCKER_BUILDKIT=1 docker build -t application/application:1.0 .
... it takes quite a bit of time and the output from cabal build includes a lot of Downloading [blah] followed by Building/Installing/Completed messages from cabal install.
However when I go into my container and type cabal build, it is much faster (it is already built):
host$ docker run -it application/application:1.0
container$ cabal build # this is fast
I would expect it to be just as fast in the prior case as well, since I have not really changed the code files, the dependencies are all downloaded, and I am using RUN --mount.
Are there files somewhere that my --mount=type=cache entries are not covering? Is there a package registry file somewhere that I need to include in its own --mount=type=cache line? As far as I can tell, my builds ought to be nearly instant instead of taking several minutes to complete.
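One thing worth checking: Dockerfile variable expansion only sees values declared with ENV or ARG, so if HOME was never set with ENV, the $HOME in the --mount targets may expand to an empty string (or not expand at all), leaving the caches at /.cabal and friends while cabal writes to /root/.cabal. A sketch of that hypothesis, with the paths made explicit:

# make the paths explicit so the cache mounts and cabal agree on locations
ENV HOME=/root
RUN --mount=type=cache,target=/root/.cabal \
    --mount=type=cache,target=/root/.ghcup \
    --mount=type=cache,target=/root/application/dist-newstyle \
    cabal build 2>&1 | tee logs/configure.log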

Docker container with build output and no source

I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a Docker image with all the tools required to build your code; it should clone the code, build it, and copy the build output into a Docker volume, for example /opt/webapp.
Launch a build container from the image created in step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, where your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After your built code has been shipped to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the code is safe there and you can launch a new container from it.
If you create a dedicated image for building code, you don't have to install the required tools every time you launch a container.
You can also build your code on the host machine, but this approach gives you a fresh build environment every time; reusing the same host for every build can leave stale source around or cause git clone errors.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about Docker volumes here. Thanks @BMitch for the suggestion.
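With the read-only flag, the nginx container from the step above would be launched like this:

docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name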
Recent versions of Docker support multi-stage builds, where build products can be copied from one stage to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app/result.jar
CMD java -jar /app/result.jar
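Applied to the scenario in the question, the same pattern would ship only the compiled assets to nginx. This is a hypothetical sketch: the node base image, the npm run build script, and the dist output directory are assumptions about the project layout:

# first stage: full toolchain for compiling typescript and bundling css
FROM node as build
WORKDIR /src
COPY . /src
RUN npm install && npm run build

# second stage: only the built assets, no source or build tools
FROM nginx
COPY --from=build /src/dist /usr/share/nginx/html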
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install -y tools \
 && git clone https://github.com/myproj \
 && cd myproj \
 && make \
 && make install \
 && cd .. \
 && apt-get remove -y tools && apt-get clean \
 && rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard docker tools are not designed to do this, but you can use a third party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub stars correct when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile to run your build process, then run cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
# run some cleanup code here
RUN cd /dist && npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker-compose file:
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"

How to package files with docker image

I have an application that requires some binaries on the host machine for a Docker-based application to work. I can ship the image using a Docker registry, but how do I ship those binaries to the host machine? Creating a deb/rpm seems like one option, but that would be against Docker's platform-independent philosophy.
If you need them on the host machine, outside the Docker image, what you can do is this:
Add them to your Dockerfile with ADD or COPY
Also add an installation script which calls cp -f src dest
Then bind mount an installation directory from the host to dest in the container.
Something like the following example.
Dockerfile:
FROM ubuntu:16.04
COPY file1 /src/
COPY file2 /src/
COPY install /src/
CMD /src/install
Build it:
docker build -t installer .
install script:
#!/bin/bash
cp -f /src/file1 /src/file2 /dist
Installation:
docker run -v /opt/bin:/dist installer
This will result in file1 and file2 ending up in /opt/bin on the host.
If your image is based on an image with a package manager, you could use the package manager to install the required binaries, e.g.
RUN apt-get update && apt-get install -y required-package
Alternatively, you could download the statically linked binaries from the internet and extract them, e.g.
RUN curl -s -L https://example.com/some-bin.tar.gz | tar -C /opt -zx
If the binaries are created as part of the build process, you'd want to COPY them over
COPY build/target/bin/* /usr/local/bin/
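And if the binaries come out of an earlier stage of the same Dockerfile, the multi-stage COPY --from form (see the earlier multi-stage example) applies; the stage name and paths here are illustrative:

COPY --from=build /src/build/target/bin/* /usr/local/bin/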

Referencing files inside build (Docker)

I use boot2docker and want to build a simple docker image with the Dockerfile:
# Pull base image.
FROM elasticsearch
# Install Marvel plugin
RUN export ES_HOME=/usr/share/elasticsearch \
 && cd $ES_HOME \
 && bin/plugin -u file:///c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip -i elasticsearch/marvel/latest
The path /c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip is present and accessible on the machine where I build the Dockerfile.
The problem is that inside the build I get:
Failed: FileNotFoundException[/c/Users/buliov1/dev/elastic/plugins/marvel-latest.zip (No such file or directory)].
I searched through the documentation, and the only solution I see is to use ADD/COPY to first copy the file into the image and then run the command that uses it.
I don't know exactly how docker build works, but is there a way to build it without copying the file first?
A docker build process runs inside Docker containers and has no access to the host filesystem. The only way to get files into the build environment is through the ADD or COPY mechanism (or by fetching them over the network using, e.g., curl or wget).
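A minimal sketch of that COPY approach, assuming marvel-latest.zip sits next to the Dockerfile in the build context:

FROM elasticsearch
# bring the plugin zip into the build environment first
COPY marvel-latest.zip /tmp/marvel-latest.zip
RUN /usr/share/elasticsearch/bin/plugin -u file:///tmp/marvel-latest.zip -i elasticsearch/marvel/latest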
