I am creating my own Dockerfile based on the Jenkins Docker image only to add some pre-installed packages in the image (build-essential, etc...).
In the Jenkins Dockerfile, they use the ARG instruction to create the jenkins user and group: see these lines.
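For reference, the relevant part of the upstream Jenkins Dockerfile looks roughly like this (a simplified approximation, not an exact copy):
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# create the jenkins user/group with the (default) IDs above
RUN groupadd -g ${gid} ${group} \
    && useradd -u ${uid} -g ${gid} -m -s /bin/bash ${user}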
Here is my Dockerfile:
FROM jenkins
USER root
RUN apt-get update && apt-get install -y build-essential
USER jenkins
But when I build it so that it matches the jenkins user on my host, using this command:
docker build --tag my-jenkins \
--build-arg user=jenkins \
--build-arg group=jenkins \
--build-arg uid=$(id -u jenkins) \
--build-arg gid=$(id -g jenkins) \
.
I get this error:
One or more build-args [gid group uid user] were not consumed, failing build.
Is there a way to do this or is it impossible?
It's impossible. The image is already built (using the defaults). You could write a quick script that builds your own jenkins image (passing the args) and then builds your sub-image from that, as sketched below.
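A minimal sketch of such a script, assuming the official jenkins Dockerfile has been checked out into ./jenkins (that path and the my-jenkins-base tag are placeholders):
#!/bin/bash
# Rebuild the upstream Jenkins image, passing the user/group build args
docker build --tag my-jenkins-base \
    --build-arg user=jenkins \
    --build-arg group=jenkins \
    --build-arg uid=$(id -u jenkins) \
    --build-arg gid=$(id -g jenkins) \
    ./jenkins
# Build the derived image; its Dockerfile must start with FROM my-jenkins-base
docker build --tag my-jenkins .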
I have a script used in the preparation of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
bash -c /my_script
$ DOCKER_BUILDKIT=1 docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
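Note that for the --mount flag to be accepted, the Dockerfile also needs a BuildKit syntax directive at the top; a minimal sketch (the base image is an arbitrary choice):
# syntax=docker/dockerfile:experimental
FROM debian
RUN --mount=type=secret,id=script,dst=/my_script \
    bash -c /my_script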
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=cred \
bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount never actually adds the file to your image. It only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not output the secret to another file in your image, the secret is never injected into the image.
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Serve my_script from a local HTTP server, for example with python -m SimpleHTTPServer; the file can then be accessed at http://http_server_ip:8000/my_script
Then, in the Dockerfile, use:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted within the same layer; of course, you may need to install curl in the Dockerfile, as in the sketch below.
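A rough sketch of the full RUN for a Debian-based image, using the same hypothetical server address (the trailing rm is only a safety net in case the script does not delete itself):
RUN apt-get update && apt-get install -y curl && \
    curl http://http_server_ip:8000/my_script > /my_script && \
    chmod +x /my_script && \
    bash -c "/my_script" && \
    rm -f /my_script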
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because this is an experimental feature.
This method does not work in Play with Docker, but it works on my computer.
My computer runs Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push
I have specified a git clone command in the Dockerfile, as below.
RUN git clone https://github.com/zhaoyi0113/test.git
but I got this error when building the Docker image:
Cloning into 'test'...
fatal: could not read Username for 'https://github.com': No such device or address
I wonder why it doesn't work. I am able to run this command on my host. Is there anything different when running it from a Dockerfile?
You can pass credentials as build arguments. This should work:
FROM alpine:3.8
RUN apk update && apk upgrade && \
apk add --no-cache bash git openssh
ARG username
ARG password
RUN git clone https://${username}:${password}@github.com/username/repository.git
ENTRYPOINT ["sleep", "10"]
But it might be unsafe if you want to distribute that image.
Then build:
docker build \
--no-cache \
-t git-app:latest \
--build-arg username=user \
--build-arg password=qwerty \
.
I found out what was wrong. The issue is that the repo I am cloning is a private repo, which means it requires credentials to connect to GitHub. I fixed it by passing credentials to the build.
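If you would rather not bake the credentials into image layers (see the warning above), a BuildKit secret mount, as described earlier, is one possible sketch; the token file name and repository URL are placeholders, and alpine/git is just an example base image:
# syntax=docker/dockerfile:experimental
FROM alpine/git
# The secret is mounted at /run/secrets/gittoken only for this RUN step
RUN --mount=type=secret,id=gittoken \
    git clone https://$(cat /run/secrets/gittoken)@github.com/username/repository.git
Then build with:
DOCKER_BUILDKIT=1 docker build --secret id=gittoken,src=./gittoken .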
I'm trying to build an Oracle WebLogic Docker image with custom environment variables:
$ docker build -t 12213-domain --build-arg ADMIN_PORT=8888 --build-arg ADMIN_PASSWORD=wls .
but I get the following warning in the build log:
[Warning] One or more build-args [ADMIN_PASSWORD ADMIN_PORT] were not consumed
Here is the Dockerfile of the image I'm trying to build
#Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This Dockerfile extends the Oracle WebLogic image by creating a sample domain.
#
# Util scripts are copied into the image enabling users to plug NodeManager
# automatically into the AdminServer running on another container.
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
# $ sudo docker build -t 12213-domain
#
# Pull base image
# ---------------
FROM oracle/weblogic:12.2.1.3-developer
# Maintainer
# ----------
MAINTAINER Monica Riccelli <monica.riccelli@oracle.com>
# WLS Configuration
# ---------------------------
ENV ADMIN_HOST="wlsadmin" \
NM_PORT="5556" \
MS_PORT="8001" \
DEBUG_PORT="8453" \
ORACLE_HOME=/u01/oracle \
SCRIPT_FILE=/u01/oracle/createAndStartWLSDomain.sh \
CONFIG_JVM_ARGS="-Dweblogic.security.SSL.ignoreHostnameVerification=true" \
PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/${DOMAIN_NAME:-base_domain}/bin:/u01/oracle
# Domain and Server environment variables
# ------------------------------------------------------------
ENV DOMAIN_NAME="${DOMAIN_NAME:-base_domain}" \
PRE_DOMAIN_HOME=/u01/oracle/user_projects \
ADMIN_PORT="${ADMIN_PORT:-7001}" \
ADMIN_USERNAME="${ADMIN_USERNAME:-weblogic}" \
ADMIN_NAME="${ADMIN_NAME:-AdminServer}" \
MS_NAME="${MS_NAME:-""}" \
NM_NAME="${NM_NAME:-""}" \
ADMIN_PASSWORD="${ADMIN_PASSWORD:-""}" \
CLUSTER_NAME="${CLUSTER_NAME:-DockerCluster}" \
DEBUG_FLAG=true \
PRODUCTION_MODE=dev
# Add files required to build this image
COPY container-scripts/* /u01/oracle/
#Create directory where domain will be written to
USER root
RUN chmod +xw /u01/oracle/*.sh && \
chmod +xw /u01/oracle/*.py && \
mkdir -p $PRE_DOMAIN_HOME && \
chmod a+xr $PRE_DOMAIN_HOME && \
chown -R oracle:oracle $PRE_DOMAIN_HOME
VOLUME $PRE_DOMAIN_HOME
# Expose Node Manager default port, and also default for admin and managed server
EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT $DEBUG_PORT
USER oracle
WORKDIR $ORACLE_HOME
# Define default command to start bash.
CMD ["/u01/oracle/createAndStartWLSDomain.sh"]
I'm running Docker Toolbox on Windows, and the Docker version is:
$ docker --version
Docker version 18.03.0-ce, build 0520e24302
ARG
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at
build-time to the builder with the docker build command using the
--build-arg <varname>=<value> flag. If a user specifies a build
argument that was not defined in the Dockerfile, the build outputs a
warning.
[Warning] One or more build-args [foo] were not consumed.
https://docs.docker.com/engine/reference/builder/#arg
Using ARG variables
You can use an ARG or an ENV instruction to specify variables that are
available to the RUN instruction. Environment variables defined using
the ENV instruction always override an ARG instruction of the same
name. Unlike an ARG instruction, ENV values are always persisted in the
built image.
ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD). You can use ARG values to set ENV values to work around that.
So you need to do something like this:
# Assign any default value to avoid errors; don't worry, your --build-arg flag will override it.
ARG ADMIN_PORT=some_default_value
ENV ADMIN_PORT=${ADMIN_PORT}
https://vsupalov.com/docker-arg-env-variable-guide/
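Applied to the WebLogic Dockerfile in the question, a minimal sketch is to declare the build args right before the ENV block that references them (the defaults shown are assumptions):
ARG ADMIN_PORT=7001
ARG ADMIN_PASSWORD=""
ENV ADMIN_PORT="${ADMIN_PORT}" \
    ADMIN_PASSWORD="${ADMIN_PASSWORD}"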
Update:
In simple words: if you pass build args like --build-arg SOME_ARG=some_value to docker build but do not declare the ARG in the Dockerfile, this warning will be printed.
My Dockerfile that consumes the ARG:
FROM alpine
ARG SOME_ARG="Default_Value"
RUN echo "ARGS is ${SOME_ARG}"
Build command
docker build --no-cache --build-arg SOME_ARG=some_value -t alpine .
This will not print any warning because the ARG is declared in the Dockerfile.
Now, if we remove the ARG from the Dockerfile and build the image:
FROM alpine
RUN echo "without ARGS dockerfile"
Build command
docker build --no-cache --build-arg SOME_ARG=some_value -t alpine .
Now we will get [Warning] One or more build-args [SOME_ARG] were not consumed, because we did not declare or consume SOME_ARG in the Dockerfile.
If you want to do this with Docker Compose and a Dockerfile:
docker-compose.yml:
version: '3.3'
services:
  somecontb:
    container_name: somecontainer
    hostname: somecontainer
    build:
      args:
        SOME_ARG: Default_Value
      context: .
      dockerfile: DockerFile
    image: someimage
    volumes:
      - somecontainer_volume:/somefile
volumes:
  somecontainer_volume:
...
DockerFile:
FROM alpine
ARG SOME_ARG
ENV SOME_ENV=$SOME_ARG
Docker commands:
docker-compose down
docker volume rm somecontainer_volume
docker-compose build --no-cache
docker-compose up -d
I have a Dockerfile as shown below:
FROM centos:centos6
MAINTAINER tapash
###### Helpful utils
RUN yum -y install sudo
RUN yum -y install curl
RUN yum -y install unzip
#########Copy hibernate.cfg.xml to Client
ADD ${hibernate_path}/hibernate.cfg.xml /usr/share/tomcat7/webapps/roc_client/WEB-INF/classes/
I need a command-line argument to be passed during docker build to specify $hibernate_path.
How do I do this?
If this is purely a build-time variable, you can use the --build-arg option of docker build.
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate or final images like ENV values do.
docker build --build-arg hibernate_path=/a/path/to/hibernate -t tag .
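Note that for ${hibernate_path} to be substituted in the ADD instruction, the Dockerfile also has to declare it with ARG before first use; a minimal sketch:
FROM centos:centos6
ARG hibernate_path
ADD ${hibernate_path}/hibernate.cfg.xml /usr/share/tomcat7/webapps/roc_client/WEB-INF/classes/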
In Docker 1.7, only the static ENV Dockerfile directive is available.
So one solution is to generate the Dockerfile you need from a template Dockerfile.tpl.
Dockerfile.tpl:
...
ENV hibernate_path=xxx
ADD xxx/hibernate.cfg.xml /usr/share/tomcat7/webapps/roc_client/WEB-INF/classes/
...
Whenever you want to build the image, you first generate the Dockerfile:
sed "s,xxx,${hibernate_path},g" Dockerfile.tpl > Dockerfile
Then you build normally: docker build -t myimage .
You then benefit from (in Docker 1.7):
build-time environment substitution
run-time environment variables
You could create a script that writes the required value into your Dockerfile and then launches docker build -t mytag ., as sketched below.
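A minimal sketch of such a wrapper script, reusing the sed template approach from the previous answer (file names and the tag are placeholders):
#!/bin/bash
# Generate the Dockerfile from the template, substituting the hibernate path
sed "s,xxx,${hibernate_path},g" Dockerfile.tpl > Dockerfile
# Build the image from the generated Dockerfile
docker build -t mytag .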
Here is my command in the terminal to build the image: sudo docker build -t actinbox3.2:latest .
I'm getting this error:
" Step 0 : FROM iamdenmarkcontrevida/base
Pulling repository iamdenmarkcontrevida/base
INFO[0020] Repository not found"
Dockerfile
# Dockerfile for base image of actInbox
FROM iamdenmarkcontrevida/base
MAINTAINER Denmark Contrevida <DMcontrevida@gmail.com>
# Config files
COPY config /actinbox_config/
COPY script /actinbox_script/
COPY database /actinbox_db/
# Config pyenv
RUN echo 'export PYENV_ROOT="/root/.pyenv"' >> /root/.bashrc && \
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> /root/.bashrc && \
echo 'eval "$(pyenv init -)"' >> /root/.bashrc && \
# Config Nginx
rm /etc/nginx/sites-enabled/default && \
ln -s /actinbox_config/actinbox.conf /etc/nginx/sites-enabled/actinbox.conf && \
# Config PostgreSQL
rm /etc/postgresql/9.3/main/pg_hba.conf && \
ln -s /actinbox_config/pg_hba.conf /etc/postgresql/9.3/main/pg_hba.conf && \
# Create DB & Restore database
sh /actinbox_config/create_db_actinbox.sh && \
# Delete template folder
rm -r /actinbox_db/
My Dockerfile for the base image:
# Dockerfile for base image of actInbox
FROM ubuntu:14.04
MAINTAINER Denmark Contrevida <DMcontrevida@gmail.com>
# Base services
RUN apt-get update && apt-get install -y \
git nginx postgresql postgresql-contrib
# Install Pyenv, Python 3.x, django, uWSGI & psycopg2
COPY config/install_pyenv.sh /tmp/install_pyenv.sh
RUN sh /tmp/install_pyenv.sh
Please help me out. Any idea why I'm getting this error? I have an account on Docker Hub.
Thank you in advance!
Basically, it can't find the iamdenmarkcontrevida/base image on Docker Hub.
Did you build/push the base image?
docker build .
docker tag <local-image-id> iamdenmarkcontrevida/base:latest
docker push iamdenmarkcontrevida/base
No need to push if you only need to run it locally.
So you need to build the base image first, then build actinbox3.2.
For example (supposing you have different Dockerfile names):
sudo docker build -t iamdenmarkcontrevida/base -f Dockerfile.base .
sudo docker build -t actinbox3.2 -f Dockerfile.actinbox3.2 .
The latest tag is the default, so there is no need to add it to the build command.
It's just a two-line command.
Get your local image name:
docker ps
Copy the value from the NAMES column, then:
docker build -t <names-value>/<any-name-you-want>:latest .
For me:
docker build -t condescending_greider/newdoc:latest .
Thanks for your time :)
You can try the Packer framework.
It is the easiest way to create a Docker image, and it also supports many other types of machine images.
https://www.packer.io/docs/builders/index.html