I have two images. The first is a simple Ubuntu image plus my stuff.
The second one basically has a FROM statement that references the first image, but from a remote location, like: server.com/reponame/baseimagename:latest
The problem is that this works great on CI (Jenkins), but I also want to build on localhost without any remote CI/Jenkins implications.
So, how can I force my second image to look for baseimagename:latest on localhost instead of going out to the internet?
You can use build args with a FROM step by declaring the ARG before the first FROM line in the Dockerfile:
ARG base_image=server.com/reponame/baseimagename:latest
FROM ${base_image}
....
Then when you build and want to use a local image:
docker build --build-arg base_image=baseimagename:latest .
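On the CI side (or anywhere the remote registry is reachable) you simply build without the flag and the default value from the Dockerfile is used:
docker build .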
Related
I have a Docker configuration that I want to run both locally and on CI (GitHub).
The only difference in the Dockerfiles is the FROM directive:
local configuration uses Nexus (behind firewall)
CI configuration uses Github Container Registry (GHCR)
The rest of the configuration is exactly the same (the base images are the same images, just pulled from a different source).
Currently, the majority of the Dockerfile content and the files that are copied into the image need to be duplicated in both env-specific directories.
I'd like to have a common Dockerfile configuration in case any changes are needed.
Current state example:
local/Dockerfile:
FROM nexus.example.com/myapp/app:latest
{several lines of code}
ci/Dockerfile:
FROM ghcr.io/mycompanyapp:latest
{the same several lines of code as above}
Desired:
common/Dockerfile:
{common code}
local/Dockerfile:
FROM nexus.example.com/myapp/app:latest
# INCLUDE ../common/Dockerfile
ci/Dockerfile:
FROM ghcr.io/mycompanyapp:latest
# INCLUDE ../common/Dockerfile
I am aware of the existence of edrevo/dockerfile-plus, but I am looking for a more official solution. It was tedious to link a Dockerfile residing in a different directory than the build context.
Also, it does not seem to be actively maintained, and it may not work on Windows, which is used by other team members (issue https://github.com/edrevo/dockerfile-plus/issues/27).
You can use ARG to do this.
ARG REPO=nexus.example.com
FROM ${REPO}/app:latest
....
So the default will be nexus.example.com, and in your CI you just need to build with the REPO build arg.
Example:
docker build -t myapp:latest --build-arg REPO=ghcr.io -f Dockerfile .
For a local build:
docker build -t myapp:latest -f Dockerfile .
My use case is that I have multiple Express micro-services that use the same middleware, and I would like to create a separate repo, in the form of an npm module, for each middleware.
Every repo is a private repo and can have a deploy key attached (the keys can be different or the same).
All of this works OK locally. However, when I try to use this with my docker-compose setup, it fails on the npm install step in the build stage.
Dockerfile
FROM node:alpine
WORKDIR /app
# package.json pulls the private middleware repos in over SSH
COPY package.json .
RUN npm install --production
CMD npm start
docker-compose.yml
services:
  node-api:
    build:
      context: .
      dockerfile: Dockerfile
I understand this doesn't work because I don't have the deploy key I use on my local system in the Docker context.
I've looked around for a solution and none seem very easy or non-hacky:
1. Copy the key in and squash (CONS: not sure how I do this in a docker-compose file): http://blog.cloud66.com/pulling-git-into-a-docker-image-without-leaving-ssh-keys-behind/
2. Copy the key in on the build step and add it to the image. (CONS: not very secure :( )
3. Use the key as a build argument. (CONS: see 2)
4. Dockerise something like https://www.vaultproject.io/, run that up first, add the key, and use it within the node containers to fetch the latest key. (CONS: probably lots of work, maybe other issues?)
5. Use Docker secrets with docker stack deploy and store the key in Docker secrets. (CON: docker stack deploy has no support for docker volumes yet; see https://docs.docker.com/compose/bundles/#producing-a-bundle, "unsupported key 'volumes'")
My question is what is the most secure possible solution that is automated (minimal manual steps for users of the file)? Time of implementation is less of a concern. I'm trying to avoid checking in any sensitive data while making it easy for other people to run this locally.
Let's experiment with this new feature: Docker multi-stage builds.
You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
The idea is to build a temporary base image, then start the build again, taking only what you want from the previous image. It uses multiple FROM statements in the same Dockerfile:
FROM node AS base-node-modules
COPY your_secret_key /some/path
WORKDIR /somewhere
COPY package.json /somewhere
RUN npm install    # this is the step that uses your key

# yes, FROM again!
FROM node
...
...
COPY --from=base-node-modules /somewhere/node_modules /some/place/node_modules
...
... # the rest of your Dockerfile
...
Docker discards everything from the first stage that you don't explicitly COPY into the final one.
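With the compose file from the question left unchanged, the image is still built the same way, for example:
docker-compose build node-api
Only what you COPY --from the first stage ends up in the tagged image; the key stays behind in the intermediate stage.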
I'm creating a Docker image for Atlassian JIRA.
Dockerfile can be found here: https://github.com/joelcraenhals/docker-jira/blob/master/Dockerfile
However, I want to enable the HTTPS connector on the Tomcat server inside the Docker image during image creation, so that the server.xml file is configured at build time.
How can I modify a certain file in the container?
Alternative a)
I would say you are going down the wrong path here. You do not want to do this during image creation, but rather in the entrypoint.
It is very common, and best practice in Docker, to configure the service during the first container start, e.g. seed the database, generate passwords and seeds and, as in your case, generate configuration based on templates.
Usually those configuration files are controlled by ENV variables that you pass to docker run or, rather, set in your docker-compose.yml; in more complex environments the source of the configuration variables can be Consul or etcd.
For your example, you could introduce an ENV variable USE_SSL and then use sed in your entrypoint to replace something in server.xml when it is set, but since you need much more, like setting the reverse-proxy domain and such, you should go with tiller: https://github.com/markround/tiller
Create a server.xml.erb file, place the variables you want to be dynamic in it, use if conditions to exclude a section when USE_SSL is not set, and let tiller use the environment as a data source.
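As a rough illustration of the sed variant (the conf path, the marker comments and the script name below are placeholders, not the real JIRA layout):
#!/bin/sh
# docker-entrypoint.sh (illustrative sketch)
# Uncomment the HTTPS connector only when USE_SSL is set, assuming
# server.xml ships with that connector wrapped in marker comments.
if [ "$USE_SSL" = "yes" ]; then
    sed -i 's/<!-- BEGIN_HTTPS//; s/END_HTTPS -->//' /opt/jira/conf/server.xml
fi
exec "$@"
Running docker run -e USE_SSL=yes yourimage then enables the connector at container start.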
Alternative b)
If you really want to stay with the "on image build" concept (not recommended), you should use so-called build args: https://docs.docker.com/engine/reference/commandline/build/
Add this to your Dockerfile:
ARG USE_SSL
RUN /some_script_you_created_to_generate_server_xml.sh $USE_SSL
You still need a bash (or whatever) script some_script_you_created_to_generate_server_xml.sh which takes the arg and conditionally generates whatever you want. Tiller, though, will be much more convenient once things get bigger (compared to running a bunch of seds/awks).
and then, when building the image, you could use
docker build . --build-arg USE_SSL=no -t yourtag
You need to extend this image with your custom config file. Write your own Dockerfile with the following content:
FROM <docker-jira image name>:<tag>
COPY <path to the server.xml on your computer, relative to Dockerfile dir> <path to desired location of server.xml inside the container>
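For example (the image name and in-container path below are illustrative; check the real Tomcat conf location used by the linked Dockerfile):
# illustrative names, adjust to your actual image and conf path
FROM joelcraenhals/docker-jira:latest
COPY server.xml /opt/jira/conf/server.xml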
After that you need to build and run your new image:
docker build . --tag <name of your image>
docker run <name of your image>
How does one pass arguments into a dockerfile?
Let's say I have the following Dockerfile:
FROM ubuntu:14.04
MAINTAINER Karl Morrison
sudo do-something-here myVarHere
I would want to build the image like so, for example:
docker build basickarl/my-image-example /directory/of/my/dockerfile "my string to be passed to myVarHere here!"
Docker has ARG that you can use here
FROM ubuntu:14.04
MAINTAINER Karl Morrison
ARG myVarHere
RUN do-something-here $myVarHere
And then build using --build-arg:
docker build --build-arg myVarHere=value .
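If the build should also succeed without passing the flag, the ARG can carry a default value:
ARG myVarHere=some-default-value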
We've had a similar requirement and came up with a relatively simple script that does exactly that:
We create a file called dockerfile_template in which we use variables just as you describe. The script takes that file, performs the string substitutions, and copies the result to Dockerfile (no _template) before calling docker build on it.
Works pretty well. Also very extensible for future requirements.
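A minimal sketch of that substitution step (assuming the template uses ${VAR}-style placeholders and envsubst from gettext is available):
export MY_VAR="my string to be passed to myVarHere here!"
envsubst < dockerfile_template > Dockerfile
docker build -t basickarl/my-image-example .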
Update:
Scrap that. Use build-arg (here)
I have a piece of software that should be tested against a series of WebDAV backends that are available as Docker containers. The lame approach is to start all containers within the before_install section, like:
before_install:
  - docker run image1
  - docker run image2
  - ...
This does not make much sense and wastes system resources, since I only need one particular Docker container running as part of a test run.
My test configuration uses a matrix... is it possible to configure the Docker image to be run using an environment variable as part of the matrix specs?
This boils down to two questions:
can I use environment variables inside steps of the before_install section?
is the 'matrix' evaluated before the before_install section, so that environment variables defined inside the matrix can be used?
The answer to both of your questions is yes.
I have been able to build independent Dockerfiles using the matrix configuration. A sample .travis.yml might look like:
sudo: required

services:
  - docker

env:
  - DOCKERFILE=dockerfile-1
  - DOCKERFILE=dockerfile-2

before_install:
  - docker build -f $DOCKERFILE .
In this case there would be two independent runs, each building a separate image. You could also use a docker pull command if your images are on Docker Hub.
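For the pull-based variant, a rough sketch (the image names and the port are placeholders):
env:
  - DOCKER_IMAGE=yourorg/webdav-backend-1
  - DOCKER_IMAGE=yourorg/webdav-backend-2

before_install:
  - docker pull $DOCKER_IMAGE
  - docker run -d -p 8080:8080 $DOCKER_IMAGE
Each env entry becomes its own job in the build matrix, so every job pulls and starts only the one backend it needs.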