So, my issue is that I need to import a .env file during the docker build part of the Docker@2 task on Azure Pipelines.
Something like this, but (hopefully) using Docker@2:
docker run --env-file envvariables.env project
I know that I can pass the variables individually using the args, like this:
--build-arg VAR=$(VAR)
But the issue is that I have A LOT of environment variables in that file, so this solution becomes pretty unsustainable.
I need to pass the .env file to the build process of the Docker@2 task.
Tested from my side: the Docker@2 task does not support using .env files directly. You need to pass --build-arg key=value pairs in the task's arguments when building an image from a Dockerfile.
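A possible workaround (a sketch, not an official feature: it assumes a Bash-capable agent, simple KEY=value lines without spaces or quoting, and a matching ARG declared in the Dockerfile for every key) is to generate the --build-arg flags from the file in a script step and hand them to the task:

- bash: |
    # hypothetical helper step: one --build-arg flag per KEY=value line
    BUILD_ARGS=$(grep -v '^#' envvariables.env | grep '=' | sed 's/^/--build-arg /' | tr '\n' ' ')
    echo "##vso[task.setvariable variable=BUILD_ARGS]$BUILD_ARGS"
  displayName: Collect build args from envvariables.env
- task: Docker@2
  inputs:
    command: build
    Dockerfile: Dockerfile
    arguments: $(BUILD_ARGS)

Keep in mind that build-arg values end up in the image history (docker history), so this is best reserved for non-secret values.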
Related
I am pushing my application code to a Bitbucket repo without the .env file and have enabled a Bitbucket pipeline to build a Docker image for my application through the Dockerfile which is already in my repo.
But the issue is that my build needs the .env file both while building the image and after building it: my image needs to contain an .env file!
I am trying to figure this out with Bitbucket repository variables, but maybe they are not available after the image is built, and I need them after the build.
You can use the docker run --env-file argument. With that you can give an env file to Docker while running the container.
If you are using docker-compose or k8s, there are other ways to inject env variables into containers.
https://docs.docker.com/compose/environment-variables/
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
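For example, the Kubernetes route from the second link looks roughly like this (a sketch; the pod, image, and variable names are made up):

apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
    - name: app
      image: myapp:latest
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"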
I have a Node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As is generally advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as a Kubernetes cluster.
What I want to achieve is an automated build via GitLab CI for different environments (e.g. stage) depending on the commit branch (named stage as well), meaning that when I push to origin/stage, I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile:
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI/CD variables and created a variable named DOTENV_STAGE of type 'file' with the contents of my local .env-stage file.
Now my problem is: how do I get that content as an .env file inside the Docker image that is going to be built by GitLab, since that file is not yet a file in my repo but a variable instead?
I tried using cp (see below, also in the before_script section) to just copy the file to an .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind

build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env but none of them worked.
So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?
Thanks
You should avoid copying the .env file into the image altogether. Rather, feed it in from outside at runtime. docker-compose has a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in the GitLab CI backend, then dump it to an .env file on the runner and feed it to the Docker Compose pipeline.
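A sketch of that flow in .gitlab-ci.yml (assuming a CI/CD variable named DOTENV that holds the file contents):

deploy:
  script:
    - echo "$DOTENV" > .env        # recreate the file on the runner
    - docker-compose up -d         # compose reads it via env_file: .env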
After some more research I stumbled upon a support-forum entry on gitlab.com which described exactly my situation (unfortunately it has since been deleted), and it was solved by the same approach I was trying to use, namely this:
...
  script:
    - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile
An alternative approach I thought about after joyarjo's answer could be to use a Kubernetes ConfigMap, but I haven't tried it yet.
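For reference, a minimal sketch of that ConfigMap idea (all names are hypothetical): create the ConfigMap straight from the env file with

kubectl create configmap app-env --from-env-file=.env

and inject it wholesale into the pod spec:

containers:
  - name: web
    image: myapp:stage
    envFrom:
      - configMapRef:
          name: app-env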
Is there a way to pass AWS CodeBuild environment variables into a Dockerfile?
I'd like to be able to pull from ECR like this:
FROM $My_AWS_ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/someimage:latest
Where $My_AWS_ACCOUNT references an environment variable within my CodeBuild project.
Yes, you can use FROM ${My_AWS_ACCOUNT}.xxx. My_AWS_ACCOUNT should be passed as a build argument to docker build.
This is how I would do it:
ARG My_AWS_ACCOUNT=SOME_DEFAULT_ACCOUNT
FROM ${My_AWS_ACCOUNT}.xxx
When you build:
docker build --build-arg My_AWS_ACCOUNT=${My_AWS_ACCOUNT} .
Yet another amazingly annoying thing in Docker that doesn't actually need to be this difficult but for some reason is supremely complicated and/or non-intuitive.
command line:
docker build --build-arg My_AWS_ACCOUNT=${My_AWS_ACCOUNT} .
Dockerfile:
ARG My_AWS_ACCOUNT
FROM ${My_AWS_ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/someimage:latest
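For completeness, the CodeBuild side might look like this in buildspec.yml (a sketch, assuming My_AWS_ACCOUNT is defined as an environment variable on the CodeBuild project and the image is tagged someimage):

version: 0.2
phases:
  build:
    commands:
      - docker build --build-arg My_AWS_ACCOUNT=${My_AWS_ACCOUNT} -t someimage .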
I have a docker-compose file that allows me to pass environment variables as a file (an .env file). As I have multiple ENV variables, is there any option in a Dockerfile, like env_file in docker-compose, for passing multiple environment variables during docker build?
This is the docker-compose.yml:
services:
  web:
    image: "node"
    links:
      - "db"
    env_file: "env.app"
AFAIK, there is no way to inject environment variables from a file during the build step of a Dockerfile. However, in most cases people end up using an entrypoint script and injecting the variables during docker run or docker-compose up.
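A minimal sketch of that entrypoint pattern (all paths and names here are made up for illustration; the env file is supplied at runtime, e.g. via a bind mount):

#!/bin/sh
# entrypoint.sh: load variables at container start instead of at build time
set -a                                 # auto-export everything sourced below
[ -f /config/.env ] && . /config/.env
set +a
exec "$@"                              # hand off to the image's CMD

Started with something like docker run -v "$PWD/env.app:/config/.env" myimage, the variables never get baked into the image layers.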
If it's a necessity, you might need to write a shell wrapper which changes the values in the Dockerfile dynamically, taking a key-value text file as input, or do something like the snippet below; but then the env file name needs to be hardcoded in the Dockerfile.
COPY my-env-vars /
# note: variables exported in a RUN step only live for that single build step/layer
RUN export $(cat my-env-vars | xargs)
It's an open issue - https://github.com/moby/moby/issues/28617
PS - You need to be extra careful while using this approach because the secrets are baked into the image itself.
I'm creating a Docker image for Atlassian JIRA.
Dockerfile can be found here: https://github.com/joelcraenhals/docker-jira/blob/master/Dockerfile
However, I want to enable the HTTPS connector on the Tomcat server inside the Docker image at build time, so that server.xml is already configured when the image is created.
How can I modify a certain file in the container?
Alternative a)
I would say you are going down the wrong path here. You do not want to do this during image creation, but rather in the entrypoint.
It is very common, and best practice in Docker, to configure the service during the first container start, e.g. seed the database, generate passwords and seeds and, as in your case, generate configuration based on templates.
Usually those configuration files are controlled by ENV variables you pass to docker run or set in your docker-compose.yml; in more complex environments the source of the configuration variables can be Consul or etcd.
For your example, you could introduce an ENV variable USE_SSL and use sed in your entrypoint to replace something in server.xml when it is set (see the sketch below); but since you need much more, like setting the reverse-proxy domain and similar things, you should go with tiller: https://github.com/markround/tiller
Create a server.xml.erb file, place the variables you want to be dynamic, use if conditions if you want to exclude a section when USE_SSL is not set, and let tiller use ENVIRONMENT as a datasource.
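For the simple sed variant, the relevant part of the entrypoint could look like this (the markers and paths are invented for illustration; it assumes the HTTPS connector ships as one commented-out line wrapped in HTTPS markers):

#!/bin/sh
# enable the HTTPS connector in server.xml when USE_SSL is set
if [ "$USE_SSL" = "yes" ]; then
  sed -i 's|<!--HTTPS \(.*\) HTTPS-->|\1|' /opt/jira/conf/server.xml
fi
exec "$@"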
Alternative b)
If you really want to stay with the "on image build" concept (not recommended), you should use the so-called build args: https://docs.docker.com/engine/reference/commandline/build/
Add this to your Dockerfile:
ARG USE_SSL
RUN /some_script_you_created_to_generate_server_xml.sh $USE_SSL
You still need a bash (or whatever) script some_script_you_created_to_generate_server_xml.sh which takes the argument and generates, based on conditions, whatever you want. Tiller, though, will be much more convenient when things get bigger (compared to running some seds/awks).
and then, when building the image, you could use
docker build . --build-arg USE_SSL=no -t yourtag
You need to extend this image with your custom config file. Write your own Dockerfile with the following content:
FROM <docker-jira image name>:<tag>
COPY <path to the server.xml on your computer, relative to Dockerfile dir> <path to desired location of server.xml inside the container>
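Filled in, that could look like this (the image tag and target path are assumptions; check where this particular image actually keeps server.xml):

FROM joelcraenhals/docker-jira:latest
COPY server.xml /opt/jira/conf/server.xml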
After that you need to build and run your new image:
docker build . --tag <name of your image>
docker run <name of your image>