How to create an env-file in CentOS 7 for Docker

I am trying to host an application in Docker. The guide I am following says to use an 'env-file' and lists all the parameters to include in it.
My question is: how do I create and edit this env-file in CentOS? Where should it be located?

Exporting the variables in a .sh file inside /etc/profile.d/ or in ~/.bash_profile would do the trick. Keep in mind that if you intend to use these environment variables in a service script, it might not work as you expect since service purges all environment variables except a few.
See https://unix.stackexchange.com/a/44378/148497.
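
For illustration, a minimal sketch (the path and variable names here are made up): an env-file is just a plain text file of KEY=value pairs that you can create anywhere with any editor, for example /opt/myapp/app.env containing

DB_HOST=db.example.com
DB_PORT=5432
APP_SECRET=changeme

Note there is no 'export' keyword and no quoting, unlike a shell script. You then point Docker at it explicitly:

docker run --env-file /opt/myapp/app.env my_image:my_tag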

Related

Where do we get the list of environment variables for the NiFi Docker image

I'm a beginner at NiFi setup. I'm planning to start a NiFi cluster on Kubernetes. In a normal installation, I saw that we can change the NiFi configuration in the file 'nifi.properties'. But when it comes to the Docker image, I also saw that we can change it by using environment variables. In most cases, a property in the nifi.properties file can easily be converted into its equivalent environment variable.
E.g.:
nifi.web.http.host <=> NIFI_WEB_HTTP_HOST
But in some cases the environment variable is different. E.g.:
nifi.zookeeper.connect.string != NIFI_ZK_CONNECT_STRING
Where do we get the full list of NiFi environment variables for the Docker image? Any links or directions are much appreciated.
You need to look into the documentation (or the source code) of the NiFi Docker images you are using, for example agturley/nifi and apache/nifi.
When you enter the Docker container you can see secure.sh and start.sh under the path /opt/nifi/scripts. These are the scripts that perform all of the prop_replace substitutions.
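
If you want to inspect them yourself, something like this should work (the container name is hypothetical):

docker exec -it my_nifi_container bash
ls /opt/nifi/scripts
grep prop_replace /opt/nifi/scripts/start.sh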

DevOps and environment variables with Docker images and containers

I am a newbie with Docker and want to understand how to deal with environment variables in images/containers and how to configure the CI/CD pipelines.
First, I need the big picture before deep-diving into commands. I have searched a lot on the Internet, but in most cases I only found detailed commands for creating, building, and publishing images.
I have a .NET Core web application. As you all know, there are appsettings.json files for each environment, like appsettings.development.json or appsettings.production.json.
During the build you can supply the environment information so .NET can build the application with the environment-specific settings, like connection strings.
I can define the same steps in the Dockerfile and pass the environment as a parameter or define it as variables. That part works fine.
My question is: do I have to create separate images for each of my environments? If not, how can I create one image and use it to create containers for all of my environments? What is the best practice?
If I am understanding the question correctly: if the environments use the same framework, then no, you don't need separate images. In each project, import the necessary Docker files and then update the project's docker-compose.yml; it will then create an image for that project. Using Docker Desktop (if you prefer it over the CLI) you can start and stop your containers.
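The usual practice is a single image with the environment injected at run time rather than baked in at build time. A rough sketch, with a hypothetical image name and placeholder values; ASP.NET Core maps the double underscore in a variable name to the ':' configuration section separator:

docker build -t myapp:1.0 .
docker run -e ASPNETCORE_ENVIRONMENT=Development myapp:1.0
docker run -e ASPNETCORE_ENVIRONMENT=Production -e ConnectionStrings__Default="Server=db;Database=app" myapp:1.0

This way the same tested artifact is promoted through all environments, and only the configuration differs.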

ASP.NET Core 2.2 web app environment variables not changing in Docker

I've got an ASP.NET Core 2.2 web app that I have enabled Docker support for. I have created a test app for review here.
I am running it in VS with Docker locally. I want to add environment variables/secrets to the app settings in order to override the values in the appsettings.json file. To do this locally, I have tried changing values in:
launchsettings.json
Dockerfile
However, for both of these, when I attach to my Docker instance and printenv the variable values, I find that ASPNETCORE_ENVIRONMENT still shows up as Development.
I am attaching to the running container like this:
docker exec -t -i 4c05 /bin/bash
I have searched all files in my solution and can't find ASPNETCORE_ENVIRONMENT being set to Development anywhere. However, somehow, the environment variable is still being set with that value.
What could be going wrong? I want that variable to change. Once that works, what I really want to do is add a connection string secret to the environment variables, so that it can be used locally via the appsettings.json file, or via a Docker secret environment variable if the ASP.NET Core web app is running in a container. I think I've got this code working; it's just that the variables are not being deployed to the running container as expected.
My VS version is shown in the attached screenshot.
Thanks!
Hmm, it seems there is a problem with Dockerfile support in VS. However, when I use Container Orchestration Support with docker-compose, the functionality works as expected, so I'm answering my own question :-)
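
For reference, a sketch of the kind of compose-level setting this relies on (the service name and values are illustrative):

version: '3.4'
services:
  testapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Staging
      - ConnectionStrings__Default=Server=db;Database=app

With orchestration support enabled, VS uses docker-compose.yml (and its override file) to start the container, so values placed there do reach the container's environment.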

Docker Compose volume mounting from Windows to a Linux container makes everything executable

I'm working on some Ansible tooling that we have set up in a Docker container. When run from a Linux system it works great. When run from a Windows system I get the following error:
ERROR! Problem running vault password script /etc/ansible-deployment/secrets/vault-dev.txt ([Errno 8] Exec format error). If this is not a script, remove the executable bit from the file.
Basically, what this is saying is that the file is marked as executable. What I've noticed (and it hasn't been a huge problem until now) is that all files mounted into a Linux container from Windows are ALWAYS tagged with the executable bit.
Is there any way to control/prevent this?
Did you try adding :ro at the end of the mounted path?
Something like this:
HOST:CONTAINER:ro
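For example, in docker-compose.yml (the service name and paths are illustrative):

services:
  ansible:
    volumes:
      - ./secrets:/etc/ansible-deployment/secrets:ro

Note that :ro only makes the mount read-only inside the container; it is not guaranteed to clear the executable bit.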
This is a limitation of the SMB-based approach that Docker for Windows uses to make host-mounted volumes work; see here.
To solve the executable bit error, I ended up passing Ansible a Python script as the --vault-password-file argument as a workaround; see here.
#!/usr/bin/env python
# Print the vault password to stdout so Ansible can consume it.
with open('PATH_TO_YOUR_VAULT_PASSWORD_FILE', 'r') as vault_password:
    print(vault_password.read())
Since the Python script is executed in the container, the vault password file path needs to be accessible in the container. I'm mounting it as a volume, but you can also build it into your image; the latter is a security risk and is not recommended.
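
Usage then looks something like this (the script and playbook names are hypothetical):

chmod +x vault-pass.py
ansible-playbook site.yml --vault-password-file vault-pass.py

Because the script starts with a proper shebang line, the kernel can execute it even with the executable bit set, which is what avoids the 'Exec format error'.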

Transmit Heroku environment variables to Docker instance

I built a RoR app on Heroku that must run inside a Docker container. To do so I use the official Dockerfile. As is very common with Heroku, I need a few add-ons to make this app fully operational. In production the variable DATABASE_URL is available within my app. But if I try some other add-ons that use environment variables (Mailtrap in my case), the variables aren't copied into the instance at runtime.
So my question is simple: how can I make Docker instances aware of the environment variables when executed on Heroku?
Before you ask: I already know that we can specify an environment directive right in docker-compose.yml. I would like to avoid that, so that I can share this file through the project repository.
I didn't find any documentation about it, but it appears that Heroku very recently changed the way it handles config vars in Docker containers: they are now replicated automatically (values from docker-compose.yml are simply ignored).
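For example, with the Heroku CLI (the app and variable names are illustrative):

heroku config:set MAILTRAP_API_TOKEN=xxxx --app my-app
heroku run printenv MAILTRAP_API_TOKEN --app my-app

heroku run starts a one-off dyno, so the second command lets you verify that the config var is visible inside the container.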
The workaround, to avoid committing sensitive config files, would be to create a docker-compose.yml.example with empty fields and commit it, then add docker-compose.yml to .gitignore.
Since that's not very practical on Heroku, you can use the --env docker switch to add any variable to the container's environment.
Like this: docker run --env "MY_VAR=yolo" my_image:my_tag
You could also serve a private docker-config.yml from a secure site that Heroku would have access to (that would be my preferred solution in your case).
