I'm making some management system, and want to manage docker container's log with fluentd.
What I really want to do is saving logs dynamically with parameter in --log-opt tag.
For example, when I deploy a container, I use command like this:
docker run --log-driver=fluentd --log-opt fluentd-address=some_addr --log-opt tag={task_id} some_image
What I'm trying to do is classifying logs by task_id in the log-opt's tag.
In fluent.conf, I want to set path like this: /fluent/log/{task_id}/data.*.log
How can I pass variables or placeholder into fluentd conf file?
You can try passing an environment variable on the command line. Below is a link to a Fluentd DaemonSet file in YAML (Kubernetes); I am passing an environment variable in the Fluentd DaemonSet definition and using the same variable in fluentd.conf:
How to get ${kubernetes.namespace_name} for index_name in fluentd?
Passing environment variables to Docker: https://stackoverflow.com/questions/30494050/how-do-i-pass-environment-variables-to-docker-containers
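Alternatively, the value of --log-opt tag arrives in Fluentd as the event tag itself, so you may not need an environment variable at all. A minimal fluent.conf sketch, assuming Fluentd v1.x with the built-in file output (the buffer path is an arbitrary placeholder):

```
<match **>
  @type file
  path /fluent/log/${tag}/data
  append true
  <buffer tag>
    @type file
    path /var/log/fluentd/buffer
  </buffer>
</match>
```

With tag listed as a buffer chunk key, ${tag} in path is expanded per chunk, so logs from a container started with --log-opt tag=task123 land under /fluent/log/task123/.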
I'm trying to setup Ory Kratos on ECS.
Their documentation says that you can run migrations with the following command...
docker run -e DSN="engine://username:password@host:port/dbname" oryd/kratos:v0.10 migrate sql -e
I'm trying to recreate this for an ECS task and the Dockerfile so far looks like this...
# syntax=docker/dockerfile:1
FROM oryd/kratos:v0.10
COPY kratos /kratos
CMD ["-c", "/kratos/kratos.yml", "migrate", "sql", "-e", "--yes"]
It uses the base oryd/kratos:v0.10 image, copies across a directory with some config and runs the migration command.
What I'm missing is a way to construct the -e DSN="engine://username:password@host:port/dbname". I'm able to supply my database secret from AWS Secrets Manager directly to the ECS task, however the secret is a JSON object in a string containing the engine, username, password, host, port and dbname properties.
How can I securely construct the required DSN environment variable?
Please see the ECS documentation on injecting SecretsManager secrets. You can inject specific values from a JSON secret as individual environment variables. Search for "Example referencing a specific key within a secret" in the page I linked above. So the easiest way to accomplish this without adding a JSON parser tool to your docker image, and writing a shell script to parse the JSON inside the container, is to simply have ECS inject each specific value as a separate environment variable.
Can I know how to set the initial password for an Elasticsearch database using docker-compose?
bin/elasticsearch-setup-passwords auto -u "http://192.168.2.120:9200"
See this:
The initial password can be set at start up time via the ELASTIC_PASSWORD environment variable:
docker run -e ELASTIC_PASSWORD=MagicWord docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.4
Also, for the newest image (docker.elastic.co/elasticsearch/elasticsearch:7.14.0), the ELASTIC_PASSWORD_FILE environment variable was added, as mentioned in Configuring Elasticsearch with Docker:
For example, to set the Elasticsearch bootstrap password from a file, you can bind mount the file and set the ELASTIC_PASSWORD_FILE environment variable to the mount location. If you mount the password file to /run/secrets/bootstrapPassword.txt, specify:
-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt
So adding these environment variables to your docker-compose.yaml should work for you.
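For example, a minimal docker-compose.yaml sketch (the service name and password are placeholders, and a single-node setup is assumed):

```
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=MagicWord
```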
If I'm using Docker with nginx for hosting a web app, how can I use either
Variables in my docker-compose.yml file
Environment variables such as HOSTNAME=example.com.
So that when I build the container, it will insert the value into my nginx.conf file that I copy over when I build the container.
You can use environment variables in your compose file. According to the official docs:
Your configuration options can contain environment variables. Compose uses the variable values from the shell environment in which docker-compose is run. For example, suppose the shell contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
  image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in.
See the docs for more information. You will find various other approaches to supply environment variables in the link like passing them through env_file etc.
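For the nginx.conf case specifically, the official nginx image (1.19 and later) also runs envsubst over files in /etc/nginx/templates at container start, which avoids baking the value in at build time. A sketch, assuming a template file you create yourself:

```
# docker-compose.yml
services:
  web:
    image: nginx:1.25
    environment:
      - HOSTNAME=example.com
    volumes:
      - ./default.conf.template:/etc/nginx/templates/default.conf.template
```

where default.conf.template contains, e.g., server_name ${HOSTNAME};. The rendered file is written to /etc/nginx/conf.d/default.conf before nginx starts.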
I am having some trouble with my docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line directly) on server A, it will take the environment variables from server A and put them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily). So it will run all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. Only when I deploy manually directly on server A does it work.
When I use docker inspect to inspect the running container, I get the following output for the env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I prefer to store the variables on server A. But if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of docker. Hashicorp has their Vault product which implements an encrypted K/V store where values are accessed with a time limited token and offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure this countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a rest protocol that you can add to your entrypoint).
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
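That last, swarm-based option can be sketched in the version 3 compose format. A minimal example, assuming a secret named enva_value has already been created with docker secret create (all names are placeholders):

```
version: '3.1'
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva_value
secrets:
  enva_value:
    external: true
```

Inside the container the value is available as the file /run/secrets/enva_value rather than as an environment variable, so the application (or an entrypoint script) reads it from there.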
I'm creating a Docker image for Atlassian JIRA.
Dockerfile can be found here: https://github.com/joelcraenhals/docker-jira/blob/master/Dockerfile
However I want to enable the HTTPS connector on the Tomcat server inside the Docker image during image creation so that the server.xml file is configured during image creation.
How can I modify a certain file in the container?
Alternative a)
I would say you are going down the wrong path here. You do not want to do this during image creation, but rather in the entrypoint.
It is very common, and best practice in Docker, to configure the service during the first container start: e.g. seed the database, generate passwords and seeds, and, as in your case, generate configuration from templates.
Usually those configuration files are controlled by ENV variables that you pass to docker run or, rather, set in your docker-compose.yml; in more complex environments the source of the configuration variables can be consul or etcd.
For your example, you could introduce an ENV variable USE_SSL and then use sed in your entrypoint to replace something in server.xml when it is set. But since you need much more, like setting the reverse-proxy domain and such, you should go with tiller: https://github.com/markround/tiller
Create a server.xml.erb file, place the variables you want to be dynamic, use if conditions if you want to exclude a section when USE_SSL is not set, and let tiller use ENVIRONMENT as a datasource.
Alternative b)
If you really want to stay with the "on image build" concept (not recommended), you should use so-called build args: https://docs.docker.com/engine/reference/commandline/build/
Add this to your Dockerfile:
ARG USE_SSL
RUN /some_script_you_created_to_generate_server_xml.sh $USE_SSL
You still need a bash (or similar) script, some_script_you_created_to_generate_server_xml.sh, which takes the argument and generates whatever you need conditionally. Tiller, though, will be much more convenient as things grow (compared to running a series of sed/awk commands).
and then, when building the image, you could use
docker build . --build-arg USE_SSL=no -t yourtag
You need to extend this image with your custom config file. Write your own Dockerfile with the following content:
FROM <docker-jira image name>:<tag>
COPY <path to the server.xml on your computer, relative to Dockerfile dir> <path to desired location of server.xml inside the container>
After that you need to build and run your new image:
docker build . --tag <name of your image>
docker run <name of your image>