I have a Spring Boot application with some properties, as shown below, in my application.properties:
server.ssl.keyStore=/users/admin/certs/appcert.jks
server.ssl.keyStorePassword=certpwd
server.ssl.trustStore=/users/admin/certs/cacerts
server.ssl.trustStorePassword=trustpwd
Here the cert paths are hardcoded. But I don't want to hardcode them, as the paths will not be known in the Mesos or Kubernetes world.
I have a Dockerfile as follows.
FROM docker.com/base/jdk1.8:latest
MAINTAINER Application Engineering [ https://docker.com/ ]
RUN mkdir -p /opt/docker/svc
COPY application/weather-service.war /opt/docker/svc/
CMD java -Dlogging.config=/conf/logback.xml -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties
Here, I can use the volume mount option in Kubernetes to place the application.properties.
How can I achieve the same thing for the cert files referenced in application.properties?
The cert properties are optional for some applications and mandatory for others.
I need options both for bundling the certs within the Docker image and for keeping the cert files outside the Docker image.
Approach 1. Within the Docker image
Remove the property "server.ssl.keyStore" from application.properties and pass it on the command line as a -D system property, as below.
CMD java -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties
Now the cert should be placed in a Kubernetes Secret and mounted into the container using the volume mount options.
Approach 2. No need to have -Dserver.ssl.keyStore=/certs/appcert.jks in the Dockerfile; still remove the property "server.ssl.keyStore" from application.properties and do as follows.
a. Create secret
kubectl create secret generic svc-truststore-cert
--from-file=./cacerts
b. Create an environment variable as below.
{
  "name": "JAVA_OPTS",
  "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
}
c. Create volume mounts under the container section of the pod spec.
"volumeMounts": [
{
"name": "truststore-cert",
"mountPath": "/certs/truststore"
}
]
d. Create a volume under the pod spec.
{
  "name": "truststore-cert",
  "secret": {
    "secretName": "svc-truststore-cert",
    "items": [
      {
        "key": "cacerts",
        "path": "cacerts"
      }
    ]
  }
}
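Putting the Approach 2 pieces together, the pod spec would look roughly like the YAML below (the pod name, container name and image are placeholders; the secret name, mount path and JAVA_OPTS value come from the snippets above):
apiVersion: v1
kind: Pod
metadata:
  name: weather-service                          # placeholder name
spec:
  containers:
    - name: weather-service                      # placeholder name
      image: my-registry/weather-service:latest  # placeholder image
      env:
        - name: JAVA_OPTS
          value: "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
      volumeMounts:
        - name: truststore-cert
          mountPath: /certs/truststore
  volumes:
    - name: truststore-cert
      secret:
        secretName: svc-truststore-cert
        items:
          - key: cacerts
            path: cacerts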
Approach 3. Using a Kubernetes Persistent Volume.
Create a persistent volume on Kubernetes.
Mount the volume into the Pod of each microservice (changes in the pod spec file). The mounted file system is then accessible via the '/shared/folder/certs' path.
CMD java -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties
I have taken the second approach. Is this correct? Is there any better approach?
Thanks
Yes, the second approach is the best one, and it is the only way if you are storing some sensitive data like certificates, keys, etc. That topic is covered in the documentation.
Moreover, you can encrypt your secrets to add another level of protection.
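For example, encryption at rest for Secrets is enabled by pointing the kube-apiserver's --encryption-provider-config flag at an EncryptionConfiguration file; here is a minimal sketch (the key name and the base64 value are placeholders you have to generate yourself):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1                             # placeholder key name
              secret: <base64-encoded 32-byte key>   # placeholder value
      - identity: {}                                 # allows reading not-yet-encrypted data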
Related
I currently use Elastic Beanstalk to run a Docker image from ECR, and my Dockerrun.aws.json looks as follows:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "12345678.dkr.ecr.eu-west-1.amazonaws.com/test-web:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 80,
      "HostPort": 80
    }
  ]
}
I generate this Dockerrun.aws.json file automatically in buildspec.json (for CodeBuild) and pass this file as an output artifact. Having the configuration values generated at every build seems wrong to me.
Now, I would also like to have some environment variables available when running the container. Variables defined directly in Elastic Beanstalk would be perfect for me, but they are not being populated inside the container. From what I can find, the only option is to put them in a .config file, but I don't want to do this as it can expose sensitive keys.
Is there any other solution by which I can pass the environment keys to the container? (For example, by using Secrets Manager or some other means of sharing Elastic Beanstalk environment variables with Docker.)
I am attempting to run a task with persistent storage. The task executes a docker image which creates a directory and copies a file into it. However, when the task definition mounts a volume to the created directory, the file is lost.
For brevity, the relevant lines of the Dockerfile are:
RUN mkdir /root/createdDir
COPY ./myFile.txt /root/createdDir/myFile.txt
And the relevant fields of the task definition JSON are:
{
  "containerDefinitions": [
    {
      ...,
      "mountPoints": [
        {
          "readOnly": null,
          "containerPath": "/root/createdDir",
          "sourceVolume": "shared"
        }
      ],
      "image": "myImage"
    }
  ],
  "volumes": [
    {
      "name": "shared",
      "host": {
        "sourcePath": null
      }
    }
  ]
}
When the task is run, the file can no longer be found. If I run the task without adding a mount point to the container, the file is still there.
When trying to do this locally using docker-compose, I can use the same Dockerfile and, in the docker-compose.yml file, add the specification shared:/root/createdDir to the service's volumes, where shared is a volume also declared in docker-compose.yml with the local driver.
The behavior of mounting a volume into an existing directory on the container can be confusing. It is consistent with the general behavior of Linux's mount:
The previous contents (if any) and owner and mode of dir become invisible.
Avoid doing this whenever possible, because it can lead to hard-to-find issues when the volume and the container have files with the same names.
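A minimal docker-compose sketch of the same effect, assuming the image built from the Dockerfile above is tagged myImage and ./empty-host-dir is an empty directory on the host: the bind mount shadows /root/createdDir and hides myFile.txt, whereas the named volume used in your local test is pre-populated with the image's content on first use, which is why the file survives there.
version: "2"
services:
  demo:
    image: myImage                          # hypothetical tag for the image built above
    command: ls /root/createdDir            # prints nothing: the bind mount hides myFile.txt
    volumes:
      - ./empty-host-dir:/root/createdDir   # empty host directory shadows the image's directory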
I'm trying to pass default parameters, such as volumes or envs, to my Docker containers, which I create through Marathon and Apache Mesos. It should be possible through arguments passed to mesos-slave. I've put a file with the following JSON content at /etc/mesos-slave/default_container_info (mesos-slave reads this file and uses it as the value of that argument):
{
  "type": "DOCKER",
  "volumes": [
    {
      "host_path": "/var/lib/mesos-test",
      "container_path": "/tmp",
      "mode": "RW"
    }
  ]
}
Then I restarted mesos-slave and created a new container in Marathon, but I cannot see the mounted volume in my container. Where could I have made a mistake? How else can I pass default values to my containers?
This will not work for you. When you schedule a task on Marathon with Docker, Marathon creates a TaskInfo with a ContainerInfo, and that's why Mesos does not fill in your default.
From the documentation:
--default_container_info=VALUE JSON-formatted ContainerInfo that will be included into any ExecutorInfo that does not specify a ContainerInfo
You need to add volumes to every Marathon task you have, or create a RunSpecTaskProcessor that augments all tasks with your volumes.
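For illustration, the per-task route means adding the volumes to each app definition you post to Marathon; a sketch of the relevant fragment is below (shown as YAML for readability, with a placeholder id and image; Marathon's /v2/apps endpoint consumes the JSON equivalent):
id: /my-app                               # placeholder app id
container:
  type: DOCKER
  docker:
    image: my-registry/my-image:latest    # placeholder image
  volumes:
    - hostPath: /var/lib/mesos-test
      containerPath: /tmp
      mode: RW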
Background: I'm using docker-compose to place a Tomcat service into a Docker Swarm cluster, but I'm struggling with how to handle the logging directory, given that I want to scale the service up yet keep the logging directory unique per instance.
Consider the (obviously made up) docker-compose file below, which simply starts Tomcat and mounts a logging filesystem in which to capture the logs.
version: '2'
services:
  tomcat:
    image: "tomcat:latest"
    hostname: tomcat-example
    command: /start.sh
    volumes:
      - "/data/container/tomcat/logs:/opt/tomcat/logs,z"
Versions
docker 1.11
docker-compose 1.7.1
API version 1.21
Problem: I'm looking to understand how I could insert a variable into the volume's log path so that the log directory is unique for each instance of the scaled service, say:
volumes:
  - "/data/container/tomcat/${container_name}/logs:/opt/tomcat/logs,z"
I see that, based on the project name (or the directory I'm in), the container names are actually predictable, so could I use this?
E.g., setting the project name to 'tomcat' and running docker-compose scale tomcat=2, I would see the following containers:
hostname/tomcat_1
hostname/tomcat_2
So is there any way I could leverage this as a variable in the logging volume? Other suggestions or approaches are welcome. I realise that I could just specify a relative path and let the container ID take care of uniqueness, but then, if I attach Splunk or Logstash to the logging devices, I'd need to know which ones are actually logging volumes as opposed to the base containers' filesystems. Ideally, I'm looking to use a specific absolute path here.
Thanks in advance dockers!
R.
You should really NOT log to the filesystem; use a specialized log management tool like Graylog/Logstash/Splunk instead. Either configure the logging framework in Tomcat with a specific appender, or log to stdout and configure a logging driver in Docker to redirect your logs to the external destination.
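As a hedged sketch, the docker-compose side of the logging-driver route could look like this (the GELF endpoint is a placeholder; any logging driver supported by your Docker version is configured the same way):
version: '2'
services:
  tomcat:
    image: "tomcat:latest"
    logging:
      driver: gelf                                    # ships stdout/stderr to Graylog/Logstash
      options:
        gelf-address: "udp://logs.example.com:12201"  # placeholder endpoint
        tag: "{{.Name}}"                              # records which container each line came from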
That said, if you really want to go the filesystem way, simply use a regular unnamed volume, and then call docker inspect on your container to find the volume's path on the filesystem:
[...snip...]
"Mounts": [
    {
        "Type": "volume",
        "Name": "b8c...SomeHash...48d6e",
        "Source": "/var/lib/docker/volumes/b8c...SomeHash...48d6e/_data",
        "Destination": "/opt/tomcat/logs",
[...snip...]
If you want to have nice-looking names in a specific location, use a script to create symlinks.
Yet, I'm still doubtful about this solution, especially in a multi-host Swarm context. Logging to an external, specialized service is the way to go in your use case.
I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.
Ideally, it would be done as easily as it is done via the API. I could write a bunch of rabbitmqctl commands, but with a lot of definitions this might take quite some time. Also, every change somebody else makes through the web interface would have to be added as well.
I have managed to do what I want by writing a script that runs a delayed curl request and starts the server, but this seems error-prone and not very elegant. Are there any better ways to do definition importing/exporting, or is this the best that can be done?
My Dockerfile:
FROM rabbitmq:management
LABEL description="Rabbit image" version="0.0.1"
ADD init.sh /init.sh
ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
CMD ["/init.sh"]
init.sh
sleep 10 && curl -i -u guest:guest -d #/rabbit_config.json -H "content-type:application/json" http://localhost:15672/api/definitions -X POST &
rabbitmq-server $#
Export definitions using rabbitmqadmin export rabbit.definitions.json.
Add them inside the image using your Dockerfile: ADD rabbit.definitions.json /tmp/rabbit.definitions.json
Add an environment variable when starting the container, for example, using docker-compose.yml:
environment:
  - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"
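Putting those pieces together, a docker-compose sketch could look like this (the service name and published ports are assumptions; the definitions file is the one ADDed to /tmp in the Dockerfile step above):
version: '3'
services:
  rabbitmq:
    build: .                  # Dockerfile containing the ADD line above
    ports:
      - "5672:5672"           # AMQP
      - "15672:15672"         # management UI
    environment:
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"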
There is a simple way to load definitions into a Docker container.
Use a preconfigured node to export the definitions to a JSON file.
Then move this file into the same folder as your Dockerfile, and create a rabbitmq.config in that folder too. Here is the content of rabbitmq.config:
[
  { rabbit, [
    { loopback_users, [ ] },
    { tcp_listeners, [ 5672 ] },
    { ssl_listeners, [ ] },
    { hipe_compile, false }
  ] },
  { rabbitmq_management, [
    { listener, [
      { port, 15672 },
      { ssl, false }
    ] },
    { load_definitions, "/etc/rabbitmq/definitions.json" }
  ] }
].
Then prepare an appropriate Dockerfile:
FROM rabbitmq:3.6.14-management-alpine
ADD definitions.json /etc/rabbitmq
ADD rabbitmq.config /etc/rabbitmq
EXPOSE 4369 5672 25672 15672
The definitions file and the config are baked into the image, and the definitions are loaded automatically when the broker starts, so when you run the container all definitions are already applied.
You could start your container with RabbitMQ, configure the resources (queues, exchanges, bindings) and then commit your configured container as a new image. This image can be used to start new containers.
More details at https://docs.docker.com/articles/basics/#committing-saving-a-container-state
I am not sure that this is an option, but the absolute easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container as part of the RabbitMQ cluster. The configuration of the queues will be copied to the second container.
Then, you can stop the container and create a versioned image of the new container in your Docker repository using docker commit. This process only saves the changes that you have made to the image, and it means you don't have to worry about re-importing the configuration each time. You would just have to pull the latest image to have the latest configuration!
Modern releases support definition import directly in the core, without the need to preconfigure the management plugin.
# New in RabbitMQ 3.8.2.
# Does not require management plugin to be enabled.
load_definitions = /path/to/definitions/file.json
From the Schema Definition Export and Import section of the RabbitMQ documentation.
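With the official image, a minimal sketch of this approach (assuming RabbitMQ 3.8.2 or newer, and a rabbitmq.conf next to the compose file that contains the load_definitions line above pointing at /etc/rabbitmq/definitions.json):
version: '3'
services:
  rabbitmq:
    image: rabbitmq:3.8-management                             # any 3.8.2+ tag
    volumes:
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro         # contains the load_definitions line
      - ./definitions.json:/etc/rabbitmq/definitions.json:ro   # the exported definitions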
If you use the official rabbitmq image, you can mount the definitions file into /etc/rabbitmq as shown below, and RabbitMQ will load these definitions when the daemon starts:
docker run -v ./your_local_definitions_file.json:/etc/rabbitmq/definitions.json ......