I have a project that runs multiple Go services on ECS. For example, I have three containers, A, B and C, plus a container D that holds configuration and other shared data for the others. Is there a way that, when container D is updated or restarted, I can restart the other containers within ECS so they pick up the new data from container D?
I was thinking of having a pub/sub type setup to tell the other containers that a new version had been released, but there must be an easier way that doesn't involve any extra code.
I'd also like to do this for my local docker-compose stack if possible.
In ECS you can define containers A, B, C and D within the same task if there are dependencies between them (i.e. put all of them in one task definition).
If an essential container within the task goes down, all of the task's containers are stopped and a new task is started in its place. Deployment happens at the task level, so deploying a new task definition revision replaces all the containers within it.
If you want more flexibility (separate deployments for A, B and C), you can create three different tasks, each running two containers: one of A/B/C plus D as a sidecar (a sketch of such a task definition follows the list):
Task_A - running both containers A & D.
Task_B - running both containers B & D.
Task_C - running both containers C & D.
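As a rough sketch, Task_A's container definitions might look something like this (image names are placeholders; both containers are marked essential so that if D stops, the whole task is replaced and A comes back up against the new data):
"containerDefinitions": [
  {
    "name": "container-a",
    "image": "my-registry/service-a:latest",
    "essential": true
  },
  {
    "name": "container-d",
    "image": "my-registry/config-d:latest",
    "essential": true
  }
]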
With docker-compose you can run docker-compose up --abort-on-container-exit, so that when one container exits everything else is stopped as well.
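For illustration, a minimal compose file could look like the one below (service names and images are made up); with --abort-on-container-exit, if d exits the whole stack is stopped:
version: "3"
services:
  a:
    image: my-registry/service-a:latest
    depends_on:
      - d
  d:
    image: my-registry/config-d:latest
Run it with docker-compose up --abort-on-container-exit.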
After a bit of searching, I found you can add container dependencies within the task definition JSON.
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html
This lets you declare startup dependencies between the containers running in a task.
For example:
"containerDefinitions": [
{
"name": "app",
"image": "application_image",
"portMappings": [
{
"containerPort": 9080,
"hostPort": 9080,
"protocol": "tcp"
}
],
"essential": true,
"dependsOn": [
{
"containerName": "container-a",
"condition": "HEALTHY"
}
]
},
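One thing to note: for the HEALTHY condition to work, the container referenced in dependsOn (here container-a) must define its own healthCheck in the same task definition, roughly like this (the image and the check command are just placeholders):
{
  "name": "container-a",
  "image": "container_a_image",
  "essential": true,
  "healthCheck": {
    "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
    "interval": 30,
    "timeout": 5,
    "retries": 3
  }
}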
Very useful.
I currently use Elastic Beanstalk to run a Docker image from ECR, and my Dockerrun.aws.json looks as follows:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "12345678.dkr.ecr.eu-west-1.amazonaws.com/test-web:latest",
"Update": "true"
},
"Ports": [{
"ContainerPort": 80,
"HostPort": 80
}]
}
I generate this Dockerrun.aws.json file automatically in buildspec.json (for CodeBuild) and pass this file as an output artifact. Having the configuration values generated at every build seems wrong to me.
Now I would also like to set some environment variables for the running container. Variables defined directly in Elastic Beanstalk would be perfect for me, but they are not being populated inside the container. From what I can find, the only option is to put them in a .config file, but I don't want to do that as it can expose sensitive keys.
Is there any other way to pass environment variables to the container (for example, by using Secrets Manager or some other means of sharing the Elastic Beanstalk environment variables with Docker)?
The DependsOn property of an ECS container definition is used for container dependencies.
The links property of Docker Compose provides service dependencies.
We are mapping a Docker Compose file to an ECS task definition.
Conceptually, is the purpose of the links property in Docker Compose similar to the DependsOn property of an ECS container definition?
links: was an important part of the first-generation Docker networking setup. Once Docker introduced the docker network series of commands and Docker Compose set up a private network by default, it became much less important, and there's not really any reason to use it at all in modern Docker.
Compose has its own depends_on: option. If service a depends_on: [b], then when a starts up (maybe because you explicitly docker-compose up a, or maybe just as an ordering constraint) the b container is guaranteed to exist. If b is a database or some other service that takes a while to start up, it's not guaranteed to be functional, but for instance b will be a valid host name from a's point of view.
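For illustration, a minimal compose file expressing that ordering (the images are arbitrary placeholders) could look like:
version: "3"
services:
  a:
    image: example/app:latest
    depends_on:
      - b
  b:
    image: postgres:13
Here the b container is created before a starts, but as noted there is no guarantee that the database inside it is ready to accept connections.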
Within a single ECS task, one container can dependsOn others. This is similar to the Compose depends_on: setting, but it has an additional condition parameter that supports a couple of different lifecycles. Of note, one container can wait for another to be "condition": "HEALTHY", a check that in Docker Compose the waiting container has to perform on its own (often with a helper script like wait-for-it.sh); it can also wait for another container to reach "condition": "COMPLETE" if one container just does setup for another.
If you're porting a Docker Compose file to an ECS task, I'd start by trying to replace links: with depends_on:, which shouldn't cause much functional change; translating this to ECS, the semantics of that are very similar to "dependsOn": [{"condition": "START"}].
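Concretely, a Compose a depends_on: [b] would translate to a task definition fragment along these lines (names and images are placeholders):
"containerDefinitions": [
  {
    "name": "a",
    "image": "example/app:latest",
    "essential": true,
    "dependsOn": [
      {
        "containerName": "b",
        "condition": "START"
      }
    ]
  },
  {
    "name": "b",
    "image": "postgres:13",
    "essential": true
  }
]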
I have a spring boot application with some properties as below in my application.properties.
server.ssl.keyStore=/users/admin/certs/appcert.jks
server.ssl.keyStorePassword=certpwd
server.ssl.trustStore=/users/admin/certs/cacerts
server.ssl.trustStorePassword=trustpwd
Here the cert paths are hardcoded, but I don't want to hard-code them, as the paths will not be known in the Mesos or Kubernetes world.
I have a Dockerfile as follows.
FROM docker.com/base/jdk1.8:latest
MAINTAINER Application Engineering [ https://docker.com/ ]
RUN mkdir -p /opt/docker/svc
COPY application/weather-service.war /opt/docker/svc/
CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml
Here, I can use the volume mount option in Kubernetes to provide the application.properties file.
How can I achieve the same thing for the cert files referenced in application.properties?
The cert properties are optional for some applications and mandatory for others.
I need options both for integrating the certs into the Docker image and for keeping the cert files outside the Docker image.
Approach 1. Within the Docker image.
Remove the property "server.ssl.keyStore" from application.properties and pass it on the command line instead, like below.
CMD java -jar /opt/docker/svc/weather-service.war
--spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks
Now the cert should be placed in a Secret and provided to the container via the volume mount options in Kubernetes.
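As a sketch of that wiring (the Secret name svc-keystore-cert and the container/image names are made up; the Secret is assumed to be created from appcert.jks with kubectl create secret generic svc-keystore-cert --from-file=./appcert.jks), the Pod spec would mount it at /certs:
"containers": [
  {
    "name": "weather-service",
    "image": "weather-service-image",
    "volumeMounts": [
      {
        "name": "keystore-cert",
        "mountPath": "/certs"
      }
    ]
  }
],
"volumes": [
  {
    "name": "keystore-cert",
    "secret": {
      "secretName": "svc-keystore-cert"
    }
  }
]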
Approach 2. There is no need to have -Dserver.ssl.keyStore=/certs/appcert.jks in the Dockerfile; still remove the property "server.ssl.keyStore" from application.properties and do as follows.
a. Create secret
kubectl create secret generic svc-truststore-cert
--from-file=./cacerts
b. Create an environment variable as below (this assumes the container's start command actually passes $JAVA_OPTS to the JVM).
{
"name": "JAVA_OPTS",
"value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
}
c. Create volume mounts under the Pod's container.
"volumeMounts": [
{
"name": "truststore-cert",
"mountPath": "/certs/truststore"
}
]
d. Create a volume under spec.
{
"name": "truststore-cert",
"secret": {
"secretName": "svc-truststore-cert",
"items": [
{
"key": "cacerts",
"path": "cacerts"
}
]
}
}
Approach 3. Using a Kubernetes Persistent Volume.
Create a persistent volume on Kubernetes.
Mount the volume into the Pod of each microservice (changes in the Pod spec). The mounted file system is then accessible via the '/shared/folder/certs' path.
CMD java -jar /opt/docker/svc/weather-service.war
--spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks
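For completeness, a rough sketch of the Approach 3 wiring in the same JSON style (the claim name certs-pvc and the mount path are illustrative). Under the container:
"volumeMounts": [
  {
    "name": "shared-certs",
    "mountPath": "/shared/folder/certs"
  }
]
And under spec:
"volumes": [
  {
    "name": "shared-certs",
    "persistentVolumeClaim": {
      "claimName": "certs-pvc"
    }
  }
]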
I have taken the second approach. Is this correct? Is there a better approach?
Thanks
Yes, the second approach is the best one, and it is the only way if you are storing some sensitive data like certificates, keys, etc. That topic is covered in the documentation.
Moreover, you can encrypt your secrets to add another level of protection.
I'm trying to pass default parameters, such as volumes or envs, to my Docker containers, which I create through Marathon and Apache Mesos. This should be possible through arguments passed to mesos-slave. I've put a file with JSON content at /etc/mesos-slave/default_container_info (mesos-slave reads this file and passes its content as the argument value):
{
"type": "DOCKER",
"volumes": [
{
"host_path": "/var/lib/mesos-test",
"container_path": "/tmp",
"mode": "RW"
}
]
}
Then I restarted mesos-slave and created a new container in Marathon, but I cannot see the mounted volume in my container. Where did I go wrong? How else can I pass default values to my containers?
This will not work for you. When you schedule a task on Marathon with Docker, Marathon creates a TaskInfo with a ContainerInfo, and that is why Mesos does not apply your default.
From the documentation
--default_container_info=VALUE JSON-formatted ContainerInfo that will be included into any ExecutorInfo that does not specify a ContainerInfo
You need to add the volumes to every Marathon app you have, or create a RunSpecTaskProcessor that will augment all tasks with your volumes.
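For example, adding the volume per app in the Marathon app definition would look roughly like this (the app id and image are placeholders):
{
  "id": "/my-app",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-registry/my-app:latest"
    },
    "volumes": [
      {
        "hostPath": "/var/lib/mesos-test",
        "containerPath": "/tmp",
        "mode": "RW"
      }
    ]
  }
}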
I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.
Ideally, it would be done as easily as it is done via the API. I could write a bunch of rabbitmqctl commands, but with a lot of definitions this might take quite some time. Also, every change somebody else makes through the web interface would have to be inserted.
I have managed to do what I want by writing a script that sleeps a curl request and then starts the server, but this seems error prone and not very elegant. Are there any better ways to do definition importing/exporting, or is this the best that can be done?
My Dockerfile:
FROM rabbitmq:management
LABEL description="Rabbit image" version="0.0.1"
ADD init.sh /init.sh
ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
CMD ["/init.sh"]
init.sh
#!/bin/sh
# Post the definitions once the management API is up, then start the server in the foreground.
sleep 10 && curl -i -u guest:guest -d @/rabbit_config.json -H "content-type:application/json" http://localhost:15672/api/definitions -X POST &
rabbitmq-server "$@"
Export definitions using rabbitmqadmin export rabbit.definitions.json.
Add them inside the image using your Dockerfile: ADD rabbit.definitions.json /tmp/rabbit.definitions.json
Add an environment variable when starting the container, for example, using docker-compose.yml:
environment:
- RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"
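Put together, a docker-compose.yml for this could look roughly like the following (assuming the image is built from a Dockerfile that ADDs the definitions file to /tmp, as above):
version: "3"
services:
  rabbitmq:
    build: .
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"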
There is a simple way to load definitions into a Docker container.
Use a preconfigured node to export the definitions to a JSON file.
Then move this file to the same folder as your Dockerfile and create a rabbitmq.config in that folder too. Here is the content of rabbitmq.config:
[
{ rabbit, [
{ loopback_users, [ ] },
{ tcp_listeners, [ 5672 ] },
{ ssl_listeners, [ ] },
{ hipe_compile, false }
] },
{ rabbitmq_management, [ { listener, [
{ port, 15672 },
{ ssl, false }
] },
{ load_definitions, "/etc/rabbitmq/definitions.json" }
] }
].
Then prepare an appropriate Dockerfile:
FROM rabbitmq:3.6.14-management-alpine
ADD definitions.json /etc/rabbitmq
ADD rabbitmq.config /etc/rabbitmq
EXPOSE 4369 5672 25672 15672
The definitions file and config are baked into the image at build time and loaded when the broker starts, so every container you run from this image comes up with all definitions already applied.
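Usage is then an ordinary build and run (the image tag is arbitrary):
docker build -t my-rabbitmq .
docker run -d -p 5672:5672 -p 15672:15672 my-rabbitmq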
You could start your container with RabbitMQ, configure the resources (queues, exchanges, bindings) and then commit your configured container as a new image. This image can be used to start new containers.
More details at https://docs.docker.com/articles/basics/#committing-saving-a-container-state
I am not sure that this is an option, but the absolute easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container as part of the RabbitMQ cluster. The configuration of the queues will be copied to the second container.
Then you can stop the container and use docker commit to create a versioned image of it in your Docker repository. The commit only saves the changes you have made on top of the base image, and it means you don't have to worry about re-importing the configuration each time. You just pull the latest image to get the latest configuration!
Modern releases support definition import directly in the core, without the need to preconfigure the management plugin.
# New in RabbitMQ 3.8.2.
# Does not require management plugin to be enabled.
load_definitions = /path/to/definitions/file.json
From Schema Definition Export and Import RabbitMQ documentation.
If you use the official rabbitmq image you can mount the definitions file into /etc/rabbitmq as shown below; RabbitMQ will load these definitions when the daemon starts:
docker run -v ./your_local_definitions_file.json:/etc/rabbitmq/definitions.json ......