Azure IoT Edge Module: Docker Container Run Time Options in VS Code Deployment Template - azure-iot-edge

I need to deploy a docker container "sample/api:v1" as an Azure IoT Edge module, with docker runtime options such as --gpus=all and -e DATABASE=NO. How do I specify these docker runtime options in the VS Code deployment.template.json?
docker run -dt --gpus=all -e DATABASE=NO -p 5656:5656 sample/api:v1

This is probably what you're looking for. You need to specify these settings as the createOptions of your module in the deployment JSON:
{
  "ENV": [
    "DATABASE=NO"
  ],
  "HostConfig": {
    "DeviceRequests": [
      {
        "Driver": "",
        "Count": -1,
        "DeviceIDs": null,
        "Capabilities": [
          [
            "gpu"
          ]
        ],
        "Options": {}
      }
    ],
    "PortBindings": {
      "5656/tcp": [
        {
          "HostPort": "5656"
        }
      ]
    }
  }
}
In ENV you'll find your environment variable. Under PortBindings you can specify what ports need to be open.
Under DeviceRequests you'll find the createOptions equivalent of --gpus=all. Support for this in Azure IoT Edge was added in 1.0.10, so make sure to update your edge runtime. There are GitHub issues where others have done the same.
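For context, here is a minimal sketch of how that createOptions object might sit inside the modules section of deployment.template.json (the module name "sampleapi" and the surrounding fields are illustrative; the VS Code tooling serializes createOptions into a string when it generates the final deployment manifest):
"modules": {
  "sampleapi": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "sample/api:v1",
      "createOptions": {
        "ENV": [ "DATABASE=NO" ],
        "HostConfig": {
          "DeviceRequests": [
            { "Driver": "", "Count": -1, "DeviceIDs": null, "Capabilities": [ [ "gpu" ] ], "Options": {} }
          ],
          "PortBindings": { "5656/tcp": [ { "HostPort": "5656" } ] }
        }
      }
    }
  }
}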

Related

Find out value of environment variables being passed to container by docker-compose

I'm troubleshooting a service that's failing to start because of an environment variable issue. I'd like to check out what the environment variables look like from the inside of the container. Is there a command I can pass in my docker-compose.yaml so that instead of starting the service it prints the relevant environment variable to standard output and exits?
Try this:
docker-compose run rabbitmq env
This will run env inside the rabbitmq service. env will print all environment variables (from the shell).
If the service is already running, you can do this instead, which will run env in a new shell in the existing container (which is faster since it does not need to spin up a new instance):
docker-compose exec rabbitmq env
Get the container ID with docker ps.
Then open a shell in the running rabbitmq container by running docker exec with the container ID of your rabbitmq container.
Once you are in the rabbitmq container, you can echo the value of any environment variable like you would on any other Linux system. E.g. if you declared ENV DEBUG=true at image build time, then you should be able to retrieve that value with echo $DEBUG in the container. Furthermore, once you are in the container, you can poke around the log files for more investigation.
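For example (the container ID and variable name are placeholders):
docker ps                           # find the container ID
docker exec -it <container-id> sh   # open a shell in the running container
echo $DEBUG                         # print the variable's value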
As others have said, first get the container ID with docker ps. When you have done that, view all the properties with docker inspect <id> and you will see something like:
[
  {
    ...
    "Config": {
      ...
      "Env": [
        "ASPNETCORE_URLS=http://+:80",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "DOTNET_RUNNING_IN_CONTAINER=true",
        "DOTNET_VERSION=6.0.1",
        "ASPNET_VERSION=6.0.1",
        "Logging__Console__FormatterName=Json"
      ],
      ...
    }
  }
]
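If you only want the environment variables rather than the full inspect output, a Go-template filter narrows it down (a sketch; substitute your own container ID):
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' <container-id>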

Implementing Consul healthcheck in Docker environment

I am new to Consul/Registrator and Docker, and I am confused about using the Consul healthcheck in a Docker environment. It is described in the following link, within the Docker + Interval section: https://www.consul.io/docs/agent/checks.html
Here is the example of Consul healthcheck definition described in the link:
{
  "check": {
    "id": "mem-util",
    "name": "Memory utilization",
    "docker_container_id": "f972c95ebf0e",
    "shell": "/bin/bash",
    "args": ["/usr/local/bin/check_mem.py"],
    "interval": "10s"
  }
}
Is the healthcheck script (check_mem.py in the example) inside the docker image or outside of it? Do we have to know the ID of the container and manually insert it in the docker_container_id field? (That would not be a very efficient way.)
I have been googling around and the only answer I can find is at the end of the following discussion:
https://github.com/hashicorp/consul/issues/3182
But that code is a workaround - it uses Docker's own healthcheck and the Registrator variable ENV SERVICE_CHECK_SCRIPT; it does not use a Consul healthcheck script.
Can anybody help me understand how Consul healthchecks work in a Docker environment?
The Consul Docker check runs the script inside the Docker container (so the script has to be present in the image).
Your example is equivalent to docker exec f972c95ebf0e /bin/bash /usr/local/bin/check_mem.py

Dockerizing Spring boot application for Kubernetes Deployment

I have a spring boot application with some properties as below in my application.properties.
server.ssl.keyStore=/users/admin/certs/appcert.jks
server.ssl.keyStorePassword=certpwd
server.ssl.trustStore=/users/admin/certs/cacerts
server.ssl.trustStorePassword=trustpwd
Here the cert paths are hardcoded. But I don't want to hardcode them, as the paths will not be known in a Mesos or Kubernetes world.
I have a docker file as follows.
FROM docker.com/base/jdk1.8:latest
MAINTAINER Application Engineering [ https://docker.com/ ]
RUN mkdir -p /opt/docker/svc
COPY application/weather-service.war /opt/docker/svc/
CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml
Here, I can use the volume mount option in Kubernetes to place the application.properties file.
How can I achieve the same thing for the cert files referenced in application.properties?
The cert properties are optional for some applications and mandatory for others.
I need options both for bundling the certs within the docker image and for keeping the cert files outside the docker image.
Approach 1. Within the docker image
Remove the property "server.ssl.keyStore" from application.properties and pass it as a JVM argument like the one below.
CMD java -jar /opt/docker/svc/weather-service.war
--spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks
Now the cert should be placed in a Secret and mounted into the container using the Kubernetes volume mount options.
Approach 2. No need to have -Dserver.ssl.keyStore=/certs/appcert.jks in the docker file; still remove the property "server.ssl.keyStore" from application.properties and do as follows.
a. Create secret
kubectl create secret generic svc-truststore-cert \
  --from-file=./cacerts
b. Create one env variable as below.
{
  "name": "JAVA_OPTS",
  "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
}
c. Create Volume mounts under container for pod.
"volumeMounts": [
{
"name": "truststore-cert",
"mountPath": "/certs/truststore"
}
]
d. Create a volume under spec.
{
  "name": "truststore-cert",
  "secret": {
    "secretName": "svc-truststore-cert",
    "items": [
      {
        "key": "cacerts",
        "path": "cacerts"
      }
    ]
  }
}
Approach 3. Using a Kubernetes Persistent Volume.
Create a persistent volume on Kubernetes.
Mount the volume into the Pod of each microservice (changes in the Pod spec file). The mounted file system is then accessible via the '/shared/folder/certs' path.
CMD java -jar /opt/docker/svc/weather-service.war
--spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks
I have taken the second approach. Is this correct? Is there any other better approach?
Thanks
Yes, the second approach is the best one, and it is the only way if you are storing some sensitive data like certificates, keys, etc. That topic is covered in the documentation.
Moreover, you can encrypt your secrets to add another level of protection.
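To make the moving parts concrete, here is a minimal sketch of how the pieces from approach 2 could fit together in the Pod spec (the container name and image are illustrative; the volume, mount path and secret name are taken from the steps above):
"spec": {
  "containers": [
    {
      "name": "weather-service",
      "image": "weather-service:latest",
      "env": [
        {
          "name": "JAVA_OPTS",
          "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts"
        }
      ],
      "volumeMounts": [
        {
          "name": "truststore-cert",
          "mountPath": "/certs/truststore",
          "readOnly": true
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "truststore-cert",
      "secret": {
        "secretName": "svc-truststore-cert"
      }
    }
  ]
}
Note that JAVA_OPTS is not picked up by the JVM automatically; the container's start command has to expand it, for example java $JAVA_OPTS -jar /opt/docker/svc/weather-service.war.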

How to create Production ready Docker images using best practices?

Creating Docker images is a simple task for testing environments, but when it comes to production implementations we have to follow best practices to avoid security and workflow issues.
What are the best practices for creating a production-ready Docker image?
As described in Create Production Docker Images in 5 Steps by DevopsAnswers, the following steps can be considered a comprehensive guide to creating production-ready Docker images.
When creating production Docker images, you should have a solid understanding of Docker best practices.
Step 1: Use lightweight base Docker images
It's better to use a lightweight base image rather than a bulky one, since the resulting Docker image is more convenient to work with when it's smaller.
If you plan to use Docker in highly critical production systems, where you cannot afford even a few seconds of downtime, then the first thing to choose is a lightweight base image for your custom Docker image.
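As a rough comparison (tags are illustrative), an Alpine-based image is many times smaller than a full distribution image:
# Full distribution base image (tens of MB and up)
# FROM ubuntu:20.04

# Lightweight base image (around 5 MB)
FROM alpine:3.15
RUN apk add --no-cache curl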
Step 2: Reduce intermediate layers
In a Dockerfile, every instruction such as FROM, LABEL, RUN, CMD, ADD, etc. adds a new layer to the Docker image. So reducing the number of times the same instruction is repeated is a best practice, as it gives you a slightly smaller image.
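For example (package names are illustrative), chaining related commands into a single RUN instruction produces one layer instead of three:
FROM ubuntu:20.04
# Instead of three separate RUN layers:
#   RUN apt-get update
#   RUN apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*
# combine them into a single layer:
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*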
Step 3: Choose specific versions
It’s a good practice to choose specific versions in Docker instructions, because it will keep things nice and steady for a production implementation.
Imagine we use Ubuntu:latest as the base image. It will use the currently available latest Ubuntu image for our custom Docker image, and we will set up all the software components based on that Ubuntu version.
When Ubuntu updates the latest tag with a newer base image in Docker Hub, you might experience package dependency issues or incompatibilities in your production Docker image.
In addition, we should always try to install specific package versions rather than installing the general package.
Example
Recommended: apt-get install mysql-server-5.5
Not recommended: apt-get install mysql-server
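The same applies to the base image itself; pinning the tag (versions below are illustrative) keeps rebuilds reproducible:
# Not recommended: "latest" moves whenever a new release is published
# FROM ubuntu:latest

# Recommended: pinned to a known release
FROM ubuntu:20.04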
Step 4: Do not include sensitive data
Handling sensitive data such as database credentials and API keys is a challenging task in Docker.
Do not hard-code any login credentials within a Docker image.
To overcome this limitation, you should use environment variables effectively.
For example, if you create a Drupal image that connects to a MySQL DB, you can keep the Drupal MySQL DB settings blank as below.
$databases = array (
  'default' =>
  array (
    'default' =>
    array (
      'database' => '',
      'username' => '',
      'password' => '',
      'host' => '',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ),
  ),
);
Now we can use an ENTRYPOINT script that leverages environment variables to fill in the Drupal MySQL DB credentials at runtime, as below.
#!/bin/sh
set -e
# Apache gets grumpy about PID files pre-existing
rm -f /var/run/apache2.pid
# Define Drupal home file path
DRUPAL_HOME="/var/www/html"
# Define Drupal settings file path
DRUPAL_SETTINGS_FILE="${DRUPAL_HOME}/sites/default/settings.php"
# Check the availability of environment variables
if [ -n "$DRUPAL_MYSQL_DB" ] && [ -n "$DRUPAL_MYSQL_USER" ] && [ -n "$DRUPAL_MYSQL_PASS" ] && [ -n "$DRUPAL_MYSQL_HOST" ] ; then
echo "Setting up Mysql DB in $DRUPAL_SETTINGS_FILE"
# Set Database
sed -i "s/'database' *=> *''/'database' => '"$DRUPAL_MYSQL_DB"'/g" $DRUPAL_SETTINGS_FILE
# Set Mysql username
sed -i "s/'username' *=> *''/'username' => '"$DRUPAL_MYSQL_USER"'/g" $DRUPAL_SETTINGS_FILE
# Set Mysql password
sed -i "s/'password' *=> *''/'password' => '"$DRUPAL_MYSQL_PASS"'/g" $DRUPAL_SETTINGS_FILE
# Set Mysql host
sed -i "s/'host' *=> *''/'host' => '"$DRUPAL_MYSQL_HOST"'/g" $DRUPAL_SETTINGS_FILE
fi
# Start Apache in foreground
tail -F /var/log/apache2/* &
exec /usr/sbin/apache2ctl -D FOREGROUND
Finally, you can simply define the environment variables during the Docker runtime like below.
docker run -d -t -i \
  -e DRUPAL_MYSQL_DB='database' \
  -e DRUPAL_MYSQL_USER='user' \
  -e DRUPAL_MYSQL_PASS='password' \
  -e DRUPAL_MYSQL_HOST='host' \
  -p 80:80 \
  -p 443:443 \
  --name <container name> \
  <custom image>
Step 5: Run CMD/ENTRYPOINT as a non-privileged user
It's always a good choice to run production systems as a non-privileged user, which is better from a security perspective as well.
You can simply put a USER entry before CMD or ENTRYPOINT in the Dockerfile as follows.
# Set running user of ENTRYPOINT
USER www-data
# Start entrypoint
ENTRYPOINT ["entrypoint"]

Import broker definitions into Dockerized RabbitMQ

I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.
Ideally, it would be done as easily as it is done via the API. I could write a bunch of rabbitmqctl commands, but with a lot of definitions this might take quite some time. Also, every change somebody else makes through the web interface would have to be added by hand.
I have managed to do what I want with a script that starts the server and fires a delayed curl request against the HTTP API, but this seems error prone and really not elegant. Are there any better ways to do definition importing/exporting, or is this the best that can be done?
My Dockerfile:
FROM rabbitmq:management
LABEL description="Rabbit image" version="0.0.1"
ADD init.sh /init.sh
ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
CMD ["/init.sh"]
init.sh
#!/bin/sh
sleep 10 && curl -i -u guest:guest -d @/rabbit_config.json -H "content-type:application/json" http://localhost:15672/api/definitions -X POST &
rabbitmq-server "$@"
Export definitions using rabbitmqadmin export rabbit.definitions.json.
Add them inside the image using your Dockerfile: ADD rabbit.definitions.json /tmp/rabbit.definitions.json
Add an environment variable when starting the container, for example, using docker-compose.yml:
environment:
- RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"
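A minimal docker-compose.yml sketch tying the three steps together (the service name, build context and ports are illustrative):
version: "3"
services:
  rabbitmq:
    build: .   # Dockerfile containing the ADD rabbit.definitions.json line above
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management load_definitions "/tmp/rabbit.definitions.json"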
There is a simple way to load definitions into a Docker container.
Use a preconfigured node to export the definitions to a JSON file.
Then move this file to the same folder as the Dockerfile and create a rabbitmq.config in the same folder too. Here is the content of rabbitmq.config:
[
  { rabbit, [
    { loopback_users, [ ] },
    { tcp_listeners, [ 5672 ] },
    { ssl_listeners, [ ] },
    { hipe_compile, false }
  ] },
  { rabbitmq_management, [
    { listener, [
      { port, 15672 },
      { ssl, false }
    ] },
    { load_definitions, "/etc/rabbitmq/definitions.json" }
  ] }
].
Then prepare an appropriate Dockerfile:
FROM rabbitmq:3.6.14-management-alpine
ADD definitions.json /etc/rabbitmq
ADD rabbitmq.config /etc/rabbitmq
EXPOSE 4369 5672 25672 15672
The definitions file is baked into the image at build time and loaded when the broker starts, so when you run the container all definitions are already applied.
You could start your container with RabbitMQ, configure the resources (queues, exchanges, bindings) and then commit your configured container as a new image. This image can be used to start new containers.
More details at https://docs.docker.com/articles/basics/#committing-saving-a-container-state
I am not sure that this is an option, but the absolute easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container as part of the RabbitMQ cluster. The configuration of the queues will be copied to the second container.
Then, you can stop the container and create a versioned image in your docker repository of the new container using docker commit. This process will only save the changes that you have made to your image, and then it would enable you to not have to worry about re-importing the configuration each time. You would just have to get the latest image to have the latest configuration!
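A sketch of that commit-and-tag step (the container and image names are assumptions):
# Commit the configured container as a new image, then push it to your registry
docker commit rabbitmq-configured myregistry/rabbitmq-preconfigured:1.0
docker push myregistry/rabbitmq-preconfigured:1.0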
Modern releases support definition import directly in the core, without the need to preconfigure the management plugin.
# New in RabbitMQ 3.8.2.
# Does not require management plugin to be enabled.
load_definitions = /path/to/definitions/file.json
From the Schema Definition Export and Import section of the RabbitMQ documentation.
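With the official image, one way to wire this up (file names and paths are assumptions) is to mount a rabbitmq.conf containing the load_definitions line together with the definitions file:
docker run -d --name rabbitmq \
  -v "$(pwd)/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro" \
  -v "$(pwd)/definitions.json:/etc/rabbitmq/definitions.json:ro" \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3.8-management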
If you use the official rabbitmq image, you can mount the definitions file into /etc/rabbitmq as shown below; RabbitMQ will load these definitions when the daemon starts.
docker run -v ./your_local_definitions_file.json:/etc/rabbitmq/definitions.json ......
