How does AKS handle the .env file in a container?

Assume there is a backend application with a private key stored in a .env file.
The project file structure is:
|-App files
|-Dockerfile
|-.env
If I run the Docker image locally, the application can be reached normally by using a valid public key in the API request. However, if I deploy the container into an AKS cluster using the same Docker image, the application fails.
I am wondering how a container in an AKS cluster handles the .env file. What should I do to solve this problem?

Moving this out of comments for better visibility.
First and most important: Docker is not the same as Kubernetes, and what works in Docker won't necessarily work directly in Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool that sits on top of a container runtime (not always Docker these days; containerd is widely used as well).
There are many resources on the internet that describe the key differences; for example, this one from the Microsoft docs.
First, ConfigMaps and Secrets have to be created:
Creating and managing ConfigMaps and creating and managing Secrets.
There are different types of Secrets which can be created.
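For example, a minimal sketch of turning that .env file into a Secret from the command line (the Secret name backend-env is made up; --from-env-file expects KEY=VALUE lines):
# Every KEY=VALUE line in .env becomes a key in the Secret
kubectl create secret generic backend-env --from-env-file=.env
# Inspect it: describe lists only the key names and sizes, not the values
kubectl describe secret backend-env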
Use ConfigMaps/Secrets as environment variables.
Referencing ConfigMaps and Secrets as environment variables looks like this (ConfigMaps and Secrets have the same structure):
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - ...
    env:
    - name: ADMIN_PASS
      valueFrom:
        secretKeyRef: # secretKeyRef is used here because this is sensitive data
          key: admin
          name: admin-password
    - name: MYSQL_DB_STRING
      valueFrom:
        configMapKeyRef: # this is not sensitive data, so a ConfigMap can be used
          key: db_config
          name: connection_string
...
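If the whole .env file was loaded into a single Secret (for example the hypothetical backend-env Secret from the kubectl sketch above), all of its keys can also be exposed at once with envFrom instead of listing each variable; a minimal sketch:
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: backend
    image: my-backend:latest # hypothetical image name
    envFrom:
    - secretRef:
        name: backend-env # every key in the Secret becomes an environment variable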
Use ConfigMaps/Secrets as volumes (they will be presented as files).
Below is an example of using Secrets as files mounted in a specific directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    spec:
      containers:
      - ...
        volumeMounts:
        - name: secrets-files
          mountPath: "/mnt/secret.file1" # the "secret.file1" file will be created in the "/mnt" directory
          subPath: secret.file1
      volumes:
      - name: secrets-files
        secret:
          secretName: my-secret # name of the Secret
There's a good article which explains and shows use cases of Secrets as well as their limitations, e.g. the size of a Secret is limited to 1 MiB.

Related

Deploying bluespice-free in Kubernetes

According to this source, I can persist data to /my/data/folder by running:
docker run -d -p 80:80 -v {/my/data/folder}:/data bluespice/bluespice-free
I have created the following deployment but am not sure how to use a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
      - name: bluespice
        image: bluespice/bluespice-free
        ports:
        - containerPort: 80
        env:
        - name: bs_url
          value: "https://bluespice.mycompany.local"
My PersistentVolumeClaim name is bluespice-pvc.
Also, I have already deployed the pod without a persistent volume. Can I attach a persistent volume on the fly to keep the data?
If you want to mount a local directory, you don't have to deal with a PVC, since you can't force a specific host path in a PersistentVolumeClaim anyway. For testing locally, you can use hostPath, as explained in the documentation:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
running cAdvisor in a container; use a hostPath of /sys
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
In addition to the required path property, you can optionally specify a type for a hostPath volume.
hostPath configuration example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bluespice
  namespace: default
  labels:
    app: bluespice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bluespice
  template:
    metadata:
      labels:
        app: bluespice
    spec:
      containers:
      - image: bluespice/bluespice-free
        name: bluespice
        volumeMounts:
        - mountPath: /data
          name: bluespice-volume
      volumes:
      - name: bluespice-volume
        hostPath:
          # directory location on host
          path: /my/data/folder
          # this field is optional
          type: Directory
However, if you want to move to a production cluster, you should consider a more reliable option, since hostPath volumes are a security risk and are not portable:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.
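In practice that means setting readOnly on the volumeMounts entry whenever the workload does not need to write to the host path; a fragment sketch with made-up names:
volumeMounts:
- name: host-config
  mountPath: /etc/myapp/config
  readOnly: true # the container can read but not modify the host directory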
For more information about PersistentVolumes, you can check the official Kubernetes documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
Therefore, I would recommend using a cloud storage solution such as a GCP or AWS disk, or at least an NFS share mounted directly from Kubernetes. Also check this topic on Stack Overflow.
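Since you already have a PersistentVolumeClaim named bluespice-pvc, a minimal sketch of mounting it instead of the hostPath could look like this (only the pod template spec of the Deployment is shown; the volume name bluespice-data is arbitrary):
spec:
  template:
    spec:
      containers:
      - name: bluespice
        image: bluespice/bluespice-free
        volumeMounts:
        - name: bluespice-data
          mountPath: /data
      volumes:
      - name: bluespice-data
        persistentVolumeClaim:
          claimName: bluespice-pvc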
About your last question: it's not possible to attach a persistent volume to running pods on the fly; you need to update the Deployment spec, which will recreate the pods.

Expose volumes in Helm just like in docker

I'm creating an application that uses Helm (v3.3.0) + k3s. A program in a container uses different configuration files. As of now there are just a few config files (which I added manually before building the image), but I'd like to be able to add them dynamically while the container is running and not lose them once the container/pod is dead. In Docker I'd do that by exposing a folder like this:
docker run [image] -v /host/path:/container/path
Is there an equivalent for helm?
If not, how would you suggest solving this issue without giving up Helm/k3s?
In Kubernetes (Helm is just a templating tool on top of it) you need to do two things to mount a host path inside a container:
spec:
  volumes:
  # 1. Declare a 'hostPath' volume under the pod's 'volumes' key:
  - name: name-me
    hostPath:
      path: /path/on/host
  containers:
  - name: foo
    image: bar
    # 2. Mount the declared volume inside the container using the volume name
    volumeMounts:
    - name: name-me
      mountPath: /path/in/container
There are lots of other volume types and examples in the Kubernetes documentation.
Kubernetes has a dedicated construct for holding configuration files, ConfigMaps. Helm in turn has support for Accessing Files Inside Templates which can help you copy them into ConfigMap objects. A minimal setup here would look like:
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.ini: |
{{ .Files.Get "config.ini" | indent 4 }}
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata: { ... }
spec:
  template:
    spec:
      volumes:
      - name: config-data
        configMap:
          name: my-config # matches the ConfigMap metadata: { name: }
      containers:
      - volumeMounts:
        - name: config-data # matches the volume name: in this file
          mountPath: /container/path
You can use Helm's templating constructs in various ways here: to dynamically construct the contents of the ConfigMap, to set an environment variable saying which file to use, and so on.
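For example, a sketch of parameterizing the file name through values.yaml (the configFile value name is made up):
# values.yaml
configFile: config.ini

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  {{ .Values.configFile }}: |
{{ .Files.Get .Values.configFile | indent 4 }}
Overriding configFile at install time (e.g. helm install --set configFile=other.ini ...) then selects a different file, as long as that file is also packaged in the chart.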
Do not use hostPath volumes here. Since Kubernetes is designed as a clustered environment, you do not have much control over which node a given pod will run on; you would have to copy these config files to every node in the cluster and try to update them all when a file changed. That's a huge maintenance problem, especially if you don't have direct filesystem access to the nodes.

Kubernetes / Docker - SSL certificates for web service use

I have a Python web service that collects data from frontend clients. Every few seconds, it creates a Pulsar producer on our topic and sends the collected data. I have also set up a dockerfile to build an image and am working on deploying it to our organization's Kubernetes cluster.
The Pulsar code relies on certificate and key .pem files for TLS authentication, which are loaded via file paths in the test code. However, if the .pem files are included in the built Docker image, it will result in an obvious compliance violation from the Twistlock scan on our Kubernetes instance.
I am pretty inexperienced with Docker, Kubernetes, and security with certificates in general. What would be the best way to store and load the .pem files for use with this web service?
You can mount the certificates into the Pod with a Kubernetes Secret.
First, you need to create a Kubernetes secret:
(Copy your certificate to a machine where kubectl is configured for your Kubernetes cluster, for example the file mykey.pem into the /opt/certs folder.)
kubectl create secret generic mykey-pem --from-file=/opt/certs/
Confirm it was created correctly:
kubectl describe secret mykey-pem
Mount your secret in your deployment (for example nginx deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - mountPath: "/etc/nginx/ssl"
          name: nginx-ssl
          readOnly: true
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-ssl
        secret:
          secretName: mykey-pem
      restartPolicy: Always
After that, the .pem files will be available inside the container under /etc/nginx/ssl, and you don't need to include them in the Docker image.
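If you only need some of the files from the Secret, or want to control the file names, you can additionally project specific keys; a sketch, assuming the Secret contains a key named mykey.pem (key names match the file names used with --from-file):
volumes:
- name: nginx-ssl
  secret:
    secretName: mykey-pem
    items:
    - key: mykey.pem  # key inside the Secret
      path: mykey.pem # file name created under the mountPath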

On a Kubernetes Pod, what is the ConfigMap directory location?

Many applications require configuration via some combination of config files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable. The ConfigMap API resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.
I am unable to find where ConfigMaps are saved. I know they are created; however, I can only read them via the minikube dashboard.
ConfigMaps in Kubernetes can be consumed in many different ways, and mounting them as a volume is one of those ways.
You can choose where you would like to mount the ConfigMap on your Pod. Example from K8s documentation:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
Pod
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config
  restartPolicy: Never
Note the volumes definition and the corresponding volumeMounts.
Other ways include:
Consumption via environment variables (see the sketch below)
Consumption via command-line arguments
Refer to the documentation for full examples.
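For instance, the environment-variable approach with the special-config ConfigMap above looks roughly like this (the SPECIAL_HOW variable name is arbitrary):
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    env:
    - name: SPECIAL_HOW
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how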

Inject code/files directly into a container in Kubernetes on Google Cloud Engine

How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way that you can mount host files / directories with Docker, e.g.
docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"
Thanks
It is possible to use ConfigMaps to achieve that goal.
The following example mounts a MariaDB configuration file into a MariaDB Pod:
ConfigMap
apiVersion: v1
data:
  charset.cnf: |
    [client]
    # Default is Latin1, if you need UTF-8 set this (also in server section)
    default-character-set = utf8
    [mysqld]
    #
    # * Character sets
    #
    # Default is Latin1, if you need UTF-8 set all this (also in client section)
    #
    character-set-server = utf8
    collation-server = utf8_unicode_ci
kind: ConfigMap
metadata:
  name: mariadb-configmap
MariaDB deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
        version: 10.1.16
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.1.16
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: rootpassword
        volumeMounts:
        - name: mariadb-data
          mountPath: /var/lib/mysql
        - name: mariadb-config-file
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: mariadb-data
        hostPath:
          path: /var/lib/data/mariadb
      - name: mariadb-config-file
        configMap:
          name: mariadb-configmap
It is also possible to use the subPath feature, which has been available in Kubernetes since version 1.3, as stated here.
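A fragment sketch of subPath, reusing the mariadb-config-file volume from the example above, in case you want to add a single file to /etc/mysql/conf.d without hiding the rest of the directory:
volumeMounts:
- name: mariadb-config-file
  mountPath: /etc/mysql/conf.d/charset.cnf # only this file is taken from the volume
  subPath: charset.cnf                     # key name inside the ConfigMap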
I'm not sure you can do exactly that. Kubernetes does things quite differently from Docker and isn't really designed around interacting with the 'host' the way you may be used to with Docker.
A few alternative possibilities come to mind. First, and probably least ideal but closest to what you are asking, would be to add the file after the container is running, either by adding commands or args to the pod spec, or by using kubectl exec and echoing the contents into the file. Second would be to create a volume where that file already exists, e.g. create a GCE or EBS disk, add that file, and then mount the file location (read-only) in the container's spec. Third would be to create a new Docker image where that file or other code already exists.
For the first option, kubectl exec is only suitable for one-off jobs; it isn't very scalable or repeatable. Any creation/fetching at runtime adds that much overhead to the start time of the container, so I normally go with the third option, building a new Docker image whenever the file or code changes. The more you change it, the more you'll probably want a CI system (like Drone) to help automate the process.
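For completeness, the first (one-off) option, reusing the nginx.ssl.conf file from your question, could look roughly like this (the pod name my-pod is made up; the change is lost as soon as the pod is recreated):
# Copy a local file into a running pod
kubectl cp ./nginx.ssl.conf my-pod:/etc/nginx/conf.d/default.conf
# Or write the contents directly with a shell inside the pod
kubectl exec my-pod -- sh -c 'echo "some config line" > /etc/nginx/conf.d/default.conf'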
Add a comment if I should expand any of these options with more details.
Kubernetes allows you to mount volumes into your pod. One such volume type is hostPath (link) which allows you to mount a directory from the host into the pod.
