I have a process running in an Ubuntu Docker container, which I start with the following command:
docker run container_name:latest -ms server_name -msp server_port -ma server_address -ir receiver_ip_address -pr receiver_port -s sleep_time -r true
An ENTRYPOINT is defined in the Dockerfile, and the arguments passed to docker run are appended to it.
With the following deployment.yaml:
containers:
  - name: container_name
    image: {{ .Values.global.container_name }}:{{ .Values.global.container_name.tag }}
    args:
    {{ range .Values.global.container_args }}
      - {{ . }}
    {{ end }}
And the following values.yaml:
global:
  container_args: ['-ms','server_name','-msp','server_port','-ma','server_address','-ir','receiver_ip_address','-pr','receiver_port','-s','sleep_time','-r','true']
In particular, the -r and -s flags are passed from the values.yaml file as a boolean and an integer, like server_port (shown above inside single quotes as placeholders).
I get the following error:
INSTALLATION FAILED: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Args: []string: ReadString: expects " or n, but found 1, error found in #10 byte of ...|","-msp",server_port,"-ma","|..., bigger context ...|args":["-ms","server_name","-msp",server_port,"-ma","server_address","-m|...
After removing the integer and boolean values from values.yaml and inserting random strings instead, I noticed that the pod started (with an application error) and the previous error no longer appeared.
While trying to figure out the reason for this problem, I found this post:
Error when running Helm Chart with environment variables
I changed the deployment.yaml template from:
containers:
  - name: container_name
    image: {{ .Values.global.container_name }}:{{ .Values.global.container_name.tag }}
    args:
    {{ range .Values.global.container_args }}
      - {{ . }}
    {{ end }}
to:
containers:
  - name: container_name
    image: {{ .Values.global.container_name }}:{{ .Values.global.container_name.tag }}
    args:
    {{ range .Values.global.container_args }}
      - {{ . | quote }}
    {{ end }}
using the quote function.
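With quote in place, each element is rendered as a YAML string, so helm template should now produce something like the following (a sketch using the placeholder values above; actual values would be the real port numbers and booleans, now safely quoted):

args:
  - "-ms"
  - "server_name"
  - "-msp"
  - "server_port"
  - "-ma"
  - "server_address"
  - "-ir"
  - "receiver_ip_address"
  - "-pr"
  - "receiver_port"
  - "-s"
  - "sleep_time"
  - "-r"
  - "true"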
Related
I have this definition in my values.yaml, which is supplied to job.yaml:
command: ["/bin/bash"]
args: ["-c", "cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]
However, after the pod initializes, I get this error:
/bin/bash: - : invalid option
If I try this syntax:
command: ["/bin/sh", "-c"]
args:
- >
cd /opt/nonrtric/ric-common/ansible/ &&
cat group_vars/all
I get this error: Error: failed to start container "ric-register-avro": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
Both sh and bash are present in the image, which is based on CentOS 7.
job.yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ric-register-avro
spec:
  backoffLimit: 0
  template:
    spec:
      containers:
        - image: "{{ .Values.ric_register_avro_job.image }}"
          name: "{{ .Values.ric_register_avro_job.name }}"
          command: {{ .Values.ric_register_avro_job.containers.command }}
          args: {{ .Values.ric_register_avro_job.containers.args }}
          volumeMounts:
            - name: all-file
              mountPath: "/opt/nonrtric/ric-common/ansible/group_vars/"
              readOnly: true
              subPath: all
      volumes:
        - name: all-file
          configMap:
            name: ric-register-avro--configmap
      restartPolicy: Never
values.yaml
global:
  name: ric-register-avro
  namespace: foo-bar
ric_register_avro_job:
  name: ric-register-avro
  all_file:
    rest_api_url: http://10.230.227.13/foo
    auth_username: foo
    auth_password: bar
  backoffLimit: 0
  completions: 1
  image: 10.0.0.1:5000/5gc/ric-app
  containers:
    name: ric-register-avro
    command: ["/bin/bash"]
    args: ["-c cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]
  restartPolicy: Never
In your Helm chart, you directly specify command: and args: using template syntax:
command: {{ .Values.ric_register_avro_job.containers.command }}
args: {{ .Values.ric_register_avro_job.containers.args }}
However, the output of a {{ ... }} block is always a string. If the value you have inside the template is some other type, like a list, it will be converted to a string using some default Go rules, which aren't especially useful in a Kubernetes context.
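For instance, with the values above, a bare {{ ... }} around a list typically renders to Go's default string form, roughly like this (illustrative):

command: [/bin/bash]
args: [-c cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all]

Kubernetes cannot read that back as the intended list of separate strings, which is consistent with the /bin/bash: - : invalid option error above.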
Helm includes two lightly-documented conversion functions, toJson and toYaml, that can help here. Valid JSON is also valid YAML, so one easy approach is just to convert both parts to JSON:
command: {{ toJson .Values.ric_register_avro_job.containers.command }}
args: {{ toJson .Values.ric_register_avro_job.containers.args }}
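With the values from the question (using the two-element args shown at the top), that should render to roughly:

command: ["/bin/bash"]
args: ["-c","cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]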
or, if you want it to look a little more like normal YAML,
command:
{{ .Values.ric_register_avro_job.containers.command | toYaml | indent 12 }}
args:
{{ .Values.ric_register_avro_job.containers.args | toYaml | indent 12 }}
or, for that matter, if you're passing a complete container description via Helm values, it could be enough to
containers:
  - name: ric_register_avro_job
{{ .Values.ric_register_avro_job.containers | toYaml | indent 10 }}
In all of these cases, I've put the templating construct starting at the first column, but then used the indent function to correctly indent the YAML block. Double-check the indentation and adjust the indent parameter.
You can also double-check that what's coming out looks correct using helm template, using the same -f option(s) as when you install the chart.
(In practice, I might put many of the options you show directly into the chart template, rather than making them configurable as values. The container name, for example, doesn't need to be configurable, and I'd usually fix the command. For this very specific example you can also set the container's workingDir: rather than running cd inside a shell wrapper.)
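As a sketch of that last point, the shell wrapper could be dropped entirely by setting workingDir: on the container (paths and names are taken from the question; fixing the command in the template rather than in values is an assumption):

containers:
  - image: "{{ .Values.ric_register_avro_job.image }}"
    name: ric-register-avro
    # run cat directly in the target directory instead of "sh -c cd ...; cat ..."
    workingDir: /opt/nonrtric/ric-common/ansible
    command: ["cat", "group_vars/all"]
    # volumeMounts etc. as in the job.yaml above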
I use this:
command: ["/bin/sh"]
args: ["-c", "my-command"]
Trying this simple job, I have no issue:
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
        - name: foo
          image: centos:7
          command: ["/bin/sh"]
          args: ["-c", "echo 'hello world'"]
      restartPolicy: Never
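Applied back to the chart in the question, the same split is also needed in values.yaml, so that -c and the script are separate list elements rather than one string (a sketch; combine it with the toJson/toYaml rendering from the other answer):

ric_register_avro_job:
  containers:
    command: ["/bin/bash"]
    args: ["-c", "cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]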
I have a cluster that has numerous services running as pods from which I want to pull logs with fluentd. All services show logs when doing kubectl logs service. However, some logs don't show up in those folders:
/var/log
/var/log/containers
/var/log/pods
although the other containers are there. The containers that ARE there are created as a Cronjob, or as a Helm chart, like a MongoDB installation.
The containers that aren't logging are created by me with a Deployment file like so:
kind: Deployment
metadata:
  namespace: {{ .Values.global.namespace | quote }}
  name: {{ .Values.serviceName }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.serviceName }}
  template:
    metadata:
      labels:
        app: {{ .Values.serviceName }}
      annotations:
        releaseTime: {{ dateInZone "2006-01-02 15:04:05Z" (now) "UTC" | quote }}
    spec:
      containers:
        - name: {{ .Values.serviceName }}
          # local: use skaffold, dev: use passed tag, test: use released version
          image: {{ .Values.image }}
          {{- if (eq .Values.global.env "dev") }}:{{ .Values.imageConfig.tag }}{{ end }}
          imagePullPolicy: {{ .Values.global.imagePullPolicy }}
          envFrom:
            - configMapRef:
                name: {{ .Values.serviceName }}-config
          {{- if .Values.resources }}
          resources:
            {{- if .Values.resources.requests }}
            requests:
              memory: {{ .Values.resources.requests.memory }}
              cpu: {{ .Values.resources.requests.cpu }}
            {{- end }}
            {{- if .Values.resources.limits }}
            limits:
              memory: {{ .Values.resources.limits.memory }}
              cpu: {{ .Values.resources.limits.cpu }}
            {{- end }}
          {{- end }}
      imagePullSecrets:
        - name: {{ .Values.global.imagePullSecret }}
      restartPolicy: {{ .Values.global.restartPolicy }}
{{- end }}
and a Dockerfile CMD like so:
CMD ["node", "./bin/www"]
One assumption might be that the CMD doesn't pipe to STDOUT, but why would the logs show up in kubectl logs then?
This is how I would proceed to find out where a container is logging:
Identify the node on which the Pod is running with:
kubectl get pod pod-name -owide
SSH into that node; there you can check which logging driver the node is using with:
docker info | grep -i logging
If the output is json-file, the logs are being written to files as expected. If it is something different, the behavior depends on what that driver does (there are many drivers; they could write to journald, for example, or to other destinations).
If the logging driver writes to files, you can check the current output for a specific Pod once you know the container ID of that Pod. To get it, run this on a control-plane node:
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
(if there are more containers in the same pod, the index to use may vary, depending on which container you want to inspect)
The extracted ID will look something like docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c. Remove the docker:// prefix, SSH again into the node you identified before, and inspect the container with:
docker inspect container-id | grep -i logpath
which should output the path of the file the container is actively writing its logs to.
In my case, the particular container I tried this procedure on, is currently logging into:
/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log
I am trying to make a Helm chart for my custom ELK stack. I use the stable/elasticsearch-curator chart as a dependency.
In my values.yaml file, I use some env variables to pass the elasticsearch host:
esClusterName: &esClusterName "elasticsearch-logs"
...
elasticsearch-curator:
  env:
    ES_CLUSTER_NAME: *esClusterName
    ELASTICSEARCH_HOST: $(ES_CLUSTER_NAME)-master
  configMaps:
    config_yml: |-
      ---
      client:
        hosts:
          - ${ELASTICSEARCH_HOST}
        port: 9200
But the variable is not correctly interpolated as shown by this error message:
HTTP N/A error: HTTPConnectionPool(host='$(es_cluster_name)-master', port=9200): Max retries exceeded with ...
Inside my pod, ELASTICSEARCH_HOST = '$(es_cluster_name)-master' (a string made of my variable name in lowercase plus "-master") instead of "elasticsearch-logs-master".
I cannot wrap my head around this. I have used the same technique (env variable interpolation) for the other dependencies and it works.
The only difference I see is that the helm chart for elasticsearch-curator passes env variables differently from the other charts:
# stable/elasticsearch-curator/templates/cronjob.yaml (the file is here)
env:
  {{- if .Values.env }}
  {{- range $key,$value := .Values.env }}
  - name: {{ $key | upper | quote }}
    value: {{ $value | quote }}
  {{- end }}
  {{- end }}
And this template expects the values to be passed in values.yaml like so (the file is here):
env:
  MY_ENV_VAR: value1
  MY_OTHER_VAR: value2
whereas all the other templates do it this way (example file):
env: {{ toYaml .Values.extraEnvs | nindent 10 }}
with a values.yaml like so (example file):
extraEnvs:
  - name: MY_ENVIRONMENT_VAR
    value: the_value_goes_here
But I'm not sure if this difference explains my problem. So my question is: how do I make it work?
I replaced ELASTICSEARCH_HOST with ES_HOST like so:
elasticsearch-curator:
  env:
    ES_CLUSTER_NAME: *esClusterName
    ES_HOST: $(ES_CLUSTER_NAME)-master
  configMaps:
    config_yml: |-
      ---
      client:
        hosts:
          - ${ES_HOST}
        port: 9200
and it just worked!
I think it comes from the fact that when values.yaml is parsed, the keys from the env: object are sorted in alphabetical order:
env: {
  ELASTICSEARCH_HOST: $(ES_CLUSTER_NAME)-master,
  ES_CLUSTER_NAME: "elasticsearch-logs"
}
Then, when the pod tries to interpolate the value of ES_CLUSTER_NAME inside ELASTICSEARCH_HOST, it does not work since it doesn't know the value of ES_CLUSTER_NAME yet.
It would be nice to have this confirmed (or refuted).
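This matches how Kubernetes handles dependent environment variables: a $(VAR) reference is only expanded if VAR is defined earlier in the env list, and Go templates iterate map keys in sorted order, so ELASTICSEARCH_HOST ends up before ES_CLUSTER_NAME while ES_HOST ends up after it. A sketch of the two rendered orderings (values taken from the snippets above):

# Rendered with ELASTICSEARCH_HOST: the reference comes first, so it is
# never expanded and stays a literal string inside the container.
env:
  - name: "ELASTICSEARCH_HOST"
    value: "$(ES_CLUSTER_NAME)-master"
  - name: "ES_CLUSTER_NAME"
    value: "elasticsearch-logs"

# Rendered with ES_HOST: ES_CLUSTER_NAME is defined first, so Kubernetes
# expands $(ES_CLUSTER_NAME) and ES_HOST becomes "elasticsearch-logs-master".
env:
  - name: "ES_CLUSTER_NAME"
    value: "elasticsearch-logs"
  - name: "ES_HOST"
    value: "$(ES_CLUSTER_NAME)-master"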
I am writing a simple Dockerfile for a Go service. I'm still getting familiar with Docker, so I know what I want to do but not how to do it.
What I have right now (below) exposes port 8080, but I want to expose port 80 and forward it to port 8080.
I know I can do it via docker run -p, but I'm wondering whether there's a way to set it up in the Dockerfile or somewhere else. I'm also trying to find out how to do it through Helm.
Dockerfile:
FROM scratch
COPY auth-service /auth-service
EXPOSE 8080
CMD ["/auth-service","-logtostderr=true", "-v=-1"]
EXPOSE informs Docker that the container listens on the specified network ports at runtime, but it does not actually publish the ports. Only -p, as you already mentioned, will do that:
docker run -p $HOSTPORT:$CONTAINERPORT
Or you can opt for a docker-compose file; it's an extra file, but it does the same thing for you:
version: "2"
services:
my_service:
build: .
name: my_container_name
ports:
- 80:8080
.....
Edit:
If you are using Helm, you just have to use the exposed Docker port as your targetPort:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}  # 8080
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "fullname" . }}
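The matching values would then map the external port 80 to the container's 8080, along the lines of the following sketch (key names follow the template above; the service name and type are illustrative):

service:
  type: ClusterIP
  name: auth-service
  externalPort: 80
  internalPort: 8080  # the port the container EXPOSEs and listens on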
I want a playbook that will start a container (in a task) and only link it to another container if the link is provided in a variable. For example:
- name: Start container
  docker_container:
    image: somerepo/app-server:{{ var_tag }}
    name: odoo-server
    state: started
    log_opt: "tag=app-server-{{ var_tag }}"
    expose:
      - 8080
    links:
      - "{{ var_db_link }}"
  when: var_db_link is defined
But of course this does not work. (I know a - without a value is invalid; this is just pseudocode.)
The whole task is actually quite a bit larger because it includes other directives, so I really don't want to have two versions of the task defined, one that starts with a link and another without.
When you use '-', it implies there is a concrete value, so here is a way to avoid that:
---
- hosts: localhost
  tasks:
    - name: Start container
      docker_container:
        image: centos
        name: odoo-server
        state: started
        expose:
          - 8080
        links: "{{ var_db_link | default([]) }}"
Then test it with:
ansible-playbook ha.yml -e var_db_link="redis-master:centos"
ansible-playbook ha.yml
It runs normally!
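As an alternative (my own addition, not part of the answer above), Ansible's special omit variable drops the parameter from the module call entirely when the variable is undefined, instead of passing an empty list:

- name: Start container
  docker_container:
    image: centos
    name: odoo-server
    state: started
    expose:
      - 8080
    # links is left out of the module call entirely when var_db_link is undefined
    links: "{{ var_db_link | default(omit) }}"

Both approaches avoid duplicating the task.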