Named arguments not getting picked up from my kubernetes template - docker

I'm trying to update a Kubernetes template that we have so that I can pass in arguments such as --db-config <value> when my container starts up.
This is obviously not right, because they're not getting picked up:
...
containers:
- name: {{ .Chart.Name }}
...
args: ["--db-config", "/etc/app/cfg/db.yaml", "--tkn-config", "/etc/app/cfg/tkn.yaml"] <-- WHY IS THIS NOT WORKING

Here's an example showing your approach working:
main.go:
package main

import "flag"
import "fmt"

func main() {
	db := flag.String("db-config", "default", "some flag")
	tk := flag.String("tk-config", "default", "some flag")
	flag.Parse()
	fmt.Println("db-config:", *db)
	fmt.Println("tk-config:", *tk)
}
Dockerfile [simplified]:
FROM scratch
ADD kube-flags /
ENTRYPOINT ["/kube-flags"]
Test:
docker run kube-flags:180906
db-config: default
tk-config: default
docker run kube-flags:180906 --db-config=henry
db-config: henry
tk-config: default
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: gcr.io/.../kube-flags:180906
    imagePullPolicy: Always
    name: test
    args:
    - --db-config
    - henry
    - --tk-config
    - turnip
test:
kubectl logs test
db-config: henry
tk-config: turnip

Related

Kubernetes /bin/bash with -c argument returns - : invalid option

I have this definition in my values.yaml which is supplied to job.yaml
command: ["/bin/bash"]
args: ["-c", "cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]
However, after the pod initializes, I get this error:
/bin/bash: - : invalid option
If I try this syntax:
command: ["/bin/sh", "-c"]
args:
- >
cd /opt/nonrtric/ric-common/ansible/ &&
cat group_vars/all
I get this error: Error: failed to start container "ric-register-avro": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
Both sh and bash are supplied in the image, which is CentOS 7
job.yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ric-register-avro
spec:
  backoffLimit: 0
  template:
    spec:
      containers:
        - image: "{{ .Values.ric_register_avro_job.image }}"
          name: "{{ .Values.ric_register_avro_job.name }}"
          command: {{ .Values.ric_register_avro_job.containers.command }}
          args: {{ .Values.ric_register_avro_job.containers.args }}
          volumeMounts:
            - name: all-file
              mountPath: "/opt/nonrtric/ric-common/ansible/group_vars/"
              readOnly: true
              subPath: all
      volumes:
        - name: all-file
          configMap:
            name: ric-register-avro--configmap
      restartPolicy: Never
values.yaml
global:
  name: ric-register-avro
  namespace: foo-bar
ric_register_avro_job:
  name: ric-register-avro
  all_file:
    rest_api_url: http://10.230.227.13/foo
    auth_username: foo
    auth_password: bar
  backoffLimit: 0
  completions: 1
  image: 10.0.0.1:5000/5gc/ric-app
  containers:
    name: ric-register-avro
    command: ["/bin/bash"]
    args: ["-c cd /opt/nonrtric/ric-common/ansible/; cat group_vars/all"]
  restartPolicy: Never
In your Helm chart, you directly specify command: and args: using template syntax
command: {{ .Values.ric_register_avro_job.containers.command }}
args: {{ .Values.ric_register_avro_job.containers.args }}
However, the output of a {{ ... }} block is always a string. If the value you have inside the template is some other type, like a list, it will be converted to a string using some default Go rules, which aren't especially useful in a Kubernetes context.
Helm includes two lightly-documented conversion functions toJson and toYaml that can help here. Valid JSON is also valid YAML, so one easy approach is just to convert both parts to JSON
command: {{ toJson .Values.ric_register_avro_job.containers.command }}
args: {{ toJson .Values.ric_register_avro_job.containers.args }}
or, if you want it to look a little more like normal YAML,
command:
{{ .Values.ric_register_avro_job.containers.command | toYaml | indent 12 }}
args:
{{ .Values.ric_register_avro_job.containers.args | toYaml | indent 12 }}
or, for that matter, if you're passing a complete container description via Helm values, it could be enough to
containers:
- name: ric_register_avro_job
{{ .Values.ric_register_avro_job.containers | toYaml | indent 10 }}
In all of these cases, I've put the templating construct starting at the first column, but then used the indent function to correctly indent the YAML block. Double-check the indentation and adjust the indent parameter.
You can also double-check that what's coming out looks correct using helm template, using the same -f option(s) as when you install the chart.
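For example, a quick check might look like this (the release name, chart path, and values file here are placeholders):
helm template my-release ./my-chart -f values.yaml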
(In practice, I might put many of the options you show directly into the chart template, rather than making them configurable as values. The container name, for example, doesn't need to be configurable, and I'd usually fix the command. For this very specific example you can also set the container's workingDir: rather than running cd inside a shell wrapper.)
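As a rough sketch of that last point, reusing values from the question, the shell wrapper could be replaced with workingDir: like so:
containers:
  - name: "{{ .Values.ric_register_avro_job.name }}"
    image: "{{ .Values.ric_register_avro_job.image }}"
    workingDir: /opt/nonrtric/ric-common/ansible
    command: ["cat", "group_vars/all"]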
I use this:
command: ["/bin/sh"]
args: ["-c", "my-command"]
Trying this simple Job, I have no issue:
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
        - name: foo
          image: centos:7
          command: ["/bin/sh"]
          args: ["-c", "echo 'hello world'"]
      restartPolicy: Never

How do I create multiple containers and run different commands inside using k8s

I have a Kubernetes Job, job.yaml :
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: gcr.io/project-id/my-image:latest
          command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
I need to run the command with different parameters - I have about 30 parameters to run. I'm not sure what the best solution is here. I'm thinking of creating containers in a loop to run all the parameters. How can I do this? I want to run all the commands/containers simultaneously.
Some of the ways that you could do it, outside of the solutions proposed in other answers, are the following:
With a templating tool like Helm where you would template the exact specification of your workload and then iterate over it with different values (see the example)
Use the Kubernetes official documentation on work queue topics:
Indexed Job for Parallel Processing with Static Work Assignment - alpha
Parallel Processing using Expansions
Helm example:
Helm, in short, is a templating tool that allows you to template your manifests (YAML files). With it you could have multiple instances of Jobs, each with a different name and a different command.
Assuming that you've installed Helm by following this guide:
Helm.sh: Docs: Intro: Install
You can create an example Chart that you will modify to run your Jobs:
helm create chart-name
You will need to delete everything in the chart-name/templates/ directory and clear the chart-name/values.yaml file.
After that you can create your values.yaml file which you will iterate upon:
jobs:
  - name: job1
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(3)"']
    image: perl
  - name: job2
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(20)"']
    image: perl
templates/job.yaml
{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $jobs.name }}
  namespace: default # <-- FOR EXAMPLE PURPOSES ONLY!
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: {{ $jobs.image }}
          command: {{ $jobs.command }}
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
---
{{- end }}
Once you have the above files created, you can run the following command to see what will be applied to the cluster beforehand:
$ helm template . (inside the chart-name folder)
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job1
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(3)"]
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job2
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(20)"]
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
A side note #1!
This example will create X number of Jobs, each separate from the others. Please refer to the documentation on data persistence if the downloaded files need to be stored persistently (example: GKE).
A side note #2!
You can also add your namespace definition in the templates (templates/namespace.yaml) so it will be created before running your Jobs.
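A minimal sketch of such a template, using an example namespace name:
templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace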
You can also install the above Chart by running:
$ helm install chart-name . (inside the chart-name folder)
After that you should see 2 completed Jobs:
$ kubectl get pods
NAME         READY   STATUS      RESTARTS   AGE
job1-2dcw5   0/1     Completed   0          82s
job2-9cv9k   0/1     Completed   0          82s
And the output that they've created:
$ echo "one:"; kubectl logs job1-2dcw5; echo "two:"; kubectl logs job2-9cv9k
one:
3.14
two:
3.1415926535897932385
Additional resources:
Stackoverflow.com: Questions: Kubernetes creation of multiple deployment with one deployment file
In simpler terms, you want to run multiple commands. The following is a sample format to execute multiple commands in a pod:
command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
When we apply this logic to your requirement for two different operations:
command: ["sh", "-c", "run-vpn-script.sh /to/download/this && run-vpn-script.sh /to/download/another"]
If you want to run the same command multiple times, you can deploy the same YAML multiple times by just changing the name.
You can use the sed command to replace the values in the YAML and then apply the resulting YAML to the cluster to create the container.
Example job.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: gcr.io/project-id/my-image:latest
          command: COMMAND # need to run this multiple times
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
command:
sed -i 's|COMMAND|["sh", "run-vpn-script.sh", "/to/download/this"]|' job.yaml
The above command replaces the COMMAND placeholder in the YAML; you can then apply the YAML to the cluster to create the container. You can do the same for other variables.
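To cover all 30 parameters, a small shell loop can drive the same substitution without editing the file in place. This is only a sketch: it assumes a hypothetical params.txt with one parameter per line and the original job.yaml still containing the COMMAND placeholder.
i=0
while read -r param; do
  i=$((i + 1))
  # give each Job a unique name and substitute the command placeholder
  sed -e "s|my-job|my-job-$i|" \
      -e "s|COMMAND|[\"sh\", \"run-vpn-script.sh\", \"$param\"]|" job.yaml | kubectl apply -f -
done < params.txt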
You can pass different parameters as needed in the command that is set in the YAML.
You can also create multiple Jobs from the command line:
kubectl create job test-job --from=cronjob/a-cronjob
https://www.mankier.com/1/kubectl-create-job
and pass other parameters into the command as needed.
If you just want to run a Pod, you can also try:
kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/
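For instance, a hypothetical one-off Pod for one of your parameters might look like:
kubectl run vpn-download-1 --image=gcr.io/project-id/my-image:latest --restart=Never --command -- sh run-vpn-script.sh /to/download/this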

Cloudbuild does not trigger new pod deployment, No resources found in namespace GKE

I've been playing around with GCP Triggers to deploy a new pod every time a push is made to a GitHub repo. I've got everything set up: the Docker image is pushed to the GCP Container Registry and the trigger completes successfully without any errors. I use the $SHORT_SHA tag generated by the build pipeline as my tag. But the new pod deployment does not work. I am not sure what the issue is, because I am modifying the codebase with every new push just to test the deployment. I've followed a couple of tutorials by Google on Triggers, but I am unable to understand what exactly the issue is and why the newly pushed image does not get deployed.
cloudbuild.yaml
- name: maven:3-jdk-8
  id: Maven Compile
  entrypoint: mvn
  args: ["package", "-Dmaven.test.skip=true"]
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args:
    - 'build'
    - '-t'
    - 'us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA'
    - '.'
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args:
    - 'push'
    - 'us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA'
- name: 'gcr.io/cloud-builders/gcloud'
  id: Generate manifest
  entrypoint: /bin/sh
  args:
    - '-c'
    - |
      sed "s/GOOGLE_CLOUD_PROJECT/$SHORT_SHA/g" kubernetes.yaml
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
    - run
    - --filename=kubernetes.yaml
    - --image=us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA
    - --location=us-central1-c
    - --cluster=cluster-1
kubernetes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment_name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <container_label>
  template:
    metadata:
      labels:
        app: <container_label>
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      containers:
        - name: <container_name>
          image: us.gcr.io/<project_id>/<image_name>:GOOGLE_CLOUD_PROJECT
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  selector:
    app: <selector_name>
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
I would recommend a few changes to get your Cloud Build pipeline to deploy the application to the GKE cluster.
cloudbuild.yaml
In the Build and Push stages, change the arg us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA to gcr.io/$PROJECT_ID/sample-image:latest.
Generate manifest stage - you can skip/delete this stage.
gke-deploy stage - remove the --image argument.
kubernetes.yaml
In the spec, reference the image as gcr.io/$PROJECT_ID/sample-image:latest; it will always take the latest image on each deployment.
The rest all seems good.
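As a rough sketch, the adjusted cloudbuild.yaml steps might look like the following, keeping the question's cluster and zone values and using sample-image as a placeholder name:
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-image:latest', '.']
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args: ['push', 'gcr.io/$PROJECT_ID/sample-image:latest']
- name: 'gcr.io/cloud-builders/gke-deploy'
  args:
    - run
    - --filename=kubernetes.yaml
    - --location=us-central1-c
    - --cluster=cluster-1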

Get Environment Variable Kubernetes on Next.js App

I've been reading other questions about getting K8s environment variables to work in a Next.js app, but there is no accepted answer so far.
My app works fine using .env.local, but the variable comes back undefined when the app is deployed to K8s.
This is my next.config.js
module.exports = {
  env: {
    NEXT_PUBLIC_API_BASE_URL: process.env.NEXT_PUBLIC_API_BASE_URL,
  },
};
K8s environment: (screenshot omitted)
Can anyone help me get that environment variable to work in my Next.js app?
Right now I use a simple trick: I add ARG and ENV to the Dockerfile, then inject the value when I build the Docker image.
Dockerfile:
ARG NEXT_PUBLIC_API_BASE_URL
ENV NEXT_PUBLIC_API_BASE_URL=${NEXT_PUBLIC_API_BASE_URL}
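The value is then injected at build time, for example (the URL here is just a placeholder):
docker build --build-arg NEXT_PUBLIC_API_BASE_URL=https://api.example.com -t my-next-app .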
You should add the env vars in a .env.local file, in the form of a ConfigMap (https://nextjs.org/docs/basic-features/environment-variables).
In Kubernetes you create a configMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-local
data:
  .env: |-
    NEXT_PUBLIC_API_URL=http://your.domain.com/api
    API_URL=http://another.endpoint.com/serverSide
Then you mount that ConfigMap as a file into your Deployment; it is then available at /app/.env.local:
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - image: your/image:latest
          imagePullPolicy: Always
          name: your-app
          ports:
          volumeMounts:
            - mountPath: /app/.env.local
              name: env-local
              readOnly: true
              subPath: .env.local
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: .env
                path: .env.local
            name: env-local
          name: env-local
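As an alternative to writing the ConfigMap by hand, if you already have a working .env.local locally you can generate the same ConfigMap from it, for example:
kubectl create configmap env-local --from-file=.env=.env.local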
What also worked - for me at least - for server side vars was simply adding them as regular env vars in my deployment: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container
apiVersion: v1
kind: Pod
metadata:
  name: your-app
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: your-app-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
const withSvgr = require('next-svgr');

module.exports = {
  // Will only be available on the server side
  serverRuntimeConfig: {
    API_URL: process.env.API_URL,
  },
  // Will be available on both server and client
  publicRuntimeConfig: {
    NEXT_PUBLIC_API_URL: process.env.API_URL,
  },
};
I spent a whole day experimenting with ways to get my vars into the Next.js app without exposing them in the repo. None of the above-mentioned clues did the job, nor did the official docs. I use GitLab CI/CD for the build stage and K8s deployments. I finally made it work like so:
Create Project variables in GitLab.
In .gitlab-ci.yml, reconstruct .env.local (since it's the only point where you get the vars from):
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- touch .env.local
- echo "NEXT_PUBLIC_API_KEY='$NEXT_PUBLIC_API_KEY'" | cat >> .env.local
...
- echo "NEXT_PUBLIC_MEASUREMENT_ID='$NEXT_PUBLIC_MEASUREMENT_ID'" | cat >> .env.local
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
I ran into this issue and got it working with Docker while still respecting the 12-Factor App rules. The TL;DR is that you need to modify your next.config.js and your _app.js files with the following:
next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  publicRuntimeConfig: {
    // remove private variables from processEnv
    processEnv: Object.fromEntries(
      Object.entries(process.env).filter(([key]) =>
        key.includes('NEXT_PUBLIC_')
      )
    ),
  },
}

module.exports = nextConfig
_app.js
import App from 'next/app'

function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />
}

// Only uncomment this method if you have blocking data requirements for
// every single page in your application. This disables the ability to
// perform automatic static optimization, causing every page in your app to
// be server-side rendered.
MyApp.getInitialProps = async (appContext) => {
  // calls page's `getInitialProps` and fills `appProps.pageProps`
  const appProps = await App.getInitialProps(appContext);
  return { ...appProps }
}

export default MyApp
To access the environment variables in any page or component simply add this:
import getConfig from 'next/config';
const {
  publicRuntimeConfig: { processEnv },
} = getConfig();
Here's an example of what a component would look like:
import getConfig from 'next/config';

const {
  publicRuntimeConfig: { processEnv },
} = getConfig();

const Header = () => {
  const { NEXT_PUBLIC_MESSAGE } = processEnv;
  return (
    <div>
      Hello, {NEXT_PUBLIC_MESSAGE}
    </div>
  )
}

export default Header;
The real issue is the way the Dockerfile starts the app: in order to load env vars, we need to start it with npm start.
I wrote an article with my findings if you want to get the full details of why and how it works: https://benmarte.com/blog/nextjs-in-docker/
I also made a sample repo which can be used as a template: https://github.com/benmarte/nextjs-docker
I will make a PR this week to the with-docker repo.
Kubernetes sets the environment variables at runtime, but NEXT_PUBLIC_API_BASE_URL is baked in at BUILD time, not at RUN time.
That means the env var needs to be in the .env file when you run the command npm run build. According to the documentation, it's not possible to set that env var at runtime:
This inlining occurs at build time, so your various NEXT_PUBLIC_ envs need to be set when the project is built.
https://nextjs.org/docs/basic-features/environment-variables#exposing-environment-variables-to-the-browser
What you can do is implement getServerSideProps and return the value in the props.
Another, more complex, option to achieve the configuration at runtime is a workaround like this one: https://dev.to/akdevcraft/react-runtime-variables-49dc
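A minimal sketch of the getServerSideProps approach, assuming the value is exposed to the pod as a plain env var named API_BASE_URL (a placeholder name):
pages/index.js
export async function getServerSideProps() {
  return {
    props: {
      // read at request time on the server, so it can come from the pod's environment
      apiBaseUrl: process.env.API_BASE_URL || '',
    },
  };
}

export default function Home({ apiBaseUrl }) {
  return <div>API base URL: {apiBaseUrl}</div>;
}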
Try to remove the definition of the environment variable from your Dockerfile.
Then add the definition of the environment variable to your Deployment (or Pod or ReplicaSet), for example:
...
spec:
  containers:
    - name: test-container
      image: gcr.io/kuar-demo/kuard-amd64:blue
      imagePullPolicy: Always
      env:
        - name: NEXT_PUBLIC_API_BASE_URL
          value: ANY_VALUE
...

How to use Local docker image in kubernetes via kubectl

I created a customized Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Docker image:
docker build -t backend:v1 .
Then the Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
        - env:
            - name: mail_auth_pass
            - name: mail_auth_user
            - name: mail_from
            - name: mail_greeting
            - name: mail_service
            - name: mail_sign
            - name: mongodb_url
              value: mongodb://mongodb.mongodb.svc.cluster.local/console
            - name: server_host
              value: "0.0.0.0"
            - name: server_port
              value: "3000"
            - name: server_sessionSecret
              value: "1234"
              image: backend
              imagePullPolicy: Never
          name: backend
          resources: {}
      restartPolicy: Always
status: {}
Command to run: kubectl create -f backend-deployment.yaml
Getting error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image could be built and tagged this easy way:
docker build -t localhost:5000/my-image .
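If the cluster should actually pull from that local registry, the image also needs to be pushed there first, for example:
docker push localhost:5000/my-image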
Image Pull Policy
The field imagePullPolicy should then be changed to Never to get the right image from the right repo.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: localhost:5000/my-image
      imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
Hope this comes in handy :)
As the error specifies unknown field "image" and unknown field "imagePullPolicy", there is a syntax (indentation) error in your Kubernetes deployment file.
Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend
          imagePullPolicy: Never
          env:
            - name: mail_auth_pass
            - name: mail_auth_user
            - name: mail_from
            - name: mail_greeting
            - name: mail_service
            - name: mail_sign
            - name: mongodb_url
              value: mongodb://mongodb.mongodb.svc.cluster.local/console
            - name: server_host
              value: "0.0.0.0"
            - name: server_port
              value: "3000"
            - name: server_sessionSecret
              value: "1234"
          resources: {}
      restartPolicy: Always
status: {}
Validate your kubernetes yaml file online using https://kubeyaml.com/
Or with kubectl apply --validate=true --dry-run=true -f deployment.yaml
Hope this helps.
