Verbose logging in Kubernetes deployment file - docker

I am new to Kubernetes, and I want to know what the '-v' in the following means:
spec:
  containers:
  - args:
    - -v=9
It seems to denote verbose logging. Is there any documentation on the various logging levels available, i.e. what values the -v argument can take?

Kubernetes uses glog (now klog). The conventional levels are described in this doc; values of roughly 0-4 cover day-to-day use, while higher values such as -v=9 enable increasingly detailed debug output, up to dumping full HTTP request contents.
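You can see the effect with kubectl itself, which accepts the same flag. A quick sketch:
# Moderate debug detail
kubectl get pods -v=4
# Maximum verbosity: logs every HTTP request/response to the API server,
# including untruncated request and response bodies
kubectl get pods -v=9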

Related

Google Secret Manager secrets do not seem to work yet I can find nothing wrong

I have created a bunch of secrets using the documented CLI method like so:
echo "ak_prod_4kj56hv24hkjcg56hj2c34k5j3hbj3k124v5h243c" | gcloud secrets versions add some-api-key --data-file=-
I have set my YAML to read them at start-up. I know this wiring works, because my app code throws if no value is configured.
spec:
  template:
    spec:
      containers:
      - image:
        env:
        - name: Some__ApiKey
          valueFrom:
            secretKeyRef:
              key: "1"
              name: some-api-key
But my app doesn't work. It was working on Azure, so this isn't a problem with my code. When I call the API, my key is rejected. A key is definitely configured: my code checks for that, and besides, Cloud Run fails to start if it cannot read its secrets.
The problem was due to whitespace at the end of the secret.
Somehow a single whitespace character had been introduced. Looking back over my CLI command history, it could be trailing whitespace after the --data-file=-
Perhaps it's the space between the " and the | in Google's example.
The Google console GUI does not present the secret value in quotes and so it is almost impossible to tell this has happened.
One week just on this problem. One week. The cost of badly designed software/bad sample code.
It's actually the echo: by default, echo appends a trailing newline, so you need echo -n.
echo -n "ak_prod_4kj56hv24hkjcg56hj2c34k5j3hbj3k124v5h243c" | gcloud secrets versions add some-api-key --data-file=-

Deploying Cloud Run via YAML gives error spec.template.spec.containers should contain exactly 1 container

When deploying a Cloud Run service via a YAML file from the command line, it fails with this error.
ERROR: (gcloud.run.services.replace) spec.template.spec.containers should contain exactly 1 container
This is because the documentation for adding an environment variable is wrong, or confusing at best.
The env node must be nested under the container entry (i.e., under - image:), not placed directly under the containers node as it says here; presumably YAML then parses env as a second container entry, hence the error.
https://cloud.google.com/run/docs/configuring/environment-variables#yaml
This is correct:
- image: us-east1-docker.pkg.dev/proj/repo/image:r1
  env:
  - name: SOMETHING
    value: Xyz
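For context, here is a minimal sketch of where that snippet sits in a full Cloud Run service definition (the service name is a placeholder):
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service         # placeholder
spec:
  template:
    spec:
      containers:          # must contain exactly one container entry
      - image: us-east1-docker.pkg.dev/proj/repo/image:r1
        env:               # nested under the container item, not a sibling of it
        - name: SOMETHING
          value: Xyz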

How to get resolved sha digest for all images within Kubernetes yaml?

Docker image tags are mutable: image:latest and image:1.0 can both point to image@sha256:....., but when version 1.1 is released, the image:latest stored in a registry can be re-pointed to an image with a different sha digest. Pulling an image with a particular tag today therefore does not guarantee that an identical image will be pulled next time.
If a Kubernetes YAML resource definition refers to an image by tag (not by digest), is there a means of determining what sha digest each image will actually resolve to, before the resource definition is deployed? Is this functionality supported using kustomize or kubectl?
The use case is wanting to determine what has actually been deployed in one environment before deploying to another (I'd like to take a hash of the resolved resource definition, which I could then use to check whether the image:1.0 to be deployed to PROD refers to the same image:1.0 that was deployed to UAT).
Are there any tools that can be used to support this functionality?
For example, given the following YAML, is there a way of replacing all images with their resolved digests?
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: image1
    image: image1:1.1
    command:
    - /bin/sh -c some command
  - name: image2
    image: image2:2.2
    command:
    - /bin/sh -c some other command
To get something like this:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: image1
    image: image1@sha256:....
    command:
    - /bin/sh -c some command
  - name: image2
    image: image2@sha256:....
    command:
    - /bin/sh -c some other command
I'd like to be able to do something like pipe YAML (which might come from cat, kustomize, or kubectl ... --dry-run) through a tool and then pass the result to kubectl apply -f:
cat mydeployment.yaml | some-tool | kubectl apply -f -
EDIT:
The background to this is the need to be able to prove to auditors/regulators that what is about to be deployed to one env (PROD) is exactly what has been successfully deployed to another env (UAT). I'd like to use normal tags in the deployment template and at the time of deploying to UAT, take a snapshot of the template with the tags replaced with the digests of the resolved images. That snapshot will be what is deployed (via kubectl or similar). When deploying to PROD, that same snapshot will be used.
This tool supports exactly what you need:
kbld: https://get-kbld.io/
Resolves name-tag pair reference (nginx:1.17) into digest reference (index.docker.io/library/nginx@sha256:2539d4344...)
It looks like it integrates quite well with templating tools like Kustomize or even Helm.
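That matches the pipeline shape from the question. Per the kbld docs, an invocation along these lines should work (treat the exact flags as an assumption):
# kbld rewrites each image: tag reference into an immutable digest reference
cat mydeployment.yaml | kbld -f - | kubectl apply -f -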
You can get information on all the containers in use with this command. It will list every namespace, with pod names, container image names, and the sha256 digest of each image.
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{","}{.metadata.name}{","}{range .status.containerStatuses[*]}{.image}{", "}{.imageID}{", "}{end}{end}' | sort
is there a means of determining what sha digest each image will actually resolve to, before the resource definition is deployed?
No, and in the case you describe, it can vary by node. The Deployment will create some number of Pods, each Pod will get scheduled on some Node, and the Kubelet there will only pull the image if it doesn’t have something with that tag already. If you have two replicas, and you’ve changed the image a tag points to, then on node A it could use the older image that was already there, but on node B where there isn’t an image, it will pull and get the newer version.
The best practice here is to avoid changing the image a tag points to. Give each build coming out of your CI system a unique tag (a datestamp or source control commit ID for example) and use that in your Kubernetes object specifications. That avoids this problem entirely.
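For example, a CI step might tag each build with the source control commit ID. A sketch, with a hypothetical registry and app name:
TAG=$(git rev-parse --short HEAD)     # e.g. a1b2c3d
docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"
# then reference registry.example.com/myapp:<commit id> in the Kubernetes spec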
A workaround is to set
imagePullPolicy: Always
in your pod specs, which will force the node to pull a newer version, but this is unnecessary overhead in most cases.
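Applied to the Pod example above, that looks like:
spec:
  containers:
  - name: image1
    image: image1:1.1
    imagePullPolicy: Always   # force a registry pull every time the Pod starts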
Here's another one: k8s-digester, from Google folks. It's a bit different in the sense that its main focus is on cluster-side changes (via a mutating admission controller), even though client-side use as a KRM function also seems possible.
Overall, kbld seems to be more about the development experience and integration with your CLI/CI-CD/orchestration, while k8s-digester is more about administration on the cluster side.
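For the client-side mode, the k8s-digester README documents an invocation roughly like the following; I haven't verified the exact flags, so treat this as an assumption:
# Hypothetical, per the k8s-digester README: resolve image tags to digests
# for the manifests under ./manifests
digester resolve -f manifests/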

Travis CI Build to deploy on Cloud Foundry fails

I am trying to deploy a Python Flask application on Cloud Foundry, but it fails.
It shows the output
The app cannot be mapped to route hello.cfapps.io because the route exists in a different space.
Here is what my travis.yml looks like:
stages:
- test
- deploy
language: python
python:
- '3.6'
env:
- PORT=8080
cache: pip
script: python hello.py &
jobs:
  include:
  - stage: test
    install:
    - pip install -r requirements.txt
    - pip install -r tests/requirements_test.txt
    script:
    - python hello.py &
    - python tests/test.py
  - stage: deploy
    deploy:
      provider: cloudfoundry
      username: vaibhavgupta0702@gmail.com
      password:
        secure: myencryptedpassword
      api: https://api.run.pivotal.io
      organization: Hello_Flask
      space: development
      on:
        repo: vaibhavgupta0702/flask_helloWorld
Here is what my manifest.yml file looks like
---
applications:
- name: hello
  memory: 128M
  buildpacks:
  - https://github.com/vaibhavgupta0702/flask_helloWorld.git
  command: python hello.py &
  timeout: 60
  env:
    PORT: 8080
I do not understand why the error is coming. Any help would be highly appreciated.
The app cannot be mapped to route hello.cfapps.io because the route exists in a different space.
This means exactly what it says. The domain cfapps.io is a shared domain which can be used by many people on the platform. When you see this error, it is telling you that someone else using the platform has already pushed an app which is utilizing that route.
There are a couple of possibilities here:
Routes are scoped to a space. If you have multiple spaces, it's possible that the route in question could be used by an app in one of your other spaces. What you can do is run cf routes --orglevel. This will list all the routes in all the spaces under your organization. If you see the route hello listed under one of your spaces, simply run cf delete-route cfapps.io --hostname hello in the space where the route exists. That will delete it. Then deploy again.
Someone else is using the route. This means it would be in another org & space where you can't see it being used. In this case, there's not much you can do. You just need to pick another route or use a custom, private domain (note that custom, private domains require you to register a domain name & configure DNS as described here).
You can pick another route in a couple of ways.
Use a random route. This works OK for testing, but not for anything where you want a consistent address. To use, just add random-route: true to your manifest.
Change your app name. By default, the route assigned to your app will be <app-name>.<default-domain>. Thus you get hello.cfapps.io because hello is your app name and cfapps.io is the default domain on PWS. If you change your app name to something unique, that'll result in a unique route that no one else is using.
Specifically define one or more routes. You can do this in your manifest.yml file. You need to add a routes: block and then add one or more routes.
Example:
---
...
routes:
- route: route1.example.com
- route: route2.example.com
- route: route3.example.com

Scraping traefik metrics from prometheus

I am trying to scrape traefik metrics from prometheus.
Traefik (latest) is hosted as a service on a Swarm cluster, and the Prometheus metrics are activated.
The matching endpoint is 10.200.1.1:8088/metrics
When I hit the endpoint from the browser, I see the expected metrics:
...
# HELP traefik_config_last_reload_failure Last config reload failure
# TYPE traefik_config_last_reload_failure gauge
traefik_config_last_reload_failure 0
# HELP traefik_config_last_reload_success Last config reload success
# TYPE traefik_config_last_reload_success gauge
traefik_config_last_reload_success 1.53633684e+09
# HELP traefik_config_reloads_failure_total Config failure reloads
# TYPE traefik_config_reloads_failure_total counter
traefik_config_reloads_failure_total 0
# HELP traefik_config_reloads_total Config reloads
# TYPE traefik_config_reloads_total counter
traefik_config_reloads_total 76
...
So, from my point of view, editing the following prometheus.yml (and POSTing to /-/reload) should add these metrics.
global:
  scrape_interval: 15s
rule_files:
- "targets.rules"
- "host.rules"
- "containers.rules"
scrape_configs:
...
- job_name: 'traefik'
  metrics_path: '/metrics'
  static_configs:
  - targets: ['10.200.1.2:8088']
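For reference, the reload POST mentioned above, plus an optional sanity check of the config file (host and port are assumed; the /-/reload endpoint requires Prometheus to be started with --web.enable-lifecycle):
# Validate the edited config before reloading
promtool check config prometheus.yml
# Trigger the live reload
curl -X POST http://10.200.1.1:9090/-/reload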
But unfortunately, none of these metrics appear in Prometheus's drop-down list.
Since I am new to Traefik and Prometheus, I am quite sure I have misunderstood something.
I tried to follow a few guides (such as this one), but could not manage to make it work (they may have worked with a previous version).
So... does anyone have an idea of what I am doing wrong and/or what the correct way is?
After a while, many attempts, and some pertinent questions later, I ended up thinking it was not about my configuration at all.
Since I had also observed some odd, seemingly random behavior (such as occasional 503 errors on my remote /providers calls), I started thinking the problem was related to access to my machine.
So I tried to demote the manager and promote another node of the swarm instead.
... And it worked!
My traefik metrics now appear in prometheus!
I still have to understand what is wrong with my former manager, but at least, I am stepping forward!
Thanks @AlinSînpălean & @AndreasJägle for your help!
