MountVolume.SetUp failed for volume "azurefileshare" : Couldn't get secret default - azure-aks

Hi, I am trying to mount an Azure file share to a Windows container. My node pool Kubernetes version is 1.19.9.
I have added the secret as below in my release pipeline as the first step:
apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
type: Opaque
data:
  azurestorageaccountname: base64accountname
  azurestorageaccountkey: base64accountkey
In the second step I have the below code:
volumeMounts:
  - name: azurefileshare
    mountPath: Z:\
volumes:
  - name: azurefileshare
    azureFile:
      secretName: storage-secret
      shareName: share
      readOnly: false
After deployment I see the below error:
MountVolume.SetUp failed for volume "azurefileshare" : Couldn't get
secret default/storage-secret
I am trying to find the root cause but am not able to figure out what is wrong here. Can someone help me fix this? Any help would be appreciated. Thanks.
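One thing worth checking (my assumption, since the error message mentions default/storage-secret): the azureFile volume looks the secret up in the pod's own namespace, so if the release deploys the workload to a namespace other than default, the secret has to be created there as well. A quick check, with the namespace name as a placeholder:
# does the secret exist where the pod actually runs?
kubectl get secret storage-secret -n default
kubectl get secret storage-secret -n <your-release-namespace>
# if it is missing there, recreate it in that namespace (kubectl base64-encodes the literals for you):
kubectl create secret generic storage-secret \
  --namespace <your-release-namespace> \
  --from-literal=azurestorageaccountname=<account-name> \
  --from-literal=azurestorageaccountkey=<account-key>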

Related

Docker build failing inside Azure Dev Ops Self Hosted Agent

Background
I have set up some self-hosted Azure DevOps build agents inside my AKS cluster. This is the documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
The agents have been successfully created, I can see them in my Agent Pools and target them from my pipeline.
One of the first things my pipeline does is build and push some Docker images. This is a problem inside a self-hosted agent. The documentation includes the below warning and link:
In order to use Docker from within a Docker container, you bind-mount the Docker socket.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
Files
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
        - name: kubepodcreation
          image: AKRTestcase.azurecr.io/kubepodcreation:5306
          env:
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_URL
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_TOKEN
            - name: AZP_POOL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_POOL
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-volume
      volumes:
        - name: docker-volume
          hostPath:
            path: /var/run/docker.sock
Error
Attempting to run the pipeline gives me the following error:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Questions
Is it possible (and safe) to build and push Docker images from an Azure DevOps build agent running in a Docker container?
How can I modify the Kubernetes deployment file to bind-mount the Docker socket?
Any help will be greatly appreciated.
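Not a definitive answer, but one observation from the error text: the Deployment above already bind-mounts /var/run/docker.sock via hostPath, so the daemon socket should be reachable; "Unable to locate executable file: 'docker'" suggests the docker CLI binary itself is missing from the agent image. A minimal sketch of what could be added to the agent image's Dockerfile, assuming a Debian/Ubuntu base (that base is an assumption on my part):
# install the docker.io package so the docker CLI is on PATH;
# the daemon itself is provided by the bind-mounted host socket
RUN apt-get update \
 && apt-get install -y --no-install-recommends docker.io \
 && rm -rf /var/lib/apt/lists/*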

Unable to connect to Kubernetes - Invalid Configuration

I'm here for hours every day, reading and learning, but this is my first question, so bear with me.
I'm simply trying to get my Kubernetes cluster to start up.
Below is my skaffold.yaml file in the root of the project:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: omesadev/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
Below is my auth-depl.yaml file in the infra/k8s/ directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: omesadev/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
Below is the error message I'm receiving in the cli:
exiting dev mode because first deploy failed: unable to connect to Kubernetes: getting client config for Kubernetes client: error creating REST client config for kubeContext "": invalid configuration: [unable to read client-cert C:\Users\omesa\.minikube\profiles\minikube\client.crt for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.crt: The system cannot find the path specified., unable to read client-key C:\Users\omesa\.minikube\profiles\minikube\client.key for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.key: The system cannot find the path specified., unable to read certificate-authority C:\Users\omesa\.minikube\ca.crt for minikube due to open C:\Users\omesa\.minikube\ca.crt: The system cannot find the file specified.
I've tried to install kubernetes, minikube, and kubectl. I've added them to the path and removed them a few times in different ways because I thought my configuration or usage could have been incorrect.
Then I read that if I'm using the Docker GUI, Kubernetes should be running within it, so I checked the settings in the Docker GUI to ensure Kubernetes was enabled through Docker, and it is.
I have Hyper-V set up. I've used it in the past successfully with Docker and with Virtualbox, so I know my Hyper-V is not the issue.
I've also attached an image of my file directory, but I'm pretty sure everything is good to go here too.
[image: src tree]
Thanks in advance!
Enable Kubernetes!
The reason you are getting this error is that Kubernetes is not enabled.
Posting @Jim's solution from the comments as a community wiki for better visibility:
The problem was, I had two different contexts inside of my kubectl
config and the project I was trying to launch was using the wrong
cluster/context. I don't know how the minikube cluster and context
were created, but I deleted them and set the new context to
docker-desktop with "kubectl config use-context docker-desktop"
Helpful links:
Organizing Cluster Access Using kubeconfig Files
Configure Access to Multiple Clusters
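For completeness, a minimal sketch of the context check and switch described above, using standard kubectl commands:
kubectl config get-contexts               # list the contexts kubectl knows about
kubectl config use-context docker-desktop # point kubectl at the Docker Desktop cluster
kubectl config current-context            # verify the active context
kubectl config delete-context minikube    # optional: drop the stale minikube context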

Failed to mount Splunk config On Kubernetes - ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf

I'm using this Splunk image on Kubernetes (testing locally with minikube).
After applying the code below I'm facing the following error:
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe
$SPLUNK_HOME or $SPLUNK_ETC is set wrong?
My Splunk deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk
  labels:
    app: splunk-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-app
        tier: splunk
        track: stable
    spec:
      volumes:
        - name: configmap-inputs
          configMap:
            name: splunk-config
      containers:
        - name: splunk-client
          image: splunk/splunk:latest
          imagePullPolicy: Always
          env:
            - name: SPLUNK_START_ARGS
              value: --accept-license --answer-yes
            - name: SPLUNK_USER
              value: root
            - name: SPLUNK_PASSWORD
              value: changeme
            - name: SPLUNK_FORWARD_SERVER
              value: splunk-receiver:9997
          ports:
            - name: incoming-logs
              containerPort: 514
          volumeMounts:
            - name: configmap-inputs
              mountPath: /opt/splunk/etc/system/local/inputs.conf
              subPath: "inputs.conf"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: splunk-config
data:
  inputs.conf: |
    [monitor:///opt/splunk/var/log/syslog-logs]
    disabled = 0
    index=my-index
I also tried to add these env variables - with no success:
- name: SPLUNK_HOME
  value: /opt/splunk
- name: SPLUNK_ETC
  value: /opt/splunk/etc
I've tested the image with the following Docker Compose configuration - and it ran successfully:
version: '3.2'
services:
  splunk-forwarder:
    hostname: splunk-client
    image: splunk/splunk:latest
    environment:
      SPLUNK_START_ARGS: --accept-license --answer-yes
      SPLUNK_USER: root
      SPLUNK_PASSWORD: changeme
    ports:
      - "8089:8089"
      - "9997:9997"
I saw this on the Splunk forum, but the answer did not help in my case.
Any ideas?
Edit #1:
Minikube version: upgraded from v0.33.1 to v1.2.0.
Full error log:
$ kubectl logs -l tier=splunk
splunk_common : Set first run fact -------------------------------------- 0.04s
splunk_common : Set privilege escalation user --------------------------- 0.04s
splunk_common : Set current version fact -------------------------------- 0.04s
splunk_common : Set splunk install fact --------------------------------- 0.04s
splunk_common : Set docker fact ----------------------------------------- 0.04s
Execute pre-setup playbooks --------------------------------------------- 0.04s
splunk_common : Setting upgrade fact ------------------------------------ 0.04s
splunk_common : Set target version fact --------------------------------- 0.04s
Determine captaincy ----------------------------------------------------- 0.04s
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
Edit #2: Adding the ConfigMap to the code (it was removed from the original question for the sake of brevity). This is the cause of the failure.
Based on the direction pointed out by @Amit-Kumar-Gupta, I'll also try to give a full solution.
So this PR change means that containers cannot write to secret, configMap, downwardAPI and projected volumes, since the runtime now mounts them as read-only.
This change has been in place since v1.9.4 and can lead to issues for various applications which chown or otherwise manipulate their configs.
When Splunk boots, it registers all the config files in various locations on the filesystem under ${SPLUNK_HOME}, which in our case is /opt/splunk.
The error specified in my question reflects that Splunk failed to manipulate the relevant files in the /opt/splunk/etc directory because of this change in the mounting mechanism.
Now for the solution.
Instead of mounting the configuration file directly inside the /opt/splunk/etc directory we'll use the following setup:
We'll start the docker container with a default.yml file which will be mounted in /tmp/defaults/default.yml.
For that, we'll create the default.yml file with: docker run splunk/splunk:latest create-defaults > ./default.yml
Then we'll go to the splunk: block and add a conf: sub-block under it:
splunk:
  conf:
    inputs:
      directory: /opt/splunk/etc/system/local
      content:
        monitor:///opt/splunk/var/log/syslog-logs:
          disabled: 0
          index: syslog-index
    outputs:
      directory: /opt/splunk/etc/system/local
      content:
        tcpout:splunk-receiver:
          server: splunk-receiver:9997
This setup will generate two files with a .conf suffix (remember that the sub-block starts with conf:), which will be owned by the correct Splunk user and group.
The inputs: section will produce an inputs.conf with the following content:
[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index=syslog-index
In a similar way, the outputs: block will resemble the following:
[tcpout:splunk-receiver]
server=splunk-receiver:9997
This is instead of passing an environment variable directly, as I did in the original code:
SPLUNK_FORWARD_SERVER: splunk-receiver:9997
Now everything is up and running (:
Full setup of the forwarder.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk-forwarder
  labels:
    app: splunk-forwarder-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-forwarder-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-forwarder-app
        tier: splunk
        track: stable
    spec:
      volumes:
        - name: configmap-forwarder
          configMap:
            name: splunk-forwarder-config
      containers:
        - name: splunk-forwarder
          image: splunk/splunk:latest
          imagePullPolicy: Always
          env:
            - name: SPLUNK_START_ARGS
              value: --accept-license --answer-yes
            - name: SPLUNK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: splunk-secret
                  key: password
          volumeMounts:
            - name: configmap-forwarder
              mountPath: /tmp/defaults/default.yml
              subPath: "default.yml"
For further reading:
https://splunk.github.io/docker-splunk/ADVANCED.html
https://github.com/splunk/docker-splunk/blob/develop/docs/ADVANCED.md
https://www.splunk.com/blog/2018/12/17/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-1.html
https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script
https://static.rainfocus.com/splunk/splunkconf18/sess/1521146368312001VwQc/finalPDF/FN1089_DockerizingSplunkatScale_Final_1538666172485001Loc0.pdf
There are two questions here: (1) why are you seeing that error message, and (2) how to achieve the desired behaviour you're trying to express through your Deployment and ConfigMap. Unfortunately, I don't believe there's a "cloud-native" way to achieve what you want, but I can explain (1), explain why it's hard to do (2), and point you to something that might give you a workaround.
The error message:
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
does not imply that you've set those environment variables incorrectly (necessarily), it implies that Splunk is looking for a file in that location and can't read a file there, and it's providing a hint that maybe you've put the file in another place but forgot to give Splunk the hint (via the $SPLUNK_HOME or $SPLUNK_ETC environment variables) to look elsewhere.
The reason it can't read /opt/splunk/etc/splunk-launch.conf is that, by default, the /opt/splunk directory would be populated with tons of subdirectories and files with various configurations, but because you're mounting a volume at /opt/splunk/etc/system/local/inputs.conf, nothing can be written to /opt/splunk.
If you simply don't mount that volume, or mount it somewhere else (e.g. /foo/inputs.conf) the Deployment will start fine. Of course the problem is that it won't know anything about your inputs.conf, and it'll use the default /opt/splunk/etc/system/local/inputs.conf it writes there.
I assume what you want to do is allow Splunk to generate all the directories and files it likes, and only set the contents of that one file. While there is a lot of nuance in how Kubernetes deals with volume mounts, in particular those coming from ConfigMaps, and in particular when using subPath, at the end of the day I don't think there's a clean way to do what you want.
I did an Internet search for "splunk kubernetes inputs.conf" and this was my first result: https://www.splunk.com/blog/2019/02/11/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-2.html. This is from official splunk.com, and it's advising running things like kubectl cp and kubectl exec to:
"Exec" into the master pod, and run ... commands, to copy (configuration) into the (target) directory and chown to splunk user.
🤷🏾‍♂️
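Purely as an illustration of that suggestion (the pod name and the local file path are hypothetical, not taken from the blog post):
# copy a locally prepared inputs.conf into the running pod, then fix its ownership
kubectl cp ./inputs.conf splunk-pod-0:/tmp/inputs.conf
kubectl exec splunk-pod-0 -- bash -c \
  'cp /tmp/inputs.conf /opt/splunk/etc/system/local/inputs.conf && chown splunk:splunk /opt/splunk/etc/system/local/inputs.conf'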
One solution that worked for me in a K8s deployment was:
Amend the image Dockerfile with the below:
RUN chmod -R 755 /opt/ansible
RUN echo " ignore_errors: yes" >> /opt/ansible/roles/splunk_common/tasks/change_splunk_directory_owner.yml
Then use that same image in your deployment from your private repo with the below env variables:
# has to run as root, otherwise it won't let you write to $SPLUNK_HOME/S
env:
  - name: SPLUNK_START_ARGS
    value: --accept-license --answer-yes --no-prompt
  - name: SPLUNK_USER
    value: root

Kubernetes Storage type Hostpath- files mapping issue

Hi, I am using the latest Kubernetes 1.13.1 and docker-ce (Docker version 18.06.1-ce, build e68fc7a).
I set up a deployment file that mounts a file from the host (hostPath) into a container (mountPath).
The bug is that when I try to mount a file from the host into the container, I get an error message that it's not a file. (Kubernetes thinks the file is a directory for some reason.)
When I try to run the containers using the command:
kubectl create -f
it stays at the ContainerCreating stage forever.
After a deeper look using kubectl describe pod, it says:
It shows an error message that the file is not recognized as a file.
Here is the deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: notixxxion
  name: notification
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: notification
    spec:
      containers:
        - image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
          name: notixxxion
          ports:
            - containerPort: xxx0
          #### host file configuration
          volumeMounts:
            - mountPath: /opt/notification/dist/hellow.txt
              name: test-volume
              readOnly: false
      volumes:
        - name: test-volume
          hostPath:
            # directory location on host
            path: /exec-ui/app-config/hellow.txt
            # this field is optional
            type: FileOrCreate
            #type: File
status: {}
I have reinstalled the Kubernetes cluster and it got a little bit better.
Kubernetes can now read files without any problem and the container is creating and running. But there is another issue with the hostPath storage type:
hostPath mounts do not update as the files change on the host, even after I delete the pod and create it again.
Check the permissions of the file you are trying to mount!
As a last resort try using privileged mode.
Hope it helps!
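To illustrate the privileged-mode suggestion, this is roughly what it would look like in the container spec above (a sketch only; privileged mode grants the container broad access to the host, so treat it as the last resort it is described as):
containers:
  - image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
    name: notixxxion
    securityContext:
      privileged: true # widens the container's privileges on the host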

Kubernetes (GKE) persistent volume resizing not working.

I am trying to resize a persistent volume in Google Kubernetes Engine, but I end up with an error:
The PersistentVolumeClaim "pvc1" is invalid: spec: Forbidden: field is immutable after creation
I have been following the guide at https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/.
Steps
1. Created a standard.yaml file with the following content:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
reclaimPolicy: Delete
2. Created gke-pvc.yml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 20Gi
3. Ran kubectl apply -f standard.yaml
Ran kubectl apply -f gke-pvc.yml
Now I ran kubectl edit pvc pvc1, changed storage from 20Gi to 30Gi, and saved the file, but I got this error:
error: persistentvolumeclaims "pvc1" is invalid
error: persistentvolumeclaims "pvc1" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-0hztl.yaml"
Please help me to solve this issue.
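(Not from the original question, just a hedged suggestion for anyone debugging the same thing: it is worth confirming that the claim is actually bound to the expandable class, since allowVolumeExpansion must be set on the StorageClass backing the PVC.)
kubectl get pvc pvc1 -o jsonpath='{.spec.storageClassName}{"\n"}'
kubectl get storageclass standard -o yaml | grep allowVolumeExpansion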
This is expected behavior on GKE. I believe the feature is available in Kubernetes 1.11 but not yet released on GKE. If you want early access to the feature, you may sign up here.
It is working currently; after you edit the PVC, you get this message:
conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-02-17T23:31:42Z"
    status: "True"
    type: Resizing
and soon after, this:
  - message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
Then just delete the pod and your volume will be resized.
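For completeness, the same resize can be done non-interactively; a minimal sketch, assuming the cluster and StorageClass support expansion:
kubectl patch pvc pvc1 -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'
kubectl get pvc pvc1 -o yaml          # watch the conditions for FileSystemResizePending
kubectl delete pod <pod-using-pvc1>   # restart the pod to finish the filesystem resize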
