I was trying to use helm3 in my k8s cluster. I installed helm3 on one of the nodes and this happened.
helm error message
WARNING: Invalid auth configuration file
github.com/docker/cli/cli/config/configfile.decodeAuth
github.com/docker/cli#v20.10.21+incompatible/cli/config/configfile/file.go:297
github.com/docker/cli/cli/config/configfile.(*ConfigFile).LoadFromReader
github.com/docker/cli#v20.10.21+incompatible/cli/config/configfile/file.go:128
github.com/docker/cli/cli/config.load
github.com/docker/cli#v20.10.21+incompatible/cli/config/config.go:130
github.com/docker/cli/cli/config.Load
github.com/docker/cli#v20.10.21+incompatible/cli/config/config.go:111
oras.land/oras-go/pkg/auth/docker.NewClientWithDockerFallback
oras.land/oras-go#v1.2.2/pkg/auth/docker/client.go:86
helm.sh/helm/v3/pkg/registry.NewClient
helm.sh/helm/v3/pkg/registry/client.go:83
main.newRootCmd
helm.sh/helm/v3/cmd/helm/root.go:155
main.main
helm.sh/helm/v3/cmd/helm/helm.go:66
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_amd64.s:1571
/root/.docker/config.json
github.com/docker/cli/cli/config.load
github.com/docker/cli#v20.10.21+incompatible/cli/config/config.go:132
github.com/docker/cli/cli/config.Load
github.com/docker/cli#v20.10.21+incompatible/cli/config/config.go:111
oras.land/oras-go/pkg/auth/docker.NewClientWithDockerFallback
oras.land/oras-go#v1.2.2/pkg/auth/docker/client.go:86
helm.sh/helm/v3/pkg/registry.NewClient
helm.sh/helm/v3/pkg/registry/client.go:83
main.newRootCmd
helm.sh/helm/v3/cmd/helm/root.go:155
main.main
helm.sh/helm/v3/cmd/helm/helm.go:66
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_amd64.s:1571
I tried to install helm3 on another node, and it worked fine. The difference between the two nodes is that I had run the docker login command on the first node, which automatically created the ~/.docker/config.json file. I removed the config.json file on the first node, and helm worked properly.
I want to know how to make docker login and helm both work at the same time, and also why docker login and helm conflicted on my node.
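For context, the stack trace points at decodeAuth while parsing /root/.docker/config.json, so one quick check is whether any entry under auths in that file has an empty or malformed auth value. A minimal diagnostic sketch, assuming jq is available on the node:
# List each registry in the Docker client config and whether its "auth" field is non-empty.
jq '.auths | map_values({auth_present: (.auth != null and .auth != "")})' ~/.docker/config.json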
Related
We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
...
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'
This works fine.
We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the .gitlab-ci.yml variable to:
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}'
However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:
Authenticating with credentials from $DOCKER_AUTH_CONFIG
... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.
We have added a wrapper script around the credential helper, to log all the ins and outs to a file and try to debug what is happening. However, it appears the helper isn't getting called at all, as there is nothing in the log file.
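For reference, the wrapper is just a thin logging shim in front of the real helper; a sketch of the idea (the .real path and log file location are illustrative, not our actual paths):
#!/bin/bash
# Wrapper for docker-credential-ecr-login: log each invocation (get/store/erase)
# and the registry name passed on stdin, then delegate to the real helper.
echo "$(date -Iseconds) called with: $*" >> /var/log/ecr-login-wrapper.log
tee -a /var/log/ecr-login-wrapper.log | /usr/bin/docker-credential-ecr-login.real "$@"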
What can we do to try and get this working?
Our problems here boiled down to a number of causes:
Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...)
In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml file to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) In the docker push command, we added a --config docker-config flag and wrote out an appropriate config to docker-config/config.json (see the sketch below).
Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖.
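A minimal sketch of that push step, with the account ID, region, and image name as placeholders rather than our real values:
# Write a client config that routes the ECR host through the credential helper,
# then point docker at that directory with --config.
mkdir -p docker-config
cat > docker-config/config.json <<'EOF'
{
  "credHelpers": {
    "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
EOF
docker --config docker-config push <account-id>.dkr.ecr.<region>.amazonaws.com/our-image:latest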
VSCode Version: 1.62.2
Local OS Version: Windows 10.0.18363
Reproduces in: Remote - Containers
Name of Dev Container Definition with Issue: /vscode/devcontainers/typescript-node
In our company we use a proxy that terminates SSL connections. When I try to start any devcontainer (the workspace is on the WSL2 filesystem), I get the following error message:
Installing VS Code Server for commit 3a6960b964327f0e3882ce18fcebd07ed191b316
[2021-11-12T17:01:44.400Z] Start: Downloading VS Code Server
[2021-11-12T17:01:44.400Z] 3a6960b964327f0e3882ce18fcebd07ed191b316 linux-x64 stable
[2021-11-12T17:01:44.481Z] Stop (81 ms): Downloading VS Code Server
[2021-11-12T17:01:44.499Z] Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)
In the Dockerfile I copy the company certificates and update the CA store:
ADD ./certs /usr/local/share/ca-certificates
RUN update-ca-certificates 2>/dev/null
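To check whether the copied certificate is actually picked up inside the container, one can request the download host from within it; a sketch (curl honours the https_proxy variable and the CA store written by update-ca-certificates):
# A normal HTTP response here, rather than a certificate error, suggests the
# company CA is trusted for the VS Code Server download host.
curl -sSI https://update.code.visualstudio.com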
The proxy environment variables are also set correctly. Out of desperation I also tried to disable the certificate check for wget:
RUN su node -c "echo check_certificate=off >> ~/.wgetrc"
In the devcontainer configuration I have also set the proxy and disabled the certificate check for VS Code via the settings:
// Set *default* container specific settings.json values on container create.
"settings": {
"http.proxy": "http://<proxy.url>:8080",
"http.proxyStrictSSL": false
},
I have tried many other things, like setting NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable in the Dockerfile, unfortunately without any success. Outside the company network, without the proxy, it works wonderfully.
Maybe one of you has an idea how I can solve this problem?
A working, if not so nice, solution to the problem is to add HTTPS exceptions for the following domains:
https://update.code.visualstudio.com
https://az764295.vo.msecnd.net
A list of common hostnames can be found here:
https://code.visualstudio.com/docs/setup/network
Below is a screenshot of the reference, but I am not able to work out exactly what is needed to get the temporary password from the mentioned path.
These are the guidelines given:
Next steps
Prerequisites
You'll need the following tools in your environment:
gcloud: if gcloud has not been configured yet, then configure gcloud by following the gcloud Quickstart.
kubectl: set kubectl to a specific cluster by following the steps at container get-credentials.
sed
Accessing your Jenkins instance
NOTE: For HTTPS, you must accept a temporary TLS certificate.
Read a temporary password:
kubectl -ndefault exec \
  $(kubectl -ndefault get pod -oname | sed -n /\\/jenkins-job-jenkins/s.pods\\?/..p) \
  cat /var/jenkins_home/secrets/initialAdminPassword
Identify the HTTPS endpoint:
echo https://$(kubectl -ndefault get ingress -l "app.kubernetes.io/name=jenkins-job" -ojsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")/
Navigate to the endpoint.
Configuring your Jenkins instance
Follow the on-screen instructions to fully configure your Jenkins instance:
Install plugins
Create a first admin user
Set your Jenkins URL (you can choose to start with the default URL and change it later)
Start using your fresh Jenkins installation!
For further information, refer to the Jenkins website or this project GitHub page.
Here is a step-by-step instruction:
Under Kubernetes Engine, go to the Workloads tab, then on the right side click on your Jenkins StatefulSet.
You will be routed to the StatefulSet details page.
Under Managed pods, click on your pod name.
On the Pod details page you can find KUBECTL at the top right. Click KUBECTL > Exec > jenkins-master.
A Cloud Shell terminal should open, and two rows of commands will be pasted into it.
The very end of the command should end with jenkins-master -- ls.
Replace ls with cat /var/jenkins_home/secrets/initialAdminPassword, then press Enter.
The output will be your administrator password; you may copy and paste it into the "Unlock Jenkins" page!
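Equivalently, the same secret can be read straight from Cloud Shell with kubectl exec; a sketch, assuming the pod listed under Managed pods is named jenkins-job-jenkins-0 in the default namespace (substitute the name you actually see):
# Print the initial admin password from the Jenkins pod.
kubectl -n default exec jenkins-job-jenkins-0 -- \
  cat /var/jenkins_home/secrets/initialAdminPassword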
Good luck!
I am using CircleCI's persistent workspace feature to run jobs with the same build folder between Linux and Windows executor types. I was able to go from Linux to Windows but when I went from Windows to Linux I got this error when CircleCI attempted to attach the workspace.
Applying workspace layers:
9ba3eddc-3658-43c2-858b-aea39250af3e
25c476af-8804-4125-b979-05a62a9ac204
Error applying workspace layer for job 25c476af-8804-4125-b979-05a62a9ac204: Error extracting tarball /tmp/workspace-layer-25c476af-8804-4125-b979-05a62a9ac204854634413 : tar: project/.circleci/config.yml: Cannot change ownership to uid 3434, gid 197121: Invalid argument
Looking at the error, it's clear that the UIDs don't exist on the system. I attempted to run commands to create the same UID/GID it was erroring on, but I still got an unable-to-change-owner error.
I was expecting CircleCI to move the files and ignore the user:group part when the archive was extracted, since you can't guarantee the UID/GID exists.
I opened a support ticket, but I'm hoping for a faster solution to this issue.
I found a solution to this issue: force CircleCI to use the TAR_OPTIONS environment variable, with options that make tar ignore the owner/group.
Here is what I added to the steps of the jobs that attach the workspace when the previous job ran on Windows.
build-app:
  docker:
    - image: Dockerhub.com/myrepo/myimage:1.0.0
  environment:
    TAR_OPTIONS: --no-same-owner
Using the TAR_OPTIONS environment variable to inject the option --no-same-owner allowed CircleCI to extract the tarball without issue.
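For anyone verifying this outside CircleCI: GNU tar reads default options from the TAR_OPTIONS environment variable, so the effect can be reproduced locally; a sketch with a hypothetical workspace tarball path:
# Without --no-same-owner, extracting as root tries to restore the archived
# uid/gid; with it, extracted files are owned by the extracting user instead.
export TAR_OPTIONS=--no-same-owner
tar -xf /tmp/workspace-layer.tar -C /tmp/workspace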
I am following this tutorial at https://gettech1.wordpress.com/2016/05/26/setting-up-kubernetes-cluster-on-ubuntu-14-04-lts/ to set up a Kubernetes multi-node cluster with 2 minions and 1 master node on remote Ubuntu machines. After following all the steps, everything goes OK, but when I try to run the ./kube-up.sh bash file, it returns the following errors:
ubuntu@ip-XXX-YYY-ZZZ-AAA:~/kubernetes/cluster
$ ./kube-up.sh
Starting cluster in us-central1-b using provider gce ... calling verify-prereqs
Can't find gcloud in PATH, please fix and retry. The Google Cloud SDK can be downloaded from https://cloud.google.com/sdk/.
Edit: I fixed the above issue after exporting different environment variables like
$ export KUBE_VERSION=2.2.1
$ export FLANNEL_VERSION=0.5.5
$ export ETCD_VERSION=1.1.8
but after that it generates this issue:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command you should be executing is KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
Without setting that environment variable, kube-up.sh tries to deploy VMs on Google Compute Engine, and to do so it needs the gcloud binary, which you don't have installed.
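Concretely, from the cluster directory (a sketch; this assumes the node IPs and roles for the ubuntu provider have already been filled in per the tutorial, e.g. in cluster/ubuntu/config-default.sh):
# Use the ubuntu provider instead of the GCE default.
cd ~/kubernetes/cluster
KUBERNETES_PROVIDER=ubuntu ./kube-up.sh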