I am trying to create an Argo workflow and am getting the error below.
E0819 10:53:21.439832 9420 portforward.go:378] error copying from remote stream to local connection: readfrom tcp6 [::1]:2746->[::1]:52354: write tcp6 [::1]:2746->[::1]:52354: wsasend: An established connection was aborted by the software in your host machine.
Here are the steps I followed:
Ran minikube start, which started minikube in VirtualBox.
kubectl -n argo port-forward deployment/argo-server 2746:2746
In another terminal, ran kubectl -n argo create -f wf-hello-world.yaml to create the workflow.
In the terminal I see the message workflow.argoproj.io/hello-world-mj2gn created, but the workflow does not appear in the Argo UI at https://localhost:2746/workflows?limit=50.
How to resolve this?
SOLUTION:
Followed instructions from https://argoproj.github.io/argo-workflows/quick-start/
Installed argo cli from https://github.com/argoproj/argo-workflows/releases/tag/v3.3.9
Downloaded argo-windows-amd64.exe.gz from https://github.com/argoproj/argo-workflows/releases/tag/v3.3.9
Extracted argo-windows-amd64.exe from the argo-windows-amd64.exe.gz file using a GZ extractor tool.
Renamed argo-windows-amd64.exe to argo.exe.
Moved the file to c:\projects\argo-cli.
Opened Environment Variables > System variables and added a new entry for c:\projects\argo-cli so the CLI is on the PATH.
At a command prompt, argo version should now print the version.
After the above steps, I used the command below to create the workflow:
kubectl -n argo create -f wf-hello-world.yaml
The workflow was created successfully.
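For reference, once the CLI is on the PATH, the workflow can also be submitted and listed with argo itself (a small sketch assuming the same wf-hello-world.yaml and the argo namespace):
argo -n argo submit wf-hello-world.yaml --watch
argo -n argo list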
I have deployed pods using the kubectl apply command and I can see the pods running:
$ kubectl describe pod test-pod -n sample | grep -i container
Containers:
Container ID: containerd://ce6cd9XXXXXX69538XXX
ContainersReady True
Can I say that it's using the containerd runtime? How do I verify the runtime used by the containers?
I am also getting errors like the one below in the pod:
kubectl logs test-pod -n sample
'docker.images' is not supported: Cannot fetch data: Get http://1.28/images/json: dial unix /var/run/docker.sock: connect: no such file or directory.
Is it because I am not using docker runtime?
As I already mentioned in a comment, the command is
kubectl get nodes -o wide
It returns the container runtime for each node in the CONTAINER-RUNTIME column.
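If you want just the value, the runtime is also exposed in the node status; a sketch using kubectl's jsonpath output (field names come from the Node API):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
A containerd node prints something like containerd://1.x.y, matching the Container ID prefix you saw in kubectl describe.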
I am following all the steps from this link: https://github.com/justmeandopensource/kubernetes
After running the join command on the worker node it gets added to the master, but the status of the worker node is not changing to Ready.
From the logs I got the following:
Container runtime network not ready: NetworkReady=false
reason:NetworkPluginNotReady message:dock
Unable to update cni config: No networks found in /etc/cni/net.d
kubelet.go:2266 -- node "XXXXXXXXX" not found. (XXXXXXXXX is the master's host/node name)
To set up CNI I am using Flannel, and I have also tried Weave and several other CNI networks, but the results are the same.
points to ponder:
---> worker node kubelet status is healthy
---> when trying to run the kubeadm init command on the worker node, it reports that the kubelet status might be unhealthy. (I am not able to make the worker node a master by running kubeadm init, but kubeadm join works. After joining, kubectl get nodes shows the worker node, but its status is NotReady.)
Thank you for the help
I cannot reproduce your issue. I followed exactly the instructions in the GitHub repo you shared and did not face a similar error.
The only extra step I needed, to clear the errors detected by the pre-flight checks of kubeadm init:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
was to set the appropriate flag by running:
echo '1' > /proc/sys/net/ipv4/ip_forward
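To make the setting survive a reboot, it can also be persisted through sysctl (a sketch; the file name k8s.conf is my own choice):
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/k8s.conf
sysctl --system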
State of my cluster nodes:
NAME STATUS ROLES AGE VERSION
centos-master Ready master 18h v1.13.1
centos-worker Ready <none> 18h v1.13.1
I verified the cluster condition by deploying and exposing a sample application, and everything seems to be working fine:
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
kubectl expose deployment hello-node --port=8080
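The ClusterIP to curl can be read back from the service (a quick check, assuming the hello-node service created above):
kubectl get service hello-node -o jsonpath='{.spec.clusterIP}'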
I'm getting a valid response from the hello-world Node.js app:
curl 10.100.113.255:8080
Hello World!
What IP address have you put in your /etc/hosts files?
I installed Docker on a Mac and am trying to run Vespa on Docker, following the steps specified in the following link:
https://docs.vespa.ai/documentation/vespa-quick-start.html
I didn't have any issues until step 4. I see the vespa container running after step 2, and step 3 returned a 200 OK response.
But step 5 failed to return a 200 OK response. Below is the command I ran in my terminal:
curl -s --head http://localhost:8080/ApplicationStatus
I keep getting curl: (52) Empty reply from server whenever I run it without the -s option.
So I tried to see the listening ports inside my vespa container; I don't see anything for 8080 but I do see 19071 (used in step 3):
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 8080'
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 19071'
tcp 0 0 0.0.0.0:19071 0.0.0.0:* LISTEN
The doc below has info related to Vespa ports:
https://docs.vespa.ai/documentation/reference/files-processes-and-ports.html
I'm assuming port 8080 should be active after docker run (step 2 of the quick start) and accessible outside the container, since port mapping is done.
But I don't see port 8080 active inside the container in the first place.
Am I missing something? Do I need to perform any additional steps beyond those mentioned in the quick start? FYI, I installed Jenkins inside Docker and was able to access it outside the container via port mapping, but I'm not sure why it's not working with Vespa. I have been trying for quite some time with no progress. Please advise me if I'm missing something here.
You have too little memory for your Docker container: "Minimum 6GB memory dedicated to Docker (the default is 2GB on Macs)." See https://docs.vespa.ai/documentation/vespa-quick-start.html
The deadlock detector warnings and the failure to get configuration from the configuration server (which was likely OOM-killed) indicate that you are too low on memory.
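A quick way to see how much memory the Docker VM actually has and how much the vespa container is using (a sketch; adjust the container name if yours differs):
docker info --format '{{.MemTotal}}'   # total memory available to Docker, in bytes
docker stats --no-stream vespa         # current memory usage of the vespa container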
My guess is that your jdisc container had not finished initializing, or did not initialize properly. Did you try checking the log?
docker exec vespa bash -c '/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log'
This should tell you if there was something wrong. When it is ready to receive requests you would see something like this:
[2018-12-10 06:30:37.854] INFO : container Container.org.eclipse.jetty.server.AbstractConnector Started SearchServer#79afa369{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[2018-12-10 06:30:37.857] INFO : container Container.org.eclipse.jetty.server.Server Started #10280ms
[2018-12-10 06:30:37.857] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Switching to the latest deployed set of configurations and components. Application switch number: 0
[2018-12-10 06:30:37.859] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Initializing new set of configurations and components. Application switch number: 1
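To check specifically for that line without reading the whole log, a simple grep works (a sketch using the same paths as above):
docker exec vespa bash -c "/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log | grep 'Started SearchServer'"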
KubeletNotReady
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady
message:docker: network plugin is not ready: cni config uninitialized
I don't know how to make the network plugin ready.
When you run kubectl describe node <node_name>, the Ready type in the Conditions table will contain this message if you have not initialized CNI. Proper initialization is achieved by installing a network addon. I will point you to the two most used ones: Weave and Flannel.
1) Weave
$ export kubever=$(kubectl version | base64 | tr -d '\n')
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
After executing those two commands you should see the node in status "Ready":
$ kubectl get nodes
You can also check the component status:
$ kubectl get cs
2) Flannel
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
3) The Kubernetes documentation explains how to install other network addons. In this article each CNI provider has a short description.
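After applying either addon, a quick sanity check is to confirm its pods are running in kube-system and that the node has gone Ready (the weave/flannel pod name patterns are assumptions and vary by addon version):
kubectl get pods -n kube-system -o wide | grep -E 'weave|flannel'
kubectl get nodes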
In my case, updating systemd from 30.el7_3.9 to 67.el7_7.4 solved this.
I'm going through this tutorial
Setting up Jenkins on Container Engine
https://cloud.google.com/solutions/jenkins-on-container-engine-tutorial
and failing at the "Creating the Jenkins deployment and services" step.
I got this error at one point:
jenkins- 0/1 rpc error: code = 2 desc = failed to start container "": Error response from daemon: {"message":"linux spec user: unable to find user jenkins: no matching entries in passwd file"}
And I get these results for the following commands:
> kubectl apply -f jenkins/k8s/
deployment "jenkins" configured
service "jenkins-ui" configured
service "jenkins-discovery" configured
> kubectl get pods --namespace jenkins
NAME READY STATUS RESTARTS AGE
jenkins-<some id> 0/1 CrashLoopBackOff 5 10m
I get that it is looking for the jenkins user in the passwd file, but I'm still not sure why this error occurred and what the correct way to fix it is. Any insight would be highly appreciated.
Edit: output of running "kubectl get pods --namespace jenkins"
The very first time running it:
> kubectl get pods --namespace jenkins
NAME READY STATUS RESTARTS AGE
jenkins-1937056428-fp7vr 0/1 ContainerCreating 0 16s
Second time running it:
> kubectl get pods --namespace jenkins
NAME READY STATUS RESTARTS AGE
jenkins-1937056428-fp7vr 0/1 rpc error: code = 2 desc = failed to start container "10a8ab7e3eb0ad153fd6055d86336b1cdfe9642b6993684a7e01fefbeca7a566": Error response from
daemon: {"message":"linux spec user: unable to find user jenkins: no matching entries in passwd file"} 1 39s
Third and after:
> kubectl get pods --namespace jenkins
NAME READY STATUS RESTARTS AGE
jenkins-1937056428-fp7vr 0/1 CrashLoopBackOff 270 22h
It appears that the persistent disk volume for Jenkins is not properly set up. Try running the following commands to reconfigure the disk volumes and rerun the Jenkins pod:
kubectl delete -f jenkins/k8s/
gcloud compute disks delete jenkins-home
gcloud compute images delete jenkins-home-image
gcloud config set compute/zone us-east1-d
gcloud compute images create jenkins-home-image --source-uri https://storage.googleapis.com/solutions-public-assets/jenkins-cd/jenkins-home-v3.tar.gz
gcloud compute disks create jenkins-home --image jenkins-home-image --zone us-east1-d
kubectl apply -f jenkins/k8s/
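Then you can watch the pod come back up (an optional check; the jenkins namespace is the one from the tutorial):
kubectl get pods --namespace jenkins --watch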
I basically did one step wrong:
Provision a Kubernetes cluster using Container Engine.
gcloud container clusters create jenkins-cd \
--network jenkins \
--scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"
Here, make sure the options --network and --scopes actually get passed in. I guess I copied the command without fixing it up and the options got dropped.
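To verify after the fact which network and scopes a cluster ended up with, describing the cluster and filtering the output should be enough (a sketch; jenkins-cd and us-east1-d are the cluster name and zone used earlier in the tutorial):
gcloud container clusters describe jenkins-cd --zone us-east1-d | grep -E -A3 'network|oauthScopes'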