I am setting up two OpenShift Origin clusters running on RHEL 7.4.
The only thing that points to what is actually wrong is this error log from the origin-pod on the Docker node:
error: --deployment or OPENSHIFT_DEPLOYMENT_NAME is required
This leads me to think the deployment name is not being passed to the container at startup; the container then crashes, preventing the deployment from spinning up. I also see a ton of CNI errors, but those occur because all containers assigned to this pod are killed when the origin-pod fails.
Here are the versions of everything I could think of that might help find the true problem.
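To confirm whether the deployer pod is actually missing that variable, it can help to dump the pod's environment and the recent events; a quick sketch, where myapp-1-deploy stands in for your deployer pod's name:
$ oc get pod myapp-1-deploy -o jsonpath='{.spec.containers[0].env}'
$ oc describe pod myapp-1-deploy
$ oc get events --sort-by='.lastTimestamp'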
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://is-origins-tetration.cisco.com:8443
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-55.gitc4618fb.el7.x86_64
Go version: go1.8.3
Git commit: c4618fb/1.12.6
Built: Thu Aug 24 14:48:49 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-55.gitc4618fb.el7.x86_64
Go version: go1.8.3
Git commit: c4618fb/1.12.6
Built: Thu Aug 24 14:48:49 2017
OS/Arch: linux/amd64
oci-umount-1.12.6-55.gitc4618fb.el7.x86_64
oci-register-machine-0-3.11.1.gitdd0daef.el7.x86_64
oci-systemd-hook-0.1.12-1.git1e84754.el7.x86_64
Has anyone else seen this type of error?
Related
Is there a bug in the Docker ContainerWait API? The logs show that my program stops at the ContainerWait call, but I found via docker ps -a that the container has ended with exit code 0. With 2000+ containers created, there is roughly a 1-in-100 chance of this happening; when I create a smaller number of containers, it does not happen. Has anyone experienced the same problem?
Can I implement the wait myself by polling the container state (see the sketch after the version info below)? Are there any problems with this approach?
docker version:
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-205.git7d71120.el7.centos.x86_64
Go version: go1.10.3
Git commit: 7d71120/1.13.1
Built: Wed Apr 28 13:37:12 2021
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-205.git7d71120.el7.centos.x86_64
Go version: go1.10.3
Git commit: 7d71120/1.13.1
Built: Wed Apr 28 13:37:12 2021
OS/Arch: linux/amd64
Experimental: false
go sdk version: v1.13.1
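For the second question, here is a minimal sketch of waiting by polling with the Docker Go SDK; the container ID and the 500 ms interval are placeholders, and error handling is kept deliberately simple:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/docker/docker/client"
)

// waitByPolling inspects the container until it is no longer running,
// then returns its exit code. The interval trades latency for API load.
func waitByPolling(ctx context.Context, cli *client.Client, id string, interval time.Duration) (int, error) {
	for {
		info, err := cli.ContainerInspect(ctx, id)
		if err != nil {
			return 0, err
		}
		if !info.State.Running {
			return info.State.ExitCode, nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	cli, err := client.NewEnvClient() // reads DOCKER_HOST etc. from the environment
	if err != nil {
		panic(err)
	}
	code, err := waitByPolling(context.Background(), cli, "my-container", 500*time.Millisecond)
	fmt.Println("exit code:", code, "err:", err)
}

One caveat with polling: if the container is removed between exiting and your next inspect (for example via --rm), ContainerInspect returns a not-found error instead of an exit code, so callers need to handle that case explicitly.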
I see that Docker containers on my host show as Running/Up; however, when I try to exec into them, I see:
rpc error: code = 2 desc = containerd: container not found
I don't see any related processes in the ps -aef output.
Looking through the dockerd logs, I see:
level=error msg="containerd: get exit status" error="containerd: process has not exited" id=e4e5d58359 pid=bba1944c4 systemPid=5132
docker version:
Client:
Version: 1.13.1
API version: 1.26
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 06:50:14 2017
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 06:50:14 2017
OS/Arch: linux/amd64
Experimental: false
What might be causing this behavior? Any pointers?
This issue has been fixed since v17.12.
Version 18.03 is the latest supported release, so you should upgrade your Docker to the latest edition.
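If you cannot upgrade right away, a common stopgap for this dockerd/containerd desync is to restart the daemon so the two re-synchronize, then remove the stuck container (the container name is illustrative; note that without live-restore a daemon restart stops all containers on the host):
$ sudo systemctl restart docker
$ docker rm -f stuck-container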
I am running Docker on a Windows virtual machine, and when I try to run docker run, the VM crashes with an unhandled exception and I get logged out of the VM.
The VM is Windows Server 2016 running on a Windows Server 2012 host.
The docker version info:
Client:
Version: 17.06.2-ee-6
API version: 1.30
Go version: go1.8.3
Git commit: e75fdb8
Built: Mon Nov 27 22:46:09 2017
OS/Arch: windows/amd64
Server:
Version: 17.06.2-ee-6
API version: 1.30 (minimum version 1.24)
Go version: go1.8.3
Git commit: e75fdb8
Built: Mon Nov 27 22:55:16 2017
OS/Arch: windows/amd64
Experimental: false
The Docker logs for this specific container get corrupted every time I try to run it, so I can't diagnose what happened. Please suggest what I can do.
On Windows, I ended up running Docker inside a Vagrant Debian/CentOS box.
It's lightweight, and I never had a problem.
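If you still need the daemon-side error before switching setups: on Windows Server the Docker service writes its logs to the Application event log, which survives a corrupted container log; a hedged PowerShell example:
Get-EventLog -LogName Application -Source docker -After (Get-Date).AddHours(-1) | Sort-Object TimeGenerated | Format-List TimeGenerated, Message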
I still have this issue. I have Docker version 17.12.0-ce-mac49 (21995) on my Mac. I am running IBM Cloud Private on 4 VMs on Ubuntu 16. When I run docker version on my master node, I get the following:
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:11:19 2017
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:09:53 2017
OS/Arch: linux/amd64
Experimental: false
On my Mac, if I run the command:
docker login mycluster.icp:8500
I log in with my user ID and password and get the following response:
Error response from daemon: Get https://mycluster.icp:8500/v2/: x509: certificate signed by unknown authority
I am trying to load a Docker image into IBM Cloud Private and get the same error. Any help would be greatly appreciated.
You need to copy the ICP registry certificate to the host from which you want to push images. You can refer to the ICP documentation below for details.
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/manage_images/configuring_docker_cli.html
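On a Linux client, Docker trusts a private registry whose CA sits under /etc/docker/certs.d/<registry:port>/ca.crt. A sketch, assuming the certificate lives at the documented path on the ICP master (on macOS you add the certificate to the system keychain and restart Docker instead):
$ scp root@mycluster.icp:/etc/docker/certs.d/mycluster.icp:8500/ca.crt .
$ sudo mkdir -p /etc/docker/certs.d/mycluster.icp:8500
$ sudo cp ca.crt /etc/docker/certs.d/mycluster.icp:8500/ca.crt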
$ docker-compose up
Creating network "app_default" with the default driver
ERROR: b'failed to parse pool request for address space "LocalDefault" pool "" subpool "": could not find an available predefined network'
What is the meaning of this error, and how can I fix it?
Additional context:
$ docker-compose version
docker-compose version 1.7.1, build 6c29830
docker-py version: 1.8.1
CPython version: 3.5.1
OpenSSL version: OpenSSL 1.0.2h 3 May 2016
$ docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 21:49:11 2016
OS/Arch: darwin/amd64
Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Wed Apr 27 00:34:20 2016
OS/Arch: linux/amd64
Are you using a VPN service? VPN clients often claim routes that overlap Docker's predefined address pools, which leaves libnetwork unable to find a free subnet for the new network.
Here is a link to a possible reason:
https://github.com/docker/libnetwork/issues/779
I was having this problem. I solved it by removing all Docker-defined networks with:
docker network rm `docker network ls -q`
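If the predefined pools keep colliding (for example because the VPN reclaims them), an alternative is to pin the compose project's default network to a free subnet; a hedged fragment for a version 2 docker-compose.yml, with the subnet chosen arbitrarily:
networks:
  default:
    ipam:
      config:
        - subnet: 172.28.0.0/16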