I want to set up Rancher to manage my Kubernetes cluster.
I installed Rancher on Ubuntu, and when I try to connect the cluster I get an error.
Steps that I ran:
curl --insecure -sfL https://localhost/v3/import/zkzmqg5kjpcqmw9qplrskzwwvwhgbbzdk2z9dn579rhpp66wt65wgj.yaml | kubectl apply -f -
I got this error:
Error from server (Forbidden): <html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
Any idea how I can fix the problem?
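As an aside, the response body is a Jenkins login page (hudson.model.Hudson is a Jenkins class), which suggests that whatever is answering on localhost:443 is Jenkins rather than Rancher. A quick diagnostic sketch, assuming ss and docker are available on the host:
sudo ss -tlnp | grep ':443'
docker ps --format '{{.Names}} {{.Ports}}'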
$ docker login -u uploader -p ****** http://10.11.20.186:8082 [14:13:41]
Error: credentials key has https[s]:// prefix
$ docker -v [14:28:20]
podman version 3.4.1-dev
$ cat /etc/os-release [14:28:59]
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
sheng@B-Product-U-WEB01 /etc/containers/registries.conf.d
$ cat 003-nexus.conf [14:45:28]
[[registry]]
prefix = "10.11.20.186:8082"
location = "10.11.20.186:8082"
insecure = true
I want to log in to my Nexus repository manager.
However, when I attempt to do this, I get the following error:
Error: credentials key has https[s]:// prefix
This should work without a downgrade. Just remove 'https://':
docker login -u uploader -p ****** 10.11.20.186:8082
Alternatively, the error can be avoided by downgrading podman-docker:
dnf downgrade podman-docker -y
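If you go the downgrade route, you may also want to stop dnf from upgrading the package again behind your back. A sketch, assuming the versionlock plugin (python3-dnf-plugin-versionlock) is available:
dnf install -y python3-dnf-plugin-versionlock
dnf versionlock add podman-docker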
sheng@B-Product-U-WEB01 /etc/containers/registries.conf.d
$ docker -v [17:27:36]
podman version 3.3.0-dev
sheng@B-Product-U-WEB01 /etc/containers/registries.conf.d
$ docker login -u uploader -p ****** https://10.11.20.186:8082 [17:27:38]
Login Succeeded!
It works!
My Jenkins requires https://; without it, it fails with:
java.net.MalformedURLException: no protocol: quay.io
at java.net.URL.<init>(URL.java:611)
at java.net.URL.<init>(URL.java:508)
at java.net.URL.<init>(URL.java:457)
at org.jenkinsci.plugins.docker.commons.credentials.DockerRegistryEndpoint.getEffectiveUrl(DockerRegistryEndpoint.java:145)
at org.jenkinsci.plugins.docker.commons.credentials.DockerRegistryEndpoint.newKeyMaterialFactory(DockerRegistryEndpoint.java:300)
at org.jenkinsci.plugins.docker.workflow.RegistryEndpointStep$Execution2.newKeyMaterialFactory(RegistryEndpointStep.java:95)
at org.jenkinsci.plugins.docker.workflow.AbstractEndpointStepExecution2.doStart(AbstractEndpointStepExecution2.java:52)
at org.jenkinsci.plugins.workflow.steps.GeneralNonBlockingStepExecution.lambda$run$0(GeneralNonBlockingStepExecution.java:77)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
The solution turned out to be a system upgrade to the latest version of the RHEL 8.6 packages through dnf upgrade. Now I have
# docker --version
podman version 4.0.2
which allows me to use the https:// prefix without complaints.
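For reference, the upgrade itself was nothing more exotic than the commands implied above (a sketch):
sudo dnf upgrade -y
docker --version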
I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers, so I am running the Docker container as a sibling to the host's containers using docker run -v /var/run/docker.sock:/var/run/docker.sock.
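For context, a minimal sketch of how such a sibling CI container is started (the image name ci-image is a placeholder; the bpt-ci container name matches the listing further down):
docker run -it --name bpt-ci -v /var/run/docker.sock:/var/run/docker.sock ci-image /bin/bash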
The basic Docker setup seems to be working inside the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Since minikube does not complete the cluster creation in this setup, kind is the better fit for running Kubernetes in a (sibling) Docker container.
Because the (sibling) container does not know enough about its own setup, the networking comes out slightly wrong: kind (and minikube) select a loopback IP for the API server upon cluster creation, even though the actual container sits on a different IP in the host's Docker network.
To correct the networking, the (sibling) container needs to be connected to the network that actually hosts the Kubernetes image. The procedure is illustrated below:
1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 🙂
2.) Verify that the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" suffix added to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not accessible from the (sibling) container. To connect the container to the correct network, retrieve the Docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally, connect the (sibling) container (its ID should be stored in the $HOSTNAME environment variable) to the cluster's Docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns the Kubernetes control plane and CoreDNS URLs, as shown in the last step above, the configuration has succeeded.
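For convenience, steps 3.) through 6.) can be collapsed into one short shell snippet (a sketch, assuming the acluster name used above and kind's standard -control-plane naming scheme):
CLUSTER=acluster
MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${CLUSTER}-control-plane)
MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' ${CLUSTER}-control-plane)
sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
docker network connect $MASTER_NET $HOSTNAME
kubectl cluster-info --context kind-$CLUSTER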
You can run minikube in a Docker-in-Docker container. It will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is required because minikube is being run with the docker driver as root, which we shouldn't do according to the minikube instructions.
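When you're done, the whole nested environment can be discarded by removing the outer container (dind is the container name from the first command above):
docker rm -f dind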
I'm trying to run the Sentry on-premise version (https://github.com/getsentry/onpremise).
The logs show:
relay_1 | 2020-09-09T10:45:13Z [relay_server::actors::upstream] ERROR: authentication encountered error: could not send request to upstream
relay_1 | caused by: Failed to connect to host: Failed resolving hostname: no record found for name: web.router703710.com. type: AAAA class: IN
relay_1 | caused by: Failed resolving hostname: no record found for name: web.router703710.com. type: AAAA class: IN
relay_1 | caused by: Failed resolving hostname: no record found for name: web.router703710.com. type: AAAA class: IN
router703710.com is the DNS domain assigned by our Cisco router, although we don't use it in any way; I just see that Cisco has that DNS set up in the router's settings page.
From the host machine where Docker is running, I can't connect to router703710.com or www.router703710.com.
So how do I tell Docker not to use that DNS and to use some other DNS server that actually works?
The only things I can think of are the following settings, and I'm not sure they'll make a difference:
network_mode: host
dns:
- 192.168.1.1
The Docker image seems to be https://hub.docker.com/r/getsentry/relay/ but I don't know how to see the actual Dockerfile.
This has been discussed in https://github.com/getsentry/onpremise/issues/771, and it seems the issue comes from Docker itself. agrevtcev provided a workaround in that thread: downgrade Docker to 19.03.
I made some changes to the commands: the latest 19.03 release of Docker is 19.03.15 (not the 19.03.14 in the original workaround), and I use apt-mark to prevent Docker from being upgraded automatically (note this is for Ubuntu 20.04, focal):
apt install -y --allow-downgrades docker-ce=5:19.03.15~3-0~ubuntu-focal docker-ce-cli=5:19.03.15~3-0~ubuntu-focal
apt-mark hold docker-ce docker-ce-cli
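Alternatively, if downgrading Docker is not an option, the DNS servers Docker passes to containers can be overridden globally via the "dns" key in /etc/docker/daemon.json, followed by a daemon restart. A sketch, reusing the 192.168.1.1 resolver from the compose snippet above (careful: this overwrites any existing daemon.json):
echo '{ "dns": ["192.168.1.1", "8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker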
I have a Jenkins pod running in OpenShift.
I have enabled anyuid for my project as per http://appagile.io/2017/03/29/how-to-run-a-pod-as-root/.
However, I'm unable to get the initial admin password due to the following issue:
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2F' /><script>window.location.replace('/login?from=%2F');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
As per the solution mentioned in https://stackoverflow.com/a/41055670,
I tried adding the jenkins user to the root group:
sudo usermod -a -G root jenkins
But the sudo package was not installed in the Jenkins pod.
I tried su -i, but it asks for the root password, which I don't know.
In Docker, I use the docker exec -u root option to log in as root.
But I don't see any such option in oc. Are there any other options we can try to resolve this issue?
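For what it's worth, reading the initial admin password usually doesn't require root at all, since the secrets file is owned by the same user the Jenkins pod runs as. A sketch, with jenkins-1-abcde as a hypothetical pod name:
oc rsh jenkins-1-abcde cat /var/jenkins_home/secrets/initialAdminPassword
If the pod uses the OpenShift Jenkins image, which sets JENKINS_HOME to /var/lib/jenkins, adjust the path accordingly:
oc rsh jenkins-1-abcde cat /var/lib/jenkins/secrets/initialAdminPassword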
I am using the ice command line interface for IBM Container Services, and I am seeing a couple of different problems from a couple of different boxes I am testing with. Here is one example:
[root@cds-legacy-monitor ~]# ice --verbose login --org chrisr@ca.ibm.com --space dev --user chrisr@ca.ibm.com --registry registry-ice.ng.bluemix.net
#2015-11-26 01:38:26.092288 - Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org='chrisr@ca.ibm.com', psswd=None, reg_host='registry-ice.ng.bluemix.net', skip_docker=False, space='dev', subparser_name='login', user='chrisr@ca.ibm.com', verbose=True)
#2015-11-26 01:38:26.092417 - Executing: cf login -u chrisr@ca.ibm.com -o chrisr@ca.ibm.com -s dev -a https://api.ng.bluemix.net
API endpoint: https://api.ng.bluemix.net
Password>
Authenticating...
OK
Targeted org chrisr@ca.ibm.com
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.40.0)
User: chrisr@ca.ibm.com
Org: chrisr@ca.ibm.com
Space: dev
#2015-11-26 01:38:32.186204 - cf exit level: 0
#2015-11-26 01:38:32.186340 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.186640 - Bearer: <long string omitted>
#2015-11-26 01:38:32.186697 - cf login succeeded. Can access: https://api-ice.ng.bluemix.net/v3/containers
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v3/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
#2015-11-26 01:38:32.187317 - using bearer token
#2015-11-26 01:38:32.187350 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.187489 - Bearer: <long pw string omitted>
#2015-11-26 01:38:32.187517 - Org Guid: dae00d7c-1c3d-4bfd-a207-57a35a2fb42b
#2015-11-26 01:38:32.187551 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
FATA[0012] Error response from daemon: </html>
#2015-11-26 01:38:44.689721 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:38:44.689842 - Exit err level = 2
On the other box, it also fails, but the final error is slightly different.
#2015-11-26 01:44:48.916034 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
Error response from daemon: Unexpected status code [502] : <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
#2015-11-26 01:45:02.582753 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:45:02.582868 - Exit err level = 2
Any thoughts on what might be causing these issues?
Both errors refer to the same problem: ice isn't finding a working local Docker environment.
That doesn't prevent you from working remotely on Bluemix, but without a local Docker environment ice cannot work with local containers.
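A quick way to confirm whether ice can see a usable local Docker daemon before logging in (a sketch):
docker info >/dev/null 2>&1 && echo 'local docker OK' || echo 'docker missing or misconfigured'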