kubernetes : PTY allocation request failed - docker

I get this error when I try to connect to a node with the Kubernetes master running as a container: "PTY allocation request failed on channel 0"
Steps to reproduce:
1. I run a Mac with OS X El Capitan 10.11.1.
2. Download the standard CentOS 7.1 image from osboxes.
3. Start it in VirtualBox 5.0.10: 1 NATted interface, 1 port forward from host:2200->guest:22.
4. Install Docker 1.9.
5. SSH into the CentOS guest.
6. Run the following (as per the Kubernetes user manual):
6.a docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
6.b docker run --volume=/:/rootfs:ro --volume=/sys:/sys:ro --volume=/dev:/dev --volume=/var/lib/docker/:/var/lib/docker:ro --volume=/var/lib/kubelet/:/var/lib/kubelet:rw --volume=/var/run:/var/run:rw --net=host --pid=host --privileged=true -d gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
6.c docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
7. SSH into the CentOS guest again; you'll get the following error: "PTY allocation request failed on channel 0"
I'm opening this issue against Kubernetes because otherwise the above configuration seems to work fine; the issue only shows up when I start Kubernetes.
Thanks
Raffaele
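
One quick check (a hedged diagnostic, not from the original report): ask ssh to skip the PTY request with -T. If a plain session still works, sshd itself is fine and only pseudo-terminal allocation (e.g. /dev/pts inside the guest) is broken. The user name below is an assumption:
ssh -T -p 2200 osboxes@localhost   # -T disables PTY allocation, so the failing request is never sent
ls -l /dev/pts                     # on the guest: the pts filesystem should still be mounted and populated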

Related

SSL (curl) connection error in Elasticsearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps below.
On one of the master nodes, es11, I get the error below; the same curl command works fine on the other two nodes, es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below is the error from the logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs again to es12 and es13?
Below is elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a docker swarm. On es11 run docker swarm init, then follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use this busybox container to make the overlay network available on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster with docker exec -it es11 sh, then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(As per your https://discuss.elastic.co thread:)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol that is not HTTP-accessible.
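A minimal sketch of the corrected calls implied by this (assuming the mappings above: host 9216 -> container 9200 for the HTTP API; 9316 is transport only; -k is only needed while the certificate is self-signed):
curl -k 'https://localhost:9216'    # HTTP API via the mapped port, not 9316
docker exec -it es11 sh             # then, inside the container, target the HTTP port:
bin/elasticsearch-setup-passwords interactive --url https://localhost:9200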

Connecting with Portainer: "resource is online but isn't responding to connection attempts"

I installed Ubuntu on an older laptop. Docker with Portainer is now running on it, and I want to access Portainer from my main PC on the same network. When I try to connect to Portainer on the laptop where it is running (not via the localhost address), it works fine. But when I try to connect from my PC, I get a timeout. Windows diagnostics says: "resource is online but isn't responding to connection attempts". How can I open Portainer to my local network? Or is this a problem with Ubuntu?
First, check that you have the OpenSSH server running (for SSH access). Disable the firewall from a terminal: sudo ufw disable. Then check whether your network card is named eth0 with ifconfig; if not, change it following the steps below.
Ubuntu uses netplan by default these days; the file is /etc/netplan/00-installer-config.yaml. But before editing it you need to get the interface's serial/MAC address.
Find the target device's MAC/hardware address using the lshw command:
lshw -C network
You'll see some output which looks like:
root@ys:/etc# lshw -C network
*-network
description: Ethernet interface
physical id: 2
logical name: eth0
serial: dc:a6:32:e8:23:19
size: 1Gbit/s
capacity: 1Gbit/s
capabilities: ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bcmgenet driverversion=5.8.0-1015-raspi duplex=full ip=192.168.0.112 link=yes multicast=yes port=MII speed=1Gbit/s
So then you take the serial:
dc:a6:32:e8:23:19
Note the set-name option; this works for the wifi section as well.
If you are using cable, you can delete everything and add the example below, changing only the serial ("MAC") to yours: sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      match:
        macaddress: <YOUR MAC ID HERE>
      set-name: eth0
Then, to test this config, run:
netplan try
When you're happy with it:
netplan apply
Reboot your Ubuntu machine. After the restart:
Stop the Portainer container:
sudo docker stop portainer
Remove the Portainer container:
sudo docker rm portainer
Now run it again on the latest version:
docker run -d -p 8000:8000 -p 9000:9000 \
--name=portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:2.13.1
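To verify the published port is actually reachable over the LAN (a quick check; the laptop IP 192.168.0.112 comes from the lshw output above and may differ on your machine):
sudo ss -tlnp | grep -E ':(8000|9000)'   # on the laptop: Docker should be listening on both published ports
curl -I http://192.168.0.112:9000        # from the main PC: the Portainer UI should answer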

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers, so I am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock
The basic docker setup seems to be working on the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Because minikube does not complete the cluster creation, running Kubernetes in a (sibling) Docker container favours kind.
Since the (sibling) container does not know enough about its setup, the networking connections are a bit flawed: specifically, a loopback IP is selected by kind (and minikube) upon cluster creation, even though the actual container sits on a different IP in the host docker.
To correct the networking, the (sibling) container needs to be connected to the network actually hosting the Kubernetes image. The procedure is illustrated below:
1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
βœ“ Preparing nodes πŸ“¦
βœ“ Writing configuration πŸ“œ
βœ“ Starting control-plane πŸ•ΉοΈ
βœ“ Installing CNI πŸ”Œ
βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify if the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" suffix added to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not accessible by the (sibling) container. To connect the container to the correct network, retrieve the docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally, connect the (sibling) container ID (which should be stored in the $HOSTNAME environment variable) to the cluster's docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns Kubernetes control plane and CoreDNS URL, as shown in the last step above, the configuration has succeeded.
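For reuse in CI, steps 3.) through 6.) collapse into a short shell sketch (assuming the cluster name acluster from above):
# Patch the kubeconfig to the control plane's real IP, then join this
# sibling container to the cluster's docker network.
MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
docker network connect $MASTER_NET $HOSTNAME
kubectl cluster-info --context kind-acluster   # should now print the control plane URL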
You can run minikube in a Docker-in-Docker container; it will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is needed to run minikube with the docker driver as root, which we shouldn't do according to the minikube instructions.

How to access etcd in docker

I create a container with:
docker run -d --name etcd \
-v /usr/share/ca-certificates/:/etc/ssl/certs \
quay.io/coreos/etcd:v3.0.4 /usr/local/bin/etcd -advertise-client-urls \
http://0.0.0.0:2379 -listen-client-urls http://0.0.0.0:2379
And I use
docker exec 40cc9457f132 ifconfig
to get its IP, 172.17.0.2.
Then I use the local etcdctl to get the data:
etcdctl --endpoint=http://172.17.0.2:2379 get /testdir/testkey1
but it fails with:
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 0.0.0.0:2379: getsockopt: connection refused
What should I do?
PS:
To make sure the data is actually stored in the container, I stopped the local etcd first:
systemctl stop etcd
If I don't do that, I can get the data, but it doesn't match the result of
docker exec 40cc9457f132 etcdctl get /testdir/testkey1
("40cc9457f132" is the container ID).
OK, I fixed it. It was a version problem.
My local etcd was v2.2.4 (installed via apt), while the etcd image was v3.0.4.
I updated both to v3.3.5 and set ETCDCTL_API=3.
Now everything seems all right.
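A minimal sketch of the v3-style calls (with ETCDCTL_API=3 the flag is --endpoints, plural, and put replaces the v2 set):
export ETCDCTL_API=3
etcdctl --endpoints=http://172.17.0.2:2379 put /testdir/testkey1 "some value"
etcdctl --endpoints=http://172.17.0.2:2379 get /testdir/testkey1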
It seems the same port is being used by both the localhost etcd and the Docker container.
Please assign another port to the container, for example map 2379 -> 2380:
etcdctl --endpoint=http://localhost:2379 -> localhost
etcdctl --endpoint=http://localhost:2380 -> docker container
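A sketch of that remapping, reusing the image from the question (the container name etcd2 is my choice, to avoid clashing with the earlier container): host port 2380 is published onto the container's 2379, so the local etcd keeps 2379.
docker run -d --name etcd2 -p 2380:2379 \
 -v /usr/share/ca-certificates/:/etc/ssl/certs \
 quay.io/coreos/etcd:v3.0.4 /usr/local/bin/etcd \
 -advertise-client-urls http://0.0.0.0:2379 \
 -listen-client-urls http://0.0.0.0:2379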

Docker neo4j container just hangs

Pretty straightforward:
christian@christian:~/development$ docker -v
Docker version 1.6.2, build 7c8fca2
I ran this command to start the container:
docker run --detach --name neo4j --publish 7474:7474 \
--volume $HOME/neo4j/data:/data neo4j
Nothing exciting here; this should all just work.
But http://localhost:7474 doesn't respond. When I jump into the container, it seems to respond just fine (see the debug session below). What did I miss?
christian@christian:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d9e0d5d2f73 neo4j:latest "/docker-entrypoint. 15 minutes ago Up 15 minutes 7473/tcp, 0.0.0.0:7474->7474/tcp neo4j
christian@christian:~$ curl http://localhost:7474
^C
christian@christian:~$ time curl http://localhost:7474
^C
real 0m33.353s
user 0m0.008s
sys 0m0.000s
christian@christian:~$ docker exec -it 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2 /bin/bash
root@2d9e0d5d2f73:/var/lib/neo4j# curl http://localhost:7474
{
"management" : "http://localhost:7474/db/manage/",
"data" : "http://localhost:7474/db/data/"
}root@2d9e0d5d2f73:/var/lib/neo4j# exit
christian@christian:~$ docker logs 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2
Starting Neo4j Server console-mode...
/var/lib/neo4j/data/log was missing, recreating...
2016-03-07 17:37:22.878+0000 INFO No SSL certificate found, generating a self-signed certificate..
2016-03-07 17:37:25.276+0000 INFO Successfully started database
2016-03-07 17:37:25.302+0000 INFO Starting HTTP on port 7474 (4 threads available)
2016-03-07 17:37:25.462+0000 INFO Enabling HTTPS on port 7473
2016-03-07 17:37:25.531+0000 INFO Mounting static content at /webadmin
2016-03-07 17:37:25.579+0000 INFO Mounting static content at /browser
2016-03-07 17:37:26.384+0000 INFO Remote interface ready and available at http://0.0.0.0:7474/
I can't reproduce this; Docker 1.8.2 and 1.10.0 are OK with your case:
docker run --detach --name neo4j --publish 7474:7474 neo4j
curl -i 127.0.0.1:7474
HTTP/1.1 200 OK
Date: Tue, 08 Mar 2016 16:45:46 GMT
Content-Type: application/json; charset=UTF-8
Access-Control-Allow-Origin: *
Content-Length: 100
Server: Jetty(9.2.4.v20141103)
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
Try upgrading Docker and check the netfilter rules for forwarding.
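To inspect those rules (a quick check): the DOCKER chain in the nat table should hold a DNAT entry forwarding port 7474 to the container.
sudo iptables -t nat -L DOCKER -n   # expect a line like: DNAT tcp dpt:7474 to:<container-ip>:7474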
Instead of making the request to localhost, you'll want to use the docker-machine VM's IP address, which you can determine with this command:
docker-machine inspect default | grep IPAddress
or
curl -i http://$(docker-machine ip default):7474/
The default IP address is 192.168.99.100
OK, basically I removed the volume mount from the docker args and it works. Ultimately, I don't want an out-of-container mount anyway. Thank you @LoadAverage for cluing me in. It's still not "right", but for my purposes I don't care.
christian@christian:~/development$ docker run --detach --name neo4j --publish 7474:7474 neo4j
6c94527816057f8ca1e325c8f9fa7b441b4a5d26682f72d42ad17614d9251170
christian@christian:~/development$ curl http://127.0.0.1:7474
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
christian@christian:~/development$
