Deploy Mirantis Kubernetes Engine in VirtualBox - docker

I am trying to deploy Mirantis Kubernetes Engine in my VirtualBox (Ubuntu) VM.
I made a YAML file like this:
apiVersion: launchpad.mirantis.com/mke/v1.4
kind: mke
metadata:
  name: my-mke-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.100.194
      user: kub
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: manager
  - ssh:
      address: 192.168.100.194
      user: kub
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: worker
  mke:
    version: 3.3.7
    installFlags:
    - --pod-cidr="10.0.0.0/16"
    - --admin-username=admin
    - --admin-password=admin
  mcr:
    version: 20.10.0
  cluster:
    prune: false
But I have issues connecting over SSH; the error output is:
FATA failed on 2 hosts:
[ssh] 192.168.100.194:22: All attempts fail:
#1: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
[ssh] 192.168.100.194:22: All attempts fail:
#1: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

The issue that you are facing is most likely related to SSH authorization.
As you can see in the following documentation:
Target machines must be configured for access via SSH using keys instead of passwords, and for passwordless use of sudo for the administrative account. This is the standard for AWS EC2 VMs.
-- Mirantis.com: Download: Mirantis cloud native platform: Mirantis kubernetes engine
I tried to replicate the same error, and it occurred when the public SSH key wasn't present on the target machine (in /home/$USER/.ssh/authorized_keys or /root/.ssh/authorized_keys, depending on the setup):
INFO ==> Running phase: Open Remote Connection
INFO See /SOME/PATH/.mirantis-launchpad/cluster/hello-cluster/apply.log for more logs
FATA failed on 2 hosts:
- [ssh] 192.168.0.123:22: All attempts fail:
#1: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
- [ssh] 192.168.0.123:22: All attempts fail:
#1: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
To fix that, you need to configure passwordless SSH login (and passwordless sudo) on your target machine, for example:
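A minimal sketch of that setup, run from the machine where you invoke launchpad (the user kub and the address are taken from your spec; adjust them to your environment):

# Generate a key pair if ~/.ssh/id_rsa does not exist yet
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
# Copy the public key into the target VM's authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub kub@192.168.100.194
# On the target VM: allow passwordless sudo for the kub account
echo "kub ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/kub
# Verify: this should complete without any password prompt
ssh -i ~/.ssh/id_rsa kub@192.168.100.194 "sudo true"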
Note also that both of your host entries use the same IP address (192.168.100.194). Progressing this provisioning process further should then show you a duplicate hostname error, since launchpad expects each host entry to be a separate machine; a corrected hosts section is sketched after this paragraph.
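If you intend separate manager and worker nodes, give each host its own address (the .195 address below is hypothetical, standing in for a second VM):

hosts:
- ssh:
    address: 192.168.100.194
    user: kub
    port: 22
    keyPath: ~/.ssh/id_rsa
  role: manager
- ssh:
    address: 192.168.100.195  # hypothetical second VM; each node needs its own address
    user: kub
    port: 22
    keyPath: ~/.ssh/id_rsa
  role: worker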
Additional resources:
Mirantis.com: Software: Mirantis: Mirantis Kubernetes Engine

Related

minikube start does not start minikube when the command is kept in /etc/rc.d/rc.local on AWS EC2

I want
minikube start
to run from /etc/rc.d/rc.local, as this script executes every time the EC2 instance starts.
It fails to start minikube when kept in rc.local, but when I execute it as a non-root user, it works.
Any help is appreciated to make it work from the rc.local script.
Update:
I've added minikube start --force --driver=docker
This time, it says:
E0913 18:12:21.898974 10063 status.go:258] status error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to list containers for "kube-apiserver": docker: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
etc etc
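No answer is reproduced here, but a common cause is that minikube's docker driver refuses to run as root, and rc.local runs as root. A sketch of a workaround (assuming a non-root user named ubuntu for which minikube already works) is to switch users inside rc.local:

#!/bin/sh
# /etc/rc.d/rc.local -- must be executable
# Run minikube as the non-root user that owns the cluster,
# because the docker driver does not support running as root.
su - ubuntu -c "minikube start --driver=docker"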

Spring Cloud Data Flow: Error org.springframework.dao.InvalidDataAccessResourceUsageException

I am trying to run/configure Spring Cloud Data Flow (SCDF) to schedule a task for a Spring Batch job.
I am running it in minikube, which connects to a local PostgreSQL instance (localhost:5432). Minikube runs in VirtualBox, where I assigned a network via --cidr so minikube can connect to the local Postgres.
Here is the postgresql service yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/postgres-service.yaml
Here is the SCDF config yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/server-config.yaml
Here is the SCDF deployment yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/server-deployment.yaml
Here is the SCDF server-svc.yaml:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/server-svc.yaml
To launch the SCDF server in minikube I do the following kubectl commands:
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f postgres-service.yaml
kubectl create -f server-roles.yaml
kubectl create -f server-rolebinding.yaml
kubectl create -f service-account.yaml
kubectl apply -f server-config.yaml
kubectl apply -f server-svc.yaml
kubectl apply -f server-deployment.yaml
I am not running Prometheus, Grafana, or Kafka/RabbitMQ, as I only want to test that I can launch the Spring Batch job from SCDF. I did not run the Skipper deployment (Spring Cloud Data Flow server running locally, pointing to Skipper in Kubernetes); it is not necessary if you are just running tasks.
This is the error I am getting when trying to add an application from a private Docker repo:
And this is the full error stack from the pod:
https://github.com/msuzuki23/SpringCloudDataFlowDemo/blob/main/SCDF_Log_Error
Highlights from the error stack:
2021-07-08 13:04:13.753 WARN 1 --- [-nio-80-exec-10] o.s.c.d.s.controller.AboutController : Skipper Server is not accessible
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:7577/api/about": Connect to localhost:7577 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:7577 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Postgres Hibernate error:
2021-07-08 13:05:22.142 WARN 1 --- [p-nio-80-exec-5] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 42P01
2021-07-08 13:05:22.200 ERROR 1 --- [p-nio-80-exec-5] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: relation "hibernate_sequence" does not exist
Position: 17
2021-07-08 13:05:22.214 ERROR 1 --- [p-nio-80-exec-5] o.s.c.d.s.c.RestControllerAdvice : Caught exception while handling a request
org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could
The first couple of errors come from SCDF trying to connect to Skipper; since Skipper was not configured, those were expected.
The second error is from Postgres via JDBC/Hibernate. How do I solve that?
Is there a configuration I am missing when pointing SCDF at the local Postgres?
Also, in the jar inside my Docker image I have not added any annotation such as @EnableTask.
Any help is appreciated, thanks! Markus.
I did a search on
Caused by: org.postgresql.util.PSQLException: ERROR: relation
"hibernate_sequence" does not exist Position: 17
And found this Stack Overflow answer:
Postgres error in batch insert : relation "hibernate_sequence" does not exist position 17
Went into Postgres and created the hibernate_sequence:
CREATE SEQUENCE hibernate_sequence START 1;
Then adding the application worked.

Failed attempt to connect Metricbeat to Elasticsearch and Kibana with Docker

I am trying to bring up Elasticsearch, Kibana, and Metricbeat through docker-compose.
This is the code.
ELK is working fine, but Metricbeat fails. I am getting this error and am not able to figure out how to solve it:
MetricBeat Log:
2020-05-09T19:27:03.353Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://elasticsearch:9200)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license from the /_xpack endpoint, Metricbeat requires the default distribution of Elasticsearch. Please make the endpoint accessible to Metricbeat so it can verify the license.: could not extract license information from the server response: unknown state, received: 'expired'
The license has expired, but I was hoping it would work without it.
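One way past this (a sketch, assuming an expired self-generated trial license and that you do not need the paid features) is to switch Elasticsearch back to the free Basic license, either via an environment variable on the elasticsearch service in docker-compose.yml:

environment:
  - xpack.license.self_generated.type=basic

or by calling the license API on the running cluster:

curl -XPOST "http://localhost:9200/_license/start_basic?acknowledge=true"

Restart Metricbeat afterwards so it re-checks the license.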

Docker for Desktop Kubernetes Unable to connect to the server: dial tcp [::1]:6445

I am using Docker for Desktop on Windows 10 Professional with Hyper-V; I am not using minikube. I have installed the Kubernetes cluster via Docker for Desktop, as shown below:
It shows that Kubernetes is successfully installed and running.
When I run the following command:
kubectl config view
I get the following output:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6445
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
However, when I run
kubectl cluster-info
I am getting the following error:
Unable to connect to the server: dial tcp [::1]:6445: connectex: No connection could be made because the target machine actively refused it.
It seems like there is some network issue; I am not sure how to resolve this.
I know this is an old question, but the following helped me resolve a similar issue. The root cause was that I had minikube installed previously, and it was being used as my default context.
I was getting following error:
Unable to connect to the server: dial tcp 192.168.1.8:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
In the power-shell run the following command:
> kubectl config get-contexts
CURRENT   NAME                 CLUSTER          AUTHINFO         NAMESPACE
          docker-desktop       docker-desktop   docker-desktop
          docker-for-desktop   docker-desktop   docker-desktop
*         minikube             minikube         minikube
This will list all the contexts; check whether there are multiple. If you installed minikube in the past, it will show a * mark as the currently selected default context. You can change that to point to the docker-desktop context as follows:
> kubectl config use-context docker-desktop
Run the get-contexts command again to verify that the * mark has moved.
Now, the following command should work:
> kubectl get pods
Posting a response to this very old question, as I was searching for a solution and found a different cause for my problem; the solution was simple.
The cause was that the config file was missing from the $HOME/.kube directory.
A simple restart of Docker Desktop restored the file with some defaults, and things were back to normal.
Side note: the issue started after I upgraded my Docker Desktop installation to the latest version (when I got the update-available popup). I should also mention that the cluster had stopped working and I had to manually remove Docker Desktop and reinstall the latest version before the problem described above occurred.

Pod creation in ContainerCreating state always

I am trying to create a pod in Kubernetes with the following simple command:
kubectl run example --image=nginx
It runs and assigns the pod to the minion correctly, but the status always stays in ContainerCreating due to the following error. I have not set up GCR or GCloud on my machine, so I am not sure why it is pulling from there.
1h 29m 14s {kubelet centos-minion1} Warning FailedSync Error syncing pod, skipping:
failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed
for gcr.io/google_containers/pause:2.0, this may be because there are no
credentials on this request. details: (unable to ping registry endpoint
https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: http: error connecting to proxy http://87.254.212.120:8080: dial tcp 87.254.212.120:8080: i/o timeout
v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: http: error connecting to proxy http://87.254.212.120:8080: dial tcp 87.254.212.120:8080: i/o timeout)
Kubernetes is trying to create a pause container for your pod; this container is used to create the pod's network namespace. See this question and its answers for more general information on the pause container.
To your specific error: Kubernetes tries to pull the pause container's image (gcr.io/google_containers/pause:2.0, according to your error message) from the Google Container Registry (gcr.io). Apparently, your Docker engine tries to connect to GCR through an HTTP proxy located at 87.254.212.120:8080, to which it cannot connect (i/o timeout).
To correct this error, either make sure that your HTTP proxy server is online and does not block HTTP requests to GCR, or (if you do have public Internet access) disable the proxy connection for your Docker engine. This is typically done via the http_proxy and https_proxy environment variables, which would have been set in /etc/sysconfig/docker or /etc/default/docker, depending on your Linux distribution.
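A sketch of the second option (assuming a distribution that reads /etc/sysconfig/docker; file paths and variable names vary by distribution and Docker version):

# /etc/sysconfig/docker -- comment out or remove the proxy settings,
# or exempt gcr.io from proxying:
# http_proxy=http://87.254.212.120:8080
# https_proxy=http://87.254.212.120:8080
NO_PROXY=gcr.io,*.gcr.io

Then restart the Docker daemon so the change takes effect:
systemctl restart docker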
