Rancher: enabling monitoring via Marketplace fails - monitoring

Rancher 2.5.8
k8s: 1.20.11
Longhorn 1.2.2
Rancher Monitoring Chart 14.5.100
I am very confused by my issue. I have a single Rancher server with a number of clusters. On one cluster, running k8s 1.20.11, I was able to install the Monitoring chart from Cluster Manager without incident. Everything's up and running and people are happy.
However, on the same Rancher server, another cluster also running 1.20.11 cannot get the Prometheus pods to mount their persistent volume. Both clusters use Longhorn 1.2.2 with no special modifications on either. I have verified that the settings on the second cluster match the first one, which works.
Error:
Unable to attach or mount volumes: unmounted volumes=[prometheus-rancher-monitoring-prometheus-db], unattached volumes=[config prometheus-nginx nginx-home config-out tls-assets prometheus-rancher-monitoring-prometheus-db prometheus-rancher-monitoring-prometheus-rulefiles-0 rancher-monitoring-prometheus-token-pd5ps]: volume prometheus-rancher-monitoring-prometheus-db has volumeMode Block, but is specified in volumeMounts
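To narrow down where the Block volumeMode is coming from, it may help to compare the Prometheus PVC and the Longhorn StorageClass on both clusters. This is only a diagnostic sketch: the namespace below assumes a default Rancher Monitoring install (cattle-monitoring-system) and the PVC name is a placeholder.

# List the monitoring PVCs, then check volumeMode and StorageClass of the Prometheus one.
kubectl -n cattle-monitoring-system get pvc
kubectl -n cattle-monitoring-system get pvc <prometheus-pvc-name> \
  -o jsonpath='{.spec.volumeMode}{"\n"}{.spec.storageClassName}{"\n"}'

# Compare the Longhorn StorageClass between the working and failing clusters.
kubectl get storageclass longhorn -o yaml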

Related

Running NodeManager inside a Docker container and adding it to an existing Hadoop cluster

I have created a Hadoop cluster using Ambari. Now, on a new VM, I need to create a Docker container that joins this Hadoop cluster and runs the NodeManager.
Docker shouldn't run a NodeManager. That would effectively make a memory-constrained environment responsible for further memory-constrained JVM containers.
A NodeManager should be installed directly on the host OS. YARN can then be configured to run Docker containers - https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/DockerContainers.html
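Once the properties described on that page are in place, a job can request the Docker runtime for its containers through environment variables. A rough distributed-shell sketch (jar path, version, and image name are placeholders):

# Ask YARN to run this job's containers with the Docker runtime
# (the YARN_CONTAINER_RUNTIME_* variables come from the DockerContainers doc above).
yarn jar /path/to/hadoop-yarn-applications-distributedshell-<version>.jar \
  -jar /path/to/hadoop-yarn-applications-distributedshell-<version>.jar \
  -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker \
  -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=<your-image> \
  -shell_command "hostname" \
  -num_containers 1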
Alternatively, YuniKorn just became a top level Apache Project - https://yunikorn.apache.org/
(Ambari is dead; Kubernetes is now the direction for data analytics cluster configuration, installation, and application deployments.)

Using FlexVolume on local kubernetes cluster with docker-desktop

I'm trying to use FlexVolumes for mounting a file server and a key vault, respectively:
Git Repo
and Git Repo
However, mounting either of them causes the pods that need them to get stuck in ContainerCreating, with warnings about being unable to mount the volumes due to a timeout. There is a step in the configuration for non-AKS clusters that requires adjusting configs, which seems to be impossible when using a Docker-provided Kubernetes server.
Is it possible to install a FlexVolume driver on the Docker Kubernetes server, as outlined here: config kubelet service to enable FlexVolume driver, and if so, how do I access the config files? If not, is it possible at all to mount FlexVolume volumes when working locally with docker-desktop Kubernetes?
I've deployed the same configuration to the AKS cluster and it's working correctly.
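For context, a stock kubelet discovers FlexVolume drivers from a plugin directory on the node (overridable with the kubelet's --volume-plugin-dir flag), so the question above is really whether that directory is reachable inside docker-desktop's VM. A sketch of the usual install step on an ordinary Linux node, with vendor/driver names purely illustrative:

# Default directory the kubelet watches for FlexVolume drivers:
#   /usr/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>
DRIVER_DIR=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/example~mydriver
mkdir -p "$DRIVER_DIR"
cp mydriver "$DRIVER_DIR/mydriver"   # the driver is an executable implementing init/mount/unmount
chmod +x "$DRIVER_DIR/mydriver"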

How to fix "Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [10.xxx.xxx.36]" in Rancher?

When I try to join a node by selecting etcd, Control Plane and Worker in the Rancher UI, I get this error:
Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [10.xxx.xxx.36]
It seems Rancher could not download the required Docker images (like etcd, kubectl, ...) automatically, since in my environment image names must be prefixed with the proxy: <proxy_url>
example: docker pull <proxy_url>/ubuntu to download the ubuntu image.
Any help to resolve this would be appreciated. Thank you in advance!
You can define a private registry that Rancher should use to build downstream Kubernetes clusters by setting the system-default-registry parameter in the "Settings" section of the Rancher UI. Then, when you launch clusters, Rancher should use this registry to fetch the images. This assumes you have already copied the needed images to that registry (example of how to do that).
Since you already created this cluster, you'll need to regenerate the docker run command and re-apply it to the node.
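A rough sketch of what copying a single image into the private registry looks like, run from a machine that can reach both the upstream registry and <proxy_url> (the tag is a placeholder):

# Pull from the upstream registry, retag with the private registry prefix, and push.
docker pull rancher/hyperkube:<k8s-version>
docker tag rancher/hyperkube:<k8s-version> <proxy_url>/rancher/hyperkube:<k8s-version>
docker push <proxy_url>/rancher/hyperkube:<k8s-version>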

Kubernetes service not visible when Docker is initialized in Windows container mode

I'm testing the side-by-side Windows/Linux container experimental feature in Docker for Windows and all is going well. I can create Linux containers while the system is set to use Windows containers. I see my ReplicaSets, Services, Deployments, etc in the Kubernetes dashboard and all status indicators are green. The issue, though, is that my external service endpoints don't seem to resolve to anything when Docker is set to Windows container mode. The interesting thing, however, is that if I create all of my Kubernetes objects in Linux mode and then switch to Windows mode, I can still access all services and the Linux containers behind them.
Most of my Googling turned up errors involving services and Kubernetes, but my setup doesn't seem to be reporting any errors at all. Is there a configuration somewhere which must be set in order for this to work? Or is this just a hazard of running the experimental features?
Docker Desktop 2.0.0.3
Docker Engine 18.09.2
Kubernetes 1.10.11
Just to confirm your thoughts about experimental features:
Experimental features are not appropriate for production environments or workloads. They are meant to be sandbox experiments for new ideas. Some experimental features may become incorporated into upcoming stable releases, but others may be modified or pulled from subsequent Edge releases, and never released on Stable.
Please consider additional steps to resolve this issue:
The Kubernetes client command, kubectl, is included and configured to connect to the local Kubernetes server. If you have kubectl already installed and pointing to some other environment, such as minikube or a GKE cluster, be sure to change context so that kubectl is pointing to docker-for-desktop
> kubectl config get-contexts
> kubectl config use-context docker-for-desktop
If you installed kubectl by another method, and experience conflicts, remove it.
To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, select Enable Kubernetes and click the Apply and restart button.
By default, Kubernetes containers are hidden from commands like docker service ls, because managing them manually is not supported. To make them visible, select Show system containers (advanced) and click Apply and restart. Most users do not need this option.
Please also verify the System requirements.
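It may also be worth confirming, while Docker is in Windows container mode, that the service and its endpoints still look healthy from kubectl (standard checks; the service name is a placeholder):

kubectl get svc -o wide               # confirm the service and its external port still exist
kubectl get endpoints <service-name>  # confirm the service still has backing pod IPs
kubectl describe svc <service-name>   # check selectors and port mappings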

How does Swarm mode image orchestration work?

I have set up a 3-node cluster (with no Internet access) with 1 manager and 2 worker nodes using the standard swarm documentation.
How does the swarm manager in swarm mode know about the images present in worker nodes?
Let's say I have image A on worker-node-1, image B on worker-node-2, and no images on the manager node.
Now how do I start container for image A using the manager?
Will it start in manager or node-1?
When I query manager for the list of images will it give the whole list with A and B in it?
Does anyone know how this works?
I couldn’t get the details from the documentation.
A Docker Swarm manager node may also take on the worker role, but that is not strictly necessary.
The image deployment policy is defined in docker-compose.yml, which carries information such as target nodes, networks, hostnames, volumes, etc. for each service. So a container will start either on the node you specify or on whichever eligible node the scheduler picks by default (see the sketch below).
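As a minimal sketch of that, placement can be pinned in the stack file via deploy.placement.constraints (service, image, and hostname below are made up for illustration):

cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  app-a:
    image: 127.0.0.1:5000/app-a:latest
    deploy:
      placement:
        constraints:
          - node.hostname == worker-node-1   # pin this service to a specific node
EOF
docker stack deploy -c docker-compose.yml demo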
Swarm manager communicates with the worker nodes via Docker networks:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
Reference
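Both of those networks are visible on any node that has joined the swarm:

docker network ls                       # look for "ingress" (overlay) and "docker_gwbridge" (bridge)
docker network inspect ingress          # overlay network carrying swarm service traffic
docker network inspect docker_gwbridge  # bridge linking this daemon to the overlay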
During a swarm deployment, the images for its services are propagated to the worker nodes according to the deployment policy.
The manager node will also hold images if it acts as a worker too (correct me if it doesn't).
The default configuration with swarm mode is to pull images from a registry server and use pinning to reference a unique hash for those images. This can be adjusted, but there is no internal mechanism to distribute images within a cluster.
For an offline environment, I'd recommend a standalone registry server accessible to the cluster. You can even run it on the cluster. Push your images there, and point your services at that registry to pull their images. See this doc for details on running a standalone registry, or any of the many 3rd-party options (e.g. Harbor): https://docs.docker.com/registry/
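As a sketch of the run-it-on-the-cluster variant, the registry image itself can be started as a swarm service and then used as the image source in the stack file (port and image names are illustrative):

# Run a registry as a swarm service, published on port 5000 of every node.
docker service create --name registry --publish published=5000,target=5000 registry:2

# Tag and push an application image to it, then reference 127.0.0.1:5000/myapp in the stack file.
docker tag myapp:latest 127.0.0.1:5000/myapp:latest
docker push 127.0.0.1:5000/myapp:latest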
The other option is to disable the image pinning and manually copy images to each of your swarm nodes. You need to do this in advance of deploying any service changes. You'll also lose the benefit of reused image layers when you copy them manually. Because of all the issues this creates, the overhead to manage it, and the risk of mistakes, I'd recommend against this option.
Run the docker stack deploy command with --with-registry-auth; that gives the workers access to pull the needed images.
By default, Docker Swarm will pull the latest image from the registry when deploying.
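For example (registry host, compose file, and stack name are placeholders):

docker login <registry-host>
docker stack deploy --with-registry-auth -c docker-compose.yml mystack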
