I have an InfluxDB database and want to show the downsampled values in Grafana.
InfluxDB Version: 1.2
Grafana Version: 4.2
I have created an InfluxDB database:
> CREATE DATABASE "mydb3_rp"
Then I have created a retention policy:
> CREATE RETENTION POLICY "1week" ON "mydb3_rp" DURATION 1w REPLICATION 1
Then I have created the continuous query:
create continuous query "cq_10" on mydb3_rp begin
select mean(*) into "mydb3_rp"."1week".:MEASUREMENT
from /.*/
group by time(10m),*
end
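To double-check that the retention policy and the continuous query are registered (an extra sanity check in the influx CLI, not part of my original steps):
> SHOW RETENTION POLICIES ON "mydb3_rp"
> SHOW CONTINUOUS QUERIES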
Then I put some data into the database. When I finished, I wanted to see the saved values with the influx command line tool:
select * from cpu1
name: cpu1
time cluster node value
---- ------- ---- -----
2017-05-03T17:06:00Z cluster-2 node2 2.9552020666133956
2017-05-03T17:07:00Z cluster-2 node2 -1.5774569414324822
2017-05-03T17:08:00Z cluster-2 node2 0.16813900484349714
2017-05-03T17:09:00Z cluster-2 node2 1.244544235070617
2017-05-03T17:10:00Z cluster-1 node2 7.833269096274834
2017-05-03T17:10:00Z cluster-2 node2 -5.440211108893697
2017-05-03T17:11:00Z cluster-1 node2 -6.877661591839738
and so on...
And now I want to see if the continuous query was working and did the aggregation:
select * from "1week".cpu1
name: cpu1
time cluster mean_value node
---- ------- ---------- ----
2017-05-03T16:45:00Z cluster-1 1.074452901375393 node1
2017-05-03T16:45:00Z cluster-2 1.477524301989568 node1
2017-05-03T16:45:00Z cluster-1 0.8845193960173319 node2
2017-05-03T16:45:00Z cluster-2 -0.6551129796659627 node2
2017-05-03T16:50:00Z cluster-2 -1.6457347223119738 node1
2017-05-03T16:50:00Z cluster-2 0.6789712320493559 node2
...and so on
Now I go into Grafana and define a query like this:
FROM 1week cpu1
select field(value)
The result is: No Data Points.
I know that the question was asked two months ago, but maybe someone is still reading this.
I'm currently struggling with this too and found a working query in my case. In yours I think it should be:
select "mean_value" from "mydb3_rp"."1week"."cpu1" where $timeFilter
And other where conditions if you want to be more specific.
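For comparison, the query the Grafana editor builds from that FROM 1week cpu1 / field(value) selection is presumably something like:
select "value" from "1week"."cpu1" where $timeFilter
It looks for a field named value, but the continuous query wrote the downsampled field as mean_value, which is why you get no data points.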
K.
Related
I'm stuck deploying my microservices locally with the following stack: Skaffold, minikube, Helm, and Harbor.
These microservices can be deployed locally without any problem with Docker and docker-compose.
When I run skaffold dev, it stops at this point:
- statefulset/service0: Waiting for 1 pods to be ready...
When I describe the pod with the command:
kubectl describe pod service-0
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 12s (x3 over 13s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
I don't know what I am doing wrong... Any ideas?
https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/
Assign labels to a node to match your manifest or alter your manifest to match the nodeSelector statement in your YAML.
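To find out which label the pod is actually asking for, you can read the nodeSelector out of the pod spec and compare it with the labels currently on your node (standard kubectl commands; the pod name is taken from the question):
kubectl get pod service-0 -o jsonpath='{.spec.nodeSelector}'
kubectl get nodes --show-labels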
The problem was solved by running this command:
kubectl label node <node-name> role=server
in my case:
kubectl label nodes minikube role=server
I have set a docker swarm with multiple worker nodes.
My current JupyterHub setup with SwarmSpawner works fine: I am able to deploy single-user Docker images based on a user-selected image before spawning, using _options_form_default in my jupyterhub_config.py.
What I would like now is to give users the possibility to select the swarm worker node (hostname) on which they would like to spawn their single-user JupyterHub image, because our worker nodes have different hardware specs (GPUs, RAM, processors, etc.) and users know in advance the name of the host they would like to use.
Is it possible to determine the node on which to spawn the image?
My current swarm has for example 1 master node: "master" and 3 worker nodes: "node1", "node2", "node3" (those are their hostnames, as it appears in the column HOSTNAME in the output of the command docker node ls on the master node).
So what I would like is that, just as in the image below, users get a dropdown selection of the swarm worker node hostnames on which they would like to spawn their JupyterHub image, with a question such as: "Select the server name".
Ok so I actually figured out how to do that.
Here is the relevant part in my jupyterhub_config.py:
# assuming the SwarmSpawner from the dockerspawner package
from dockerspawner import SwarmSpawner

class CustomFormSpawner(SwarmSpawner):
    # Show a frontend form to the user for host selection.
    # The option values should correspond to the hostnames
    # that appear in the `docker node ls` command output.
    def _options_form_default(self):
        return """
        <label for="hostname">Select your desired host</label>
        <select name="hostname" size="1">
          <option value="node1">node1 - GPU: RTX 2070 / CPU: 40</option>
          <option value="node2">node2 - GPU: GTX 1080 / CPU: 32</option>
        </select>
        """

    # Retrieve the selected choice and set the swarm placement constraint
    # so that the single-user server is scheduled on that node.
    def options_from_form(self, formdata):
        options = {}
        options['hostname'] = formdata['hostname']
        # formdata values are lists, so join to get the plain hostname string
        hostname = ''.join(formdata['hostname'])
        self.extra_placement_spec = {'constraints': ['node.hostname==' + hostname]}
        return options

c.JupyterHub.spawner_class = CustomFormSpawner
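To verify that the constraint really pins the spawned server to the selected host, you can check on a swarm manager which node each single-user service landed on (plain docker CLI; the service name depends on your hub's naming scheme):
docker service ls
docker service ps <service-name> --format '{{.Name}} -> {{.Node}}'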
I am following all the steps from this link: https://github.com/justmeandopensource/kubernetes
After running the join command on the worker node, it gets added to the master, but the status of the worker node is not getting changed to Ready.
From the logs I got the following:
Container runtime network not ready: NetworkReady=false
reason:NetworkPluginNotReady message:dock
Unable to update cni config: No networks found in /etc/cni/net.d
kubelet.go:2266 -- node "XXXXXXXXX" not found
(XXXXXXXXX is the master's host/node name)
To set up CNI I am using flannel, and I have also tried weave and many other CNI networks, but the results are the same.
points to ponder:
---> worker node kubelet status is healthy
---> When I try to run the kubeadm init command on the worker node, it shows that the kubelet status might be unhealthy. (I am not able to make the worker node a master by running kubeadm init, but the kubeadm join command works. After joining, kubectl get nodes shows the worker node, but its status is NotReady.)
Thank you for the help
I cannot reproduce your issue. I followed exactly the instructions from the GitHub repo you shared and did not face a similar error.
The only extra step I needed to take to suppress the errors detected by the pre-flight checks of kubeadm init:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
was to set the appropriate flag by running:
echo '1' > /proc/sys/net/ipv4/ip_forward
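Note that writing to /proc this way only lasts until a reboot; to make the setting persistent I would normally also set it via sysctl (standard sysctl usage, not something the guide asks for):
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf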
State of my cluster nodes:
NAME STATUS ROLES AGE VERSION
centos-master Ready master 18h v1.13.1
centos-worker Ready <none> 18h v1.13.1
I verified the cluster condition by deploying and exposing a sample application, and everything seems to be working fine:
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
kubectl expose deployment hello-node --port=8080
I'm getting a valid response from the hello-world Node.js app:
curl 10.100.113.255:8080
Hello World!#
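(The ClusterIP used in that curl comes from the Service that kubectl expose created; it can be looked up with:)
kubectl get svc hello-node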
What IP addresses have you put into your /etc/hosts files?
I followed the official guide to install a Kubernetes cluster with kubeadm on Vagrant.
https://kubernetes.io/docs/getting-started-guides/kubeadm/
My Vagrant hosts are:
master
node1
node2
Master
# kubeadm init --apiserver-advertise-address=192.168.33.200
# sudo cp /etc/kubernetes/admin.conf $HOME/
# sudo chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
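As a quick sanity check (not part of the linked guide) you can confirm the flannel pods are running before joining the workers:
# kubectl get pods -n kube-system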
Node1 and Node2
# kubeadm join --token <token> 192.168.33.200:6443
...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Until now, everything succeeded.
But when I check kubectl get nodes on the master host, it returns only one node:
# kubectl get nodes
NAME STATUS AGE VERSION
localhost.localdomain Ready 25m v1.6.4
Sometimes, it returns:
# kubectl get nodes
Unable to connect to the server: net/http: TLS handshake timeout
Edit
I added a hostname to each of the hosts.
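For example, one way to do this on each VM (assuming a systemd-based distro; use the matching name on master and node2):
# hostnamectl set-hostname node1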
Then I checked kubectl get nodes again from the master:
[root@master ~]# kubectl get nodes
NAME STATUS AGE VERSION
localhost.localdomain Ready 4h v1.6.4
master Ready 12m v1.6.4
It just added the master's new host name as an additional node.
How could I get the node IPs and the container IPs (running on manager and worker nodes) of a created service?
I'd like to inspect this to study the round-robin load balancing of the Docker Swarm engine and develop a new load balancing strategy.
To get the node IP address you can use the command below:
docker node inspect self --format '{{ .Status.Addr }}'
To get the virtual IP address of a service, you can inspect the service itself, for example:
docker service inspect --format '{{range .Endpoint.VirtualIPs}}{{.Addr}} {{end}}' service-id
To get the container IP address, use:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container-id
I am assuming you mean getting all this information in your terminal using the docker command; for programming language integration, check docker-py.
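If you also want to map a service to the concrete containers running on the current node, you can combine the swarm labels that Docker puts on task containers with the inspect command above (a sketch; replace <SERVICE-NAME> with your service name):
docker ps -q --filter "label=com.docker.swarm.service.name=<SERVICE-NAME>" \
  | xargs docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'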
Swarm nodes
To get managers and workers info you can use docker node command:
docker node ls
Gives you details on each node, for example:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
j74jxqb4wz38l2odl2seiiuzh * db-cluster-1 Ready Active Leader
As you can see it also includes a "manager status" field
Containers
docker service ps <SERVICE-NAME>
Gives you a list of the service's tasks (containers), their IDs and state, for example:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ypd65x4i06nn db-cluster.1 severalnines/mariadb:latest db-cluster-1 Running Running 2 hours ago
278rdv7m4015 db-cluster.2 severalnines/mariadb:latest db-cluster-1 Running Running 2 hours ago
z9zr6xgnyuob db-cluster.3 severalnines/mariadb:latest db-cluster-1 Running Running 2 hours ago
Then you can use inspect to get more detailed information on any object, for example:
docker inspect ypd65x4i06nn
Check the NetworksAttachments section to get the network details.
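For example, to pull just the task's addresses out of that section without reading the whole JSON (a Go template sketch, using the task ID from docker service ps above):
docker inspect ypd65x4i06nn --format '{{range .NetworksAttachments}}{{range .Addresses}}{{.}} {{end}}{{end}}'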
For getting the node IP address, the value is in different places depending on whether the node is a worker or a manager.
docker node inspect node1 --pretty
ID: 0lkd116rve1rwbvfuonrfzdko
Hostname: node1
Joined at: 2022-09-18 16:16:28.6670527 +0000 utc
Status:
State: Ready
Availability: Active
Address: 192.168.64.5
Manager Status:
Address: 192.168.64.5:2377
Raft Status: Reachable
Leader: No
...
Here node2 is a manager:
docker node inspect node2 --pretty
ID: u8tfyh5txt5qecgsi543pnimc
Hostname: node2
Joined at: 2022-09-19 09:05:57.91370814 +0000 utc
Status:
State: Ready
Availability: Active
Address: 0.0.0.0 <--------- CHECK HERE ---
Manager Status:
Address: 192.168.64.6:2377
Raft Status: Reachable
Leader: Yes
...
But you can make use of the Go template syntax:
> docker node inspect node2 \
--format 'worker addr:{{ .Status.Addr }} {{printf "\n"}}manager addr: {{ .ManagerStatus.Addr }}'
worker addr:0.0.0.0
manager addr: 192.168.64.6:2377
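Note that on a pure worker node there is no Manager Status section, so the .ManagerStatus part of that template will fail or print nothing; a guarded sketch:
docker node inspect node1 --format '{{if .ManagerStatus}}{{.ManagerStatus.Addr}}{{else}}{{.Status.Addr}}{{end}}'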