I installed Spring Cloud Data Flow on Kubernetes (running on Docker Desktop).
Configured Grafana and Prometheus as per the install guide https://dataflow.spring.io/docs/installation/kubernetes/kubectl/
Created and deployed a simple stream with time (source) and log (sink) from the starter apps.
On selecting the stream's dashboard icon in the UI, it navigates to the Grafana dashboard, but I don't see the stream and its related metrics.
Am I missing any configuration here?
I don't see any activity in the Prometheus proxy log since it started.
scdf-server ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: scdf-server
  namespace: default
  selfLink: /api/v1/namespaces/default/configmaps/scdf-server
  uid: ce23d5a3-1cb9-4580-ba1a-bf51b09850dc
  resourceVersion: '53607'
  creationTimestamp: '2020-04-29T01:28:36Z'
  labels:
    app: scdf-server
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          applicationProperties:
            stream:
              management:
                metrics:
                  export:
                    prometheus:
                      enabled: true
                      rsocket:
                        enabled: true
                        host: prometheus-proxy
                        port: 7001
            task:
              management:
                metrics:
                  export:
                    prometheus:
                      enabled: true
                      rsocket:
                        enabled: true
                        host: prometheus-proxy
                        port: 7001
          grafana-info:
            url: 'http://localhost:3000'
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    limits:
                      memory: 1024Mi
      datasource:
        url: jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/mysql
        username: root
        password: ${mysql-root-password}
        driverClassName: org.mariadb.jdbc.Driver
        testOnBorrow: true
        validationQuery: "SELECT 1"
[The following fixed the issue]
I updated the stream definition to set the property below under Application Properties, and it started working fine.
management.metrics.export.prometheus.rsocket.host=prometheus-proxy
The metrics collection flow diagram from https://github.com/spring-cloud/spring-cloud-dataflow-samples/tree/master/monitoring-samples helped to spot the issue quickly. Thanks.
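For reference, a minimal sketch of applying the same property when creating and deploying the stream from the SCDF shell (the stream name time-log and the app.* wildcard are illustrative assumptions, not taken from the setup above):
stream create --name time-log --definition "time | log"
stream deploy --name time-log --properties "app.*.management.metrics.export.prometheus.rsocket.host=prometheus-proxy"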
Related
I'm monitoring Java apps with OpenTelemetry and exporting data to Elastic APM. This integration works well; however, we are missing some critical information about metrics.
We want to collect host system and JVM metrics.
The OpenTelemetry Collector is running as a sidecar in Kubernetes, and its configuration is below:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: app-sidecar
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          http:
          grpc:
    exporters:
      logging:
      otlp:
        endpoint: http://endpoint:8200
        headers:
          Authorization: Bearer token
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging, otlp]
        metrics:
          receivers: [otlp]
          exporters: [logging, otlp]
        logs:
          receivers: [otlp]
          exporters: [logging, otlp]
Start your Java app with the Java agent opentelemetry-javaagent.jar (OTel Java auto-instrumentation). Configure it to export metrics (it provides JVM metrics by default), for example OTEL_METRICS_EXPORTER=otlp and OTEL_EXPORTER_OTLP_ENDPOINT=<your sidecar OTel collector OTLP gRPC endpoint> - check the documentation for the exact syntax.
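For example, a minimal sketch of such a launch, assuming the sidecar collector listens on the default OTLP gRPC port 4317 on localhost (the agent path and application jar name are placeholders):
OTEL_METRICS_EXPORTER=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
java -javaagent:/otel/opentelemetry-javaagent.jar -jar my-app.jar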
I'm trying to configure my API, which is running on a Kubernetes cluster, to talk to a database that is hosted on Docker. I wasn't able to find much on connecting to an external service locally; every example refers to a non-local IP, and when I try to replicate them I run into this issue:
The Endpoints "postgres" is invalid: subset[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.1/8, ::1/128
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres
subsets:
  - addresses:
      - ip: 127.0.0.1
    ports:
      - port: 5432
If there is a reason why I can't use a locally hosted database, it would be great if you could explain! Thank you in advance.
So add another Service, in a file named, say, database-service.yml, with the following:
kind: Service
apiVersion: v1
metadata:
  name: postgressql
spec:
  type: ExternalName
  externalName: <database host name>
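As an aside, when the database runs on the Docker host of a Docker Desktop cluster, host.docker.internal usually resolves to that host, so a sketch under that assumption might be:
kind: Service
apiVersion: v1
metadata:
  name: postgressql
spec:
  type: ExternalName
  externalName: host.docker.internal  # Docker Desktop's DNS alias for the host machine (assumption about your setup)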
Also specify the database name, password, user, port, and host in a ConfigMap as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-env
  namespace: mynamespace
data:
  ENVIRONMENT: local
  DB_NAME: <dbname>
  DB_PASSWORD: <password>
  DB_USER: <user>
  DB_PORT: "port number"
  DB_HOST: <host>
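The backend Deployment can then pick these values up as environment variables, for example with envFrom (the container name and image below are placeholders):
spec:
  containers:
    - name: backend
      image: <your-backend-image>
      envFrom:
        - configMapRef:
            name: backend-env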
I am trying to set up Solr 8.0 on GKE. I can successfully run it on my local instance, but when I configure it on GKE, it keeps giving a 502 error.
Here's my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: solr
  namespace: api
  labels:
    app: solr
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: solr
  template:
    metadata:
      labels:
        app: solr
    spec:
      containers:
        - name: app
          image: solr:8
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8983
          resources:
            limits:
              cpu: 250m
              ephemeral-storage: 1Gi
              memory: 512Mi
            requests:
              cpu: 250m
              ephemeral-storage: 1Gi
              memory: 512Mi
          livenessProbe:
            initialDelaySeconds: 20
            httpGet:
              path: /
              port: http
Service:
apiVersion: v1
kind: Service
metadata:
  name: solr
  namespace: api
  labels:
    app: solr
spec:
  type: ClusterIP
  ports:
    - name: solr
      port: 8080
      targetPort: 8983
  selector:
    app: solr
and ingress:
- host: solr.*****.***
  http:
    paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: solr
            port:
              name: http
Things I have tried so far:
I have tried running the service on different ports and default ports.
I can exec into the pod and access Solr through the command line; it works fine.
Using port-forwarding (kubectl port-forward --namespace api my-pod-name 8080:8983) I can access the Solr admin dashboard through the temporary URL that Google provides. But when I use the subdomain created for Solr, it keeps giving me a 502 Server Error:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
The logs show the error failed_to_pick_backend when I open the subdomain that I added.
I have a set of applications, such as Prometheus, Grafana, and others, that I would like to deploy on several EKS clusters.
I have this set up inside one Git repo with an app of apps that each cluster can reference.
My issue is making small changes to the values for these deployments; let's say for the Grafana deployment I want a unique URL per cluster:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - PrunePropagationPolicy=foreground
      - CreateNamespace=true
    retry:
      limit: 2
      backoff:
        duration: 5s
        maxDuration: 3m0s
        factor: 2
  destination:
    server: "https://kubernetes.default.svc"
    namespace:
  source:
    repoURL:
    targetRevision:
    chart:
    helm:
      releaseName: grafana
      values: |
        ...
        ...
        hostname/url: {cluster_name}.grafana.... <-----
        ...
        ...
So far the only way I see to do this is by having multiple values files. Is there a way to make it read values from ConfigMaps, or maybe pass a variable down through the app of apps to make this work?
Any help is appreciated.
I'm afraid there is not (yet) a good generic solution for templating values.yaml for Helm charts in Argo CD.
Still, for your exact case, Argo CD already has all you need.
Your "I have a set of applications" should naturally bring you to the ApplicationSet controller and its features.
For iteration over a set of clusters, I'd recommend looking at ApplicationSet generators, in particular the Cluster generator. Then your example would look something like:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: 'grafana'
  namespace: 'argocd'
  finalizers:
    - 'resources-finalizer.argocd.argoproj.io'
spec:
  generators:
    - clusters: # select only "remote" clusters
        selector:
          matchLabels:
            'argocd.argoproj.io/secret-type': 'cluster'
  template:
    metadata:
      name: 'grafana-{{ name }}'
    spec:
      project: 'default'
      destination:
        server: '{{ server }}'
        namespace: 'grafana'
      source:
        path:
        repoURL:
        targetRevision:
        helm:
          releaseName: grafana
          values: |
            ...
            ...
            hostname/url: {{ name }}.grafana.... <-----
            ...
            ...
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - PrunePropagationPolicy=foreground
          - CreateNamespace=true
        retry:
          limit: 2
          backoff:
            duration: 5s
            maxDuration: 3m0s
            factor: 2
Also check the full Application definition for examples of how to override particular parameters through:
...
helm:
  # Extra parameters to set (same as setting through values.yaml, but these take precedence)
  parameters:
    - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
      value: mydomain.example.com
    - name: "ingress.annotations.kubernetes\\.io/tls-acme"
      value: "true"
      forceString: true # ensures that value is treated as a string
  # Use the contents of files as parameters (uses Helm's --set-file)
  fileParameters:
    - name: config
      path: files/config.json
As well as combining inline values with valueFiles: for common options, as sketched below.
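A minimal sketch of that combination inside the template's source block (the values-common.yaml file name and the hostname key are hypothetical):
helm:
  valueFiles:
    - values-common.yaml                          # shared defaults kept next to the chart
  values: |
    hostname: '{{ name }}.grafana.example.com'    # per-cluster override from the Cluster generator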
I'm trying to do something that should be pretty simple: start up an Express pod and fetch localhost:5000/, which should respond with Hello World!.
I've installed ingress-nginx for Docker for Mac and minikube:
Mandatory: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Docker for Mac: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
minikube: minikube addons enable ingress
I run skaffold dev --tail
It prints out Example app listening on port 5000, so it is apparently running.
Navigating to localhost and localhost:5000 gives a "Could not get any response" error.
I also tried the minikube ip, which is 192.168.99.100, and got the same results.
Not quite sure what I am doing wrong here. Code and configs are below. Suggestions?
index.js
// Import dependencies
const express = require('express');
// Set the ExpressJS application
const app = express();
// Set the listening port
// Web front-end is running on port 3000
const port = 5000;
// Set root route
app.get('/', (req, res) => res.send('Hello World!'));
// Listen on the port
app.listen(port, () => console.log(`Example app listening on port ${port}`));
skaffold.yaml
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: sockpuppet/server
      context: server
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: '**/*.js'
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress-service.yaml
      - k8s/server-deployment.yaml
      - k8s/server-cluster-ip-service.yaml
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: sockpuppet/server
          ports:
            - containerPort: 5000
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
Dockerfile.dev
FROM node:12.10-alpine
EXPOSE 5000
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Output from describe
$ kubectl describe ingress ingress-service
Name: ingress-service
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
localhost
/ server-cluster-ip-service:5000 (172.17.0.7:5000,172.17.0.8:5000,172.17.0.9:5000)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-service","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"server-cluster-ip-service","servicePort":5000},"path":"/"}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 16h nginx-ingress-controller Ingress default/ingress-service
Normal CREATE 21s nginx-ingress-controller Ingress default/ingress-service
Output from kubectl get po -l component=server
$ kubectl get po -l component=server
NAME READY STATUS RESTARTS AGE
server-deployment-cf6dd5744-2rnh9 1/1 Running 0 11s
server-deployment-cf6dd5744-j9qvn 1/1 Running 0 11s
server-deployment-cf6dd5744-nz4nj 1/1 Running 0 11s
Output from kubectl describe pods server-deployment. I noticed that Host Port is 0/TCP; possibly the issue?
Name: server-deployment-6b78885779-zttns
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 08 Oct 2019 19:54:03 -0700
Labels: app.kubernetes.io/managed-by=skaffold-v0.39.0
component=server
pod-template-hash=6b78885779
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.39
skaffold.dev/run-id=c545df44-a37d-4746-822d-392f42817108
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/server-deployment-6b78885779
Containers:
server:
Container ID: docker://2d0aba8f5f9c51a81f01acc767e863b7321658f0a3d0839745adb99eb0e3907a
Image: sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Image ID: docker://sha256:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 08 Oct 2019 19:54:05 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qz5kr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-qz5kr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qz5kr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/server-deployment-6b78885779-zttns to minikube
Normal Pulled 7s kubelet, minikube Container image "sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7" already present on machine
Normal Created 7s kubelet, minikube Created container server
Normal Started 6s kubelet, minikube Started container server
OK, got this sorted out now.
It boils down to the kind of Service being used: ClusterIP.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
If I want to connect to a Pod or Deployment directly from outside the cluster (with something like Postman, pgAdmin, etc.) and I want to do it using a Service, I should be using NodePort:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in my case, if I want to continue using a Service, I'd change my Service manifest to:
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 31515
Make sure to manually set nodePort: <port>, otherwise it is assigned somewhat randomly and is a pain to use.
Then I'd get the minikube IP with minikube ip and connect to the Pod with 192.168.99.100:31515.
At that point, everything worked as expected.
But that means having separate sets of development (NodePort) and production (ClusterIP) manifests, which is probably totally fine. Still, I want my manifests to stay as close to the production version (i.e. ClusterIP) as possible.
There are a couple ways to get around this:
Using something like Kustomize, where you set up a base and then have overlays for each environment that change only the relevant info, avoiding manifests that are mostly duplicative (see the sketch after this list).
Using kubectl port-forward. I think this is the route I am going to take. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:
kubectl port-forward services/postgres-cluster-ip-service 5432:5432
Or for the back-end and Postman:
kubectl port-forward services/server-cluster-ip-service 5000:5000
I'm playing with doing this through the ingress-service.yaml using nginx-ingress, but don't have that working quite yet. Will update when I do. But for me, port-forward seems the way to go since I can just have one set of production manifests that I don't have to alter.
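As a sketch of the Kustomize option mentioned above, a development overlay could patch only the Service type while reusing the production base (the directory layout and file names are hypothetical):
# overlays/dev/kustomization.yaml
resources:
  - ../../base
patches:
  - path: service-nodeport-patch.yaml
# overlays/dev/service-nodeport-patch.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
It would then be applied with something like kubectl apply -k overlays/dev.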
Skaffold Port-Forwarding
This is even better for my needs. Append this to the bottom of the skaffold.yaml; it is basically the same thing as kubectl port-forward without tying up a terminal or two:
portForward:
  - resourceType: service
    resourceName: server-cluster-ip-service
    port: 5000
    localPort: 5000
  - resourceType: service
    resourceName: postgres-cluster-ip-service
    port: 5432
    localPort: 5432
Then run skaffold dev --port-forward.