One of the targets in static_configs in my prometheus.yml config file is secured with basic authentication. As a result, that target always shows a "Connection refused" error on the Prometheus Targets page.
I have researched how to set up Prometheus to provide the credentials when scraping that particular target, but couldn't find a solution. What I did find in the docs is how to set it up at the scrape_config level. That won't work for me, because I have other targets in the same job that are not protected with basic_auth.
Please help me out with this challenge.
Here is the relevant part of my .yml config:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:5000']
        labels:
          service: 'Auth'
      - targets: ['localhost:5090']
        labels:
          service: 'Approval'
      - targets: ['localhost:6211']
        labels:
          service: 'Credit Assessment'
      - targets: ['localhost:6090']
        labels:
          service: 'Sweep'
      - targets: ['localhost:6500']
        labels:
I would like to add more details to @PatientZro's answer.
In my case, I also need to create another job (as specified), but basic_auth needs to be at the same level of indentation as job_name. See the example here.
As well, my basic_auth targets require a metrics_path, as their metrics are not exposed at the root of my domain.
Here is an example with an API endpoint specified:
  - job_name: 'myapp_health_checks'
    scrape_interval: 5m
    scrape_timeout: 30s
    static_configs:
      - targets: ['mywebsite.org']
    metrics_path: "/api/health"
    basic_auth:
      username: 'email@username.me'
      password: 'cfgqvzjbhnwcomplicatedpasswordwjnqmd'
Best,
Create another job for the one that needs auth.
So just under what you've posted, add another:
  - job_name: 'prometheus-basic_auth'
    scrape_interval: 5s
    scrape_timeout: 5s
    static_configs:
      - targets: ['localhost:5000']
        labels:
          service: 'Auth'
    basic_auth:
      username: foo
      password: bar
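Putting the two answers together, here is a minimal sketch (reusing the placeholder targets and credentials from above) of a single scrape_configs block in which the open targets stay in one job and the protected target gets its own job with basic_auth:
scrape_configs:
  # targets that do not need credentials
  - job_name: 'prometheus'
    scrape_interval: 5s
    scrape_timeout: 5s
    static_configs:
      - targets: ['localhost:5090']
        labels:
          service: 'Approval'
      - targets: ['localhost:6211']
        labels:
          service: 'Credit Assessment'
  # the target that is protected by basic authentication
  - job_name: 'prometheus-basic_auth'
    scrape_interval: 5s
    scrape_timeout: 5s
    basic_auth:
      username: foo
      password: bar
    static_configs:
      - targets: ['localhost:5000']
        labels:
          service: 'Auth'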
I'm doing research on how to run a Spring Batch job on Red Hat OpenShift as a Kubernetes scheduled job.
Steps I have done:
1) Created a sample Spring Batch app that reads a .csv file, does some simple processing and puts some data into an in-memory H2 DB. The job launcher is exposed as a REST endpoint (/load). The source code can be found here. Please see the README file for the endpoint info.
2) Created the Docker image and pushed it to Docker Hub.
3) Deployed that image to my OpenShift Online cluster as an app.
What I want to do is:
Run a Kubernetes CronJob from OpenShift that calls the /load REST endpoint, which launches the Spring Batch job, periodically.
Can someone please guide me on how I can achieve this?
Thank you
Samme
The easiest way would be to curl your /load REST endpoint.
Here's a way to do that.
The Pod definition that I used as a replacement for your application (for testing purposes):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: mendhak/http-https-echo
I used this image because it echoes various HTTP request properties back to the client.
Create a Service for the Pod:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp       # the selector matching the Pod's label
  ports:
  - protocol: TCP
    port: 80         # port that the Service is available on
    targetPort: 80   # port that the app listens on
Create a CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: curljob
spec:
  jobTemplate:
    metadata:
      name: curljob
    spec:
      template:
        metadata: {}
        spec:
          containers:
          - command:
            - curl
            - http://myapp-service:80/load
            image: curlimages/curl
            imagePullPolicy: Always
            name: curljobt
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
Alternatively, you can create it with a single command:
kubectl create cronjob --image curlimages/curl curljob -oyaml --schedule "*/1 * * * *" -- curl http://myapp-service:80/load
Here "*/1 * * * *" specifies how often this CronJob runs; I've set it up to run every minute.
You can read more about how to set up CronJobs here and here.
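Once the CronJob has fired at least once, you should see the Jobs and Pods it creates with kubectl get jobs and kubectl get pods, and you can read a pod's output with kubectl logs <pod-name>.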
Here is the kubectl logs output from one of the job's pods:
{
  "path": "/load",
  "headers": {
    "host": "myapp-service",
    "user-agent": "curl/7.68.0-DEV",
    "accept": "*/*"
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "myapp-service",
  "ip": "::ffff:192.168.197.19",
  "ips": [],
  "protocol": "http",
  "query": {},
  "subdomains": [],
  "xhr": false,
  "os": {
    "hostname": "myapp-pod"
As you can see, the application receives a GET request with path /load.
Let me know if that helps.
I've just finished Google's tutorial on how to implement continuous integration for a Go app on Kubernetes using Jenkins, and it works great. I'm now trying to do the same thing with a Node app that is served on port 3001, but I keep getting this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"gceme-frontend\" not found",
  "reason": "NotFound",
  "details": {
    "name": "gceme-frontend",
    "kind": "services"
  },
  "code": 404
}
The only thing I've changed on the routing side is having the load balancer point to 3001 instead of 80, since that's where the Node app is listening. I have a very strong feeling that the error is somewhere in the .yaml files.
My node server (relevant part):
const PORT = process.env.PORT || 3001;
frontend-dev.yaml: (this is applied to the dev environment)
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gceme-frontend-dev
spec:
  replicas:
  template:
    metadata:
      name: frontend
      labels:
        app: gceme
        role: frontend
        env: dev
    spec:
      containers:
      - name: frontend
        image: gcr.io/cloud-solutions-images/gceme:1.0.0
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        imagePullPolicy: Always
        ports:
        - containerPort: 3001
          protocol: TCP
services/frontend.yaml:
kind: Service
apiVersion: v1
metadata:
  name: gceme-frontend
spec:
  type: LoadBalancer
  ports:
  - name: http
    # THIS PORT ACTUALLY GOES IN THE URL, i.e. gceme-frontend:****
    # "no endpoints available for service" does not mean this port is wrong; it means the targetPort is not reachable or does not exist
    port: 80
    # matches the containerPort in frontend-*.yaml
    targetPort: 3001
    protocol: TCP
  selector:
    app: gceme
    role: frontend
Jenkinsfile (for dev branches, which is what I'm trying to get working)
sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
// Don't use public load balancing for development branches
sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
echo 'To access your environment run `kubectl proxy`'
echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
Are you creating Service or Ingress resources to expose your application to the outside world?
See tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
which have working examples you can copy and modify.
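If you go the Ingress route from the second tutorial, a minimal sketch could look like the following (this assumes the gceme-frontend Service from your frontend.yaml exists in the same namespace; the Ingress name is just a placeholder, and the apiVersion depends on your cluster version):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gceme-frontend-ingress
spec:
  backend:
    serviceName: gceme-frontend
    servicePort: 80
Traffic then reaches the Service on port 80, which forwards to targetPort 3001 on your pods.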
I have a Docker Swarm with a Prometheus container and 1-n containers for a specific microservice.
The microservice containers can be reached via a URL, and I suppose requests to this URL are load-balanced (of course...).
Currently I have spawned two microservice containers. Querying the metrics now seems to toggle between the two containers; for example, the total request count alternates: 10, 13, 10, 13, 10, 13, ...
This is my Prometheus configuration. What do I have to do? I do not want to adjust the Prometheus config each time I kill or start a microservice container.
scrape_configs:
  - job_name: 'myjobname'
    metrics_path: '/prometheus'
    scrape_interval: 15s
    static_configs:
      - targets: ['the-service-url:8080']
        labels:
          application: myapplication
UPDATE 1
I changed my configuration as follows, which seems to work. This configuration uses a DNS lookup inside the Docker Swarm and finds all instances running the specified service.
scrape_configs:
  - job_name: 'myjobname'
    metrics_path: '/prometheus'
    scrape_interval: 15s
    dns_sd_configs:
      - names: ['tasks.myServiceName']
        type: A
        port: 8080
The question here is: Does this configuration recognize that a Docker instance is stopped and another one is started?
UPDATE 2
There is a parameter for what I am asking for:
scrape_configs:
  - job_name: 'myjobname'
    metrics_path: '/prometheus'
    scrape_interval: 15s
    dns_sd_configs:
      - names: ['tasks.myServiceName']
        type: A
        port: 8080
        # The time after which the provided names are refreshed
        [ refresh_interval: <duration> | default = 30s ]
That should do the trick: tasks.myServiceName is the Swarm DNS name that resolves to the IPs of all tasks currently running for that service, and Prometheus re-resolves it every refresh_interval, so containers that are stopped or newly started are picked up automatically.
So the answer is very simple: there are multiple documented ways to scrape.
I am using the DNS-lookup way:
scrape_configs:
  - job_name: 'myjobname'
    metrics_path: '/prometheus'
    scrape_interval: 15s
    dns_sd_configs:
      - names: ['tasks.myServiceName']
        type: A
        port: 8080
        refresh_interval: 15s
I would like to create a Kubernetes pod that contains two containers with different images, so I can start both containers together.
Currently I have tried the following configuration:
{
  "id": "podId",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "podId",
      "containers": [{
        "name": "type1",
        "image": "local/image"
      },
      {
        "name": "type2",
        "image": "local/secondary"
      }]
    }
  },
  "labels": {
    "name": "imageTest"
  }
}
However when I execute kubecfg -c app.json create /pods I get the following error:
F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B
ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi
partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp":
null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec
:0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa
nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap
i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n
il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\
", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil
), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500}
How can I modify the configuration accordingly?
I'm running Kubernetes on a Vagrant VM (yungsang/coreos).
The error in question here is "failed to find fit". This generally happens when you have a port conflict (e.g. trying to use the same hostPort too many times) or when you don't have any worker nodes/minions.
I'd suggest you use the Vagrant setup that is in the Kubernetes git repo (see http://kubernetes.io), as we have been trying to make sure that it keeps working while Kubernetes is under very active development. If you want to make it work with the CoreOS single-machine setup, I suggest you hop on IRC (#google-containers on freenode) and try to get in touch with Kelsey Hightower.
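To illustrate the hostPort point in current YAML syntax (a hypothetical pod, not taken from the question's v1beta1 manifest): a container that asks for a hostPort can be scheduled at most once per node on that port, so requesting it more often than you have nodes, or having no worker nodes at all, leaves the scheduler unable to find a fit.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-example   # hypothetical name, for illustration only
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080   # only one pod per node can claim host port 8080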
Your pod spec file looks invalid.
According to http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers, a valid multi-container pod spec should look like this:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
Latest doc at http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: ng
    image: nginx
    imagePullPolicy: IfNotPresent
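If you save this as pod.yaml, you can create it with kubectl apply -f pod.yaml and check that both containers come up with kubectl get pod test, which should eventually show both containers ready (2/2).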