Sensu /checks API call appears empty - docker

I am running Sensu as a series of Docker containers (sensu-server, sensu-api, n sensu-clients, rabbitmq and redis). The clients register themselves successfully and run the checks the server requests, and check results are reported via handlers and show up under /clients, but API calls to /checks return nothing.
Server config:
{
  "rabbitmq": {
    "host": "rabbitmq"
  },
  "redis": {
    "host": "redis"
  },
  "api": {
    "host": "api",
    "port": 4567
  },
  "handlers": { ... },
  "checks": { ... }
}
API config:
{
  "rabbitmq": {
    "host": "rabbitmq"
  },
  "redis": {
    "host": "redis"
  },
  "api": {
    "host": "api",
    "port": 4567
  }
}
Client config:
{
  "client": {
    "name": "openshift-{{ .Env.AVAILABILITY_ZONE }}",
    "address": "{{ .Env.HOSTNAME }}",
    "subscriptions": [
      "{{ .Env.AVAILABILITY_ZONE }}",
      "any-client"
    ]
  },
  "rabbitmq": {
    "host": "rabbitmq"
  }
}

I solved this in a similar scenario: our configuration didn't give the api and server (which ran inside separate Docker containers) a copy of the check definitions.
Here's the GitHub issue that led me to it: https://github.com/sensu/uchiwa/issues/83#issuecomment-51917336
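For illustration, a minimal sketch of the idea, assuming the stock /etc/sensu/conf.d layout: the same check definition file (here a hypothetical checks.json with a made-up check_disk check) is mounted into the sensu-server and sensu-api containers as well as the clients, e.g. with -v $(pwd)/checks.json:/etc/sensu/conf.d/checks.json on both containers.

{
  "checks": {
    "check_disk": {
      "command": "check-disk.rb -w 80 -c 90",
      "subscribers": ["any-client"],
      "interval": 60
    }
  }
}

Once the definition is visible to the API container, GET /checks returns the defined checks instead of an empty list.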

Related

Prometheus metric returns no data

I have installed prometheus-es-exporter for querying Elasticsearch and have written some queries. E.g. one of the queries looks like:
[query_database_connection_exception]
QueryIntervalSecs = 300
QueryIndices = logs.*
QueryJson = {
    "size": 0,
    "query": {
        "query_string": {
            "query": "message: \"com.microsoft.sqlserver.jdbc.SQLServerException: \" AND #timestamp:(>=now-1h AND <now)"
        }
    },
    "aggs": {
        "application": {
            "terms": {
                "field": "kubernetes.labels.app.keyword"
            }
        }
    }
}
After this configuration, ES-Exporter exposes the metric database_connection_exception_application_doc_count, but I face the issue that sometimes I get an error message in Prometheus.
This happens not only for this query but for other queries as well. My understanding and expectation is that if my query does not find the string com.microsoft.sqlserver.jdbc.SQLServerException within the last 1h, it should return value=0 in Prometheus, but for some reason it returns no data. How should I understand this?
ES-Exporter is running smoothly, the health checks of ES-Exporter and Elasticsearch show no errors, and all Elasticsearch nodes are in state green.
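A likely explanation (an assumption, not something stated above): when the query matches no documents, the terms aggregation returns no buckets, so the exporter has no per-application series to expose at all, rather than a series with value 0, and Prometheus therefore shows no data. Assuming the metric name above, a common PromQL-side fallback is an explicit "or vector(0)":

sum(database_connection_exception_application_doc_count) or vector(0)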

Firebase Storage Emulator rules not working

My app is working just fine but when I use the Firebase emulators, I have a problem with the Storage rules not being obeyed.
I have downloaded the storage.rules file and it is in the same directory as the firebase.json file. The Emulator Suite launches just fine and I can see that the Storage emulator is working.
However, when I try to upload an image (as I do in the live app) I get an error.
Error while uploading file: Error Domain=FIRStorageErrorDomain Code=-13021 "User does not have permission to access gs://my-stuff-7796d.appspot.com/Profiles/0ye7psTQA4xR6DfjZRXjjtCWKyPw.jpg." UserInfo={object=Profiles/0ye7psTQA4xR6DfjZRXjjtCWKyPw.jpg, ResponseBody={"error":{"code":403,"message":"Permission denied. No WRITE permission."}}, bucket=my-stuff-7796d.appspot.com, data={length = 74, bytes = 0x7b226572 726f7222 3a7b2263 6f646522 ... 73696f6e 2e227d7d }, data_content_type=application/json; charset=utf-8, NSLocalizedDescription=User does not have permission to access gs://my-stuff-7796d.appspot.com/Profiles/0ye7psTQA4xR6DfjZRXjjtCWKyPw.jpg., ResponseErrorDomain=com.google.HTTPStatus, ResponseErrorCode=403}
The storage.rules are:
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
Again, running against the live Firebase works just fine and the rules are obeyed.
Here is my firebase.json file
"firestore": {
"rules": "firestore.rules",
"indexes": "firestore.indexes.json"
},
"storage": {
"rules": "storage.rules"
},
"emulators": {
"auth": {
"port": 9099
},
"firestore": {
"port": 8080
},
"storage": {
"port": 9199
},
"ui": {
"enabled": true
}
}
}
When I launch my app, this is the code I run after calling FirebaseApp.configure:
// Point Auth, Storage, and Firestore at the local emulators
Auth.auth().useEmulator(withHost: "localhost", port: 9099)
Storage.storage().useEmulator(withHost: "localhost", port: 9199)

let settings = Firestore.firestore().settings
settings.host = "localhost:8080"
settings.isPersistenceEnabled = false
settings.isSSLEnabled = false
Firestore.firestore().settings = settings
What am I missing, or is this a bug?
I also had this issue. It seems to have been resolved for me in version 11.8.0. As a temporary workaround, I resorted to allowing all reads/writes so that I didn't have to use my production environment and pay for usage. Not an ideal solution, but it unblocked me.
But for those who might be having similar issues, try updating to the latest firebase-tools:
npm install -g firebase-tools
Be sure to address any issues with:
npm audit fix
Or make the following change to the storage.rules file:
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write;
    }
  }
}
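For completeness, after updating you can confirm the installed CLI version and relaunch the emulator suite, which reads storage.rules (via the "storage" entry in firebase.json) on startup; both are standard firebase-tools commands:

firebase --version
firebase emulators:start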

Error using EFS in ECS, returns unknown filesystem type 'efs'

I'm using a Docker image for Jenkins (jenkins/jenkins:2.277.1-lts-alpine) on AWS ECS, and I want to persist the data using AWS EFS.
I created the EFS filesystem and got its ID (fs-7dcef848).
My terraform code looks like:
resource "aws_ecs_service" "jenkinsService" {
cluster = var.ECS_cluster
name = var.jenkins_name
task_definition = aws_ecs_task_definition.jenkinsService.arn
deployment_maximum_percent = "200"
deployment_minimum_healthy_percent = 50
desired_count = var.service_desired_count
tags = {
"ManagedBy" : "Terraform"
}
}
resource "aws_ecs_task_definition" "jenkinsService" {
family = "${var.jenkins_name}-task"
container_definitions = file("task-definitions/service.json")
volume {
name = var.EFS_name
efs_volume_configuration {
file_system_id = "fs-7dcef848"
}
}
tags = {
"ManagedBy" : "Terraform"
}
}
and the service.json
[
  {
    "name": "DevOps-jenkins",
    "image": "jenkins/jenkins:2.284-alpine",
    "cpu": 0,
    "memoryReservation": 1024,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 80
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "DevOps-Jenkins",
        "containerPath": "/var/jenkins_home"
      }
    ]
  }
]
The terraform apply works OK, but the task cannot start returning:
Stopped reason Error response from daemon: create ecs-DevOps-jenkins-task-33-DevOps-Jekins-bcb381cd9dd0f7ae2700: VolumeDriver.Create: mounting volume failed: mount: unknown filesystem type 'efs'
Does anyone know what's happening?
Is there another way to persist the data?
Thanks in advance.
Solved: the first attempt was to install the "amazon-efs-utils" package using a remote-exec provisioner.
But following the indications provided by @Oguzhan Aygun, I did it in the USER DATA section instead and it worked!
Thanks!
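For reference, a rough sketch of that user-data approach for the EC2 launch type; the launch template resource, its name, and var.ecs_ami_id are placeholders, and only the yum install line is the actual fix described above:

resource "aws_launch_template" "ecs_jenkins" {
  name_prefix   = "ecs-jenkins-"
  image_id      = var.ecs_ami_id   # ECS-optimized Amazon Linux 2 AMI (placeholder variable)
  instance_type = "t3.medium"

  # Install the EFS mount helper so the instance can mount 'efs' volumes,
  # then register the instance with the ECS cluster.
  user_data = base64encode(<<-EOF
    #!/bin/bash
    yum install -y amazon-efs-utils
    echo "ECS_CLUSTER=${var.ECS_cluster}" >> /etc/ecs/ecs.config
  EOF
  )
}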

Xamarin couchbase server and sync gateway not syncing

In Xamarin, I am trying to sync the local database with sync gateway, but it is not syncing.
I am trying it with sample https://github.com/couchbaselabs/mini-hacks/tree/master/kitchen-sync
The local database is getting updated, but it is not syncing with the Sync Gateway.
I have provided the sync URL: http://localhost:4984/sync_gateway/
I also updated the config file:
{
  "interface": ":4984",
  "adminInterface": ":4985",
  "log": ["CRUD+", "REST+", "Changes+", "Attach+"],
  "databases": {
    "sync_gateway": {
      "server": "walrus:data",
      "bucket": "sync_gateway",
      "sync": `function(doc) {
        channel(doc.channels);
      }`,
      "users": {
        "GUEST": {
          "disabled": true,
          "admin_channels": [ "*" ]
        }
      }
    }
  }
}
On checking the data bucket, the item count is not increasing. Can anyone help me out?
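For context, the replication wiring that the kitchen-sync sample relies on looks roughly like this in Couchbase Lite .NET 1.x; this is a sketch of the pattern, the database name is assumed and the URL is the one from above:

// Sketch (Couchbase Lite .NET 1.x): continuous push/pull replication to Sync Gateway.
// Requires: using System; using Couchbase.Lite;
var db = Manager.SharedInstance.GetDatabase("kitchen-sync");   // database name assumed
var syncUrl = new Uri("http://localhost:4984/sync_gateway/");

var pull = db.CreatePullReplication(syncUrl);
var push = db.CreatePushReplication(syncUrl);
pull.Continuous = true;
push.Continuous = true;
pull.Start();
push.Start();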

Grunt Livereload + Grunt Connect Proxy

I am using Rails for my API and AngularJS on the front end, and I am having some issues getting livereload / grunt-connect-proxy to work properly.
Here is the snippet from my gruntfile:
connect: {
  options: {
    port: 9000,
    // Change this to '0.0.0.0' to access the server from outside.
    hostname: 'localhost',
    livereload: 35729
  },
  proxies: [
    {
      context: '/api',
      host: 'localhost',
      port: 3000
    }
  ],
  livereload: {
    options: {
      open: true,
      base: [
        '.tmp',
        '<%= yeoman.app %>'
      ],
      middleware: function (connect, options) {
        var middlewares = [];
        var directory = options.directory || options.base[options.base.length - 1];
        // enable Angular's HTML5 mode
        middlewares.push(modRewrite(['!\\.html|\\.js|\\.svg|\\.css|\\.png$ /index.html [L]']));
        if (!Array.isArray(options.base)) {
          options.base = [options.base];
        }
        options.base.forEach(function (base) {
          // Serve static files.
          middlewares.push(connect.static(base));
        });
        // Make directory browse-able.
        middlewares.push(connect.directory(directory));
        return middlewares;
      }
    }
  },
  test: {
    options: {
      port: 9001,
      base: [
        '.tmp',
        'test',
        '<%= yeoman.app %>'
      ]
    }
  },
  dist: {
    options: {
      base: '<%= yeoman.dist %>'
    }
  }
}
If I run 'grunt build', everything works perfectly off localhost:3000.
However, if I run 'grunt serve', it opens a window at 127.0.0.1:9000 and I get 404s on all my API calls.
Also, under serve it mangles the background images from a CSS file; I get this warning:
Resource interpreted as Image but transferred with MIME type text/html: "http://127.0.0.1:9000/images/RBP_BG.jpg"
I haven't done this before - so chances are I am doing it all wrong.
There is quite a lot of code in your connect.livereload.middleware configuration.
Is it all necessary?
Take a look at this commit - chore(yeoman-gruntfile-update): I configured grunt-connect-proxy in some of my projects.
The backend is Django.
Ports: frontend 9000, backend 8000.
generator-angular was at v0.6.0 when the project was generated.
My connect.livereload.middleware configuration was based on: https://stackoverflow.com/a/19403176/1432478
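For reference, the usual grunt-connect-proxy wiring pushes the proxy middleware ahead of the static file handlers; here is a minimal sketch against the configuration above (the '/api' context and port 3000 come from the proxies block, and the 'app' path assumes the generator default for '<%= yeoman.app %>'):

// near the top of Gruntfile.js
var proxySnippet = require('grunt-connect-proxy/lib/utils').proxyRequest;

// replacement for connect.livereload.options.middleware:
// the proxy middleware runs first, so /api/** is forwarded to localhost:3000
// before any static file handling happens
middleware: function (connect, options) {
  var middlewares = [];
  middlewares.push(proxySnippet);
  middlewares.push(connect.static('.tmp'));
  middlewares.push(connect.static('app'));
  return middlewares;
}

Note that the proxies block is only picked up once the configureProxies task has run, which the answers below also point out.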
This is an old post, but please make sure that you actually initialize the proxy in the grunt serve task by calling configureProxies before livereload.
grunt.task.run([
  'clean:server',
  'bower-install',
  'concurrent:server',
  'autoprefixer',
  'configureProxies',
  'connect:livereload',
  'watch'
]);
Should work fine afterwards.
I had a similar problem to yours, but I don't use Yeoman.
My solution was to add the task 'configureProxies'.
These are my tasks:
grunt.registerTask('serve', ['connect:livereload', 'configureProxies',
  'open:server', 'watch']);
From my testing, the order of 'connect:livereload' and 'configureProxies' does not affect the result.
github grunt-connect-proxy
