How to enable query logging in KairosDB? - kairosdb

I want to see which queries Kairosdb receives from my application and from others. How can I enable query logging?

I don't know of a simple solution for this; as far as I can tell, query logging is not something you can configure in KairosDB. The easiest option is to put a proxy in front of KairosDB and log the requests there.
Otherwise you could write a KairosDB plugin that adds a request logger to the embedded Jetty instance.
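If you go the plugin route, the core of it might be nothing more than a servlet filter that logs each incoming request. This is only a sketch: how you register it with KairosDB's embedded Jetty (via the plugin mechanism) is left out, and the class and logger names are illustrative.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical filter that logs the method and URI of every request hitting the REST API.
    public class QueryLoggingFilter implements Filter {

        private static final Logger LOG = LoggerFactory.getLogger(QueryLoggingFilter.class);

        @Override public void init(FilterConfig filterConfig) { }
        @Override public void destroy() { }

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            if (request instanceof HttpServletRequest) {
                HttpServletRequest http = (HttpServletRequest) request;
                LOG.info("KairosDB request: {} {}", http.getMethod(), http.getRequestURI());
            }
            chain.doFilter(request, response);
        }
    }

Note that KairosDB queries arrive as POST bodies (e.g. against /api/v1/datapoints/query), so to log the actual query JSON you would also need to wrap the request and buffer its input stream, since the body can only be read once.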

Related

How can background tasks be executed from a library in an ASP.NET MVC 5 app

In my job we are building web apps that rely on a common Enterprise class. This class has a method that sends a request to our server whenever the app_start or app_end event fires, so we can monitor the status remotely. But we now need the web app to report its status at least once a day, a bit like telemetry. I don't know how to accomplish this; so far I have found some options, but each has limitations:
Use Hangfire. I don't like this since it requires setting up a database (or adding more tables) and installing a new NuGet package in each project, but it could be my last option.
Use a Windows Service that reads the databases. This could be less work, but it can't access the web app's web.config.
Use a JavaScript task that sends an AJAX request. This requires a browser to stay open, which is a big risk.
I'm looking for a server-side approach that lets me trigger an event or function at, say, 1 AM.
I would go with Hangfire.
It is dead easy to set up and very reliable.
You don't need to set up a database; you might want to check the memory storage option:
https://github.com/perrich/Hangfire.MemoryStorage
Also check:
What is the equivalent to CRON jobs in ASP.NET? - C#
You can use FluentScheduler instead of Hangfire (it is more lightweight).
Instead of a JavaScript task that sends an AJAX request, you can use a WebJob or an Azure Function.

Use case for @EnableZuulServer

I am wondering what use case would be served by @EnableZuulServer?
In my case I want to use the ZuulFilter framework for my micro-services, and I also want Spring's handler mappings on controllers to be invoked after requests pass through the Zuul filter chain. I do not want proxy forwarding.
Is that possible, and how? Can we use @EnableZuulServer mode for this scenario? I didn't find much documentation that would help me understand how @EnableZuulServer works.
Can someone explain and help?
From the Spring Cloud Netflix documentation:
Spring Cloud Netflix installs a number of filters based on which annotation was used to enable Zuul. @EnableZuulProxy is a superset of @EnableZuulServer. In other words, @EnableZuulProxy contains all filters installed by @EnableZuulServer. The additional filters in the "proxy" enable routing functionality. If you want a "blank" Zuul, you should use @EnableZuulServer.
Based on that, the answer to your question is: yes.
You can add the @EnableZuulServer annotation and you will not get proxy forwarding, but you will still be able to use the ZuulFilter framework.
That said, if you're just looking to filter requests and responses, you can use a standard servlet Filter along with a FilterRegistrationBean (see the relevant javadoc); a sketch follows below.
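Here is a minimal sketch of that plain servlet-filter route, assuming a Spring Boot app. The exact package of FilterRegistrationBean depends on your Boot version, and the filter itself is a made-up example:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import org.springframework.boot.web.servlet.FilterRegistrationBean;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AuditFilterConfig {

        // Register a plain servlet Filter for all requests; no Zuul involved.
        @Bean
        public FilterRegistrationBean auditFilterRegistration() {
            FilterRegistrationBean registration = new FilterRegistrationBean();
            registration.setFilter(new RequestAuditFilter());
            registration.addUrlPatterns("/*");
            registration.setOrder(1);
            return registration;
        }

        // Hypothetical filter: inspect or modify requests and responses here.
        static class RequestAuditFilter implements Filter {
            @Override public void init(FilterConfig filterConfig) { }
            @Override public void destroy() { }

            @Override
            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                if (request instanceof HttpServletRequest) {
                    HttpServletRequest http = (HttpServletRequest) request;
                    System.out.println("Incoming request: " + http.getMethod() + " " + http.getRequestURI());
                }
                chain.doFilter(request, response);
            }
        }
    }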
As far as a use case goes, you'd use @EnableZuulServer when you need more customized behavior than what is available with @EnableZuulProxy.
So, for instance, maybe for debug purposes you want to be able to support a request header that proxies your request to a specified host when the request originates from within a specific IP range.
From the Spring Cloud Netflix documentation:
In this case the routes into the Zuul server are still specified by
configuring "zuul.routes.*", but there is no service discovery and no
proxying, so the "serviceId" and "url" settings are ignored.
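To make the debug scenario above a bit more concrete, here is a minimal sketch of what a pre-type ZuulFilter might look like with @EnableZuulServer. The class name, the header name, the IP check, and the context key are all made up for illustration:

    import javax.servlet.http.HttpServletRequest;
    import com.netflix.zuul.ZuulFilter;
    import com.netflix.zuul.context.RequestContext;

    // Illustrative pre-filter for the debug-routing scenario described above.
    public class DebugRoutingFilter extends ZuulFilter {

        @Override
        public String filterType() {
            return "pre"; // runs before any routing filters
        }

        @Override
        public int filterOrder() {
            return 1;
        }

        @Override
        public boolean shouldFilter() {
            HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
            // only honour the debug header for callers from an internal range (illustrative check)
            return request.getRemoteAddr().startsWith("10.");
        }

        @Override
        public Object run() {
            RequestContext ctx = RequestContext.getCurrentContext();
            String target = ctx.getRequest().getHeader("X-Debug-Route");
            if (target != null) {
                // stash the requested target so a custom routing filter of yours can act on it
                ctx.set("debugTarget", target);
            }
            return null;
        }
    }

With Spring Cloud, declaring such a filter as a @Bean is enough for it to be picked up; the actual forwarding to the requested host would then be done by a routing-type filter you write yourself.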

Modifying/updating grails config on runtime

A lot of configuration in Grails lives in grailsApplication.config. Let's say I have a secure management page for managing and updating properties: should I modify those configuration properties directly at runtime? Is that good practice? I'm taking the following into consideration:
the app should be scalable; multiple instances of the same app will be deployed
I will deploy the app on an application server, e.g. WildFly
I will use Hazelcast for sessions, etc.
Can you share your experience with this?

Use neo4j server instead of embedded mode

I'm working on a webapp running on Tomcat that uses Spring Data to connect to a Neo4j graph in embedded mode.
I would like to use Neo4j server instead of embedded mode, and I am looking for some help to make sure I go about it the right way.
Some of my application services are quite complex and combine, in a single transaction, the results of several Cypher queries into a DTO that is sent back to the user.
At first I thought I would have to create a server unmanaged extension, following these steps:
- Keep my webapp with Spring MVC and Spring Security to hold and secure user sessions.
- Group all my transactional services into a dedicated jar, my-app.jar.
- Use JAX-RS to expose a REST endpoint for each service in my-app.jar.
- Use something like Spring's RestTemplate from my Spring controllers to call the services in my-app.jar.
First question: is this the right way to do it?
Second question: I use a lot of Spring injection in my service layer. How can I keep it working (how can I add those dependencies to the server extension)?
Then I discovered GraphAware and I wonder if I should use it instead.
And finally I just read this post http://jexp.de/blog/2014/12/spring-data-neo4j-improving-remoting-performance/ and it seems that I should use SpringCypherRestGraphDatabase (as explained in the bold text at the end of the article).
Well, I'm a little bit lost, and I would appreciate any help with using Neo4j server instead of embedded mode for my application, which contains some complex transactions.
You have a number of options here and you are on the right track with your thinking.
Option 1:
If your use cases are business-logic-heavy, and your question suggests that they are, going the unmanaged extension route is one option.
Essentially, you can then combine the most performant Java API and Cypher (if you wish) to perform your use case. I wouldn't use SDN here by the way, so you have to do your mapping manually, but is there really any mapping? Maybe you just want to execute traversals / Cypher queries for each one of your use cases.
Each use case then exposes a simple REST API, which is consumed by your Spring-powered application running Spring MVC, Spring Security, and all that. You can use the RestTemplate from Spring in your app's Controllers.
To add a twist to all that, you can use the GraphAware Framework to develop the "unmanaged extension" using Spring MVC as well. That would be my preferred option, knowing nothing about your domain/app.
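As a rough illustration of Option 1, here is what one such use case could look like as an unmanaged extension, assuming a Neo4j 2.2-style API. The labels, Cypher, and resource path are made up, and the DTO building is reduced to a hand-rolled JSON string:

    import java.util.Collections;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Context;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Result;
    import org.neo4j.graphdb.Transaction;

    // Illustrative unmanaged extension: one REST endpoint per use case,
    // running its Cypher inside a single transaction on the server.
    @Path("/orders")
    public class OrderResource {

        private final GraphDatabaseService database;

        public OrderResource(@Context GraphDatabaseService database) {
            this.database = database;
        }

        @GET
        @Path("/{customerId}/summary")
        @Produces(MediaType.APPLICATION_JSON)
        public Response orderSummary(@PathParam("customerId") long customerId) {
            try (Transaction tx = database.beginTx()) {
                Result result = database.execute(
                        "MATCH (c:Customer)-[:PLACED]->(o:Order) WHERE id(c) = {id} " +
                        "RETURN count(o) AS orders",
                        Collections.<String, Object>singletonMap("id", customerId));
                long orders = (Long) result.next().get("orders");
                tx.success();
                // build whatever DTO/JSON you need here; kept trivial for the sketch
                return Response.ok("{\"orders\": " + orders + "}").build();
            }
        }
    }

For Neo4j 2.x you would register the extension package via org.neo4j.server.thirdparty_jaxrs_classes in neo4j-server.properties, and your Spring MVC controllers would then call the endpoint with something like RestTemplate's getForObject.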
Option 2:
Use the new version of SDN (v4) as Michael suggests. This allows you to run your application with annotated domain objects, Spring MVC, Security, et al. Operations (CRUD and other) are automatically translated to Cypher and sent across the wire to Neo4j running in server mode (no extensions needed). Results are then marshalled back to Java objects.
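Just to give an idea of what "annotated domain objects" means here, a hypothetical SDN 4 style entity might look roughly like this; the annotation names come from the Neo4j OGM that SDN 4 is built on and may shift between milestones:

    import java.util.HashSet;
    import java.util.Set;
    import org.neo4j.ogm.annotation.GraphId;
    import org.neo4j.ogm.annotation.NodeEntity;
    import org.neo4j.ogm.annotation.Relationship;

    // Illustrative entity: CRUD on it is translated to Cypher and sent to the remote server.
    @NodeEntity
    public class Customer {

        @GraphId
        private Long id;

        private String name;

        @Relationship(type = "PLACED")
        private Set<Order> orders = new HashSet<>();

        // getters and setters omitted for brevity
    }

    // A second (normally separate) entity so the relationship above has a target.
    @NodeEntity
    class Order {

        @GraphId
        private Long id;

        private String reference;
    }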
We're about to release Milestone 1 of SDN v4. It shouldn't take more than a week. That said, it is still going to be a Milestone release, thus not ready for production. A GA release is expected in May (ish).
You can already try SDN v4 yourself. Clone this repo: https://github.com/spring-projects/spring-data-neo4j, make sure you're on the 4.0 branch, and do an mvn clean install on it. Here's a sample app, built using Angular JS and Spring Boot.
Please do get in touch with feedback / questions / problems (best by email info at graphaware dot com). Cheers!
I suggest you wait a bit until SDN4 Milestone 1 comes out (developed by GraphAware); it was written from scratch for Neo4j server.

Grails Per-User Database Authentication

First off, I know the best-practice is to use a single database user account in your web app to take advantage of connection pooling to keep the app nice and responsive. However, due to the REQUIREMENTS (as in no changing this under any circumstances), I must authenticate each user with his or her database account.
The context is a warehouse management application that runs on Android, but gets its data from web services that I'm probably going to write in Grails unless a suggestion here shows me a tech more suitable for my requirements. Due to the nature of the application, the users would likely only need to authenticate once or twice a day, so I was thinking I could simply persist the Connections in a HashMap keyed by the hash code of the username concatenated with the password. That should allow the application to maintain the same or similar performance level as the best practice.
Now, my issue is in using the persisted Connection objects. I know that I will not be able to use them with GORM without a significant amount of customization, so I was planning on using them with groovy.sql.Sql, which works out well because most of the business logic is in PL/SQL packages anyway.
My question is how does the groovy.sql.Sql class deal with its Connection object? Will I run into issues of Connections being closed by it, or can I safely use my HashMap to persist the Connections?
groovy.sql.Sql will not close your connection out from under you; the connection is only closed when you explicitly call close(). From the javadoc for Sql.close():
If this SQL object was created with a Connection then this method
closes the connection.
So the Sql class is really for when you want to do things yourself, rather than relying entirely on Hibernate. That said, I think you can use Spring's UserCredentialsDataSourceAdapter for your solution. It uses a ThreadLocal to set credentials for each thread, so a call to setCredentialsForCurrentThread(String username, String password) on the adapter
would do the trick. There are other approaches you could try here.
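To make that concrete, here is a minimal sketch of how the adapter could be wired up. The driver class, URL, and credentials are placeholders; in a real app you would set and clear the credentials per request (e.g. in a filter or interceptor) using the authenticated user's account:

    import java.sql.Connection;
    import java.sql.SQLException;
    import org.springframework.jdbc.datasource.DriverManagerDataSource;
    import org.springframework.jdbc.datasource.UserCredentialsDataSourceAdapter;

    public class PerUserConnectionExample {

        public static void main(String[] args) throws SQLException {
            // Target DataSource pointing at the database; driver and URL are placeholders.
            DriverManagerDataSource target = new DriverManagerDataSource();
            target.setDriverClassName("oracle.jdbc.OracleDriver");
            target.setUrl("jdbc:oracle:thin:@db-host:1521:WMS");

            UserCredentialsDataSourceAdapter dataSource = new UserCredentialsDataSourceAdapter();
            dataSource.setTargetDataSource(target);

            // Typically done once per request/thread with the current user's credentials.
            dataSource.setCredentialsForCurrentThread("jdoe", "secret");
            try (Connection connection = dataSource.getConnection()) {
                // hand the connection to groovy.sql.Sql, JdbcTemplate, or your PL/SQL calls
            } finally {
                dataSource.removeCredentialsFromCurrentThread();
            }
        }
    }

The adapter is itself a DataSource, so it can be dropped in wherever your code currently expects one.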
I actually just found something that future visitors to this question may find useful. While digging into the Spring Framework's documentation, I discovered that their JDBC Extensions actually implements proxy authentication (where a proxy account is used to establish the connection, but an actual account is provided for the context of SQL execution). Unfortunately, the implementation as of 8/17/2012 does not support using passwords for users over the proxy connection, so it won't be usable for me currently, but anyone finding this question should check to see if that is still the case. Here are the links:
JDBC Extensions Docs v1.0.0.RC1
JDBC Extensions Docs Base
