How to enable SQL logging in Aqueduct 3? - dart

It would be very useful for me to see in the terminal what requests are executed and how long they take.
Logging of HTTP requests works fine, but I did not find a similar function for SQL.
Is there a way to enable logging globally using config.yaml or in prepare() of ApplicationChannel?

Looks like I found a dirty-hack solution:
@override
Future prepare() async {
  logger.onRecord.listen((rec) => print("$rec ${rec.error ?? ""} ${rec.stackTrace ?? ""}"));
  logger.parent.level = Level.FINE;
  ...
}
We need to make the log level more verbose than the default INFO; all SQL queries are logged at the FINE level.
I expected this setting to be loadable from config.yaml, but I did not find anything similar.
More about log levels can be found here.
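For now, a minimal sketch of loading the level from config.yaml yourself; the logLevel key and AppConfig class are hypothetical, not part of Aqueduct:

import 'dart:io';
import 'package:aqueduct/aqueduct.dart';

// Hypothetical config class: assumes config.yaml carries a custom
// `logLevel: FINE` entry alongside the usual keys.
class AppConfig extends Configuration {
  AppConfig(String path) : super.fromFile(File(path));
  String logLevel;
}

class AppChannel extends ApplicationChannel {
  @override
  Future prepare() async {
    final config = AppConfig(options.configurationFilePath);

    // Map the configured name onto a logging Level; fall back to INFO.
    logger.parent.level = Level.LEVELS.firstWhere(
        (l) => l.name == config.logLevel.toUpperCase(),
        orElse: () => Level.INFO);

    logger.onRecord.listen(
        (rec) => print("$rec ${rec.error ?? ""} ${rec.stackTrace ?? ""}"));
  }
}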

Related

Play 2.6, URI length exceeds the configured limit of 2048 characters

I am trying to migrate from Play 2.5 to 2.6.2. I keep getting the "URI length exceeds" error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The play.server prefix is only used for a small subset of convenience features for the Akka HTTP integration into the Play framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting the error because the header length exceeded the default 8 KB (8192 bytes). Adding the following to build.sbt worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try something similar for the URI length if the other options don't work.
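For example (an untested sketch; the flag mirrors the akka.http.parsing.max-uri-length setting shown above):

javaOptions += "-Dakka.http.parsing.max-uri-length=16k"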
This took me way too long to figure out, and it is somehow not to be found in the documentation.
Here is a snippet (confirmed working with play 2.8) to put in your application.conf which is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or, for an already deployed application, just set PLAY_MAX_URI_LENGTH; it is dynamically configurable without modifying command-line arguments:
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in ("HTTP header value exceeds the configured limit of 8192 characters"): go to Chrome
Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites,
search for the specific website, and choose Clear all data for that site.

Using Neo4j with React JS

Can we use the graph database Neo4j with React.js? If not, is there an alternative option for using a graph database with React?
Easily, all you need is neo4j-driver: https://www.npmjs.com/package/neo4j-driver
Here is the simplest usage:
neo4j.js
//import { v1 as neo4j } from 'neo4j-driver'
const neo4j = require('neo4j-driver').v1

const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))
const session = driver.session()

session
  .run(`
    MATCH (n:Node)
    RETURN n AS someName
  `)
  .then((results) => {
    results.records.forEach((record) => console.log(record.get('someName')))
    session.close()
    driver.close()
  })
It is best practice to always close the session as soon as you have the data; sessions are inexpensive and lightweight.
It is best practice to only close the driver once your program is done (as with MongoDB). You will see strange errors if you close the driver at a bad time, which is incredibly important to note if you are a beginner: errors like 'connection to server closed', etc. In async code, for example, if you run a query and close the driver before the results are parsed, you will have a bad time.
You can see in my example that I close the driver afterwards, but only to illustrate proper cleanup. If you run this code in a standalone JS file to test, you will see that Node.js hangs after the query and you need to press Ctrl + C to exit; adding driver.close() fixes that. Normally the driver is not closed until the program exits/crashes (which is never in a backend API) or, on the frontend, until the user logs out.
Knowing this now, you are off to a great start.
Remember: call session.close() immediately every time, and be careful with driver.close().
You could easily put this code in a React component or action creator and render the data; you will find it no different from hooking up and working with Axios. A sketch follows.
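For illustration, a hypothetical hooks-based component; the NodeList name, query, and credentials are placeholders rather than anything from the driver docs:

import React, { useEffect, useState } from 'react'
import { v1 as neo4j } from 'neo4j-driver'

const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))

function NodeList() {
  const [nodes, setNodes] = useState([])

  useEffect(() => {
    const session = driver.session()
    session
      .run('MATCH (n:Node) RETURN n AS someName')
      .then((results) => {
        setNodes(results.records.map((record) => record.get('someName')))
        session.close() // session closed as soon as the data is read
      })
  }, [])

  return <ul>{nodes.map((n, i) => <li key={i}>{JSON.stringify(n.properties)}</li>)}</ul>
}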
You can also run statements in a transaction, which is useful for write-locking the affected nodes. You should research that thoroughly first, but the transaction flow looks like this:
const session = driver.session()
const tx = session.beginTransaction()

try {
  // the difference is that you can chain multiple statements
  // inside the same transaction (query1/query2 are placeholders):
  const result1 = await tx.run(query1)
  // use result1 ...
  const result2 = await tx.run(query2)

  // then, once you are ready to commit the changes:
  // in my experience you have to await tx.commit() in async/await
  // code, otherwise it may not commit properly; it is not instant
  await tx.commit()
  session.close()
  return { result1, result2 }
} catch (error) {
  // on any failure, roll the whole transaction back
  await tx.rollback()
  session.close()
  throw error
}
tl;dr;
Yes, you can!
You are mixing two different technologies together: Neo4j is a graph database, and React.js is a front-end framework.
You can connect to Neo4j from JavaScript - http://neo4j.com/developer/javascript/
Interesting topic. I am using the driver in a React app and recently experienced some issues. I close the session every time a lifecycle hook completes, as in your example. With more intensive queries I would see a timeout error. Going back to my setup, I decided to experiment by also closing the driver after some of the more expensive queries, and it looks like the crashes are gone (this still needs more testing).
If you are deploying a real-world application, I would urge you to think about authentication and authorization when using a DB-to-React setup only, as you would have to store the username/password of the Neo4j server in the client. I am looking into options for having the Neo4j server issue a token and accept it for authorization, but the best practice is surely to have a Node.js server in the middle, with something like Passport, to handle authentication.
So, all in all, maybe the best scenario is to use the driver only in Node and have the browser always communicate with the Node server using axios, as sketched below.
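A minimal Express sketch of that setup (the /nodes route, port, and query are illustrative):

const express = require('express')
const neo4j = require('neo4j-driver').v1

const app = express()
const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))

app.get('/nodes', async (req, res) => {
  const session = driver.session()
  try {
    const results = await session.run('MATCH (n:Node) RETURN n AS someName')
    res.json(results.records.map((r) => r.get('someName')))
  } catch (err) {
    res.status(500).json({ error: err.message })
  } finally {
    session.close() // always close the session; keep the driver open
  }
})

app.listen(3000)
// In the browser: axios.get('/nodes').then(({ data }) => ...)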

Collecting log4j logs using Apache Flume

I've run into a problem. I use log4j and Apache Flume to collect logs. The architecture uses log4j's remote appender, configured like this:
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=192.168.152.49
log4j.appender.flume.Port=44446
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
while the Flume configuration looks like this:
a1.sources.r1.type=avro
a1.sources.r1.bind=192.168.152.49
a1.sources.r1.port=44446
It works! But the problem is: when Flume is down, the application that uses log4j can't log anymore. Can anybody tell me how to fix this?
It depends on how you want to handle Flume being down. With the regular Log4jAppender, you can enable unsafe mode which will log the error in the log4j LogLog, but otherwise fail silently. To do that you can set log4j.appender.flume.UnsafeMode = true. You can see an example here:
https://github.com/kite-sdk/kite-examples/blob/master/logging/src/main/resources/log4j.properties#L20
With unsafe mode enabled, any events you log while Flume is down will be lost.
If you want to be able to point to multiple Flume agents, and have it balance the load between them as well as fail over if one of them goes down, you can use the LoadBalancingLog4jAppender instead (see the sketch after the docs link). The docs here should help:
http://flume.apache.org/FlumeUserGuide.html#load-balancing-log4j-appender
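A sketch of the corresponding log4j.properties, based on the settings described in that guide (the second host is illustrative):

log4j.appender.flume = org.apache.flume.clients.log4jappender.LoadBalancingLog4jAppender
# space-separated list of agents to balance across and fail over between
log4j.appender.flume.Hosts = 192.168.152.49:44446 192.168.152.50:44446
log4j.appender.flume.Selector = ROUND_ROBIN
log4j.appender.flume.UnsafeMode = true
log4j.appender.flume.layout = org.apache.log4j.PatternLayout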

notify() is not working in IE8

I am working on a web app based on SproutCore 1.9.1. To retrieve data from the server it makes an SC.Request.getUrl() request, which works fine in all browsers except IE8. For IE8, when the request is like this:
SC.Request.getUrl("'http://example.com/some/path')
.set('isJSON', YES)
.async(false) // made async false to work in IE
.notify(this, 'someMethodDidComplete', { query: query, store: store})
.send();
works fine. But when the request is :
SC.Request.getUrl("'http://example.com/some/path')
.set('isJSON', YES)
.notify(this, 'someMethodDidComplete', { query: query, store: store})
.send();
it works fine in other browsers, but not in IE8. After spending some time with the issue I found out that finishRequest() is never invoked. As a workaround I set async to false, and then it works, but now I don't know what else to do. Please suggest something, and explain why the normal asynchronous request is not working. Thanks in advance.
This problem is known (https://github.com/sproutcore/sproutcore/issues/866) and seems to be fixed, at least on SC master.
As a side note: you include the query and store in an object passed as a parameter to .notify(). You don't need to do this; you can simply include them as extra parameters, and your notify function will be called with those extra parameters:
.notify(this, this.notifier, query, store)
and somewhere else in the file:
notifier: function(result, query, store) { }

How to disable Rack-Mini-Profiler temporarily?

I'm using rack-mini-profiler in Rails just fine, but during some coding sessions, especially when I'm working on a lot of client-side code, it gets in the way (mainly in my client-side debugging tools, network graphs, etc.).
I'm trying to turn it off with a before filter that also checks whether the user is authorized to see the profile at all, but deauthorize_request doesn't seem to do anything for me. Here's my code, called as a before filter:
def miniprofiler
  off = true
  if off || !current_user
    Rack::MiniProfiler.deauthorize_request
    return
  elsif current_user.role_symbols.include?(:view_page_profiles)
    Rack::MiniProfiler.authorize_request
    return
  end
  Rack::MiniProfiler.deauthorize_request
end
I also know there is a setting, Rack::MiniProfiler.config.authorization_mode, but I can't find docs on the possible values, and I don't see it used in the code. Right now it's telling me :allow_all, but :allow_none doesn't do anything either.
Even if I could just temporarily set a value in the dev environment file and restart the server, that would serve my purposes.
Get the latest version and type:
http://mysite.com?pp=disable
When you are done, type:
http://mysite.com?pp=enable
See ?pp=help for all the options:
Append the following to your query string:
pp=help : display this screen
pp=env : display the rack environment
pp=skip : skip mini profiler for this request
pp=no-backtrace : don't collect stack traces from all the SQL executed (sticky, use pp=normal-backtrace to enable)
pp=normal-backtrace (*) : collect stack traces from all the SQL executed and filter normally
pp=full-backtrace : enable full backtraces for SQL executed (use pp=normal-backtrace to disable)
pp=sample : sample stack traces and return a report isolating heavy usage (experimental works best with the stacktrace gem)
pp=disable : disable profiling for this session
pp=enable : enable profiling for this session (if previously disabled)
pp=profile-gc: perform gc profiling on this request, analyzes ObjectSpace generated by request (ruby 1.9.3 only)
pp=profile-gc-time: perform built-in gc profiling on this request (ruby 1.9.3 only)
You can also use Alt + p to toggle on Windows/Linux and Option + p on MacOS.
If you want the profiler to be disabled initially and then activated on demand, add a pre-authorize callback in an initializer file like:
Rack::MiniProfiler.config.pre_authorize_cb = lambda { |env| ENV['RACK_MINI_PROFILER'] == 'on' }
then in your application controller, add a before_filter that looks for the pp param:
before_filter :activate_profiler

def activate_profiler
  ENV['RACK_MINI_PROFILER'] = 'on' if params['pp']
  ENV['RACK_MINI_PROFILER'] = 'off' if params['pp'] == 'disabled'
end
Your environment will not have RACK_MINI_PROFILER set initially, but if you want to turn the profiler on, you can tack ?pp=enabled onto your URL. You can then disable it again later: pp=disabled only turns it off for the current session, but setting the ENV variable to 'off' kills it entirely until you force it back on.
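And if, as the question asks, you just want it off entirely in development until you flip it back, a minimal sketch using the same pre_authorize_cb hook in config/environments/development.rb:

# Deny every request up front; restart the server after changing this.
Rack::MiniProfiler.config.pre_authorize_cb = lambda { |env| false }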
