How to set up service method caching in Grails

My application has a couple of services that make external calls via httpClient (GET and POST) whose results are unlikely to change for months, but they are slow, making my application even slower.
Clarification: this is NOT about caching GORM/hibernate/queries to my db.
How can I cache these methods (persistence on disk gets bonus points...) in Grails 2.1.0?
I have installed the grails-cache-plugin, but it doesn't seem to be working, or I configured it wrong (hard to do, since there are only 2-5 lines to add, but I've managed it in the past).
I also tried setting up an nginx proxy cache in front of my app, but when I submit one of my forms with slight changes, I get the first submission back as the result.
Any suggestions/ideas will be greatly appreciated.
EDIT: Current solution (based on Marcin's answer)
My Config.groovy (the caching part only):
// caching
grails.cache.enabled = true
grails.cache.clearAtStartup = false
grails.cache.config = {
    defaults {
        timeToIdleSeconds 3600
        timeToLiveSeconds 2629740
        maxElementsInMemory 1
        eternal false
        overflowToDisk true
        memoryStoreEvictionPolicy 'LRU'
    }
    diskStore {
        path 'cache'
    }
    cache {
        name 'scoring'
    }
    cache {
        name 'query'
    }
}
The important parts are:
do not clear at startup (grails.cache.clearAtStartup = false)
overflowToDisk=true spills every entry beyond maxElementsInMemory to disk
maxElementsInMemory=1 keeps only a single entry in memory, so almost everything lands on disk
the 'diskStore' path should be writable by the user running the app
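For completeness, a minimal sketch of a service method wired to the 'scoring' cache above (the service name, method, and URL are hypothetical):

import grails.plugin.cache.Cacheable

class ScoringService {

    // the result of the slow GET is cached in the 'scoring' cache defined above;
    // with maxElementsInMemory=1 and overflowToDisk=true it ends up in the
    // diskStore, and clearAtStartup=false keeps it across restarts
    @Cacheable('scoring')
    String score(String reference) {
        new URL("http://example.com/score/${reference}").text
    }
}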

The Grails Cache Plugin works quite well for me under Grails 2.3.11. The documentation is pretty neat, but just to show you a draft...
I use the following settings in Config.groovy:
grails.cache.enabled = true
grails.cache.clearAtStartup = true
grails.cache.config = {
    defaults {
        maxElementsInMemory 10000
        overflowToDisk false
        maxElementsOnDisk 0
        eternal true
        timeToLiveSeconds 0
    }
    cache {
        name 'somecache'
    }
}
Then, in the service I use something like:
@Cacheable(value = 'somecache', key = '#p0.id.toString().concat(#p1)')
def serviceMethod(Domain d, String s) {
    // ...
}
Notice that the somecache name is reused. Also, it was important to use a String as the key in my case; that's why I called toString() on the id.
The plugin can also be set up to use disk storage, but I don't use it.
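For reference, a hedged sketch of what a disk-backed variant of this config could look like (mirroring the asker's EDIT above; the path is illustrative):

grails.cache.config = {
    defaults {
        maxElementsInMemory 1
        overflowToDisk true    // spill everything beyond memory to disk
    }
    diskStore {
        path 'cache'           // must be writable by the user running the app
    }
    cache {
        name 'somecache'
    }
}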
If it doesn't help, please provide more details on your issue.

This may not help, but if you upgrade the application to Grails 2.4.x you can use the @Memoized annotation (from groovy.transform). This will automagically cache the result of each method call based upon the arguments passed into it.
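A minimal sketch, assuming groovy.transform.Memoized (available in the Groovy release that Grails 2.4.x ships with); the service and URL are hypothetical:

import groovy.transform.Memoized

class QuoteService {

    // results are memoized in memory, keyed by the method arguments;
    // note there is no disk persistence and the cache is lost on restart
    @Memoized
    String fetchQuote(String reference) {
        new URL("http://example.com/quote/${reference}").text  // the slow GET
    }
}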

In order to store this "almost static" information you could use Memcached or Redis as a cache system (there are many others).
These two cache systems allow you to store key-value data (in your case something like "key_GET": JSON, XML, a map, or a String).
Here is a related post: Memcached vs. Redis?
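A rough Groovy sketch of that approach with Redis through the Jedis client (assumptions: Redis runs on localhost, and the key scheme and URL are made up for illustration):

import redis.clients.jedis.Jedis

def jedis = new Jedis('localhost')                       // assumes a local Redis instance
String key = 'scoring_GET_someReference'                 // hypothetical key scheme

String payload = jedis.get(key)
if (payload == null) {
    payload = new URL('http://example.com/score').text   // the slow external call
    jedis.setex(key, 3600, payload)                      // cache for one hour
}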
Regards.

Related

Graph builder of DSE configuration

It seems like messing with the configuration, such as trying to tune the number of thread readers for vertices and edges, causes a lot of unexplained exceptions; there is also an issue with trying to set the batch size.
It seems to only work with the default settings produced by the executable.
I've got a lot of exceptions while trying to play with those values;
some of them are:
[1] a CAS exception from Cassandra, something about the inability to create more partition keys
[2] Cassandra timeout during write query at consistency ONE
and more.
As there is no reference within the documentation about how to solve these issues, I don't know how to continue. It seems like everything there is very delicate and shaky, so any little change causes tons of exceptions.
This is with graph-loader 6.0.1 and DSE 6.0.0 or 6.0.1.
For example:
com.datastax.dsegraphloader.exception.LoadingException: com.datastax.driver.core.exceptions.InvalidQueryException: Resource limited exceeded on added vertices, properties and edges. Maximum of 100000 allowed. Please split transaction into multiple smaller ones and retry.
This is what I get when I try to use some config.
This is the Groovy file for the configuration:
// config
config preparation: false
config create_schema: false
config load_new: true
config load_edge_threads: 5
config load_vertex_threads: 5
config batch_size: 5000
// orders
inputfiledir = '/home/dseuser/'
profileInput = File.text(inputfiledir + "soc-pokec-profiles.txt").
    delimiter("\t").
    header('user_id', 'public', 'completion_percentage', 'gender', 'region', 'last_login', 'registration', 'age',
        'body', 'I_am_working_in_field', 'spoken_languages', 'hobbies', 'I_most_enjoy_good_food', 'pets', 'body_type',
        'my_eyesight', 'eye_color', 'hair_color', 'hair_type', 'completed_level_of_education', 'favourite_color',
        'relation_to_smoking', 'relation_to_alcohol', 'sign_in_zodiac', 'on_pokec_i_am_looking_for', 'love_is_for_me',
        'relation_to_casual_sex', 'my_partner_should_be', 'marital_status', 'children', 'relation_to_children', 'I_like_movies',
        'I_like_watching_movie', 'I_like_music', 'I_mostly_like_listening_to_music', 'the_idea_of_good_evening',
        'I_like_specialties_from_kitchen', 'fun', 'I_am_going_to_concerts', 'my_active_sports', 'my_passive_sports', 'profession',
        'I_like_books', 'life_style', 'music', 'cars', 'politics', 'relationships', 'art_culture', 'hobbies_interests',
        'science_technologies', 'computers_internet', 'education', 'sport', 'movies', 'travelling', 'health', 'companies_brands',
        'holder1', 'holder2')
relationInput = File.text(inputfiledir + "soc-pokec-relationships.txt").
    delimiter("\t").
    header('auser', 'buser')
profileInput = profileInput.transform {
    if (it['completion_percentage'] == 'null') { it.remove('completion_percentage') }
    if (it['gender'] == 'null') { it.remove('gender') }
    if (it['last_login'] == 'null') { it.remove('last_login') }
    if (it['registration'] == 'null') { it.remove('registration') }
    if (it['age'] == 'null') { it.remove('age') }
    it
}
load(profileInput).asVertices {
    label "user"
    key "user_id"
}
load(relationInput).asEdges {
    label "relation"
    outV "auser", {
        label "user"
        key "user_id"
    }
    inV "buser", {
        label "user"
        key "user_id"
    }
}
I tried to use the soc-pokec (social network) dataset from Stanford (available on the web).
I had to lose most of the config to solve the issue.
Note that there is no correlation at all between the numbers in the exception and the settings I made in the config.
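For what it's worth, a sketch of the stripped-down configuration that loads cleanly (an assumption based on "losing most of the config"; the thread counts and batch size fall back to the loader's defaults):

// config - tuning knobs removed; load_vertex_threads, load_edge_threads
// and batch_size stay at the loader's defaults
config preparation: false
config create_schema: false
config load_new: true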

TYPO3 - Retrieved TypoScript in itemsProcFunc is incomplete

I have the following problem:
We are overriding the tt_content TCA with a custom column which has an itemsProcFunc in its config. In the function we try to retrieve the TypoScript settings, so we can display the items dynamically. The problem is that in the function we don't receive all of the included TypoScript settings, but only some.
'itemsProcFunc' => 'Vendor\Ext\Backend\Hooks\TcaHook->addFields',
class TcaHook
{
    public function addFields($config)
    {
        $objectManager = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\\CMS\\Extbase\\Object\\ObjectManager');
        $configurationManager = $objectManager->get('TYPO3\\CMS\\Extbase\\Configuration\\ConfigurationManagerInterface');
        $setup = $configurationManager->getConfiguration(
            \TYPO3\CMS\Extbase\Configuration\ConfigurationManagerInterface::CONFIGURATION_TYPE_FULL_TYPOSCRIPT
        );
    }
}
$setup is now incomplete and doesn't contain the full TypoScript; for example, some of the statically included TypoScript is missing.
We use TYPO3 7 LTS (7.6.18) and PHP 7.0.* in Composer mode.
Does anybody know where the problem is? Is there some alternative?
You maybe misunderstood the purpose of TypoScript. It is a way of configuring the frontend, while the hook you mentioned is used in the TCA, which is a backend part of TYPO3. TypoScript usually isn't used for backend-related stuff at all, because it is bound to a specific page template record. In the backend there is TSconfig instead, which can be bound to a page but can also be added globally. Another thing you are doing wrong is the use of the ObjectManager and the ConfigurationManager, which are Extbase classes, and Extbase isn't initialized in the backend. I would recommend not using Extbase in the TCA, because the TCA is cached and loaded for every page request. Instead, use TSconfig or put your configuration settings directly into the TCA. Do not initialize Extbase and do not use Extbase classes in these hooks.
Depending on what you want to configure via TypoScript, you may want to do something like this:
'config' => [
    'type' => 'select',
    'renderType' => 'singleSelect',
    'items' => [
        ['EXT:my_ext/Resources/Private/Language/locallang_db.xlf:myfield.I.0', '']
    ],
    'itemsProcFunc' => \VENDOR\MyExt\UserFunctions\FormEngine\TypeSelectProcFunc::class . '->fillSelect',
    'customSetting' => 'somesetting'
]
and then access it in your class:
class TypeSelectProcFunc
{
    public function fillSelect(&$params)
    {
        if ($params['customSetting'] === 'somesetting') {
            $params['items'][] = ['New item', 1];
        }
    }
}
I had a similar problem (also with itemsProcFunc and retrieving TypoScript). In my case, the current page ID of the selected backend page was not known to the ConfigurationManager. Because of this, it used the page id of the root page (e.g. 1) and some TypoScript templates were not loaded.
However, before we look at the solution, Euli made some good points in his answer:
Do not use extbase configuration manager in TCA functions
Use TSconfig instead of TypoScript for backend configuration
You may like to ask another question what you are trying to do specifically and why you need TypoScript in BE context.
For completeness' sake I tested this workaround, but I wouldn't recommend it, for the reasons mentioned and because I am not sure whether it is best practice. (I only used it because I was patching an extension that was already using TypoScript in the TCA and I wanted to find out why it wasn't working. I will probably rework this part entirely.)
I am posting this in the hope that it may be helpful for similar problems.
public function populateItemsProcFunc(array &$config): array
{
    // workaround to set current page id for BackendConfigurationManager
    $_GET['id'] = $this->getPageId((int)($config['flexParentDatabaseRow']['pid'] ?? 0));
    $objectManager = GeneralUtility::makeInstance(ObjectManager::class);
    $configurationManager = $objectManager->get(BackendConfigurationManager::class);
    $setting = $configurationManager->getTypoScriptSetup();
    $templates = $setting['plugin.']['tx_rssdisplay.']['settings.']['templates.'] ?? [];
    // ... some code removed
}

protected function getPageId(int $pid): int
{
    if ($pid > 0) {
        return $pid;
    }
    $row = BackendUtility::getRecord('tt_content', abs($pid), 'uid,pid');
    return $row['pid'];
}
The function getPageId() was derived from ext:news, which also uses this in an itemsProcFunc but then retrieves the configuration from TSconfig. You may want to look at that for an example: ext:news GeorgRinger\News\Hooks\ItemsProcFunc::user_templateLayout
If you look at the code in the TYPO3 core, it will try to get the current page id from
(int)GeneralUtility::_GP('id');
https://github.com/TYPO3/TYPO3.CMS/blob/90fa470e37d013769648a17a266eb3072dea4f56/typo3/sysext/extbase/Classes/Configuration/BackendConfigurationManager.php#L132
This will usually be set, but in an itemsProcFunc it may not (which was the case for me in TYPO3 10.4.14).

ASP.NET Caching appears to be very limited after Windows Creators Update

I'm not sure I have a specific question other than: have other people seen this? And if so, is there a known workaround or fix? I have failed to find anything on Google about this topic.
Basically, we have an ASP.NET MVC application that we run on localhost and that makes extensive use of the ASP.NET in-memory cache. We cache many repeated requests to the DB.
Yesterday, we upgraded two of our dev machines to the Windows 10 Creators Update. After that update, we noticed that page requests on just those machines started to crawl: upwards of 30 seconds per page.
After some debugging and viewing logs, we are seeing that the system is making the same request to the DB 200-300 times per page request. Previously, this would just be cached the first time, and the request would not happen again until the cache expired it.
What we are seeing is that this code:
var myObject = LoadSomethingFromDb();
HttpRuntime.Cache.Insert("test", myObject);
var test = HttpRuntime.Cache.Get("test");
At some point the Get returns null, even when it comes right after the Insert in code, and even though there is no way the cache is even close to full. The application is just starting.
Anybody else see this?
Never mind. We got bit by the absolute cache expiration parameter, which I neglected to include in the question's code because I didn't think it was relevant.
We were using an absolute cache expiration of:
DateTime.Now.AddMinutes(60)
instead, we should have been using:
DateTime.UtcNow.AddMinutes(60)
Not sure why the former was fine in Windows prior to the Creators Update, but the change to UtcNow seems to make the cache work again.
It appears that after the Windows Creators Update the cache.Insert overloads behave differently.
[Test]
public void CanDemonstrateCacheExpirationInconsistency()
{
    var cache = HttpRuntime.Cache;
    var now = DateTime.Now;
    var key1 = $"Now{now.Ticks}";
    var key2 = key1 + "2";
    var key3 = $"UtcNow{now.Ticks}";
    var key4 = key3 + "2";

    cache.Insert(key1, true, null, DateTime.Now.AddHours(1), Cache.NoSlidingExpiration);
    cache.Insert(key2, true, null, DateTime.Now.AddHours(1), Cache.NoSlidingExpiration, CacheItemPriority.Default, null);
    cache.Insert(key3, true, null, DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);
    cache.Insert(key4, true, null, DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration, CacheItemPriority.Default, null);

    Assert.That(cache.Get(key1), Is.Null); // using this overload with DateTime.Now expires the cache immediately
    Assert.That(cache.Get(key2), Is.Not.Null);
    Assert.That(cache.Get(key3), Is.Not.Null);
    Assert.That(cache.Get(key4), Is.Not.Null);
}

Grails bulk insert/update optimization

I am importing a large amount of data from a CSV file (the file size is over 100 MB).
The code I'm using looks like this:
def errorLignes = []
def index = 1
csvFile.toCsvReader(['charset': 'UTF-8']).eachLine { tokens ->
    if (index % 100 == 0) cleanUpGorm()
    index++
    def order = Orders.findByReferenceAndOrganization(tokens[0], organization)
    if (!order) {
        order = new Orders()
    }
    if (tokens[1]) {
        def user = User.findByReferenceAndOrganization(tokens[1], organization)
        if (user) {
            order.user = user
        } else {
            errorLignes.add(tokens)
        }
    }
    if (tokens[2]) {
        def customer = Customer.findByCustomCodeAndOrganization(tokens[2], organization)
        if (customer) {
            order.customer = customer
        } else {
            errorLignes.add(tokens)
        }
    }
    if (tokens[3]) {
        order.orderType = Integer.parseInt(tokens[3])
    }
    // etc.....................
    order.save()
}
and I'm using the cleanUpGorm method to clean the session after every 100 entries:
def cleanUpGorm() {
    println "clean up gorm"
    def session = sessionFactory.currentSession
    session.flush()
    session.clear()
    propertyInstanceMap.get().clear()
}
I also turned the second-level cache off:
hibernate {
    cache.use_second_level_cache = false
    cache.use_query_cache = false
    cache.provider_class = 'net.sf.ehcache.hibernate.EhCacheProvider'
}
The Grails version of the project is 2.0.4, and I am using MySQL as the database.
For every entry, I am doing three find calls:
to check if the order already exists
to check if the user is correct
to check if the customer is correct
and finally I'm saving the order instance.
The import process is too slow; I am wondering how I can speed up and optimise this code.
EDIT:
I found that the searchable plugin is also making it slower,
so, to get around this, I used the command:
searchableService.stopMirroring()
But it is still not fast enough, so I am finally changing the code to use Groovy SQL instead.
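For illustration, a rough sketch of the Groovy SQL direction I'm heading in (assumptions: a dataSource is available for injection, and the table, columns, and the rows collection are made up):

import groovy.sql.Sql

def sql = new Sql(dataSource)
// one JDBC batch round trip per 100 rows instead of one insert per row
sql.withBatch(100, 'insert into orders (reference, order_type) values (?, ?)') { stmt ->
    rows.each { row ->
        stmt.addBatch([row.reference, row.orderType])
    }
}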
I found this blog entry very useful:
http://naleid.com/blog/2009/10/01/batch-import-performance-with-grails-and-mysql/
You are already cleaning up GORM, but try cleaning every 100 entries:
def propertyInstanceMap = org.codehaus.groovy.grails.plugins.DomainClassGrailsPlugin.PROPERTY_INSTANCE_MAP
propertyInstanceMap.get().clear()
Creating database indexes might help as well (see the sketch below), and use default-storage-engine=innodb instead of MyISAM.
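A sketch of what that could look like through the GORM mapping DSL (the index name is hypothetical, and the indexed columns are assumed to be the ones behind findByReferenceAndOrganization):

class Orders {
    String reference
    Organization organization

    static mapping = {
        // back the findByReferenceAndOrganization lookup with an index
        reference index: 'idx_orders_ref_org'
        organization index: 'idx_orders_ref_org'
    }
}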
I'm also in the process of writing a number of services that will load very large datasets (multiple files of up to ~17 million rows each). I initially tried the cleanUpGorm method you use, but found that, whilst it did improve things, the loading was still slow. Here's what I did to make it much faster:
Investigate what it is that is causing the app to actually be slow. I installed the Grails Melody plugin, then did a run-app then opened a browser at /monitoring. I could then see which routines took time to execute and what the worst-performing queries actually were.
Many of the Grails GORM methods map to a SQL ... where ... clause. You need to ensure that you have an index for each item used in a where clause for each query that you want to make faster, otherwise the method will become considerably slower the bigger your dataset is. This includes putting indexes on the id and version columns that are injected into each of your domain classes.
Ensure you have indexes set up for all of your hasMany and belongsTo relationships.
If the performance is still too slow, use Spring Batch. Even if you've never used it before, it should take you no time at all to set up a batch job that parses a CSV file into Grails domain objects. I suggest you use the grails-spring-batch plugin to do this and use the examples here to get a working implementation going quickly. It's extremely fast, very configurable, and you don't have to mess around with cleaning up the session.
I used batch inserts while inserting records; this is much faster than the GORM cleanup method. The example below describes how to implement it.
Date startTime = new Date()
Session session = sessionFactory.openSession()
Transaction tx = session.beginTransaction()
(1..50000).each { counter ->
    Person person = new Person()
    person.firstName = "abc"
    person.middleName = "abc"
    person.lastName = "abc"
    person.address = "abc"
    person.favouriteGame = "abc"
    person.favouriteActor = "abc"
    session.save(person)
    if (counter.mod(100) == 0) {
        session.flush()
        session.clear()
    }
    if (counter.mod(10000) == 0) {
        Date endTime = new Date()
        println "Record inserted Counter =>" + counter + " Time =>" + TimeCategory.minus(endTime, startTime)
    }
}
tx.commit()
session.close()

Several questions about this Varnish VCL

I'm setting up varnish-devicedetect VCL in Varnish 4.0.2:
https://github.com/varnish/varnish-devicedetect/blob/master/INSTALL.rst
I'm following the directions for method #1: "Send HTTP header to backend"
I've read through this readme and have Googled for quite some time now, and still quite a few concepts escape me.
Here's my code (excerpts):
default.vcl
include "devicedetect.vcl";
sub vcl_recv {
call devicedetect;
# ... snip ...
}
sub vcl_backend_response {
# device detect
if (bereq.http.X-UA-Device) {
if (!beresp.http.Vary) { # no Vary at all
set beresp.http.Vary = "X-UA-Device";
} elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device";
}
}
# ... snip ...
}
sub vcl_deliver {
# device detect
if ((req.http.X-UA-Device) && (resp.http.Vary)) {
set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
}
# ... snip ...
}
Here are my questions.
When I inspect the response in Chrome Dev Tools, why is the Vary header set to User-Agent? Isn't the whole approach of method #1 NOT to use the user agent, and instead use X-UA-Device?
Based on other guides I've read, it seems this will hit the origin for EACH type of mobile (if you look in devicedetect, it's split up into mobile-iphone, mobile-android, mobile-smartphone, etc.). Is this true in my code above? I definitely DON'T want to hit the origin server more than twice for any given URL (desktop and mobile; I don't want all the mobile-* variants cached separately).
Can someone describe what the three code blocks above actually do, in somewhat layman's terms? About the only one I truly understand is the first code block: call devicedetect just looks at the User-Agent and then sets the X-UA-Device header with the appropriate grouping on the request to the backend. I'm a bit confused about what the other two code blocks do, though.
Can I delete the bit with X-UA-Device-force if I don't intend to allow the user to 'use desktop site'?
The guide mentions that I should be setting something in the backend in my app code. Right now this is all I have (Rails); I'm not changing headers or anything else about the response, only the way the HTML looks (for the mobile version of the site). Should I be changing a header or something? This is what I have so far:
Rails:
def detect_device
  if request.headers['X-UA-Device'] =~ /^mobile/
    @device = 'mobile'
    prepend_view_path Rails.root + 'app' + 'views_mobile'
  else
    @device = 'desktop'
  end
end
As to point 1: your X-UA-Device is a custom header for internal consumption, i.e. by default not exposed to the external world. To ensure that external caches and proxies understand you are varying the response on the device/user agent, you have to transform the Vary header accordingly on the way out. This is where User-Agent comes in, as that is what X-UA-Device was derived from.
Note the comment within the link you indicated:
to keep any caches in the wild from serving wrong content to client #2 behind them, we need to transform the Vary on the way out.
