How to disable Grails Jasypt plugin encryption during integration tests?

Is there a way to disable encryption, or possibly use a trivial algorithm, during integration testing of a Grails project? The field-level encryption carries quite a bit of overhead that doesn't necessarily need to be tested and simply adds to the time the tests take to run.
Excluding the plugin during the test phase probably won't work, since the mapping is required and the compile would likely break.
I'm thinking a plain-text or simpler algorithm might work, or would it even be possible to have a config setting skip the encryption processing altogether?
The goal is simply to reduce the performance hit of the plugin during tests.

One alternative that could help would be to turn down keyObtentionIterations in dev (it's a config value). This is the number of iterations the encryptor applies when deriving the key; it repeats the derivation that many times precisely to make the key harder to crack, which is exactly what slows things down.
Change this in your config:
keyObtentionIterations = 1000
to
keyObtentionIterations = 1
(if you have it set; otherwise add it). That should speed things up significantly while still exercising the encryption path, so issues such as encrypted values overflowing column sizes are still tested.
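Since the full iteration count only matters against real attackers, you could scope the cheap value to the test environment. A minimal sketch of what that could look like in Config.groovy, assuming your plugin version reads its settings from a jasypt block (check the plugin docs for the exact keys):
environments {
    test {
        jasypt {
            keyObtentionIterations = 1 // cheap key derivation, tests only
        }
    }
    production {
        jasypt {
            keyObtentionIterations = 1000 // full-strength key derivation
        }
    }
}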
If that does help, I'd be curious to hear how much that speeds things up if you could reply with speed differences.

You could use a Groovy Category to stub out the encryption. But you will have to measure which one is the bigger performance hit (the Category dispatch or the encryption itself).
class EncryptionCategory {
    // Pass-throughs: return the input unchanged, so nothing is actually encrypted.
    static String decrypt(PBEStringEncryptor obj, String encStr) {
        return encStr
    }
    static String encrypt(PBEStringEncryptor obj, String str) {
        return str
    }
}
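A sketch of how the category might be applied around test code using Groovy's use block; PBEStringEncryptor is assumed to be the encryptor type the plugin invokes, and the service call is hypothetical:
use(EncryptionCategory) {
    // while this block runs, encrypt/decrypt on any PBEStringEncryptor
    // are pass-throughs, so values round-trip as plain text
    recordService.saveAndReload(record) // hypothetical call under test
}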


Why does VkAccessFlagBits include both read bits and write bits?

In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from the section 6.4. Execution And Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field. But you might in fact want READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented (and maybe vice versa?).
Like, maybe your Vulkan implementation will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final specified destination for the sake of a subsequent read operation, if it happens to know that the read will simply be served from that same cache, saving some time? But then a second write might happen and land in a different cache, leaving two caches in a race (with the winner undefined) to send their two values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read: well, yeah, that one is pretty useless. Khronos seems to agree (see Khronos issue #131); it is a pointless value in src (basically equivalent to 0).
read–write: an execution dependency should be sufficient to synchronize without this. Khronos seems to agree (see Khronos issue #131); it is a pointless value in src (basically equivalent to 0).
write–read: that's the obvious and most common one.
write–write: similar reasoning to write–read above. Without it, the order of the writes would be undefined. For most situations it is a bit pointless to write something you haven't even read in between, but hey, now you have a way to synchronize it.
You can provide a bitmask of several of these flags in both src and dst, in which case it makes sense to have both masks so that the driver can sort the dependencies out for you. (I don't expect API-level performance overhead from this, so it is allowed as a convenience.)
From an API design perspective, separate src semantics could have meant adding a different enum just for srcAccess. But perhaps the _READ variants could simply have been forbidden in srcAccess through "Valid Usage" rules, which makes this argument weak. The READ variants in src might have been kept because they are benign.
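To make the common write–read case concrete, here is a minimal sketch of a global memory barrier; the command buffer, the choice of stages, and the surrounding dispatch/draw recording are assumed:
VkMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT, /* writes to make available */
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT   /* reads to make them visible to */
};
vkCmdPipelineBarrier(cmdBuf,
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* stages that produced the writes */
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, /* stages that will read the data */
    0,                  /* no dependency flags */
    1, &barrier,        /* one global memory barrier */
    0, NULL,            /* no buffer memory barriers */
    0, NULL);           /* no image memory barriers */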

Any best practice around having different data in /priv for testing vs production?

I'm writing tests with EUnit and some of the Units Under Test need to read a data file via file:consult/1. My tests make assumptions about the data that will be available in /priv, but the data will be different in production. What's the best way to accomplish this?
I'm a total newbie in Erlang and I have thought of a few solutions that feel a bit ugly to me. For example,
Put both files in /priv and use a macro (e.g., "-ifdef(EUNIT)") to determine which one to pass to file:consult/1. This seems too fragile/error-prone to me.
Get Rebar to copy the right file to /priv.
Also please feel free to point out if I'm trying to do something that is fundamentally wrong. That may well be the case.
Is there a better way to do this?
I think both of your solutions would work. It is rather a question of maintaining such tests, and both of them rely on some external setup (the file existing, and containing the right data).
For me, the easiest way to keep the contents of such a file local to a given test is mocking, making file:consult/1 return the value you want:
7> meck:new(file, [unstick, passthrough]).
ok
8> meck:expect(file, consult, fun( _File ) -> {some, data} end).
ok
9> file:consult(any_file_at_all).
{some,data}
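In a real EUnit suite you would typically wrap the mock in a setup/teardown fixture so it is always unloaded; a sketch, assuming -include_lib("eunit/include/eunit.hrl") and meck as a test dependency (the mocked return value is made up):
consult_test_() ->
    {setup,
     fun() ->
             meck:new(file, [unstick, passthrough]),
             meck:expect(file, consult, fun(_File) -> {ok, [{some, data}]} end)
     end,
     fun(_) -> meck:unload(file) end,
     fun(_) ->
             %% the code under test calls file:consult/1 and sees the mock
             [?_assertEqual({ok, [{some, data}]},
                            file:consult("any_file_at_all"))]
     end}.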
It will work, but there are two more things you could do.
First of all, you should not be testing file:consult/1 at all. It is part of the standard library, and you can assume it works all right. Rather, you should test the functions that use the data read from this file, and of course pass them some in-test-created data. That gives you a nice separation between the data source and the code that parses (acts on) it, and later it might be simpler to replace file:consult with a call to an external service, or something like that.
The other thing is that trouble testing something should be a bad smell for you; it is a good indicator that some redesign might be justified. I'm not saying that you have to, but such problems are a good reason to think about it. If you are testing some functionality x, and you would like it to behave one way in production and another in tests (read one file or another), then maybe that behaviour should be injected into it. In other words, maybe the file your function reads should be a parameter of that function. If you would still like a "default file to read" in your production code, you could use something like this:
function_with_file_consult(A, B, C) ->
    function_with_file_consult(A, B, C, "default_file.dat").

function_with_file_consult(A, B, C, File) ->
    %% ... actual function logic ...
This allows you to use the shorter version in production and the longer one just for your tests.
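A test would then call the four-argument form directly with a fixture file (the path here is illustrative):
Result = function_with_file_consult(A, B, C, "test/fixtures/sample_data.dat").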

symfony2 free -m out of memory

I have a Symfony2 app in production, but we have RAM problems... It works like a charm when there are 50 active people (Google Analytics).
I usually select data from the DB like this:
$qb = $this->createQueryBuilder('s')
    ->addSelect('u')
    ->where('s.user = :user')
    ->andWhere('s.admin_status = false')
    ->andWhere('s.typ_statusu != :group')
    ->setParameter('user', $user)
    ->setParameter('group', 'group')
    ->innerJoin('s.user', 'u')
    ->orderBy('s.time', 'DESC')
    ->setMaxResults(15);
return $qb->getQuery()->getResult();
The queries are fast; I don't have a problem with them.
Let me know exactly what you need and I will paste it here. I really need to fix this.
But here is the problem: when there are 470 people on the site at the same time (Google Analytics), about 7 GB of memory is gone, and after the peak it falls back to 5 GB. But why so much? My scripts use 10-17 MB of memory in app_dev.
I also use APC. How can I solve this situation? Why is so much memory consumed? Thanks for any advice!
What's your average memory usage?
BTW: if I don't solve this I will be in big trouble.
One problem could be Doctrine, if you are hydrating too many objects on every single request.
Set max execution time of a script to only 30 seconds:
max_execution_time = 30
Set APC shm_size to something reasonable compared to your memory:
apc.shm_size = 256M
Then optimize your query. And if you use PHP/Symfony from the CLI, you had better limit the resource usage for PHP in CLI mode too.
Make sure you are interpreting memory consumption correctly: http://blog.scoutapp.com/articles/2010/10/06/determining-free-memory-on-linux
To speed up APC you can remove the file-modification check with apc.stat = 0, but then you will need to clear the APC cache every time you modify existing files: http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
To reduce memory consumption, cut down hydration by adding ->select('x') and fetching only the essential fields, as sketched below.
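A sketch against the query above; the selected fields are illustrative, so adjust them to your entity:
$rows = $this->createQueryBuilder('s')
    ->select('s.id', 's.time', 'u.username') // only the essential columns
    ->innerJoin('s.user', 'u')
    ->where('s.user = :user')
    ->setParameter('user', $user)
    ->orderBy('s.time', 'DESC')
    ->setMaxResults(15)
    ->getQuery()
    ->getArrayResult(); // plain arrays instead of managed entity objects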
To take load off the database, enable the MySQL query cache; something like this in /etc/mysql/my.cnf:
query_cache_size=128M
query_cache_limit=1M
Do not forget to enable and check your slow query log to catch bottlenecks.
I suspect that your page makes more than one query. How many queries happen on the page? The worst thing in Doctrine is its ability to issue queries through getters (e.g. getComments()). If you are using a many-to-many relation, this leads to huge problems. You can see all queries via the profiler in the dev environment.
It's also possible the problem is in the settings of Apache or PHP. Incorrect php-fpm settings lead to problems too. The best solution is to stress-test your server with tools like siege and watch what goes on through htop or top. 300 people can be a heavy load for a "naked" Apache.
Have you ever tried to retrieve scalar results instead of a collection of objects ?
// [...]
return $qb->getQuery()->getScalarResult();
http://docs.doctrine-project.org/en/latest/reference/query-builder.html#executing-a-query
http://doctrine-orm.readthedocs.org/en/2.0.x/reference/dql-doctrine-query-language.html#query-result-formats
At the Symfony configuration level, have you double-checked that caching has been enabled properly?
http://symfony.com/doc/current/reference/configuration/doctrine.html#caching-drivers
Detaching entities from your entity manager could prove useful depending on your overall process:
http://docs.doctrine-project.org/en/2.0.x/reference/working-with-objects.html#detaching-entities
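A sketch of that pattern for batch-style processing; it assumes $em is the entity manager behind the query, and uses Doctrine's Query::iterate() to stream rows:
$i = 0;
foreach ($query->iterate() as $row) {
    $entity = $row[0]; // iterate() yields one-element arrays
    // ... work with $entity ...
    if (++$i % 100 === 0) {
        $em->clear(); // detach processed entities so PHP can free them
    }
}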

Runtime script evaluation in Grails - Best Practices

In our application, numerous emails are sent from the system. These emails are of the same format for all users, with different contextual variables populating the dynamic data.
We are now planning a feature that allows administrators to edit and customize these templates. The plan is to use the Groovy shell to evaluate the templates at run time, e.g.:
Binding binding = new Binding()
binding.setVariable("model", [var1: "First Name", var2: "Last Name"])
GroovyShell shell = new GroovyShell(binding)
Object email = shell.evaluate('return "<html><title>Test Shell</title><body>${model.var1} ${model.var2}</body></html>"')
This seems to work adequately for us. The questions I have are:
Is the GroovyShell the preferred engine to use or is Rhino or other better?
Are there any performance concerns or memory issues to be aware of? Is there any low-hanging fruit we can optimize, i.e. can the shell or binding be reused?
What's the biggest bottleneck in the above code? The construction? The evaluation?
thanks
I would recommend using something like GroovyPagesTemplateEngine because it goes beyond a plain Groovy eval: you can use all the Grails taglib goodness as well. I'm using both GroovyPagesTemplateEngine and SimpleTemplateEngine for your exact scenario.
SimpleTemplateEngine is slightly faster so if I don't need much more than simple binding I use it. When I need to deal with logic and control structures, I use GroovyPagesTemplateEngine.
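A sketch of the SimpleTemplateEngine variant of your example; the parsed template can be created once and cached, then re-bound per email, which also speaks to your reuse question (template text and model are illustrative):
import groovy.text.SimpleTemplateEngine

def engine = new SimpleTemplateEngine()
def template = engine.createTemplate(
    '<html><title>Test Shell</title><body>${model.var1} ${model.var2}</body></html>')
// re-use the parsed template with a fresh binding for each email
String email = template.make([model: [var1: "First Name", var2: "Last Name"]]).toString()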
For Grails, use the page rendering API instead: http://grails.org/doc/2.0.x/guide/introduction.html#whatsNew

lua script error checking

Is it possible to check whether a Lua script contains errors without executing it? I have the following code:
if(luaL_loadbuffer(L, data, size, name))
{
fprintf (stderr, "%s", lua_tostring (L, -1));
lua_pop (L, 1);
}
if(lua_pcall(L, 0, 0, 0))
{
fprintf (stderr, "%s", lua_tostring (L, -1));
lua_pop (L, 1);
}
But if the script contains errors it passes first if and it is executed. I want to know if it contains errors when I load it, not when I execute it. Is this possible?
You can use the Lua compiler, luac. It will only compile your file to bytecode without executing it.
Your program will also have the advantage of loading faster if it is precompiled.
You can even use the -p option to perform only a syntax check, according to the linked man page:
-p load files but do not generate any output file. Used mainly for syntax checking or testing precompiled chunks: corrupted files will probably generate errors when loaded. For a thorough integrity test, use -t.
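The same check can be done in-process through the C API: luaL_loadbuffer only compiles the chunk, and it is lua_pcall that actually runs it. A sketch of a syntax-only check, reusing the names from your snippet, is to load without calling:
if (luaL_loadbuffer(L, data, size, name) != 0)
{
    /* compile-time (syntax) error */
    fprintf(stderr, "%s", lua_tostring(L, -1));
    lua_pop(L, 1); /* pop the error message */
}
else
{
    lua_pop(L, 1); /* pop the compiled chunk without executing it */
}
Note that this only catches syntax errors; runtime errors (calling a nil value, bad arguments, and so on) will still only surface when the chunk is executed.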
(This was originally meant as a reply to the first comment to Krtek's question, but I ran out of space there and to be honest it works as an answer just fine.)
Functions are essentially values, and thus a named function is actually a variable of that name. Variables, by their very definition, can change as a script is executed. Hell, someone might accidentally redefine one of those functions. Is that bad? To sum my thoughts up: depending on the script, parameters passed and/or actual implementations of those pre-defined functions you speak of (one might unset itself or others, for example), it is not possible to guarantee things work unless you are willing to narrow down some of your demands. Lua is too dynamic for what you are looking for. :)
If you want a flawless test: create a dummy environment with all bells and whistles in place, and see if it crashes anywhere along the way (loading, executing, etc). This is basically a sort of unit test, and as such would be pretty heavy.
If you want a basic check to see whether a script has valid syntax: Krtek already gave an answer for that. I am quite sure (but not 100%) that the Lua equivalent is loadfile or loadstring, and the respective C equivalent is to try to lua_load() the code. Each of these converts a readable script to bytecode, which you would already need to do before you could actually execute the code in your normal all-is-well use case. (And if the script contains function definitions, those still need to be executed later for the code inside them to run.)
However, these are the extent of your options with regards to pre-empting errors before they actually happen. Lua is a very dynamic language, and what is a great strength also makes for a weakness when you want to prove correctness. There are simply too many variables involved for a perfect solution.
In general it is not possible, as Lua is a dynamic language, and most of errors happen in runtime.
If you want to check for syntax errors, use luac with the -p option. I use it as part of my pre-commit hook, for example.
Other common errors are triggered by misuse of global variables. You may analyze the output of luac -l to catch these cases. See here: http://lua-users.org/wiki/DetectingUndefinedVariables.
If you want something more advanced, there are several more-or-less functional static analysis tools for Lua code. Start with LuaInspect.
In any case, you are advised to write unit tests instead of just relying on static code checks. Less pain, more gain.
