My understanding of the net.sf.ehcache.Cache -> net.sf.ehcache.statistics.StatisticsGateway chain is that there is no way to disable statistics upfront (whether programmatically or via configuration). Is that right?
I'm asking because right after the app starts, the ehcache statistics (in org.terracotta.statistics.archive.StatisticSampler, to be more precise) take up 2.5 MB of memory, and they are not even used within the app.
No, you can't disable them. They actually enable and disable themselves automatically.
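For reference, here is a minimal sketch (assuming Ehcache 2.x and a hypothetical cache named "demo") showing that the statistics gateway is always reachable from a cache instance; neither Cache nor CacheConfiguration exposes a switch to turn it off:

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.statistics.StatisticsGateway;

public class StatsDemo {
    public static void main(String[] args) {
        // "demo" is a hypothetical cache name used only for this sketch.
        CacheManager manager = CacheManager.create();
        manager.addCache(new Cache(new CacheConfiguration("demo", 100)));
        Cache cache = manager.getCache("demo");

        cache.put(new Element("key", "value"));
        cache.get("key");

        // The gateway is always present; there is no setter to disable it.
        StatisticsGateway stats = cache.getStatistics();
        System.out.println("hits = " + stats.cacheHitCount());

        manager.shutdown();
    }
}
```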
For Android automation tests, I want to reduce the execution time between test cases.
Apart from using IDs, is there any other way?
First, if you are using XPath selectors, you should avoid them: XPath is one of the slowest locator strategies. Using IDs wherever possible is the most efficient approach. (You already mentioned that you are using IDs, so you don't need to worry about the selectors.)
The second thing to improve is waits. If you are using implicit waits and/or Thread.sleep(), get rid of them and implement conditional explicit waits, such as waiting until an element becomes visible. This will cut out unnecessary wait time. And if you are also using verification methods to validate elements that should disappear from the page, keep that waiting time to a minimum.
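As a rough illustration (a sketch assuming Selenium 4's WebDriverWait from the Java client; the locator is a placeholder), an explicit wait could look like this:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Waits {
    // Polls until the element is visible (up to 10 seconds) instead of sleeping a fixed time.
    public static WebElement waitUntilElementVisible(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}
```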
Third, you can set the "noReset" capability to true in your Desired Capabilities. This tells Appium not to reset the app state on your emulator or device; when there is nothing to reset, initialization takes less time.
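For example (a sketch assuming the Appium Java client; the server URL, device name and package/activity names are placeholders):

```java
import java.net.URL;

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class DriverFactory {
    public static AndroidDriver createDriver() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "emulator-5554");   // placeholder device
        caps.setCapability("appPackage", "com.example.app"); // placeholder package
        caps.setCapability("appActivity", ".MainActivity");  // placeholder activity
        caps.setCapability("noReset", true);                 // skip resetting app state between sessions

        return new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }
}
```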
Fourth, turning off the device animations will also reduce the execution time.
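One way to do this from a Java test setup is to shell out to adb before the run (a sketch assuming adb is on the PATH and a single connected device/emulator):

```java
// A sketch: disable the three system animation scales over adb before the test run.
public class Animations {
    private static final String[] SETTINGS = {
            "window_animation_scale",
            "transition_animation_scale",
            "animator_duration_scale"
    };

    public static void disable() throws Exception {
        for (String setting : SETTINGS) {
            new ProcessBuilder("adb", "shell", "settings", "put", "global", setting, "0")
                    .inheritIO()
                    .start()
                    .waitFor();
        }
    }
}
```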
If you're targeting the Android platform only, it would make sense to reconsider the tool selection and switch to Espresso, which is faster than Appium due to the nature of its implementation. Check out the How to Get Started with Espresso (Android) article for more information.
If you have to proceed with Appium:
Consider using the most optimal locator strategy (if possible use ID instead of XPath)
Consider using the Page Object design pattern; it will allow you to get rid of unnecessary waits and stale element errors thanks to the lazy initialization routine (see the sketch after this list)
Consider running your Appium tests in parallel.
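A minimal page object sketch (assuming a recent Appium Java client with its pagefactory support; the resource IDs are placeholders):

```java
import java.time.Duration;

import io.appium.java_client.AppiumDriver;
import io.appium.java_client.pagefactory.AndroidFindBy;
import io.appium.java_client.pagefactory.AppiumFieldDecorator;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {
    // Placeholder resource IDs; replace with your app's actual IDs.
    @AndroidFindBy(id = "com.example.app:id/username")
    private WebElement username;

    @AndroidFindBy(id = "com.example.app:id/password")
    private WebElement password;

    @AndroidFindBy(id = "com.example.app:id/login")
    private WebElement loginButton;

    public LoginPage(AppiumDriver driver) {
        // Lazy proxies: elements are only looked up when they are actually used.
        PageFactory.initElements(new AppiumFieldDecorator(driver, Duration.ofSeconds(5)), this);
    }

    public void login(String user, String pass) {
        username.sendKeys(user);
        password.sendKeys(pass);
        loginButton.click();
    }
}
```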
I have a long list of cells which each contain an image.
The images are large on disk, as they are used for other things in the app like wallpapers etc.
I am familiar with the normal Android process for resampling large bitmaps and disposing of them when no longer needed.
However, I feel like trying to resample the images on the fly in a list adapter would be inefficient without caching them once decoded; otherwise a fling would spawn many threads and I would have to manage cancelling unneeded images, etc.
The app is built making extensive use of the fantastic MVVMCross framework. I was thinking about using the MvxImageViews as these can load images from disk and cache them easily. The thing is, I need to resample them before they are cached.
My question is, does anybody know of an established pattern to do this in MVVMCross, or have any suggestions as to how I might go about achieving it? Do I need to customise the Download Cache plugin? Any suggestions would be great :)
OK, I think I have found my answer. I had been accidentally looking at the old MVVMCross 3.1 version of the DownloadCache Plugin / MvxLocalFileImageLoader.
After cloning the up to date (v3.5) repo I found that this functionality has been added. Local files are now cached and can be resampled on first load :)
The MvxImageView has a Max Height / Width setter method that propagates out to its MvxImageHelper, which in turn sends it to the MvxLocalFileImageLoader.
One thing to note is that the resampling only happens if you are loading from a file, not if you are using a resource id.
Source is here: https://github.com/MvvmCross/MvvmCross/blob/3.5/Plugins/Cirrious/DownloadCache/Cirrious.MvvmCross.Plugins.DownloadCache.Droid/MvxAndroidLocalFileImageLoader.cs
Once again MVVMCross saves my day ^_^
UPDATE:
Now that I actually have it all working, here are some pointers:
As I noted in the comments, local image caching is currently only available in the 3.5.2 alpha of MVVMCross. This was incompatible with my project, so using 3.5.1 I created my own copies of the 3.5.2a MvxImageView, MvxImageHelper and MvxAndroidLocalFileImageLoader, along with their interfaces, and registered them in the Setup class.
I modified the MvxAndroidLocalFileImageLoader to also resample resources, not just files.
You have to bind to the MvxImageView's ImageUrl property using the "res:" prefix as documented here (Struggling to bind local images in an MvxImageView with MvvmCross); if you bind to 'DrawableId', the image is assigned directly to the underlying ImageView and no caching / resampling happens.
I needed to be able to set the customised MvxImageView's Max Height / Width for resampling after the layout was inflated/bound, but before the images were retrieved (I wanted to set them during 'OnMeasure', but the images had already been loaded by then). There is probably a better way, but I hacked in a bool flag 'SizeSet'. The image URL is temporarily stored if this is false (i.e. during the initial binding). Once it is set to true (after OnMeasure), the stored URL is passed to the underlying ImageHelper to be loaded.
One section of the app uses full-screen images as the background of fragments in a pager adapter. The bitmaps are not getting garbage collected quickly enough, leading to eventual OOMs when trying to load the next large image. Manually calling GC.Collect() when the fragments are destroyed frees up the memory, but it causes a UI stutter and also wipes the cache, as it uses weak refs.
I was getting frequent SIGSEGV crashes on Lollipop when moving between fragments in the pager adapter (they never happened on KitKat). I managed to work around the issue by adding a SetImageBitmap(null) to the ImageView's Dispose method. I then call Dispose() on the ImageView in its containing fragment's OnDestroyView().
Hope this helps someone, as it took me a while!
Using Logos, I can initialize a group of hooks using %init(groupName). I was wondering if there is a way to disable the group of hooks as well. I need my tweak to be disabled while the phone is locked.
Currently, I'm calling init in my tweak whenever the lockscreen is dismissed, and killing the process (mobilemail) whenever the lockscreen is activated. That seems like a crude solution, though; is there something better?
Thanks for the help.
1) No, you cannot disable hooks in the sense you're thinking of once they're initialized.
2) Yes, killing the process will disable the tweak (because it's injected into the process when the process is spawned, and runs within that process). However, you definitely should not do that. Instead, you should enable the tweak when the user unlocks the device and disable it when they lock it. You could even simply use a static boolean to do this if you want to keep it über simple. You cannot "unload" the code, per se, but you can certainly have it stop executing when a condition is not met.
Happy coding.
We need to drive 8 to 12 monitors from one PC, all rendering different views of a single 3D scene graph, so we have to use several graphics cards. We're currently running on DX9, so we are looking to move to DX11 in the hope that it makes this easier.
Initial investigations seem to suggest that the obvious approach doesn't work: performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to go about utilising several cards simultaneously from a single process with DX11?
I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes onto this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11:
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating through these.
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) using D3D11CreateDevice for each of the adapters. You may want a different thread for interacting with each of your devices; that thread could have a specific processor affinity if it helps speed things up for you.
Once each adapter has its own device, create a swap chain and render target for each output. You can also create a depth stencil view for each output while you're at it.
The process of creating a swap chain requires your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain; you can just create the windows as hosts for your swap chains and then forget about them entirely afterwards.
For rendering, you will need to iterate through each Output of each Device. For each output, change the device's render target to the one you created for that output using OMSetRenderTargets. Again, you can run each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!
Maybe you don't need to upgrade DirectX.
See this article.
Enumerate the available devices with IDXGIFactory, create an ID3D11Device for each, and then feed them from different threads. It should work fine.
I've been using System.Web.Cache for a while for testing purposes. It's quite a nice cache store and speeds up my web pages quite a lot. But in some cases, after I visit a few more pages and then come back to a cached page a few more times, the page runs its query again (I checked using a profiler).
Does System.Web.Cache store its entries in RAM, or in some type of memory that makes the cache get cleared once in a while when memory is low? Or is it a mistake somewhere on my side? Is System.Web.Cache good to use for production?
Best wishes to you :)
The cache will automatically start removing items when your system runs low on memory. The items it picks can be controlled to some degree by the priority you give them when you insert them into the cache; use the CacheItemPriority enum to specify the priority in the Cache.Add() method. Yes, the cache is fine for production; whether it is good for your specific implementation, only you can tell.
The other issue to watch for is the IIS application pool getting recycled, which clears the in-memory cache entirely.
Yes, the ASP.NET cache is perfectly fine for production use (however, consider Velocity if you have a web farm). And yes, it does automatically remove items based on memory pressure, item priority and other "metrics".