BPEL, threads stuck on HashMap.getEntry? - xml-parsing

I am new to SOA, and we have run into a problem using BPEL for some XML transformation.
We have three SOA projects that together do the following:
1. Read input files, which are in text format, from a folder.
2. Save the file content in the database and put the file id on AQ.
3. Read the file id from AQ, load the content from the database, and transform it into our internal XML format.
4. Apply some business logic and transform the content back to text format.
SOA project 1 does steps 1-2, project 2 does step 3, and project 3 does step 4.
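For readers new to this pattern, the flow above can be sketched in plain Java. This is a minimal stand-in only: a queue plays the role of AQ and a map the role of the database table; the real system uses Oracle AQ, a database, and BPEL transformations, and all names below are hypothetical.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the three-project flow: ingest stores content and
// enqueues the id; transformNext dequeues an id, loads the content,
// and wraps it in a (toy) XML envelope.
class FilePipeline {
    private final Map<Long, String> db = new ConcurrentHashMap<>();
    private final Queue<Long> aq = new ConcurrentLinkedQueue<>();
    private final AtomicLong ids = new AtomicLong();

    // Steps 1-2: store raw file content, publish its id on the queue.
    long ingest(String fileContent) {
        long id = ids.incrementAndGet();
        db.put(id, fileContent);
        aq.offer(id);
        return id;
    }

    // Step 3: take the next id off the queue, load content, wrap as XML.
    String transformNext() {
        Long id = aq.poll();
        if (id == null) return null;          // nothing queued
        return "<doc id=\"" + id + "\">" + db.get(id) + "</doc>";
    }
}
```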
We are running a load test with 7,000 input files.
The problem we see is that "Old Generation" memory usage keeps accumulating. A major GC reduces it, but it keeps growing until it reaches 100%; at that point no new BPEL instance can be created and we hit transaction timeouts.
After analyzing a heap dump we got the result below: BPELFactoryImpl holds a HashMap of more than 180 MB, and it keeps growing. Has anyone experienced something similar?
We use SOA version 12.1.3. This problem has blocked us for weeks; any help is much appreciated.
Image of heap analysis

Update: we finally got an answer on this. As Oracle Support confirmed, it was caused by a bug, and we are waiting for the patch.
Thanks for your attention.
It's a bug. You should raise an SR, referring to the stuck threads at:
at java.util.HashMap.getEntry(HashMap.java:465)
at java.util.HashMap.get(HashMap.java:417)
at oracle.xml.parser.v2.XMLNode.setUserData(XMLNode.java:2137)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doCreateElement(ExtensibleElementImpl.java:502)
at oracle.dp.entity.impl.EmFacadeObjectImpl.getElement(EmFacadeObjectImpl.java:35)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.performDOMChange(ExtensibleElementImpl.java:707)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl.doOnChange(ExtensibleElementImpl.java:636)
at oracle.bpel.lang.v20.model.impl.ExtensibleElementImpl$DOMUpdater.notifyChanged(ExtensibleElementImpl.java:535)
at oracle.dp.notify.impl.NotifierImpl.emNotify(NotifierImpl.java:39)
at oracle.dp.entity.impl.EmHolderImpl.doNotifyOnSet(EmHolderImpl.java:53)
at oracle.dp.entity.impl.EmHolderImpl.set(EmHolderImpl.java:47)
at oracle.bpel.lang.v20.model.impl.CopyImpl.setTo(CopyImpl.java:115)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP$CallArgument$1.evaluate(BPEL2xCallWMP.java:190)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.invokeMethod(BPEL2xCallWMP.java:103)
at com.collaxa.cube.engine.ext.bpel.v2.wmp.BPEL2xCallWMP.__executeStatements(BPEL2xCallWMP.java:62)
at com.collaxa.cube.engine.ext.bpel.common.wmp.BaseBPELActivityWMP.perform(BaseBPELActivityWMP.java:188)
at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:2880)
....
Bug 20857627 (20867804) : Performance issue due to large number of threads stuck in HashMap.get
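For context on why threads get stuck there: java.util.HashMap is not thread-safe, and concurrent writes during a resize can corrupt a bucket's linked list into a cycle, after which get()/getEntry() spins forever, which matches the signature in the trace above. The actual fix here is Oracle's patch, not user code, but the standard remedy for this class of bug is a ConcurrentHashMap. A minimal sketch (class and field names are hypothetical, not taken from the BPEL internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration of the failure class: per-node user data
// held in a shared map. With a plain HashMap, concurrent put() calls
// can corrupt a bucket during resize so that a later get() loops
// forever; ConcurrentHashMap is safe under concurrent reads and writes.
class NodeUserData {
    private final Map<String, Object> userData = new ConcurrentHashMap<>();

    public void setUserData(String key, Object value) {
        userData.put(key, value);
    }

    public Object getUserData(String key) {
        return userData.get(key); // never spins, unlike a corrupted HashMap
    }
}
```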

Related

Getting "chunk write below min size" when trying to write to Google Cloud using gcsfs

I have a script which uses gcsfs to write data to Google Cloud. Most of the time it works, but fairly regularly I get the following error:
ValueError: Non-final chunk write below min size.
This error seems to come from GCSFile._upload_chunk.
I can't really find anything in the docs that explains what might be going wrong here. I read this thread, which suggests it might be related to how the data is committed (should I disable autocommit?), but I'm not sure it's entirely relevant. I read through the source of that function, but that didn't help much either. I'd appreciate any guidance!
My code looks like this:
with gcs.open(file_path, mode='w') as f:
    f.write('\n'.join(output_data))
output_data here is a list of strings. gcs is an instance of gcsfs.GCSFileSystem.
This issue apparently no longer happens in v0.7.0. Anyone facing it should upgrade.

--workerCacheMB setting missing in Apache Beam 0.6?

In Google Cloud Dataflow 1.x, I had access to a critical pipeline option called:
workerCacheMb
I tried to set it in my Beam 0.6 pipeline, but couldn't (it said that no such option existed). I then scoured the options source code to see if any option had a similar name, but I still couldn't find it.
I need to set it because I think my workflow's incredible slowness is due to a side input that is 3 GB but appears to take well over 20 minutes to read. (I have a View.asList() and then I'm trying to do a for-loop on the list; it's taking more than 20 minutes and still going. Even at 3 GB, that's way too slow.) So I was hoping that setting workerCacheMb would help. (The only other theory I have is to switch from SerializableCoder to AvroCoder.)
Are you using the right class of options?
The following code works for me in Beam:
DataflowWorkerHarnessOptions options = PipelineOptionsFactory.fromArgs(args)
    .withValidation()
    .create()
    .cloneAs(DataflowWorkerHarnessOptions.class);
options.setWorkerCacheMb(3000);

Delphi Text Files get NULLS (0's) written to them instead of text

Unfortunately this question may be a bit vague. I have a problem that I find difficult to describe: it is intermittent and I cannot reproduce it myself. I am just hoping that someone else has seen something like it before.
My application has quite a lot of text and INI files that get written when it closes down. Typically this is in response to a Close event, but it may also be triggered by a WM_ENDSESSION. Unfortunately, at the moment I am not sure whether both, or only one, of these events can result in the problem I am about to describe, because I have been unable to reproduce it myself.
The issue is that for some users, some of the text and INI files end up being written as NULs. The file sizes look about right, but instead of text, every character is written as $00. So instead of 500 bytes of regular ASCII text I end up with 500 $00 bytes. I also have an application log file that can sometimes end up with NULs written to it as well, although the logging of $00s to the log file does not necessarily coincide with $00s being written to the config files.
For my files I am using TMemIniFile or TStringList, which means that ultimately TStrings.SaveToFile is being called for all of my config files.
sl := TStringList.Create;
try
  SourceList.GetSpecificSubset(sl);
  AppLogLogLine('Commands: Saving Always Available list. List has ' +
    IntToStr(sl.Count) + ' commands.');
  sl.SaveToFile(fn);
finally
  sl.Free;
end;
But there are also instances where I already have a TStringList in memory and just call SaveToFile on it. For TMemIniFile the structure looks similar to the above. In some cases I may have an outer loop writing multiple lists; some of those result in files being written correctly, and some will be full of $00s.
EDIT: GetSpecificSubset is simply a function that populates "sl" with a list of command names. I have "GetAllUsersCommands", "GetHiddenCommands", "GetAlwaysVisibleCommands", etc. Note that my log file also records this kind of thing, as a check on how big those lists are:
16/10/2013 11:17:49 AM: Commands: Saving Any User list. List has 8 commands.
16/10/2013 11:17:49 AM: Commands: Saving Always Visible list. List has 17 commands.
16/10/2013 11:17:49 AM: Commands: Saving Always Hidden list. List has 2 commands.
I accidentally left the logging line out of the code above (it is included now). That log line is the last thing written before calling TStrings.SaveToFile, and at that point the list thinks it has data. Even if somehow each line of text were NULs, I would still expect to see CRLF pairs (#13#10) in the files, but that is not happening.
Here's a screen cap from a HEX editor:
EDIT 2: I just realised I left out a very important piece of information: this is only intermittent. It works 99% of the time. When saving files at shutdown it might not even be all files; even if I have a loop saving multiple similar files, some may work fine and others may fail.

symfony2 free -m out of memory

I have a Symfony2 app in production, but we have RAM problems. It works like a charm when there are 50 active people (Google Analytics).
I usually select data from the DB like this:
$qb = $this->createQueryBuilder('s')
    ->addSelect('u')
    ->where('s.user = :user')
    ->andWhere('s.admin_status = false')
    ->andWhere('s.typ_statusu != :group')
    ->setParameter('user', $user)
    ->setParameter('group', 'group')
    ->innerJoin('s.user', 'u')
    ->orderBy('s.time', 'DESC')
    ->setMaxResults(15);
return $qb->getQuery()->getResult();
The queries are fast; I don't have a problem with them.
Let me know exactly what you need and I will paste it here. I really need to fix this.
BUT HERE IS THE PROBLEM: when there are 470 people at the same time (Google Analytics), about 7 GB of memory is used; after the peak it falls back to 5 GB. But why so much? My scripts use 10-17 MB of memory in app_dev.
I also use APC. How can I solve this situation? Why is so much memory consumed? Thanks for any advice!
What's your average memory usage?
BTW: if I don't solve this I will be in big trouble.
One problem could be Doctrine, if you are hydrating too many objects on every single request.
Set max execution time of a script to only 30 seconds:
max_execution_time = 30
Set APC shm_size to something reasonable compared to your memory:
apc.shm_size = 256M
Then optimize your query. And if you use PHP/Symfony from the CLI, you should limit PHP's resource usage there too.
Make sure you are measuring memory consumption correctly: http://blog.scoutapp.com/articles/2010/10/06/determining-free-memory-on-linux
To speed up APC you can remove the modified-file check with apc.stat = 0, but then you will need to clear the APC cache every time you modify existing files: http://www.php.net/manual/en/apc.configuration.php#ini.apc.stat
To reduce memory consumption, reduce hydration by adding ->select('x') and fetching only the essentials.
To further reduce load, enable the MySQL query cache, e.g. in /etc/mysql/my.cnf:
query_cache_size=128M
query_cache_limit=1M
Do not forget to enable and check your slow query log to find bottlenecks.
I suspect that your page runs more than one query. How many queries happen on the page? The worst thing about Doctrine is its ability to trigger queries through getters (e.g. getComments()). If you are using a many-to-many relation, this leads to huge problems. You can see all queries via the profiler in the dev environment.
It is also possible that the problem lies in the Apache or PHP settings; incorrect php-fpm settings lead to problems too. The best approach is to stress test your server with a tool like siege and watch what happens via htop or top. 300 people can be a heavy load for a "naked" Apache.
Have you ever tried to retrieve scalar results instead of a collection of objects?
// [...]
return $qb->getQuery()->getScalarResult();
http://docs.doctrine-project.org/en/latest/reference/query-builder.html#executing-a-query
http://doctrine-orm.readthedocs.org/en/2.0.x/reference/dql-doctrine-query-language.html#query-result-formats
At the Symfony configuration level, have you double-checked that caching has been enabled properly?
http://symfony.com/doc/current/reference/configuration/doctrine.html#caching-drivers
Detaching entities from your entity manager could prove useful depending on your overall process:
http://docs.doctrine-project.org/en/2.0.x/reference/working-with-objects.html#detaching-entities

Any way to find more detail about WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found!

I'm encountering this error when I'm running my DirectX10 program in debug mode:
D3D10: WARNING: ID3D10Buffer::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS ]
I'm trying to make the project highly OOP as a learning exercise, so there's a chance that this is what's causing it, but is there a way to get some more details?
It appears this warning is raised by D3DX10CreateSprite, which is called internally by font->DrawText.
You can ignore this warning; it seems to be a bug in the Microsoft code :)
Direct3D11 doesn't have built-in text rendering anymore, so you won't encounter it in the future.
Since this is a D3D11 warning, you could always turn it off using ID3D11InfoQueue:
D3D11_MESSAGE_ID hide[] = {
    D3D11_MESSAGE_ID_SETPRIVATEDATA_CHANGINGPARAMS,
    // Add more message IDs here as needed
};
D3D11_INFO_QUEUE_FILTER filter;
memset(&filter, 0, sizeof(filter));
filter.DenyList.NumIDs = _countof(hide);
filter.DenyList.pIDList = hide;
d3dInfoQueue->AddStorageFilterEntries(&filter);
See this page for more. I found your question while googling for the answer and had to search a bit more to find the above snippet, hopefully this will help someone :)
What other data are you looking for or interested in?
The warning is pretty clear about what is going on, but if you want to hunt down a bit more data, there are a few things to try.
Try calling ID3D10Buffer::GetPrivateData with the same name, or do some other check to see whether data with that name already exists and, if so, what its contents are. Print the results to a file, the output window, or a console. Combine this with breakpoints to see where the duplicate occurs (break when there is already data).
You may (I'm not positive) be able to set the D3D runtime to debug mode and to break on warnings (I'm not sure whether it can break on warnings or just errors). Debug your app in VS or your preferred debugger, and when the warning is issued it will break and you can inspect the parameters.
Go through your code and track down all calls to ID3D10Buffer::SetPrivateData, looking for any obvious duplicates. If there are some, work up the program flow and see why they occur and what you can do about them (this may work best after you use one of the former methods to know where to start).
How are your data names set up, and what is the buffer used for? Examining one or both may lead you to a conflict somewhere.
You may also try unicorns, they've been known to help with this kind of problem.
