PHP command-line scripts are ignoring php.ini and ini_set('memory_limit', ...) directives - domdocument

I am facing the common "Fatal error: Out of memory (allocated 30408704) (tried to allocate 24 bytes)..." PHP Fatal error. Pages served via Apache are not exhibiting this behavior.
I've tried the following:
Increasing the memory_limit in php.ini to a much larger value.
Increasing memory_limit within the script itself via calls to ini_set('memory_limit', -1), ini_set('memory_limit', '-1'), ini_set('memory_limit', 100000000), ini_set('memory_limit', '128M'), etc.
unset()ing unneeded arrays and objects to encourage the garbage collector to free up memory.
Contacting the web host. They are normally very capable and knowledgeable, but have not been able to help me with this issue either.
I've tried explicitly including a php.ini file using the -c command-line flag to hand-pick specific php.ini files with various values.
I've tried setting memory_limit in php.ini using both raw numbers of bytes and values such as 64M, 128M, etc.
The hosting provider was able to run the script as root with no issues, but experienced the same issue I do when running it as my non-root user. Perhaps there is some kind of permissions issue involved?
Regardless of what I try, the error message is the same. It appears that my command-line scripts are ignoring changes to memory_limit.
I try to make sure my scripts are memory efficient, but I currently need to parse large amounts of HTML via Simple HTML DOM, and it is in the parser that I'm experiencing out-of-memory issues. In an attempt to reduce the memory footprint of the script, I've tried using DOMDocument instead. This does not help either; in fact, the out-of-memory error is now triggered elsewhere in the script.
My question: has anyone experienced this or a similar issue? Do you have any recommendations?
Thank you.

It turns out that the problem was caused by shell fork bomb protection enabled on the server, which placed a hard memory limit on all command-line scripts. This had been enabled by my web host without my knowledge.
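For anyone hitting the same wall: a quick way to check for this kind of per-user cap (a rough sketch; limit names vary by shell and distro) is to compare the shell's resource limits between users:
# Show all resource limits for the current user; look for "max memory size"
# or "virtual memory" entries that are not "unlimited".
ulimit -a
# Virtual memory cap in kilobytes (bash). A finite value here overrides
# php.ini and ini_set() no matter what they are set to.
ulimit -v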

Your PHP on CLI may be using a different php.ini than your Apache PHP. Try a phpinfo() and check that it's using the ini file you think it's using.
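For example (a sketch; output and paths differ per system), you can ask the CLI binary directly which ini file it loads and what limit it actually sees:
php --ini      # lists the loaded php.ini and any additional scanned .ini files
php -i | grep -E 'Loaded Configuration File|memory_limit'
php -r "echo ini_get('memory_limit'), PHP_EOL;"   # the value the running CLI actually uses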

Delphi - ISAPI DLL application hanging on FastReport

Found this post: ISAPI web application hanging if FastReport.PrepareReport is called
It helped solve my problem partially. I've also turned the Wysiwyg property to False on frxRichView. Since I'm returning a base64 string, I've also tried switching the loading from StrToStream/LoadFromStream to LoadFromFile. The problem persists with multiple accesses: 2 out of 10 processes can finish loading my PDF file; all the other requests hang until timeout. Does anyone have an idea what else I can do? Is there any other way to return RTF format to a FastReport report? Thanks.
Using Selenium to test multiple requests from the client side, I could only reproduce the time-out error.
Update: I've figured out that just having a TfrxRichView component in the report causes the hanging; it doesn't even need any RTF text in it. Replacing it with a memo, all requests are answered.
UPDATE: I got an answer from FastReport and I would like your opinion.
OK,
I had similar problems, and it is not easy to find the reason, but maybe you can find your solution among my considerations.
1) Stack Size
When run in IIS, your ISAPI is only a DLL called by a process; you are not the main process, so you have to pay attention to stack size.
Normally a Delphi application has a default stack size of 1 MB; in an ISAPI DLL you will have only 256 KB of stack.
Maybe you are facing a stack overflow exception. That would explain why it does not occur always but only in some circumstances.
2) Trapped Exception
In general, an error during the preparation of the report (i.e., all the work of handling data, expressions, variables, formulas, etc.) can lead to a trapped exception. You may be unable to see it from outside, but code execution was broken somewhere and report preparation never finished.
3) MessageBoxes and/or standard Exceptions
When running as ISAPI you should not output anything to the user interface;
a message dialog (or an exception) can lead to unexpected behaviour.
4) Global Var
You should avoid global vars, because in ISAPI they will be shared across threads.
So, if you have the sources, debug the application; at the first exception you should understand where your problem is.
If you don't have the sources, check the list above. I hope you can find some useful information.
You have two ways to solve this:
1- Try to recreate this behavior while debugging your ISAPI DLL. If you are lucky, you can identify the thread that is hanging your application. Sometimes this is hard or even impossible to recreate.
2- If you have access to the hung ISAPI application instance, use a tool like Sysinternals Process Explorer to create a minidump file. Your application must be built with full debug symbols and you should have the corresponding map file. With one (or more, even better) dump files obtained from your hung application plus the map file, you can use another tool, WinDbg, to analyze it and find the cause. (Sometimes) WinDbg can show exactly which thread is hanging the whole application and the line of code that causes it.
If you have never done that, I must warn you that this kind of analysis is almost a gamble... You have to use several different tools with little or no documentation and read heaps of technical info in various places. In the end, sometimes it works wonderfully and sometimes it fails miserably.
Because debugging ISAPI is not obvious, but also because I wanted to be able to switch easily between different hosting solutions and update my website on the fly without restarting the web server/service, I created xxm. It has a single interface to the HTTP context; your DLL gets loaded by an IIS ISAPI handler, an HTTP.SYS handler, or an Apache httpd module, and for local debugging you can set xxmHttp.exe as the host application to get IIS out of the way.

Neo4j getting killed in Travis CI container

I'm trying to add Neo4j 3.0 to my tests for the neo4j gem and I'm having trouble with the server getting killed in a Travis CI container. Pre-3.0 works just fine, but when I use 3.0 it seems to get killed. There seems to be plenty of memory (when I run Neo4j locally it uses 300-400 MB). I get a warning from Neo4j saying:
WARNING: Max 30000 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
That makes me think that it's getting killed because of too many open files. I'm not sure if there's a way to increase the open-file limit in a Travis container, and I have a number of jobs, so I don't want to slow things down by using sudo: true. Did Neo4j 3.0 change to require more open files (the documentation doesn't seem to imply that it did)?
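For reference, here is a hedged sketch of how one might at least try to raise the soft open-file limit from .travis.yml without sudo (on container builds the soft limit cannot exceed the pre-set hard limit, so this may simply report the ceiling rather than fix anything):
before_script:
  - ulimit -Sn               # current soft limit
  - ulimit -Hn               # hard ceiling; the soft limit cannot go above this
  - ulimit -n 40000 || true  # attempt the recommended value; may be refused without sudo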
EDIT:
My .travis.yml file:
This is how I do it, and it works fine for me for 2.3 and 3.0, including a push to Docker Hub.
https://github.com/maxdemarzi/neo_travis
https://travis-ci.org/maxdemarzi/neo_travis
I think our memory allocation is messing things up. One thing that is unusual in your (Travis's) setup is that there is twice as much swap as RAM, and the amount of memory reported as available is very large.
Try specifying the amount of memory in your config files. See http://neo4j.com/docs/operations-manual/current/#performance-tuning for more details, but essentially add these to your config.
In neo4j.conf:
dbms.memory.pagecache.size=1G
and in neo4j-wrapper.conf:
dbms.memory.heap.max_size=1000
dbms.memory.heap.initial_size=1000
The memory limits are set quite low (a 1 GB page cache and a 1000 MB heap) to guarantee that Travis doesn't kill the process; I suspect the tests don't need much in terms of memory.
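If you want to confirm the settings actually took effect inside the container, a sketch along these lines may help (assumes a JDK with jcmd on the PATH, and that pgrep -f neo4j matches only the server process):
# Check the heap flags the running JVM actually picked up.
jcmd $(pgrep -f neo4j) VM.flags | grep -o 'MaxHeapSize=[0-9]*'
# Watch resident memory to see how close the process gets to the container limit.
ps -o pid,rss,vsz,cmd -p $(pgrep -f neo4j)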

Web deployment failing due to file in use

I'm using Microsoft's Web Deploy Remote Agent service to allow me to easily publish code to the server from within Visual Studio.
The web site I am deploying is using log4net to log messages to log files, and every time I try to deploy a new version of the code, I get this error in Visual Studio stating that the current log4net log file is in use:
An error occurred when the request was processed on the remote
computer. The file 'Web.log' is in use.
The process cannot access 'C:\inetpub\wwwroot\Logs\Web.log' because it
is being used by another process.
I can solve this by going onto the server and doing an iisreset before publishing... but that is kind of defeating the point of 'easy' publishing from Visual Studio :)
Is there some way I can get the publish task to issue an iisreset automatically, or some other way I can work round this?
I kept poking around and found some tidbits about the file being locked in a few other forums. Have you tried adding
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
to your <appender> element in the web.config file? From the Apache docs:
Opens the file once for each AcquireLock/ReleaseLock cycle, thus holding the lock for the minimal amount of time. This method of locking is considerably slower than FileAppender.ExclusiveLock but allows other processes to move/delete the log file whilst logging continues.
As far as the performance considerations go, I suppose you would need to test whether this affects you, since it really depends on how often you write to the log file. I can't believe that acquiring/releasing a lock could take all that much time, though.
There is an MSDeploy provider called recycleApp which is used exactly for this. You can include it in your deployment manifest.
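For illustration, a stop/deploy/start sequence with recycleApp might look roughly like this (a sketch only; the site path, server name, and source folder are placeholders, and you should verify the provider syntax against your Web Deploy version):
REM Stop the app pool so log4net releases its file handle, then deploy, then start again.
msdeploy -verb:sync -source:recycleApp -dest:recycleApp="Default Web Site/MyApp",recycleMode="StopAppPool",computerName=MYSERVER
msdeploy -verb:sync -source:contentPath="C:\build\site" -dest:contentPath="Default Web Site/MyApp",computerName=MYSERVER
msdeploy -verb:sync -source:recycleApp -dest:recycleApp="Default Web Site/MyApp",recycleMode="StartAppPool",computerName=MYSERVER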
Another option is to use the ignoreOnErrors flag, which will skip the file in use and continue with the deployment.

symfony 1.4 propel:build-model not working as expected

Just wondering if anyone might know what's happening here. I have several schema.yml files, and when I try to build model classes using symfony propel:build-model I don't get any error message; however, instead of any classes being generated, I get XML files generated in the same config folder as the schema YML files. I.e., if I have a file named logger_schema.yml in the config directory, then after I run build-model I will also have a generated-logger_schema.xml file in the config directory, and no generated classes.
Any idea what could be causing this?
The XML file in question is a worker file symfony/Propel creates as part of the class-generation process; it's not an "error" as such.
symfony CLI tasks require quite a lot of PHP memory, especially on Windows. If the Propel task is failing, I would recommend permanently raising the memory_limit setting in php.ini to at least 256M. I know this seems high, but you should only ever need these tasks on a development machine. As you note, you saw evidence of memory exhaustion on another related task.
If that doesn't fix it, could you add to your question all of the CLI output when you run the task? It might shed some light on the step which is failing.
After looking at this ticket, it appears the XML files are likely the result of a symfony error, despite the fact that I repeatedly got no error message using propel:build-model. After trying propel:build --model --forms, I did in fact get a "memory exhausted" error, which was solved by temporarily increasing the PHP memory limit.
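For anyone else who hits this: the temporary increase doesn't require editing php.ini at all; you can pass the setting on the command line when invoking the task (a sketch, assuming you launch the task through the php binary from the project root):
php -d memory_limit=256M symfony propel:build-model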

Does Struts 2 have any memory issues

I have a webapp developed with Struts 2 deployed in Tomcat 5.5. The server has other applications deployed in it, but the app created with Struts 2 is very slow. Any ideas? How does Struts 2 handle object creation? And is there anything I can do on the Tomcat server?
How slow is it? What are you doing? Are you sure it is Struts 2 that is slow and not your application code? Did you do any profiling? What were the results?
Check this out: http://struts.apache.org/2.2.1/docs/performance-tuning.html
I found that serving the static content from a folder increased the speed.
Well, a few details are really required for someone to answer your question properly:
Which Struts2 version are you using?
At which place/part do you think the application is slow?
In my experience there are certain areas where Struts2 has known problems. OGNL itself sometimes creates problems, since it is the part of the framework where most of the time is spent; this is known to be fixed in the 3.x version of OGNL, so you can get the new OGNL jar and test your application with it.
Second, use a profiler; it will help you catch the culprit, such as any thread blocking, etc.
What OS is Tomcat running on?
If it's Linux, you may have run into a lack of entropy issue.
If this command returns something less than 200, it could explain your issue:
cat /proc/sys/kernel/random/entropy_avail
If it is low (or you can watch it during startup/while making requests), try pointing /dev/random to /dev/urandom (not for secure production use, but it should be fine for testing in dev):
mv /dev/random /dev/random.orig
ln -s /dev/urandom /dev/random
And try starting Tomcat again.
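A less invasive alternative to replacing /dev/random (a sketch; adjust the file path to wherever your Tomcat reads its startup options, e.g. bin/setenv.sh) is to point the JVM's SecureRandom seed source at urandom for this Tomcat instance only:
# In $CATALINA_HOME/bin/setenv.sh. The extra /./ works around the JVM
# special-casing the literal string "file:/dev/urandom".
CATALINA_OPTS="$CATALINA_OPTS -Djava.security.egd=file:/dev/./urandom"
export CATALINA_OPTS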
