I am unable to change my max execution time. I have changed it in my php.ini file, but it still shows as 300 when I run phpinfo(), even though I've set it to 0 and even to ridiculous amounts (9000000000). Is there a setting I'm missing? I've restarted Apache and rebooted the actual server, and I am still encountering this issue.
I ended up using the script from this post https://stackoverflow.com/a/7700253/815437 along with standard ini_set calls for the memory limit and max execution time. This was not a solution to the underlying problem, but it did get me rolling for now.
I am still accepting answers on this problem, though, because I'm sure it'll be an issue in the future with some large databases I have to import.
Try adding this line of code:
set_time_limit(5);
This code sets the max execution time to 5 seconds. Don't set the max execution time to 0 unless you really want no limit: 0 means the script is allowed to run forever. If you want an actual limit, use a positive number.
The best way to deal with execution time issues is to set the max execution time temporarily for that specific script. It's debatable, but it works fine as a balanced solution.
You can read more about it in the manual:
http://php.net/manual/en/function.set-time-limit.php
Every time I load the homepage of the project I'm working on, it takes a long time to load (6-8 seconds page load time). Checking with webpagetest.org, the initial request takes more than 5000 ms (5 seconds) for the TTFB.
Environment Set-up:
ASP.Net MVC
IIS 8.5
Upon thorough investigation of the IIS trace logs, the long wait happens under the ManagedPipelineHandler module, taking 7515 ms (7.5 seconds); it consistently happens around the AspNetPagePreInitEnter, AspNetPageInitEnter and AspNetPageInitLeave events. A sample from the IIS trace log is below:
[Screenshot: IIS trace log showing where the hang happens]
I have searched for what a hang on these ASP.NET events means, but I can't find any concrete answer on what causes this hang or why it happens.
Any help is much appreciated.
First of all, you could try disabling your anti-virus.
To figure out the root cause, it is recommended to capture dump files, because the long wait happened in ManagedPipelineHandler.
We may need to capture 2-3 dump files by generating a dump every 2 seconds.
Then we can compare the managed call stacks; they will tell us what the thread is doing. We can only improve the performance once we know what it is doing.
If you are not sure how to analyze a dump file, then you could try
https://www.microsoft.com/en-us/download/details.aspx?id=58210
If you know how to analyze a dump file, then you could try the MEX extension with WinDbg.
!mex.aspxpagesext will show all current requests and their threads.
Once we know the thread ID, we can use !mex.clrstack2 to show the managed call stack.
I'm using the LDBC dataset (SF = 1) to test execution time in Neo4j 4.0.1. I connect to Neo4j from Java and use ResultSummary.resultAvailableAfter() to get the execution time, i.e. the time until the result is available and streaming starts.
For the same query, the first run's execution time seems reasonable, like hundreds of ms, but when I keep running that same query, the execution time becomes almost 0.
I guess it's an effect of the query cache, but is there any proper approach to testing query execution time that gets a reasonable result?
Right now I can only restart the DB to get results that seem correct.
I guess it's because Neo4j directly caches the query result and just fetches it if the same query is executed multiple times. Is there any way to avoid this? I.e., let Neo4j do normal caching (of nodes and relationships), but not cache the query result directly.
Thanks!
The page cache is most likely responsible for the results you are seeing. (I had some discussions with Neo4j engineers when I was working on building a Neo4j cluster, and their suggestions for optimizing our cluster performance seemed to indicate this.) You should set the page cache size to 0 or very close to 0 (say 1 MB or something low). You can read about the memory settings here: https://neo4j.com/docs/operations-manual/current/performance/memory-configuration/
The specific setting you need to change is
dbms.memory.pagecache.size=1M
or set it to 0. Explicitly set this to a value; don't leave the setting commented out, or Neo4j may assign a default memory size to the page cache. Restart your server/cluster after the settings change and see what performance numbers you come up with. You should also check how your cache looks by running the
:sysinfo
command in the Neo4j Browser before and after running your queries.
And there is no direct setting to tell Neo4j what to cache. That is, rightly, decided by the server itself.
Sorry, I don't have enough reputation points to leave a comment on your question!
Hoping someone can point me in the right direction here. My Electron app needs to perform an API call every 10 minutes or so. I currently do this using a setInterval loop in the renderer process that fires every 10 minutes.
It generally works fine for a few hours before it seems to just stop firing. I have several processes that clear and restart the setInterval to try to counteract the problem, but nothing seems to work.
The app opens new browser windows and displays content, which means the main window may not be in focus all the time, which I suspect may have something to do with it.
I have tried adding
powerSaveBlocker.start("prevent-display-sleep");
powerSaveBlocker.start("prevent-app-suspension");
to my main Electron JS script, but it doesn't seem to have an effect. The problem mostly shows up on Windows machines; I'm not entirely sure if it happens on Mac or Linux.
So my question: is there any reason why this would be happening, i.e. that intervals just die after a point? The powerSaveBlocker calls made sense to me, but they don't really seem to do anything.
Or is there a better way to have a background process running at intervals that can perform these API calls? I had looked at node-schedule but I'm not sure if it will fix this issue.
Answering my own question here, with credit to @snwflk, who pointed me in the right direction in a comment on the original post.
Whilst I have not been able to confirm with absolute certainty that this solves the problem, I have also not seen the problem since. (It's not always 100% reproducible, and it's difficult to test, as it requires a machine left alone for several hours, which may or may not display the problem.)
I have however rolled the fix out to a few customers and their machines seem to still be online days later, which is a good sign.
So, the solution was to disable backgroundThrottling on the main BrowserWindow object (the interval was running in the renderer process).
Docs: https://electronjs.org/docs/api/browser-window#new-browserwindowoptions
An example:
mainWindow = new BrowserWindow({
  webPreferences: {
    backgroundThrottling: false,
  },
});
FYI, be warned that there have been a few bugs that prevented this setting from working, e.g. https://github.com/electron/electron/issues/20974, so be sure to update your Electron version.
As far as I know intervals should keep running forever (the MDN page also doesn't mention anything).
If I understood correctly, your Electron app only does that API call? So it would be hard to tell the difference between the interval not triggering and some other problem causing a freeze?
My guess would be you have an uncaught exception being thrown, or some other similar error or event. https://stackoverflow.com/a/43911292/841830 gives some suggestions of things to catch. https://github.com/sindresorhus/electron-unhandled says it can be used in both main and renderer processes to catch and report all problems.
Or is there a better way to have a background process running at intervals that can perform these API calls?
If you were on Linux, using cron to run a Node script (or even just a wget or curl command) would be better than using an Electron app just for this; an example crontab entry is below. Task Scheduler seems to be the Windows equivalent of cron.
I have an application written in Delphi 5, which runs fine on most (windows) computers.
However, occasionally the program begins to load (you can see it in Task Manager, using about 2.5-3 MB of memory), but then stalls for a number of minutes, sometimes hours.
If you leave it long enough, the FormShow event will eventually occur and the application window will pop up, but it seems like some other application or Windows setting is preventing it from initially using all the memory it needs to run (approx. 35-40 MB).
Also, on some of my client's workstations, if they have MS Outlook running, they can close it and my application will pop up. Does anyone know what is going on here, and/or how to fix it?
Since nobody has given a better answer, I'll take a stab at how to solve this:
There's something in your initialization that is locking it up somehow. Without seeing your code I do not know what it is, so I'll only address how to go about finding it.
You need to log what you accomplish during startup. If you have any kind of screen showing, I find the window title useful for this, but it sounds like you don't; that means you need to write the log to a file. Let it get stuck, kill the task, and see how far it got.
Note that this means you need to cleanly write your data despite an abnormal program termination. How to go about this:
A) Append, write your line, close.
B) Write your line, then flush the file handle.
C) Initially write your file to consist of a large number of blanks; ensure this is larger than the actual log will be. Write your line. In the case of abnormal termination, it will retain the original larger file size.
I would write a timestamp on every log item so you can see if it's just processing something too slowly.
If examining the log shows you where the problem is, fine. If, as usually happens, it's not enough, put a bunch more logging between the last item that did get logged and the next one that didn't. I've been known to log every line when hunting a cryptic problem that only happened on someone else's system.
If finding the line isn't enough to pinpoint the problem also dump the value of relevant variables.
Finally, if such intense scrutiny makes the bug go away, start looking for an uninitialized variable. (While a memory stomp is also an option, I doubt it's the culprit here.)
I have a backgroundrb scheduled task that takes quite a long time to run. However, it seems that the process is ending after only 2.5 minutes.
My background.yml file:
:schedules:
  :named_worker:
    :task_name:
      :trigger_args: 0 0 12 * * * *
      :data: input_data
There is zero other activity on the server when the process is running. (Meaning I am the only one on the server, watching the log files do their thing until the process suddenly stops.)
Any ideas?
There's not much information here that allows us to get to the bottom of the problem.
Because backgroundrb operates in the background, it can be quite hard to monitor/debug.
Here are some ideas I use:
Write a unit test to test the worker code itself and make sure there are no problems there
Put "puts" statements at multiple points in the code so you can at least see some responses while the worker is running.
Wrap the entire worker in a begin..rescue..end block so that you can catch any errors that might be occurring and cutting the process short (see the sketch below).
Thanks, Andrew. Those debugging tips helped, especially the begin..rescue..end block.
It was still a pain to debug, though. In the end it wasn't BackgroundRB cutting the process short after 2.5 minutes; there was a network connection being made that wasn't being closed properly. Once that was found and closed, everything worked great.