TCPDF Maximum execution time. Where to increase that?

Fatal error: Maximum execution time of 60 seconds exceeded in tcpdf.php on line 16304

The php.ini limits have already been updated, but I cannot find where to increase the TCPDF time limit.

I have found the solution: if the class or script calls set_time_limit(0), TCPDF's execution time limit becomes the value passed to set_time_limit(), so passing 0 removes the limit.
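A minimal sketch of that fix (assuming the standard TCPDF API; $largeHtml is a placeholder for your own content):

<?php
// Lift the PHP execution time limit for this request before generating
// the PDF; 0 means no limit, so TCPDF can run as long as it needs.
set_time_limit(0);

require_once('tcpdf.php');

$pdf = new TCPDF();
$pdf->AddPage();
$pdf->writeHTML($largeHtml); // placeholder for the real document body
$pdf->Output('large.pdf', 'I');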

Related

Understanding how k6 manages, at a low level, a large number of API calls in a short period of time

I'm new to k6, and I'm sorry if I'm asking something naive. I'm trying to understand how the tool manages network calls under the hood. Does it execute them at the maximum rate it can? Does it queue them based on the System Under Test's response time?
I need to understand this because I'm running a lot of tests using both k6 run and k6 cloud, but I can't get more than ~2000 requests per second (looking at the k6 results). I was wondering whether k6 implements some kind of back-pressure mechanism when it detects that my system is "slow", or whether there are other reasons why I can't get past that limit.
I read here that it is possible to make 300,000 requests per second and that the cloud environment is already configured for that. I also tried configuring my machine manually, but nothing changed.
e.g. The following tests are identical; the only change is the number of VUs. I ran all tests on k6 cloud.
Shared parameters:
60 API calls (a single http.batch with 60 API calls)
Iterations: 100
Executor: per-vu-iterations

VUs | Total calls | Avg response time | Throughput
10  | 60,000      | 108 ms            | 547 reqs/s
20  | 120,000     | 112 ms            | 1,051.67 reqs/s
40  | 240,000     | 134 ms            | 1,794.33 reqs/s
80  | 480,000     | 238 ms            | 2,060.33 reqs/s
160 | 960,000     | 479 ms            | 2,223.33 reqs/s
200 | 1,081,380   | 637 ms            | 2,102.83 reqs/s (peak; this run hit the max duration, which is why it stopped)
What I was expecting is that if my system can't handle that many requests, I would see a lot of timeout errors, but I haven't seen any. What I'm seeing is that all the API calls are executed and no errors are returned. Can anyone help me?
As k6 - or more specifically, your VUs - execute code synchronously, the amount of throughput you can achieve is fully dependent on how quickly the system you're interacting with responds.
Let's take this script as an example:
import http from 'k6/http';

export default function () {
  http.get("https://httpbin.org/delay/1");
}
The endpoint here is purposefully designed to take 1 second to respond. There is no other code in the exported default function. Because each VU will wait for a response (or a timeout) before proceeding past the http.get statement, the maximum amount of throughput for each VU will be a very predictable 1 HTTP request/sec.
Often, response times (and/or errors, like timeouts) will increase as you increase the number of VUs. You will eventually reach a point where adding VUs does not result in higher throughput. In this situation, you've basically established the maximum throughput the System-Under-Test can handle. It simply can't keep up.
The only situation where that might not be the case is when the system running k6 runs out of hardware resources (usually CPU time). This is something that you must always pay attention to.
If you are using k6 OSS, you can scale to as many VUs (concurrent threads) as your system can handle. You could also use http.batch to fire off multiple requests concurrently within each VU (the statement will still block until all responses have been received), which might incur slightly less overhead than spinning up additional VUs.
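For instance, a minimal sketch of the http.batch approach (reusing the endpoint from the example above):

import http from 'k6/http';

// Each VU fires three requests concurrently per iteration; the batch
// call still blocks until all three responses (or timeouts) arrive.
export default function () {
  http.batch([
    ['GET', 'https://httpbin.org/delay/1'],
    ['GET', 'https://httpbin.org/delay/1'],
    ['GET', 'https://httpbin.org/delay/1'],
  ]);
}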

How much computing time does the kernel need?

I wrote a program for an LED display. The program allows the refresh rate to be set via web configuration. To meet the refresh rate, I measure the processing time of a loop. At the end, I calculate the remaining delay and wait until the next loop.
e.g. a refresh rate of 5 Hz -> 200 milliseconds per loop; 50 milliseconds of computing time results in a 150 millisecond delay.
The ratio of processing time (50 milliseconds) to total time (200 milliseconds) indicates the processor load of my program. But to find the optimal setting, I need the actual total processor load, not only that of my program. Since I don't know the real processor load inside delay() (in which WiFi etc. is handled), I don't really know the overall load. In other words, I don't know how much time the system spends doing system tasks in the delay(150).
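For reference, a sketch of the loop described above (Arduino-style; the names and the renderFrame() stub are mine, not from the original program):

const unsigned long framePeriodMs = 200;  // 5 Hz refresh rate

void renderFrame() {
  // placeholder for the actual LED drawing work
}

void setup() {}

void loop() {
  unsigned long start = millis();
  renderFrame();
  unsigned long busy = millis() - start;       // measured processing time
  float myLoad = (float)busy / framePeriodMs;  // load of this program only
  (void)myLoad;
  if (busy < framePeriodMs) {
    delay(framePeriodMs - busy);  // WiFi/system tasks also run in here
  }
}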
Is there a way to find out how much of a delay is actually used for system tasks before the processor truly waits?
In other words, I'm looking for a way to get the kernel time within a certain time frame.
Cheers, Gabriel

Airflow list dag times out exactly after 30 seconds

I have a dynamic Airflow DAG (backfill_dag) that basically reads an Admin Variable (JSON) and builds itself. Backfill_dag is used for backfilling/history loading, so for example, if I want to history-load DAGs x, y, and z in some order (x and y run in parallel, z depends on x), then I describe this in a particular JSON format and put it in the Admin Variable of backfill_dag.
Backfill_dag now:
parses the JSON,
renders the tasks of the DAGs x, y, and z, and
builds itself dynamically, with x and y in parallel and z depending on x.
Issue:
It works fine as long as Backfill_dag can list its DAGs within 30 seconds.
Since Backfill_dag is a bit complex here, it takes more than 30 seconds to list (airflow list_dags -sd Backfill_dag.py), hence it times out and the DAG breaks.
Tried:
I tried setting the parameter dagbag_import_timeout = 100 in the scheduler's airflow.cfg file, but that did not help.
I fixed my code.
Fix:
I had some aws s3 cp commands in my DAG that were running during compilation, which is why my list_dags command was taking more than 30 seconds. I removed them (or moved them into a BashOperator task, as sketched below), and now my code compiles (list_dags) in a couple of seconds.
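A minimal sketch of that fix (assuming Airflow 2.x import paths; the S3 path and task names are hypothetical):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="backfill_dag",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
) as dag:
    # BAD: running the copy at module level, e.g.
    # os.system("aws s3 cp s3://bucket/conf.json /tmp/conf.json"),
    # executes it on every DAG parse and slows down list_dags.

    # GOOD: inside a task, the copy only runs when the task executes.
    fetch_config = BashOperator(
        task_id="fetch_config",
        bash_command="aws s3 cp s3://bucket/conf.json /tmp/conf.json",
    )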
Besides fixing your code, you can also increase core.dagbag_import_timeout, which defaults to 30 seconds. For me it helped to increase it to 150.
core.dagbag_import_timeout
Default: 30 seconds
The number of seconds before importing a Python file times out.
You can use this option to free up resources by increasing the time it takes before the Scheduler times out while importing a Python file to extract the DAG objects. This option is processed as part of the Scheduler "loop," and must contain a value lower than the value specified in core.dag_file_processor_timeout.
core.dag_file_processor_timeout
Default: 50 seconds
The number of seconds before the DagFileProcessor times out processing a DAG file.
You can use this option to free up resources by increasing the time it takes before the DagFileProcessor times out. We recommend increasing this value if you're seeing timeouts in your DAG processing logs that result in no viable DAGs being loaded.
You can also try changing other Airflow configs (example after this list), such as:
AIRFLOW__WEBSERVER__WEB_SERVER_WORKER_TIMEOUT
AIRFLOW__CORE__DEFAULT_TASK_EXECUTION_TIMEOUT
and, as mentioned above:
AIRFLOW__CORE__DAG_FILE_PROCESSOR_TIMEOUT
AIRFLOW__CORE__DAGBAG_IMPORT_TIMEOUT
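For example, in airflow.cfg (a sketch; the values are illustrative, and dag_file_processor_timeout must stay above dagbag_import_timeout):

[core]
# default 30; raise it if DAG import is legitimately slow
dagbag_import_timeout = 150
# default 50; keep it above dagbag_import_timeout
dag_file_processor_timeout = 180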

Setting up MAX_RUN_TIME for delayed_job - how much time can I set up?

The default value is 4 hours. When I ran my data processing, I got this error message:
E, [2014-08-15T06:49:57.821145 #17238] ERROR -- : 2014-08-15T06:49:57+0000: [Worker(delayed_job host:app-name pid:17238)] Job ImportJob (id=8) FAILED (1 prior attempts) with Delayed::WorkerTimeout: execution expired (Delayed::Worker.max_run_time is only 14400 seconds)
I, [2014-08-15T06:49:57.830621 #17238] INFO -- : 2014-08-15T06:49:57+0000: [Worker(delayed_job host:app-name pid:17238)] 1 jobs processed at 0.0001 j/s, 1 failed
This means the current limit is set to 4 hours.
Because I have a large amount of data that might take 40 or 80 hours to process, I was curious whether I can set MAX_RUN_TIME to that many hours.
Are there any limits or downsides to setting MAX_RUN_TIME to, say, 100 hours? Or is there another way to process this data?
EDIT: is there a way to set MAX_RUN_TIME to an infinite value?
There does not appear to be a way to set MAX_RUN_TIME to infinity, but you can set it very high. To configure the max run time, add a setting to your delayed_job initializer (config/initializers/delayed_job_config.rb by default):
Delayed::Worker.max_run_time = 7.days
Assuming you are running your Delayed Job daemon on its own utility server (i.e. so that it doesn't affect your web server, assuming you have one), I don't see why long run times would be problematic. Basically, if you're expecting long run times and you're getting them, then all sounds normal and you should feel free to raise MAX_RUN_TIME. However, the limit is also there to protect you, so I would suggest keeping a reasonable value lest you run into an infinite loop or a job that will actually never complete.
As for setting MAX_RUN_TIME to infinity: it doesn't look to be possible, since Delayed Job doesn't make max_run_time optional, and there's a place in the code where a to_i conversion is done, which fails for infinity:

[2] pry(main)> Float::INFINITY.to_i
FloatDomainError: Infinity
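Putting that together, a minimal initializer sketch (the 100-hour figure comes from the question; size it to your longest expected job):

# config/initializers/delayed_job_config.rb
# The ceiling must exceed the longest expected job (~40-80 hours here).
Delayed::Worker.max_run_time = 100.hours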

NUnit maximum timeout value

I have some tests that are very slow, and I want to set a timeout of 15 minutes.
As a test, I have this example:
[Test, Timeout(900000)]
public void Test1()
{
    Thread.Sleep(900001);
}
After some time, the test stops without errors.
What's the correct way to do this?
As you know, both Thread.Sleep and NUnit's TimeoutAttribute take time in milliseconds. Specifying times that are 1 millisecond off from each other isn't enough to guarantee a timeout due to thread scheduling and general timer accuracy. See this answer for a little more discussion about the accuracy of Thread.Sleep, and by extension NUnit's timeout thread's accuracy.
Try specifying a larger difference between the two numbers, and I suspect you'll see it behave as you'd expect. For instance, sleep for 900100 milliseconds and leave the timeout value as is. With a timeout of 15 minutes, you won't notice an extra tenth of a second waiting for it to time out.
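A sketch of that suggestion, keeping a clear margin so the timeout reliably fires:

[Test, Timeout(900000)]  // 15-minute timeout
public void Test1()
{
    // Sleep 100 ms past the timeout so NUnit reliably aborts the test.
    Thread.Sleep(900100);
}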
