Executing ffmpeg through a Dart Process takes over a second - dart

I tried executing the same command in cmd/PowerShell, and it only takes ~400 ms.
I also tried executing other programs through Dart's Process with the run or start methods, and they are fine.
But once I start an ffmpeg process, it takes about a second to warm up, even with the log level set to quiet and an empty input parameter.
In the end, the ffmpeg execution time plus the warmup time comes to about 1400 ms.
Is anyone familiar with this issue? Is it fixable?
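For comparison, it may help to time a bare ffmpeg invocation outside of Dart, so the spawn overhead added by Process can be separated from ffmpeg's own startup cost. A minimal sketch, assuming ffmpeg is on PATH (on Windows, PowerShell's Measure-Command plays the same role as time):
# Time ffmpeg doing as little work as possible; any difference from the Dart timing
# is overhead on the Dart side rather than in ffmpeg itself.
time ffmpeg -hide_banner -loglevel quiet -version
# PowerShell equivalent: Measure-Command { ffmpeg -hide_banner -loglevel quiet -version }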

Related

How to run a build in Travis when the build is in an infinite loop

I currently have a build of an application that is set to run indefinitely. It is designed to run on a Raspberry Pi as a service, so it will be running continuously.
Whenever I try to test it on Travis CI, the infinite loop causes an error even though the project builds correctly, because the program never exits. Is there any way to stop this error, or do I have to remove the run step from the .travis.yml?
language: cpp
compiler:
- clang
- g++
script:
- make
- cd main
- ./jsonWeatherPrediction
I would expect it to error; I'm just not sure of a way to stop it without removing - ./jsonWeatherPrediction.
I don't know if this will help, but the build is located at https://travis-ci.org/DMoore12/json-weather-prediction
Thanks in advance :)
In just about any reasonable CI workflow, the job should have a well-defined start and finish. The software you are testing may run forever, but your tests should not. So, first, I suggest rethinking how you run your build.
Looking at a build such as https://travis-ci.org/DMoore12/json-weather-prediction/jobs/474719832, I see that you are simply running your command (which raises a different question: the command prints the same output forever in a tight loop. Is that the desired behavior?).
For testing, you need a different kind of behavior, one that can be tested (e.g., take input from STDIN or a command-line flag, print a result, and terminate).
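If you still want the CI job to exercise the long-running binary rather than restructuring it, one possible approach (my own suggestion, not part of the answer above, and it assumes GNU coreutils timeout is available on the build image) is to bound the run in the script step so the job can finish:
# timeout exits with 124 when it had to stop the program, which is accepted as a pass here.
script:
- make
- cd main
- timeout 10s ./jsonWeatherPrediction || [ "$?" -eq 124 ]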

Ensuring all .sh curl download scripts download when using GNU Parallel

I'm executing the following command, which runs a group of scripts, each of which performs a curl download.
parallel --resume-failed --joblog logshd.log {1} ::: SH/*.sh
The set of files to download is quite large, and I've noticed that some files don't download.
I hoped that the --resume-failed parameter would ensure that any downloads that fail are resumed and completed.
I'm not clear on whether that means I need to run the process a second time, or whether it should happen within the single run.
From the GNU documentation:
Where --resume-failed reads the commands from the command line (and ignores the commands in the joblog), --retry-failed ignores the command line and reruns the commands mentioned in the joblog.
I'm not clear on what ignoring the command line or ignoring the commands in the joblog means. Could that be clarified?
Can --resume-failed and --retry-failed be used in the same command, and if so, what is the effect?
Regards
Conteh
If we assume the downloads fail intermittently, then your answer is --retries 10. It will run the command up to 10 times before giving up.
--resume-failed and --retry-failed are both used when GNU Parallel has finished, and you then figure out that you want to retry some of the jobs again.
The difference between the two is in how to retry the command.
--retry-failed will run exactly the same command as failed before. It does that by looking in the joblog for the command. This is typically what you want.
--resume-failed is used if you figure out that the failing command actually needed some other parameter: i.e. GNU Parallel should not run exactly the same command, but it should run a (typically slightly changed) command with the same parameters instead.
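Putting that together with the command from the question, a possible workflow might look like the sketch below (the retry count is arbitrary and just for illustration):
# First pass: retry each failing script up to 10 times before recording it as failed.
parallel --retries 10 --joblog logshd.log {1} ::: SH/*.sh
# Later, if some jobs still failed: rerun exactly the commands the joblog marked as failed.
parallel --retry-failed --joblog logshd.log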

ToolTwist Controller hangs while generating images

While generating a large site using the ToolTwist Controller, the server hangs. Using ps -ef I can see that there is an ImageMagick 'convert' command that never seems to finish. If I kill the convert process, the generation continues.
If I get the full convert command from the log file or using ps, I can run it from the command line with no problem. Each time I run the generation process in the Controller, it gets stuck in a different place.
The hangs seem to be sporadic, occurring maybe once every 1,000 images.
I'm running OSX 10.7.3 on a Macbook Pro.
This is a known bug in ImageMagick - see http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=19962
The solution is to define an environment variable:
export MAGICK_THREAD_LIMIT=1
You'll need to do this before starting the Controller's Tomcat server.
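For example, if the Controller's Tomcat is started from a shell, the variable can be exported in that shell first. A minimal sketch, assuming a standard Tomcat install with CATALINA_HOME set (the startup path is an assumption, not something from the original answer):
# Limit ImageMagick's convert to a single thread, then start Tomcat in the same environment.
export MAGICK_THREAD_LIMIT=1
$CATALINA_HOME/bin/startup.sh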

Synchronous command execution in Lua

I'd like to get data from a system command's output in Lua while the command is running, even though the command may take a few minutes to finish.
Obviously popen runs the command separately from the Lua process.
Does anyone have an idea how to solve this?
local r = io.popen('command', 'r')  -- io.popen, not a global popen
for line in r:lines() do
  print(line)
end
r:close()
If the command uses buffered output (the default) then there's nothing you can do. Some commands (e.g., cat -u) have an option to use unbuffered output, but they're rare.
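A partial workaround, which is not part of the answer above and only helps on systems with GNU coreutils when the program relies on stdio's default buffering, is to wrap the command with stdbuf so its standard output becomes line-buffered before io.popen reads it:
# Force line-buffered stdout for the wrapped command; the resulting string can be
# passed to io.popen in place of the bare command.
stdbuf -oL command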

How do I increase the timeout for a cronjob/crontab?

I have written a script that gets data from Solr for dates within a specified period, and I run the script as a daily cron job.
The problem is that the cron job does not complete the task. If I run the script manually (for the same time period), it works well. If I reduce the specified time period, the script also runs fine from cron. So my guess is that the cron job is timing out when there is too much data to process.
How do I increase the timeout for a cron job?
PS - 1. The script I am running from cron is a bash script which runs a Python script.
Note that the suggested ulimit -t solution will limit the amount of CPU time used, not the amount of wall-clock time that has passed.
From the bash manpage:
ulimit [-HSTabcdefilmnpqrstuvx [limit]]
...
-t The maximum amount of cpu time in seconds
And more importantly, cron doesn't impose any timeout in the first place. It simply kicks off the process and moves on.
BTW: Sorry for posting this as an answer, but I don't have enough points to add comments yet.
You could try using ulimit -t [number of seconds] in the cron job before running the script.
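Since cron itself imposes no timeout, a reasonable next step (my suggestion, not from either answer; the paths and schedule are purely illustrative) is to log the script's output and exit status from the cron entry to see why it stops:
# Example crontab entry: run daily at 02:00, append stdout/stderr and the exit status to a log.
0 2 * * * /path/to/script.sh >> /var/log/solr_fetch.log 2>&1; echo "exit=$?" >> /var/log/solr_fetch.log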

Resources