I'm using fswebcam to capture an image using node-red exec block running on a raspberry pi.
The time it takes to capture the image is 3+ seconds.
fswebcam -r 1280x720 image.jpg
I tried the same using OpenCV and the result is a little better but similar.
import cv2

cam = cv2.VideoCapture(1)
s, img = cam.read()
if s:
    cv2.imwrite("/home/pi/pythontest/tt.jpg", img)  # save the captured frame
cam.release()
I'm guessing that it takes some time for the USB camera to initialize and take a picture which increases the time drastically. Is there any way to keep the camera initialized?
Any other workarounds to ameliorate this issue?
There may be other methods, but one way to do this is to run the camera continuously during periods when you want faster responses. You will need to consider some things though:
bandwidth used to capture images
wear on your SD card
accessing incomplete images midway through capture.
I'll leave you to determine what USB bandwidth you need for the resolution you are using.
As regards the second - wear on your SD card - I would suggest you capture to /tmp and ensure that is based on a RAM filesystem by becoming root and adding a line like this to your /etc/fstab:
tmpfs /tmp tmpfs defaults,noatime,nosuid 0 0
Then reboot. This way the data never goes near your SD card.
As regards the third - incomplete images still being captured - you can leverage the --exec option of fswebcam to get around this. Basically, you capture to one file and then after it is complete, you use --exec to rename the file to /tmp/latest.jpg and you use that in your application.
fswebcam -r 640x480 --loop 1 --exec 'mv /tmp/inprogress.jpg /tmp/latest.jpg' /tmp/inprogress.jpg
This relies on the fact that, under Unix at least, renaming a file does not affect any process that already has the file open, and that renaming is atomic. So your application will always get either the entire new file or the entire old file, and never a half-written one.
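On the consuming side nothing special is needed: any reader that opens /tmp/latest.jpg gets a complete image. A minimal Python sketch of the application side, assuming the fswebcam loop above is running:
import cv2

# Because the rename is atomic, this read sees either the previous complete
# image or the new complete image, never a half-written file.
img = cv2.imread("/tmp/latest.jpg")
if img is not None:
    print("got a complete %dx%d frame" % (img.shape[1], img.shape[0]))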
My camera produces images around 160kB, so I tested the file size like this in a tight loop, reading the file as fast as possible and only notifying me if it is far less than the normal size, i.e. truncated:
while : ; do l=$(wc -c < latest.jpg); [[ $l -lt 140000 ]] && echo $l; done
Try profiling your code (using cProfile, for example) to ensure that the issue is not Python interpreter start-up time or imwrite.
If the issue is camera initialization, then I suppose the only option is to write a daemon that keeps the camera online and gives you an image on request.
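A minimal sketch of such a daemon, where the device index 0 and the output path are placeholders: the camera is opened once, a background thread keeps draining frames, and a snapshot request just saves the newest one.
import threading
import time

import cv2

class CameraDaemon:
    """Keeps the camera initialized so snapshots return immediately."""

    def __init__(self, device=0):                 # device index is an assumption
        self.cam = cv2.VideoCapture(device)       # opened once, never released
        self.frame = None
        self.lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while True:
            ok, frame = self.cam.read()           # keep the driver buffer drained
            if ok:
                with self.lock:
                    self.frame = frame

    def snapshot(self, path):
        with self.lock:                           # grab the most recent frame
            if self.frame is not None:
                cv2.imwrite(path, self.frame)

daemon = CameraDaemon()
time.sleep(2)                                     # pay the warm-up cost once
daemon.snapshot("/tmp/latest.jpg")                # later requests are instant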
I realized that when I use OpenCV to grab video from an RTSP URL (cv2.VideoCapture('rtsp://...')), I am actually getting every frame of the stream and not the real-time frame.
Example: for a video at 30 fps and 10 seconds long, if I get the first frame and wait 1 second to get the next, I get frame number 2 and not the real-time frame (it should be frame number 30 or 31).
I am worried about this because if my code takes a little longer to do the video processing (deep-learning convolutions), the result will always be delivered late and not in real time.
Any ideas how I can manage to always get the current frame when capturing from RTSP?
Thanks!
This is not about your code. Many IP cameras give you encoded output (H.265/H.264).
When you use VideoCapture(), the camera's output data is decoded by the CPU. A delay such as the one you mention, between 1 and 2 seconds, is normal.
What can be done to make it faster:
If you have GPU hardware, you can decode the data on it. This will give you really good results (in experience with the latest NVIDIA GPUs, you will get almost 25 milliseconds of delay). To achieve that in your code, you need the following (see the sketch after this list):
a CUDA installation
a CUDA-enabled OpenCV installation
the VideoReader class of OpenCV
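A minimal sketch of that GPU path, assuming an OpenCV build compiled with CUDA and the cudacodec module (the stream URL is a placeholder):
import cv2

# Requires OpenCV built with CUDA and cudacodec; decoding runs on the GPU.
reader = cv2.cudacodec.createVideoReader("rtsp://user:pass@camera-ip/stream")
while True:
    ok, gpu_frame = reader.nextFrame()    # the frame stays on the GPU as a GpuMat
    if not ok:
        break
    frame = gpu_frame.download()          # copy to host memory only when needed
    # ... run your processing on `frame` here ...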
You can use VideoCapture() with the FFMPEG flag. FFMPEG has advanced methods to decode encoded data, and this is probably the fastest output you can get with your CPU, but it will not reduce the delay by much.
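For example, a short sketch selecting the FFMPEG backend explicitly (the URL is again a placeholder):
import cv2

# Ask OpenCV to use its FFMPEG backend for demuxing and decoding the stream.
cap = cv2.VideoCapture("rtsp://user:pass@camera-ip/stream", cv2.CAP_FFMPEG)
ok, frame = cap.read()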
My goal is to find the title screen from a movie trailer. I need a service where I can search a video for a string, then return the frame with that string. Pretty obscure, does anything like this exist?
e.g. for this movie, I'd scan for "Sausage Party" and retrieve this frame:
Edit: I found the CloudSight API, which would actually work, except the cost is prohibitive at $0.04 per call, assuming I need to split the video into 1-second intervals and scan every image (at least 60 calls per video).
No exact service that I can find, but you could attempt to do this yourself...
ffmpeg -i sausage_party.mp4 -r 1 %04d.png
/usr/local/bin/parallel --no-notice -j 8 \
/usr/local/bin/tesseract -psm 6 -l eng {} {.} \
::: *.png
This extracts one frame per second from the video file, and then uses tesseract to extract the text via OCR into files named after the corresponding image frame (e.g. 0135.txt). However, your results are going to vary massively depending on the font used and the quality of the video file.
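Once the .txt files exist, finding the title frame is a plain text search. A small Python sketch, assuming the one-frame-per-second extraction above, so a file like 0135.txt corresponds roughly to second 135 of the video:
import glob

query = "sausage party"                    # the title you are scanning for
for txt in sorted(glob.glob("*.txt")):
    with open(txt, errors="ignore") as f:  # OCR output can contain odd bytes
        if query in f.read().lower():
            second = int(txt.split(".")[0])
            print(f"'{query}' appears around {second // 60}m{second % 60:02d}s "
                  f"(frame image {txt.replace('.txt', '.png')})")
            break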
You'd probably find it cheaper/easier to use something like Amazon Mechanical Turk, especially since the OCR is going to have a hard time doing this automatically.
Another option could be implementing this service yourself using the Scene Text Detection and Recognition module in OpenCV (docs.opencv.org/3.0-beta/modules/text/doc/text.html). You can take a look at this video to get an idea of how such a system would operate. As pointed out above, the accuracy will depend on the font used in the movie titles, the quality of the video files, and the OCR.
OpenCV relies on Tesseract as the underlying OCR but, alternatively, you could use the text detection and localization functions (docs.opencv.org/3.0-beta/modules/text/doc/erfilter.html) in OpenCV to find text areas in the image and then employ a different OCR to perform the recognition. The text detection and localization stage can be done very quickly thus achieving real time performance would be mostly a matter of picking a fast OCR.
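A rough sketch of that detection stage, assuming the opencv-contrib text module and the trained classifier XML files that ship with the opencv_contrib text samples (the file names below are assumptions):
import cv2

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Two-stage Neumann/Matas ER filters from the opencv-contrib text module.
er1 = cv2.text.createERFilterNM1(cv2.text.loadClassifierNM1("trained_classifierNM1.xml"))
er2 = cv2.text.createERFilterNM2(cv2.text.loadClassifierNM2("trained_classifierNM2.xml"))

regions = cv2.text.detectRegions(gray, er1, er2)  # candidate text regions (point sets)
# Each region can then be cropped out and handed to the OCR of your choice.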
When I run Apple's Automator to simply crop a batch of images to size, Automator also reduces the quality of the files (JPEG) and they get blurry.
How can I prevent this? Are there settings that I can take control of?
Edit:
Or are there any other tools that do the same job but without affecting the image quality?
If you want to have finer control over the amount of JPEG compression, as kopischke said you'll have to use the sips utility, which can be used in a shell script. Here's how you would do that in Automator:
First get the files and the compression setting:
The Ask for Text action should not accept any input (right-click on it, select "Ignore Input").
Make sure that the first Get Value of Variable action does not accept any input (right-click on it, select "Ignore Input"), and that the second Get Value of Variable takes its input from the first. This creates an array that is then passed on to the shell script: the first item is the compression level given to the Automator script, and the second is the list of files on which the script will run the sips command.
In the options on the top of the Run Shell Script action, select "/bin/bash" as the Shell and select "as arguments" for Pass Input. Then paste this code:
itemNumber=0
compressionLevel=0
for file in "$@"    # "$@" holds the action's inputs; the first is the compression level
do
    if [ "$itemNumber" = "0" ]; then
        compressionLevel=$file
    else
        echo "Processing $file"
        filename="$file"
        sips -s format jpeg -s formatOptions $compressionLevel "$file" --out "${filename%.*}.jpg"
    fi
    ((itemNumber=itemNumber+1))
done
((itemNumber=itemNumber-1))    # don't count the compression-level argument
osascript -e "tell app \"Automator\" to display dialog \"${itemNumber} Files Converted\" buttons {\"OK\"}"
If you click on Results at the bottom, it'll tell you what file it's currently working on. Have fun compressing!
Automator’s “Crop Images” and “Scale Images” actions have no quality settings – as is often the case with Automator, simplicity trumps configurability. However, there are other ways to access Core Image's image-manipulation facilities without resorting to Cocoa programming.
The first is the Scriptable Image Processing System, which makes image-processing functions available to the shell via the sips utility. You can fiddle with the most minute settings using it, but as it is a bit arcane to handle, you might be better served by the second way:
AppleScript via Image Events, a scriptable faceless background application provided by OS X. It has crop and scale commands, and the option of specifying a compression level when saving as a JPEG with
save <image> as JPEG with compression level (low|medium|high)
Use a “Run AppleScript” action instead of your “Crop” / “Scale” one and wrap the Image Events commands in a tell application "Image Events" block, and you should be set. For instance, to scale the image to half its size and save as a JPEG in best quality, overwriting the original:
on run {input, parameters}
    set output to {}
    repeat with aPath in input
        tell application "Image Events"
            set aPicture to open aPath
            try
                scale aPicture by factor 0.5
                set end of output to save aPicture as JPEG with compression level low
            on error errorMessage
                log errorMessage
            end try
            close aPicture
        end tell
    end repeat
    return output -- next action processes edited files
end run
– for other scales, adjust the factor accordingly (1 = 100 %, .5 = 50 %, .25 = 25 % etc.); for a crop, replace the scale aPicture by factor X command with crop aPicture to {width, height}. Mac OS X Automation has good tutorials on the usage of both scale and crop.
Eric's code is brilliant and gets most of the job done, but if an image's filename contains a space, this workflow will not work (the space breaks the shell script when it runs sips).
There is a simple solution: add a "Rename Finder Items" action to the workflow and replace spaces with "_" or anything you like.
Then it's good to go.
Comment from '20
I changed the script into a quick action, without any prompts (for compression as well as confirmation). It duplicates the file and renames the original version to _original. I also included nyam's solution for the 'space' problem.
You can download the workflow file here: http://mobilejournalism.blog/files/Compress%2080%20percent.workflow.zip (the file is zipped because otherwise it would be recognized as a folder instead of a workflow file).
Hopefully this is useful for anyone searching for a solution like this (like I did an hour ago).
Comment from '17
To avoid the "space" problem, it's smarter to change IFS than to rename files.
Back up the current IFS, change it to \n only, and restore the original IFS after the processing loop.
ORG_IFS=$IFS    # back up the default field separators
IFS=$'\n'       # split on newlines only, so spaces in filenames survive
for file in $@
do
    ...
done
IFS=$ORG_IFS    # restore the original IFS
Is there a program or script that can read an image on standard input and write a resized image to standard output without waiting for EOF on standard input? Poor quality is acceptable; waiting for the whole image to load is not.
ImageMagick (convert and stream alike) will read, then process, then output. What I want is more like a real-time stream processor: if I'm scaling down 50%, it should output one row of thumbnail for every two rows of input (roughly), regardless of the state of the input stream.
If this doesn't make sense yet, imagine you're loading an image over a slow network connection. As soon as it can, the browser starts displaying the top edge of the image. If the image is larger than the window, the browser scales it down to fit the window. It doesn't have to wait for the whole image to load.
Here are some of the tools I've used for testing. This serves an image on port 8080 in ten slices, with a one-second delay between slices to simulate a slow network connection:
IMAGE=test.jpg; SLICES=10
SIZE=$(stat -c "%s" $IMAGE)
BS=$(($SIZE / $SLICES + 1))
(echo HTTP/1.0 200 OK
 echo Content-Type: image/jpeg
 echo
 for i in $(seq 0 $(($SLICES - 1))); do
     dd if=$IMAGE bs=$BS skip=$i count=1
     sleep 1
 done) | nc -lp8080 -q0
Run that and immediately open localhost:8080 in your browser to see the image slowly load. If you pipe the image slices to convert or stream instead of nc (omitting all the echoes), no output appears for ten seconds, and then you get the whole thumbnail at once.
This is difficult, and depends on the image format. PNG, for example, is chunked, with each chunk zlib-compressed, so you have to read a potentially large portion of the file before you can start rendering the image. BMP images are stored "bottom-up", with the last row of the image first in the file, so unless your thumbnail will also be a BMP you will have to either read in the entire image or process the file backwards. JPEG can do this more readily: it is stored in order, and if it is a progressive JPEG you can abuse that and read in only the first N passes to get the thumbnail resolution you need. Wavelet formats like DjVu might also be more straightforward.
I don't think you'll find general-purpose tools that do this, but you could write a custom format-specific streaming decoder to handle it.
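As a concrete illustration, here is a toy sketch for binary PPM (P6), a format simple enough that true row-by-row streaming is easy: it reads two input rows, writes one 50%-scaled output row, and flushes immediately. It is an example of the format-specific approach described above, not a general tool.
#!/usr/bin/env python3
# Toy streaming 50% downscaler for binary PPM (P6): stdin -> stdout.
# PPM is uncompressed, so each output row can be emitted as soon as the
# two input rows it depends on have arrived.
import sys

def read_token(f):
    """Read one whitespace-delimited PPM header token, skipping # comments."""
    tok = b""
    while True:
        c = f.read(1)
        if c == b"#":
            while f.read(1) not in (b"\n", b""):
                pass
            continue
        if not c or (c.isspace() and tok):
            return tok
        if not c.isspace():
            tok += c

inp, out = sys.stdin.buffer, sys.stdout.buffer
assert read_token(inp) == b"P6", "this sketch only handles binary PPM"
w, h, maxval = (int(read_token(inp)) for _ in range(3))
out.write(b"P6\n%d %d\n%d\n" % (w // 2, h // 2, maxval))

row_bytes = w * 3
for _ in range(h // 2):
    a, b = inp.read(row_bytes), inp.read(row_bytes)  # two input rows -> one output row
    if len(a) < row_bytes or len(b) < row_bytes:
        break                                        # truncated input
    row = bytearray((w // 2) * 3)
    for x in range(w // 2):
        for c in range(3):                           # average each 2x2 block per channel
            i = 6 * x + c
            row[3 * x + c] = (a[i] + a[i + 3] + b[i] + b[i + 3]) // 4
    out.write(row)
    out.flush()                                      # emit the row right away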
I have a PHP script which is used to resize images in a user's FTP folder for use on his website.
While slow to resize, the script has completed correctly with all images in the past. Recently however, the user uploaded an album of 21-Megapixel JPEG images and as I have found, the script is failing to convert the images but not giving out any PHP errors. When I consulted various logs, I've found multiple Apache processes being killed off with Out Of Memory errors.
The functional part of the PHP script is essentially a for loop that iterates through my images on the disk and calls a method that checks if a thumbnail exists and then performs the following:
$image = new Imagick();
$image->readImage($target);
$image->thumbnailImage(1000, 0);  // 1000px wide, height computed to keep aspect ratio
$image->writeImage(realpath($basedir)."/rescale/".$filename);
$image->clear();
$image->destroy();
The server has 512MB of RAM, with usually at least 360MB+ free.
PHP currently has its memory limit set at 96MB, but I have set it higher before without any effect on the issue.
By my estimates, a 21-megapixel image should occupy in the region of 80MB+ when uncompressed (21 million pixels × 4 bytes per RGBA pixel ≈ 84MB), and so I am puzzled as to why the RAM is disappearing so rapidly unless the ImageMagick objects are not being removed from memory.
Is there some way I can optimise my script to use less memory or garbage collect more efficiently?
Do I simply not have the RAM to cope with such large images?
Cheers
See this answer for a more detailed explanation.
imagick uses a shared library and its memory usage is out of reach for PHP, so tuning PHP memory and garbage collection won't help.
Try adding this prior to creating the new Imagick() object:
// pixel cache max size
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MEMORY, 32);
// maximum amount of memory map to allocate for the pixel cache
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MAP, 32);
It will cause imagick to swap to disk (by default in /tmp) when it needs more than 32 MB for juggling images. It will be slower, but it will not run out of RAM (unless /tmp is on a ramdisk, in which case you need to change where imagick writes its temp files).
MattBianco is nearly correct; the only change is that the memory limits are in bytes, so it would be 33554432 for 32MB:
// pixel cache max size
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MEMORY, 33554432);
// maximum amount of memory map to allocate for the pixel cache
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MAP, 33554432);
Call $image->setSize() before $image->readImage() to have libjpeg downscale the JPEG while loading, which reduces memory usage.
(Edit) Example usage: Efficient JPEG Image Resizing in PHP