Increase length of Spyder history log in history.py - spyder

The Spyder IDE logs console commands to ~/.config/spyder-py3/history.py. However, it only stores about 1000 lines of history. How do I increase this limit?
I followed the advice in this post and increased the buffer size to 5000. However, that does not increase the length of the history file (I think it only changes the console buffer size).
In short, I am looking for the equivalent of HISTFILESIZE in .bashrc. I do not mind if the buffer size stays small (say 500, the default), i.e. I don't care about the HISTSIZE equivalent of .bashrc.
P.S. There is another, empty file in the config path: ~/.config/spyder-py3/history_internal.py. I don't know if that matters.
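For reference, the .bashrc settings the question alludes to look like the lines below (the values are just examples); what is wanted is a Spyder counterpart of HISTFILESIZE rather than HISTSIZE:
# In ~/.bashrc: HISTSIZE caps the in-memory history, HISTFILESIZE caps the on-disk history file
HISTSIZE=500
HISTFILESIZE=5000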

Related

How to disable core dump in a docker image?

I have a service that uses a Docker image. About half a dozen people use it. However, occasionally the containers produce big core.xxxx dump files. How do I disable this for Docker images? My base image is Debian 9.
To disable core dumps, set a ulimit value in the /etc/security/limits.conf file, which defines shell-specific resource restrictions.
A hard limit cannot be raised by an unprivileged process, while a soft limit can be adjusted by the user up to the hard limit. If you would like to ensure that no process can create a core dump, set them both to zero. Although it may look like a boolean (0 = false, 1 = true), it actually indicates the maximum allowed size.
* soft core 0
* hard core 0
The asterisk means the rule applies to all users. The second column states whether it is a hard or soft limit, followed by the columns naming the setting (core) and its value.
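If you only need this for the containers rather than the whole host, an alternative is to set the ulimit on the container itself; assuming the service is started with docker run (the image name below is a placeholder), something like:
# Disable core dumps for this container: core=0 sets both the soft and hard limit to 0
docker run --ulimit core=0 my-debian9-service
# Or make it the daemon-wide default when starting dockerd (soft:hard form)
dockerd --default-ulimit core=0:0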

Deleting full lines from the beginning of a logfile in Delphi

I've developed a Delphi service that writes a log to a file. Each entry is written on a new line. Once this logfile reaches a specific size limit, I'd like to trim the first X lines from the beginning of the file to keep its size below that limit. I've found some code here on SO which demonstrates how to delete fixed-size chunks of data from the beginning of a file, but how do I go about deleting whole, variably sized lines rather than fixed chunks?
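Not Delphi, but as a sketch of the general approach (skip the first X lines and rewrite the remainder to a new file that replaces the original), the shell equivalent would be something like the following, with the filename and line count as placeholders:
# Drop the first 1000 lines: tail -n +1001 starts output at line 1001
tail -n +1001 service.log > service.log.tmp && mv service.log.tmp service.log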

Apple's Automator: compression settings for jpg?

When I run Apple's Automator to simply cut down a bunch of images in size, Automator also reduces the quality of the (JPEG) files and they get blurry.
How can I prevent this? Are there settings that I can take control of?
Edit:
Or are there any other tools that do the same job but without affecting the image quality?
If you want to have finer control over the amount of JPEG compression, then, as kopischke said, you'll have to use the sips utility, which can be used in a shell script. Here's how you would do that in Automator:
First get the files and the compression setting:
The Ask for Text action should not accept any input (right-click on it and select "Ignore Input").
Make sure that the first Get Value of Variable action does not accept any input either (right-click on it and select "Ignore Input"), and that the second Get Value of Variable takes its input from the first. This creates an array that is then passed on to the shell script: the first item is the compression level given to the Automator script, and the second is the list of files the script will run the sips command on.
In the options at the top of the Run Shell Script action, select "/bin/bash" as the Shell and "as arguments" for Pass Input. Then paste this code:
itemNumber=0
compressionLevel=0
# The first argument is the compression level; the remaining arguments are the files to convert.
for file in "$@"
do
    if [ "$itemNumber" = "0" ]; then
        compressionLevel=$file
    else
        echo "Processing $file"
        filename="$file"
        sips -s format jpeg -s formatOptions $compressionLevel "$file" --out "${filename%.*}.jpg"
    fi
    ((itemNumber=itemNumber+1))
done
# Subtract one so the compression-level argument is not counted as a converted file.
((itemNumber=itemNumber-1))
osascript -e "tell app \"Automator\" to display dialog \"${itemNumber} Files Converted\" buttons {\"OK\"}"
If you click on Results at the bottom, it'll tell you what file it's currently working on. Have fun compressing!
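If you want to test the sips call on its own outside Automator first, the equivalent one-off command should look like this; the input file and the quality value 80 are just examples (formatOptions accepts low/normal/high/best or a percentage):
sips -s format jpeg -s formatOptions 80 photo.png --out photo.jpg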
Automator’s “Crop Images” and “Scale Images” actions have no quality settings – as is often the case with Automator, simplicity trumps configurability. However, there is another way to access Core Image’s image manipulation facilities without resorting to Cocoa programming: the Scriptable Image Processing System, which makes image processing functions available to the shell via the sips utility. You can fiddle with the most minute settings using it, but as it is a bit arcane to handle, you might be better served by the second way:
AppleScript via Image Events, a scriptable faceless background application provided by OS X. There are crop and scale commands, and the option of specifying a compression level when saving as a JPEG with
save <image> as JPEG with compression level (low|medium|high)
Use a “Run AppleScript” action instead of your “Crop” / “Scale” one and wrap the Image Events commands in a tell application "Image Events" block, and you should be set. For instance, to scale the image to half its size and save it as a JPEG in best quality, overwriting the original:
on run {input, parameters}
    set output to {}
    repeat with aPath in input
        tell application "Image Events"
            set aPicture to open aPath
            try
                scale aPicture by factor 0.5
                set end of output to save aPicture as JPEG with compression level low
            on error errorMessage
                log errorMessage
            end try
            close aPicture
        end tell
    end repeat
    return output -- next action processes edited files.
end run
For other scales, adjust the factor accordingly (1 = 100 %, .5 = 50 %, .25 = 25 %, etc.); for a crop, replace the scale aPicture by factor X line with crop aPicture to {width, height}. Mac OS X Automation has good tutorials on the usage of both scale and crop.
Eric's code is brilliant and gets most of the job done, but if an image's filename contains a space, this workflow will not work (the space breaks the shell script when it runs sips).
There is a simple solution: add a "Rename Finder Item" action to the workflow and replace spaces with "_" or anything you like.
Then it's good to go.
Comment from '20
I changed the script into a Quick Action without any prompts (for compression as well as confirmation). It duplicates the file and renames the original version to _original. I also included nyam's solution for the 'space' problem.
You can download the workflow file here: http://mobilejournalism.blog/files/Compress%2080%20percent.workflow.zip (file is zipped, because otherwise it will be recognized as a folder instead of workflow file)
Hopefully this is useful for anyone searching for a solution like this (like I did an hour ago).
Comment from '17
To avoid the "space" problem, it's smarter to change IFS than to rename files.
Back up the current IFS, change it to \n only, and restore the original IFS after the processing loop.
# Split arguments on newlines only, so paths containing spaces survive word splitting.
ORG_IFS=$IFS
IFS=$'\n'
for file in $@
do
...
done
IFS=$ORG_IFS

Tcl variable size limit

I am writing a Tcl script which will be used on an embedded device. The value of a variable in this script will come from a text file on the system. My concern is that if the source file is too big, this may crash the device, as there may not be enough memory to store the entire file. I wonder whether the size of the variable can be limited, so that filling it from the file does not exhaust all of the memory.
Also, if it is possible to limit the size of the variable, will it still be filled with as much of the source file as possible, even if the entire file cannot fit?
You can limit the size of the variable by specifying the number of characters to read from the file. For example:
set f [open file.dat r]
set var [read $f 1024]
This code will read up to 1024 characters from the file (you'll get less than 1024 characters if the file is shorter than that, naturally).
ISTR that in Tcl 8.5 the string representation of any value is limited to 2 GiB. But as Eric already said, in your situation you should not blindly read the file into a variable; rather, either process the contents of the file in chunks, or at least first check its size using file stat and only read it if the size is OK (note that this approach contains a race condition, since the file can grow between the check and the read, but in your case that may or may not be a problem).

Image magick/PHP is falling over with large images

I have a PHP script which is used to resize images in a user's FTP folder for use on his website.
While slow to resize, the script has completed correctly with all images in the past. Recently, however, the user uploaded an album of 21-megapixel JPEG images and, as I have found, the script fails to convert them without giving out any PHP errors. When I consulted various logs, I found multiple Apache processes being killed off with out-of-memory errors.
The functional part of the PHP script is essentially a for loop that iterates through my images on the disk and calls a method that checks if a thumbnail exists and then performs the following:
$image = new Imagick();
$image->readImage($target);
$image->thumbnailImage(1000, 0);
$image->writeImage(realpath($basedir)."/rescale/".$filename);
$image->clear();
$image->destroy();
The server has 512MB of RAM, with usually at least 360MB+ free.
PHP currently has its memory limit set at 96MB, but I have set it higher before without any effect on the issue.
By my estimates, a 21-Megapixel image should occupy in the region of 80MB+ when uncompressed, and so I am puzzled as to why the RAM is disappearing so rapidly unless the Image Magick objects are not being removed from memory.
Is there some way I can optimise my script to use less memory or garbage collect more efficiently?
Do I simply not have the RAM to cope with such large images?
Cheers
See this answer for a more detailed explanation.
imagick uses a shared library and its memory usage is out of reach for PHP, so tuning PHP memory and garbage collection won't help.
Try adding this prior to creating the new Imagick() object:
// pixel cache max size
IMagick::setResourceLimit(imagick::RESOURCETYPE_MEMORY, 32);
// maximum amount of memory map to allocate for the pixel cache
IMagick::setResourceLimit(imagick::RESOURCETYPE_MAP, 32);
It will cause imagick to swap to disk (defaults to /tmp) when it needs more than 32 MB for juggling images. It will be slower, but it will not run out of RAM (unless /tmp is on a ramdisk, in which case you need to change where imagick writes its temporary files).
MattBianco is nearly correct; the only change is that the memory limits are in bytes, so 32 MB would be 33554432:
// pixel cache max size
IMagick::setResourceLimit(imagick::RESOURCETYPE_MEMORY, 33554432);
// maximum amount of memory map to allocate for the pixel cache
IMagick::setResourceLimit(imagick::RESOURCETYPE_MAP, 33554432);
Call $image->setSize() before $image->readImage() to have libjpeg resize the image whilst loading to reduce memory usage.
(Edit) Example usage: Efficient JPEG Image Resizing in PHP
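If you want to sanity-check the resource limits of the underlying ImageMagick installation itself (these defaults are separate from the limits the Imagick extension sets above), the command-line tools can print them, assuming the ImageMagick CLI is installed on the server:
# Print ImageMagick's configured limits (memory, map, disk, threads, ...)
identify -list resource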
