This fragment does not write a stream of 90 characters to the file after the buffer is full
"full" means that an output operation is executed when the buffer is full, or when we explicitly flush the file. By writing out:write(string.rep("A",90)) and opening the file with notepad I can see the text.
This fragment does not write to the file:
out = io.open("E:\\file","w")
out:setvbuf("full",90)
out:write(string.rep("A",89))
out:write("A")
On the other hand, this fragment does write to the file:
out = io.open("E:\\file","w")
out:setvbuf("full",90)
out:write(string.rep("A",90))
This might seem like a simple question, but what actually surprises me is that, rather than writing the text to the file, the first fragment does not write anything at all, due to such a trivial change. Why does this happen? By the way, I am using Lua 5.3.4.
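For what it's worth, the quoted documentation also says the buffer is written out on an explicit flush, and indeed the first fragment does write once a flush is added (a minimal sketch; the path is illustrative):
out = io.open("E:\\file", "w")
out:setvbuf("full", 90)
out:write(string.rep("A", 89))
out:write("A")
out:flush() -- force the buffered 90 characters out to the file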
When processing a single large file, it can be broken up like so:
import dask.bag as db
my_file = db.read_text('filename', blocksize=int(1e7))
This works great, but the files I'm working with have a high level of redundancy, so we keep them compressed. Passing in compressed gzip files gives an error that seeking in gzip isn't supported, so the file can't be read in blocks.
The documentation here http://dask.pydata.org/en/latest/bytes.html#compression suggests that some formats support random access.
The relevant internal code I think is here:
https://github.com/dask/dask/blob/master/dask/bytes/compression.py#L47
It looks like lzma might support it, but it's been commented out.
Adding lzma to the seekable_files dict, as in the commented-out code:
from dask.bytes.compression import seekable_files
import lzmaffi
seekable_files['xz'] = lzmaffi.LZMAFile
data = db.read_text('myfile.jsonl.lzma', blocksize=int(1e7), compression='xz')
Throws the following error:
Traceback (most recent call last):
  File "example.py", line 8, in <module>
    data = bag.read_text('myfile.jsonl.lzma', blocksize=int(1e7), compression='xz')
  File "condadir/lib/python3.5/site-packages/dask/bag/text.py", line 80, in read_text
    **(storage_options or {}))
  File "condadir/lib/python3.5/site-packages/dask/bytes/core.py", line 162, in read_bytes
    size = fs.logical_size(path, compression)
  File "condadir/lib/python3.5/site-packages/dask/bytes/core.py", line 500, in logical_size
    g.seek(0, 2)
io.UnsupportedOperation: seek
I assume that the functions at the bottom of that file (get_xz_blocks, for example) could be used for this, but they don't seem to be in use anywhere in the dask project.
Are there compression libraries that do support this seeking and chunking? If so, how can they be added?
Yes, you are right that the xz format can be useful to you. The confusion is that the file may be block-formatted, but the standard implementation lzmaffi.LZMAFile (or lzma) does not make use of this blocking. Note that block-formatting is only optional for xz files, e.g., produced by passing --block-size=size to xz-utils.
The function compression.get_xz_blocks will give you the set of blocks in a file by reading the header only, rather than the whole file, and you could use this in combination with delayed, essentially repeating some of the logic in read_text. We have not put in the time to make this seamless; the same pattern could be used to write blocked xz files too.
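A rough sketch of that pattern (hedged: I am assuming get_xz_blocks(path) yields (offset, length) pairs and that each block can be decompressed on its own; check compression.py for the actual interface):
import dask
import dask.bag as db
from dask.bytes.compression import get_xz_blocks

@dask.delayed
def read_block(path, offset, length):
    # Assumption: a single xz block decompresses independently; the
    # exact call needed for a lone block may differ in lzmaffi.
    import lzmaffi
    with open(path, 'rb') as f:
        f.seek(offset)
        raw = f.read(length)
    return lzmaffi.decompress(raw).decode().splitlines()

path = 'myfile.jsonl.xz'
blocks = get_xz_blocks(path)  # assumed to yield (offset, length) pairs
data = db.from_delayed([read_block(path, off, length) for off, length in blocks])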
I'm using node-imagemagick for the first time.
Disclaimer: I've never used imagemagick, and am not much of a javascript guy. Most experience is in C/C++, Objective-C.
I'm writing a snippet for a server-side process that needs to take an input buffer, crop it to arbitrary bounds, and then output the result on stdout.
Currently, my code looks like this:
var im = require('imagemagick');
...
im.convert([
    binaryDataBlock,
    '-crop',
    cropStr
], function(err, stdout, stderr) { ...
I know my input is good... I've done this with imagemagick's "resize" routine. "cropStr" is a string, "100x100+10+10" -- any arbitrary values go here.
But I still get errors back in stderr:
"result" : "convert: unable to open image `����': # error/blob.c/OpenBlob/2587.\nconvert: no decode delegate for this
image format ����' #
error/constitute.c/ReadImage/532.\nconvert: option requires an
argument-crop' # error/convert.c/ConvertImageCommand/1081.\n"
I've tried putting in a "-format" argument, which I thought would set the output format, but then it complains about the argument.
I feel like I'm missing something obvious here. It shouldn't be so hard to just crop to an arbitrary rectangle in an image.
Any help would be tremendously appreciated.
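(For anyone hitting the same wall: node-imagemagick's resize takes an options object whose srcData accepts raw image data; below is a sketch assuming im.crop accepts the same options, which I have not verified:)
var im = require('imagemagick');

// Sketch: crop an in-memory buffer; the result arrives in stdout.
// srcData mirrors im.resize's documented option; treating im.crop
// the same way is an assumption.
im.crop({
    srcData: binaryDataBlock, // raw image bytes (binary string)
    width: 100,
    height: 100
}, function(err, stdout, stderr) {
    if (err) throw err;
    process.stdout.write(stdout, 'binary'); // cropped image to stdout
});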
I have written a program that relies on Magick++ simply for importing and exporting of a wide variety of image formats. It uses Image.getPixels() to get a PixelPacket, does a lot of matrix transformations, then calls Image.syncPixels() before writing a new image. The general approach is the same as the example shown in Magick++'s documentation. More or less, the relevant code is:
#include <Magick++.h>

Magick::Image image("image01.bmp");
image.modifyImage();
Magick::PixelPacket *imagePixels = image.getPixels(0, 0, 10, 10);
// Matrix manipulation occurs here.
// All actual changes to the PixelPacket are direct changes to pixels, like so:
imagePixels[i].red = 4; // or any other integer
// finally, after matrix manipulation is done
image.syncPixels();
image.write("image01_transformed.bmp");
When I run the above code, the new image file ("image01_transformed.bmp" in this example) ends up identical to the original. However, if I write to a different format, such as "image01_transformed.ppm", I get the correct result: a modified image. I assume this is due to a cached version of the format-encoded image, and that Magick++ is for some reason not aware that the image has actually changed, and therefore that the cache is out of date. I tested this idea by adding image.blur(1.0, 0.1); immediately before image.syncPixels();, and forcing this inconsequential change did indeed produce a modified image even when writing to the same format.
Is there a way to force Magick++ to realize that the cache is out-of-date? Am I using getPixels() and syncPixels() incorrectly in the first place? Thanks!
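For reference, a condensed sketch of the workaround described in the question (forcing a near-no-op change so the cached encoding is discarded; this reflects the questioner's observation, not a documented fix):
#include <Magick++.h>

int main() {
    Magick::InitializeMagick(nullptr);
    Magick::Image image("image01.bmp");
    image.modifyImage();
    Magick::PixelPacket *pixels = image.getPixels(0, 0, 10, 10);
    pixels[0].red = 4;    // direct pixel edits, as in the question
    image.blur(1.0, 0.1); // near-no-op that marks the image as changed
    image.syncPixels();
    image.write("image01_transformed.bmp");
    return 0;
}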
I am printing an EPS file generated with the following header.
%-12345X@PJL JOB
@PJL ENTER LANGUAGE = POSTSCRIPT
%!PS-Adobe-3.0
%%Title: InvoiceDetail_combine
%%Creator: PScript5.dll Version 5.2.2
%%CreationDate: 10/7/2011 4:46:59
%%For: Administrator
%%BoundingBox: (atend)
%%Pages: (atend)
%%Orientation: Portrait
%%PageOrder: Special
%%DocumentNeededResources: (atend)
%%DocumentSuppliedResources: (atend)
%%DocumentData: Clean7Bit
%%TargetDevice: (HP Color LaserJet 4500) (2014.200) 0
%%LanguageLevel: 2
%%EndComments
While doing selection printing on a Ricoh Aficio 2090 (or any other driver/printer), I get the following error printed on the sheets:
ERROR: undefined
OFFENDING COMMAND: F4S47
Stack:
.
Kindly review and suggest a workaround, as I am already stuck. I have tried to convert/extract the PS, but all in vain. I am using GSview to print and view these files.
This is the problem:
%%PageOrder: Special
A PS document with "Special" page order cannot be re-ordered. You cannot do a selection or range with this file because it is broken for this use. You must reprocess the file using Distiller or Ghostscript (ps2ps or ps2pdf) in order to print selected or re-ordered pages from the document.
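For example, using Ghostscript's wrapper scripts (file names are illustrative):
ps2ps InvoiceDetail_combine.ps repaired.ps
ps2pdf InvoiceDetail_combine.ps InvoiceDetail_combine.pdf
Either output can then be printed with a page selection.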
You can avoid this by generating your PostScript files with a real PostScript™ driver (one not created by Microsoft).
The GSView Documentation has more about this.
Previously:
This line ...
%%TargetDevice: (HP Color LaserJet 4500) (2014.200) 0
... tells us that the file was generated with HP printers as a target. So this really is not an EPS file, because it is not encapsulatable: to produce output on a printer, the file has to execute the showpage operator, which is a no-no for EPS files.
So uncheck the EPS box (it's a big fat lie, anyway) and select (install) a generic PostScript driver. If you need to send the file to multiple makes of printer, it needs to make as few assumptions about the printer as possible.
The first thing is that this is not a valid EPS file, as it has PJL attached at the front. Many PostScript printers will strip this off, but by no means all.
This probably is not the source of the problem.
There is no way to 'review' the problem, as you have not supplied the complete PostScript program. Without that, there is no way to tell what is actually wrong; the error message tells you that the interpreter encountered F4S47 while trying to parse a token, and that this has not been defined as a routine.
Most likely the file is corrupt: either damaged in some way, or possibly it is a binary file that has been transmitted by some process which has done some kind of conversion (CR/LF translation is common). The offending command looks like it's ASCIIHex-encoded data, so that may be a red herring.
If you want additional help, you are going to have to make the whole program available somewhere.
I don't really understand how content importers/processors work in XNA.
I need to read a text file (Content/levels/level1.txt) of the form:
x x
x x
x x
where x's are just integers, into an int[,] array.
Any tips on writing a SIMPLE .txt importer? By searching Google/MSDN I only found .x/.fbx file importer examples, and they seem too complicated.
Do you actually need to process the text file? If not, then you can probably skip most of the content pipeline.
Something like:
string filename = "Content/TextFiles/sometext.txt";
string path = Path.Combine(StorageContainer.TitleLocation, filename);
using (StreamReader sr = new StreamReader(path))
{
    string lineOfText;
    while ((lineOfText = sr.ReadLine()) != null)
    {
        // do something with lineOfText
    }
}
Also, be sure to set the "Build Action" to "None" and the "Copy to Output Directory" to "Copy if newer" on the text files you've added. This tells the content pipeline not to compile the text file but rather copy it to the output directory for use as is.
I got this (more or less) from the RacingGame sample provided by Microsoft. It foregoes much of the content pipeline and simply loads and processes text files (XML) for much of its level data.
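To go from those lines to the int[,] the question asks for, here is a minimal parsing sketch (it assumes every row of the level file holds the same number of space-separated integers, and that System and System.IO are imported):
static int[,] LoadLevel(string path)
{
    string[] lines = File.ReadAllLines(path);
    string[] first = lines[0].Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    int[,] level = new int[lines.Length, first.Length];
    for (int r = 0; r < lines.Length; r++)
    {
        string[] tokens = lines[r].Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        for (int c = 0; c < tokens.Length; c++)
            level[r, c] = int.Parse(tokens[c]);
    }
    return level;
}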
XNA 4.0 uses:
System.IO.Stream stream = TitleContainer.OpenStream("tilename.txt");
See http://msdn.microsoft.com/en-us/library/bb199094.aspx and also http://blogs.msdn.com/b/shawnhar/archive/2010/12/09/reading-files-in-xna-game-studio-4-0.aspx
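For instance, combined with a StreamReader (the path is illustrative and is resolved relative to the title's root folder):
using (StreamReader reader = new StreamReader(TitleContainer.OpenStream("Content/levels/level1.txt")))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // parse the line, e.g. as in the sketch above
    }
}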
There doesn't seem to be a lot of info out there, but this blog post does indicate how you can load .txt files through code using XNA.
Hopefully this can help you get the file into memory, from there it should be straightforward to parse it in any way you like.
XNA 3.0 - Reading Text Files on the Xbox
http://www.ziggyware.com/readarticle.php?article_id=69 is probably a good place to start. It covers creating a basic content processor.