Texture stripes appear when printing a PostScript file

I am using the PostScript language to describe a page of uniformly arranged dots. The dots are at 600 dpi, meaning 600 dots per inch. I use one bit to represent each dot in PostScript: 1 for blank (white) and 0 for a black dot.
My PostScript file of a unit of dots:
%% SetPageSize
/setPageSize {<</PageSize [595 842] >> setpagedevice} def
setPageSize
%% Dot Code
gsave
/mm {2.834645 mul} def
1 mm 1 mm translate
5.76 5.76 scale
48 48 1 [48 0 0 -48 0 48]
{<
fff7ff7ff7ff
ffffffffffff
ffffffffffff
fdffffffffff
ffffffffff7f
ffffffffffff
ffffffffffff
ffffffffffff
fffff7ffffff
fffffffdffff
ffffffffffff
ffffffffffff
7ff7ff7ff7ff
ffffffffffff
ffffffffffff
ffffffffffff
ffffffffffff
ffffffffffff
fffefffffeff
ffffffffffff
f7ffffff7fff
ffffffffffff
ffffffffffff
ffffffffffff
7ff7ff7ff7ff
ffffffffffff
ffffffffffff
fffffffdffff
fffff7ffffff
ffffffffffff
ffbfffffffff
ffffffffffff
fffffffffff7
ffffffffffff
ffffffffffff
ffffffffffff
7ff7ff7ff7ff
ffffffffffff
ffffffffffff
ffffffffffff
f7ffffffffff
ffffffffffff
fffffffffffb
ffffffffffff
ffffffff7fff
7fffdfffffff
ffffffffffff
ffffffffffff
>}
image
grestore
The code above represents one unit of dots, which is 2.03 mm × 2.03 mm at 600 dpi.
translate is used to move user space to the desired position. For example:
Suppose a unit is at position (1 mm, 1 mm).
The unit to its right is at (3.03 mm, 1 mm).
The unit above it is at (1 mm, 3.03 mm).
scale is used to change the output resolution to 600 dpi. The factor is calculated as 72 × 48 / 600 = 5.76.
The parameters 48 48 1 [48 0 0 -48 0 48] are the image width, height, and bits per sample, plus the image matrix, which controls how the sample data is scanned horizontally and vertically.
The hex string at the end is the image data: the binary values of the dots.
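The geometry above can be sanity-checked with a short Python sketch (the numbers are taken directly from the question; the variable names are mine):

```python
# Sanity-check the unit geometry: 48x48 one-bit pixels at 600 dpi,
# drawn in PostScript's default 72-units-per-inch user space.
pixels = 48
dpi = 600
pt_per_inch = 72.0
mm_per_inch = 25.4

# scale factor so that 48 image pixels cover exactly 48/600 inch of output
scale = pt_per_inch * pixels / dpi
print(scale)  # 5.76

# physical size of one unit in millimetres
unit_mm = pixels / dpi * mm_per_inch
print(round(unit_mm, 2))  # 2.03
```

The 2.03 mm unit size and the 3.03 mm spacing between unit origins (1 mm + 2.03 mm) in the question both follow from this arithmetic.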
These dots can be opened with Ghostscript or Adobe Illustrator and are displayed very distinctly and clearly on the monitor.
Sample dots zoomed to 4800% in Adobe Illustrator.
However, the printed dots show texture stripes. The printers I use are a Canon iP2780 ink-jet printer and a Fuji Xerox DocuPrint CP105b laser printer.
Printing result at 600 dpi (laser): the vertical texture stripes are less obvious than at 800 dpi.
Printing result at 800 dpi (laser): the vertical texture stripes are more obvious. Lines 2, 4 and 6 are lighter than lines 1, 3 and 5, although the density of dots should be the same.
Printing result at 800 dpi (ink-jet): both horizontal and vertical texture stripes appear.
Could someone explain how this strange printer behavior happens? Or is the way I compose the unit of dots wrong?
Can I use translate to move user space frequently, especially when the positions are float values (is the precision sufficient)?
Can I use scale to manually change the resolution to 600 dpi? Is there another method to set the input resolution?
Thanks in advance!

What you are seeing is aliasing of your signal, a moiré pattern to be exact. The dots you print do not line up exactly with the printer's dot matrix (screen).
Different printers have different screens, and your pixels align to them differently. As a result, your dot is sometimes spread over two printer pixels and sometimes not. If you really want to use this method, each device needs its own halftone pattern.
Further reading:
Halftoning
Article about halftones

joojaa is right; it's an interference pattern between your 600 dpi dot pattern and the printer's output resolution. You need to either find a higher-resolution printer or reduce the resolution of your dot pattern. Try reducing the number 48 in your image parameters 48 48 1 [48 0 0 -48 0 48] in steps of 5 or 10 or so and printing again until the pattern disappears; that will be the best your printer can do.


Calculate logical pixels from millimeters

I have a design with widths, heights, paddings... in millimeters. I'm now trying to figure out how to convert those numbers to the logical pixel system Flutter uses.
I found the docs for the device pixel ratio but I'm not sure how to interpret that number and I quote:
The Flutter framework operates in logical pixels, so it is rarely necessary to directly deal with this property.
So I am not sure if this is the way to go.
My question comes down to this: Is there an easy way to calculate from millimeter to logical pixels that works for both Android and iOS?
Flutter's unit is the dp or dip (density-independent pixel).
As its name implies, it is independent of the screen's pixel ratio.
What's the difference from millimeters? Nothing really important:
converting mm → dp or dp → mm is just like converting mm → inch or inch → mm.
The relationship between them is fairly simple:
1 inch = 25.4 mm = 160.0 dp
Which means 1 mm ≈ 6.299 dp
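As a cross-check, the same arithmetic in Python (the function names are mine, chosen for illustration):

```python
MM_PER_INCH = 25.4
DP_PER_INCH = 160.0  # baseline density: 1 inch = 160 dp

def mm_to_dp(mm: float) -> float:
    """Convert millimetres to density-independent pixels."""
    return mm / MM_PER_INCH * DP_PER_INCH

def dp_to_mm(dp: float) -> float:
    """Convert density-independent pixels to millimetres."""
    return dp / DP_PER_INCH * MM_PER_INCH

print(round(mm_to_dp(1.0), 3))   # 6.299
print(round(dp_to_mm(160.0), 1)) # 25.4
```

In Dart the bodies would be identical one-liners; only the constants matter.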
I would say the currently accepted answer is not really accurate.
You can print the number of logical pixels for real phones, like this:
maxWidth = MediaQuery.of(context).size.width;
maxHeight = MediaQuery.of(context).size.height;
print("maxWidth = $maxWidth / maxHeight = $maxHeight");
I would appreciate it if someone with real iPhones and iPads could run this code and tell me the resulting maxWidth and maxHeight. So far, with Android phones, I've compiled the results by comparing against the real phone sizes. What I got is this:
Galaxy S6 ➜ 5.668537826 dp/mm for height, and 5.668537826 dp/mm for width.
Galaxy S7 ➜ 5.668537826 dp/mm for height, and 5.668537826 dp/mm for width.
Galaxy S9 ➜ 5.223614747 dp/mm for height, and 5.585946405 dp/mm for width.
Pixel XL ➜ 5.612956709 dp/mm for height, and 6.007177748 dp/mm for width.
See my spreadsheet here: https://docs.google.com/spreadsheets/d/1zmGyeKSf4w4B-bX4HSY4oSuh9RkIIkFwYSd3P9C7eX8/edit?usp=sharing
Update:
Android docs (https://developer.android.com/training/multiscreen/screendensities#TaskUseDP) say "One dp is a virtual pixel unit that's roughly equal to one pixel on a medium-density screen (160dpi; the "baseline" density)". That's obviously not true in practice for Samsung and Google phones. Or it is, if you pay attention to the word "roughly".
Flutter docs (https://api.flutter.dev/flutter/dart-ui/FlutterView/devicePixelRatio.html) say it's 3.8 logical pixels per mm, which is obviously very false.
A logical pixel relates to dots per unit distance (mm), so the question really becomes: how many dots per mm does one logical pixel represent?
As it is mentioned here:
Flutter follows a simple density-based format like iOS. Assets might be 1.0x, 2.0x, 3.0x, or any other multiplier. Flutter doesn't have dps but there are logical pixels, which are basically the same as device-independent pixels. The so-called devicePixelRatio expresses the ratio of physical pixels in a single logical pixel.
As mentioned, a 1.0x logical pixel ratio corresponds to mdpi among Android's density qualifiers, and mdpi ≈ 160 dpi. Since dpi is the number of individual dots that can be placed in a line within the span of one inch (2.54 cm):
160 dpi = 160 dots per inch ≈ 6.299 dots per mm
And since mdpi ≈ 160 dpi and 1 logical pixel corresponds to mdpi:
1.0x logical pixel ratio ≈ 6.299 dots per mm
To display a widget at its real size:
double millimeterToSize(double millimeter) => millimeter * 160 * 0.03937;

mapping highlights/annotations to text in pdf

So I have this sample PDF file with three words on separate lines:
"
hello
there
world
"
I have highlighted the word "there" on the second line. Internally, within the PDF, I'm trying to map the highlight/annotation structure to the text (BT) area.
The section corresponding to the word "there" looks like so:
BT
/F0 14.6599998 Tf
1 0 0 -1 0 130 Tm
96 0 Td <0057> Tj
4.0719757 0 Td <004B> Tj
8.1511078 0 Td <0048> Tj
8.1511078 0 Td <0055> Tj
4.8806458 0 Td <0048> Tj
ET
I also have an annotation section where I have my highlight which has the following rect dimensions:
18 0 19 15 20 694 21 786 22 853 23 1058 24 1331 [19 0 R 20 0 R]<</AP<</N 10 0 R>>
...
(I left the top part of the annotation out on purpose because it is long; I extracted what I thought were the most important parts.)
Rect[68.0024 690.459 101.054 706.37]
I'm kind of confused about how my text is mapped to this one highlight that I have. The coordinates do not seem to match (130 y vs 690 y)? Am I looking in the right place and interpreting my text and/or highlight annotation coordinates correctly?
Update:
I want to add more info on how I created this test PDF.
It's pretty simple to recreate. I went to Google Docs and created an empty document, and on three lines I wrote my text as described above. I downloaded that as a PDF and then opened it in Adobe Acrobat Reader DC (the newest one, I think). I then used Acrobat Reader to highlight the specified line and re-save it. After that I used some Python to unzip the PDF sections.
The Python code to decompress the PDF sections (note that since the file is opened in binary mode, the regular expression and strip argument must be bytes, not str):
import re
import zlib

pdf = open("helloworld.pdf", "rb").read()
stream = re.compile(rb'.*?FlateDecode.*?stream(.*?)endstream', re.S)
for s in stream.findall(pdf):
    s = s.strip(b'\r\n')
    try:
        print(zlib.decompress(s))
        print("")
    except zlib.error:
        pass
Unfortunately the OP only explained how he created his document and did not share the document itself. I followed his instructions but the coordinates of the annotation differ. As I only have this document for explanation, though, the OP will have to mentally adapt the following to the precise numbers in his document.
The starting coordinate system
The starting (default) user coordinate system in the document is implied by the crop box. In the document at hand the crop box is defined as
/CropBox [0 0 596 843]
i.e. the visible page is 596 units wide and 843 units high (given the default user unit of 1/72", this is an A4 format), and the origin is in the lower left corner. x coordinates increase to the right, y coordinates increase upwards; thus, a coordinate system like the one usually used in math, too.
The annotation rectangle
This also is the coordinate system of the annotation rectangle coordinates.
In the case at hand they are
/Rect [68.0595 741.373 101.138 757.298]
i.e. the rectangle with the lower left corner at (68.0595, 741.373) and the upper right at (101.138, 757.298).
Transformations of the coordinate system
In the page content stream up to the text object already identified by the OP the coordinate system gets transformed a number of times.
Mirroring, translation
In the very first line of the page content
1 0 0 -1 0 843 cm
This transformation moves the origin up by 843 units and mirrors (multiplies by -1) the y coordinate.
Thus, we now have a coordinate system with the origin in the upper left and y coordinates increasing downwards.
Scaling
A bit later in the content stream the coordinate system is scaled
.75062972 0 0 .75062972 0 0 cm
Thus, the coordinate units are compressed to about 3/4 of their original width and height, i.e. each unit along the x or y is only 1/96" wide/high.
The text "there"
Only after these transformations have been applied to the coordinate system, the text object identified by the OP is drawn. It starts by setting and changing the text matrix:
1 0 0 -1 0 130 Tm
This sets the text matrix to translate by 130 units in y direction and mirroring y coordinates once again. (Mirroring back again is necessary as otherwise the text would be drawn upside down.)
96 0 Td
This changes the text matrix by moving 96 units along the x axis.
The starting point where the text is drawn is therefore the origin of the coordinate system, changed first by the mirroring/translation and the scaling of the current transformation matrix, and then by the mirroring and translation according to the text matrix.
Does it match?
Which coordinate would this point be in the default user coordinate system?
x = (0 + 96) * 0.75062972 ≈ 72
y = (((0 * (-1)) + 130) * 0.75062972) * (-1) + 843 ≈ 745.4
This matches with the annotation rectangle (see above) with x coordinates between 68.0595 and 101.138 and y coordinates between 741.373 and 757.298.
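The two transformation chains can also be checked mechanically. A small Python sketch, using the numbers from the answer's reconstructed document (variable names are mine):

```python
# Map the text origin (after "96 0 Td") back to default user space and
# check that it falls inside the annotation rectangle.
s = 0.75062972          # scaling cm in the content stream
page_h = 843.0          # from "1 0 0 -1 0 843 cm" (mirror + translate)
tx, ty = 96.0, 130.0    # from "96 0 Td" and "1 0 0 -1 0 130 Tm"

x = tx * s              # only scaled; x is not mirrored
y = page_h - ty * s     # undo the y-mirroring and translation

# /Rect [llx lly urx ury] of the highlight annotation
rect = (68.0595, 741.373, 101.138, 757.298)
print(round(x, 1), round(y, 1))  # 72.1 745.4
print(rect[0] <= x <= rect[2] and rect[1] <= y <= rect[3])  # True
```

The point lands inside the rectangle, confirming the answer's conclusion.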
So
I'm kind of confused about how my text is mapped to this one highlight that I have. The coordinates do not seem to match (130 y vs 690 y)? Am I looking in the right place and interpreting my text and/or highlight annotation coordinates correctly?
The coordinates do match, you merely have to make sure you apply the transformations of the current transformation matrix and the text matrix.

How to get the accurate font size(height) in pdf

I have a sample PDF (attached); it includes a text object and a rectangle object that have almost the same height. I checked the content of the PDF using iText RUPS, as below:
1 1 1 RG
1 1 1 rg
0.12 0 0 0.12 16 50 cm
q
0 0 m
2926 0 l
2926 5759 l
0 5759 l
0 0 l
W
n
Q
1 1 1 RG
1 1 1 rg
q
0 0 m
2926 0 l
2926 5759 l
0 5759 l
0 0 l
W
n
/F1 205.252 Tf
BT
0 0 0 RG
0 0 0 rg
/DeviceGray CS
/OC /oc1 BDC
0 -1 1 0 1648 5330 Tm
0 Tc
100 Tz
(Hello World) Tj
ET
Q
q
0 0 m
2926 0 l
2926 5759 l
0 5759 l
0 0 l
W
n
0 0 0 RG
0 0 0 rg
/DeviceGray CS
6 w
1 j
1 J
1649 5324 m
1649 4277 l
1800 4277 l
1800 5324 l
1649 5324 l
S
EMC
Q
Obviously the user-space matrix is determined by [0.12 0 0 0.12 16 50], so the height of the rectangle is (1800 − 1649) × 0.12 = 18.12, and for the font size I get 205.252 × 0.12 = 24.63024. Since the two values are not close, my problem is: how do I get the height/size of the font?
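The question's arithmetic can be reproduced with a short Python check (numbers copied from the content stream above; variable names are mine):

```python
# The CTM [0.12 0 0 0.12 16 50] scales all content-stream units by 0.12.
ctm_scale = 0.12

# Extent of the rectangle between x = 1649 and x = 1800; because the text
# matrix "0 -1 1 0 ... Tm" rotates the text by 90 degrees, this extent
# runs along the glyphs' vertical direction.
rect_height = (1800 - 1649) * ctm_scale
font_size = 205.252 * ctm_scale

print(round(rect_height, 2))              # 18.12
print(round(font_size, 5))                # 24.63024
print(round(rect_height / font_size, 2))  # 0.74 -> noticeably below 1
```

The ratio of roughly 0.74 is the gap the answers explain: the font size also reserves room for descenders and inter-line spacing, not just the visible glyph height.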
sample.pdf
OK - I took a look at your file and you're basically hosed. That's the scientific answer, now let me clarify :)
Bad PDF!
The PDF you have up there as a sample contains a font that is not embedded. That "/F1 Tf" command you have there points to the font "ArialMT" in the resources dict for that page. Because the font has not been embedded, you only have two options:
Try to find the actual font on the system and extract the necessary information from there.
Live with the information in the PDF. Let's start with that.
Font Descriptor
Here is an image from pdfToolbox examining the font in the PDF file (caution: I'm associated with this tool):
I've cut off some of the "Widths" table, but other than that this is all of the information you have in the PDF document for this font. And this means you can access the widths for each glyph, but you don't have access to the heights of each glyph. The only information you have regarding heights is the font bounding box which is the union of all glyph bounding boxes. In other words, the font bounding box is guaranteed to be big enough to contain any glyph from the font (both horizontally and vertically).
System Information
You don't say why you need this information, so it becomes a little harder to advise further. But if you can't get the information from the PDF, your only option is to live with the inaccurate information from the PDF or to turn to the system your code is running on to get you more.
If you have the ArialMT font installed, you could basically try to find the font file and then parse the TrueType font file to find the bounding boxes for each glyph. I've done that, it's not funny.
Or you can see if your system can't provide you with the information in a better way. Many operating systems / languages have text calls that can get accurate measurements for you. If not, you can brute force it by rendering the text you want in black on a white image and then examining the pixels to see where you hit and thus how big the largest glyph in your text string was.
Wasteful though that last option sounds, it's probably the quickest and easiest to implement and it - depending on your needs - may actually be the best option all around.
I have a sample pdf (attached), and it includes a text object and a rectangle object that have almost the same height.
Indeed, your PDF is displayed like this:
But looking at this one quickly realizes that the glyphs in your text "Hello World" do not extend beneath the base line like a 'g', 'j' or some other glyphs would:
(The base line is the line through the glyph origins)
Since the two values are not close, my problem is how to get the height/size of the font
Obviously the space required for such descenders beneath the base line must also be part of the font size.
Thus, it is completely correct and not a problem that the height of the box (18.12) is considerably smaller than the font size (24.63024).
BTW, this corresponds to the specification, which describes a font size of 1 as arranged so that the nominal height of tightly spaced lines of text is 1 unit; cf. section 9.2.2 "Basics of Showing Text" of ISO 32000-1. Tightly spaced lines obviously need to include not only the glyph parts above the base line but also those below. Additionally, the font size includes a small gap between such lines, as even tightly spaced lines are not expected to touch each other.

8086 Assembly: wrong data reading video memory

I'm working on a little game programmed in 8086 assembly language for a school project.
I have to draw on the screen (color some pixels). To do so I use interrupt 10h with mode 13h (ax = 13h), which is a 320 px × 200 px video mode.
(Note: the text below is easiest to follow with the code open in another tab.)
I want to first initialize the screen so I'm sure each pixel is black. To do so I first initialize a palette with black = color number 0.
After that I use a primitive for-loop procedure I wrote to initialize the screen (set each pixel black). I pass as arguments the start index (the index into video memory, i.e. 0 for the first pixel), the stop index (64 000, the last pixel, since 320 px × 200 px = 64 000), and the step size by which the index is incremented.
So all it does is loop from the specified begin address to the specified stop address in memory and set each address to 0 (because black = color number 0 of the palette).
So normally every pixel of my screen is now black. Indeed, when I launch my little program, the 320 × 200 video mode appears and the screen is black.
Further on in the program I often have to compare the color of a pixel on the screen. Normally, when I access a certain address in video memory, it should be 0 (because I initialized the whole screen to black, color number 0), except if I colored that pixel with another color.
But when testing my program I found that certain pixels were black on the screen (and since the initialization I never changed their color), yet when I displayed their value it appeared to be 512 instead of 0. I cannot understand why, since I never changed the color after initializing them.
I spent hours trying to debug it, but I cannot figure out why that pixel suddenly changes from color number 0 of the palette (black) to 512.
Because the pixel with color value 512 is also black on the screen, I suppose that is also a value for that color, but I explicitly want to use color number 0 for black so that I can compare against it (because now there is 0 but also 512 for black, and maybe other black values).
Relevant part of the code:
mov ax, 0a000h ; begin address of video memory
mov es, ax
mov ax, [bp+4][0] ; We put the 1st argument (index) in register ax
mov di, ax
;;;; FOR DEBUGGING PURPOSES
mov ax, es:[di]
push ax ; We print the color of the pixel we are checking (normally has to be 0 if that pixel is black on the screen)
call tprint ; 70% of the time the printed color number is 0 but sometimes it prints color number 512 (also a black color but I don't want that, I initialized it to 0!!)
;;;; END DEBUG
;;;; ALSO STRANGE IS THAT WHEN I OUTCOMMENT THESE 3 LINES ABOVE, THE LAST PIXEL OF THE FIRST ROW IS COLORED
;;;; WHEN I LEAVE THESE 3 LINES LIKE NOW (PRINTING THE VALUE OF THAT PIXEL) IT IS THE NEXT PIXEL THAT IS COLORED
;;;; (strange but i don't really care since it was introduced only to debug)
CMP es:[di], 0 ; Comparison to see if the pixel we are checking is black.
; But when it is 512, my program will think it isn't the black color, and will stop executing (because after this call I do a JNZ jump to quit the loop)
Thanks for your help!
As @nrz hinted, the problem is with the data size, although slightly different from what he described. You are actually loading 2 bytes, i.e. 2 pixels at once instead of 1. Because x86 is little-endian, you get a value of 512 (0x0200) whenever a pixel with color 0 is followed by a pixel with color 2.
You need to change line 182 to movzx ax, byte ptr es:[di] and line 190 to cmp byte ptr es:[di], 0 (use whatever syntax your assembler supports for byte operations). Note that movzx only exists on the 386 and later; on a plain 8086, use xor ah, ah followed by mov al, es:[di] instead.
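The 512 is easy to reproduce outside the emulator. In mode 13h each pixel is one byte, so a 16-bit load from little-endian memory combines two pixels into one word; a small Python sketch of the same byte layout:

```python
import struct

# Two adjacent mode-13h pixels: color 0 at es:[di], color 2 at es:[di+1]
vram = bytes([0, 2])

# "mov ax, es:[di]" loads a 16-bit little-endian word, i.e. BOTH pixels:
# low byte = 0, high byte = 2 -> 2 * 256 = 512
word = struct.unpack('<H', vram)[0]
print(word)  # 512

# A byte-sized load ("mov al, es:[di]") sees only the first pixel
print(vram[0])  # 0
```

This is exactly why the byte-sized mov/cmp in the answer fixes the comparison.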

Duplicate iOS kCGBlendModeSoftLight blending

I'm trying to duplicate the CoreGraphics kCGBlendModeSoftLight blending using shaders. (I've already implemented a few other CG blend modes.)
The problem is that while there are a lot of (different) descriptions of Photoshop's Soft Light blending, I'm interested in CoreGraphics' version.
Does anyone know the exact formula used in CG to determine the result of the blend?
For CG, it's not explicitly documented, but the documentation for Core Image's CISoftLightBlendMode filter says:
The formula used to create this filter is described in the PDF specification, which is available online from the Adobe Developer Center.
Here's the page you can get the PDF Reference from. The formula given is (in pseudocode, transcribed by me from the mathematical syntax in the PDF while hoping I didn't mess any part of it up):
D(float x) =
x ≤ 0.25
? ((16.0 × x - 12.0) × x + 4.0) × x
: sqrt(x)
softlight(__color backdrop, __color source) =
source ≤ 0.5
? backdrop - (1.0 - 2.0 × source) × backdrop × (1.0 - backdrop)
: backdrop + (2.0 × source - 1.0) × (D(backdrop) - backdrop)
(For GLSL, you'd use vec4 instead of CIKL's __color.)
The introduction to the section notes that blend modes whose definitions use a particular notation are “separable”, meaning that the formula is applied to the components separately. The soft light blend mode is one of these blend modes, so you don't need to compute a luminance value or anything like that.
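For reference, here is the PDF-specification formula as a per-channel Python sketch (values in 0..1; this is my transcription for illustration, not CoreGraphics' actual implementation; a GLSL shader would apply the same logic to each of r, g, b):

```python
import math

def D(x: float) -> float:
    # Helper curve from the PDF soft-light definition
    return ((16.0 * x - 12.0) * x + 4.0) * x if x <= 0.25 else math.sqrt(x)

def soft_light(backdrop: float, source: float) -> float:
    """PDF-specification soft-light blend for a single color channel."""
    if source <= 0.5:
        return backdrop - (1.0 - 2.0 * source) * backdrop * (1.0 - backdrop)
    return backdrop + (2.0 * source - 1.0) * (D(backdrop) - backdrop)

print(soft_light(0.3, 0.5))    # source == 0.5 leaves the backdrop unchanged: 0.3
print(soft_light(0.25, 0.75))  # 0.375
print(soft_light(1.0, 1.0))    # 1.0
```

Because the mode is separable, no luminance computation is needed; each channel is blended independently.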
