I'm working on a school project: a little game programmed in 8086 assembly language.
I have to draw on the screen (color some pixels). To do so I use interrupt 10h with mode 13h (ax = 13h), which is a 320 x 200 pixel video mode.
(Note: the text below is easiest to follow with the code open in another tab, so you can match what I explain in words to the code.)
I want to first initialize the screen so I'm sure each pixel is black. To do so I first initialize a palette with black = color number 0.
After that I use a primitive for-loop procedure I wrote to initialize the screen (set every pixel to black). I pass as arguments the start index (the offset in video memory, i.e. 0 for the first pixel), the stop index (64 000, since 320 x 200 = 64 000 pixels) and the step size by which the index is incremented.
So all it does is loop from the specified start address to the specified stop address in video memory, setting each address to 0 (because black is color number 0 of the palette).
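To make the loop's contract concrete, here is the same logic expressed in Python rather than assembly (an illustration only; the bytearray stands in for the 64 000 bytes of mode 13h video memory at segment A000h):

# Stand-in for the 64 000 bytes of mode 13h video memory at segment A000h.
screen = bytearray(320 * 200)

def init_screen(start, stop, step):
    # The "primitive for loop": walk from start to stop, setting each byte
    # (one pixel = one palette index) to 0, i.e. black in my palette.
    for index in range(start, stop, step):
        screen[index] = 0

init_screen(0, 64000, 1)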
So normally now every pixel of my screen is black. Indeed when I launch my little program, the 320x200 video mode appears and the screen is black.
Later in the program I often have to check the color of a pixel on the screen. Normally, when I access a certain address in video memory it should be 0 (because I initialized the whole screen to black, color number 0), unless I colored that pixel with another color.
But when testing my program I found that certain pixels were black on the screen (and I never changed their color after the initialization), yet when I displayed their value it turned out to be 512 instead of 0. I cannot understand why, since I never changed the color after initializing them.
I spent hours trying to debug it but I cannot figure out why that pixel suddenly changes from color number 0 of the palette (black) to 512.
Because the pixel with value 512 is also black on the screen, I suppose that is another value for that color, but I explicitly want to use color number 0 for black so that I can compare against it (right now there is 0 but also 512 for black, and maybe other black values as well).
Relevant part of the code:
mov ax, 0a000h ; begin address of video memory
mov es, ax
mov ax, [bp+4][0] ; We put the 1st argument (index) in register ax
mov di, ax
;;;; FOR DEBUGGING PURPOSES
mov ax, es:[di]
push ax ; We print the color of the pixel we are checking (normally has to be 0 if that pixel is black on the screen)
call tprint ; 70% of the time the printed color number is 0 but sometimes it prints color number 512 (also a black color but I don't want that, I initialized it to 0!!)
;;;; END DEBUG
;;;; ALSO STRANGE IS THAT WHEN I OUTCOMMENT THESE 3 LINES ABOVE, THE LAST PIXEL OF THE FIRST ROW IS COLORED
;;;; WHEN I LEAVE THESE 3 LINES LIKE NOW (PRINTING THE VALUE OF THAT PIXEL) IT IS THE NEXT PIXEL THAT IS COLORED
;;;; (strange but i don't really care since it was introduced only to debug)
CMP es:[di], 0 ; Comparison to see if the pixel we are checking is black.
; But when it is 512, my program will think it isn't the black color, and will stop executing (because after this call I do a JNZ jump to quit the loop)
Thanks for your help!
As #nrz hinted, the problem is with data size, although it's slightly different from what he described. You are actually loading 2 bytes, i.e. 2 pixels at once instead of 1. You get a value of 512 when a pixel with color 0 sits beside a pixel with color 2.
You need to change line 182 to movzx ax, byte ptr es:[di] and line 190 to cmp byte ptr es:[di], 0 (use whatever syntax your assembler supports for byte operations).
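To see where the 512 comes from: mov ax, es:[di] is a 16-bit load, so the byte at di lands in AL and the byte at di+1 lands in AH. A quick sketch of that arithmetic (Python here, purely to show the byte layout):

# Two adjacent mode 13h pixels: palette index 0 at es:[di], index 2 at es:[di+1].
two_pixels = bytes([0, 2])

# A 16-bit x86 load is little-endian: low byte first, high byte second.
word = int.from_bytes(two_pixels, byteorder='little')
print(word)   # 512 == 0 + 2 * 256, even though the pixel you meant to read is 0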
I have the following issue:
I'm creating a uniform gray video (for testing) using the OpenCV VideoWriter. The output video should reproduce a constant image where all the pixels have the same value x (25, 51, 76, and so on).
When I generate the video using MJPG Encoder:
vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'MJPG'),
                     fps, (resolution[1], resolution[0]))
and read the output using the VideoCapture class, everything just works fine: I get a frame array with all pixel values set to 25, 51, 76 and so on.
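For completeness, a minimal self-contained version of this write-then-read test looks roughly like the following (the fps and resolution values and the frame-writing loop are filled in as assumptions, since only the constructor call is shown above):

import cv2
import numpy as np

# Assumed test parameters (not taken from the original script)
fps = 30
resolution = (480, 640)          # (height, width)
value = 25                       # the constant gray value under test

vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'MJPG'),
                     fps, (resolution[1], resolution[0]))

# Write a few seconds of a constant-valued frame
frame = np.full((resolution[0], resolution[1], 3), value, dtype=np.uint8)
for _ in range(fps * 3):
    vw.write(frame)
vw.release()

# Read it back and inspect the pixel values
vc = cv2.VideoCapture('./videos/input/gray1.mp4')
ok, out = vc.read()
vc.release()
print(out[0, 0])                 # expected: [value value value]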
However, when I generate the video using HEV1 (H.265) or H264:
vw = cv2.VideoWriter('./videos/input/gray1.mp4',
                     cv2.VideoWriter_fourcc(*'HEV1'),
                     fps, (resolution[1], resolution[0]))
I run into the following issue. The frame I get back, in BGR format, looks like this:
The blue channel value is the expected value (x) minus 4 (25-4=21, 51-4=47, 76-4=72, and so on).
The green channel is the expected value (x) minus 1 (25-1=24, 51-1=50, 76-1=75).
The red channel is the expected value (x) minus 3 (25-3=22, 51-3=48, 76-3=73).
Notice that each channel is reduced by a constant amount (4, 1, 3) regardless of the pixel value (so the effect is constant).
I could have explained an offset that depends on the pixel value, but not a fixed one like this.
What is worse, if I generate a video whose frames consist of pure colors (pixel values [255 0 0], [0 255 0] and [0 0 255]), I get the corresponding output values [251 0 0], [0 254 0] and [0 0 252].
I thought this relation might be connected to the grayscale Y value, where:
Y = 76/256 * RED + 150/256 * GREEN + 29/256 * BLUE
But these coefficients do not match the output I obtain. Maybe the problem is in the reading with VideoCapture?
EDIT:
If I want the output pixels to all have the same value (e.g. [10, 10, 10]), experimentally I have to create an image where the red and blue channels have the green channel's value plus 2:
value = 10
img = np.zeros((resolution[0],resolution[1],3),dtype=np.uint8)+value
img[:,:,2]=img[:,:,2]+2
img[:,:,1]=img[:,:,1]+0
img[:,:,0]=img[:,:,0]+2
Has anyone experienced this issue? Is it related to the encoding process, or does OpenCV treat the image differently before encoding depending on the fourcc parameter value?
I am trying to accomplish something a bit backwards from everyone else. Given an array of sensor data, I wish to print a graph plot of it. My test bench uses a stepper motor to move the input shaft of a sensor, stop, read the ADC value of the sensor's voltage, and repeat.
My current version 0.9 bench does not have a graphical output. The proper end solution will. Currently, I have 35 data points, and I'm looking to get 90 to 100. The results are simply stored in an int array. The index is linear, so it's not a complicated plot, but I'm having problems conceptualizing the plot from bottom-left to top-right to display to the operator. I figure on the TFT screen, I can literally translate an origin and then draw lines from point to point...
Worse, I also want to print this out to a thermal printer, so I'll need to translate it into a graph less than 384 pixels wide. I'm not too worried about the details of communicating the image to the printer, but about how to convert the array to an image.
It gets better: I'm doing this on an Arduino Mega, so the libraries aren't very robust. At least it has a lot of RAM for the code. :/
Here's an example of my data from the Arduino test fed into Excel. I'm not looking for color, but I'd like the graph to be produced without this setup being connected to a computer, or the network. This is the ESC/POS printer, by the way.
The algorithm for this took three main stages:
1) Translate the Y from top left to bottom left.
2) Break up the X into word:bit values.
3) Use Bresenham's algorithm to draw lines between the points. And then figure out how to make the line thicker.
For my exact case, the target bitmap is 384x384, so it requires roughly 18 KB (384 x 384 / 8 = 18,432 bytes) of SRAM to store in memory. I had to ditch the "lame" Arduino Mega and upgrade to the ChipKIT uC32 to pull this off: 32k of RAM, 80 MHz CPU, & twice the I/O!
The way I figured this out was to base my logic on Adafruit's Thermal printer library for Arduino. Their examples include how to convert a 1-bit bitmap into a static array for printing. I used their GFX library to implement the setXY function, as well as their GFX Bresenham line algorithm to draw lines between (X,Y) points using my setXY().
It all boiled down to the code in this function I wrote:
// *bitmap is global or class member pointer to byte array of size 384/8*384
// bytesPerRow is 384/8
void setXY(int x, int y) {
  // integer divide by 8 (/8) because array size is byte or char
  int xByte = x/8;
  // modulus 8 (%8) to get the bit to set
  uint8_t shifty = x%8;
  // right shift because we start from the LEFT
  int xVal = 0x80 >> shifty;
  // inverts Y from bottom to start of array
  int yRow = yMax - y;
  // Get the actual byte in the array to manipulate
  int offset = yRow*bytesPerRow + xByte;
  // Use logical OR in case there is other data in the bitmap,
  // such as a frame or a grid
  *(bitmap+offset)|=xVal;
}
The big point to remember is that with the array we start at the top left of the bitmap, go right across the row, then drop down one Y row and repeat. The gotchas are in translating the X into the word:bit combination: you've got to shift from the left (sort of like translating the Y backwards). The other gotcha is an off-by-one error in the bookkeeping for the Y.
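To make the word:bit translation and the line drawing concrete, here is a small host-side Python sketch of the same idea (this is not the Arduino code; the sample data and scaling are made up, and it assumes yMax is 383, i.e. the bitmap height minus one):

# Pack a 384x384 1-bit bitmap and plot an int array by drawing lines between points.
WIDTH = HEIGHT = 384
BYTES_PER_ROW = WIDTH // 8
bitmap = bytearray(BYTES_PER_ROW * HEIGHT)

def set_xy(x, y):
    x_byte = x // 8                    # which byte in the row holds this pixel
    x_val = 0x80 >> (x % 8)            # shift from the LEFT: x % 8 == 0 is bit 7
    y_row = (HEIGHT - 1) - y           # invert Y: plot origin is bottom-left
    bitmap[y_row * BYTES_PER_ROW + x_byte] |= x_val

def draw_line(x0, y0, x1, y1):
    # Plain integer Bresenham, in the spirit of the Adafruit GFX line routine.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        set_xy(x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

# Hypothetical sensor readings, already scaled to fit the bitmap height.
samples = [10, 40, 90, 160, 250, 310, 350]
xs = [i * (WIDTH - 1) // (len(samples) - 1) for i in range(len(samples))]
for i in range(len(samples) - 1):
    draw_line(xs[i], samples[i], xs[i + 1], samples[i + 1])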
I put all of this in a class, which kept me from writing one big function to do it all, and the better design made the implementation easier than I thought it would be.
Pic of the printout:
Write-up of the project is here.
I'm having difficulty printing image data in page mode. I've been able to print image data in standard mode as follows:
data[] = { ESC,
           '*',
           0,       // m: 8-dot single-density mode
           width,   // nL: image width (low byte)
           0 };     // nH: image width (high byte)

for each 8 x image_width block of pixels in a monochrome image
    for each 8 x 1 (vertical) strip of pixels in the block
        append the strip's pixel (0 or 1) data to the array, data[]
    write data to COM port
My (unsuccessful) attempt at printing in page mode is a variation of the above and proceeds as follows:
select page mode by writing the chars, ESC and 'L' to the COM port
write pixel data as described above
print by writing the characters ESC and FF
What am I doing wrong? Do I have to specify a print region or something of the sort?
BTW, I'm programming an Epson TM-T88III.
Found the answer. Write the ESC J n (print and paper feed) command after writing each 8 x image_width block of pixels to the COM port.
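A rough Python sketch of that loop (using pyserial; the port settings, the ESC J feed amount, and the image representation are assumptions, not taken from the original code):

import serial

ESC = b'\x1b'

def print_image(port_name, image, width, height):
    """image[y][x] is 0 (white) or 1 (black)."""
    com = serial.Serial(port_name, 9600, timeout=1)
    for band in range(0, height, 8):                  # one 8-dot-tall block at a time
        data = ESC + b'*' + bytes([0, width % 256, width // 256])  # ESC * m nL nH
        for x in range(width):                        # one byte per 8 x 1 vertical strip
            col = 0
            for bit in range(8):                      # bit 7 is the top dot of the strip
                y = band + bit
                if y < height and image[y][x]:
                    col |= 0x80 >> bit
            data += bytes([col])
        data += ESC + b'J' + bytes([24])              # ESC J n: print the block and feed
        com.write(data)                               # (n = 24 is a guess; tune per printer)
    com.close()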
In my application (Delphi 2010, OpenGL, Windows XP), I need to read back the pixels of variable portions of the framebuffer.
The area of interest is input by the user through a selection rectangle (x1, y1, x2, y2).
With this coordinates I do this:
var
  pixels : PGLUByte; // pointer to unsigned bytes
begin
  [Transformation of coordinates to OpenGL viewport offsets]

  // reserve a block of memory for readpixels to write to
  ReallocMem(pixels, width * height * sizeof(GLUByte) * 3); // <<< crash on this line after a few iterations
  if not assigned(pixels) then exit;

  // read the pixels
  glReadPixels(startx, viewport[3] - (starty + height),
               width, height,
               GL_RGB, GL_UNSIGNED_BYTE,
               pixels);

  // Processing of the pixel data follows here...

  // when done, release the memory
  ReallocMem(pixels, 0);
end;
This function seems to work as intended for the first few tries, but after a few calls the application crashes with an access violation at $00000000 on the first ReallocMem.
I tried using GetMem, Finalize and FreeMem, but these functions led to the same behaviour.
Is my design correct in principle? I tried to debug it, but I could not identify the cause of the trouble. width and height always have plausible values, and allocating 5-10 blocks of 30 to 120 KiB should not be an issue on a machine with 3 GB of RAM.
Update
Between calls to this function, the render pipeline might draw a few frames and objects may be added to the scene; in principle anything the application is capable of, as this function is called when the user decides to select a rectangular portion of my scene for capture by dragging a selection box over my canvas.
Here is a sample of widths and heights from a debug session of mine:
width : 211 height: 484 size: 306372
width : 162 height: 395 size: 191970
width : 123 height: 275 size: 101475
width : 14 height: 346 size: 14532
The fourth selection failed in this session. In other sessions more successive selections were possible; some crashed when trying the second, but none crashed on the first.
Another thing: when I comment out glReadPixels, no more crashes appear.
I found it after all.
My calculation of width and height was off by one, so I needed to change my ReallocMem line to
ReallocMem(pixels, (width+1) * (height+1) * sizeof(GLUByte)*3);
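In other words, a selection that runs from x1 to x2 inclusive covers x2 - x1 + 1 pixels, so computing the width as x2 - x1 (a likely way this happens) leaves the buffer one row and one column short, and glReadPixels writes past it. A tiny sketch of the arithmetic, with made-up coordinates:

# Hypothetical inclusive selection rectangle
x1, x2 = 100, 299
y1, y2 = 50, 149

width, height = x2 - x1, y2 - y1              # 199 x 99: one short in each direction
pixels_read = (width + 1) * (height + 1)      # 200 x 100: what glReadPixels actually fills
print(pixels_read * 3)                        # 60000 bytes for GL_RGB / GL_UNSIGNED_BYTE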
Thank you for your consideration.
Do you initialize pixels to nil?
begin
pixels := nil;
...
Do you allocate enough memory? This example allocates (nWidth + 1) x (nHeight + 1) pixels and mentions that OpenGL might align memory to 4 bytes by default (see GL_PACK_ALIGNMENT).
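To illustrate the GL_PACK_ALIGNMENT point: with the default pack alignment of 4, every pixel row glReadPixels writes is padded up to a multiple of 4 bytes, so for GL_RGB the buffer must be slightly larger than width * height * 3 whenever width * 3 is not already a multiple of 4. A quick sketch of the arithmetic (Python, using the first width/height pair from the debug output above):

def packed_buffer_size(width, height, bytes_per_pixel=3, alignment=4):
    # Each row is padded up to the next multiple of the pack alignment.
    row_bytes = width * bytes_per_pixel
    padded_row = (row_bytes + alignment - 1) // alignment * alignment
    return padded_row * height

print(packed_buffer_size(211, 484))   # 307824, vs. the unpadded 211 * 484 * 3 = 306372

Alternatively, calling glPixelStorei(GL_PACK_ALIGNMENT, 1) before glReadPixels removes the row padding entirely.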
I can't get this to work in my AS1 application. I am using the Color.setTransform method.
Am I correct in thinking the following object creation should result in transforming a colour to white?
var AColorTransform = {ra:100, rb:255, ga:100, gb:255, ba:100, bb:255, aa:100, ab:255};
And this one to black?
AColorTransform = {ra:100, rb:-255, ga:100, gb:-255, ba:100, bb:-255, aa:100, ab:-255};
I read on some websites that calling setRGB or setTransform may not result in actually changing the display colour when the object you're performing the operation on has some kind of dynamic behaviour. Does anyone know more about these situations? And how to change the colour under all circumstances?
Regards.
It's been a long time since I've had to do anything in AS1, but I'll do my best.
The basic code for a color.setTransform() looks like this...
var AColorTransform = {ra:100, rb:255, ga:100, gb:255, ba:100, bb:255, aa:100, ab:255};
var myColor = new Color(mc);
myColor.setTransform(AColorTransform);
...where mc is a MovieClip on the stage somewhere.
Remember that you're asking about transform, which by its nature is intended to transform colors from what they are to something else. If you want to reliably paint in a specific color (such as black or white), you're usually far better off using setRGB, which would look like this:
var myColor = new Color(mc);
//set to black
myColor.setRGB(0x000000);
//or set to white
myColor.setRGB(0xFFFFFF);
These work reliably, though there can be some gotchas. Generally, just remember that the color is attached to the specific MovieClip...so if that MovieClip falls out of scope (ie, it disappears from the timeline) your color will be deleted with it.
Read further only if you want to understand color transform better:
Let's look at the components of that color transform.
      a (multiplier 0 > 100%)    b (offset -255 > 255)
r     ra                         rb
g     ga                         gb
b     ba                         bb
a     aa                         ab
There are four channels (r, g, b, and a). The first three are for red, green and blue, and the last one for alpha (transparency). Each channel has an 'a' component and a 'b' component, thus ra, rb, ga, gb, etc. The 'a' component is a percentage multiplier. That is, it will multiply any existing channel by the percent in that value. The 'b' component is an offset. So 'ra' multiplies the existing red channel. 'rb' offsets it. If your red channel starts as 'FF' (full on red), setting ra:100 will have no effect, since multiplying FF by 100% results in no change. Similarly, if red starts at '00' (no red at all), no value of 'ra' will have any effect, since (if you recall your Shakespeare) twice nothing is still nothing. Things in-between will multiply as you'd expect.
Offsets are added after multiplication. So you can multiply by some value, then offset it:
r (result red color) = (RR * ra%) + rb
g (result green color) = (GG * ga%) + gb
b (result blue color) = (BB * ba%) + bb
a (result alpha) = (AA * aa%) + ab
example: RR = 128 (hex 0x80), ra = 50 (50% or .5), rb = -20
resulting red channel: (128 * .5) + (-20) = 44 (hex 0x2C)
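The same arithmetic as a few lines of Python, just to make the multiply-then-offset order concrete (this is only an illustration of the math, not the Flash API; the clamp reflects the fact that a channel can't leave the 0-255 range on screen):

def transform_channel(value, a, b):
    # a is the percentage multiplier (ra/ga/ba/aa), b is the offset (rb/gb/bb/ab)
    result = value * a / 100 + b
    return max(0, min(255, int(result)))

print(transform_channel(128, 50, -20))    # 44 (0x2C), the worked example above
print(transform_channel(128, 100, 255))   # 255: the "to white" transform from the question
print(transform_channel(128, 100, -255))  # 0:   the "to black" transform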
Frankly, this all gets so confusing that I tend to prefer the simple sanity of avoiding transforms altogether and going with the much simpler setRGB().