I implemented a visualisation a few months ago on an old XP machine with SlimDX and an old GeForce card. It worked perfectly. Now I started the application on a new computer:
Windows 7
Intel i7
AMD HD 6350
It's still a DirectX 9 device, but the resolution is now very low! I examined a screenshot in Photoshop: one pixel of the visualisation consists of 4 pixels on my screen. The control I draw to is 1000 × 1000 pixels, but SlimDX stretches 500 × 500 pixels onto it... I toggled antialiasing and related settings on and off; nothing changed.
Does anyone have an idea?
:-/
Solution found!
Since Windows 7, the backbuffer size has to be set manually (it no longer defaults to the size of the target control)!
Related
I have Canon RAW images (.CR2) with a resolution of 5760×3840. When I read them into a Delphi (Embarcadero RAD Studio XE5 Update 2) application using LoadFromFile, I get bitmaps with a resolution of 5760×3240. However, the bitmap represented by my CR2 images is 5760×3840 (Corel PhotoPaint shows the full resolution on the same computer). As far as I understand, TWICImage clips some lines from the top and/or bottom of the bitmap to force it into 16:9 format. I am using a Microsoft camera codec downloaded a few days ago (6.3.9721.0). Is there a way to prevent the clipping, and are the missing 600 lines split 300 from the top and 300 from the bottom?
I have an issue where the 16×16 bitmaps in an imagelist get corrupted and are displayed as monochrome black-and-white images. This tends to happen between sessions in the IDE: after saving the project to disk, exiting the IDE, and opening the project once again, lo and behold, the images have transformed! This requires rebuilding the imagelist and is very annoying. A before-and-after example image is attached.
This project is being developed on (and shuttled between) Windows XP and Windows 7 x64 using Delphi 4, which is installed on both operating systems. The problem occurs rarely on XP and frequently on Windows 7. Does anyone have a clue as to what is going on here?
I've been experimenting with CreateJS to convert some Flash AS3 animations to HTML5. Everything works fine in desktop browsers, but on an iPad the animations are considerably slower. Where there are complex vector objects, they are so slow as to be unusable. I can speed things up by caching the objects, but the quality of the resulting graphics is poor. Are there any solutions to this problem?
Thanks in advance
Pete
Take a look at your canvas size. Beyond a certain size, mobile video hardware cannot accelerate the graphics the way a PC does.
Tip #4. Watch The Size of Your Canvas
Obviously, the larger the canvas, the more costly the drawing operation, but if you're targeting mobile devices, there are some size limits you must keep in mind.
From the Safari Web Content Guide: the maximum size for a canvas element is 3 megapixels for devices with less than 256 MB of RAM, and 5 megapixels for devices with 256 MB of RAM or more. So if you want to support Apple's older hardware, the size of your canvas cannot exceed 2048×1464.
But that's not all! Even with smaller sizes, you have to keep your canvas's aspect ratio between ~3:4 and ~4:3. If you step outside those boundaries, WebKit seems to switch to a totally different rendering mode, splitting the canvas into multiple fixed-size areas and rendering them separately with a noticeable delay between them. There doesn't seem to be any documentation on this, but I have confirmed it happens in both Chrome and Safari on iOS versions 6.0.1 and 5.1.1.
Source: http://blog.toggl.com/2013/05/6-performance-tips-for-html-canvas-and-createjs/
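As a quick sanity check of the limits quoted above, here is a small sketch (the helper names and structure are my own, not part of CreateJS or Safari) that tests whether a canvas stays under the megapixel budget:

```python
def max_canvas_megapixels(device_ram_mb):
    # Limits from the Safari Web Content Guide quoted above:
    # 3 MP below 256 MB of RAM, 5 MP at 256 MB or more.
    return 3 if device_ram_mb < 256 else 5

def within_limit(width, height, device_ram_mb):
    # Checks only the megapixel budget. The blog post additionally
    # recommends keeping the aspect ratio roughly between 3:4 and 4:3,
    # which this sketch deliberately does not enforce.
    budget = max_canvas_megapixels(device_ram_mb) * 1_000_000
    return width * height <= budget

print(within_limit(2048, 1464, 128))  # 2048*1464 = 2,998,272 px -> True
print(within_limit(2048, 1536, 128))  # 2048*1536 = 3,145,728 px -> False
```

Note that 2048×1464 is just under the 3-megapixel ceiling, which is where the "cannot exceed 2048×1464" figure comes from.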
I'm using TwainDotNet to scan an image at 2400 DPI. Whenever I scan a full page in color, I get a message that there is not enough memory to perform this action. I tried it on a different computer with around 4 GB of RAM and got the same error message.
If I scan the image as black-and-white or grayscale, I don't get any error and everything works fine.
Is this a problem related to the scanner driver (Canon 9000F), or is it a general TWAIN problem?
Grayscale images have a bit depth between 2 and 8 bits per pixel. For a legal-size page at 2400 dpi, that works out to roughly 163 MB to 654 MB. Color images have a higher bit depth: at 32 bits per pixel, an image of the same size and dpi comes to around 2.6 GB. Add the memory occupied by other applications, and 4 GB of RAM is likely to run out.
File size = (height × width × bit depth × dpi²) / 8
where height and width are in inches and dpi² means dpi squared.
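The formula can be checked with a short sketch (the helper name is my own; legal paper is 8.5 × 14 inches):

```python
def scan_size_bytes(width_in, height_in, dpi, bit_depth):
    # File size = (height * width * bit_depth * dpi^2) / 8:
    # total pixels times bits per pixel, divided by 8 bits per byte.
    return int(width_in * dpi * height_in * dpi * bit_depth / 8)

MB = 2 ** 20
# Legal paper at 2400 dpi is 20400 x 33600 pixels.
print(scan_size_bytes(8.5, 14, 2400, 2) / MB)   # 2-bit gray:   ~163 MB
print(scan_size_bytes(8.5, 14, 2400, 8) / MB)   # 8-bit gray:   ~654 MB
print(scan_size_bytes(8.5, 14, 2400, 32) / MB)  # 32-bit color: ~2615 MB
```

The 32-bit color scan alone needs roughly 2.6 GB of contiguous memory, which explains why only the full-color scan fails while grayscale succeeds.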
It looks like that TWAIN library is scanning to memory. The TWAIN specification also has a file transfer mode (ICAP_XFERMECH), which is generally used for very large images. TwainDotNet may allow you to choose the file transfer mode when scanning.
I'm currently working on an OpenGL ES 1.1 app for the iPad. It runs at the full 768×1024 iPad resolution, with textures, polygons, and the works, but only at about 30 fps, which is not fast enough for my purposes.
I'm pretty sure it's not my code, because when I lowered the resolution the FPS increased, eventually reaching the normal 60 fps at iPod touch resolution.
Is anyone else encountering this FPS slowdown? Should I render at a reduced size and then scale up? Also, would upgrading to OpenGL ES 2.0 increase speed?
Any guidance is much appreciated!
The iPad has the exact same GPU as the iPhone 3GS, so you would probably expect worse fullscreen performance on the iPad, since it has to push about five times as many pixels.
If that is the case, then rendering at a lower resolution and scaling up is probably the best solution. After all, even console developers have to do it!
I had the same problem when porting an iPhone game to the iPad. A few optimizations raised the FPS from 5-6 to 20+:
using VBOs
reducing per-fragment operations (fog, blending, multi-texturing) as much as possible
moving some work to the CPU (lighting, for example)
using multi-texturing instead of multi-pass rendering with blending
improving the culling algorithm (we now have a better CPU)