I am trying to access files that are stored as JPEG files. Is there an easy way to display these image files without performance loss?
You can load the JPEG file using an instance of TJPEGImage and then assign it to a TBitmap for display. TJPEGImage is declared in the jpeg unit.
jpeg := TJPEGImage.Create;
bitm := TBitmap.Create;
try
  jpeg.LoadFromFile('filename.jpg');
  bitm.Assign(jpeg);  // converts the decoded JPEG to a bitmap
  Image1.Height := bitm.Height;
  Image1.Width := bitm.Width;
  Image1.Canvas.Draw(0, 0, bitm);
finally
  bitm.Free;
  jpeg.Free;
end;
Alternatively, once the jpeg unit is in your uses clause, TPicture can load the file directly (TJPEGImage registers the .jpg extension with TPicture):
Image1.Picture.LoadFromFile('filename.jpg');
I found this page!
http://cc.embarcadero.com/Item/19723
Enhanced jpeg implementation
Author: Gabriel Corneanu
This unit contains a new jpeg implementation (based on the Delphi original):
fixed a bug accessing pictures one pixel high
added lossless transformation support for jpeg images (based on Thomas G. Lane's C library - visit jpegclub.org/jpegtran)
added CMYK support (read only)
compiled for D5-2010 and BCB5-6
CMYK to RGB fast MMX conversion (not for Delphi 5, which lacks MMX ASM; falls back to a simple Pascal implementation if not available)
fixed a bug in the Delphi 5 ASM (CMYK to RGB function)
You only need the jpeg.dcu file; it can be copied to the program directory or to the LIB directory. I generated obj and hpp files for use with C++Builder 5 and 6 as well. This is what you need to use it:
This is just an enum
TJpegTransform = (
jt_FLIP_H, { horizontal flip }
jt_FLIP_V, { vertical flip }
jt_TRANSPOSE, { transpose across UL-to-LR axis }
jt_TRANSVERSE, { transpose across UR-to-LL axis }
jt_ROT_90, { 90-degree clockwise rotation }
jt_ROT_180, { 180-degree rotation }
jt_ROT_270 { 270-degree clockwise (or 90 ccw) }
);
procedure Crop(xoffs, yoffs, newwidth, newheight: integer); - crops the image
procedure Transform(Operation: TJpegTransform); - applies the specified transformation; read the transupp.h comments about limitations (my code uses the crop option)
property IsCMYK: boolean read FIsCMYK; - indicates whether the last JPEG image loaded is CMYK encoded
property InverseCMYK: boolean read FInverseCMYK write SetInverseCMYK; - if set (the default, because I could only find this kind of image), the CMYK image is decoded with inverted CMYK values (I read that Photoshop does this).
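A minimal usage sketch of the lossless rotation, assuming the enhanced unit's TJpegImage exposes Transform as documented above and the usual LoadFromFile/SaveToFile; the file names are just examples:
uses jpegex; // or the jpeg build of the enhanced unit
var
  Img: TJpegImage;
begin
  Img := TJpegImage.Create;
  try
    Img.LoadFromFile('photo.jpg');
    Img.Transform(jt_ROT_90);   // lossless 90-degree clockwise rotation
    Img.SaveToFile('photo_rotated.jpg');
  finally
    Img.Free;
  end;
end;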
The jpegex unit is the same unit compiled under a different name. It can be used to avoid conflicts when you have other components without source code that link to the original jpeg unit. In this case you might need to use qualified class names to resolve name conflicts: jpegex.TJpegImage.xxx. Be careful when you use both versions in one program: even though the classes have the same name, they are not identical, and you can't cast or assign them directly. The only way to exchange data is saving to/loading from a stream (see the sketch below).
Send comments to:
gabrielcorneanuATyahooDOTcom
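Regarding the stream-based exchange mentioned above, a minimal sketch, assuming both the jpeg and jpegex units are in the uses clause:
uses Classes, jpeg, jpegex;
procedure CopyToEx(Src: jpeg.TJPEGImage; Dst: jpegex.TJpegImage);
var
  MS: TMemoryStream;
begin
  MS := TMemoryStream.Create;
  try
    Src.SaveToStream(MS);   // write the compressed JPEG data
    MS.Position := 0;
    Dst.LoadFromStream(MS); // re-read it with the other unit's class
  finally
    MS.Free;
  end;
end;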
I don't believe D7 can handle CMYK JPEGs.
If you can't open it using the JPEG unit as Ralph posted, you might consider using something like GDI+ to load the graphic file.
Actually, I once modified the Jpeg.pas unit to add partial CMYK support. Basically, after
jpeg_start_decompress(jc.d)
you should check
if jc.d.out_color_space = JCS_CMYK then
and if it is true, the following jpeg_read_scanlines calls will return 4 bytes of data per pixel instead of 3.
Also, cinfo.saw_Adobe_marker indicates inverted values (probably Adobe was the first to introduce the CMYK JPEG variation).
But the most difficult part is the CMYK-to-RGB conversion. Since there's no universal formula, the best systems always use a table-based approach. I tried to find a simple approximation, but there's always a picture that does not fit. This is just an example, don't use these formulas as a reference:
R_:=Max(254 - (111*C + 2*M + 7*Y + 36*K) div 128, 0);
G_:=Max(254 - (30*C + 87*M + 15*Y + 30*K) div 128, 0);
B_:=Max(254 - (15*C + 44*M + 80*Y + 24*K) div 128, 0);
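For illustration only, here is a sketch of the naive per-pixel conversion R = (255 - C) * (255 - K) / 255 (and similarly for G and B). It assumes the CMYK bytes have already been un-inverted for Adobe-style JPEGs, uses MulDiv from the Windows unit, and is not colorimetrically accurate - a real implementation should be table/profile based, as noted above:
uses Windows;
procedure NaiveCMYKToRGB(C, M, Y, K: Byte; out R, G, B: Byte);
begin
  // naive approximation, not a reference conversion
  R := MulDiv(255 - C, 255 - K, 255);
  G := MulDiv(255 - M, 255 - K, 255);
  B := MulDiv(255 - Y, 255 - K, 255);
end;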
Easy!
I implemented the CMYK conversion in JPEG.PAS.
Include it in your project to handle CMYK JPEGs.
Get it here:
http://delphi.andreotti.nl/
I have a shape file (.shp) with data in EPSG:2180.
I have extracted positions like this one:
303553.0249270061 580466.2644065879
I see the same values in the equivalent .gml file,
so I am sure the values are correct.
But how can I convert them to latitude and longitude?
(From the comments: GPS is WGS 84, a.k.a. EPSG:4326.)
These are the parameters I see for EPSG:2180:
latitude_of_origin=0
central_meridian=19
scale_factor=0.9993
false_easting=500000
false_northing=-5300000
SPHEROID = 6378137,298.257222101
degree=0.0174532925199433
I have tried a simple calculation like
one_degree = 111196.672; //meters
X:= 303553.0249270061;
Y:= 580466.2644065879;
X:= X - false_northing;
Y:= Y - false_easting;
X:= latitude_of_origin + X/one_degree;
Y:= central_meridian + Y/one_degree;
but this shows me:
50.3931720629823
19.7236391427847
which is not correct. It is close, but it should be >20.
What should this calculation look like?
I need this in a Delphi application.
I don't know a solution for you in Delphi, but I found one in JS and PHP.
I found this topic while I was searching for "how to transform location units from one system to another" and googling EPSG:2180 convert.
The name of this problem is:
Transform a point coordinate from one map projection to another
So the solution is to apply a mathematical transformation (and maybe other calculations). You can check the:
Proj4 Library
with API interfaces: C, C++, Python, Java, Ruby
My solution was to use the Proj4js library.
Maybe look through the sources of this library on GitHub and analyze the transform function; with luck you will find the answer ; )
I used the PHP version of this package => Proj4jsphp
My problem was how to transform coordinates from EPSG:2180 to GPS, which is WGS 84 (a.k.a. EPSG:4326).
Later I found out there are different names for these systems:
EPSG:2180 corresponds to Poland CS92
EPSG:4326 corresponds to WGS84 -> where WGS84 is the most popular system (lat, lon).
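Since the question asks for Delphi, here is a rough sketch of calling the classic PROJ.4 C API (the old proj_api.h interface) from Delphi to do the EPSG:2180 -> EPSG:4326 transformation. The DLL name, the exact Delphi declarations and the lack of error handling are assumptions; treat it as a starting point, not a tested implementation.
type
  projPJ = Pointer;
// Declarations for the classic proj_api.h functions (DLL name assumed)
function pj_init_plus(def: PAnsiChar): projPJ; cdecl; external 'proj.dll';
function pj_transform(src, dst: projPJ; count: LongInt; offset: Integer;
  var x, y: Double; z: Pointer): Integer; cdecl; external 'proj.dll';
procedure pj_free(p: projPJ); cdecl; external 'proj.dll';
procedure CS92ToWGS84(Easting, Northing: Double; out Lon, Lat: Double);
var
  src, dst: projPJ;
  x, y: Double;
begin
  src := pj_init_plus('+init=epsg:2180');
  dst := pj_init_plus('+init=epsg:4326');
  try
    x := Easting;    // PROJ expects x = easting, y = northing
    y := Northing;
    pj_transform(src, dst, 1, 1, x, y, nil); // in place; result in radians
    Lon := x * 180 / Pi;
    Lat := y * 180 / Pi;
  finally
    pj_free(src);
    pj_free(dst);
  end;
end;
In the question's numbers, 580466.2644065879 appears to be the easting (it pairs with false_easting = 500000) and 303553.0249270061 the northing.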
For the purpose of identifying and comparing JPG images taken from cameras, I want to calculate an MD5 hash of the scan portion of the image inside the JPG. My idea is to take the bytes between the SOS and the EOI marker and perform a hash on those bytes, based on the assumption that these bytes will never change unless the actual image is processed and altered.
Apparently this question has already come up several times (1, 2, 3). Rather complicated solutions have been suggested, which I find irritating given my rather simple but apparently effective approach. (Or is it too simple to be true?)
I know there can be multiple pairs of SOS ($FFDA) and EOI ($FFD9) in a JPG file, in my present files there are 3: A thumbnail, the actual image and an additional 1920x1080 image (Sony). My present approach is to parse the stream and locate the next SOS, then look for EOI, calculate the size and assume the actual image if the size exceeds 50% of the file size.
This approach works with my present files. I stripped all metadata from a JPG file with exiftool -all= image.jpg and found the MD5 hash to be identical. Yet the algorithm seems rather coarse to me.
So here are my questions:
Is there any risk that simply examining the space between SOS and EOI can fail? I have read this, but am still not sure.
Parsing every byte from the SOS of the actual image takes a lot of time. I take it from here that there is no shortcut to finding the end of the compressed data. But I might just leap forward 80% or so from the second SOS marker. I am talking about images from a camera - how much can I rely on the fact that there will be a thumbnail coming first and the actual image after it?
Should I start 6 Bytes after SOS (here?)
Any ideas for a better approach?
After doing some research and running a bunch of tests, here I present the solution to my own question.
First, I want to make clear that we are not talking about a forensic investigation. There are possibly ways to manipulate a JPG image so that markers appear where they shouldn't and do not appear where they would have to according to the specs.
We are not talking about image identity or similarity, either. If you losslessly rotate a JPG you still have the very same image information, but not the identical image any more. We're not talking, either, about images that have been resized, optimized or altered in any other way.
What we are talking about is identifying simple duplicates or JPGs that have been renamed or where metadata has been modified or removed, but where the image itself has never been processed or tampered with in any way.
Is a hash of the bytes between the SOS and the EOI markers a reliable way to uniquely identify an image?
Yes, it is. Within bounds of reason there is no way two files with identical MD5 checksums of the image scan data can contain non-identical images and vice versa.
I examined sample photos taken with cameras from 12 different makers and edited/stripped the metadata. Actually, this wasn't really necessary, because from the specs and the code you know that all metadata resides in separate blocks (that's why you can hide all kinds of stuff in a JPG) and the scan data will never be touched by metadata operations, but yes, identical MD5 checksums all over the place.
Is there any way to quickly locate the (right) SOS marker?
Definitely. The JPG specs are a mess and a punishment. After trying quite a few pieces of code I found NativeJPG by Nils Haeck to be the most straightforward.
This has been adapted from sdJpegImage:
function FindSOSPos(S: TStream): Cardinal;
var
  B, MarkerTag: byte;
  BytesRead: Integer;  // TStream.Read returns an Integer count
  Size, W: word;
const
mkNone = 0; mkSOF0 = $c0; mkSOF1 = $c1; mkSOF2 = $c2; mkSOF3 = $c3; mkSOF5 = $c5;
mkSOF6 = $c6; mkSOF7 = $c7; mkSOF9 = $c9; mkSOF10 = $ca; mkSOF11 = $cb; mkSOF13 = $cd;
mkSOF14 = $ce; mkSOF15 = $cf; mkDHT = $c4; mkDAC = $cc; mkSOI = $d8; mkEOI = $d9; mkSOS = $da;
mkDQT = $db; mkDNL = $dc; mkDRI = $dd; mkDHP = $de; mkEXP = $df; mkAPP0 = $e0; mkAPP15 = $ef; mkCOM = $fe;
begin
Repeat
Result := 0;
// Read markers from the stream, until a non $FF is encountered
If S.Read(B, 1) = 0 then
exit;
// Do we have a marker?
if B = $FF then
begin
BytesRead := S.Read(MarkerTag, 1);
while (BytesRead > 0) and (MarkerTag = $FF) do
begin
MarkerTag := mkNone;
BytesRead := S.Read(MarkerTag, 1);
end;
Size := 0;
if MarkerTag in [mkAPP0..mkAPP15, mkDHT, mkDQT, mkDRI,
mkSOF0, mkSOF1, mkSOF2, mkSOF3, mkSOF5, mkSOF6, mkSOF7, mkSOF9, mkSOF10, mkSOF11, mkSOF13, mkSOF14, mkSOF15,
mkCOM, mkDNL] then
begin
// Read length of marker
If S.Read(W, 2) = 2 then
Size := Swap(W) - 2
else exit;
end else
If MarkerTag = mkSOS
then break;
S.Position := S.Position + Size;
end else
begin
// B <> $FF is an error, we try to be flexible
repeat
BytesRead := S.Read(B, 1);
until (BytesRead = 0) or (B = $FF);
if BytesRead = 0 then
exit;
S.Seek(-1, soFromCurrent);
end;
Until (MarkerTag = mkSOS) or (MarkerTag = mkNone);
Result := S.Position;
end;
Omit the first 6 Bytes after the SOS marker?
I decided to hash everything between SOS and EOI excluding the markers themselves.
Is there a fast way to locate the trailing EOI marker?
No. But this is irrelevant, since for performing a hash you have to read every single byte anyway.
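As an illustration, here is a sketch that combines FindSOSPos with an MD5 hash over the scan data. It assumes a recent Delphi with the System.Hash unit (older versions could use e.g. Indy or the JCL for MD5), and it simply hashes from the first SOS found up to the trailing EOI at the end of the file - picking the right SOS (the one whose scan makes up most of the file) is left out for brevity:
uses Classes, SysUtils, System.Hash;
function ScanDataMD5(const FileName: string): string;
var
  S: TFileStream;
  MD5: THashMD5;
  Buf: array[0..65535] of Byte;
  StartPos, Remaining: Int64;
  ToRead, N: Integer;
begin
  Result := '';
  S := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    StartPos := FindSOSPos(S);           // position just after the SOS marker
    if StartPos = 0 then Exit;
    S.Position := StartPos;
    Remaining := S.Size - StartPos - 2;  // exclude the trailing 2-byte EOI
    MD5 := THashMD5.Create;
    while Remaining > 0 do
    begin
      if Remaining > SizeOf(Buf) then
        ToRead := SizeOf(Buf)
      else
        ToRead := Integer(Remaining);
      N := S.Read(Buf, ToRead);
      if N <= 0 then Break;
      MD5.Update(Buf, N);
      Dec(Remaining, N);
    end;
    Result := MD5.HashAsString;
  finally
    S.Free;
  end;
end;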
How reliable is this approach?
As I said, I believe that within the bounds of reason the chance that this approach produces false positives is practically zero. As to locating the right image: NativeJPG has been around for more than 10 years and you find very few complaints; if any, they deal with decoding the image, not missing it.
In my application I offer the option to store the original filename, the EXIF DateTimeDigitized, the camera make, the GPS coordinates and MD5 hashes of the scan data (full and first 16 kB) in the UserComment field. I'm pretty confident that this will allow me to identify the file later on under most conditions (provided the UserComment has remained intact).
Is there any risk that simply examining the space between SOS and EOI can fail?
Yes, for your purpose if you are only doing a checksum of the scan data. There could be multiple SOS markers and other markers in between them.
I am using Delphi 6 Pro with the DSPACK DirectShow component library to create a DirectShow filter that delivers data in Wav format from a custom audio source. Just to be very clear, I am delivering the raw PCM audio samples as Byte data. There are no Wave files involved, but other Filters downstream in my Filter Graph expect the output pin to deliver standard WAV format sample data in Byte form.
Note: When I get the data from the custom audio source, I format it to the desired number of channels, sample rate, and bits per sample and store it in a TWaveFile object I created. This object has a properly formatted TWaveFormatEx data member that is set correctly to reflect the underlying format of the data I stored.
I don't know how to properly set up the MediaType parameter during a GetMediaType() call:
function TBCPushPinPlayAudio.GetMediaType(MediaType: PAMMediaType): HResult;
.......
with FWaveFile.WaveFormatEx do
begin
MediaType.majortype := (1)
MediaType.subtype := (2)
MediaType.formattype := (3)
MediaType.bTemporalCompression := False;
MediaType.bFixedSizeSamples := True;
MediaType.pbFormat := (4)
// Number of bytes per sample is the number of channels in the
// Wave audio data times the number of bytes per sample
// (wBitsPerSample div 8);
MediaType.lSampleSize := nChannels * (wBitsPerSample div 8);
end;
What are the correct values for (1), (2), and (3)? I know about the MEDIATYPE_Audio, MEDIATYPE_Stream, and MEDIASUBTYPE_WAVE GUID constants, but I am not sure what goes where.
Also, I assume that I need to copy the WaveFormatEx structure/record from my FWaveFile object over to the pbFormat pointer (4). I have two questions about that:
1) I assume that I should use CoTaskMemAlloc() to allocate a new TWaveFormatEx record and copy my FWaveFile object's TWaveFormatEx record onto it before assigning the pbFormat pointer to it, correct?
2) Is TWaveFormatEx the correct structure to pass along? Here is how TWaveFormatEx is defined:
tWAVEFORMATEX = packed record
wFormatTag: Word; { format type }
nChannels: Word; { number of channels (i.e. mono, stereo, etc.) }
nSamplesPerSec: DWORD; { sample rate }
nAvgBytesPerSec: DWORD; { for buffer estimation }
nBlockAlign: Word; { block size of data }
wBitsPerSample: Word; { number of bits per sample of mono data }
cbSize: Word; { the count in bytes of the size of extra information (after cbSize) }
end;
UPDATE: 11-12-2011
I want to highlight one of the comments by @Roman R attached to his accepted reply, where he tells me to use MEDIASUBTYPE_PCM for the sub-type, since it is so important. I lost a significant amount of time chasing down a DirectShow "no intermediate filter combination" error because I had forgotten to use that value for the sub-type and was (incorrectly) using MEDIASUBTYPE_WAVE instead. MEDIASUBTYPE_WAVE is incompatible with many other filters, such as system capture filters, and that was the root cause of the failure. The bigger lesson here is: if you are debugging an inter-filter media format negotiation error, make sure that the formats between the pins being connected are completely equal. I made the mistake during initial debugging of only comparing the WAV format parameters (format tag, number of channels, bits per sample, sample rate), which were identical between the pins. However, the difference in sub-type due to my improper use of MEDIASUBTYPE_WAVE caused the pin connection to fail. As soon as I changed the sub-type to MEDIASUBTYPE_PCM as Roman suggested, the problem went away.
(1) is MEDIATYPE_Audio.
(2) is typically a mapping from FOURCC code into GUID, see Media Types, Audio Media Types section.
(3) is FORMAT_WaveFormatEx.
(4) is a pointer (typically allocated by COM task memory allocator API) to WAVEFORMATEX structure.
1) Yes, you should allocate memory, put valid data there (by copying or initializing it directly), and put this pointer into pbFormat and the structure size into cbFormat.
2) Yes, that looks good; it is defined like this in the first place: WAVEFORMATEX structure.
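Putting the answer together, here is a hedged sketch of how (1)-(4) might look in the question's GetMediaType(). The GUID constants are those found in DSPack's DirectShow9 unit; for plain PCM, cbSize is 0, so copying SizeOf(TWaveFormatEx) bytes is enough:
uses Windows, ActiveX, MMSystem, DirectShow9;
function TBCPushPinPlayAudio.GetMediaType(MediaType: PAMMediaType): HResult;
var
  pwfx: PWaveFormatEx;
begin
  with FWaveFile.WaveFormatEx do
  begin
    MediaType.majortype := MEDIATYPE_Audio;       // (1)
    MediaType.subtype := MEDIASUBTYPE_PCM;        // (2) - not MEDIASUBTYPE_WAVE
    MediaType.formattype := FORMAT_WaveFormatEx;  // (3)
    MediaType.bTemporalCompression := False;
    MediaType.bFixedSizeSamples := True;
    MediaType.lSampleSize := nChannels * (wBitsPerSample div 8);
    // (4) allocate the format block with the COM task allocator and copy
    // the WAVEFORMATEX into it; cbFormat must hold the size of that block
    MediaType.cbFormat := SizeOf(TWaveFormatEx) + cbSize;
    pwfx := PWaveFormatEx(CoTaskMemAlloc(MediaType.cbFormat));
    Move(FWaveFile.WaveFormatEx, pwfx^, MediaType.cbFormat);
    MediaType.pbFormat := pwfx;
  end;
  Result := S_OK;
end;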
When searching for files with FindFirst() I get an attribute value in the TSearchRec.Attr field of 2080. It is not specified in the help as there are only these values available and no combination of them yields 2080:
1 faReadOnly
2 faHidden
4 faSysFile
8 faVolumeID
16 faDirectory
32 faArchive
64 faSymLink
71 faAnyFile
Does anyone know what 2080 means and why I get that attribute value? The OS is XP embedded.
It turns out that the file found by FindFirst() was compressed and thus had the compressed bit set. It took me a while to figure out, and I could not find a reference on the web that stated the actual value of TSearchRec.Attr when the compressed bit is set. Unchecking "Compress file" in the file's advanced properties did the trick.
Attributes in TSearchRec map directly to the Windows file attributes used with the TWin32FindData record from FindFirstFile.
In hex (always render bit fields in hex, not decimal), 2080 is $0820, where it's clear there are two bits set. The lower bit corresponds to File_Attribute_Archive, or Delphi's faArchive, and the upper bit corresponds to File_Attribute_Compressed. It has no equivalent in the units that come with Delphi, but you can use the JclFileUtils.faCompressed symbol from the JCL.
In JclFileUtils unit from Jedi Code Library I found:
faNormalFile = $00000080;
...
faNotContentIndexed = $00002000;
If 2080 is in hex then this is it.
Look also at: http://www.tek-tips.com/viewthread.cfm?qid=1543818&page=9
EDIT:
Since 2080 is decimal, and 2080 decimal = $820 hex, the attributes are a combination of:
faArchive = $00000020;
faCompressed = $00000800;
This will test the faDirectory bit, and you don't have to worry about whether the compression bit is set or not.
if ((sr.Attr AND faDirectory) <> 0) then
begin
.......
end;
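For completeness, a small sketch of checking for the compressed bit in a FindFirst loop. Note that faCompressed is not a standard Delphi RTL constant; the value below is Windows' FILE_ATTRIBUTE_COMPRESSED:
uses SysUtils;
const
  faCompressed = $00000800; // FILE_ATTRIBUTE_COMPRESSED
procedure ListCompressedFiles(const Dir: string);
var
  sr: TSearchRec;
begin
  if FindFirst(Dir + '\*.*', faAnyFile, sr) = 0 then
  try
    repeat
      if (sr.Attr and faCompressed) <> 0 then
        Writeln(sr.Name, ' is NTFS-compressed (Attr = ', sr.Attr, ')');
    until FindNext(sr) <> 0;
  finally
    FindClose(sr);
  end;
end;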
I'd like to create a printable output file from within Squeak, for instance to create a report.
I've done a little Googling and I'm surprised by how little material exists in the way of examples of creating printable files. However, I've found a couple of classes called PostscriptCanvas and EPSCanvas, and a class-side method called morphAsPostscript:.
To try these classes out, my first workspace example was:
p := PasteUpMorph new.
p extent: 300@300.
p position: 20@20.
p borderColor: Color black.
p setProperty: #cornerStyle toValue: #rounded.
p openInWorld.
(FileStream newFileNamed: 'test1.ps') nextPutAll: (PostscriptCanvas morphAsPostscript: p)
Unfortunately the above doesn't work and halts with a doesNotUnderstand: #pageBBox.
When I try the example again, this time using the EPSCanvas class:
p := PasteUpMorph new.
p extent: 300@300.
p position: 20@20.
p borderColor: Color black.
p setProperty: #cornerStyle toValue: #rounded.
p openInWorld.
(FileStream newFileNamed: 'test2.eps') nextPutAll: (EPSCanvas morphAsPostscript: p).
this time output is generated, but the corners of the box aren't rounded in the EPS file (they are rounded on screen).
So, my questions are:
Am I on the right track as far as generating printable output or should I be using an alternative technique?
Why does the first example crash with doesNotUnderstand: #pageBBox?
Why does the second example almost work but does not render the rounded corners?
Thanks
Kevin
It's not just Squeak - producing printable output is fearsomely difficult in any programming language. Whenever I've done project planning and people mention reports, I immediately double (at least) the project estimates. Personally, I would recommend writing the data to a file in some well-known format such as XML or CSV and then using a report-writing package to produce the actual reports.
Sorry not to be more helpful!