I'm trying to find a way to read only the text from an email. Unfortunately, Memo1 still ends up containing information about the attachments as well as all the information about the structure of the e-mail.
if (IdIMAP1->UIDRetrieve(UID, IdMessage1))
{
    IdMessage1->MessageParts->CountParts();
    if (IdMessage1->MessageParts->TextPartCount)
    {
        String b;
        IdIMAP1->UIDRetrieveText(UID, b);
        Memo1->Lines->Add(UTF8Decode(b));
    }
}
I tried to use the TIdText function, but the resulting Str is empty.
TStrings* Str = new TStringList;
TIdText( IdMessage1->MessageParts, Str );
The ContentType of the message is multipart, but the message parts also include a text/plain part.
IdIMAP1->UIDRetrieveHeader(UID, IdMessage1);
IdMessage1->ContentType;
I tried to isolate the text part of the email but that didn't help either.
TIdMessagePart *Part;
for (int j = 0; j < IdMessage1->MessageParts->Count; ++j)
{
    Part = IdMessage1->MessageParts->Items[j];
    if (IsHeaderMediaType(Part->ContentType, "text/plain"))
    {
        String b;
        IdIMAP1->UIDRetrieveText(UID, b);
    }
}
Do you have any idea?
Is it OK to use the structure like this?
TIdMessagePart *Part;
IdIMAP1->UIDRetrieveHeader(UID, IdMessage1);
IdIMAP1->UIDRetrieveStructure(UID, IdMessage1);
for (int j = 0; j < IdMessage1->MessageParts->Count; ++j)
{
    Part = IdMessage1->MessageParts->Items[j];
    if (IsHeaderMediaType(Part->ContentType, "text/plain"))
    {
        String b;
        IdIMAP1->UIDRetrieveText(UID, b);
    }
}
The statement TIdText( IdMessage1->MessageParts, Str ); does not do what you think. It adds a new TIdText object to the MessageParts, initializing the object's Body text with the specified TStrings text, which is not what you want in this situation. Also, Delphi-style objects (anything derived from TObject) must be created on the heap via new, so this statement should not even compile; it should produce an E2459 error.
Since you are downloading the email's full content via UIDRetrieve(), there is no reason to download individual text pieces via UIDRetrieveText() afterwards. You can just scan the TIdMessage contents you already have for the text portion you want. In your 2nd example, replace this:
String b;
IdIMAP1->UIDRetrieveText(UID, b);
with this:
String b = static_cast<TIdText*>(Part)->Body->Text;
Otherwise, you should take a different approach. IMAP is a very complex and flexible protocol. One of its features is it allows you to download an email's structure first (via UIDRetrieveStructure()), analyze those headers as needed to discover the PartNumber of the specific part(s) you want, and then you can download only those parts by themselves (via UIDRetrievePart()).
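For illustration, here is a rough sketch of that flow. The exact UIDRetrievePart() overloads and part numbering differ between Indy versions, so treat the call below as an assumption and check your version's TIdIMAP4 declarations:
IdIMAP1->UIDRetrieveStructure(UID, IdMessage1);
for (int j = 0; j < IdMessage1->MessageParts->Count; ++j)
{
    TIdMessagePart *Part = IdMessage1->MessageParts->Items[j];
    if (IsHeaderMediaType(Part->ContentType, "text/plain"))
    {
        // Download only this part instead of the whole message.
        String Body;
        if (IdIMAP1->UIDRetrievePart(UID, j, Body)) // overload and index assumed
            Memo1->Lines->Add(Body);
        break;
    }
}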
I have a string like A=B&C=D&E=F; how do I parse it into a map in Go?
Here is an example in Java, but I don't understand the split part:
String text = "A=B&C=D&E=F";
Map<String, String> map = new LinkedHashMap<String, String>();
for (String keyValue : text.split(" *& *")) {
    String[] pairs = keyValue.split(" *= *", 2);
    map.put(pairs[0], pairs.length == 1 ? "" : pairs[1]);
}
Maybe what you really want is to parse an HTTP query string, and url.ParseQuery does that. (More precisely, it returns a url.Values, which stores a []string for every key, since URLs sometimes have more than one value per key.) It also handles things like percent-escapes (%0A, etc.) that plain splitting doesn't. You can find its implementation if you search in the source of url.go.
However, if you do really want to just split on & and = like that Java code did, there are Go analogues for all of the concepts and tools there:
map[string]string is Go's analog of Map<String, String>
strings.Split can split on & for you. strings.SplitN limits the number of pieces, like the two-argument version of split() in Java. Note that there might be only one piece, so check len(pieces) before accessing pieces[1].
for _, piece := range pieces will iterate the pieces you split.
The Java code relies on regexes to trim spaces. Go's Split doesn't use regexes, but strings.TrimSpace does something like what you want (specifically, it strips all kinds of Unicode whitespace from both sides).
I'm leaving the actual implementation to you, but perhaps these pointers can get you started.
package main

import (
    "fmt"
    "strings"
)

func main() {
    s := "A=B&C=D&E=F"
    m := make(map[string]string)
    for _, pair := range strings.Split(s, "&") {
        z := strings.SplitN(pair, "=", 2) // at most 2 pieces; extra '=' stays in the value
        if len(z) == 2 {
            m[z[0]] = z[1]
        } else {
            m[z[0]] = "" // key with no value
        }
    }
    fmt.Println(m) // map[A:B C:D E:F]
}
This will do what you want.
There is a very simple way provided by Go's net/url package itself.
Change your string to make it a URL with query params: text := "method://abc.xyz/?A=B&C=D&E=F" (note the added "?"; without it, the pairs become part of the path and Query() returns nothing).
Now just pass this string to the Parse function provided by net/url.
import (
    netURL "net/url"
)

u, err := netURL.Parse(text)
if err != nil {
    log.Fatal(err)
}
Now u.Query() will return a map containing your query params. This also works for more complex types.
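Putting that together, a minimal runnable sketch (the scheme and host are placeholders):
package main

import (
    "fmt"
    "log"
    netURL "net/url"
)

func main() {
    // The "?" matters: everything after it is parsed as the query string.
    text := "method://abc.xyz/?A=B&C=D&E=F"
    u, err := netURL.Parse(text)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(u.Query()) // map[A:[B] C:[D] E:[F]]
}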
Here is a demonstration of a couple of methods:
package main

import (
    "fmt"
    "net/url"
)

func main() {
    {
        q, e := url.ParseQuery("west=left&east=right")
        if e != nil {
            panic(e)
        }
        fmt.Println(q) // map[east:[right] west:[left]]
    }
    {
        u := url.URL{RawQuery: "west=left&east=right"}
        q := u.Query()
        fmt.Println(q) // map[east:[right] west:[left]]
    }
}
https://golang.org/pkg/net/url#ParseQuery
https://golang.org/pkg/net/url#URL.Query
I plan to include text metadata (like bold, font-size, etc.) in the parsing process to achieve better recognition.
For instance, take a structure where a word on its own line (word\r\n) that is bold and sized 24px is the title of an article. To get better recognition results, I want to take both the characters and the metadata into account. In terms of ANTLR I'm not sure how this could best be done. I'd like to do something like one of the following:
Wrap each character of the original text in a custom object with fields for the metadata and pass that to ANTLR.
Preprocess the text and insert annotations for the metadata at specific places, which the grammar then takes into account.
I would really like to take option 1, but I'm not sure which parts of ANTLR I need to subclass. Do I have to start at the ANTLRInputStream object, in order to get a proper stream for a subclassed Lexer, which produces custom Tokens for a subclassed Parser, and so on? Is there a more elegant way, especially for querying the tokens while parsing with actions in a {} block?
If anyone has hints and/or experience with this, that would be great!
EDIT:
Here is a more specific, simple example: I have a file that includes an encoding of the metadata, which I parse beforehand. The actual text, including newlines, looks like the following:
entryOne
Here is some content one.
entryTwo
Here is some content two.
Here the titles entryOne and entryTwo originally have a font-size of 24px and the content a font-size of 12px (as exemplary values). Char by char, I create a new instance of a custom object encapsulating the character as a String together with its font-size.
I initialize such an object for each character with its font-size, e.g. for the first letter of entryOne:
MyChar aTitelChar = new MyChar("e", 24);
For the content, e.g. the second line (Here is some content one.), I create instances of MyChar like:
MyChar aContentChar= new MyChar("H", 12);
All characters of the text are wrapped in instances of the MyChar class below and added to a List<MyChar> in order to produce a new input for ANTLR.
Below is the Java class for the characters:
public class MyChar {
    private int fontSizePx;
    private String text;

    public MyChar(String text, int fontSizePx) {
        this.text = text;
        this.fontSizePx = fontSizePx;
    }

    public int getFontSizePx() {
        return fontSizePx;
    }

    public String getText() {
        return text;
    }
}
I want my grammar to match the above two entries (or more, formatted this way), each of which consists of a title and a content terminated by a full stop. The grammar could look like this:
rule: entry+ NEWLINE
;
entry:
title
content
;
title:
letters NEWLINE
;
content:
(letters)+ '.' NEWLINE
;
letters:
LETTERS
;
LETTERS:
('a'..'z' | 'A'..'Z')+
;
WS:
(' ' | '\t' | '\f')+ {$channel = HIDDEN;};
NEWLINE: '\r'? '\n';
Now, for instance, I want to find out whether something really is the title of an entry by checking the font-size of all letters that make up the title token, before the title rule returns. In case the input conforms to the grammar but is actually a mistake (say, the metadata-encoded file starts with something that conforms to the title rule but is really content), the grammar author could sort that out, knowing that the original font-size for titles is 24, by checking it: if one of the letter tokens doesn't have font-size 24, throw an exception / fail the match / do something appropriate.
The thing I'm pondering is where to plug in the List<MyChar> to provide this functionality (querying this kind of metadata while parsing, in the context of ANTLR). I'm experimenting with ANTLR's classes, but as I'm new to ANTLR I thought some experienced users could point me in the right direction, e.g. where good insertion points for custom objects would be. Should I start by implementing CharStream and overriding some methods? Or does ANTLR already provide something I haven't found yet?
Here's one way to accomplish what I think you're going for, using the parser to manage matching input to metadata. Note that I made whitespace significant because it's part of the content and can't be skipped. I also made periods part of content to simplify the example, rather than using them as a marker.
SysEx.g
grammar SysEx;
@header {
import java.util.List;
}

@parser::members {
private List<MyChar> metadata;
private int curpos;

private boolean isTitleInput(String input) {
    return isFontSizeInput(input, 24);
}

private boolean isContentInput(String input){
    return isFontSizeInput(input, 12);
}

private boolean isFontSizeInput(String input, int fontSize){
    List<MyChar> sublist = metadata.subList(curpos, curpos + input.length());
    System.out.println(String.format("Testing metadata for input=\%s, font-size=\%d", input, fontSize));
    int start = curpos;
    //move our metadata pointer forward.
    skipInput(input);
    for (int i = 0, count = input.length(); i < count; ++i){
        MyChar chardata = sublist.get(i);
        char c = input.charAt(i);
        if (chardata.getText().charAt(0) != c){
            //This character doesn't match the metadata (ERROR!)
            System.out.println(String.format("Content mismatch at metadata position \%d: metadata=(\%s,\%d); input=\%c", start + i, chardata.getText(), chardata.getFontSizePx(), c));
            return false;
        } else if (chardata.getFontSizePx() != fontSize){
            //The font is wrong.
            System.out.println(String.format("Format mismatch at metadata position \%d: metadata=(\%s,\%d); input=\%c", start + i, chardata.getText(), chardata.getFontSizePx(), c));
            return false;
        }
    }
    //All characters check out.
    return true;
}

private void skipInput(String str){
    curpos += str.length();
    System.out.println("\t\tMoving metadata pointer ahead by " + str.length() + " to " + curpos);
}
}

rule[List<MyChar> metadata]
@init {
    this.metadata = metadata;
}
    : entry+ EOF
    ;

entry
    : title content
      {System.out.println("Finished reading entry.");}
    ;

title
    : line {isTitleInput($line.text)}? newline {System.out.println("Finished reading title " + $line.text);}
    ;

content
    : line {isContentInput($line.text)}? newline {System.out.println("Finished reading content " + $line.text);}
    ;

newline
    : (NEWLINE{skipInput($NEWLINE.text);})+
    ;

line returns [String text]
@init {
    StringBuilder builder = new StringBuilder();
}
@after {
    $text = builder.toString();
}
    : (ANY{builder.append($ANY.text);})+
    ;

NEWLINE: '\r'? '\n';
ANY: .; //whitespace can't be skipped because it's content.
A title is a line that matches the title metadata (size 24 font) followed by one or more newline characters.
A content is a line that matches the content metadata (size 12 font) followed by one or more newline characters. As mentioned above, I removed the check for a period for simplification.
A line is a sequence of characters that does not include newline characters.
A validating semantic predicate (the {...}? after line) is used to validate that the line matches the metadata.
Here is the code I used to test the grammar (minus imports, for brevity):
SysExGrammar.java
public class SysExGrammar {
    public static void main(String[] args) throws Exception {
        //Create some metadata that matches our input.
        List<MyChar> matchingMetadata = new ArrayList<MyChar>();
        appendMetadata(matchingMetadata, "entryOne\r\n", 24);
        appendMetadata(matchingMetadata, "Here is some content one.\r\n", 12);
        appendMetadata(matchingMetadata, "entryTwo\r\n", 24);
        appendMetadata(matchingMetadata, "Here is some content two.\r\n", 12);
        parseInput(matchingMetadata);
        System.out.println("Finished example #1");

        //Create some metadata that doesn't match our input (negative test).
        List<MyChar> mismatchingMetadata = new ArrayList<MyChar>();
        appendMetadata(mismatchingMetadata, "entryOne\r\n", 24);
        appendMetadata(mismatchingMetadata, "Here is some content one.\r\n", 12);
        appendMetadata(mismatchingMetadata, "entryTwo\r\n", 12); //content font size!
        appendMetadata(mismatchingMetadata, "Here is some content two.\r\n", 12);
        parseInput(mismatchingMetadata);
        System.out.println("Finished example #2");
    }

    private static void parseInput(List<MyChar> metadata) throws Exception {
        //Test setup
        InputStream resource = SysExGrammar.class.getResourceAsStream("SysExTest.txt");
        CharStream input = new ANTLRInputStream(resource);
        resource.close();
        SysExLexer lexer = new SysExLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        SysExParser parser = new SysExParser(tokens);
        parser.rule(metadata);
        System.out.println("Parsing encountered " + parser.getNumberOfSyntaxErrors() + " syntax errors");
    }

    private static void appendMetadata(List<MyChar> metadata, String string,
            int fontSize) {
        for (int i = 0, count = string.length(); i < count; ++i){
            metadata.add(new MyChar(string.charAt(i) + "", fontSize));
        }
    }
}
SysExTest.txt (note this uses Windows newlines, \r\n):
entryOne
Here is some content one.
entryTwo
Here is some content two.
Test output (trimmed; the second example has deliberately-mismatched metadata):
Parsing encountered 0 syntax errors
Finished example #1
Parsing encountered 2 syntax errors
Finished example #2
This solution requires that each MyChar corresponds to a character in the input (including newline characters, although you can remove that limitation if you like -- I would remove it if I didn't already have this answer written up ;) ).
As you can see, it's possible to tie the metadata to the parser and everything works as expected. I hope this helps.
Is there a reliable way to extract a document name or job name from a postscript print job if you have the raw postscript data?
I've seen print release station software that labels each job with a document name or url it was printed from, so it seems possible.
There is no reliable way to do this, as there is no such metadata in the PostScript language itself. If your files are DSC (Document Structuring Convention) compliant, you can look for comments; these are documented in the DSC reference manual. However, valid PostScript files need not be DSC compliant.
Other than that, there is no information there to extract, at least as far as PostScript is concerned.
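If the file does carry DSC comments, the document title is often recorded in a %%Title: header comment. Here is a minimal sketch of that scan (a best-effort heuristic, not a guarantee, since DSC comments are optional and jobs may be binary):
#include <fstream>
#include <string>

std::string GetDscTitle(const std::string &path)
{
    std::ifstream in(path.c_str());
    std::string line;
    while (std::getline(in, line)) {
        if (line.rfind("%%Title:", 0) == 0)      // line starts with %%Title:
            return line.substr(8);               // the rest of the line is the title
        if (line.rfind("%%EndComments", 0) == 0) // header comment section is over
            break;
    }
    return ""; // no DSC title found
}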
To extract the document name from a print job using C++:
#include <Windows.h>
#include <WinSpool.h>
#include <string>
using std::wstring;

wstring GetDocumentName(wstring m_strFriendlyName)
{
    wstring strDocName = L"";
    HANDLE hPrinter;
    if (OpenPrinter(const_cast<LPWSTR>(m_strFriendlyName.c_str()), &hPrinter, NULL) == 0)
    {
        /* OpenPrinter call failed */
        return strDocName;
    }
    // First call gets the required buffer size, second call fills the buffer.
    DWORD dwBufsize = 0;
    GetPrinter(hPrinter, 2, NULL, 0, &dwBufsize);
    PRINTER_INFO_2* pinfo2 = (PRINTER_INFO_2*)malloc(dwBufsize);
    GetPrinter(hPrinter, 2, (LPBYTE)pinfo2, dwBufsize, &dwBufsize);
    DWORD numJobs = pinfo2->cJobs;
    free(pinfo2);
    // Same two-call pattern to get info about the jobs in the queue.
    JOB_INFO_1 *pJobInfo = 0;
    DWORD bytesNeeded = 0, jobsReturned = 0;
    EnumJobs(hPrinter, 0, numJobs, 1, (LPBYTE)pJobInfo, 0, &bytesNeeded, &jobsReturned);
    pJobInfo = (JOB_INFO_1*)malloc(bytesNeeded);
    EnumJobs(hPrinter, 0, numJobs, 1, (LPBYTE)pJobInfo, bytesNeeded, &bytesNeeded, &jobsReturned);
    JOB_INFO_1 *pJobInfoInitial = pJobInfo;
    // Walk the returned jobs; this keeps the last job's name.
    for (unsigned short count = 0; count < jobsReturned; count++)
    {
        if (pJobInfo != NULL)
        {
            strDocName = pJobInfo->pDocument; // Document name
            DWORD dw = pJobInfo->Status;      // Job status (unused here)
        }
        pJobInfo++;
    }
    free(pJobInfoInitial);
    ClosePrinter(hPrinter);
    return strDocName;
}
What you might be seeing is the document name that the application submitted to the print spooler. Also, while not fully reliable, most print drivers put the document name in PJL or XML at the top of the print job. With some flexible rules you may be able to pull this data out with some confidence.
This, of course, assumes that the PS data was generated by a printer driver.
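As an illustration of that idea, here is a hedged sketch that looks for a PJL header line such as @PJL JOB NAME = "My Document" in the raw job data. Drivers differ in whether and how they emit this, so treat it as best-effort:
#include <string>

std::string FindPjlJobName(const std::string &rawJob)
{
    const std::string key = "@PJL JOB NAME";
    std::string::size_type pos = rawJob.find(key);
    if (pos == std::string::npos)
        return ""; // no PJL job name present
    std::string::size_type open = rawJob.find('"', pos); // opening quote
    if (open == std::string::npos)
        return "";
    std::string::size_type close = rawJob.find('"', open + 1); // closing quote
    if (close == std::string::npos)
        return "";
    return rawJob.substr(open + 1, close - open - 1);
}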
I have to parse a document containing groups of variable-value pairs which are serialized to a string, e.g. like this:
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
Here are the different elements:
Group IDs (here 4 and 1):
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
Length of the string representation of each group (26 and 14):
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
One of the groups (e.g. the first, 4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^):
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
Variables (VAR1 and VAR2):
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
Length of the string representation of the values (6, 4 and 6):
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
The values themselves (VALUE1, VAL2 and VALUE1):
4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^
Variables consist only of alphanumeric characters.
No assumption is made about the values, i.e. they may contain any character, including ^.
Is there a name for this kind of grammar? Is there a parsing library that can handle this mess?
So far I am using my own parser, but because I need to detect and handle corrupt serializations, the code looks rather messy; hence my question about a parser library that could lift the burden.
The simplest way to approach it is to note that there are two nested levels that work the same way. The pattern is extremely simple:
id^length^content^
At the outer level, this produces a set of groups. Within each group, the content follows exactly the same pattern, only here the id is the variable name, and the content is the variable value.
So you only need to write that logic once and you can use it to parse both levels. Just write a function that breaks a string up into a list of id/content pairs. Call it once to get the groups, and then loop through them calling it again for each content to get the variables in that group.
Breaking it down into these steps, first we need a way to get "tokens" from the string. This function returns an object with three methods: one to find out whether we're at "end of file", and two to grab the next delimited or counted substring:
var tokens = function(str) {
    var pos = 0;
    return {
        eof: function() {
            return pos == str.length;
        },
        delimited: function(d) {
            var end = str.indexOf(d, pos);
            if (end == -1) {
                throw new Error('Expected delimiter');
            }
            var result = str.substr(pos, end - pos);
            pos = end + d.length;
            return result;
        },
        counted: function(c) {
            var result = str.substr(pos, c);
            pos += c;
            return result;
        }
    };
};
Now we can conveniently write the reusable parse function:
var parse = function(str) {
    var parts = {};
    var t = tokens(str);
    while (!t.eof()) {
        var id = t.delimited('^');
        var len = t.delimited('^');
        var content = t.counted(parseInt(len, 10));
        var end = t.counted(1);
        if (end !== '^') {
            throw new Error('Expected ^ after counted string, instead found: ' + end);
        }
        parts[id] = content;
    }
    return parts;
};
It builds an object where the keys are the IDs (or variable names). I'm assuming, since they have names, that the order isn't significant.
Then we can use that at both levels to create the function to do the whole job:
var parseGroups = function(str) {
    var groups = parse(str);
    Object.keys(groups).forEach(function(id) {
        groups[id] = parse(groups[id]);
    });
    return groups;
};
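Calling it on the serialized string from the question:
var groups = parseGroups('4^26^VAR1^6^VALUE1^VAR2^4^VAL2^^1^14^VAR1^6^VALUE1^^');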
For your example, it produces this object:
{
    '1': {
        VAR1: 'VALUE1'
    },
    '4': {
        VAR1: 'VALUE1',
        VAR2: 'VAL2'
    }
}
I don't think it's a trivial task to create a grammar for this. On the other hand, a simple, straightforward approach is not that hard: you know the corresponding string length for every critical string, so you can just chop the string apart according to those lengths.
Where do you see problems?
I am writing code to ingest the IOR file generated by the team responsible for the server and use it to bind my client to their object. Sounds easy, right?
For some reason a bit beyond my grasp (having to do with firewalls, DMZs, etc.), the value for the server inside the IOR file is not something we can use. We have to modify it. However, the IOR string is encoded.
What does Visibroker provide that will let me decode the IOR string, change one or more values, then re-encode it and continue on as normal?
I've already looked into IORInterceptors and URL Naming but I don't think either will do the trick.
Thanks in advance!
When you feel like you need to hack an IOR, resist the urge to do so by writing code and whatnot to mangle it to your liking. IORs are meant to be created and dictated by the server that contains the referenced objects, so the moment you start mucking around in there, you're kinda "voiding your warranty".
Instead, spend your time finding the right way to make the IOR usable in your environment by having the server use an alternative hostname when it generates them. Most ORBs offer such a feature. I don't know Visibroker's particular configuration options at all, but a quick Google search revealed this page that shows a promising value:
vbroker.se.iiop_ts.host
Specifies the host name used by this server engine.
The default value, null, means use the host name from the system.
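For example, assuming the usual VisiBroker-for-Java convention of passing vbroker.* settings as system properties (check your edition's configuration docs; the host name and class are placeholders):
java -Dvbroker.se.iiop_ts.host=gateway.example.com MyServer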
Hope that helps.
A long time ago I wrote the IorParser for GNU Classpath; the code is available. It is a normal parser, written with awareness of the format, so it should not "void the warranty", I think. An IOR contains multiple tagged profiles that are encapsulated very much like XML, so we can parse/modify the profiles we need and understand, and leave the rest untouched.
The profile we need to parse is TAG_INTERNET_IOP. It contains the version number, host, port and object key. Code that reads and writes this profile can be found in the gnu.IOR class. I am sorry that this is part of the system library and not a nice piece of code to copy-paste here, but it should not be very difficult to rip it out with a couple of dependent classes.
This question has been asked repeatedly, for example as CORBA :: Get the client ORB address and port with use of IIOP.
Use the FixIOR tool (binary) from jacORB to patch the address and port of an IOR. Download the binary (unzip it) and run:
fixior <new-address> <new-port> <ior-file>
The tool will overwrite the content of the IOR file with the 'patched' IOR.
You can use IOR Parser to check the resulting IOR and compare it to your original IOR
Use this function to change the IOR; pass the stringified IOR as the first argument.
#include <cstring>
#include <iostream>
using std::cout;
using std::endl;

void hackIOR(const char* str, char* newIOR)
{
    size_t s = (str ? strlen(str) : 0);
    char temp[1000];
    strcpy(newIOR, "IOR:");
    const char *p = str;
    s = (s - 4) / 2; // how many octets are there in the string
    p += 4;          // skip the "IOR:" prefix
    int i;
    // Decode the hex string into raw octets.
    for (i = 0; i < (int)s; i++) {
        int j = i * 2;
        char v = 0;
        if (p[j] >= '0' && p[j] <= '9') {
            v = ((p[j] - '0') << 4);
        }
        else if (p[j] >= 'a' && p[j] <= 'f') {
            v = ((p[j] - 'a' + 10) << 4);
        }
        else if (p[j] >= 'A' && p[j] <= 'F') {
            v = ((p[j] - 'A' + 10) << 4);
        }
        else
            cout << "invalid octet" << endl;
        if (p[j+1] >= '0' && p[j+1] <= '9') {
            v += (p[j+1] - '0');
        }
        else if (p[j+1] >= 'a' && p[j+1] <= 'f') {
            v += (p[j+1] - 'a' + 10);
        }
        else if (p[j+1] >= 'A' && p[j+1] <= 'F') {
            v += (p[j+1] - 'A' + 10);
        }
        else
            cout << "invalid octet" << endl;
        temp[i] = v;
    }
    temp[i] = 0;
    // Now temp holds the decoded IOR octets. Print it if you like.
    // Replace the object ID in temp here.
    // Then encode it back with the following code.
    int temp1, temp2;
    int l, k;
    for (k = 0, l = 4; k < (int)s; k++)
    {
        temp1 = temp2 = temp[k];
        temp1 &= 0x0F;               // low nibble
        temp2 = (temp2 & 0xF0) >> 4; // high nibble
        if (temp2 >= 0 && temp2 <= 9)
        {
            newIOR[l++] = temp2 + '0';
        }
        else if (temp2 >= 10 && temp2 <= 15)
        {
            newIOR[l++] = temp2 + 'A' - 10;
        }
        if (temp1 >= 0 && temp1 <= 9)
        {
            newIOR[l++] = temp1 + '0';
        }
        else if (temp1 >= 10 && temp1 <= 15)
        {
            newIOR[l++] = temp1 + 'A' - 10;
        }
    }
    newIOR[l] = 0;
    // The new, re-encoded IOR is now in newIOR.
}
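A possible call site (hypothetical; orb and obj are whatever your application already holds, and the buffer must be large enough for the re-encoded string):
char newIOR[1000];
CORBA::String_var ior = orb->object_to_string(obj); // or read the stringified IOR from the file
hackIOR(ior.in(), newIOR);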
Hope this works for you.