Which Alphabet type should I use with FASTA files in Biopython?

If I'm using the FASTA files from the link below, what Alphabet type should I use in Biopython? Would it be IUPAC.unambiguous_dna?
link to FASTA files: http://hgdownload.cse.ucsc.edu/goldenPath/hg19/chromosomes/?C=S;O=A

Did you read section 3.1, Sequences and Alphabets, in the Biopython tutorial? It explains the different alphabets available and which cases they cover.
There are a lot of sequences in the link you provided (too many for us to pore over). My recommendation would be to just go with IUPAC.unambiguous_dna. If the four basic nucleotides aren't enough, the parser will complain, and you should pick a more extensive alphabet.
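As a rough sketch of how that looks in code (assuming an older Biopython release that still ships Bio.Alphabet, which was removed in Biopython 1.78, and using "chr21.fa" as a placeholder for one of the decompressed chromosome files):
# Sketch only: attach IUPAC.unambiguous_dna to records read from a FASTA file.
from Bio import SeqIO
from Bio.Alphabet import IUPAC

for record in SeqIO.parse("chr21.fa", "fasta", alphabet=IUPAC.unambiguous_dna):
    print(record.id, len(record.seq), record.seq.alphabet)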

Related

How to prepare data for weka in word sense disambiguation

I want to use Weka for word sense disambiguation. I prepared some files in which each line contains a Persian sentence, a tab, a Persian word, a tab, and then an English word. They are plain .txt files edited in Notepad++. Now how should I use these files with Weka? How should I change them?
The sample file:
https://www.dropbox.com/s/o7wtvrvkiir80la/F.txt?dl=0
I found it. The files should have the same number of columns, so I put each sentence in quotation marks, then a comma, and then the English word in quotation marks. Above these lines, we should declare the proper relation and attributes.
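As a rough sketch of that conversion (the file and attribute names here are placeholders, and it assumes the tab-separated layout described in the question is saved as UTF-8), a small Python script could rewrite the file into Weka's ARFF format:
# Sketch only: convert the tab-separated file described above into an ARFF
# file for Weka. File and attribute names are placeholders.
import csv

with open("F.txt", encoding="utf-8") as src, \
     open("F.arff", "w", encoding="utf-8") as dst:
    dst.write("@relation word_sense\n\n")
    dst.write("@attribute sentence string\n")
    dst.write("@attribute persian_word string\n")
    dst.write("@attribute english_word string\n\n")
    dst.write("@data\n")
    for row in csv.reader(src, delimiter="\t"):
        if len(row) == 3:  # skip malformed lines
            quoted = ['"%s"' % field.replace('"', '\\"') for field in row]
            dst.write(",".join(quoted) + "\n")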

How to read a text file in ancient encoding?

There is a public project called Moby containing several word lists. Some files contain symbols from European alphabets and were created in pre-Unicode times. The readme, dated 1993, reads:
"Foreign words commonly used in English usually include their
diacritical marks, for example, the acute accent e is denoted by ASCII
142."
Wikipedia says that the last ASCII symbol has number 127.
For example, this file: http://www.gutenberg.org/files/3203/files/mobypos.txt contains symbols that I couldn't read in any of the various Latin encodings. (There are plenty of such symbols at the very end of the section of words beginning with B, just before the letter C.)
Could someone advise please what encoding should be used for reading this file or how can it be converted to some readable modern encoding?
A little research suggests that the encoding for this page is Mac OS Roman, which has é at position 142. Viewing the page you linked and changing the encoding (in Chrome, View → Encoding → Western (Macintosh)) seems to display all the words correctly (it is incorrectly reporting ISO-8859-1).
How you deal with this depends on the language / tools you are using. Here's an example of how you could convert it into UTF-8 with Ruby:
require 'open-uri'

# Fetch the raw bytes, reinterpret them as Mac OS Roman, then transcode to UTF-8.
s = open('http://www.gutenberg.org/files/3203/files/mobypos.txt').read
s.force_encoding('macroman')
s.encode!('utf-8')
You are right in that ASCII only goes up to position 127 (it’s a 7-bit encoding), but there are a large number of 8 bit encodings that are supersets of ASCII and people sometimes refer to those as “Extended ASCII”. It appears that whoever wrote the readme you refer to didn’t know about the variety of encodings and thought the one he happened to be using at the time was universal.
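As a quick check of that claim (a minimal sketch in Python; byte 142 is the value quoted from the readme above):
# Sketch: under Mac OS Roman, byte 142 (0x8E) decodes to "é", while bytes
# below 128 decode exactly as plain ASCII.
print(bytes([142]).decode("mac_roman"))          # é
print(bytes(range(65, 70)).decode("mac_roman"))  # ABCDE, identical to ASCII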
There isn’t a general solution to problems like this, as there is no guaranteed way to determine the encoding of some text from the text itself. In this case I just used Wikipedia to look through a few until I found one that matched. Joel Spolsky’s article The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) is a good place to start reading about character sets and encodings if you want to learn more.

Octave: saving figure with greek letters and subscripts

I'm currently trying to save a stress vs. strain curve using Octave. On this plot, I want to include text showing the equations for calculating engineering stress and engineering strain. Both of these require Greek letters (\sigma and \epsilon respectively) as well as subscripts for the formulae.
Currently, using print with -deps, -dpng, or any other device, it creates a file; however, the Greek letters appear as the words "sigma" and "epsilon", and wherever I have a subscript, such as 0, it just appears as "_0". This looks very unprofessional.
Since I'm generating some 25 graphs, I don't want to have to go through and do a screenshot for each one. Does octave support saving the generated figure as displayed? I intend to use the generated files in a LaTeX document later (preferably as png so I can email them separately too).
I've also tried changing the "graphics_toolkit" option between fltk and gnuplot however it doesn't seem to help.
Attached to this post is a screenshot of the desired results and the actual results.
I am currently "not allowed" to post images, so I'll link them:
http://i.imgur.com/Tjt5Ecn.png (screenshot, desired result) and http://i.imgur.com/SP3hekd.png (directly saved, actual result)
Does anyone know a good way to print a figure from Octave which includes Greek characters and subscripts in the titles?
Since you plan to use your graphs in a LaTeX document, generating them with -depslatex and converting them to PDF is a good idea. (Results look slightly better than direct -dpdflatex.)
With -depslatex, you can include LaTeX code in your figures; it will be written to a separate .tex file.
Note that you need to use double backslashes \\ to export a single backslash.
graphics_toolkit("gnuplot");
...
legend("$\\varepsilon$");
print(sprintf("graph%s_%d.eps", name, type), '-depslatex', '-S200,270', '-F:9');
system(sprintf("epstopdf graph%s_%d.eps", name, type));
On the LaTeX side, you then \input the .tex file generated by Octave. On the plus side, since you need 25 graphs, you can automate this process on both sides, Octave and LaTeX.
\newcommand{\mygraph}[1]{%
  \graphicspath{{./figures/}}
  \resizebox{0.495\linewidth}{!}{\relscale{1.0}\small%
    \input{./figures/#1.tex}
  }%
}
\mygraph{graph1_1}
Here, a LaTeX command \mygraph is defined to scale and include a figure located in a subfolder.
(I am using Octave 4.0.0 with gnuplot 4.4 on Ubuntu 12)

Parsing PDF files

I'm finding it difficult to parse a PDF file that's created in a non-English language. I used PDFBox and iText but couldn't find anything in there that could help parse this file. Here's the PDF file that I'm talking about: http://prapatti.com/slokas/telugu/vishnusahasranaamam.pdf The PDF says that it was created using LaTeX and the Tikkana font. I have the Tikkana font installed on my machine, but that didn't help. Please help me with this.
Thanks, K
When you say "parse PDF files", my first thought was that the PDF in question wasn't opening in various PDF viewers & libraries, and was therefore corrupt in some way.
But that's not the case at all. It opens just fine in Acrobat Reader X. And then I see the text on the page.
And when I copy/paste that text from the first page, I get:
Ûûp{¨¶ðQ{p{¨|={pÛû{¨>üb¶úN}l{¨d{p{¨> >Ûpû¶bp{¨}|=/}pT¶=}Nm{Z{Úpd{m}a¾Ú}mp{Ú¶¨>ztNð{øÔ_c}m{ТÁ}=N{Nzt¶ztbm}¥Ázv¬b¢Á
Á ÛûÁøÛûzÏrze¨=ztTzv}lÛzt{¨d¨c}p{Ðu{¨½ÐuÛ½{=Û Á{=Á Á ÁÛûb}ßb{q{d}p{¨ze=Vm{Ðu½Û{=Á
That's from Reader.
Much of the text in this PDF is written using various "Type 3" fonts. These fonts claim to use "WinAnsiEncoding" (Also Known As code page 1252), with a "differences" array. This differences array is wrong:
47 /BB 61 /BP /BQ 81 /C6...
The first number is the code point being replaced, the second is a Name of a character that replaces the original value at that code point.
There are no such character names as BB, BP, BQ, C6, and so on. So when you copy-paste that text, you get the garbage above.
I'm sorry, but the only reliable way to extract text from such a PDF is OCR (optical character recognition).
Eh... Long shot idea:
If you can find the specific versions of the specific fonts used to generate this PDF, you just might be able to determine the actual stream contents of known characters converted to Type 3 fonts in this way.
Once you have these known streams, you can compare them to the streams in the PDF and use that to build your own translation table.
You could either fix the existing PDF[s] (by changing the names in the encoding dictionary and Type 3 charproc entries) such that these text extractors will work correctly, or just grab the bytes out of the stream and translate them yourself.
The workflow would go something like this:
For each character in a font used in the form:
render it to PDF by itself using the same LaTeX/Ghostscript versions.
Open the PDF and find the CharProc for that particular known character.
Store that stream along with the known character used to build it.
For each text byte in the PDF to be interpreted:
Get the glyph name for the given byte based on the existing encoding array
Get the "char proc" stream for that glyph name and compare it to your known char procs.
NOTE: This could be rewritten to be much more efficient with some caching, but it gets the idea across (I hope).
All that requires a fairly deep understanding of PDF and the parsing methods involved. But it just might work. Might not too...
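A rough sketch of the stream-collection step in Python, using pikepdf (my own choice of library here, not something from the question; any toolkit with low-level PDF object access would do), which dumps the Type 3 CharProc streams from the target PDF so they can be compared against streams rendered from known characters:
# Sketch only (assumes pikepdf): walk the page resources and collect the
# content stream of every Type 3 glyph procedure, keyed by glyph name.
import pikepdf

char_procs = {}
with pikepdf.open("vishnusahasranaamam.pdf") as pdf:
    for page_number, page in enumerate(pdf.pages, start=1):
        fonts = page.Resources.get("/Font", {})
        for font_name, font in fonts.items():
            if font.get("/Subtype") != pikepdf.Name("/Type3"):
                continue
            for glyph_name, proc in font["/CharProcs"].items():
                # read_bytes() returns the decompressed glyph-drawing stream
                key = (page_number, str(font_name), str(glyph_name))
                char_procs[key] = proc.read_bytes()

# Peek at a few entries; matching these byte streams against known glyphs
# is the translation-table step described above.
for key, stream in list(char_procs.items())[:5]:
    print(key, len(stream), "bytes")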

How to understand an EDI file?

I've seen XML before, but I've never seen anything like EDI.
How do I read this file and get the data that I need? I see things like ~, REF, N1, N2, N4 but have no idea what any of this stuff means.
I am looking for Examples and Documentations.
Where can I find them?
Also, the EDI guide I found says that it is based on "ANSI ASC X12 / ver. 4010".
Should I search for X12?
Kindly help.
Several of these other answers are very good. I'll try to fill in some things they haven't mentioned.
EDI is a set of standards, the most common of which are:
ANSI X12 (popular in the states)
EDIFACT (popular in Europe)
Sounds like you're looking at X12 version 4010. That's the most widely used (in my experience, anyway) version. There are lots and lots of different versions.
The file, or properly "interchange," is made up of Segments and Elements (and sometimes subelements). Each segment begins with a two- or three-character identifier (ISA, GS, ST, N1, REF).
The structure for all documents begins and ends with an envelope. The envelope is usually made up of the ISA segment and the GS segments. There can be more than one GS segment per file, but there should only be one ISA segment per file (note the should, not everyone plays by the rules).
The ISA is a special segment. Whereas all the other segments are delimited, and therefore can be of varying lengths, the ISA segment is of fixed width. This is because it tells you how to read the rest of the file.
Start with the last three characters of the ISA segment. Those will tell you the element delimiter, the sub-element delimiter, and the segment delimiter. Here's an example ISA line.
ISA:00: :00: :01:1515151515 :01:5151515151 :041201:1217:U:00403:000032123:0:P:*~
In this case, the ":" is the element delimiter, "*" is a subelement delimiter, and "~" the segment delimiter. It's much easier if you're just trying to look at a file to put linebreaks after each segment delimiter (~).
The ISA also tells you who the document is from and to, what the version is (00403, which is also known as 4030), and the interchange control number (000032123). The other stuff is probably not important to you at this stage.
This document is from sender "01:1515151515" and to receiver "01:5151515151". So what's with the "01:"? Well, this introduces an important concept in EDI, the qualifier. Several elements have qualifiers, which tell you what type of data the next element is. In this case, the 01 is supposed to be a Dun & Bradstreet number. Other qualifiers for the ISA05 and ISA07 elements are 12 for phone number, and ZZ for "user defined". You'll find the concept of qualifiers all over EDI segments. A decent rule of thumb is that if it's two characters, it's a qualifier. In order to know what all the qualifiers mean, you'll need a standards guide (either in hard copy from the EDI standards body, or in some software).
The next line is the GS. This is a functional group (a way to group like documents together within an interchange). For instance, you can have several purchase orders and several functional acknowledgements within an ISA. These should be placed in separate functional groups (GS segments). You can figure out what type of documents are in a GS segment by looking at the GS01 element.
GS:PO:9988776655:1122334455:20041201:1217:128:X:004030
Besides the document type, you can see the from (9988776655) and to (1122334455) again. This time they're using different identifiers, which is legal, because you may be receiving an interchange on behalf of someone else (if you're an intermediary, for instance). You can also see the version number again, this time with a trailing "0" (004030). Use significant-digits logic to strip off the leading zeros. Why is there an extra zero here and not in the ISA? I don't know. Lastly, this GS segment also has its own identifier, 128.
That's it for the beginning of the envelope. After that there will be a loop of documents beginning with ST. In this case they'd all be POs, which have a code (850), so the line would start with ST:850:blablabla
The envelope stuff ends with a GE segment which references the GS identifier (128) so you know which segment is being closed. Then comes an IEA which similarly closes out the ISA.
GE:1:128~
IEA:1:000032123~
That's an overview of the structure and how to read it. To understand it you'll need a reference book or software so you understand the codes, lots and lots of time, and lots and lots of practice. Good luck, and post again if you have more specific questions.
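As a rough sketch of that first step in Python (nothing standard-specific; "interchange.edi" is a placeholder file name, and it assumes a standard 106-character fixed-width ISA segment as described above):
# Sketch only: derive the delimiters from the fixed-width ISA segment,
# then split the interchange into segments and elements.
with open("interchange.edi") as f:
    data = f.read().strip()

# The ISA segment is 106 characters long; its last three characters are the
# element delimiter, the sub-element delimiter, and the segment delimiter.
isa = data[:106]
element_delim, subelement_delim, segment_delim = isa[103], isa[104], isa[105]

for raw in data.split(segment_delim):
    segment = raw.strip()
    if not segment:
        continue
    elements = segment.split(element_delim)
    seg_id = elements[0]                  # ISA, GS, ST, N1, REF, ...
    if seg_id == "GS":
        print("functional group of type", elements[1])   # e.g. PO
    elif seg_id == "ST":
        print("transaction set", elements[1])            # e.g. 850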
Wow, flashbacks. It's been over sixteen years ...
In principle, each line is a "segment", and the identifier at the beginning of the line is the segment identifier. Each segment contains "elements", which are essentially positional fields. They are delimited by "element delimiters".
Different segments mean different things, and can indicate looping constructs, repeats, etc.
You need to get a current version of the standard for the basic parsing, and then you need the data dictionary to describe the content of the document you are dealing with, and then you might need an industry profile, implementation guide, or similar to deal with the conventions for the particular document type in your environment.
Examples? Not current, but I'm sure you could find a whole bunch using your search engine of choice. Once you get the basic segment/element parsing done, you're dealing with your application level data, and I don't know how much a general example will help you there.
EDI is a format for structured text files, used by lots of larger organisations and companies for standardised data exchange. It tends to be much more compact than XML, which used to be a big advantage when data packets had to be small. Many organisations still use it, since many mainframe systems use EDI instead of XML.
With EDI messages, you're dealing with text messages that match a specific format. This would be similar to an XML schema, but EDI doesn't really have a standardized schema language. EDI messages themselves aren't really human-readable, and most specifications aren't really machine-readable. This is basically the advantage of XML, where both the XML and its schema can be read by humans and machines.
Chances are that when you're doing electronic banking through some client-side software (not browser-based) then you might already have several EDI files on your system. Banks still prefer EDI over XML to send over transaction data, although many also use their own custom text-based formats.
To understand EDI, you'll have to understand the data first, plus the EDI standard that you want to follow.
Assuming the data stream starts with "ISA", towards the beginning there should be a section "~ST*" followed by three numeric digits. If you can post these three digits, I can probably provide you with more information. Also, knowing the industry would be helpful. For example, healthcare uses 270, 271, 276, 277 and a few others.
