Can the PDB module of Biopython handle CONECT records - biopython

I'm investigating using Biopython to process PDB files, but it looks to me like CONECT records are not handled. For instance, I want to use it to extract a ligand from a PDB structure, and the atoms (HETATM) are written out but the corresponding CONECT records are lost. Is there a way to include these?
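For context, here is roughly how such an extraction looks with Bio.PDB today; the file names and the HEM ligand are just placeholders. The HETATM lines are written by PDBIO, but nothing carries the CONECT records across:

from Bio.PDB import PDBParser, PDBIO, Select

class LigandSelect(Select):
    """Keep only hetero residues with the given residue name."""
    def __init__(self, resname):
        self.resname = resname

    def accept_residue(self, residue):
        hetfield = residue.id[0]  # e.g. "H_HEM" for a HETATM residue
        return hetfield.startswith("H_") and residue.get_resname() == self.resname

structure = PDBParser(QUIET=True).get_structure("prot", "input.pdb")
io = PDBIO()
io.set_structure(structure)
# Writes the ligand's HETATM records; the original CONECT records are dropped.
io.save("ligand_only.pdb", LigandSelect("HEM"))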

https://github.com/biopython/biopython/issues/3468:
Feature Request: Support for CONECT Records #3468
Open
zacharyrs opened this issue on 15 Jan · 4 comments
zacharyrs commented on 15 Jan
Hey all,
This isn't so much a feature request but a query to gauge interest.
I noticed in BioJava, there is built in support within the PDBParser for CONECT records (see here).
In contrast, BioPython lacks this, with the only related issue being here.
I'm just curious if there would be interest in me PR'ing with an implementation akin to BioJava, or if this is something I should just handle independently?
Cheers!
Member
JoaoRodrigues commented on 15 Jan
Hi @zacharyrs
Thanks for opening the issue. CONECT records are a PDB-specific feature and have no correspondence to the newer (and now standard) mmCIF format. That said, I don't think there are any objections to adding them to the current parser, if you are so inclined! The CONECT record format parsed by BioJava is not standard either, by the way. There are no fields for the type of bond in the original format specification.
Author
zacharyrs commented on 17 Jan
So I've read into the source for BioJava CONECT record handling, and also how it handles mmCIF with the bonds list. I'm thinking of implementing a connects instance attribute on the structure, and only reading/writing the CONECTs per the PDB spec (no bond type information).
The biggest hold-up would be the copy methods: if you copy the structure you should retain the CONECTs, but for a chain, residue, or atom copy I think you'd lose them, though I believe this is similar to BioJava.
Old approach thoughts...
With a second day's thought and an implementation attempt, I've realised a few issues with the above (hidden) approaches.
My current approach (at zacharyrs:biopython/feature/conect_record_support) implements a Conect object, that takes two atom serial numbers, and the bond order.
When parsing, for each record I iterate through the bonded atoms and create individual Conect instances under the connects list attribute on the structure. For records that share a prior record's atom serial numbers, I just increment the bond order. When writing, I construct a temporary dictionary, as shown below, then just loop through and repeat records according to the order.
{
    atom_a_serial: {
        bond_order: [atom_b_serial]
    }
}
This works to read and then write a PDB whilst preserving the CONECTs; however, it does not behave well if you modify the structure, as there's no link between the connects and the actual atoms. I could store the bond on the actual atoms, akin to mmCIF under BioJava, but it might just add a lot of bloat for people who don't need the CONECT records...
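For illustration, a minimal Python sketch of the approach described above (the Conect name is the one discussed here, the column layout follows the PDB CONECT spec; this is not code from the linked branch): each CONECT line lists one atom serial followed by up to four bonded serials, and a repeated pair bumps the bond order.

class Conect:
    """Two atom serial numbers plus a bond order."""
    def __init__(self, serial_a, serial_b, order=1):
        self.serial_a, self.serial_b, self.order = serial_a, serial_b, order

def parse_conect_lines(lines):
    conects = {}
    for line in lines:
        if not line.startswith("CONECT"):
            continue
        # Columns 7-11 hold the central atom, 12-31 up to four bonded atoms.
        serials = [int(line[i:i + 5]) for i in range(6, 31, 5) if line[i:i + 5].strip()]
        central, partners = serials[0], serials[1:]
        for other in partners:
            key = (central, other)
            if key in conects:
                conects[key].order += 1  # repeated pair => higher bond order
            else:
                conects[key] = Conect(central, other)
    return list(conects.values())

Writing would then rebuild the {atom_a_serial: {bond_order: [atom_b_serial]}} dictionary shown above and emit each record as many times as its order.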
Member
JoaoRodrigues commented on 22 Mar
Hi @zacharyrs
Sorry for missing this follow-up. Implementing CONECT statements properly is hard because you'd need to keep track of who is connected to whom. The simplest way is to add a dictionary of pairs of atom indexes, but if you delete or add atoms, you risk mangling this info. Also, again, mmCIF files do not have an equivalent of CONECT statements.
You could, although I'm not sure how much I personally like that approach, just have a "bonds" attribute per atom that stores references to the atom objects it is bonded to. That way you wouldn't need to worry about indexes, but you'd have to have a way to clean up these lists when you delete atoms, otherwise you would have references left behind and the garbage collector wouldn't be happy about it.
Author
zacharyrs commented on 22 Mar
Hi @JoaoRodrigues - no worries, it's been a busy time for me too!
I ended up actually shifting to mmtf files, given they hold the bond information, so this has been dropped a little from my mind. I implemented a custom parser that retains all information and can serialise and deserialise easily, but that's a closed source project right now.
For the mmCIF/PDB approach, the bonds/conects attribute on the atom seems like the only feasible way to handle it. This is basically what I've done for mmtf files in my parser. I have a Bond instance that stores bond order and atom references, akin to the Conect instance described/implemented above. There's a helper on the Bond class that deletes it from both atoms, which I can then call as needed.
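A rough Python sketch of that Bond idea (the names are illustrative, not the actual closed-source implementation): the bond registers itself on both atoms, and the helper detaches it from both sides so nothing dangles when atoms are removed.

class Bond:
    """Holds the bond order plus references to the two bonded atoms."""
    def __init__(self, atom_a, atom_b, order=1):
        self.atom_a, self.atom_b, self.order = atom_a, atom_b, order
        atom_a.bonds.append(self)  # assumes each atom object carries a `bonds` list
        atom_b.bonds.append(self)

    def delete(self):
        # Remove this bond from both atoms so no stale references are left behind.
        self.atom_a.bonds.remove(self)
        self.atom_b.bonds.remove(self)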

Related

COBOL - Microfocus - Generic I/O

I am responsible for converting an old UNIX based COBOL batch application that was developed by a consultant back in the 1990s to a Windows environment but still in COBOL using Microfocus (Eclipse, etc).
This is a pretty straight-forward task except for one little glitch.
The old application never did any explicit file handling within the COBOL. That is, there are no FDs, OPENs, READs, WRITEs or CLOSEs in the COBOL programs. Instead, they wrote a C program that performs one of those functions based on parameters passed to it (including, but not limited to, the file name, record length, and the function desired).
I would like to rewrite that subroutine in COBOL, which would require very little modifications to the COBOL main programs being converted. That is, it would still call that subroutine, but it would now be in COBOL instead of C.
But the challenge is how to write that subroutine so that it can act on almost any file. I would think I have to go the route of variable-length records, because they could literally be any length up to a to-be-determined maximum size, but it seems like that would be vulnerable to error (as it tries to open different types of files).
Does anybody have any experience with, or ideas on, a task like this? If not, I may have to go the blunt-force route of replacing each CALL to that subroutine with the specific COBOL command (OPEN, READ, etc.) that needs to be performed, and obviously an FD and SELECT for every file would need to be added to the main program.
Thanks in advance.
You might be able to
CALL "subprogram" USING fd-name
where fd-name is
FD fd-name.
...
So, yes? Maybe? You might be able to pull off a subprogram that can take generic COBOL files. But then you get into matching record layouts and other fun things, so be wary. This might not work COBOL to COBOL, but it does work COBOL to C and back, as you end up passing a reference to the file control block.
You'll likely be better off looking into the stock system libraries, things like CBL_OPEN_FILE and CBL_READ_FILE, if they are available. These will give you a much closer match to the streaming I/O that is presumably assumed in the current C subprogram.
Or, as Bill is suggesting in the comments, try and figure out why C was used and if you don't want the foreign functions, just dig in and write new COBOL procedures, as that will likely read better in the end.

Profanity filter import

I am looking to write a basic profanity filter in a Rails-based application. This will use a simple search-and-replace mechanism whenever the appropriate attribute gets submitted by a user. My question is, for those who have written these before, is there a CSV file or some database out there from which a list of profanity words can be imported into my database? We are supplying the words that we will replace the profanities with ourselves. We more or less need a database of profanities, racial slurs and anything that's not exactly rated PG-13 that should trigger the filter.
As the Tin Man suggested, this problem is difficult, but it isn't impossible. I've built a commercial profanity filter named CleanSpeak that handles everything mentioned above (leet speak, phonetics, language rules, whitelisting, etc). CleanSpeak is capable of filtering 20,000 messages per second on a low end server, so it is possible to build something that works well and performs well. I will mention that CleanSpeak is the result of about 3 years of on-going development though.
There are a few things I tell everyone that is looking to try and tackle a language filter.
Don't use regular expressions unless you have a small list and don't mind a lot of things getting through. Regular expressions are relatively slow overall and hard to manage.
Determine if you want to handle conjugations, inflections and other language rules. These often add a considerable amount of time to the project.
Decide what type of performance you need and whether or not you can make multiple passes on the String. The more passes you make, the slower your filter will be.
Understand the Scunthorpe and clbuttic problems and determine how you will handle them. This usually requires some form of language intelligence and whitelisting.
Realize that whitespace has a different meaning now. You can't use it as a word delimiter any more (b e c a u s e of this)
Be careful with your handling of punctuation because it can be used to get around the filter (l.i.k.e th---is)
Understand how people use ASCII art and Unicode to replace characters (\/ = v - those are slashes). There are a lot of Unicode characters that look like English characters and you will want to handle those appropriately.
Understand that people make up new profanity all the time by smashing words together (likethis) and figure out if you want to handle that.
You can search around StackOverflow for my comments on other threads as I might have more information on those threads that I've forgotten here.
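To make the normalization ideas above concrete, here is a deliberately naive Python sketch (the word list and leet map are placeholders, and it still splits on whitespace, so it does not handle the spaced-out-letters case):

import re

BANNED = {"darn", "heck"}  # placeholder list; use a real source such as the CMU list below
LEET = str.maketrans({"4": "a", "@": "a", "3": "e", "0": "o", "1": "i", "5": "s", "$": "s", "7": "t"})

def normalize(token):
    # Lower-case, undo common leet substitutions, strip punctuation used to break words (l.i.k.e th---is).
    return re.sub(r"[^a-z]", "", token.lower().translate(LEET))

def filter_text(text, replacement="****"):
    return " ".join(replacement if normalize(tok) in BANNED else tok for tok in text.split())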
Here's one you could use: Offensive/Profane Word List from CMU site
Based on personal experience, you do understand that it's an exercise in futility?
If someone wants to inject profanity, there's a slew of words that are innocent in one context, and profane in another so you'll have to write a context parser to avoid black-listing clean words. A quick glance at CMU's list shows words I'd never consider rude/crude/socially unacceptable. You'll see there are many words that could be proper names or nouns, countries, terms of endearment, etc. And, there are myriads of ways to throw your algorithm off using L33T speak and such. Search Wikipedia and the internets and you can build tables of variations of letters.
Look at CMU's list and imagine how long the list would be if, in addition to the correct letter, every a could also be 4, o could be 0 or p, e could be 3, s could be 5. And, that's a very, very, short example.
I was asked to do a similar task and wrote code to generate L33T variations of the words, and generated a hit-list of words based on several profanity/offensive lists available on the internet. After running the generator, and being a little over 1/4 of the way through the file, I had over one million entries in my DB. I pulled the plug on the project at that point, because the time spent searching, even using Perl's Regexp::Assemble, was going to be ridiculous, especially since it'd still be so easy to fool.
I recommend you have a long talk with whoever requested that, and ask if they understand the programming issues involved, and low-likelihood of accuracy and success, especially over the long-term, or the possible customer backlash when they realize you're censoring them.
I have one that I've added to (obfuscated a bit) but here it is: https://github.com/rdp/sensible-cinema/blob/master/lib/subtitle_profanity_finder.rb

Fast Search to see if a String Exists in Large Files with Delphi

I have a FindFile routine in my program which will list files, but if the "Containing Text" field is filled in, then it should only list files containing that text.
If the "Containing Text" field is entered, then I search each file found for the text. My current method of doing that is:
var
  FileContents: TStringList;
begin
  FileContents := TStringList.Create;
  try
    FileContents.LoadFromFile(Filepath);
    Found := Pos(TextToFind, FileContents.Text) > 0;
  finally
    FileContents.Free;
  end;
The above code is simple, and it generally works okay. But it has two problems:
It fails for very large files (e.g. 300 MB)
I feel it could be faster. It isn't bad, but why wait 10 minutes searching through 1000 files, if there might be a simple way to speed it up a bit?
I need this to work for Delphi 2009 and to search text files that may or may not be Unicode. It only needs to work for text files.
So how can I speed this search up and also make it work for very large files?
Bonus: I would also want to allow an "ignore case" option. That's a tougher one to make efficient. Any ideas?
Solution:
Well, mghie pointed out my earlier question How Can I Efficiently Read The First Few Lines of Many Files in Delphi, and as I answered, it was different and didn't provide the solution.
But he got me thinking that I had done this before, and I had. I had built a block-reading routine for large files that breaks them into 32 MB blocks. I use that to read the input file of my program, which can be huge. The routine works fine and fast. So step one is to do the same for these files I am looking through.
So now the question was how to efficiently search within those blocks. Well I did have a previous question on that topic: Is There An Efficient Whole Word Search Function in Delphi? and RRUZ pointed out the SearchBuf routine to me.
That solves the "bonus" as well, because SearchBuf has options which include Whole Word Search (the answer to that question) and MatchCase/noMatchCase (the answer to the bonus).
So I'm off and running. Thanks once again SO community.
The best approach here is probably to use memory mapped files.
First you need a file handle, use the CreateFile windows API function for that.
Then pass that to CreateFileMapping to get a file mapping handle. Finally use MapViewOfFile to map the file into memory.
To handle large files, MapViewOfFile is able to map only a certain range into memory, so you can e.g. map the first 32MB, then use UnmapViewOfFile to unmap it followed by a MapViewOfFile for the next 32MB and so on. (EDIT: as was pointed out below, make sure that the blocks you map this way overlap by a multiple of 4kb, and at least as much as the length of the text you are searching for, so that you are not overlooking any text which might be split at the block boundary)
To do the actual searching once (part of) the file is mapped into memory, you can make a copy of the source for StrPosLen from SysUtils.pas (it's unfortunately defined in the implementation section only and not exposed in the interface). Leave one copy as-is and make another copy, replacing Wide with Ansi every time. Also, if you want to be able to search in binary files which might contain embedded #0's, you can remove the "(Str1[I] <> #0) and" part.
Either find a way to identify if a file is ANSI or Unicode, or simply call both the Ansi and Unicode version on each mapped part of the file.
Once you are done with each file, make sure to call CloseHandle, first on the file mapping handle and then on the file handle. (And don't forget to call UnmapViewOfFile first.)
EDIT:
A big advantage of using memory mapped files instead of using e.g. a TFileStream to read the file into memory in blocks is that the bytes will only end up in memory once.
Normally, on file access, Windows first reads the bytes into the OS file cache, then copies them from there into the application's memory.
If you use memory mapped files, the OS can directly map the physical pages from the OS file cache into the address space of the application without making another copy (reducing the time needed for the copy and halving memory usage).
Bonus Answer: By calling StrLIComp instead of StrLComp you can do a case insensitive search.
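Not Delphi, but here is a short Python sketch of the same block-mapping idea (names are mine): 32 MB windows with an overlap of at least the needle length, so a match straddling a block boundary is not missed.

import mmap
import os

BLOCK = 32 * 1024 * 1024  # 32 MB, as in the description above

def file_contains(path, needle, ignore_case=False):
    """Return True as soon as `needle` (bytes) is found in the file."""
    if ignore_case:
        needle = needle.lower()
    size = os.path.getsize(path)
    if size == 0 or not needle:
        return False
    overlap = len(needle) - 1
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        start = 0
        while start < size:
            chunk = view[start:min(start + BLOCK + overlap, size)]  # copy of just this window
            if ignore_case:
                chunk = chunk.lower()
            if needle in chunk:
                return True  # stop at the first match, as suggested elsewhere in this thread
            start += BLOCK
    return False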
If you are looking for text string searches, look at the Boyer-Moore search algorithm. It uses memory mapped files and a really fast search engine. There are some Delphi units around that contain implementations of this algorithm.
To give you an idea of the speed - I currently search through 10-20 MB files and it takes on the order of milliseconds.
Oh, I just read that it might be Unicode - not sure if it supports that - but definitely look down this path.
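For reference, here is a small Python implementation of the Boyer-Moore-Horspool variant (a simplification of Boyer-Moore that keeps only the bad-character skip table); this is my own sketch, not one of the Delphi units mentioned above.

def horspool_find(haystack, needle):
    """Return the index of bytes `needle` in bytes `haystack`, or -1."""
    m, n = len(needle), len(haystack)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Skip table: how far we can shift when the byte aligned with the pattern end mismatches.
    shift = {needle[i]: m - 1 - i for i in range(m - 1)}
    i = m - 1  # haystack index aligned with the last byte of the pattern
    while i < n:
        k = 0
        while k < m and haystack[i - k] == needle[m - 1 - k]:
            k += 1
        if k == m:
            return i - m + 1
        i += shift.get(haystack[i], m)
    return -1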
This is a problem connected with your previous question How Can I Efficiently Read The First Few Lines of Many Files in Delphi, and the same answers apply. If you don't read the files completely but in blocks then large files won't pose a problem. There's also a big speed-up to be had for files containing the text, in that you should cancel the search upon the first match. Currently you read the whole files even when the text to be found is in the first few lines.
May I suggest a component? If so, I would recommend ATStreamSearch.
It handles ANSI and UNICODE (and even EBCDIC and Korean and more).
Or the class TUTBMSearch from JclUnicode (Jedi-JCL). It was mainly written by Mike Lischke (VirtualTreeview). It uses a tuned Boyer-Moore algorithm that ensures speed. The downside in your case is that it works entirely in Unicode (WideStrings), so the conversion from String to WideString risks being a penalty.
It depends on what kind of data you are going to search. To achieve really efficient results you will need to let your program parse the interesting directories, including all the files in them, and keep the data in a database which you can query each time for a specific word in a specific list of files generated from the search path. A database statement can provide you results in milliseconds.
The issue is that you will have to let it run and parse all the files after installation, which may take more than an hour depending on the amount of data you wish to parse.
This database should be updated each time your program starts; this can be done by comparing the MD5 value of each file to see whether it has changed, so you don't have to parse all your files every time.
This way of working is interesting if you have all your data in a constant place and you analyse the same files repeatedly rather than totally new files each time; some code analysers work like this and they are really efficient. So you invest some time in parsing and saving interesting data, and afterwards you can jump to the exact place where a search word appears and provide a list of all the places it appears in, in a very short time.
If the files are to be searched multiple times, it could be a good idea to use a word index.
This is called "Full Text Search".
It will be slower the first time (text must be parsed and indexes must be created), but any future search will be immediate: in short, it will use only the indexes, and not read all text again.
You have the exact parser you need in The Delphi Magazine Issue 78, February 2002:
"Algorithms Alfresco: Ask A Thousand Times
Julian Bucknall discusses word indexing and document searches: if you want to know how Google works its magic this is the page to turn to."
There are several FTS implementations for Delphi:
Rubicon
Mutis
ColiGet
Google is your friend..
I'd like to add that most DBs have an embedded FTS engine. SQLite3 even has a very small but efficient implementation, with page ranking and such.
We provide direct access from Delphi, with ORM classes, to this Full Text Search engine, named FTS3/FTS4.
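As a quick illustration of the full-text-search route, here is a minimal Python sketch against SQLite (it assumes your SQLite build includes the FTS5 module; FTS3/FTS4 work along the same lines):

import sqlite3

con = sqlite3.connect("index.db")
con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(path, body)")

def index_file(path):
    with open(path, encoding="utf-8", errors="replace") as f:
        con.execute("INSERT INTO docs (path, body) VALUES (?, ?)", (path, f.read()))
    con.commit()

def search(term):
    # The MATCH query hits the prebuilt index, so repeated searches are near-instant.
    return [row[0] for row in con.execute("SELECT path FROM docs WHERE docs MATCH ?", (term,))]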

Delphi TStringList wrapper to implement on-the-fly compression

I have an application for storing many strings in a TStringList. The strings will be largely similar to one another and it occurs to me that one could compress them on the fly - i.e. store a given string in terms of a mixture of unique text fragments plus references to previously stored fragments. StringLists such as lists of fully-qualified path and filenames should be able to be compressed greatly.
Does anyone know of a TStringList descendant that implements this - i.e. provides read and write access to the uncompressed strings but stores them internally compressed, so that TStringList.SaveToFile produces a compressed file?
While you could implement this by uncompressing the entire stringlist before each access and re-compressing it afterwards, it would be unnecessarily slow. I'm after something that is efficient for incremental operations and random "seeks" and reads.
TIA
Ross
I don't think there's any freely available implementation around for this (not that I know of anyway, although I've written at least 3 similar constructs in commercial code), so you'd have to roll your own.
The remark Marcelo made about adding items in order is very relevant, as I suppose you'll probably want to compress the data at addition time - having quick access to entries already similar to the one being added gives much better performance than having to look up a 'best fit' entry (needed for similarity-compression) over the entire set.
Another thing you might want to read up on is 'ropes' - a conceptually different type than strings, which I already suggested to Marco Cantu a while back. At the cost of a next-pointer per 'twine' (for lack of a better word) you can concatenate parts of a string without keeping any duplicate data around. The main problem is how to retrieve the parts that can be combined into a new 'rope' representing your original string. Once that problem is solved, you can reconstruct the data as a string at any time, while still having compact storage.
If you don't want to go the 'rope' route, you could also try something called 'prefix reduction', which is a simple form of compression - just start each string with an index of a previous string and the number of characters that should be treated as a prefix for the new string. Be aware that you should not recurse this too far back, or access speed will suffer greatly. In one simple implementation, I did a mod 16 on the index to establish the entry at which prefix-reduction started, which gave me on average about 40% memory savings (this number is completely data-dependent, of course).
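Here is a tiny Python sketch of that prefix-reduction idea (including the mod-16 anchoring), just to make the storage scheme concrete; it is not production code.

class PrefixReducedList:
    """Each entry stores (index of the previous entry, shared prefix length, suffix)."""
    def __init__(self, anchor_every=16):
        self._rows = []
        self._anchor_every = anchor_every

    def append(self, s):
        if len(self._rows) % self._anchor_every == 0:
            self._rows.append((-1, 0, s))  # periodic full copy keeps reconstruction chains short
            return
        prev_index = len(self._rows) - 1
        prev = self.get(prev_index)
        k = 0
        while k < min(len(prev), len(s)) and prev[k] == s[k]:
            k += 1
        self._rows.append((prev_index, k, s[k:]))

    def get(self, index):
        base, k, suffix = self._rows[index]
        if base < 0:
            return suffix
        return self.get(base)[:k] + suffix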
You could try to wrap a Delphi or COM API around Judy arrays. The JudySL type would do the trick, and has a fairly simple interface.
EDIT: I assume you are storing unique strings and want to (or are happy to) store them in lexicographical order. If these constraints aren't acceptable, then Judy arrays are not for you. Mind you, any compression system will suffer if you don't sort your strings.
I suppose you expect general flexibility from the list (including delete operations); in that case I don't know of any out-of-the-box solution, but I'd suggest one of two approaches:
You split your strings into words and keep a separate, growing dictionary to reference the words, saving a list of indexes internally.
You implement something based on the zlib stream available in Delphi, but operating on blocks that contain, for example, 10-100 strings. In this case you still have to decompress/recompress a complete block, but the "price" you pay is lower.
I don't think you really want to compress TStrings items in memory, because it's terribly inefficient. I suggest you look at the TStream implementations in the ZLib unit. Just wrap a regular stream in a TDecompressionStream on load and a TCompressionStream on save (you can even emit a gzip header there).
Hint: you will want to override LoadFromStream/SaveToStream instead of LoadFromFile/SaveToFile
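The same trick in a few lines of Python, just to show the shape of it (gzip stands in for Delphi's TCompressionStream/TDecompressionStream; the file name is arbitrary):

import gzip

def save_strings(strings, path):
    # Compress on the way out; the gzip header is emitted for us.
    with gzip.open(path, "wt", encoding="utf-8", newline="\n") as f:
        for s in strings:
            f.write(s + "\n")

def load_strings(path):
    # Decompress transparently on the way in.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]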

How to understand an EDI file?

I've seen XML before, but I've never seen anything like EDI.
How do I read this file and get the data that I need? I see things like ~, REF, N1, N2, N4 but have no idea what any of this stuff means.
I am looking for Examples and Documentations.
Where can I find them?
Also, the EDI guide I found says that it is based on "ANSI ASC X12 ver. 4010".
Should I search for X12?
Kindly help.
Several of these other answers are very good. I'll try to fill in some things they haven't mentioned.
EDI is a set of standards, the most common of which are:
ANSI X12 (popular in the states)
EDIFACT (popular in Europe)
Sounds like you're looking at X12 version 4010. That's the most widely used (in my experience, anyway) version. There are lots and lots of different versions.
The file, or properly "interchange," is made up of Segments and Elements (and sometimes sub-elements). Each segment begins with a two- or three-character identifier (ISA, GS, ST, N1, REF).
The structure for all documents begins and ends with an envelope. The envelope is usually made up of the ISA segment and the GS segments. There can be more than one GS segment per file, but there should only be one ISA segment per file (note the should, not everyone plays by the rules).
The ISA is a special segment. Whereas all the other segments are delimited, and can therefore be of varying lengths, the ISA segment is of fixed width. This is because it tells you how to read the rest of the file.
Start with the last three characters of the ISA segment. Those will tell you the element delimiter, the sub-element delimiter, and the segment delimiter. Here's an example ISA line.
ISA:00: :00: :01:1515151515 :01:5151515151 :041201:1217:U:00403:000032123:0:P:*~
In this case, the ":" is the element delimiter, "*" is a subelement delimiter, and "~" the segment delimiter. It's much easier if you're just trying to look at a file to put linebreaks after each segment delimiter (~).
The ISA also tells you who the document is from and to, what the version is (00403, which is also known as 4030), and the interchange control number (000032123). The other stuff is probably not important to you at this stage.
This document is from sender "01:1515151515" and to receiver "01:5151515151". So what's with the "01:"? Well, this introduces an important concept in EDI, the qualifier. Several elements have qualifiers, which tell you what type of data the next element is. In this case, the 01 is supposed to be a Dun & Bradstreet number. Other qualifiers for the ISA05 and ISA07 elements are 12 for phone number, and ZZ for "user defined". You'll find the concept of qualifiers all over EDI segments. A decent rule of thumb is that if it's two characters, it's a qualifier. In order to know what all the qualifiers mean, you'll need a standards guide (either in hard copy from the EDI standards body, or in some software).
The next line is the GS. This is a functional group (a way to group like documents together within an interchange.) For instance, you can have several purchase orders, and several functional acknowledgements within an ISA. These should be placed in separate functional groups (GS segments). You can figure out what type of documents are in a GS segment by looking at the first GS01 element.
GS:PO:9988776655:1122334455:20041201:1217:128:X:004030
Besides the document type, you can see the from (9988776655) and to (1122334455) again. This time they're using different identifiers, which is legal, because you may be receiving an interchange on behalf of someone else (if you're an intermediary, for instance). You can also see the version number again, this time with a trailing "0" (004030). Use significant-digits logic to strip off the leading zeros. Why is there an extra zero here and not in the ISA? I don't know. Lastly, this GS segment also has its own identifier, 128.
That's it for the beginning of the envelope. After that there will be a loop of documents beginning with ST. In this case they'd all be POs, which have a code (850), so the line would start with ST:850:blablabla
The envelope stuff ends with a GE segment which references the GS identifier (128) so you know which segment is being closed. Then comes an IEA which similarly closes out the ISA.
GE:1:128~
IEA:1:000032123~
That's an overview of the structure and how to read it. To understand it you'll need a reference book or software so you understand the codes, lots and lots of time, and lots and lots of practice. Good luck, and post again if you have more specific questions.
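Here is a small Python sketch that follows the reading recipe above: take the delimiters from the last three characters of the fixed-width ISA segment (106 characters in a conforming X12 file), then split the interchange into segments and elements. It is a rough illustration, not a validating parser.

def split_interchange(raw):
    raw = raw.lstrip()
    if not raw.startswith("ISA") or len(raw) < 106:
        raise ValueError("expected an interchange starting with a full ISA segment")
    # Last three characters of the fixed-width ISA: element separator,
    # sub-element separator, and segment terminator.
    element_sep, subelement_sep, segment_term = raw[103], raw[104], raw[105]
    segments = []
    for seg in raw.split(segment_term):
        seg = seg.strip("\r\n ")
        if seg:
            segments.append(seg.split(element_sep))
    return segments

# Example: the GS01 element of every GS segment tells you the document type.
# for seg in split_interchange(open("sample.edi").read()):
#     if seg[0] == "GS":
#         print("functional group of type", seg[1])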
Wow, flashbacks. It's been over sixteen years ...
In principle, each line is a "segment", and the identifier at the beginning of the line is a segment identifier. Each segment contains "elements", which are essentially positional fields. They are delimited by "element delimiters".
Different segments mean different things, and can indicate looping constructs, repeats, etc.
You need to get a current version of the standard for the basic parsing, and then you need the data dictionary to describe the content of the document you are dealing with, and then you might need an industry profile, implementation guide, or similar to deal with the conventions for the particular document type in your environment.
Examples? Not current, but I'm sure you could find a whole bunch using your search engine of choice. Once you get the basic segment/element parsing done, you're dealing with your application level data, and I don't know how much a general example will help you there.
EDI is a file format for structured text files, used by lots of larger organisations and companies for standard database exchange. It tends to be much shorter than XML which used to be great when data packets had to be small. Many organisations still use it, since many mainframe systems use EDI instead of XML.
With EDI messages, you're dealing with text messages that match a specific format. This would be similar to an XML schema, but EDI doesn't really have a standardized schema language. EDI messages themselves aren't really human-readable, while most specifications aren't really machine-readable. This is basically the advantage of XML, where both the XML and its schema can be read by humans and machines.
Chances are that when you're doing electronic banking through some client-side software (not browser-based) then you might already have several EDI files on your system. Banks still prefer EDI over XML to send over transaction data, although many also use their own custom text-based formats.
To understand EDI, you'll have to understand the data first, plus the EDI standard that you want to follow.
Assuming the data stream starts with “ISA”, towards the beginning there should be a section “~ST*” followed by three numeric digits. If you can post these three digits, I can probably provide you with more information. Also, knowing the industry would be helpful. For example, healthcare uses 270, 271, 276, 277 and a few others.
