from Bio.Blast import NCBIXML
from Bio.Blast import NCBIWWW
result_handle = NCBIWWW.qblast(
    "blastn",
    "nr",
    "CACTTATTTAGTTAGCTTGCAACCCTGGATTTTTGTTTACTGGAGAGGCC",
    entrez_query='"Beutenbergia cavernae DSM 12333" [Organism]')
blast_records = NCBIXML.parse(result_handle)
for blast_record in blast_records:
    for alignment in blast_record.alignments:
        for hsp in alignment.hsps:
            print(hsp.query[0:75] + '...')
            print(hsp.match[0:75] + '...')
            print(hsp.sbjct[0:75] + '...')
This does not give me any output, although the sequence really is part of that genome, so I should get a result.
Where is the error? Is the query correct?
Your query isn't returning any results because of the default BLAST parameters. The following parameters work better for short queries like this one:
result_handle = NCBIWWW.qblast(
    "blastn",
    "nr",
    "CACTTATTTAGTTAGCTTGCAACCCTGGATTTTTGTTTACTGGAGAGGCC",
    megablast=False,
    expect=1000,
    word_size=7,
    nucl_reward=1,
    nucl_penalty=-3,
    gapcosts="5 2",
    entrez_query='Beutenbergia cavernae DSM 12333 [Organism]')
In particular, the expect parameter plays a major role here.
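To confirm that the relaxed search actually returns hits, you can parse result_handle exactly as in your original loop; a minimal check might look like this:

blast_records = NCBIXML.parse(result_handle)
for blast_record in blast_records:
    # One record per query; count how many alignments came back
    print(f"{len(blast_record.alignments)} alignment(s) found")
    for alignment in blast_record.alignments:
        for hsp in alignment.hsps:
            print(alignment.title, hsp.expect)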
I want to write a function that reads a FASTA file with DNA sequences (possibly ambiguous), takes a subsequence as input, and returns the sequence IDs of all sequences that contain that subsequence.
To make the script more efficient, I tried to use nt_search to generate all possibilities of each ambiguous sequence from the FASTA file. This seemed more efficient than producing every unambiguous possibility, especially for larger sequences and FASTA files.
Right now, I'm struggling to see how I can check whether the subsequence is part of the output given by nt_search.
For example, I want to see whether 'CGC' (the input subsequence) is part of the possibilities given by nt_search, e.g. ['TA[GATC][AT][GT]GCGGT'], and return the sequence IDs of all sequences for which this is true.
What I have so far:
def bonus_subsequence(file, unambiguous_sequence):
    seq_records = SeqIO.parse(file, 'fasta', alphabet=ambiguous_dna)
    resultListOfSeqIds = []
    print(f'Unambiguous sequence {unambiguous_sequence} could be a subsequence of:')
    for record in seq_records:
        d = Seq.IUPAC.IUPACData.ambiguous_dna_values
        couldBeSubSequence = False
        if unambiguous_sequence in nt_search(unambiguous_sequence, record):
            couldBeSubSequence = True
        if couldBeSubSequence == True:
            print(f'{record.id}')
            resultListOfSeqIds.append({record.id})
In a second phase, I want to be able to also use this for ambiguous subsequences, but I'd be more than happy with help on this first question, thanks in advance!
I don't know if I understood you correctly, but you can try this:
Example fasta file:
>seq1
ATGTACGTACGTACNNNNACTG
>seq2
NNNATCGTAGTCANNA
>seq3
NNNNATGNNN
Code:
from Bio import SeqIO
from Bio import SeqUtils
from Bio.Alphabet.IUPAC import ambiguous_dna

if __name__ == '__main__':
    sub_seq = input('Enter a subsequence: ')
    results = []
    with open('test.fasta', 'r') as fh:
        for seq in SeqIO.parse(fh, 'fasta', alphabet=ambiguous_dna):
            if sub_seq in seq:
                results.append(seq.name)
    print('Results:', *results, sep='\n')
Results (console):
Enter a subsequence: ATG
Results:
seq1
seq3
Enter a subsequence: NNNA
Results:
seq1
seq2
seq3
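If, in your second phase, the subsequence itself may also be ambiguous, one possible sketch (the helper name here is only illustrative) is to let SeqUtils.nt_search expand the ambiguity codes of the subsequence into a regular expression and check whether it reports any match positions:

from Bio import SeqIO, SeqUtils
from Bio.Alphabet.IUPAC import ambiguous_dna

def ids_containing(file, sub_seq):
    # sub_seq may contain IUPAC ambiguity codes (e.g. R, Y, N)
    hits = []
    for record in SeqIO.parse(file, 'fasta', alphabet=ambiguous_dna):
        # nt_search returns [pattern, pos1, pos2, ...]; more than one
        # element means at least one match was found
        if len(SeqUtils.nt_search(str(record.seq), sub_seq)) > 1:
            hits.append(record.id)
    return hits

print(ids_containing('test.fasta', 'AYG'))  # e.g. ['seq1', 'seq3'] for the file above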
I want to download, in FASTA format, all the peptide sequences in the NCBI protein database (i.e. the > header with the peptide name, followed by the peptide sequence). I saw there is a MeSH term here describing what a peptide is, but I can't work out how to incorporate it.
I wrote this:
import Bio
from Bio import Entrez
Entrez.email = 'test#gmail.com'
handle = Entrez.esearch(db="protein", term="peptide")
record = handle.read()
out_handle = open('myfasta.fasta', 'w')
out_handle.write(record.rstrip('\n'))
but it only writes out 995 IDs, and no sequences, to the file. I'm wondering if someone could show me where I'm going wrong.
Note that a search for the term peptide in the NCBI protein database returns 8187908 hits, so make sure that you actually want to download the peptide sequences for all these hits into one big fasta file.
>>> from Bio import Entrez
>>> Entrez.email = 'test#gmail.com'
>>> handle = Entrez.esearch(db="protein", term="peptide")
>>> record = Entrez.read(handle)
>>> record["Count"]
'8187908'
The default number of records that Entrez.esearch returns is 20. This is to prevent overloading NCBI's servers.
>>> len(record["IdList"])
20
To get the full list of records, change the retmax parameter:
>>> count = record["Count"]
>>> handle = Entrez.esearch(db="protein", term="peptide", retmax=count)
>>> record = Entrez.read(handle)
>>> len(record['IdList'])
8187908
The way to download all the records is to use Entrez.epost.
From chapter 9.4 of the BioPython tutorial:
EPost uploads a list of UIs for use in subsequent search strategies; see the EPost help page for more information. It is available from Biopython through the Bio.Entrez.epost() function.
To give an example of when this is useful, suppose you have a long list of IDs you want to download using EFetch (maybe sequences, maybe citations – anything). When you make a request with EFetch your list of IDs, the database etc, are all turned into a long URL sent to the server. If your list of IDs is long, this URL gets long, and long URLs can break (e.g. some proxies don’t cope well).
Instead, you can break this up into two steps, first uploading the list of IDs using EPost (this uses an “HTML post” internally, rather than an “HTML get”, getting round the long URL problem). With the history support, you can then refer to this long list of IDs, and download the associated data with EFetch.
[...] The returned XML includes two important strings, QueryKey and WebEnv which together define your history session. You would extract these values for use with another Entrez call such as EFetch.
Read chapter 9.15 of the tutorial, "Searching for and downloading sequences using the history", to learn how to use QueryKey and WebEnv.
A full working example would then be:
from Bio import Entrez
import time
from urllib.error import HTTPError

DB = "protein"
QUERY = "peptide"

Entrez.email = 'test#gmail.com'

handle = Entrez.esearch(db=DB, term=QUERY, rettype='fasta')
record = Entrez.read(handle)
count = int(record['Count'])

handle = Entrez.esearch(db=DB, term=QUERY, retmax=count, rettype='fasta')
record = Entrez.read(handle)
id_list = record['IdList']

post_xml = Entrez.epost(DB, id=",".join(id_list))
search_results = Entrez.read(post_xml)
webenv = search_results['WebEnv']
query_key = search_results['QueryKey']

batch_size = 200

with open('peptides.fasta', 'w') as out_handle:
    for start in range(0, count, batch_size):
        end = min(count, start + batch_size)
        print(f"Going to download record {start + 1} to {end}")
        attempt = 0
        success = False
        while attempt < 3 and not success:
            attempt += 1
            try:
                fetch_handle = Entrez.efetch(db=DB, rettype='fasta',
                                             retstart=start, retmax=batch_size,
                                             webenv=webenv, query_key=query_key)
                success = True
            except HTTPError as err:
                if 500 <= err.code <= 599:
                    print(f"Received error from server {err}")
                    print(f"Attempt {attempt} of 3")
                    time.sleep(15)
                else:
                    raise
        data = fetch_handle.read()
        fetch_handle.close()
        out_handle.write(data)
The first few lines of peptides.fasta then look like this:
>QGT67293.1 RepA leader peptide Tap (plasmid) [Klebsiella pneumoniae]
MLRKLQAQFLCHSLLLCNISAGSGD
>QGT67288.1 RepA leader peptide Tap (plasmid) [Klebsiella pneumoniae]
MLRKLQAQFLCHSLLLCNISAGSGD
>QGT67085.1 thr operon leader peptide [Klebsiella pneumoniae]
MNRIGMITTIITTTITTGNGAG
>QGT67083.1 leu operon leader peptide [Klebsiella pneumoniae]
MIRTARITSLLLLNACHLRGRLLGDVQR
>QGT67062.1 peptide antibiotic transporter SbmA [Klebsiella pneumoniae]
MFKSFFPKPGPFFISAFIWSMLAVIFWQAGGGDWLLRVTGASQNVAISAARFWSLNYLVFYAYYLFCVGV
FALFWFVYCPHRWQYWSILGTSLIIFVTWFLVEVGVAINAWYAPFYDLIQSALATPHKVSINQFYQEIGV
FLGIAIIAVIIGVMNNFFVSHYVFRWRTAMNEHYMAHWQHLRHIEGAAQRVQEDTMRFASTLEDMGVSFI
NAVMTLIAFLPVLVTLSEHVPDLPIVGHLPYGLVIAAIVWSLMGTGLLAVVGIKLPGLEFKNQRVEAAYR
KELVYGEDDETRATPPTVRELFRAVRRNYFRLYFHYMYFNIARILYLQVDNVFGLFLLFPSIVAGTITLG
LMTQITNVFGQVRGSFQYLISSWTTLVELMSIYKRLRSFERELDGKPLQEAIPTLR
The LDA code generates topics, say from 0 to 5. Is there a standard way (a norm) to link the generated topics to the documents themselves? E.g. doc1 belongs to Topic0, doc5 belongs to Topic1, etc.
One way I can think of is to string-search each of the generated keywords from each topic in the documents. Is there a generic way or established practice for this?
Example LDA code: https://github.com/manhcompany/lda/blob/master/lda.py
I "collected some code", and this worked for me. Assuming you have a term frequency
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tf_vectorizer = CountVectorizer()       # parameters of your choice
tf = tf_vectorizer.fit_transform(docs)  # docs = your data
lda_model = LatentDirichletAllocation() # other parameters of your choice
lda_model.fit(tf)
Then create the document-topic matrix (the crucial step) and select the num_most_important_topic most important topics for each document:
doc_topic = lda_model.transform(tf)
num_most_important_topic = 2
dominant_topic = []
for ind_doc in range(doc_topic.shape[0]):
    dominant_topic.append(sorted(range(len(doc_topic[ind_doc])),
                                 key=lambda ind_top: doc_topic[ind_doc][ind_top],
                                 reverse=True)[:num_most_important_topic])
This should give you, for each document, a list of its num_most_important_topic most important topics. Good luck!
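To then link each document to a topic explicitly, you can simply pair the document index with the entries of dominant_topic, for example:

# Print each document together with its single most probable topic
for ind_doc, topics in enumerate(dominant_topic):
    best = topics[0]
    print(f"doc{ind_doc} -> topic {best} (weight {doc_topic[ind_doc][best]:.3f})")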
I want to get a random number using template toolkit. It doesn't have to be particularly random. How do I do it?
Hmm, you might have issues if you don't have (or cannot import) Slash::Test.
From a "vanilla" installation of TT, you can simply use the Math plugin:
USE Math;
GET Math.rand; # outputs a random number from 0 to 1
See this link in the template toolkit manual for more information on the Math plugin and the various methods.
Update: Math.rand requires a parameter. Therefore to get a random number from 0 to 1, use:
GET Math.rand(1);
From this post at Slashcode:
[slash#yaz slash]$ perl -MSlash::Test -leDisplay
[%
digits = [ 0 .. 9 ];
anumber = digits.rand _ digits.rand _ digits.rand;
anumber;
%]
^D
769
I need a well-tested regular expression (.NET style preferred), or some other simple bit of code, that will parse a USA/CA phone number into its component parts, so:
3035551234122
1-303-555-1234x122
(303)555-1234-122
1 (303) 555 -1234-122
etc...
all parse into:
AreaCode: 303
Exchange: 555
Suffix: 1234
Extension: 122
None of the answers given so far was robust enough for me, so I continued looking for something better, and I found it:
Google's library for dealing with phone numbers
I hope it is also useful for you.
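As an illustration (not from the original answer), the Python port of that library, the phonenumbers package, handles extensions in the examples above; ports for .NET exist as well. A minimal sketch:

import phonenumbers  # Python port of Google's libphonenumber

parsed = phonenumbers.parse("1-303-555-1234x122", "US")
print(parsed.country_code)     # 1
print(parsed.national_number)  # 3035551234
print(parsed.extension)        # 122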
This is the one I use:
^(?:(?:[\+]?(?<CountryCode>[\d]{1,3}(?:[ ]+|[\-.])))?[(]?(?<AreaCode>[\d]{3})[\-/)]?(?:[ ]+)?)?(?<Number>[a-zA-Z2-9][a-zA-Z0-9 \-.]{6,})(?:(?:[ ]+|[xX]|(?i:ext[\.]?)){1,2}(?<Ext>[\d]{1,5}))?$
I got it from RegexLib I believe.
This regex works exactly as you want with your examples:
Regex regexObj = new Regex(@"\(?(?<AreaCode>[0-9]{3})\)?[-. ]?(?<Exchange>[0-9]{3})[-. ]*?(?<Suffix>[0-9]{4})[-. x]?(?<Extension>[0-9]{3})");
Match matchResult = regexObj.Match("1 (303) 555 -1234-122");
// Now you have the results in the named groups
string areaCode  = matchResult.Groups["AreaCode"].Value;
string exchange  = matchResult.Groups["Exchange"].Value;
string suffix    = matchResult.Groups["Suffix"].Value;
string extension = matchResult.Groups["Extension"].Value;
Strip out anything that's not a digit first. Then all your examples reduce to:
/^1?(\d{3})(\d{3})(\d{4})(\d*)$/
To support all country codes is a little more complicated, but the same general rule applies.
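For illustration only (the answer above is language-agnostic, and the function name here is made up), a small Python sketch of that strip-then-match idea applied to the sample inputs:

import re

def parse_us_number(raw):
    digits = re.sub(r"\D", "", raw)  # strip everything that's not a digit
    m = re.match(r"^1?(\d{3})(\d{3})(\d{4})(\d*)$", digits)
    if not m:
        return None
    area, exchange, suffix, extension = m.groups()
    return {"AreaCode": area, "Exchange": exchange,
            "Suffix": suffix, "Extension": extension}

for example in ["3035551234122", "1-303-555-1234x122",
                "(303)555-1234-122", "1 (303) 555 -1234-122"]:
    print(parse_us_number(example))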
Here is a well-written library used with GeoIP for instance:
http://highway.to/geoip/numberparser.inc
Here's a method that's easier on the eyes, provided by the Z Directory (vettrasoft.com) and geared toward American phone numbers:
string_o s2, s1 = "888/872.7676";
z_fix_phone_number (s1, s2);
cout << s2.print(); // prints "+1 (888) 872-7676"
phone_number_o pho = s2;
pho.store_save();
The last line stores the number in the database table "phone_number", with column values country_code = "1", area_code = "888", exchange = "872", etc.