I'm using Biopython in my code and I need to extract the abstract from articles. To search for an article I'm using this function:
from Bio import Entrez

def search(query):
    Entrez.email = 'your.email@example.com'
    handle = Entrez.esearch(db='pubmed',
                            sort='relevance',
                            retmax='20',
                            retmode='xml',
                            term=query)
    results = Entrez.read(handle)
    return results
I'm looking for the simplest way to get the abstract text as a string after searching for the article (I'm aiming for just one result per search, using the PMID).
cheers
Try using metapub:
from metapub import PubMedFetcher
fetch = PubMedFetcher()
article = fetch.article_by_pmid('31326596')
article.abstract
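If you'd rather stay with Biopython only, a minimal sketch is to fetch the record with Entrez.efetch and pull the abstract out of the parsed XML. The helper names `get_abstract` and `extract_abstract` are my own, and the parsing step is split into its own function so it can be checked without a network call:

```python
def extract_abstract(record):
    """Pull the abstract text out of one parsed PubmedArticle record dict."""
    article = record['MedlineCitation']['Article']
    # Not every article has an abstract; AbstractText is a list of sections.
    sections = article.get('Abstract', {}).get('AbstractText', [])
    return ' '.join(str(part) for part in sections)

def get_abstract(pmid):
    """Fetch a single article by PMID and return its abstract as a string."""
    from Bio import Entrez  # requires biopython
    Entrez.email = 'your.email@example.com'  # NCBI asks for a contact address
    handle = Entrez.efetch(db='pubmed', id=pmid, retmode='xml')
    results = Entrez.read(handle)
    handle.close()
    return extract_abstract(results['PubmedArticle'][0])
```

For example, `get_abstract('31326596')` should return the same text as `article.abstract` above.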
Does anyone know how to do this? I'm just looking to pull price data for certain cryptocurrencies into Google Sheets.
Answer
You can use the built-in function GOOGLEFINANCE.
Examples
Bitcoin: =GOOGLEFINANCE("CURRENCY:BTCUSD") : $54,348.20
Ethereum: =GOOGLEFINANCE("CURRENCY:ETHUSD"): $2,614.33
Litecoin: =GOOGLEFINANCE("CURRENCY:LTCUSD"): $254.00
Euro: =GOOGLEFINANCE("CURRENCY:EURUSD") : $1.21
More features
I recommend taking a look at the documentation for the function, as it has different features that may be helpful to you. For example, you can create a chart inside a cell to display the currency exchange trend over a specific time range.
You can use Binance and this function
// Fetches the latest price for a pair such as 'BTCUSDT' from Binance.
function pricePair(currencyPair) {
  var url = 'https://api3.binance.com/api/v3/ticker/price?symbol=' + currencyPair;
  var response = UrlFetchApp.fetch(url);
  var json = response.getContentText();
  var data = JSON.parse(json);
  return data.price;
}
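Outside of Sheets, the same Binance endpoint can be queried from plain Python. A sketch (the names `price_pair` and `parse_price` are my own), with the JSON handling split out so it can be verified against a canned response:

```python
import json
from urllib.request import urlopen

BINANCE_URL = 'https://api3.binance.com/api/v3/ticker/price?symbol='

def parse_price(payload):
    """Extract the price as a float from Binance's JSON response text."""
    return float(json.loads(payload)['price'])

def price_pair(currency_pair):
    """Fetch the latest price for a pair such as 'BTCUSDT'."""
    with urlopen(BINANCE_URL + currency_pair) as response:
        return parse_price(response.read().decode('utf-8'))
```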
with the parameter currencyPair set to BTCUSDT, ETHUSDT, DOGEUSDT, ...
To add to the previous answers
Google Finance is quite limited, with only a few of the biggest coins being tracked.
Alternatively, use a proper API that gives you an easy-to-use endpoint, for example:
=IMPORTDATA("https://cryptoprices.cc/BTC/")
I'm trying praw for parsing this Reddit page, and I found that this code doesn't preserve the comment order:
sm = reddit.submission(url="https://www.reddit.com/r/AskReddit/comments/1irtkq/taxi_drivers_whats_the_deepest_secret_youve/")
sm.comment_sort = 'top'
sm.comments.replace_more(limit=0)
allComments = sm.comments.list()
for i in allComments[1].replies:
    print(i.body[:10])
Is it possible to fix that and get the same order for all trees?
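One local workaround, independent of what ordering the API returns, is to sort the fetched comment forest yourself. A sketch that builds plain sorted lists rather than mutating PRAW objects, assuming only that each comment has a `score` attribute and a `replies` collection (which PRAW comments do):

```python
def sorted_tree(comments):
    """Return (comment, sorted_replies) pairs ordered by score, recursively.

    Highest-scored comments come first at every level of the tree,
    so the same deterministic 'top' order holds for all subtrees.
    """
    ordered = sorted(comments, key=lambda c: c.score, reverse=True)
    return [(c, sorted_tree(c.replies)) for c in ordered]
```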
I need to translate a message key using a HashMap with the standard Grails internationalization mechanism.
I receive an Enum and a map with the bindings, which are going to be replaced in the text.
The Enum indicates which key is to be retrieved; the bindings hold the values to substitute into the translation.
messageSource.getMessage("mail.layout.subject.${templateName}",ARGS,"",locale)
The problem is that I need to pass the map to the args as an array, but I don't know the order of the args.
My question is whether there is any way to create a translation key like:
mail.layout.subject.ENUM1=Blablabl {name} bablablabl {age}
Instead of
mail.layout.subject.ENUM1=Blablabl {0} bablablabl {1}
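For comparison, named placeholders filled from a map are exactly what Python's str.format does; a short illustration of the desired behavior (the key text here is the question's own invented example):

```python
# The translation text with named placeholders instead of {0}, {1}.
template = 'Blablabl {name} bablablabl {age}'

# The bindings map; order of keys no longer matters.
bindings = {'name': 'David', 'age': 30}

subject = template.format(**bindings)
print(subject)  # Blablabl David bablablabl 30
```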
In the end I did it with brute force. Maybe it's not the best answer, but I couldn't find a better one.
Basically, I get the translation with the message resources and then work on it, finding my custom expressions.
def messageSource = grailsApplication.getMainContext().getBean('messageSource')
def subject = messageSource.getMessage("mail.layout.subject.NOTIFICATION",null,"",locale)
An example subject resource:
mail.layout.subject.NOTIFICATION=The user {friend.name} is friend of {user}
Example bindings:
def bindings = [friend:[name:"Jhon",surname:"Smith"],user:"David"]
With these statements I replace my expressions with the values from the bindings:
Pattern pattern = Pattern.compile("\\{[^}]+\\}")
def res = subject.replaceAll(pattern, {
    def expression = it[1..it.size()-2] // removes the surrounding braces
    def fields = expression.split("\\.")
    def res = bindings
    fields.each {
        // walk one level deeper into the bindings for each dotted field
        res = res."${it}"
    }
    return res
})
After the process the subject becomes: "The user Jhon is friend of David"
The example uses a HashMap of HashMaps, but it also works with objects, because Grails/Groovy treats objects like HashMaps and vice versa.
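The same brute-force idea, replacing each {dotted.path} expression by walking nested maps, can be sketched outside Grails as well. A Python version of the regex-and-walk approach above (the function name `resolve_bindings` is my own):

```python
import re

# Matches a {dotted.path} placeholder; group 1 is the path without braces.
PLACEHOLDER = re.compile(r'\{([^}]+)\}')

def resolve_bindings(text, bindings):
    """Replace each {dotted.path} with the value found by walking nested dicts."""
    def lookup(match):
        value = bindings
        for field in match.group(1).split('.'):
            value = value[field]  # one level deeper per dotted field
        return str(value)
    return PLACEHOLDER.sub(lookup, text)
```

With the bindings from the answer, `resolve_bindings('The user {friend.name} is friend of {user}', bindings)` yields "The user Jhon is friend of David".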
This is much cleaner. :)
import groovy.text.SimpleTemplateEngine

def text = 'Dear "$firstname $lastname",\nSo nice to meet you in ${city.name}.\nSee you in ${month},\n${signed}'
def binding = ["firstname": "Sam", "lastname": "Pullara", "city": ["name": "San Francisco", "id": "28"], "month": "December", "signed": "Groovy-Dev"]
def engine = new SimpleTemplateEngine()
def template = engine.createTemplate(text).make(binding)
println template.toString()
I'm new to this Python/Biopython stuff, so I'm struggling to work out why the following code, pretty much lifted straight out of the Biopython Cookbook, isn't doing what I'd expect.
I'd have thought it would end up with the interpreter displaying two lists containing the same numbers, but all I get is one list and then a message saying TypeError: 'generator' object is not subscriptable.
I'm guessing something is going wrong with the Medline.parse step and the result of the efetch isn't being processed in a way that allows subsequent iteration to extract the PMID values. Or the efetch isn't returning anything.
Any pointers as to what I'm doing wrong?
Thanks
from Bio import Medline
from Bio import Entrez

Entrez.email = 'A.N.Other@example.com'
handle = Entrez.esearch(db="pubmed", term="biopython")
record = Entrez.read(handle)
print(record['IdList'])
items = record['IdList']
handle2 = Entrez.efetch(db="pubmed", id=items, rettype="medline", retmode="text")
records = Medline.parse(handle2)
for r in records:
    print(records['PMID'])
You're trying to print records['PMID'], but records is a generator. I think you meant print(r['PMID']), which will print the 'PMID' entry of the current record dictionary on each iteration. This is confirmed by the example given in the Bio.Medline.parse() documentation.
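The distinction can be reproduced without Biopython at all, since any generator behaves this way; a minimal sketch with plain dicts standing in for Medline records:

```python
# Plain dicts standing in for parsed Medline records.
records = (d for d in [{'PMID': '111'}, {'PMID': '222'}])

collected = []
for r in records:
    # r is one record dict; subscripting records itself would raise
    # "TypeError: 'generator' object is not subscriptable".
    collected.append(r['PMID'])

print(collected)  # ['111', '222']
```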
After looking at the documentation I still can't understand how it all ties together. What I'm trying to accomplish is simple: given a URL, return the text contents of that URL.
For example:
import praw
r = praw.Reddit(user_agent='my_cool_app')
post = "http://www.reddit.com/r/askscience/comments/10kp2h\
/lots_of_people_dont_feel_identified_or_find/"
comment = "http://www.reddit.com/r/askscience/comments/10kp2h\
/lots_of_people_dont_feel_identified_or_find/c6ec6hf"
Establishing which is a comment and which is a post can be done with a regex, but if there's a better way I'll use that.
So my question is: what is the best way to determine the nature of a Reddit URL, and how do I get the contents of that URL?
What I tried so far:
post=praw.objects.Submission.get_info(r, url).selftext
#returns the self.text of a post regardless if that url is a permalink to a comment
comment_text = praw.objects.?????() # how to do this ?
Thanks in advance.
import praw
r = praw.Reddit('<USERAGENT>')
comment_url = ('http://www.reddit.com/r/askscience/comments/10kp2h'
'/lots_of_people_dont_feel_identified_or_find/c6ec6hf')
comment = r.get_submission(comment_url).comments[0]
print(comment.body)
My responses here and here should provide additional useful information related to your question.
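As for telling a comment permalink from a submission permalink without a regex, the URL path alone is enough: a submission permalink ends at the title segment, while a comment permalink carries one extra comment-id segment. A sketch (the helper name `classify_reddit_url` is mine) using only the standard library:

```python
from urllib.parse import urlparse

def classify_reddit_url(url):
    """Return 'comment' or 'submission' based on the path segments.

    Paths look like /r/<subreddit>/comments/<id>/<title>/ for a post,
    with one extra <comment_id> segment for a comment permalink.
    """
    parts = [p for p in urlparse(url).path.split('/') if p]
    return 'comment' if len(parts) >= 6 else 'submission'
```

Against the two URLs from the question, this classifies the first as a submission and the second as a comment.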