I'm trying to filter a Dask dataframe with groupby.
df = df.set_index('ngram')
sizes = df.groupby('ngram').size()
df = df[sizes > 15]
However, df.head(15) throws the error ValueError: Not all divisions are known, can't align partitions. Please use `set_index` to set the index. The divisions on sizes are not known:
>>> df.known_divisions
True
>>> sizes.known_divisions
False
A workaround is to call sizes.compute() or .to_csv(...) and then read the result back into Dask with dd.from_pandas or dd.read_csv; after that, sizes.known_divisions returns True. That's a notable inconvenience.
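For reference, here is a minimal sketch of that workaround (assuming dask.dataframe is imported as dd and df is already indexed by 'ngram' as above); it works, but the round trip through pandas is the inconvenient part:

sizes_pd = sizes.compute()                           # materialise the group sizes as a pandas Series
sizes_dd = dd.from_pandas(sizes_pd, npartitions=1)   # re-ingest so that divisions are known
df = df[sizes_dd > 15]                               # alignment now succeeds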
How else can this be solved? Am I using Dask wrong?
Note: there's an unanswered duplicate here.
In the common case, such as yours, the indexing series is in fact much smaller than the source dataframe you want to apply it to. In that case, it makes sense to materialise it and use simple indexing like this:
import numpy as np
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({'ngram': np.random.choice([1, 2, 3], size=1000),
                   'other': np.random.randn(1000)})  # fake data
d = dd.from_pandas(df, npartitions=3)
sizes = d.groupby('ngram').size().compute()
d = d.set_index('ngram')  # also sorts the divisions
ngrams = sizes[sizes > 300].index.tolist()  # a list of good ngrams
d.loc[ngrams].compute()
I am trying to visualise a Friedman test followed by pairwise comparisons using a boxplot with p-values.
Here is an example of how it should look:
[example graph downloaded from the internet][1]
However, since there are way too many significant comparisons in my case, my graph currently looks like this:
[my graph][2]
[1]: https://i.stack.imgur.com/DO6Vz.png
[2]: https://i.stack.imgur.com/94OXK.png
Here is the code I used to generate the graph with p-values:
library(rstatix)  # add_xy_position(), get_test_label(), get_pwc_label()
library(ggpubr)   # ggboxplot(), stat_pvalue_manual()
library(scales)   # trans_breaks(), trans_format(), math_format()

pwc_IFX_plot <- pwc_IFX %>% add_xy_position(x = "Variant")
ggboxplot(IFX_variant, x = "Variant", y = "Concentration", add = "point") +
  stat_pvalue_manual(pwc_IFX_plot, hide.ns = TRUE) +
  labs(
    subtitle = get_test_label(res.fried_IFX, detailed = TRUE),
    caption = get_pwc_label(pwc_IFX)
  ) +
  scale_y_log10(breaks = trans_breaks("log10", function(x) 10^x),
                labels = trans_format("log10", math_format(10^.x)))
I hope to only show the comparisons of each group to my control group, rather than all the intergroup comparisons.
Thank you for your time.
Any suggestions would be highly appreciated!
I have an image stack stored in an xarray DataArray with dimensions (time, x, y), on which I'd like to apply a custom function along the time axis of each pixel, such that the output is a single image with dimensions (x, y).
I have tried apply_ufunc, but the function fails, stating that I need to first load the data into RAM (i.e. it cannot use a Dask array). Ideally, I'd like to keep the DataArray backed by Dask arrays internally, as it isn't possible to load the entire stack into RAM. The exact error message is:
ValueError: apply_ufunc encountered a dask array on an argument, but handling for dask arrays has not been enabled. Either set the dask argument or load your data into memory first with .load() or .compute()
My code currently looks like this:
import numpy as np
import xarray as xr
import pandas as pd

def special_mean(x, drop_min=False):
    s = np.sum(x)
    n = len(x)
    if drop_min:
        s = s - x.min()
        n -= 1
    return s / n

times = pd.date_range('2019-01-01', '2019-01-10', name='time')
data = xr.DataArray(np.random.rand(10, 8, 8), dims=["time", "y", "x"], coords={'time': times})
data = data.chunk({'time': 10, 'x': 1, 'y': 1})
res = xr.apply_ufunc(special_mean, data, input_core_dims=[["time"]], kwargs={'drop_min': True})
If I do load the data into RAM using .compute(), I still end up with an error which states:
ValueError: applied function returned data with unexpected number of dimensions: 0 vs 2, for dimensions ('y', 'x')
I'm not entirely sure what I am missing/doing wrong.
import numpy as np
import xarray as xr
import pandas as pd

def special_mean(x, drop_min=False):
    s = np.sum(x)
    n = len(x)
    if drop_min:
        s = s - x.min()
        n -= 1
    return s / n

times = pd.date_range('2019-01-01', '2019-01-10', name='time')
data = xr.DataArray(np.random.rand(10, 8, 8), dims=["time", "y", "x"], coords={'time': times})
data = data.chunk({'time': 10, 'x': 1, 'y': 1})
res = xr.apply_ufunc(special_mean, data, input_core_dims=[["time"]],
                     kwargs={'drop_min': True}, dask='allowed', vectorize=True)
The code above using the vectorize argument should work.
My aim was also to use apply_ufunc from xarray to compute the special mean across x and y.
I liked Ales's example, though only after omitting the line related to chunking; otherwise:
ValueError: applied function returned data with unexpected number of dimensions. Received 0 dimension(s) but expected 2 dimensions with names: ('y', 'x')
Interestingly, I realized that to make the output of apply_ufunc 3D instead of 2D, you need to add output_core_dims=[["time"]] to the apply_ufunc call, as sketched below.
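For illustration, a minimal sketch of that idea, assuming the same data as in the example above and a hypothetical per-pixel function demean that keeps the time axis:

def demean(x):
    # hypothetical example function: returns an array with the same length along "time"
    return x - x.mean()

res3d = xr.apply_ufunc(demean, data,
                       input_core_dims=[["time"]],
                       output_core_dims=[["time"]],  # declare that the result still has a "time" dimension
                       vectorize=True, dask='allowed')
# res3d has dimensions (y, x, time): the output core dimension is appended last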
I have a column in my data frame which contains URL information. It has 1200+ unique values. I wanted to use text mining to generate features from these values. I used TfidfVectorizer to generate vectors and then used KMeans to identify clusters. I now want to assign these cluster labels back to my original dataframe, so that I can bin the URL information into these clusters.
Below is the code to generate the vectors and cluster labels:
import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

vectorizer = TfidfVectorizer(min_df=1, lowercase=False, ngram_range=(1, 1), use_idf=True, stop_words='english')
X = vectorizer.fit_transform(sample['lead_lead_source_modified'])
X = X.toarray()

# elbow method: compute the distortion for k = 1..9
distortions = []
K = range(1, 10)
for k in K:
    kmeanModel = KMeans(n_clusters=k).fit(X)
    distortions.append(sum(np.min(cdist(X, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])

# append cluster labels
km = KMeans(n_clusters=4, random_state=0)
km.fit_transform(X)
cluster_labels = km.labels_
cluster_labels = pd.DataFrame(cluster_labels, columns=['ClusterLabel_lead_lead_source'])
cluster_labels
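For reference, a minimal sketch (assuming matplotlib is installed) of plotting the distortions computed above so the elbow can be read off visually:

import matplotlib.pyplot as plt

plt.plot(list(K), distortions, 'bx-')  # distortion for each candidate k
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('Elbow method for choosing k')
plt.show()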
Through the elbow method, I decided on 4 clusters. I now have cluster labels, but I am not sure how to add them back to the dataframe at their respective indices. Concatenating along axis=1 creates NaNs due to indexing issues. Below is a sample of the output after concatenation:
lead_lead_source_modified ClusterLabel_lead_lead_source
0 NaN 3.0
1 NaN 0.0
2 NaN 0.0
3 ['direct', 'salesline', 'website', ''] 0.0
I want to know whether this approach is the right way to do it, and if so, how to solve this issue. If not, is there a better way to do it?
Adding the index value during the dataframe conversion solved the issue.
But I still want to know whether this is the right approach.
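For illustration, a minimal sketch of that fix (assuming sample is the original dataframe used for fit_transform above): building the label dataframe with the original index keeps the rows aligned when concatenating.

cluster_labels = pd.DataFrame(km.labels_,
                              columns=['ClusterLabel_lead_lead_source'],
                              index=sample.index)     # reuse the original index so rows line up
sample = pd.concat([sample, cluster_labels], axis=1)  # no NaNs from index misalignment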
I have a list of PMIDs.
I want to get the abstracts for both of them in a single URL hit.
pmids = [17284678, 9997]
abstract_dict = {}
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=17284678,9997&retmode=text&rettype=xml"
My requirement is to get the result in this format:
abstract_dict={"pmid1":"abstract1","pmid2":"abstract2"}
I can get the above format by querying each ID separately and updating the dictionary, but to save time I want to pass all the IDs in a single URL and extract only the abstract part of the response.
Using BioPython, you can give the joined list of Pubmed IDs to Entrez.efetch and that will perform a single URL lookup:
from Bio import Entrez

Entrez.email = 'your_email@provider.com'
pmids = [17284678, 9997]
handle = Entrez.efetch(db="pubmed", id=','.join(map(str, pmids)),
                       rettype="xml", retmode="text")
records = Entrez.read(handle)
abstracts = [pubmed_article['MedlineCitation']['Article']['Abstract']['AbstractText'][0]
             for pubmed_article in records['PubmedArticle']]
abstract_dict = dict(zip(pmids, abstracts))
This gives the following result:
{9997: 'Electron paramagnetic resonance and magnetic susceptibility studies of Chromatium flavocytochrome C552 and its diheme flavin-free subunit at temperatures below 45 degrees K are reported. The results show that in the intact protein and the subunit the two low-spin (S = 1/2) heme irons are distinguishable, giving rise to separate EPR signals. In the intact protein only, one of the heme irons exists in two different low spin environments in the pH range 5.5 to 10.5, while the other remains in a constant environment. Factors influencing the variable heme iron environment also influence flavin reactivity, indicating the existence of a mechanism for heme-flavin interaction.',
17284678: 'Eimeria tenella is an intracellular protozoan parasite that infects the intestinal tracts of domestic fowl and causes coccidiosis, a serious and sometimes lethal enteritis. Eimeria falls in the same phylum (Apicomplexa) as several human and animal parasites such as Cryptosporidium, Toxoplasma, and the malaria parasite, Plasmodium. Here we report the sequencing and analysis of the first chromosome of E. tenella, a chromosome believed to carry loci associated with drug resistance and known to differ between virulent and attenuated strains of the parasite. The chromosome--which appears to be representative of the genome--is gene-dense and rich in simple-sequence repeats, many of which appear to give rise to repetitive amino acid tracts in the predicted proteins. Most striking is the segmentation of the chromosome into repeat-rich regions peppered with transposon-like elements and telomere-like repeats, alternating with repeat-free regions. Predicted genes differ in character between the two types of segment, and the repeat-rich regions appear to be associated with strain-to-strain variation.'}
Edit:
In the case of pmids without corresponding abstracts, watch out with the fix you suggested:
abstracts = [pubmed_article['MedlineCitation']['Article']['Abstract']['AbstractText'][0]
             for pubmed_article in records['PubmedArticle']
             if 'Abstract' in pubmed_article['MedlineCitation']['Article'].keys()]
Suppose you have the list of Pubmed IDs pmids = [1, 2, 3], but pmid 2 doesn't have an abstract, so abstracts = ['abstract of 1', 'abstract of 3']
This will cause problems in the final step where I zip both lists together to make a dict:
>>> abstract_dict = dict(zip(pmids, abstracts))
>>> print(abstract_dict)
{1: 'abstract of 1',
2: 'abstract of 3'}
Note that abstracts are now out of sync with their corresponding Pubmed IDs, because you didn't filter out the pmids without abstracts and zip truncates to the shortest list.
Instead, do:
abstract_dict = {}
without_abstract = []
for pubmed_article in records['PubmedArticle']:
    pmid = int(str(pubmed_article['MedlineCitation']['PMID']))
    article = pubmed_article['MedlineCitation']['Article']
    if 'Abstract' in article:
        abstract = article['Abstract']['AbstractText'][0]
        abstract_dict[pmid] = abstract
    else:
        without_abstract.append(pmid)

print(abstract_dict)
print(without_abstract)
from Bio import Entrez
import time

Entrez.email = 'your_email@provider.com'
pmids = [29090559, 29058482, 28991880, 28984387, 28862677, 28804631, 28801717,
         28770950, 28768831, 28707064, 28701466, 28685492, 28623948, 28551248]
handle = Entrez.efetch(db="pubmed", id=','.join(map(str, pmids)),
                       rettype="xml", retmode="text")
records = Entrez.read(handle)
abstracts = [pubmed_article['MedlineCitation']['Article']['Abstract']['AbstractText'][0]
             if 'Abstract' in pubmed_article['MedlineCitation']['Article'].keys()
             else pubmed_article['MedlineCitation']['Article']['ArticleTitle']
             for pubmed_article in records['PubmedArticle']]
abstract_dict = dict(zip(pmids, abstracts))
print(abstract_dict)
I am using a multiBarHorizontalChart with nPlot() to show variance from a mean rate. I have "negative change" bars highlighted in red and positive rate-change bars in green, via grouping by a "posneg" variable. When I group by this variable, however, the years on the y axis are no longer ordered. Any idea how I could maintain the order of the years while still grouping by this variable? Personally, I think the color difference makes the graph a lot easier to interpret. Here's a reproducible example, using the data hosted on Socrata:
install.packages("RSocrata")
library(RSocrata)
library(dplyr)    # for %>%, mutate(), arrange()
library(rCharts)  # for nPlot()
url="https://opendata.socrata.com/dataset/Preliminary-Data-Data-Visulaization-Project-8-12-1/4xgc-ygke"
dfRatePer100= read.socrata(url)
dfRatePer100=subset(dfRatePer100, select=c(1,3), Year!="NA")
colnames(dfRatePer100)= c("Year", "Dollar.Rate")
dfRatePer100$Dollar.Rate= as.numeric(dfRatePer100$Dollar.Rate, 3)
dfRatePer100$mean= mean(dfRatePer100$Dollar.Rate)
dfRatePer100=dfRatePer100%>%
mutate(rateVariance= Dollar.Rate - mean) %>%
arrange(desc(Year))
dfRatePer100$PosNeg=ifelse(dfRatePer100$rateVariance>0, "Positive rate change from mean", ifelse(dfRatePer100$rateVariance<0, "Negative rate change from mean", "No change from mean"))
ratePer100 <- nPlot(rateVariance~ Year, group="PosNeg",data = dfRatePer100,type = 'multiBarHorizontalChart')
ratePer100$chart(showLegend=T)
ratePer100$chart(showControls=F)
ratePer100$chart(color = c('green','red'))
ratePer100$yAxis(axisLabel='Variance from mean rate (in dollars)')
ratePer100$yAxis(tickFormat = "#! function(d) {return d3.format('.2f')(d)} !#")
ratePer100$set(width=600)
ratePer100
I appreciate any help! Thanks.
Not an answer but a suggestion: looking at the source code, nvd3's multiBarHorizontalChart groups by the groups first and then sorts by values, so I don't think this is possible. taucharts might be a good option if rCharts is not a requirement.
library(rCharts)

df <- data.frame(
  year = as.character(2000:2012),
  value = runif(13, -1, 1)
)
df$group <- ifelse(df$value > 0, "positive", "negative")

np <- nPlot(
  value ~ year,
  group = "group",
  data = df,
  type = 'multiBarHorizontalChart'
)
np$chart(color = c('green', 'red'))
np

library(taucharts)
tauchart(df) %>%
  tau_bar("value", "year", "group", horizontal = TRUE) %>%
  tau_legend()