How to load a Seurat object into WGCNA tutorial format
As far as I can find, there is only one tutorial about loading Seurat objects into WGCNA (https://ucdavis-bioinformatics-training.github.io/2019-single-cell-RNA-sequencing-Workshop-UCD_UCSF/scrnaseq_analysis/scRNA_Workshop-PART6.html). I am really new to programming, so it's probably just my inexperience, but I am not sure how to get my Seurat object into a format that works with WGCNA's tutorials (https://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/Tutorials/).
Here is what I have tried thus far:
This tries to replicate datExpr and datTraits from part I.1:
library(WGCNA)
library(Seurat)
#example Seurat object -----------------------------------------------
ERlist <- list(c("CPB1", "RP11-53O19.1", "TFF1", "MB", "ANKRD30B",
"LINC00173", "DSCAM-AS1", "IGHG1", "SERPINA5", "ESR1",
"ILRP2", "IGLC3", "CA12", "RP11-64B16.2", "SLC7A2",
"AFF3", "IGFBP4", "GSTM3", "ANKRD30A", "GSTT1", "GSTM1",
"AC026806.2", "C19ORF33", "STC2", "HSPB8", "RPL29P11",
"FBP1", "AGR3", "TCEAL1", "CYP4B1", "SYT1", "COX6C",
"MT1E", "SYTL2", "THSD4", "IFI6", "K1AA1467", "SLC39A6",
"ABCD3", "SERPINA3", "DEGS2", "ERLIN2", "HEBP1", "BCL2",
"TCEAL3", "PPT1", "SLC7A8", "RP11-96D1.10", "H4C8",
"PI15", "PLPP5", "PLAAT4", "GALNT6", "IL6ST", "MYC",
"BST2", "RP11-658F2.8", "MRPS30", "MAPT", "AMFR", "TCEAL4",
"MED13L", "ISG15", "NDUFC2", "TIMP3", "RP13-39P12.3", "PARD68"))
tnbclist <- list(c("FABP7", "TSPAN8", "CYP4Z1", "HOXA10", "CLDN1",
"TMSB15A", "C10ORF10", "TRPV6", "HOXA9", "ATP13A4",
"GLYATL2", "RP11-48O20.4", "DYRK3", "MUCL1", "ID4", "FGFR2",
"SHOX2", "Z83851.1", "CD82", "COL6A1", "KRT23", "GCHFR",
"PRICKLE1", "GCNT2", "KHDRBS3", "SIPA1L2", "LMO4", "TFAP2B",
"SLC43A3", "FURIN", "ELF5", "C1ORF116", "ADD3", "EFNA3",
"EFCAB4A", "LTF", "LRRC31", "ARL4C", "GPNMB", "VIM",
"SDR16C5", "RHOV", "PXDC1", "MALL", "YAP1", "A2ML1",
"RP1-257A7.5", "RP11-353N4.6", "ZBTB18", "CTD-2314B22.3", "GALNT3",
"BCL11A", "CXADR", "SSFA2", "ADM", "GUCY1A3", "GSTP1",
"ADCK3", "SLC25A37", "SFRP1", "PRNP", "DEGS1", "RP11-110G21.2",
"AL589743.1", "ATF3", "SIVA1", "TACSTD2", "HEBP2"))
genes = c(unlist(c(ERlist,tnbclist)))
mat = matrix(rnbinom(500*length(genes),mu=500,size=1),ncol=500)
rownames(mat) = genes
colnames(mat) = paste0("cell",1:500)
sobj = CreateSeuratObject(mat)
sobj = NormalizeData(sobj)
sobj$ClusterName = factor(sample(0:1,ncol(sobj),replace=TRUE))
sobj$Patient = paste0("Patient", 1:500)
sobj = AddModuleScore(object = sobj, features = tnbclist,
name = "TNBC_List",ctrl=5)
sobj = AddModuleScore(object = sobj, features = ERlist,
name = "ER_List",ctrl=5)
#WGCNA -----------------------------------------------------------------
sobjwgcna <- sobj
sobjwgcna <- FindVariableFeatures(sobjwgcna, selection.method = "vst", nfeatures = 2000,
verbose = FALSE, assay = "RNA")
options(stringsAsFactors = F)
sobjwgcnamat <- GetAssayData(sobjwgcna)
datExpr <- t(sobjwgcnamat)[,VariableFeatures(sobjwgcna)]
datTraits <- sobjwgcna@meta.data
datTraits = subset(datTraits, select = -c(nCount_RNA, nFeature_RNA))
I then copy-paste the code as written in the WGCNA I.2a tutorial (https://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/Tutorials/FemaleLiver-02-networkConstr-auto.pdf), and that all works until I get to this line in the I.3 tutorial (https://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/Tutorials/FemaleLiver-03-relateModsToExt.pdf):
MEList = moduleEigengenes(datExpr, colors = moduleColors)
Error in t.default(expr[, restrict1]) : argument is not a matrix
I tried converting both moduleColors and datExpr into a matrix with as.matrix(), but the error still persists.
Hopefully this makes sense, and thanks for reading!
Update: running as.matrix(datExpr) immediately after
datExpr <- t(sobjwgcnamat)[,VariableFeatures(sobjwgcna)]
worked. I had been trying the conversion right before
MEList = moduleEigengenes(datExpr, colors = moduleColors)
and that didn't work. Seems simple, but apparently the order matters.
Related
Why do all the emission means (mu) of my HMM in Pyro converge to the same number?
I'm trying to create a Gaussian HMM model in Pyro to infer the parameters of a very simple Markov sequence. However, my model fails to infer the parameters, and something weird happens during the training process. Using the same sequence, hmmlearn successfully infers the true parameters. Full code can be accessed here: https://colab.research.google.com/drive/1u_4J-dg9Y1CDLwByJ6FL4oMWMFUVnVNd#scrollTo=ZJ4PzdTUBgJi
My model is modified from the example here: https://github.com/pyro-ppl/pyro/blob/dev/examples/hmm.py
I manually created a first-order Markov sequence with 3 states; the true means are [-10, 0, 10] and the sigmas are [1, 2, 1]. Here is my model:

def model(observations, num_state):
    assert not torch._C._get_tracing_state()
    with poutine.mask(mask = True):
        p_transition = pyro.sample("p_transition",
                                   dist.Dirichlet((1 / num_state) * torch.ones(num_state, num_state)).to_event(1))
        p_init = pyro.sample("p_init",
                             dist.Dirichlet((1 / num_state) * torch.ones(num_state)))
        p_mu = pyro.param(name = "p_mu",
                          init_tensor = torch.randn(num_state),
                          constraint = constraints.real)
        p_tau = pyro.param(name = "p_tau",
                           init_tensor = torch.ones(num_state),
                           constraint = constraints.positive)
        current_state = pyro.sample("x_0", dist.Categorical(p_init),
                                    infer = {"enumerate" : "parallel"})
        for t in pyro.markov(range(1, len(observations))):
            current_state = pyro.sample("x_{}".format(t),
                                        dist.Categorical(Vindex(p_transition)[current_state, :]),
                                        infer = {"enumerate" : "parallel"})
            pyro.sample("y_{}".format(t),
                        dist.Normal(Vindex(p_mu)[current_state], Vindex(p_tau)[current_state]),
                        obs = observations[t])

The inference is set up as:

device = torch.device("cuda:0")
obs = torch.tensor(obs)
obs = obs.to(device)
torch.set_default_tensor_type("torch.cuda.FloatTensor")
guide = AutoDelta(poutine.block(model, expose_fn = lambda msg : msg["name"].startswith("p_")))
Elbo = Trace_ELBO
elbo = Elbo(max_plate_nesting = 1)
optim = Adam({"lr": 0.001})
svi = SVI(model, guide, optim, elbo)

As training goes on, the ELBO decreases steadily (plot omitted), but the three state means converge to the same value. I have tried putting the for loop of my model into a pyro.plate, and switching pyro.param to pyro.sample and vice versa, but nothing worked for my model.
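For reference, the data-generating process described in the question (three hidden states, emission means [-10, 0, 10], sigmas [1, 2, 1]) can be sketched in plain Python. The transition matrix, initial distribution, sequence length, and seed below are illustrative assumptions, since the question does not specify them:

```python
import random

def simulate_gaussian_hmm(n_steps, means, sigmas, transition, init, seed=0):
    """Simulate a first-order Gaussian HMM: the hidden state follows the
    transition matrix, and each observation is drawn from
    Normal(means[state], sigmas[state])."""
    rng = random.Random(seed)
    n = len(means)
    state = rng.choices(range(n), weights=init)[0]
    states, obs = [], []
    for _ in range(n_steps):
        states.append(state)
        obs.append(rng.gauss(means[state], sigmas[state]))
        state = rng.choices(range(n), weights=transition[state])[0]
    return states, obs

# Means and sigmas are the true values from the question; everything
# else here is an assumed choice for illustration.
states, obs = simulate_gaussian_hmm(
    n_steps=500,
    means=[-10.0, 0.0, 10.0],
    sigmas=[1.0, 2.0, 1.0],
    transition=[[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],
    init=[1 / 3, 1 / 3, 1 / 3],
)
```

A sequence like this is what both hmmlearn and the Pyro model above would be fit to; with means this well separated, the collapse of the fitted means to a single value points at the inference setup rather than the data.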
I have not tried this model, but I think it should be possible to solve the problem by modifying the model in the following way, replacing the point-estimate pyro.param sites with pyro.sample sites that have priors:

def model(observations, num_state):
    assert not torch._C._get_tracing_state()
    with poutine.mask(mask = True):
        p_transition = pyro.sample("p_transition",
                                   dist.Dirichlet((1 / num_state) * torch.ones(num_state, num_state)).to_event(1))
        p_init = pyro.sample("p_init",
                             dist.Dirichlet((1 / num_state) * torch.ones(num_state)))
        p_mu = pyro.sample("p_mu",
                           dist.Normal(torch.zeros(num_state), torch.ones(num_state)).to_event(1))
        # HalfCauchy takes a positive scale parameter, so ones, not zeros.
        p_tau = pyro.sample("p_tau",
                            dist.HalfCauchy(torch.ones(num_state)).to_event(1))
        current_state = pyro.sample("x_0", dist.Categorical(p_init),
                                    infer = {"enumerate" : "parallel"})
        for t in pyro.markov(range(1, len(observations))):
            current_state = pyro.sample("x_{}".format(t),
                                        dist.Categorical(Vindex(p_transition)[current_state, :]),
                                        infer = {"enumerate" : "parallel"})
            pyro.sample("y_{}".format(t),
                        dist.Normal(Vindex(p_mu)[current_state], Vindex(p_tau)[current_state]),
                        obs = observations[t])

The model would then be trained using MCMC:

# MCMC
hmc_kernel = NUTS(model, target_accept_prob = 0.9, max_tree_depth = 7)
mcmc = MCMC(hmc_kernel, num_samples = 1000, warmup_steps = 100, num_chains = 1)
mcmc.run(obs, 3)  # the model takes (observations, num_state)

The results could then be analysed using mcmc.get_samples().
Creating a learner in mlr3: Error in sprintf(msg, ...) : too few arguments
I want to create a learner in mlr3 using the distRforest package. My code:

library(mlr3extralearners)
create_learner(
  pkg = ".",
  classname = "distRforest",
  algorithm = "regression tree",
  type = "regr",
  key = "distRforest",
  package = "distRforest",
  caller = "rpart",
  feature_types = c("logical", "integer", "numeric", "factor", "ordered"),
  predict_types = c("response"),
  properties = c("importance", "missings", "multiclass", "selected_features", "twoclass", "weights"),
  references = FALSE,
  gh_name = "CL"
)

gives the following error:

Error in sprintf(msg, ...) : too few arguments

In fact, replicating the code in the tutorial https://mlr3book.mlr-org.com/extending-learners.html throws the same error. Any ideas? Thanks a lot.
Thanks for your interest in extending the mlr3 universe! A couple of things: firstly, the example in the book works fine for me; secondly, your example cannot work because you are including classif properties ("multiclass", "twoclass") for a regr learner. As I am unable to reproduce your error, it's hard for me to debug what's going wrong. It would be helpful if you could run the following:

reprex::reprex({
  create_learner(
    pkg = ".",
    classname = "Rpart",
    algorithm = "decision tree",
    type = "classif",
    key = "rpartddf",
    package = "rpart",
    caller = "rpart",
    feature_types = c("logical", "integer", "numeric", "factor", "ordered"),
    predict_types = c("response", "prob"),
    properties = c("importance", "missings", "multiclass", "selected_features", "twoclass", "weights"),
    references = TRUE,
    gh_name = "CL"
  )
}, si = TRUE)

If you're still getting an error and the output is too long to print here, then head over to the GitHub repository and open an issue there.
How to use the prepare_analogy_questions and check_analogy_accuracy functions in the text2vec package?
The following code:

library(text2vec)
text8_file = "text8"
if (!file.exists(text8_file)) {
  download.file("http://mattmahoney.net/dc/text8.zip", "text8.zip")
  unzip("text8.zip", files = "text8")
}
wiki = readLines(text8_file, n = 1, warn = FALSE)
# Create iterator over tokens
tokens <- space_tokenizer(wiki)
# Create vocabulary. Terms will be unigrams (simple words).
it = itoken(tokens, progressbar = FALSE)
vocab <- create_vocabulary(it)
vocab <- prune_vocabulary(vocab, term_count_min = 5L)
# Use our filtered vocabulary
vectorizer <- vocab_vectorizer(vocab)
# use window of 5 for context words
tcm <- create_tcm(it, vectorizer, skip_grams_window = 5L)
RcppParallel::setThreadOptions(numThreads = 4)
glove_model = GloVe$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10, learning_rate = .25)
word_vectors_main = glove_model$fit_transform(tcm, n_iter = 20)
word_vectors_context = glove_model$components
word_vectors = word_vectors_main + t(word_vectors_context)

causes an error:

qlst <- prepare_analogy_questions("questions-words.txt", rownames(word_vectors))
Error in (function (fmt, ...) : invalid format '%d'; use format %s for character objects

The file questions-words.txt is from the word2vec sources: https://github.com/nicholas-leonard/word2vec/blob/master/questions-words.txt
This was a small bug in information-message formatting (after the introduction of futile.logger). I've just fixed it and pushed to GitHub. You can install the updated version of the package with:

devtools::install_github("dselivanov/text2vec")
How to display a 3-level wavelet decomposition?
I want to display a wavelet decomposition at 3 levels. Can anyone give me a Matlab function to display it?

[cA cH cV cD] = dwt2(a, waveletname);
out = [cA cH; cV cD];
figure; imshow(out, []);

That only works for the first level. Actually, I want a square-mode representation like the one wavemenu in Matlab produces (example image of the decomposition view omitted). I am fairly new to this. Thanks.
You should use the function wavedec2(Image, numberOfLevels, 'wname') with the number of levels that you need. For more information, look at http://www.mathworks.com/help/wavelet/ref/wavedec2.html

Example code with db1:

clear all
im = imread('cameraman.tif');
[c,s] = wavedec2(im,3,'db1');
A1 = appcoef2(c,s,'db1',1);
[H1,V1,D1] = detcoef2('all',c,s,1);
A2 = appcoef2(c,s,'db1',2);
[H2,V2,D2] = detcoef2('all',c,s,2);
A3 = appcoef2(c,s,'db1',3);
[H3,V3,D3] = detcoef2('all',c,s,3);
V1img = wcodemat(V1,255,'mat',1);
H1img = wcodemat(H1,255,'mat',1);
D1img = wcodemat(D1,255,'mat',1);
A1img = wcodemat(A1,255,'mat',1);
V2img = wcodemat(V2,255,'mat',1);
H2img = wcodemat(H2,255,'mat',1);
D2img = wcodemat(D2,255,'mat',1);
A2img = wcodemat(A2,255,'mat',1);
V3img = wcodemat(V3,255,'mat',1);
H3img = wcodemat(H3,255,'mat',1);
D3img = wcodemat(D3,255,'mat',1);
A3img = wcodemat(A3,255,'mat',1);
mat3 = [A3img,V3img;H3img,D3img];
mat2 = [mat3,V2img;H2img,D2img];
mat1 = [mat2,V1img;H1img,D1img];
imshow(uint8(mat1))

(Image of the final result omitted.)
Is there an easier way to modify a value in a subsubsub record field in Erlang?
So I've got a fairly deep hierarchy of record definitions:

-record(cat, {name = '_', attitude = '_'}).
-record(mat, {color = '_', fabric = '_'}).
-record(packet, {cat = '_', mat = '_'}).
-record(stamped_packet, {packet = '_', timestamp = '_'}).
-record(enchilada, {stamped_packet = '_', snarky_comment = ""}).

And now I've got an enchilada, and I want to make a new one that's just like it except for the value of one of the sub-sub-sub-records. Here's what I've been doing:

update_attitude(Ench0, NewState) when is_record(Ench0, enchilada) ->
    %% Pick the old one apart.
    #enchilada{stamped_packet = SP0} = Ench0,
    #stamped_packet{packet = PK0} = SP0,
    #packet{cat = Tude0} = PK0,
    %% Build up the new one.
    Tude1 = Tude0#cat{attitude = NewState},
    PK1 = PK0#packet{cat = Tude1},
    SP1 = SP0#stamped_packet{packet = PK1},
    %% Thank God that's over.
    Ench0#enchilada{stamped_packet = SP1}.

Just thinking about this is painful. Is there a better way?
As Hynek suggests, you can elide the temporary variables and do:

update_attitude(E = #enchilada{stamped_packet =
                        (SP = #stamped_packet{packet =
                                  (P = #packet{cat = C})})},
                NewAttitude) ->
    E#enchilada{stamped_packet =
        SP#stamped_packet{packet =
            P#packet{cat = C#cat{attitude = NewAttitude}}}}.

Yariv Sadan got frustrated with the same issue and wrote Recless, a type-inferring parse transform for records which would allow you to write:

-compile({parse_transform, recless}).
update_attitude(Enchilada = #enchilada{}, Attitude) ->
    Enchilada.stamped_packet.packet.cat.attitude = Attitude.
Try this:

update_attitude(E = #enchilada{
                        stamped_packet = (SP = #stamped_packet{
                                                    packet = (P = #packet{cat = C})})},
                NewState) ->
    E#enchilada{
        stamped_packet = SP#stamped_packet{
            packet = P#packet{
                cat = C#cat{attitude = NewState}}}}.

Anyway, records are not the most powerful part of Erlang.