Ruby Case Statements with Multiple conditions - ruby-on-rails

I have a method that returns an image size based on the user's selection, and now I want to add another condition to my case statement. It is not setting the correct image size when I call the method again after my system call to pdfinfo. If the user chose STANDARD it should be 1250x1075, but it does not even run through my case statement; it goes straight to the else and sets 1728x1075.
This is what I've tried
205 def FaxCall.set_image_size(resolution, pdf_size=nil)
206   case resolution
207   when STANDARD && (pdf_size != LEGAL_PDF_SIZE)
208     image="1728x1075"
209   when FINE && pdf_size != LEGAL_PDF_SIZE
210     image="1728x2150"
211   when SUPERFINE && pdf_size != LEGAL_PDF_SIZE
212     image="1728x4300"
213   when [STANDARD, (pdf_size === LEGAL_PDF_SIZE)]
214     image="1250x1720"
215   when FINE && pdf_size == LEGAL_PDF_SIZE
216     image="1700x2800"
217   when SUPERFINE && pdf_size == LEGAL_PDF_SIZE
218     image="3400x5572"
219   else
220     image="1728x1075"
221   end
222   return image
223 end
This is where I call my method
135 def FaxCall.prepare_doc(in_file,out_file,res=STANDARD)
139   image = FaxCall.set_image_size(res)
140   res = STANDARD unless RESOLUTION_OPTIONS.values.include?(res)
145   if ext.eql?("pdf")
146     pdf_size = `pdfinfo "#{in_file}" | grep 'Page size:'`.gsub(/Page size:\s*\b/, '').chomp
147     if pdf_size == LEGAL_PDF_SIZE
148       image = FaxCall.set_image_size(res,pdf_size)

STANDARD && (pdf_size != LEGAL_PDF_SIZE), FINE && pdf_size != LEGAL_PDF_SIZE, SUPERFINE && pdf_size != LEGAL_PDF_SIZE, FINE && pdf_size == LEGAL_PDF_SIZE, and SUPERFINE && pdf_size == LEGAL_PDF_SIZE are all booleans, but resolution is a String, so they will never match.
[STANDARD, (pdf_size === LEGAL_PDF_SIZE)] is an Array. An Array will never match a String.
Therefore, none of your cases will ever match, and you will always fall into the else case.
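To see why, recall that case/when uses the === operator: "when y" matches if y === x. A minimal check in plain Ruby (independent of the code above):

```ruby
# case/when tests `y === x` for each `when y` clause.
# Booleans and Arrays compare with plain equality under ===,
# so they can never match the String held in `resolution`.
resolution = "STANDARD"

true  === resolution               # plain equality -> false
false === resolution               # false
["STANDARD", true] === resolution  # Array#=== is plain equality -> false
String === resolution              # Module#=== checks the class -> true
```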

I'd create an array of hashes...
RES_MAP = [{res: STANDARD,  legal: false, image: "1728x1075"},
           {res: FINE,      legal: false, image: "1728x2150"},
           {res: SUPERFINE, legal: false, image: "1728x4300"},
           {res: STANDARD,  legal: true,  image: "1250x1720"},
           {res: FINE,      legal: true,  image: "1700x2800"},
           {res: SUPERFINE, legal: true,  image: "3400x5572"}]
and then change FaxCall.set_image_size(resolution, pdf_size=nil) to look up the matching hash and grab the image size.
def FaxCall.set_image_size(resolution, pdf_size=nil)
  is_legal = (pdf_size == LEGAL_PDF_SIZE)
  match_res = RES_MAP.find { |r| r[:res] == resolution && r[:legal] == is_legal }
  return match_res.present? ? match_res[:image] : "1728x1075"
end
Easier to read, and easier to add both more map values and extra criteria.
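For reference, here is a runnable, self-contained sketch of that lookup in plain Ruby (no Rails). The constant values below are made-up placeholders for illustration; the real STANDARD/FINE/SUPERFINE and LEGAL_PDF_SIZE values live elsewhere in the app.

```ruby
# Placeholder constants -- NOT the app's real values.
STANDARD       = "standard"
FINE           = "fine"
SUPERFINE      = "superfine"
LEGAL_PDF_SIZE = "612 x 1008 pts (legal)"

RES_MAP = [
  { res: STANDARD,  legal: false, image: "1728x1075" },
  { res: FINE,      legal: false, image: "1728x2150" },
  { res: SUPERFINE, legal: false, image: "1728x4300" },
  { res: STANDARD,  legal: true,  image: "1250x1720" },
  { res: FINE,      legal: true,  image: "1700x2800" },
  { res: SUPERFINE, legal: true,  image: "3400x5572" }
].freeze

def set_image_size(resolution, pdf_size = nil)
  is_legal = (pdf_size == LEGAL_PDF_SIZE)
  # find returns the first matching hash, or nil if none matches
  match = RES_MAP.find { |r| r[:res] == resolution && r[:legal] == is_legal }
  match ? match[:image] : "1728x1075"
end

set_image_size(STANDARD)              # => "1728x1075"
set_image_size(FINE, LEGAL_PDF_SIZE)  # => "1700x2800"
set_image_size("bogus")               # => "1728x1075" (falls back to the default)
```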

That's because STANDARD and ELSE have the same image size.
207 when STANDARD && (pdf_size != LEGAL_PDF_SIZE)
208 image="1728x1075"
219 else
220 image="1728x1075"
See what I mean?

@Jörg has explained the problem with your code. You might consider writing your method as follows.
DEFAULT_IMAGE_SIZE = "1728x1075"

def FaxCall.set_image_size(resolution, pdf_size=nil)
  case pdf_size
  when LEGAL_PDF_SIZE
    case resolution
    when STANDARD  then "1250x1720"
    when FINE      then "1700x2800"
    when SUPERFINE then "3400x5572"
    else DEFAULT_IMAGE_SIZE
    end
  else
    case resolution
    when STANDARD  then "1728x1075"
    when FINE      then "1728x2150"
    when SUPERFINE then "1728x4300"
    else DEFAULT_IMAGE_SIZE
    end
  end
end
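A quick self-contained check of the nested-case version; the constant values here are again placeholders, not the app's real ones:

```ruby
# Placeholder constants -- NOT the app's real values.
STANDARD       = "standard"
FINE           = "fine"
SUPERFINE      = "superfine"
LEGAL_PDF_SIZE = "612 x 1008 pts (legal)"
DEFAULT_IMAGE_SIZE = "1728x1075"

def set_image_size(resolution, pdf_size = nil)
  case pdf_size
  when LEGAL_PDF_SIZE
    case resolution
    when STANDARD  then "1250x1720"
    when FINE      then "1700x2800"
    when SUPERFINE then "3400x5572"
    else DEFAULT_IMAGE_SIZE
    end
  else
    case resolution
    when STANDARD  then "1728x1075"
    when FINE      then "1728x2150"
    when SUPERFINE then "1728x4300"
    else DEFAULT_IMAGE_SIZE
    end
  end
end

set_image_size(STANDARD, LEGAL_PDF_SIZE)  # => "1250x1720"
set_image_size(STANDARD)                  # => "1728x1075"
set_image_size("unknown")                 # => "1728x1075" (default)
```

Because the last expression of a case is its value, no intermediate image variable or explicit return is needed.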

load_from_checkpoint fails after transfer learning a LightningModule

I am trying to transfer-learn a LightningModule. The relevant part of the code is this:
class DeepFilteringTransferLearning(pl.LightningModule):
    def __init__(self, chk_path=None):
        super().__init__()
        # init class members
        self.prediction = []
        self.label = []
        self.loss = MSELoss()
        # init pretrained model
        self.chk_path = chk_path
        model = DeepFiltering.load_from_checkpoint(chk_path)
        backbone = model.sequential
        layers = list(backbone.children())[:-1]
        self.groundModel = Sequential(*layers)
        # use the pretrained model the same way to regress Lshall and neq
        self.regressor = nn.Linear(64, 2)

    def forward(self, x):
        self.groundModel.eval()
        with torch.no_grad():
            groundOut = self.groundModel(x)
        yPred = self.regressor(groundOut)
        return yPred
I save my model in a different main file, the relevant part of which is:
# callbacks
callbacks = [
    ModelCheckpoint(
        dirpath="checkpoints/maxPooling16StandardizedL2RegularizedReproduceableSeeded42Ampl1ConvTransferLearned",
        save_top_k=5,
        monitor="val_loss",
    ),
]
# trainer
trainer = pl.Trainer(gpus=[1, 2], strategy="dp", max_epochs=150, logger=wandb_logger, callbacks=callbacks, precision=32, deterministic=True)
trainer.fit(model, train_dataloaders=trainDl, val_dataloaders=valDl)
Afterwards I try to load the model from the checkpoint like this:
chk_patH = "path/to/transfer_learned/model"
standardizedL2RegularizedL1 = DeepFilteringTransferLearning("path/to/model/trying/to/use/for/transfer_learning").load_from_checkpoint(chk_patH)
I got the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f)
307 try:
--> 308 f.seek(f.tell())
309 return True
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-6-13f5fd0c7b85> in <module>
1 chk_patH = "checkpoints/maxPooling16StandardizedL2RegularizedReproduceableSeeded42Ampl1/epoch=4-step=349.ckpt"
----> 2 standardizedL2RegularizedL1 = DeepFilteringTransferLearning("checkpoints/maxPooling16StandardizedL2RegularizedReproduceableSeeded42Ampl2/epoch=145-step=10219.ckpt").load_from_checkpoint(chk_patH)
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs)
154 checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
155
--> 156 model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
157 return model
158
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/core/saving.py in _load_model_state(cls, checkpoint, strict, **cls_kwargs_new)
196 _cls_kwargs = {k: v for k, v in _cls_kwargs.items() if k in cls_init_args_name}
197
--> 198 model = cls(**_cls_kwargs)
199
200 # give model a chance to load something
~/whistlerProject/gitHub/whistler/mathe/gwInspired/deepFilteringTransferLearning.py in __init__(self, chk_path)
34 #init pretrained model
35 self.chk_path = chk_path
---> 36 model = DeepFiltering.load_from_checkpoint(chk_path)
37 backbone = model.sequential
38 layers = list(backbone.children())[:-1]
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs)
132 checkpoint = pl_load(checkpoint_path, map_location=map_location)
133 else:
--> 134 checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage)
135
136 if hparams_file is not None:
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/utilities/cloud_io.py in load(path_or_url, map_location)
31 if not isinstance(path_or_url, (str, Path)):
32 # any sort of BytesIO or similiar
---> 33 return torch.load(path_or_url, map_location=map_location)
34 if str(path_or_url).startswith("http"):
35 return torch.hub.load_state_dict_from_url(str(path_or_url), map_location=map_location)
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
579 pickle_load_args['encoding'] = 'utf-8'
580
--> 581 with _open_file_like(f, 'rb') as opened_file:
582 if _is_zipfile(opened_file):
583 # The zipfile reader is going to advance the current file position.
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in _open_file_like(name_or_buffer, mode)
233 return _open_buffer_writer(name_or_buffer)
234 elif 'r' in mode:
--> 235 return _open_buffer_reader(name_or_buffer)
236 else:
237 raise RuntimeError(f"Expected 'r' or 'w' in mode but got {mode}")
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in __init__(self, buffer)
218 def __init__(self, buffer):
219 super(_open_buffer_reader, self).__init__(buffer)
--> 220 _check_seekable(buffer)
221
222
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f)
309 return True
310 except (io.UnsupportedOperation, AttributeError) as e:
--> 311 raise_err_msg(["seek", "tell"], e)
312 return False
313
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in raise_err_msg(patterns, e)
302 + " Please pre-load the data into a buffer like io.BytesIO and"
303 + " try to load from it instead.")
--> 304 raise type(e)(msg)
305 raise e
306
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
which I can't resolve.
I tried to do this according to the available tutorials on the official PyTorch Lightning page here, but I can't figure out what I'm missing.
Could somebody point me in the right direction?

Why can't I parse this pdf using pdfminer?

I wrote code that successfully parses thousands of different kinds of pdfs.
However, with this pdf I get an error. Here is a very simple test code sample that reproduces the error; my original code is too long to share here.
file = open('C:/Users/username/file.pdf', 'rb')
rsrcmgr = PDFResourceManager()
device = PDFPageAggregator(rsrcmgr, laparams=LAParams())
interpreter = PDFPageInterpreter(rsrcmgr, device)
pages = PDFPage.get_pages(file)
for page in pages:
    interpreter.process_page(page)
    layout = device.get_result()
https://filetransfer.io/data-package/dWnZbcWl#link
Here is the full error message
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_15652/28568702.py in <module>
7 for page in pages:
----> 8 interpreter.process_page(page)
9 layout = device.get_result()
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in process_page(self, page)
839 ctm = (1, 0, 0, 1, -x0, -y0)
840 self.device.begin_page(page, ctm)
--> 841 self.render_contents(page.resources, page.contents, ctm=ctm)
842 self.device.end_page(page)
843 return
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in render_contents(self, resources, streams, ctm)
852 self.init_resources(resources)
853 self.init_state(ctm)
--> 854 self.execute(list_value(streams))
855 return
856
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in execute(self, streams)
857 def execute(self, streams):
858 try:
--> 859 parser = PDFContentParser(streams)
860 except PSEOF:
861 # empty page
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in __init__(self, streams)
219 self.streams = streams
220 self.istream = 0
--> 221 PSStackParser.__init__(self, None)
222 return
223
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\psparser.py in __init__(self, fp)
513
514 def __init__(self, fp):
--> 515 PSBaseParser.__init__(self, fp)
516 self.reset()
517 return
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\psparser.py in __init__(self, fp)
167 def __init__(self, fp):
168 self.fp = fp
--> 169 self.seek(0)
170 return
171
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in seek(self, pos)
233
234 def seek(self, pos):
--> 235 self.fillfp()
236 PSStackParser.seek(self, pos)
237 return
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in fillfp(self)
229 else:
230 raise PSEOF('Unexpected EOF, file truncated?')
--> 231 self.fp = BytesIO(strm.get_data())
232 return
233
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdftypes.py in get_data(self)
290 def get_data(self):
291 if self.data is None:
--> 292 self.decode()
293 return self.data
294
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdftypes.py in decode(self)
271 raise PDFNotImplementedError('Unsupported filter: %r' % f)
272 # apply predictors
--> 273 if 'Predictor' in params:
274 pred = int_value(params['Predictor'])
275 if pred == 1:
TypeError: argument of type 'PDFObjRef' is not iterable
Can somebody try to load this into memory and if successful tell me how they did it?
Package versions used
conda 4.11.0 py39hcbf5309_0 conda-forge
ipython 7.28.0 py39h832f523_0 conda-forge
notebook 6.4.4 pyha770c72_0 conda-forge
pdfminer 20191125 pyhd8ed1ab_1 conda-forge
pillow 8.3.2 py39h916092e_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pytesseract 0.3.8 pyhd8ed1ab_0 conda-forge
python 3.9.7 h7840368_3_cpython conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
wheel 0.37.0 pyhd8ed1ab_1 conda-forge
I checked for problems with metadata but that is fine. I checked for encryption but that is also not the problem. Multipage is also no problem.
When I change
if 'Predictor' in params:
to:
if isinstance(params, dict) and 'Predictor' in params:
in file pdftypes.py (line 273), I don't get the error any more.
See: https://github.com/pdfminer/pdfminer.six/pull/471
The fix from PR 471 is not included in version 20191125.

unmapped reads using bwa

I'm trying to use BWA-MEM to align some WGS files, but I noticed something strange.
When I used samtools flagstat to check the resulting .bam files, I noticed that most reads were unmapped.
76124692 + 0 in total (QC-passed reads + QC-failed reads)
308 + 0 secondary
0 + 0 supplementary
0 + 0 duplicates
708109 + 0 mapped (0.93% : N/A)
76124384 + 0 paired in sequencing
38062192 + 0 read1
38062192 + 0 read2
0 + 0 properly paired (0.00% : N/A)
12806 + 0 with itself and mate mapped
694995 + 0 singletons (0.91% : N/A)
11012 + 0 with mate mapped to a different chr
1682 + 0 with mate mapped to a different chr (mapQ>=5)
Previously, I used SamToFastq to convert my .bam file to .fastq. When I head this file, this is shown:
@SRR1513845.100000000/1
AACGAAACGAAAAGAAAAGAAAAGAAAGAAAAAGAAAGGAACAGAAAAG
+
AAA?=>'2&)&)&&))2(-'(,.%)&31%%'6/6,(1,501046124&6
@SRR1513845.100000000/2
AATTAATTAAGCCCCGAAGGAAGCGAGAAACACTG
+
AAA?B=AB#A#A=?A>AA#?.#?8<.1;><*17?<
@SRR1513845.100000001/1
TATAACCATATAACAAATCCAAGCCCAACAGAGAAGAGAAACAAAAAGA
+
>27<#>&856;.'.&9.%>%::-5194&:+'5);;%1&'/%%999%5(8
@SRR1513845.100000001/2
TCCAACTGATATCGTAATT
+
#3<#A>:8;?:383>=3:=
@SRR1513845.100000003/1
TATCGGTCTTGTTTAG
+
=1;=6?(4>4A13?0A
@SRR1513845.100000003/2
TTCAGGTGCCTCGAAGTTGGATAAGG
+
==>>9#;?3<A5>7);)<9-<25<9?
@SRR1513845.100000004/1
GTCATTTAGCCCAAGAGAATGGC
+
BB#ABA##A?</A>>25A;#4:5
@SRR1513845.100000004/2
GGAGATCGAGTCAAATTTTATGCTAGGTAT
+
%A:<#7A##=4AA?7<A5>#;3&?>>:;:>
@SRR1513845.100000012/1
GCGTCGTTATCCAAAA
+
>A:9:?88=<=0&>>9
@SRR1513845.100000012/2
TGGAAATATTTATTACCCCCCCCCCCCCCCCCCCCCCCCCC
+
A;>#A;4;=??8=:#;-4<?632;=:67;>=):9>9%88=9
@SRR1513845.100000016/1
CGTGGAATGGGGTGTGATTTAATTATCGAATGGCGTCCGATCCAGATT
Are these quality characters (<.#;:) normal, and do they influence bwa's alignment?
Here is my bwa code:
bwa mem -M -t 38 -p hsa_GRCh38.fa SRR1513_fastqtosam.fq -o SRRR1513_aligned.bam
and my samtofastq code
java -Xmx8G -jar picard.jar SamToFastq \
I= SRR1513_fastqtosam.bam \
FASTQ= SRR1513_fastqtosam.fq \
CLIPPING_ATTRIBUTE=XT \
CLIPPING_ACTION=2 \
INTERLEAVE=true \
NON_PF=true TMP_DIR=./temp
I've been stuck on this for a few hours.
Thanks in advance!
UPDATE:
I just noticed these messages during the bwa mem alignment:
[M::mem_pestat] skip orientation FF as there are not enough pairs
[M::mem_pestat] skip orientation RF as there are not enough pairs
[M::mem_pestat] skip orientation FR as there are not enough pairs
[M::mem_pestat] skip orientation RR as there are not enough pairs

Get value from string if regex match is found

Imagine I have a list of strings like this:
Hello Word 132 132 132 GoodBye!! Should return 132132132
Hello Word 132 132 GoodBye!! Should return nil
132 132 132 GoodBye! Should return 132132132
132132132 GoodBye! Should return 132132132
1321321321GoodBye! Should return nil
132 132 1321 Should return nil
How can I check whether the phrase has 9 digits, either consecutive or separated by spaces in groups of three, and get that same number?
Thanks
You can use this regex
(\d{3})(\s?\1){2}
and remove any whitespace in the match.
DEMO. By the way, if you don't want it to match in
Some123 123 123Thing
you can use word boundaries \b(\d{3})(\s?\1){2}\b
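Assuming these are Ruby strings (the nil returns suggest it), a small sketch using the word-boundary version of the pattern:

```ruby
# \b(\d{3})(\s?\1){2}\b matches the same 3-digit group three times,
# optionally separated by single spaces, with word boundaries so a
# longer digit run (e.g. 1321321321) does not match.
def extract(str)
  m = str.match(/\b(\d{3})(\s?\1){2}\b/)
  m && m[0].delete(" ")  # strip the spaces from the matched run, or nil
end

extract("Hello Word 132 132 132 GoodBye!!")  # => "132132132"
extract("Hello Word 132 132 GoodBye!!")      # => nil
extract("132132132 GoodBye!")                # => "132132132"
extract("1321321321GoodBye!")                # => nil
extract("132 132 1321")                      # => nil
```

Note that \1 requires the repeated groups to be the same three digits, which matches the question's examples; drop the backreference and use (\s?\d{3}){2} instead if any nine digits should qualify.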

Why is code not executing on return from Future in Dart program

Could someone please explain why, in the following code (using r25630 Windows), the value of iInsertTot at line 241 is null; or, more to the point, why line 234 ("return iInsertTot;") is not executed, so that iInsertTot is null at line 241? The value of iInsertTot at lines 231/232 is an integer. While I can and probably should code this differently, I thought I would try it, because my understanding of Futures and chaining was that it would work. I have used "return" in a similar way before and it worked, but in those cases I was returning null (e.g. line 201 below).
/// The problem lines are :
233 fUpdateTotalsTable().then((_) {
234 return iInsertTot;
235 });
While running in the debugger, it appears that line 234 ("return iInsertTot;") is never actually executed. Running from the command line gives the same result.
The method being called on line 233 (fUpdateTotalsTable) is something I am just in the process of adding, and it consists basically of sync code at this stage. However, the debugger appears to go through it correctly.
I have included the method “fUpdateTotalsTable()” (line 1076) just in case that is causing a problem.
Lines 236 to 245 have just been added, however just in case that code is invalid I have commented those lines out and run with the same problem occurring.
218 /*
219 * Process Inserts
220 */
221 }).then((_) {
222 sCheckpoint = "fProcessMainInserts";
223 ogPrintLine.fPrintForce ("Processing database ......");
224 int iMaxInserts = int.parse(lsInput[I_MAX_INSERTS]);
225 print ("");
226 return fProcessMainInserts(iMaxInserts, oStopwatch);
227 /*
228 * Update the 'totals' table with the value of Inserts
229 */
230 }).then((int iReturnVal) {
231 int iInsertTot = iReturnVal;
232 sCheckpoint = "fUpdateTotalsTable (insert value)";
233 fUpdateTotalsTable().then((_) {
234 return iInsertTot;
235 });
236 /*
237 * Display totals for inserts
238 */
239 }).then((int iInsertTot) {
240 ogTotals.fPrintTotals(
241 "${iInsertTot} rows inserted - Inserts completed",
242 iInsertTot, oStopwatch.elapsedMilliseconds);
243
244 return null;
245 /*
192 /*
193 * Clear main table if selected
194 */
195 }).then((tReturnVal) {
196 if (tReturnVal)
197 ogPrintLine.fPrintForce("Random Keys Cleared");
198 sCheckpoint = "Clear Table ${S_TABLE_NAME}";
199 bool tClearTable = (lsInput[I_CLEAR_YN] == "y");
200 if (!tFirstInstance)
201 return null;
202 return fClearTable(tClearTable, S_TABLE_NAME);
203
204 /*
205 * Update control row to increment count of instances started
206 */
207 }).then((_) {
1073 /*
1074 * Update totals table with values from inserts and updates
1075 */
1076 async.Future<bool> fUpdateTotalsTable() {
1077 async.Completer<bool> oCompleter = new async.Completer<bool>();
1078
1079 String sCcyValue = ogCcy.fCcyIntToString(ogTotals.iTotAmt);
1080
1081 print ("\n********* Total = ${sCcyValue} \n");
1082
1083 oCompleter.complete(true);
1084 return oCompleter.future;
1085 }
Your function at lines 230-235 does not return anything, and that's why your iInsertTot is null at line 239. To make it work you have to add a return at line 233:
231 int iInsertTot = iReturnVal;
232 sCheckpoint = "fUpdateTotalsTable (insert value)";
233 return fUpdateTotalsTable().then((_) {
234 return iInsertTot;
235 });
