I want to train a semantic segmentation model to recognize small objects. For this I use DeepLabV3. But when loading the config, I get a recursion error that restarts the environment. Am I setting up the config correctly, or is there a problem in my YAML file?
cfg = get_cfg()
cfg.load_yaml_with_base('/content/Base3-DeepLabV3-OS16-Semantic.yaml')
cfg.MODEL.WEIGHTS = '/content/model_final_0dff1b.pkl'
Error logs
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-6-83854a40b4a3> in <module>()
1 cfg = get_cfg()
----> 2 cfg.load_yaml_with_base('/content/Base3-DeepLabV3-OS16-Semantic.yaml')
3 cfg.MODEL.WEIGHTS = '/content/model_final_0dff1b.pkl'
/usr/local/lib/python3.7/dist-packages/fvcore/common/config.py in load_yaml_with_base(cls, filename, allow_unsafe)
92 # the path to base cfg is relative to the config file itself.
93 base_cfg_file = os.path.join(os.path.dirname(filename), base_cfg_file)
---> 94 base_cfg = cls.load_yaml_with_base(base_cfg_file, allow_unsafe=allow_unsafe)
95 del cfg[BASE_KEY]
96
/usr/local/lib/python3.7/dist-packages/fvcore/common/config.py in load_yaml_with_base(cls, filename, allow_unsafe)
92 # the path to base cfg is relative to the config file itself.
93 base_cfg_file = os.path.join(os.path.dirname(filename), base_cfg_file)
---> 94 base_cfg = cls.load_yaml_with_base(base_cfg_file, allow_unsafe=allow_unsafe)
95 del cfg[BASE_KEY]
96
... last 2 frames repeated, from the frame below ...
/usr/local/lib/python3.7/dist-packages/fvcore/common/config.py in load_yaml_with_base(cls, filename, allow_unsafe)
92 # the path to base cfg is relative to the config file itself.
93 base_cfg_file = os.path.join(os.path.dirname(filename), base_cfg_file)
---> 94 base_cfg = cls.load_yaml_with_base(base_cfg_file, allow_unsafe=allow_unsafe)
95 del cfg[BASE_KEY]
96
RecursionError: maximum recursion depth exceeded
My .yaml config
_BASE_: Base3-DeepLabV3-OS16-Semantic.yaml
MODEL:
  WEIGHTS: "/content/deeplab_v3_R_103_os16_mg124_poly_90k_bs16.yaml"
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]
  BACKBONE:
    NAME: "build_resnet_deeplab_backbone"
  RESNETS:
    DEPTH: 101
    NORM: "SyncBN"
    RES5_MULTI_GRID: [1, 2, 4]
    STEM_TYPE: "deeplab"
    STEM_OUT_CHANNELS: 128
    STRIDE_IN_1X1: False
  SEM_SEG_HEAD:
    NAME: "DeepLabV3Head"
    NORM: "SyncBN"
INPUT:
  FORMAT: "RGB"
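One thing worth noting from the traceback (an observation, not a confirmed diagnosis): the file being loaded, /content/Base3-DeepLabV3-OS16-Semantic.yaml, has the same name as its own _BASE_ entry, so load_yaml_with_base keeps re-opening the same file until Python's recursion limit is hit. A minimal sketch of keeping the base and the derived config as two separate files (the derived file name below is made up for illustration, and the import path for the DeepLab project may differ depending on how detectron2 was installed):

# /content/my_deeplab_small_objects.yaml  (hypothetical derived config holding the
# overrides shown above, with _BASE_ still pointing at the untouched base file
# /content/Base3-DeepLabV3-OS16-Semantic.yaml kept alongside it)

from detectron2.config import get_cfg
from detectron2.projects.deeplab import add_deeplab_config  # import path may vary per install

cfg = get_cfg()
add_deeplab_config(cfg)  # registers DeepLab-specific keys such as RES5_MULTI_GRID and STEM_TYPE
cfg.merge_from_file('/content/my_deeplab_small_objects.yaml')
cfg.MODEL.WEIGHTS = '/content/model_final_0dff1b.pkl'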
Related
I would like to add random numbers to a dask dataframe, using the column intensity of the original dataframe to set the limits of the random numbers for each row. The code works with pandas and numpy.random, but not with dask and dask.array.
import dask.array as da
import dask.dataframe as dd
from dask.distributed import Client
client = Client()
fns = [list-of-filenames]
df = dd.read_parquet(fns)
# dataframe has a column called intensity of type float
# and no missing values
df['separation_dimension_1'] = da.random.uniform(size=N, low=-noise_level/df.intensity, high=noise_level/df.intensity)
The error is:
ValueError: shape mismatch: objects cannot be broadcast to a single shape. Mismatch is between arg 0 with shape (0,) and arg 1 with shape (33276691,).
It seems the syntax of numpy.random.uniform is a bit different from dask.array.random.uniform?
Full traceback
Cell In[21], line 7
5 df['mz_'] = df.mz * 1000000000
6 df['rt_'] = df.scan_time*10
----> 7 df['separation_dimension_1'] = da.random.uniform(size=N, low=-noise_level/df.intensity, high=noise_level/df.intensity)
8 #df['separation_dimension_2'] = da.random.uniform(size=N, low=-noise_level/df.intensity, high=noise_level/df.intensity)
9 #df['separation_dimension_3'] = da.random.uniform(size=N, low=-noise_level/df.intensity, high=noise_level/df.intensity)
11 df = df[df.intensity > 1e5][['rt_', 'mz_', 'logint']]
File ~/miniconda3/envs/dask/lib/python3.9/site-packages/dask/array/random.py:465, in _make_api.<locals>.wrapper(*args, **kwargs)
462 if backend not in _cached_random_states:
463 # Cache the default RandomState object for this backend
464 _cached_random_states[backend] = RandomState()
--> 465 return getattr(
466 _cached_random_states[backend],
467 attr,
468 )(*args, **kwargs)
File ~/miniconda3/envs/dask/lib/python3.9/site-packages/dask/array/random.py:423, in RandomState.uniform(self, low, high, size, chunks, **kwargs)
421 #derived_from(np.random.RandomState, skipblocks=1)
422 def uniform(self, low=0.0, high=1.0, size=None, chunks="auto", **kwargs):
--> 423 return self._wrap("uniform", low, high, size=size, chunks=chunks, **kwargs)
File ~/miniconda3/envs/dask/lib/python3.9/site-packages/dask/array/random.py:170, in RandomState._wrap(self, funcname, size, chunks, extra_chunks, *args, **kwargs)
165 kwrg[k] = (getitem, lookup[k], slc)
166 vals.append(
167 (_apply_random, self._RandomState, funcname, seed, size, arg, kwrg)
168 )
--> 170 meta = _apply_random(
171 self._RandomState,
172 funcname,
173 seed,
174 (0,) * len(size),
175 small_args,
176 small_kwargs,
177 )
179 dsk.update(dict(zip(keys, vals)))
181 graph = HighLevelGraph.from_collections(name, dsk, dependencies=dependencies)
File ~/miniconda3/envs/dask/lib/python3.9/site-packages/dask/array/random.py:453, in _apply_random(RandomState, funcname, state_data, size, args, kwargs)
451 state = RandomState(state_data)
452 func = getattr(state, funcname)
--> 453 return func(*args, size=size, **kwargs)
File mtrand.pyx:1134, in numpy.random.mtrand.RandomState.uniform()
File _common.pyx:600, in numpy.random._common.cont()
File _common.pyx:517, in numpy.random._common.cont_broadcast_2()
File __init__.pxd:741, in numpy.PyArray_MultiIterNew3()
ValueError: shape mismatch: objects cannot be broadcast to a single shape. Mismatch is between arg 0 with shape (0,) and arg 1 with shape (6249365,).
As is often the case, you will be able to do this using map_partitions, which applies the operation you are after to each constituent (real) pandas dataframe:
import numpy as np

def op(df):
    # df is one pandas partition here; size must match the partition length
    df['separation_dimension_1'] = np.random.uniform(size=len(df), low=-noise_level/df.intensity, high=noise_level/df.intensity)
    return df

df2 = df.map_partitions(op)
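Because the function runs once per real pandas partition, sizing the random draw with len(df) keeps it consistent no matter how many partitions the dataframe has. A self-contained toy sketch of the same pattern (the column name, noise_level constant, and sizes are made up for illustration):

import numpy as np
import pandas as pd
import dask.dataframe as dd

noise_level = 0.1
pdf = pd.DataFrame({'intensity': np.random.uniform(1.0, 10.0, size=1000)})
ddf = dd.from_pandas(pdf, npartitions=4)

def add_noise(df):
    # runs on one pandas partition at a time
    df = df.copy()
    df['separation_dimension_1'] = np.random.uniform(
        size=len(df), low=-noise_level / df.intensity, high=noise_level / df.intensity)
    return df

ddf2 = ddf.map_partitions(add_noise)
print(ddf2.head())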
I've been receiving the following error after running the following code:
transformer = preprocessing.FunctionTransformer(func=np.log1p, inverse_func=np.expm1)
scaler = preprocessing.StandardScaler()
X1_t = transformer.fit_transform(X_t)
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In [103], line 3
1 transformer = preprocessing.FunctionTransformer(func=np.log1p, inverse_func=np.expm1)
2 scaler = preprocessing.StandardScaler()
----> 3 X1_t = transformer.fit_transform(X_t)
4 X2_t = scaler.fit_transform(X1_t)
5 print(X2_t.shape)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sklearn/base.py:867, in TransformerMixin.fit_transform(self, X, y, **fit_params)
863 # non-optimized default implementation; override when a better
864 # method is possible for a given clustering algorithm
865 if y is None:
866 # fit method of arity 1 (unsupervised transformation)
--> 867 return self.fit(X, **fit_params).transform(X)
868 else:
869 # fit method of arity 2 (supervised transformation)
870 return self.fit(X, y, **fit_params).transform(X)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sklearn/preprocessing/_function_transformer.py:195, in FunctionTransformer.fit(self, X, y)
193 X = self._check_input(X, reset=True)
194 if self.check_inverse and not (self.func is None or self.inverse_func is None):
--> 195 self._check_inverse_transform(X)
196 return self
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/sklearn/preprocessing/_function_transformer.py:160, in FunctionTransformer._check_inverse_transform(self, X)
157 idx_selected = slice(None, None, max(1, X.shape[0] // 100))
158 X_round_trip = self.inverse_transform(self.transform(X[idx_selected]))
--> 160 if not np.issubdtype(X.dtype, np.number):
161 raise ValueError(
162 "'check_inverse' is only supported when all the elements in `X` is"
163 " numerical."
164 )
166 if not _allclose_dense_sparse(X[idx_selected], X_round_trip):
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pandas/core/generic.py:5575, in NDFrame.__getattr__(self, name)
5568 if (
5569 name not in self._internal_names_set
5570 and name not in self._metadata
5571 and name not in self._accessors
5572 and self._info_axis._can_hold_identifiers_and_holds_name(name)
5573 ):
5574 return self[name]
-> 5575 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'dtype'
I was able to run this code before, but I had to reinstall Jupyter Notebook, and after reinstalling it and downloading all the libraries I started getting this issue. My hypothesis is that it is related to the combination of versions of Jupyter and libraries (pandas, sklearn), but I don't remember which versions I previously had.
Any idea?
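One observation from the traceback, plus a hedged workaround sketch (this assumes X_t is a pandas DataFrame of numeric columns, and it does not explain the version change): _check_inverse_transform calls X.dtype, which exists on a NumPy array but not on a DataFrame, so either handing the transformer an array or disabling the round-trip check avoids that code path.

import numpy as np
from sklearn import preprocessing

# Option A (sketch): pass a NumPy array instead of the DataFrame.
transformer = preprocessing.FunctionTransformer(func=np.log1p, inverse_func=np.expm1)
X1_t = transformer.fit_transform(X_t.to_numpy())

# Option B (sketch): skip the inverse round-trip check that touches X.dtype.
transformer = preprocessing.FunctionTransformer(
    func=np.log1p, inverse_func=np.expm1, check_inverse=False)
X1_t = transformer.fit_transform(X_t)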
I am trying to transfer-learn a LightningModule. The relevant part of the code is this:
import torch
from torch import nn
from torch.nn import MSELoss, Sequential
import pytorch_lightning as pl

class DeepFilteringTransferLearning(pl.LightningModule):
    def __init__(self, chk_path = None):
        super().__init__()
        # init class members
        self.prediction = []
        self.label = []
        self.loss = MSELoss()
        # init pretrained model
        self.chk_path = chk_path
        model = DeepFiltering.load_from_checkpoint(chk_path)
        backbone = model.sequential
        layers = list(backbone.children())[:-1]
        self.groundModel = Sequential(*layers)
        # use the pretrained model the same way to regress Lshall and neq
        self.regressor = nn.Linear(64, 2)

    def forward(self, x):
        self.groundModel.eval()
        with torch.no_grad():
            groundOut = self.groundModel(x)
        yPred = self.regressor(groundOut)
        return yPred
I save my model in a different, main file, the relevant part of which is:
# callbacks
callbacks = [
    ModelCheckpoint(
        dirpath="checkpoints/maxPooling16StandardizedL2RegularizedReproduceableSeeded42Ampl1ConvTransferLearned",
        save_top_k=5,
        monitor="val_loss",
    ),
]

# trainer
trainer = pl.Trainer(gpus=[1, 2], strategy="dp", max_epochs=150, logger=wandb_logger,
                     callbacks=callbacks, precision=32, deterministic=True)
trainer.fit(model, train_dataloaders=trainDl, val_dataloaders=valDl)
After that, I try to load the model from the checkpoint like this:
chk_patH = "path/to/transfer_learned/model"
standardizedL2RegularizedL1 = DeepFilteringTransferLearning("path/to/model/trying/to/use/for/transfer_learning").load_from_checkpoint(chk_patH)
I got the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f)
307 try:
--> 308 f.seek(f.tell())
309 return True
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-6-13f5fd0c7b85> in <module>
1 chk_patH = "checkpoints/maxPooling16StandardizedL2RegularizedReproduceableSeeded42Ampl1/epoch=4-step=349.ckpt"
----> 2 standardizedL2RegularizedL1 = DeepFilteringTransferLearning("checkpoints/maxPooling16StandardizedL2RegularizedReproduceableSeeded42Ampl2/epoch=145-step=10219.ckpt").load_from_checkpoint(chk_patH)
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs)
154 checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
155
--> 156 model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
157 return model
158
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/core/saving.py in _load_model_state(cls, checkpoint, strict, **cls_kwargs_new)
196 _cls_kwargs = {k: v for k, v in _cls_kwargs.items() if k in cls_init_args_name}
197
--> 198 model = cls(**_cls_kwargs)
199
200 # give model a chance to load something
~/whistlerProject/gitHub/whistler/mathe/gwInspired/deepFilteringTransferLearning.py in __init__(self, chk_path)
34 #init pretrained model
35 self.chk_path = chk_path
---> 36 model = DeepFiltering.load_from_checkpoint(chk_path)
37 backbone = model.sequential
38 layers = list(backbone.children())[:-1]
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs)
132 checkpoint = pl_load(checkpoint_path, map_location=map_location)
133 else:
--> 134 checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage)
135
136 if hparams_file is not None:
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/pytorch_lightning/utilities/cloud_io.py in load(path_or_url, map_location)
31 if not isinstance(path_or_url, (str, Path)):
32 # any sort of BytesIO or similiar
---> 33 return torch.load(path_or_url, map_location=map_location)
34 if str(path_or_url).startswith("http"):
35 return torch.hub.load_state_dict_from_url(str(path_or_url), map_location=map_location)
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
579 pickle_load_args['encoding'] = 'utf-8'
580
--> 581 with _open_file_like(f, 'rb') as opened_file:
582 if _is_zipfile(opened_file):
583 # The zipfile reader is going to advance the current file position.
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in _open_file_like(name_or_buffer, mode)
233 return _open_buffer_writer(name_or_buffer)
234 elif 'r' in mode:
--> 235 return _open_buffer_reader(name_or_buffer)
236 else:
237 raise RuntimeError(f"Expected 'r' or 'w' in mode but got {mode}")
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in __init__(self, buffer)
218 def __init__(self, buffer):
219 super(_open_buffer_reader, self).__init__(buffer)
--> 220 _check_seekable(buffer)
221
222
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f)
309 return True
310 except (io.UnsupportedOperation, AttributeError) as e:
--> 311 raise_err_msg(["seek", "tell"], e)
312 return False
313
~/anaconda3/envs/skimageTrial/lib/python3.6/site-packages/torch/serialization.py in raise_err_msg(patterns, e)
302 + " Please pre-load the data into a buffer like io.BytesIO and"
303 + " try to load from it instead.")
--> 304 raise type(e)(msg)
305 raise e
306
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
which I can't resolve.
I tried to do this according to the available tutorials on the official PyTorch Lightning page here. I can't figure out what I'm missing.
Could somebody point me in the right direction?
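The traceback gives a hint about the mechanics (an observation, not a confirmed diagnosis): load_from_checkpoint re-instantiates the class via cls(**_cls_kwargs), and since no hyperparameters are stored in the checkpoint, chk_path falls back to its default of None; DeepFiltering.load_from_checkpoint(None) inside __init__ then hands None to torch.load, which is exactly the "'NoneType' object has no attribute 'seek'" error. A rough sketch of two common ways around this (both are assumptions about the setup, not verified fixes):

# Sketch only; the paths are the placeholders from the question.
# Option A: call load_from_checkpoint on the class and forward the extra
# __init__ argument as a keyword; Lightning passes unknown kwargs to __init__.
standardizedL2RegularizedL1 = DeepFilteringTransferLearning.load_from_checkpoint(
    chk_patH,
    chk_path="path/to/model/trying/to/use/for/transfer_learning",
)

# Option B: record the __init__ arguments in the checkpoint during training,
# so Lightning can rebuild the module by itself later.
class DeepFilteringTransferLearning(pl.LightningModule):
    def __init__(self, chk_path=None):
        super().__init__()
        self.save_hyperparameters()  # stores chk_path under the hparams key
        ...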
I wrote code that successfully parses thousands of different kinds of PDFs.
However, with this PDF I get an error. Here is a very simple test code sample that reproduces the error; my original code is too long to share here.
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import PDFPageAggregator
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage

file = open('C:/Users/username/file.pdf', 'rb')
rsrcmgr = PDFResourceManager()
device = PDFPageAggregator(rsrcmgr, laparams=LAParams())
interpreter = PDFPageInterpreter(rsrcmgr, device)
pages = PDFPage.get_pages(file)
for page in pages:
    interpreter.process_page(page)
    layout = device.get_result()
The PDF in question is here: https://filetransfer.io/data-package/dWnZbcWl#link
Here is the full error message
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_15652/28568702.py in <module>
7 for page in pages:
----> 8 interpreter.process_page(page)
9 layout = device.get_result()
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in process_page(self, page)
839 ctm = (1, 0, 0, 1, -x0, -y0)
840 self.device.begin_page(page, ctm)
--> 841 self.render_contents(page.resources, page.contents, ctm=ctm)
842 self.device.end_page(page)
843 return
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in render_contents(self, resources, streams, ctm)
852 self.init_resources(resources)
853 self.init_state(ctm)
--> 854 self.execute(list_value(streams))
855 return
856
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in execute(self, streams)
857 def execute(self, streams):
858 try:
--> 859 parser = PDFContentParser(streams)
860 except PSEOF:
861 # empty page
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in __init__(self, streams)
219 self.streams = streams
220 self.istream = 0
--> 221 PSStackParser.__init__(self, None)
222 return
223
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\psparser.py in __init__(self, fp)
513
514 def __init__(self, fp):
--> 515 PSBaseParser.__init__(self, fp)
516 self.reset()
517 return
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\psparser.py in __init__(self, fp)
167 def __init__(self, fp):
168 self.fp = fp
--> 169 self.seek(0)
170 return
171
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in seek(self, pos)
233
234 def seek(self, pos):
--> 235 self.fillfp()
236 PSStackParser.seek(self, pos)
237 return
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdfinterp.py in fillfp(self)
229 else:
230 raise PSEOF('Unexpected EOF, file truncated?')
--> 231 self.fp = BytesIO(strm.get_data())
232 return
233
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdftypes.py in get_data(self)
290 def get_data(self):
291 if self.data is None:
--> 292 self.decode()
293 return self.data
294
C:\ProgramData\miniforge3\lib\site-packages\pdfminer\pdftypes.py in decode(self)
271 raise PDFNotImplementedError('Unsupported filter: %r' % f)
272 # apply predictors
--> 273 if 'Predictor' in params:
274 pred = int_value(params['Predictor'])
275 if pred == 1:
TypeError: argument of type 'PDFObjRef' is not iterable
Can somebody try to load this into memory and, if successful, tell me how they did it?
Package versions used
conda 4.11.0 py39hcbf5309_0 conda-forge
ipython 7.28.0 py39h832f523_0 conda-forge
notebook 6.4.4 pyha770c72_0 conda-forge
pdfminer 20191125 pyhd8ed1ab_1 conda-forge
pillow 8.3.2 py39h916092e_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pytesseract 0.3.8 pyhd8ed1ab_0 conda-forge
python 3.9.7 h7840368_3_cpython conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
wheel 0.37.0 pyhd8ed1ab_1 conda-forge
I checked for problems with the metadata, but that is fine. I checked for encryption, but that is also not the problem. Multiple pages are not a problem either.
When I change
if 'Predictor' in params:
to:
if isinstance(params, dict) and 'Predictor' in params:
in the file pdftypes.py (line 273), I don't get the error any more.
See: https://github.com/pdfminer/pdfminer.six/pull/471
The fix from PR 471 is not included in version 20191125.
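For reference, this is roughly what the patched block in pdfminer's pdftypes.py looks like with that guard in place (reassembled from the traceback above; exact line numbers vary between releases):

# pdfminer/pdftypes.py, inside PDFStream.decode(); `params` can arrive as a
# PDFObjRef rather than a dict for this particular PDF, which is why the
# original membership test raised TypeError.
if isinstance(params, dict) and 'Predictor' in params:
    pred = int_value(params['Predictor'])
    if pred == 1:
        ...  # remaining predictor handling, unchanged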
I am testing a ResNet-34 trained model using PyTorch and fastai on a Linux system with the latest Anaconda3. To run it as a batch job, I commented out the GUI-related lines. It ran for a few hours, then stopped in the validation step; the error message is below.
...
100%|█████████▉| 452/453 [1:07:07<00:08, 8.75s/it, loss=1.23]
Validation: 0%| | 0/40 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "./resnet34_pretrained_PNG_nogui_2.py", line 279, in <module>
    learner.fit(lr,1,callbacks=[f1_callback])
  File "/project/6000192/jemmyhu/resnet_png/fastai/learner.py", line 302, in fit
    return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
  File "/project/6000192/jemmyhu/resnet_png/fastai/learner.py", line 249, in fit_gen
    swa_eval_freq=swa_eval_freq, **kwargs)
  File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 162, in fit
    vals = validate(model_stepper, cur_data.val_dl, metrics, epoch, seq_first=seq_first, validate_skip = validate_skip)
  File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 241, in validate
    res.append([to_np(f(datafy(preds), datafy(y))) for f in metrics])
  File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 241, in <listcomp>
    res.append([to_np(f(datafy(preds), datafy(y))) for f in metrics])
  File "./resnet34_pretrained_PNG_nogui_2.py", line 237, in __call__
    self.TP += (preds*targs).float().sum(dim=0)
TypeError: add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray
The link for the original code is
https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb
Lines 279 and 237 in my copy are shown below:
226 class F1:
227     __name__ = 'F1 macro'
228     def __init__(self, n=28):
229         self.n = n
230         self.TP = np.zeros(self.n)
231         self.FP = np.zeros(self.n)
232         self.FN = np.zeros(self.n)
233
234     def __call__(self, preds, targs, th=0.0):
235         preds = (preds > th).int()
236         targs = targs.int()
237         self.TP += (preds*targs).float().sum(dim=0)
238         self.FP += (preds > targs).float().sum(dim=0)
239         self.FN += (preds < targs).float().sum(dim=0)
240         score = (2.0*self.TP/(2.0*self.TP + self.FP + self.FN + 1e-6)).mean()
241         return score
276 lr = 0.5e-2
277 with warnings.catch_warnings():
278     warnings.simplefilter("ignore")
279     learner.fit(lr, 1, callbacks=[f1_callback])
Does anyone have a clue about this issue?
Many thanks,
Jemmy
OK, the error seems to be specific to the latest pytorch-1.0.0; when I downgrade to pytorch-0.4.1, the code seems to work (it gets past the failing lines at this point). I still have no idea how to make the code work with pytorch-1.0.0.
I have had the same issue with this Kaggle kernel. My workarounds are the following:
1st option: in the F1 __call__ method, convert preds and targs from PyTorch tensors to numpy arrays;
2nd option: initialise TP/FP/FN with PyTorch tensors instead of numpy arrays, i.e. replace np.zeros(self.n) with torch.zeros(1, self.n).
Basically, the main idea is that all variables should be of the same type.
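For illustration, a minimal sketch of the second option, keeping everything as torch tensors (an adaptation, not the kernel author's exact code; it assumes preds and targs arrive as tensors, and the .cpu() calls are only needed if they live on the GPU):

import torch

class F1:
    __name__ = 'F1 macro'
    def __init__(self, n=28):
        self.n = n
        # accumulate in torch tensors instead of numpy arrays
        self.TP = torch.zeros(1, self.n)
        self.FP = torch.zeros(1, self.n)
        self.FN = torch.zeros(1, self.n)

    def __call__(self, preds, targs, th=0.0):
        preds = (preds > th).int()
        targs = targs.int()
        # everything stays a tensor, so += no longer mixes tensor and ndarray
        self.TP += (preds * targs).float().sum(dim=0).cpu()
        self.FP += (preds > targs).float().sum(dim=0).cpu()
        self.FN += (preds < targs).float().sum(dim=0).cpu()
        score = (2.0 * self.TP / (2.0 * self.TP + self.FP + self.FN + 1e-6)).mean()
        return score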
Check your input data parameter.
Make sure it is 123123, not [123123].