Course import issue from ironwood to juniper - openedx

I'm trying to import a course with multiple videos from an Ironwood server to a newly deployed Juniper server. The import throws the following errors in the console:
[2020-12-11 09:29:02,847: INFO/Worker-1] VAL: Video created with id [54454916-47e9-4769-8b41-06062d0b7e8c] and status [external]
[2020-12-11 09:29:02,860: ERROR/Worker-1] [VAL] Transcript save failed to storage for video_id "54454916-47e9-4769-8b41-06062d0b7e8c" language code "en"
Traceback (most recent call last):
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/edxval/models.py", line 489, in create
video_transcript.transcript.save(file_name, transcript_content)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/django/db/models/fields/files.py", line 87, in save
self.name = self.storage.save(name, content, max_length=self.field.max_length)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/django/core/files/storage.py", line 52, in save
return self._save(name, content)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/storages/backends/s3boto3.py", line 495, in _save
self._save_content(obj, content, parameters=parameters)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/storages/backends/s3boto3.py", line 510, in _save_content
obj.upload_fileobj(content, ExtraArgs=put_parameters)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/boto3/s3/inject.py", line 513, in object_upload_fileobj
ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/boto3/s3/inject.py", line 431, in upload_fileobj
return future.result()
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/futures.py", line 73, in result
return self._coordinator.result()
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/futures.py", line 233, in result
raise self._exception
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/s3transfer/upload.py", line 692, in _main
client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/client.py", line 317, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/client.py", line 596, in _make_api_call
request_signer=self._request_signer, context=request_context)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/hooks.py", line 242, in emit_until_response
responses = self._emit(event_name, kwargs, stop_on_response=True)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/handlers.py", line 209, in conditionally_calculate_md5
calculate_md5(params, **kwargs)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/handlers.py", line 187, in calculate_md5
binary_md5 = _calculate_md5_from_file(body)
File "/edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/botocore/handlers.py", line 201, in _calculate_md5_from_file
md5.update(chunk)
TypeError: Unicode-objects must be encoded before hashing
The course is being imported from an Ironwood-deployed server to a Juniper-deployed server.

I went through all the logs and was able to track the error to edxval, which fails while uploading the transcript file to the S3 bucket. So I checked the edxval release versions; version 1.4.3 is the one that fixed the S3 bucket upload issue. I updated to it and that fixed my issue.
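For background, the "Unicode-objects must be encoded before hashing" line comes from Python 3's hashlib, which only accepts bytes: botocore's MD5 calculation crashes because edxval handed it the transcript content as a str. A minimal sketch of the difference (the transcript string here is made up; encoding to bytes before upload is, as far as I can tell, what the edxval fix amounts to):

```python
import hashlib

# Made-up transcript content; any str exhibits the same behaviour.
transcript = "WEBVTT\n\n00:00.000 --> 00:02.000\nHello"

try:
    hashlib.md5(transcript)  # str input raises the TypeError from the log
except TypeError as exc:
    print("str input fails:", exc)

# Encoding to bytes first avoids the error entirely.
digest = hashlib.md5(transcript.encode("utf-8")).hexdigest()
print("bytes input works:", digest)
```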

Related

Gensim: error loading pretrained vectors No such file or directory: 'word2vec.kv.vectors.npy'

I am trying to load pretrained word2vec embeddings stored as a gensim KeyedVectors file 'word2vec.kv':
pretrained = KeyedVectors.load(args.pretrained, mmap='r')
where args.pretrained is "/ptembs/word2vec.kv",
and I am getting this error:
File "main.py", line 60, in main
pretrained = KeyedVectors.load(args.pretrained, mmap = 'r')
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\models\keyedvectors.py", line 1553, in load
model = super(WordEmbeddingsKeyedVectors, cls).load(fname_or_handle, **kwargs)
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\models\keyedvectors.py", line 228, in load
return super(BaseKeyedVectors, cls).load(fname_or_handle, **kwargs)
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\utils.py", line 436, in load
obj._load_specials(fname, mmap, compress, subname)
File "C:\Users\ASUS\anaconda3\lib\site-packages\gensim\utils.py", line 478, in _load_specials
val = np.load(subname(fname, attrib), mmap_mode=mmap)
File "C:\Users\ASUS\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 417, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'ptembs/word2vec.kv.vectors.npy'
I don't understand why it needs a word2vec.kv.vectors.npy file; I don't have it.
Any idea how to solve this problem?
gensim version 3.8.3.
I tried it on 4.1.2 as well; same error.
Where did you get the file 'word2vec.kv'?
If loading that file triggers an error mentioning a second file by name, then that second file should have been created alongside 'word2vec.kv' when it was first saved with a .save() operation.
That other file needs to be kept alongside 'word2vec.kv' in order for 'word2vec.kv' to be .load()ed again in the future.
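To avoid losing the sidecar when moving embeddings between machines, copy the .kv file together with any sibling .npy arrays. This is a hypothetical helper (copy_keyedvectors is not part of gensim), assuming gensim's naming scheme of <name>.kv.<attribute>.npy for the externalized arrays:

```python
import glob
import shutil
from pathlib import Path

def copy_keyedvectors(kv_path, dest_dir):
    """Copy a gensim .kv file together with any sidecar .npy arrays.

    gensim's save() writes large arrays (e.g. the vectors matrix) to
    separate files named like 'word2vec.kv.vectors.npy'; load() fails
    with FileNotFoundError if those siblings are missing. This sketch
    copies the whole family in one go.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in [str(kv_path), *sorted(glob.glob(str(kv_path) + ".*.npy"))]:
        shutil.copy2(src, dest / Path(src).name)
        copied.append(Path(src).name)
    return copied
```

Usage: copy_keyedvectors("/ptembs/word2vec.kv", "/new/ptembs") returns the list of files it copied, so you can confirm the .npy sidecar travelled with the .kv file.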

Getting error "AttributeError: module 'ibapi.contract' has no attribute 'UnderComp'"

I've tried for some time now to get the following code to work, but I keep getting this error message. What am I doing wrong?
from ib_insync import IB, Stock

ib = IB()
ib.connect("127.0.0.1", 7497, clientId=1)
stock = Stock("AMD", "SMART", "USD")
bars = ib.reqHistoricalData(
    stock,
    endDateTime="",
    durationStr="30 D",
    barSizeSetting="1 hour",
    whatToShow="MIDPOINT",
    useRTH=True,
)
print(bars)
Error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:/Users/Ejer/Desktop/TWS/option.py", line 1, in <module>
from ib_insync import IB
File "c:\Users\Ejer\Miniconda3\lib\site-packages\ib_insync\__init__.py", line 21, in <module>
from .objects import *
File "c:\Users\Ejer\Miniconda3\lib\site-packages\ib_insync\objects.py", line 155, in <module>
class UnderComp(Object):
File "c:\Users\Ejer\Miniconda3\lib\site-packages\ib_insync\objects.py", line 156, in UnderComp
defaults = ibapi.contract.UnderComp().__dict__
AttributeError: module 'ibapi.contract' has no attribute 'UnderComp'
It seems you are using an old ib-insync package (ver 0.9.11). Try installing the latest version, ib-insync 0.9.64; that worked for me.
Also, follow the group for more information: https://groups.io/g/insync

How to fine tune niftynet pre trained model for custom data

I want to use a NiftyNet pre-trained segmentation model for segmenting custom data. I downloaded the pre-trained weights and modified the model_dir path to point to them.
However, when I run
python3 net_segment.py train -c /home/Container_data/config/promise12_demo_train_config.ini
I am getting the error below.
Caused by op 'save/Assign_17', defined at:
File "net_segment.py", line 8, in <module>
sys.exit(main())
File "/home/NiftyNet/niftynet/__init__.py", line 142, in main
app_driver.run(app_driver.app)
File "/home/NiftyNet/niftynet/engine/application_driver.py", line 197, in run
SESS_STARTED.send(application, iter_msg=None)
File "/usr/local/lib/python3.5/dist-packages/blinker/base.py", line 267, in send
for receiver in self.receivers_for(sender)]
File "/usr/local/lib/python3.5/dist-packages/blinker/base.py", line 267, in <listcomp>
for receiver in self.receivers_for(sender)]
File "/home/NiftyNet/niftynet/engine/handler_model.py", line 109, in restore_model
var_list=to_restore, save_relative_paths=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1102, in __init__
self.build()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [3,3,61,256] rhs shape= [3,3,3,61,9]
[[node save/Assign_17 (defined at /home/NiftyNet/niftynet/engine/handler_model.py:109) = Assign[T=DT_FLOAT, _class=["loc:#DenseVNet/conv/conv_/w"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](DenseVNet/conv/conv_/w, save/RestoreV2/_35)
https://github.com/tensorflow/models/issues/5390
The issue above suggests adding:
--initialize_last_layer = False
--last_layers_contain_logits_only = False
Can someone help me get rid of this error?
It seems the problem is with your last layer. When you use a pretrained model on a new task, you usually need to replace the last layer to fit your new requirements.
To do that, modify your config file to restore all variables except the last layer:
vars_to_restore = ^((?!(last_layer_name)).)*$
and then set num_classes to suit your new segmentation problem.
You can check transfer learning docs here: https://niftynet.readthedocs.io/en/dev/transfer_learning.html
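As a sketch, the relevant pieces of the .ini config might look like the following (section names follow the NiftyNet demo configs, and last_layer_name is a placeholder you must replace with the actual variable scope of your final layer):

```ini
[TRAINING]
; Restore every checkpoint variable except those matching the last layer.
vars_to_restore = ^((?!(last_layer_name)).)*$

[SEGMENTATION]
; Number of labels in the new segmentation task.
num_classes = 2
```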

Timeout error uploading 2 GIG file (resumable upload in Appengine task Python)

I am getting a timeout error when trying to upload a 2 GB file using the resumable upload URL returned by the DocList API; please see the log extract below. I thought that with resumable upload in an App Engine task, 2 GB would not be an issue. Any ideas?
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/handler.py", line 564, in post
new_entry = uploader.UploadFile('/feeds/upload/create-session/default/private/full?convert=false', entry=entry)
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/gdata/client.py", line 1033, in upload_file
start_byte, self.file_handle.read(self.chunk_size))
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/gdata/client.py", line 987, in upload_chunk
desired_class=self.desired_class)
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/gdata/client.py", line 265, in request
uri=uri, auth_token=auth_token, http_request=http_request, **kwargs)
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/atom/client.py", line 117, in request
return self.http_client.request(http_request)
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/atom/http_core.py", line 420, in request
http_request.headers, http_request._body_parts)
File "/base/data/home/apps/s~gofiledrop/31.358777816137338904/atom/http_core.py", line 497, in _http_request
return connection.getresponse()
File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 206, in getresponse
deadline=self.timeout)

Pylons mako templates: debugging "Internal Server Error" / "NoneType" errors

I frequently encounter this error in Mako templates using Pylons 0.9.7:
AttributeError: 'NoneType' object has no attribute 'decode'
Usually I've entered a variable name that doesn't exist, tried to use a linbebreak within a code line, or some other minor error. Definitely my fault.
This results in a 'Internal Server Error' in the browser, same thing in the debug view, and a stack trace that starts in HTTPServer and ends with the AttributeError in mako/exceptions.py.
Is there anything I can do to make this easier to debug, like find out the line that exception is being generated on within the Mako template? Thanks!
I am not absolutely sure it's the same issue, but as far as I remember this used to happen a lot with AJAX loads of page fragments; in that case you don't get anything more useful than this message.
However if you try to load the address of the AJAX request itself in your browser (replacing post parameters with get parameters if needed), you will get a "normal" debug page.
In my case, it turned out there was a division-by-zero error in my template. That was yielding an internal server error and a very unhelpful stack trace in the console output.
I know it sounds like I shouldn't have that logic in a template anyway, but in this case I think it makes sense. Here's the stack trace I get with division by zero:
Exception happened during processing of request from ('127.0.0.1', 50681)
Traceback (most recent call last):
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 1068, in process_request_in_thread
self.finish_request(request, client_address)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 615, in __init__
self.handle()
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 442, in handle
BaseHTTPRequestHandler.handle(self)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 437, in handle_one_request
self.wsgi_execute()
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/httpserver.py", line 287, in wsgi_execute
self.wsgi_start_response)
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/cascade.py", line 130, in __call__
return self.apps[-1](environ, start_response)
File "/Library/Python/2.6/site-packages/Paste-1.7.4-py2.6.egg/paste/registry.py", line 375, in __call__
app_iter = self.application(environ, start_response)
File "/Library/Python/2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/middleware.py", line 201, in __call__
self.app, environ, catch_exc_info=True)
File "/Library/Python/2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/util.py", line 94, in call_wsgi_application
app_iter = application(environ, start_response)
File "/Library/Python/2.6/site-packages/WebError-0.10.2-py2.6.egg/weberror/evalexception.py", line 235, in __call__
return self.respond(environ, start_response)
File "/Library/Python/2.6/site-packages/WebError-0.10.2-py2.6.egg/weberror/evalexception.py", line 483, in respond
return debug_info.content()
File "/Library/Python/2.6/site-packages/WebError-0.10.2-py2.6.egg/weberror/evalexception.py", line 545, in content
result = tmpl_formatter(self.exc_value)
File "/Library/Python/2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/error.py", line 43, in mako_html_data
css=False)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/template.py", line 189, in render
return runtime._render(self, self.callable_, args, data)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/runtime.py", line 403, in _render
_render_context(template, callable_, context, *args, **_kwargs_for_callable(callable_, data))
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/runtime.py", line 434, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/runtime.py", line 457, in _exec_template
callable_(context, *args, **kwargs)
File "memory:0x1040470d0", line 54, in render_body
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/exceptions.py", line 88, in __init__
self.records = self._init(traceback)
File "/Library/Python/2.6/site-packages/Mako-0.3.2-py2.6.egg/mako/exceptions.py", line 166, in _init
line = line.decode('ascii', 'replace')
AttributeError: 'NoneType' object has no attribute 'decode'
----------------------------------------
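Until the error page itself is fixed, one stdlib workaround is to catch the exception around the render call yourself and walk the traceback to its innermost frame, which points at the compiled template line. A minimal sketch (the zero-division stands in for the template bug described above; sys.exc_info() is used so it works on the Python 2.6 in this traceback as well as on Python 3):

```python
import sys
import traceback

def innermost_location():
    """Return (filename, lineno) of the deepest traceback frame of the
    exception currently being handled."""
    tb = sys.exc_info()[2]
    while tb.tb_next is not None:
        tb = tb.tb_next
    return tb.tb_frame.f_code.co_filename, tb.tb_lineno

def render_body():
    # Stand-in for a compiled Mako template body that divides by zero.
    return 1 / 0

try:
    render_body()
except ZeroDivisionError:
    print("template error at %s:%d" % innermost_location())
    traceback.print_exc()
```

This reports the location inside the compiled template module ("memory:0x..." in the stack trace above), which at least narrows the failure down to a specific template line number.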
