Legacy autograd function with non-static forward method is deprecated

While training CFNet (https://github.com/gallenszl/CFNet), after loading the Mish activation function, the following error comes up:
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
Does anybody know how to fix it?

It seems you're using a PyTorch version that is too new, in which incompatible changes have been introduced. Use the PyTorch (and Python) versions indicated in the repository; for PyTorch that is version 1.1.0.
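Alternatively, if you cannot downgrade, you can port the legacy function to the new-style API. Below is a minimal sketch of what that looks like for a Mish activation (the actual implementation in CFNet may differ):

import torch
import torch.nn.functional as F

class MishFunction(torch.autograd.Function):
    # New-style: forward/backward are static methods that receive a ctx
    # object instead of relying on instance state.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * torch.tanh(F.softplus(x))

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        tanh_sp = torch.tanh(F.softplus(x))
        # d/dx [x * tanh(softplus(x))] = tanh(sp) + x * sech^2(sp) * sigmoid(x)
        grad = tanh_sp + x * (1 - tanh_sp * tanh_sp) * torch.sigmoid(x)
        return grad_output * grad

class Mish(torch.nn.Module):
    def forward(self, x):
        # Invoke via .apply; calling an instantiated Function object is the
        # deprecated legacy pattern that triggers the RuntimeError.
        return MishFunction.apply(x)

The key change is that forward and backward are @staticmethods called through MishFunction.apply(...) rather than methods on a Function instance.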

Related

Neo compilation job failed on YOLOv5/v7 model

I was trying to use AWS SageMaker Neo compilation to convert a YOLO model (trained with our custom data) to Core ML format, but got an error on the input config:
ClientError: InputConfiguration: Unable to determine the type of the model, i.e. the source framework. Please provide the value of argument "source", from one of ["tensorflow", "pytorch", "mil"]. Note that model conversion requires the source package that generates the model. Please make sure you have the appropriate version of source package installed.
It seems Neo cannot recognize the YOLO model. Are there any special requirements for models in AWS SageMaker Neo?
I've tried both the latest YOLOv7 model and a YOLOv5 model, and both .pt and .pth file extensions, but I still get the same error. I also tried downgrading PyTorch to version 1.8; that didn't work either.
But when I use the YOLOv4 model from this tutorial post, it works fine: https://aws.amazon.com/de/blogs/machine-learning/speed-up-yolov4-inference-to-twice-as-fast-on-amazon-sagemaker/
Any idea whether Neo compilation can work with YOLOv7/v5 models?

How can I read a PyTorch model file via cv2.dnn.readNetFromTorch()?

I am able to save a custom PyTorch model (this works with any PyTorch version above 1.0).
However, I am not able to read the saved model back. I am trying to read it via cv2.dnn.readNetFromTorch() so as to use the model in the OpenCV framework (4.1.0).
I saved the PyTorch model in the different ways below to see whether the saving method affects how cv2.dnn reads the file.
# Save the weights only (state_dict):
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.pt')
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.t7')
# Save the whole pickled model:
torch.save(model, '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.pth')
None of these saved files can be read via cv2.dnn.readNetFromTorch(). The error I get is always the same, shown below.
cv2.error: OpenCV(4.1.0) ../modules/dnn/src/torch/torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'readObject'
Do you have any idea how to solve this issue?
The OpenCV documentation states that readNetFromTorch() can only read models in the Torch7 framework format; there is no mention of .pt or .pth files saved by PyTorch.
This post mentions that PyTorch does not save in the .t7 format:
".t7 was used in Torch7 and is not used in PyTorch. If I'm not mistaken, the file extension does not change the behavior of torch.save."
An alternative is to export the model to ONNX and then read it in OpenCV using readNetFromONNX.
Yes, we have tested these methods (saving the model as .pt or .pth), and we couldn't load these model files using OpenCV's readNetFromTorch. We should use LibTorch or ONNX as the intermediate model format to be read from C++.
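For reference, a minimal sketch of the ONNX route suggested above (the tiny placeholder model, input shape, and file name are illustrative only):

import cv2
import torch

# Placeholder model; substitute your trained network.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()

# The dummy input must match the model's expected input shape.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx")

# OpenCV reads the exported ONNX graph directly.
net = cv2.dnn.readNetFromONNX("model.onnx")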

Using g_autoptr() together with G_DEFINE_AUTOPTR_CLEANUP_FUNC() when using older and newer GLib versions

There is something I don't understand about using g_autoptr() together with G_DEFINE_AUTOPTR_CLEANUP_FUNC() across different GLib versions (this also affects the other g_auto... variants and their G_DEFINE macros).
The documentation says:
"The way to clean up the type must have been defined using the macro G_DEFINE_AUTOPTR_CLEANUP_FUNC()"
When using, for example, GLib 2.62, the G_DEFINE macro results in errors like
/usr/include/glib-2.0/glib/gmacros.h:1032:49: error: redefinition of ‘glib_slistautoptr_cleanup_GtkTreePath’
1032 | #define _GLIB_AUTOPTR_SLIST_FUNC_NAME(TypeName) glib_slistautoptr_cleanup_##TypeName
Leaving out the G_DEFINE macro solves the problem, and the program works just fine.
However, on older GLib versions such as 2.50 (which is, for example, still used by Debian 9), using the G_DEFINE macro does not result in an error. I can't see this change of behaviour reflected in the GLib documentation, and I cannot determine when exactly it was introduced. How am I supposed to cope with this when I want to support all GLib versions from 2.50 on?
The issue is probably in Gtk, not GLib. g_autoptr() was already supported for most things in Gtk+ 3.22 (the version in Debian 9), so you shouldn't have to call G_DEFINE_AUTOPTR_CLEANUP_FUNC() on Gtk types yourself. GtkTreePath, however, was still missing a call; this was added in 3.24, see https://gitlab.gnome.org/GNOME/gtk/-/commit/86dd1e37a70e9bae057a9a11332f7254cda242e8.
You'll probably have to put the macro call behind a version check if you want to use g_autoptr() with GtkTreePath on Gtk < 3.24, as sketched below.
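A minimal sketch of such a guard (the example function is hypothetical):

#include <gtk/gtk.h>

/* Gtk+ >= 3.24 defines the cleanup function for GtkTreePath itself,
 * so only define it on older versions to avoid the redefinition error. */
#if !GTK_CHECK_VERSION(3, 24, 0)
G_DEFINE_AUTOPTR_CLEANUP_FUNC(GtkTreePath, gtk_tree_path_free)
#endif

static void expand_first_child(GtkTreeView *view)
{
    /* Freed automatically when path goes out of scope. */
    g_autoptr(GtkTreePath) path = gtk_tree_path_new_from_string("0:0");
    gtk_tree_view_expand_to_path(view, path);
}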

How can I use regexp:sh_to_awk and regexp:match in Erlang v5.10.4?

I have a module that uses regexp:sh_to_awk and regexp:match.
But when I compile it, the compiler warns me that the regexp module was removed in R15 and recommends using the re module instead.
I searched the Erlang documentation but I can't find out how to replace the two functions.
Can anyone tell me how to fix this?
Indeed, the regexp module had been deprecated for a while and has now been removed, replaced by the re module.
The old regexp:match function has been replaced by the re:run functions, which add a lot of functionality, such as returning the captured parts as lists or binaries (the old way of returning start position and length also remains):
> re:run("Test String","[a-zA-Z]{4}",[{capture,all,list},global]).
{match,[["Test"],["Stri"]]}
Read through the re:run/3 documentation; it's worth it, as is the documentation for the other functions of the re module (like compile and replace).
regexp:sh_to_awk has also been removed. If your intended use of the old regexp:sh_to_awk/1 was matching filenames, you can use the filelib:wildcard functions instead, as sketched below.
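For example (a quick sketch; the directory and the matched file names are hypothetical, and the result depends on the files present):
> filelib:wildcard("src/*.erl").
["src/bar.erl","src/foo.erl"]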

Z3 theory plug-in error

I have created a custom theory plugin which does nothing at the moment: the callbacks are all implemented and registered, but they simply return. Then I read in a bunch of declare-consts, declare-funs, and asserts using Z3_parse_smtlib2_string and pass the resulting AST to Z3_assert_cnstr. A subsequent call to Z3_check_and_get_model fails with the following error:
The mk_fresh_ext_data callback was not set for user theory, you must use Z3_theory_set_mk_fresh_ext_data_callback
As far as I can tell, Z3_theory_set_mk_fresh_ext_data_callback does not exist.
Using the same string, but without registering the theory plugin, Z3_check_and_get_model returns sat and gives a model as expected.
I am using version 4 and the Linux 64 bit libraries.
The full example is here: http://pastebin.com/hLJ8hFf1
The problem is the model-based quantifier instantiation module (MBQI). This module tries to create a copy of the main logical engine. To create the copy, Z3 must copy every theory plugin. It can do it for all builtin theories, but not for external theories.
The original theory plugin API did not have support for copying itself because it was implemented before the MBQI module. The API Z3_theory_set_mk_fresh_ext_data_callback is meant for that. However, it was not exposed yet for several reasons.
The main issue is that Z3 4.0 has a new API for solvers. The current theory plugin API is incompatible with the new solver API.
We are investigating ways of integrating them.
In Z3 4.0, the theory plugins only work with the old (deprecated) solver API.
To avoid the problem you described, you just have to disable the MBQI module. You can do that by setting MBQI=false when creating Z3_context.
In C, you can do that using the following code fragment.
Z3_config cfg;
Z3_context ctx;
cfg = Z3_mk_config();
/* Disable model-based quantifier instantiation before creating the context. */
Z3_set_param_value(cfg, "MBQI", "false");
ctx = Z3_mk_context(cfg);
This also explains why your plugin works on quantifier-free formulas. The MBQI module is not used for this kind of formula.
