I am trying to analyze a 3-D image of the wavelet transform of a signal (the image is obtained using the mesh function).
I then want to get a 2-D image of the wavelet transform using the cwtplot function:
cwtplot(signal, 1:32)
However, Scilab shows an error:
Undefined variable cwtplot
I created a new version, 0.3.2, of the Atoms module "Scilab Wavelet Toolbox". It was just a matter of recompilation (plus some other small fixes). If you update the Atoms database, you should be able to download it:
--> atomsSystemUpdate
--> atomsInstall swt
--> atomsLoad swt
Start swt toolbox - (0.3.2)
Load macros
Load gateways
Load help
Load demos
The page for the updated module is here: https://atoms.scilab.org/toolboxes/swt/0.3.2, and the cwtplot help page example now runs correctly.
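With the updated module loaded, a call like the one in the question should work again. A minimal sketch (the test signal below is just an illustrative assumption):

// illustrative test signal: an 8 Hz sine sampled on [0, 1]
t = linspace(0, 1, 256);
signal = sin(2 * %pi * 8 * t);
// 2-D image of the continuous wavelet transform over scales 1..32
cwtplot(signal, 1:32);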
I am trying to create a plot in Octave using the LaTeX interpreter for the axis labels, but whenever I try to do so using the following code:
figure()
xlabel('The $x$ axis','interpreter','latex')
Octave returns the following message:
warning: latex_renderer: a run-time test failed and the 'latex' interpreter has been disabled.
According to the online Octave documentation, it requires some external tools, all of which are present on my system.
The "latex" interpreter only works if an external LaTeX tool chain is present. Three binaries are needed: latex, dvipng, and dvisvgm. If those binaries are installed but not on the path, one can still provide their respective path using the following environment variables: OCTAVE_LATEX_BINARY, OCTAVE_DVIPNG_BINARY, and OCTAVE_DVISVG_BINARY.
I even tried manually setting the aforementioned environment variables using Octave's setenv() function, but no dice. I'm using Octave version 7.3.0, running on openSUSE. If there is a way I can get some more verbose output from Octave to help debug the issue, I'm all ears.
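For reference, the setenv() attempt looked something like this (the paths are placeholders for wherever your distribution installs the binaries):

% placeholder paths; adjust to the actual install locations
setenv('OCTAVE_LATEX_BINARY', '/usr/bin/latex');
setenv('OCTAVE_DVIPNG_BINARY', '/usr/bin/dvipng');
setenv('OCTAVE_DVISVG_BINARY', '/usr/bin/dvisvgm');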
I have trained a SageMaker semantic segmentation model, using the built-in SageMaker semantic segmentation algorithm. This deploys OK to a SageMaker endpoint and I can run inference in the cloud successfully from it.
I would like to use the model on an edge device (AWS Panorama Appliance), which should just mean compiling the model with SageMaker Neo to the specifications of the target device.
However, regardless of what my target device is (the Neo settings), I can't seem to compile the model with Neo, as I get the following error:
ClientError: InputConfiguration: No valid Mxnet model file -symbol.json found
The model.tar.gz for semantic segmentation models contains hyperparams.json, model_algo-1, model_best.params. According to the docs, model_algo-1 is the serialized mxnet model. Aren't gluon models supported by Neo?
Incidentally, I encountered the exact same problem with another SageMaker built-in algorithm, k-Nearest Neighbours (k-NN). It too seems to be saved without a -symbol.json.
Are there any scripts I can run to recreate a -symbol.json file or convert the compiled SageMaker model?
After building my model with an Estimator, I went on to compile it with SageMaker Neo using this code:
optimized_ic = my_estimator.compile_model(
    target_instance_family="ml_c5",
    target_platform_os="LINUX",
    target_platform_arch="ARM64",
    input_shape={"data": [1, 3, 512, 512]},
    output_path=s3_optimized_output_location,
    framework="mxnet",
    framework_version="1.8",
)
I would expect this to compile ok, but that is where I get the error saying the model is missing the *-symbol.json file.
For some reason, AWS has decided to not make its built-in algorithms directly compatible with Neo... However, you can re-engineer the network parameters using the model.tar.gz output file and then compile.
Step 1: Extract model from tar file
import tarfile

# path to local tar file
model = 'ss_model.tar.gz'

# extract tar file
with tarfile.open(model, 'r:gz') as t:
    t.extractall()
This should output two files:
model_algo-1, model_best.params
Step 2: Load weights into a network from the Gluon model zoo for the architecture that you chose
In this case I used DeepLabV3 with a resnet50 backbone.
import gluoncv
import mxnet as mx
from gluoncv import model_zoo
from gluoncv.data.transforms.presets.segmentation import test_transform

model = model_zoo.DeepLabV3(nclass=2, backbone='resnet50', pretrained_base=False,
                            height=800, width=1280, crop_size=240)
model.load_parameters("model_algo-1")
Step 3: Check the parameters have loaded correctly by making a prediction with the new model
Use an image that was used for training.
from mxnet import image

# use cpu
ctx = mx.cpu(0)

# read the raw bytes of a training image (the path here is a placeholder)
with open('train_image.jpg', 'rb') as f:
    imbytes = f.read()

# decode image bytes of loaded file
img = image.imdecode(imbytes)

# transform image
img = test_transform(img, ctx)
img = img.astype('float32')
print('transformed image shape: ', img.shape)

# get prediction
output = model.predict(img)
Step 4: Hybridize the model into the output format required by SageMaker Neo
The forward pass with dummy data below doubles as a check for image shape compatibility.
from gluoncv.utils import export_block

model.hybridize()
model(mx.nd.ones((1, 3, 800, 1280)))
export_block('deeplabv3-res50', model, data_shape=(3, 800, 1280), preprocess=None, layout='CHW')
Step 5: Repackage the model into tar.gz format
This archive contains the params and JSON file which Neo looks for.
tar = tarfile.open("comp_model.tar.gz", "w:gz")
for name in ["deeplabv3-res50-0000.params", "deeplabv3-res50-symbol.json"]:
    tar.add(name)
tar.close()
Step 6: Save the tar.gz file to S3 and then compile using the Neo GUI.
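If you prefer the API to the GUI, a sketch along these lines should also work (the bucket, role ARN, and job name below are placeholders):

import boto3

sm = boto3.client("sagemaker")
sm.create_compilation_job(
    CompilationJobName="deeplabv3-neo",                       # placeholder name
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role
    InputConfig={
        "S3Uri": "s3://my-bucket/comp_model.tar.gz",          # the repackaged model
        "DataInputConfig": '{"data": [1, 3, 800, 1280]}',
        "Framework": "MXNET",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/neo-output/",
        "TargetDevice": "ml_c5",                              # or your Panorama target
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)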
I am trying to load a video into the NVIDIA DALI pipeline for video processing, but it fails to load the .mp4 video.
import os
import numpy as np
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types
batch_size=2
sequence_length=8
initial_prefetch_size=16
video_directory=['sintel_trailer-720p_0.mp4']
n_iter=6
print(video_directory)
@pipeline_def
def video_pipe(file_root):
    video, labels = fn.readers.video(device="gpu", file_root=file_root,
                                     sequence_length=sequence_length,
                                     random_shuffle=True, initial_fill=initial_prefetch_size)
    return video, labels

pipe = video_pipe(batch_size=batch_size, num_threads=2, device_id=0,
                  file_root=video_directory, seed=12345)
pipe.build()
The above DALI pipeline raises the following error while loading the video:
RuntimeError: Critical error when building pipeline: Error when
constructing operator: readers__Video encountered:
[/opt/dali/dali/operators/reader/loader/video_loader.cc:117] Assert on
"dir != nullptr" failed: Directory ['sintel_trailer-720p_0.mp4'] could
not be opened.
I have referred to the NVIDIA DALI documentation for video processing but was not able to solve this.
For reference, please check: NVIDIA DALI DOCS VIDEO PROCESSING
The file_root argument points to the root directory where DALI should search for videos, and the file_list argument should point to a file listing all samples to be loaded.
Judging from your example, however, the filenames argument is the one that suits your needs better.
Your example should work as expected with the following pipeline definition:
@pipeline_def
def video_pipe(file_root):
    video, labels = fn.readers.video(device="gpu", filenames=file_root, labels=[],
                                     sequence_length=sequence_length,
                                     random_shuffle=True, initial_fill=initial_prefetch_size)
    return video, labels
I added the labels argument too. Without it, the operator returns just one output. Please see the DALI manual if you want to understand the operator better.
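With that definition, building and running the pipeline from your snippet should then work:

# instantiate, build, and run one iteration of the fixed pipeline
pipe = video_pipe(batch_size=batch_size, num_threads=2, device_id=0,
                  file_root=video_directory, seed=12345)
pipe.build()
sequences, labels = pipe.run()  # a batch of frame sequences and their labels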
After some research and forum discussion with NVIDIA DALI, I got this answer; please refer to issues/3503 for the detailed discussion.
Thank you
I would like to visualize pointcloud in drake-visualizer using python binding.
I imitated how to publish images through LCM from here, and checked out these two issues (14985, 14991). The snippet is as follows:
point_cloud_to_lcm_point_cloud = builder.AddSystem(PointCloudToLcm())
point_cloud_to_lcm_point_cloud.set_name('pointcloud_converter')
builder.Connect(
    station.GetOutputPort('camera0_point_cloud'),
    point_cloud_to_lcm_point_cloud.get_input_port()
)

point_cloud_lcm_publisher = builder.AddSystem(
    LcmPublisherSystem.Make(
        channel="DRAKE_POINT_CLOUD_camera0",
        lcm_type=lcmt_point_cloud,
        lcm=None,
        publish_period=0.2,
        # use_cpp_serializer=True
    )
)
point_cloud_lcm_publisher.set_name('point_cloud_publisher')
builder.Connect(
    point_cloud_to_lcm_point_cloud.get_output_port(),
    point_cloud_lcm_publisher.get_input_port()
)
However, I got the following runtime error:
RuntimeError: DiagramBuilder::Connect: Mismatched value types while connecting output port lcmt_point_cloud of System pointcloud_converter (type drake::lcmt_point_cloud) to input port lcm_message of System point_cloud_publisher (type drake::pydrake::Object)
When I set use_cpp_serializer=True, the error becomes:
LcmPublisherSystem.Make(
  File "/opt/drake/lib/python3.8/site-packages/pydrake/systems/_lcm_extra.py", line 71, in _make_lcm_publisher
    serializer = _Serializer_[lcm_type]()
  File "/opt/drake/lib/python3.8/site-packages/pydrake/common/cpp_template.py", line 90, in __getitem__
    return self.get_instantiation(param)[0]
  File "/opt/drake/lib/python3.8/site-packages/pydrake/common/cpp_template.py", line 159, in get_instantiation
    raise RuntimeError("Invalid instantiation: {}".format(
RuntimeError: Invalid instantiation: _Serializer_[lcmt_point_cloud]
I saw the C++ example here, so maybe this issue is specific to the Python bindings.
I also saw this Python example, but thought using PointCloudToLcm might be more convenient.
P.S.
I am aware of the development in recent commits on MeshcatVisualizerCpp and MeshcatPointCloudVisualizerCpp, but I am still on the drake-dev stable build 0.35.0-1 and want to stay on drake-visualizer until the Meshcat C++ support is more mature.
The old version in pydrake.systems.meshcat_visualizer.MeshcatVisualizer is a bit too slow for my current use case (multiple objects dropping). I can visualize the point cloud with this visualization setting, but it takes too many machine resources.
Only the message types that are specifically bound in lcm_py_bind_cpp_serializers.cc can be used on an LCM message input/output port connection between C++ and Python. For all other LCM message types, the input/output port connection must be from a Python system to a Python system or a C++ System to a C++ System.
The lcmt_image_array is listed there, but not the lcmt_point_cloud.
If you're stuck using Drake's v0.35.0 capabilities, then I don't see any great solutions. Some options:
(1) Write your own PointCloudToLcm system in Python (by re-working the C++ code into Python, possibly with a narrower set of supported features / channels for simplicity).
(2) Write your own small C++ helper function MakePointCloudPublisherSystem(...) that calls the LcmPublisherSystem::Make<lcmt_point_cloud> function in C++, and bind it into Python. Then your Python code can call MakePointCloudPublisherSystem() and successfully connect that to the existing C++ PointCloudToLcm; a rough sketch follows.
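A sketch of option (2), assuming pybind11 (the module and function names are the hypothetical ones from the option above, and the build glue against Drake and pydrake's pybind internals is omitted):

#include <memory>
#include <string>

#include <pybind11/pybind11.h>

#include "drake/lcm/drake_lcm_interface.h"
#include "drake/lcmt_point_cloud.hpp"
#include "drake/systems/lcm/lcm_publisher_system.h"

namespace py = pybind11;

// Instantiates the publisher with the C++ serializer, so its input port type
// matches the output of the C++ PointCloudToLcm system.
std::unique_ptr<drake::systems::lcm::LcmPublisherSystem>
MakePointCloudPublisherSystem(const std::string& channel,
                              drake::lcm::DrakeLcmInterface* lcm,
                              double publish_period) {
  return drake::systems::lcm::LcmPublisherSystem::Make<drake::lcmt_point_cloud>(
      channel, lcm, publish_period);
}

PYBIND11_MODULE(point_cloud_publisher, m) {  // hypothetical module name
  m.def("MakePointCloudPublisherSystem", &MakePointCloudPublisherSystem,
        py::arg("channel"), py::arg("lcm"), py::arg("publish_period"));
}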
I am using caffe with Python (pycaffe), with the prebuilt AlexNet model from the model zoo, from this page:
https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet
Every time I use the model with this code:
net = caffe.Classifier('deploy.prototxt', 'bvlc_alexnet.caffemodel',
                       channel_swap=(2, 1, 0),
                       raw_scale=255,
                       image_dims=(256, 256))
caffe tells me the file format is old and it needs to upgrade the file. Shouldn't this happen only once?
E0304 20:52:57.356480 12716 upgrade_proto.cpp:609] Attempting to upgrade input file specified using deprecated transformation parameters: /tmp/bvlc_alexnet.caffemodel
I0304 20:52:57.356554 12716 upgrade_proto.cpp:612] Successfully upgraded file specified using deprecated data transformation parameters.
E0304 20:52:57.356564 12716 upgrade_proto.cpp:614] Note that future Caffe releases will only support transform_param messages for transformation fields.
E0304 20:52:57.356580 12716 upgrade_proto.cpp:618] Attempting to upgrade input file specified using deprecated V1LayerParameter: /tmp/bvlc_alexnet.caffemodel
I0304 20:52:59.307096 12716 upgrade_proto.cpp:626] Successfully upgraded file specified using deprecated V1LayerParameter
How can I properly upgrade the file so that this doesn't happen every single time?
When you load the model, caffe upgrades your prototxt and binary proto in memory, but does not overwrite the original files you are using. This is why you keep getting this message.
Upgrading is very straight forward. In $CAFFE_ROOT/build/tools you'll find two binaries: upgrade_net_proto_binary and upgrade_net_proto_text. Simply apply them to your deploy.prototxt and bvlc_alexnet.caffemodel and save the results:
~$ mv deploy.prototxt deploy_old.prototxt
~$ mv bvlc_alexnet.caffemodel bvlc_alexnet_old.caffemodel
~$ $CAFFE_ROOT/build/tools/upgrade_net_proto_text deploy_old.prototxt deploy.prototxt
~$ $CAFFE_ROOT/build/tools/upgrade_net_proto_binary bvlc_alexnet_old.caffemodel bvlc_alexnet.caffemodel
And that's it!
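After the upgrade, reloading the model with the snippet from the question should no longer print the upgrade messages:

import caffe

# same call as in the question, now pointing at the upgraded files
net = caffe.Classifier('deploy.prototxt', 'bvlc_alexnet.caffemodel',
                       channel_swap=(2, 1, 0),
                       raw_scale=255,
                       image_dims=(256, 256))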
Thank you, Shai, for your help.
However, if you are on Windows, the upgrade_net_proto_binary.exe and upgrade_net_proto_text.exe files are located in path-to-caffe-master/caffe/build/tools/Release.
Hope this helps Windows users.