VLC 1.2 mosaic streaming

This VLM config works in VLC 1.1, but I am having memory-leak issues after the player has been running a mosaic for more than an hour.
I have installed VLC 1.2, but it fails to display any streams in the mosaic (see errors below). Individual streams work fine, and I am using MMSH to stream WMV files.
From what I understand, the fake:// access method has been deprecated in VLC 1.2. Are there any other changes that would prevent this mosaic from working in VLC 1.2?
new bg broadcast enabled
setup bg output #bridge-in{offset=10}:display
#VLC 1.2
setup bg input "http://img696.imageshack.us/img696/4131/rainbowbackgroundq.png"
#OR- VLC 1.1
setup bg input 'fake://' option 'fake-file=http://img696.imageshack.us/img696/4131/rainbowbackgroundq.png' option 'fake-width=800' option 'fake-height=600'
setup bg option sub-filter=mosaic
setup bg option mosaic-alpha=255
setup bg option mosaic-height=600
setup bg option mosaic-width=800
setup bg option mosaic-align=5
setup bg option mosaic-xoffset=0
setup bg option mosaic-yoffset=0
setup bg option mosaic-vborder=5
setup bg option mosaic-hborder=5
setup bg option mosaic-position=1
setup bg option mosaic-rows=1
setup bg option mosaic-cols=2
setup bg option no-mouse-events
setup bg option no-keyboard-events
setup bg option no-audio
setup bg option mosaic-order=v1,v2
setup bg option no-mosaic-keep-picture
setup bg option no-mosaic-keep-aspect-ratio
new v1 broadcast enabled
setup v1 input "mmsh://mediaserver2.otn.ca/mediasite/b2974e0a-24c3-43e4-9833-e3c9937197e0.wmv"
setup v1 option input-repeat=-1
setup v1 output #mosaic-bridge{id=v1,width=395,height=600}
new v2 broadcast enabled
setup v2 input "mmsh://mediaserver2.otn.ca/mediasite/070871fa-5b30-4e17-b83b-57b149044532.wmv"
setup v2 option input-repeat=-1
setup v2 output #mosaic-bridge{id=v2,width=395,height=600}
control bg play
control v1 play
control v2 play
VLC 1.2 errors:
[0xf80ec8] dummy interface: using the dummy interface module...
[0xf4bb68] [Media: v1] access_mms access error: cannot read data 2
[0xf7b578] [Media: v2] access_mms access error: cannot read data 2
[0xfd72f8] [Media: v2] main decoder error: cannot create packetizer output (WMA2)
[0xf72ed8] [Media: v1] main input error: ES_OUT_SET_(GROUP_)PCR is called too late (pts_delay increased to 1000 ms)
[0xf72ed8] [Media: v1] main input error: ES_OUT_RESET_PCR called

Have you tried VLC 2.0 already?
The VideoLAN wiki should be updated to give you working examples for VLC 2: http://wiki.videolan.org/Mosaic
Concerning the changes needed for your mosaic setup, with VLC 2 they would include the following:
(1) You already replaced fake://; to continuously show your image, add:
setup bg option image-duration=-1
(2) The mosaic options need to go on the command line, as in:
vlc -I telnet --mosaic-alpha=255 --mosaic-height=600 --mosaic-align=5 --mosaic-xoffset=0 [...] --vlm-conf /path/to/your/mosaic/config/file
(3) Transcode your bg broadcast and use the sub filter there:
setup bg output #transcode{sfilter=mosaic,vcodec=h264,venc=x264{profile=baseline,level=30,aud},vb=768,width=800,height=600,scale=1}:standard{access=udp,mux=ts,dst=239.0.0.1:1234}
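Putting (1) to (3) together, the bg section of the VLM file might look like this under VLC 2 (a sketch, untested; it reuses the image URL and the transcode/UDP output from above, with all the mosaic-* options moved to the vlc command line as in (2)):

```
new bg broadcast enabled
setup bg input "http://img696.imageshack.us/img696/4131/rainbowbackgroundq.png"
setup bg option image-duration=-1
setup bg output #transcode{sfilter=mosaic,vcodec=h264,venc=x264{profile=baseline,level=30,aud},vb=768,width=800,height=600,scale=1}:standard{access=udp,mux=ts,dst=239.0.0.1:1234}
```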
Hope this helps and points you in the right direction.

Related

Nao robot IMU data rates

I'm trying to stream data from the Nao's inertial unit in its trunk. However, the update rate is quite slow, ~1 Hz. Is there any way to improve it? For reference, I issued the following command using qicli to measure the rate:
qicli call --json ALMemory.getListData "[[\"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value\"]]"
In this example I retrieve the tilt angle of the trunk around the Y-axis (pitch).
To execute this command, I established an SSH connection to the Nao and timed it using the Linux time command. I also tried to force a faster read rate by issuing the above command in a loop with 5 milliseconds of sleep between iterations:
for i in {1..100}; do qicli call --json ALMemory.getListData "[[\"Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value\"]]"; sleep 0.005; done
But even in this case I could see that the data was read at about a rate of 1Hz.
I tried it on Nao versions 5 and 6, connecting both over WiFi and link-locally using an Ethernet cable.
This data is available every 10 ms, but a qicli call takes a long time to initialize the connection.
Try using the API in Python: create a proxy, then call getData in a loop; refer to the API documentation here.
As a side note, the best way to record or monitor data efficiently is to process it directly on the NAO. Connect using SSH, upload your program and run it, or use Choregraphe to create and run it directly on the robot.
# edit: adding a simple script to be run directly on the NAO (untested)
import time
import naoqi

mem = naoqi.ALProxy("ALMemory", "localhost", 9559)
while True:
    val = mem.getData("Device/SubDeviceList/InertialSensor/AngleY/Sensor/Value")
    print(val)
    time.sleep(0.01)  # the sensor value updates every 10 ms
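To verify what rate a polling loop actually achieves, the timing can be factored out into a small helper (a sketch; `read_fn` stands in for the `mem.getData(...)` call, which only works on the robot):

```python
import time

def measure_rate(read_fn, n=50, period=0.01):
    """Call read_fn n times, sleeping `period` seconds between calls,
    and return the effective sample rate in Hz."""
    start = time.time()
    for _ in range(n):
        read_fn()  # on the robot: mem.getData("Device/SubDeviceList/...")
        time.sleep(period)
    return n / (time.time() - start)

# With a no-op read the rate is bounded only by the sleep period,
# so it comes out just under 100 Hz here.
rate = measure_rate(lambda: None)
```

If this reports close to 100 Hz with a no-op but ~1 Hz with the real call, the bottleneck is the per-call connection setup, not the sensor.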

rospy message_filters ApproximateTimeSynchronizer issue

I installed ROS melodic version in Ubuntu 18.04.
I'm playing a rosbag in the background to mock the camera message rostopics.
I set the camera names in rosparams and iterate through them to capture each camera's topics.
I'm using message_filters' ApproximateTimeSynchronizer to get time-synchronized data, as described in the official documentation:
http://wiki.ros.org/message_filters
But most of the time the callback passed to ApproximateTimeSynchronizer is not called, or is delayed. The code snippet I'm using is given below:
What am I doing wrong here?
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

def camera_callback(*args):
    pass  # Other logic comes here

rospy.init_node('my_listener', anonymous=True)
camera_object_data = []
for camera_name in rospy.get_param('/my/cameras'):
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/camera_info'.format(camera_name), CameraInfo))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/hd/image_color_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/image_depth_rect'.format(camera_name), Image))
    camera_object_data.append(message_filters.Subscriber(
        '/{}/qhd/points'.format(camera_name), PointCloud2))

topic_list = [filter_obj for filter_obj in camera_object_data]
ts = message_filters.ApproximateTimeSynchronizer(topic_list, 10, 1, allow_headerless=True)
ts.registerCallback(camera_callback)
rospy.spin()
Looking at your code, it seems correct. The trouble may, however, be bad timestamps, and hence this synchronizer as well; see http://wiki.ros.org/message_filters/ApproximateTime for the algorithm's assumptions.
My recommendation is to write a corresponding node that publishes empty versions of these four msgs all at the same time. If it's still not working in this perfect scenario, there is an issue with the code above. If it is working just fine, then you need to pay attention to the headers.
Given that you have a bag file, you can also step through the messages on the command line and observe the timestamps. (You can also step through them in Python.)
$ rosbag play --pause recorded1.bag # step through msgs by pressing 's'
For time-noisy messages with small payloads, I've simply written a node that listens to all of them and republishes them with the latest timestamp found on any of them (for synchronized logging to CSV). Not optimal, but it should reveal where the issue lies.
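The timestamp assumption can be illustrated with a simplified model (this is not the actual message_filters matching algorithm, only its basic precondition): an approximate-time policy can emit a set of messages only if their header stamps all lie within the configured slop of one another.

```python
def within_slop(stamps, slop):
    """True if all timestamps (in seconds) lie within `slop` seconds
    of each other -- the precondition for an approximate-time match."""
    return max(stamps) - min(stamps) <= slop

# Stamps from four topics spread over 0.4 s match with slop=1.0 ...
ok = within_slop([10.0, 10.1, 10.3, 10.4], slop=1.0)    # True
# ... but not with slop=0.2: the callback would then simply never fire.
bad = within_slop([10.0, 10.1, 10.3, 10.4], slop=0.2)   # False
```

So if the stamps on the bag's four topics drift further apart than the slop (1 second in the code above), the callback is never invoked even though every subscriber is receiving data.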

SPI on BeagleBone Black

I've been trying to make the serial communication work on my BBB for days now and I am running out of ideas.
When I use just the BBB and connect MISO to MOSI, I see the signals on MOSI, SCLK and CS (MISO stays mostly high). However, when I connect the lines to my slave device, it does not work. I checked the signals on the oscilloscope and they seem fine, and the part I am using as the slave works well when I set it to parallel mode, so I believe something in the programming or configuration must be wrong.
This is basically what I do:
config-pin P9.17 spi_cs
config-pin P9.18 spi
config-pin P9.21 spi
config-pin P9.22 spi_sclk
python
from Adafruit_BBIO.SPI import SPI
spi = SPI(1,0) #I would expect SPI(0,0) here, but I get the signal on the above configured ports
Then I set the configurations (already tried in many ways):
spi.mode = 0
spi.cshigh = False
spi.msh = 10500000
spi.bpw = 16
spi.lsbfirst = False
After that I open it and try to send data:
spi.open(1,0)
spi.xfer2([1,254])
If anyone is interested, I am trying to program the LMH6517 as the slave, and I have already asked about this on the TI forum here:
https://e2e.ti.com/support/amplifiers/f/14/t/751415
Oscilloscope images:
CS and SCLK
MOSI and SCLK
MISO and SCLK
Thank you,
JPL

Tcl Tk to show images in Scilab

I want to show images and videos in Scilab using a GUI made in Tcl/Tk.
Scilab has support for Tcl/Tk: https://help.scilab.org/docs/6.0.0/en_US/section_a10b99d9dda4c3d65d29c2a48e58fd88.html
I have made a tcl script which displays an image when run from the terminal.
image create photo img -file <filepath>
pack [label .mylabel]
.mylabel configure -image img
However, when I run the following .sci file in Scilab, it executes successfully but no image window is shown.
function sampletry()
    TCL_EvalFile(<path_to_tcl_file>);
endfunction
I know the code executed successfully because when I execute the same function again in Scilab, I get an error saying that the label .mylabel already exists in the parent window.
Is there any way to show images/videos in Scilab using this method, or any other method? I'm using OpenCV to read the image and return it to Scilab through the Scilab API in a list.
The problem is that you're not servicing the event loop from your Scilab code; without that, the flurry of messages from the OS to do with actually putting the window on the screen never gets through and handled. Assuming you want your code to stop and wait for the viewing to be done, you can just change the Tcl/Tk code to:
image create photo img -file <filepath>
if {![winfo exists .mylabel]} {
    pack [label .mylabel]
}
.mylabel configure -image img
wm deiconify .
# Wait for the user to ask for the window to be closed
wm protocol . WM_DELETE_WINDOW {set done 1}
vwait done
# Process the close immediately
wm withdraw .
update
There's nothing very special about the done variable. We're just waiting for it to be set in the callback. I've added a bit of extra code to allow you to call this twice (i.e., conditionally creating the widget, ensuring that . is displayed and then hiding it at the end).
The simplest technique if you don't want to keep everything in the same process is to run your original script as a separate program, effectively doing:
wish <path_to_tcl_file>
I don't know what the easiest way to do that from Scilab is.

BeagleBone Black – unload cape

I am using a BeagleBone Black with the most recent OS (Debian Jessie, kernel v4.1). I need to be able to use pin P9_19 as a GPIO pin, but it has already been assigned to I2C_2_SCL.
When I try to use it, I get the error (octalbonescript):
The pin P9_19 is not availble to write. Please make sure it is not used by another cape.
How can I unload the I2C cape to expose pin P9_19 for GPIO access?
There are several ways to do that.
Using a device tree overlay: echo the cape-universaln dtbo file to the slots file.
Or use this link to generate an overlay file for GPIO use, compile the source dts file using
dtc -O dtb -o /lib/firmware/bspm_P9_19_17-00A0.dtbo -b 0 -@ /lib/firmware/bspm_P9_19_17-00A0.dts
and then deploy the output dtbo file to /lib/firmware and echo it to the slots file.
I also personally recommend the WiringBone library for the BeagleBone; it is by far the best for this.
