mp3 @ 0x1768ac0 Header missing - OpenCV

I have a school project using OpenCV, a Raspberry Pi 2, a Raspicam, and C++. Our project needs to use a video stream from the camera to detect objects.
That's why I am trying to use a pipe and a FIFO, like this:
mkfifo fifo;
raspivid -t 0 -o fifo & ./Detection fifo
As a result I get:
[mp3 @ 0x1768ac0] Header missing
But when a properly saved video file is given to the program, our project runs perfectly.
Example of commands that behave correctly:
raspivid -t 10000 -o video.h264 ; ./Detection video.h264
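For reference, here is the gist of what Detection does with the path it is given (a Python sketch of the equivalent logic; the real code is C++):
import cv2

# Sketch only, not the actual Detection code: open the given path
# and pull frames from the H.264 stream for processing.
cap = cv2.VideoCapture("video.h264")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... run object detection on `frame` ...
cap.release()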
Does someone have an idea?
All the results I found were not suitable for our project; maybe I am missing some important information.
Thank you.
PS: I hope I am understandable; my English is not that good.

Related

Using GPIO3_13 (GPIO109) to disable USB VBUS

I found the dev-USB-PWR-CTL-00A1.dtbo file (I think this is the source for it).
Using this file I try to expose the USB1_DRVVBUS pin as a GPIO (GPIO3_13) with these commands:
echo dev-USB-PWR-CTL > /sys/devices/platform/bone_capemgr/slots
echo 109 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio109/direction
I see a new cape entry in slots and a new gpio file tree.
But when I change the value with the command
echo 0 > /sys/class/gpio/gpio109/value
I see the new value in the file, but nothing happens to USB VBUS.
What am I missing?
(Before you ask whether I really need this: let's leave the consequences aside for a moment.)
Have you looked at this question about exactly this on the BeagleBoard Google Group?
Please note that there are some differences between the images from back then and current ones; e.g., by default the cape manager is disabled and overlays are loaded once in U-Boot.
If you are using a recent elinux.org Debian image, it includes the necessary device tree overlay (merged in June 2015, carrying the comment "Unless you know what you are doing, do not load this cape!!!"). It uses a hack to expose the usb1_drvvbus signal as a fictitious LED, which can then be controlled using the LED interface in /sys.
First, load the dev-USB-PWR-CTL-00A1.dtbo device tree overlay. For recent setups (where all dtbos are loaded by U-Boot and then passed to the kernel at boot time), this can be done by adding dtb_overlay=/lib/firmware/dev-USB-PWR-CTL-00A1.dtbo to /boot/uEnv.txt and rebooting (older kernels/U-Boots will need to use the older config mechanisms as described in /boot/uEnv.txt).
You can then do this:
echo 'usb1' > /sys/bus/usb/drivers/usb/unbind
echo 0 > /sys/devices/platform/leds/leds/usb_hub_power/brightness
sleep 1
echo 255 > /sys/devices/platform/leds/leds/usb_hub_power/brightness
echo 'usb1' > /sys/bus/usb/drivers/usb/bind
... to power-cycle the device attached to USB1.
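The same sequence can be scripted. Here is a minimal Python sketch, assuming the overlay exposed the fake LED at the sysfs path used above:
import time

def write_sysfs(path, value):
    # write a single value to a sysfs attribute
    with open(path, "w") as f:
        f.write(value)

def power_cycle_usb1(off_seconds=1.0):
    # Sketch only: mirrors the shell sequence above; the paths assume the
    # dev-USB-PWR-CTL overlay is loaded and usb1 is the affected bus.
    write_sysfs("/sys/bus/usb/drivers/usb/unbind", "usb1")
    write_sysfs("/sys/devices/platform/leds/leds/usb_hub_power/brightness", "0")
    time.sleep(off_seconds)
    write_sysfs("/sys/devices/platform/leds/leds/usb_hub_power/brightness", "255")
    write_sysfs("/sys/bus/usb/drivers/usb/bind", "usb1")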

Using TensorFlow Audio Recognition Model on iOS

I'm trying to use the TensorFlow audio recognition model (my_frozen_graph.pb, generated here: https://www.tensorflow.org/tutorials/audio_recognition) on iOS.
But the iOS code NSString* network_path = FilePathForResourceName(@"my_frozen_graph", @"pb"); in TensorFlow Mobile's tf_simple_example project outputs this error message: Could not create TensorFlow Graph: Not found: Op type not registered 'DecodeWav'.
Anyone knows how I can fix this? Thanks!
I believe you are using the pre-built TensorFlow from CocoaPods? It probably does not have that op type, so you should build it yourself from the latest source.
From documentation:
While CocoaPods is the quickest and easiest way of getting started, you sometimes need more flexibility to determine which parts of TensorFlow your app should be shipped with. For such cases, you can build the iOS libraries from the sources. This guide contains detailed instructions on how to do that.
This might also be helpful: [iOS] Add optional Selective Registration of Ops #14421
Optimization
The build_all_ios.sh script can take optional command-line arguments to selectively register only for the operators used in your graph.
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
Please note this is an aggressive optimization of the operators and the resulting library may not work with other graphs but will reduce the size of the final library.
After the build is done you can check /tensorflow/tensorflow/core/framework/ops_to_register.h for the operations that were registered (it is auto-generated during the build when the -g flag is used).
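To see which op types your own graph actually needs (and therefore what must end up in ops_to_register.h), a small sketch using the TF 1.x Python API, with my_frozen_graph.pb standing in for your model file:
import tensorflow as tf

# List the op types used by a frozen graph (TF 1.x API).
graph_def = tf.GraphDef()
with open("my_frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
print(sorted({node.op for node in graph_def.node}))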
Some progress: having realized the unregistered DecodeWav error is similar to the old familiar DecodeJpeg issue (#2883), I ran strip_unused on the pb as follows:
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=/tf_files/speech_commands_graph.pb \
--output_graph=/tf_files/stripped_speech_commands_graph.pb \
--input_node_names=wav_data,decoded_sample_data \
--output_node_names=labels_softmax \
--input_binary=true
It does get rid of the DecodeWav op in the resulting graph. But running the new stripped graph on iOS now gives me an Op type not registered 'AudioSpectrogram' error.
Also there's no object file audio*.o generated after build_all_ios.sh is done, although AudioSpectrogramOp is specified in tensorflow/core/framework/ops_to_register.h:
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name decode*.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_wav_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name audio*_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$
Just verified that Pete's fix (https://github.com/tensorflow/tensorflow/issues/15921) is good:
add the line tensorflow/core/ops/audio_ops.cc to the file tensorflow/contrib/makefile/tf_op_files.txt and run tensorflow/contrib/makefile/build_all_ios.sh again (running compile_ios_tensorflow.sh "-O3" by itself used to work for me after adding a line to tf_op_files.txt, but not anymore with TF 1.4).
Also, use the original model file, not the stripped version; a note about this was added in the link above.

Why does my Python script not recognize speech from an audio file?

I have the following piece of code, which successfully recognizes a short (less than 1 min) test audio file but fails to recognize a longer audio file (1.5 h).
from google.cloud import speech

def run_quickstart():
    speech_client = speech.Client()
    sample = speech_client.sample(
        source_uri="gs://linear-arena-2109/zoom0070.flac",
        encoding=speech.Encoding.FLAC)
    alternatives = sample.recognize('uk-UA')
    for alternative in alternatives:
        print(u'Transcript: {}'.format(alternative.transcript))
    with open("Output.txt", "w") as text_file:
        for alternative in alternatives:
            text_file.write(alternative.transcript.encode('utf8'))

if __name__ == '__main__':
    run_quickstart()
Both files are uploaded to Google Cloud.
The first one:
https://storage.googleapis.com/linear-arena-2109/sample.flac
The second one:
https://storage.googleapis.com/linear-arena-2109/zoom0070.flac
Both were converted from mp3 with ffmpeg utility:
ffmpeg -i sample.mp3 -ac 1 sample.flac
ffmpeg -i zoom0070.mp3 -ac 1 zoom0070.flac
The first file was successfully recognized, but the second file produces the following error:
google.gax.errors.RetryError: GaxError(Exception occurred in retry method that was not classified as transient, caused by <_Rendezvous of RPC that terminated with (StatusCode.INVALID_ARGUMENT, Sync input too long. For audio longer than 1 min use LongRunningRecognize with a 'uri' parameter.)>)
But I have already used the uri parameter in my Python script. What is wrong?
Update:
@NieDzejkob helped me understand the error: the method long_running_recognize should be used instead of recognize. A comprehensive long_running_recognize usage example can be found on the corresponding documentation page.
For any audio file longer than 1 minute, you need to use asynchronous speech recognition, and the file has to be uploaded to Google Cloud Storage so that you can pass in a gcs_uri.
In addition, you will need to use the .long_running_recognize method in your script. An example from GCP documentation can be found here.
I realize that OP figured it out but thought it would be useful to provide an answer and generalize it a bit.
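For reference, a minimal sketch of the asynchronous call using the current google-cloud-speech client (class and method names differ between library versions, so treat this as an outline rather than the exact code):
from google.cloud import speech

client = speech.SpeechClient()
audio = speech.RecognitionAudio(uri="gs://linear-arena-2109/zoom0070.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="uk-UA",
)
# Asynchronous recognition returns an operation to wait on.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=5400)  # allow plenty of time for 1.5 h
for result in response.results:
    print(result.alternatives[0].transcript)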

Get application data in a network frame via the tshark command line

I need to parse a custom protocol in many .pcapng files, and I want to directly filter and output the raw application data via a tshark command.
At first I used the "-e data.data" option, but some of the application data gets decoded as other protocols and is not output by -e data.data.
Then I found a way using the "disable-protocol" file under the Wireshark profile folder, but I must carry that profile file and deploy it before running the parsing program on another PC.
I also tried disabling all protocols except UDP and TCP, but that doesn't work.
Disabling the known conflicting protocols does work, but the same mistake may occur with other, unknown protocols, so tshark's output still can't be trusted completely.
I work on Windows 7 with Wireshark 2.2 and use Python 2.7 for the parsing work.
In summary, what I want is a portable command line that can flexibly and directly output all data after the UDP header in a network frame.
Could I disable decoding on some ports just by adding options on the command line?
EDIT 1:
I found that in Wireshark 1.12 there is a "Do not decode" option in the "Decode As..." dialog; if it is enabled, the display is exactly what I want. But in Wireshark 2.2 that option was removed, and I still need a command line to do this filtering.
After 48 hours and 26 views, there is still no response other than one upvote.
I have already given up on that approach and decode the frames myself.
What I want is the UDP source port and destination port, plus the application data.
In fact, every network frame has a header of the same length, so it is easy to strip the header at a fixed offset and get the data of interest.
In my case, I just apply some filters and use the -x option for output, like this:
tshark -r xxx.pcapng -j udp -x
The output looks something like this (illustrative only, not a real capture):
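0000 aa bb cc dd ee ff 11 22 33 44 55 66 08 00 45 00  ......."3DUf..E.
0010 00 28 00 01 40 00 40 11 7c b8 c0 a8 01 02 c0 a8  .(..@.@.|.......
0020 01 03 04 d2 16 2e 00 14 00 00 68 65 6c 6c 6f 20  ..........hello 
0030 77 6f 72 6c 64 21                                world!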
Every line contains three parts: the first column is an offset reference, the next 16 columns are the bytes in hex, and the remainder is the character representation of those bytes.
My code:
import subprocess

def load_tshark_data(tshark_file_path):
    tshark_exe = "c:/Program Files/Wireshark/tshark.exe"
    output = subprocess.check_output([
        tshark_exe,
        "-r", tshark_file_path,
        "-j", "udp",
        "-x"
    ])
    hex_buff = ""
    for line in output.splitlines():
        if len(line) > 54:
            # a hex dump line: keep only the 16 hex byte columns (chars 5..52)
            hex_buff += line[5:53]
        elif hex_buff:
            # a short (blank) line ends the dump of one frame;
            # fixed offsets: Ethernet (14) + IP (20) = 0x22, so the UDP
            # ports sit at bytes 0x22..0x25 and the payload starts at 0x2a
            src_port = int(hex_buff[0x22 * 3 : 0x24 * 3].replace(" ", ""), 16)
            dst_port = int(hex_buff[0x24 * 3 : 0x26 * 3].replace(" ", ""), 16)
            app_data = hex_buff[0x2a * 3 :].strip(" ")
            hex_buff = ""
            yield [src_port, dst_port, app_data]
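A hypothetical usage example (the file name is a placeholder):
for src_port, dst_port, app_data in load_tshark_data("xxx.pcapng"):
    print("%d -> %d : %s" % (src_port, dst_port, app_data))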
I hope this can help anyone else blocked by such a problem.

Is there a good way to tell if HandBrakeCLI actually encoded anything?

I'm working on a system to convert a bunch of .mov files to H.264 (using HandBrakeCLI) and webm (using ffmpeg) as the .mov files are created. In general, things are going very well. I'm hung up on a bit of error detection. I want to know if one of the encodings failed so that we can investigate, try again, etc.
To test encoding failure, I copied a text file into a file with a .mov extension and set both programs to work on it. Naturally, they both fail to encode the file (I'm not sure what "success" would mean in this context...). However, while ffmpeg reports this failure by setting its exit code to 1, HandBrakeCLI sets the exit code to 0, because it exited cleanly. This is consistent with the HandBrakeCLI documentation, but it leaves me wondering how I can tell whether HandBrakeCLI knows it failed to encode anything. That same documentation page suggests "If you want to monitor HandBrake's process, you should monitor the standard pipes", so I'm now getting the effect that I want by doing something like this:
HandBrakeCLI --preset 'Normal' --input bad.mov --output out.mv4 2>&1 | grep 'Encode done'
grep then sets its exit code to 0 if it found a match, and 1 if it didn't. But this seems rather barbaric: for instance, the text "Encode done!" could change in a future release of HandBrake.
So, anyone have a better way to tell if HandBrake encoded something or not?
Some edited shell output is included below for reference...
$ ffmpeg -i 'develqueuedir/B_BH_120409.mov' 'develqueuedir/B_BH_120409.webm'
FFmpeg version 0.6.4-4:0.6.4-0ubuntu0.11.04.1, Copyright (c) 2000-2010 the Libav Developers
[snip]
develqueuedir/B_BH_120409.mov: Invalid data found when processing input
$ echo $?
1
$ HandBrakeCLI --preset 'Normal' --maxWidth 720 --optimize --input 'develqueuedir/B_BH_120409.mov' --output 'develqueuedir/B_BH_120409.mv4'
Output format couldn't be guessed from file name, using default.
[11:45:45] hb_init: starting libhb thread
HandBrake 0.9.6 (2012022900) - Linux x86_64 - http://handbrake.fr
Opening develqueuedir/B_BH_120409.mov...
[snip]
[11:45:45] libhb: scan thread found 0 valid title(s)
No title found.
HandBrake has exited.
$ echo $?
0
The short answer is no; you can find a detailed explanation on the HandBrake forum: https://forum.handbrake.fr/viewtopic.php?f=12&t=18559&p=85529&hilit=return+code#p85529
Addition:
I think there is a patch from user fonkprop that was rejected by the developers; if you really need it, contact him.
Good news! It appears that this feature is about to be implemented in HandBrake-CLI 0.10. As you can read on the roadmap for the 0.10 milestone:
Basic support for return codes from the CLI. (0 = No Error, 1 = Cancelled, 2 = Invalid Input, 3 = Initialization error, 4 = Unknown Error)
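Once that lands, a wrapper can rely on the exit status instead of scraping output. A sketch in Python, assuming HandBrakeCLI 0.10+ and placeholder file names:
import subprocess

# With return-code support, a nonzero exit status identifies the failure
# (codes as quoted from the roadmap above).
code = subprocess.call([
    "HandBrakeCLI", "--preset", "Normal",
    "--input", "in.mov", "--output", "out.mv4",
])
if code != 0:
    print("encode failed with exit code %d" % code)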
