How to convert a Uint8Array to an mp3-formatted file to be saved - aws-sdk-nodejs

I am running Node.js 6.11.2, not in a browser. I am trying to use the aws-sdk Polly speech synthesis service. The service returns a Uint8Array as the audio data; let's call it audioData. I wish to convert audioData to a properly formatted audio file in mp3 format so I can cache it locally, and then play the file on a Raspberry Pi. How does one convert the Uint8Array to a properly formatted mp3 file? I tried audioData.toString(), but the resulting file could not be played.

I don't know much about speech synthesis, but you could write the array data to standard out and pipe it to ffmpeg.
From stdout:
node script.js | ffmpeg -f s8 -i pipe:0 file.mp3
Or you could first write the array to a file:
ffmpeg -f s8 -i audioData file.mp3
Note that the default sample rate is 44100 Hz; if the audio from speech synthesis is different, use the -ar option to set the expected sample rate:
ffmpeg -f s8 -ar [samplerate] -i [input] file.mp3
Different audio types can be found at https://trac.ffmpeg.org/wiki/audio%20types
Replace s8 in the above commands with the appropriate audio type.
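For the Node side of either option, here is a minimal sketch (assuming audioData holds the Uint8Array from the Polly response, as in the question). The key point is to wrap it in a Buffer and write the raw bytes untouched; .toString() corrupts binary data. Note also that if the synthesizeSpeech request used OutputFormat: 'mp3', the bytes are already MP3-encoded and can be written straight to a .mp3 file with no ffmpeg step.
const fs = require('fs');

// Write the raw bytes untouched, never via .toString().
process.stdout.write(Buffer.from(audioData)); // stdout variant, feeds the ffmpeg pipe
// or:
fs.writeFileSync('audioData', Buffer.from(audioData)); // file variant, then run ffmpeg on it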

Related

How to Extract Audio from WebM File

I just want to extract the audio (Opus codec) from a WebM file.
I tried to research what the WebM format is and how to parse it, but I couldn't find good information.
I found that the WebM format derives from MKV; should I look at MKV first?
There is one GitHub project, but I can't figure out how to parse the audio out of a WebM file with it:
https://github.com/webmproject/libwebm/tree/master/webm_parser
You're really going to want MKVToolNix, which includes the mkvextract tool mentioned in another answer.
MKVToolNix is actually a suite of tools (mkvmerge, mkvinfo, mkvextract, mkvpropedit). You first asked how to parse the info. You can find the details using:
mkvinfo file.webm
mkvinfo file.webm -a
The first command parses the overall structure. The second gives the details of each frame. Use the --help switch if you want to see all the options.
To extract the audio, do
mkvextract file.webm tracks X:newfile.opus
where X is the track number that you identified as the one you want from mkvinfo earlier. WebM and MKV files can have multiple tracks. "newfile.opus" is the new file you want to create; choose any name you like.
There is also an MKVToolNix GUI, but I've never used it.
mkvextract can extract audio for you, and I recommend having a look at the mkvtoolnix source code.
For example, you can extract audio from a WebM file into an Ogg Opus file like this:
$ mkvextract ~/audio/bubbles.webm tracks 0:audio.opus
Extracting track 0 with the CodecID 'A_OPUS' to the file 'audio.opus'. Container format: Ogg (Opus in Ogg)
Progress: 100%

How do I use the CLI interface of FFmpeg from a static build?

I have added this (https://github.com/kewlbear/FFmpeg-iOS-build-script) version of ffmpeg to my project. I can't see the entry point to the library in the headers included.
How do I get access to the same text-command-based system that the stand-alone application has, or an equivalent?
I would also be happy if someone could point me toward documentation that allows you to use FFmpeg without the command-line interface.
This is what I am trying to execute (I have it working on Windows and Android using the CLI version of ffmpeg):
ffmpeg -framerate 30 -i snap%03d.jpg -itsoffset 00:00:03.23333 -itsoffset 00:00:05 -i soundEffect.WAV -c:v libx264 -vf fps=30 -pix_fmt yuv420p result.mp4
Actually, you can build the ffmpeg library so that it includes the ffmpeg binary's own code (ffmpeg.c). The only thing to take care of is renaming its function main(int argc, char **argv) to, for example, ffmpeg_main(int argc, char **argv); then you can call it with arguments just as if you were executing the ffmpeg binary. Note that argv[0] should contain the program name; just "ffmpeg" should work.
The same approach was used in the library VideoKit for Android.
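A rough sketch of the call (this assumes ffmpeg.c is compiled into the project with its main() renamed as described above; the argument list is shortened from the question's command):
int ffmpeg_main(int argc, char **argv); /* the renamed main() from ffmpeg.c */

static void encode_snaps(void)
{
    char *args[] = {
        "ffmpeg",                 /* argv[0]: the program name */
        "-framerate", "30",
        "-i", "snap%03d.jpg",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        "result.mp4",
        NULL
    };
    /* argc is the element count minus the terminating NULL */
    ffmpeg_main(sizeof(args) / sizeof(args[0]) - 1, args);
}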
To do what you want, you have to use your compiled FFmpeg library in your code.
What you are looking for is exactly the code provided in the FFmpeg documentation: libavformat/output-example.c (that is, the AVFormat and AVCodec FFmpeg libraries in general).
Stack Overflow is not a "do it for me, please" platform, so I prefer to explain here what you have to do, and I will try to be precise and answer all your questions.
I assume that you already know how to link your compiled (static or shared) library to your Xcode project; that is not the topic here.
So, let's talk about this code. It creates a video (containing randomly generated video and audio streams) based on a duration. You want to create a video based on a list of pictures and a sound file. Perfect; there are only three main modifications you have to make:
The end condition is not reaching a duration but reaching the end of your file list (the code already has a #define STREAM_NB_FRAMES you can use to iterate over all your frames).
Replace the dummy void fill_yuv_image with your own method that loads and decodes an image buffer from a file.
Replace the dummy void write_audio_frame with your own method that loads and decodes the audio buffer from your file.
(You can find a "how to load audio file content" example in the documentation starting at line 271, easily adaptable for video content per the documentation.)
Comparing this code to your CLI command, you can see that:
const char *filename; in the main should be your output file, "result.mp4".
#define STREAM_FRAME_RATE 25 (replace it with 30).
For MP4 generation, video frames are encoded in H.264 by default (in this code, the GOP is 12), so there is no need to specify libx264.
#define STREAM_PIX_FMT PIX_FMT_YUV420P represents your desired yuv420p pixel format.
Now, with these official examples and the related documentation, you can achieve what you want. Be careful: there are some differences between the FFmpeg version used in these examples and the current FFmpeg version. For example:
st = av_new_stream(oc, 1); // line 60
Could be replaced by:
st = avformat_new_stream(oc, NULL);
st->id = 1;
Or:
if (avcodec_open(c, codec) < 0) { // line 97
Could be replaced by:
if (avcodec_open2(c, codec, NULL) < 0) {
Or again:
dump_format(oc, 0, filename, 1); // line 483
Could be replaced by:
av_dump_format(oc, 0, filename, 1);
Or CODEC_ID_NONE with AV_CODEC_ID_NONE, etc.
Ask your questions, but you've got all the keys! :)
MobileFFMpeg is an easy-to-use pod for this purpose. Instructions on how to use MobileFFMpeg are at: https://stackoverflow.com/a/59325680/1466453
MobileFFMpeg gives you a very simple way to translate ffmpeg commands into your iOS Objective-C program.
Virtually all ffmpeg commands and switches are supported; however, you have to get the pod with the appropriate license. For example, min-gpl will not give you the features of libiconv; libiconv is covered by the video, gpl and full-gpl licenses.
Please highlight if you have specific issues regarding the use of MobileFFMpeg.

Error opening video stream or file?

I am trying to read the following video, downloaded from http://www.sample-videos.com/,
which is http://www.sample-videos.com/video/mp4/720/big_buck_bunny_720p_5mb.mp4
Here is my code:
import cv2

cap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
if cap.isOpened() == False:
    print("Error opening video stream or file")

count = 0
while cap.isOpened():
    # Capture frame by frame
    ret, frame = cap.read()
    if ret == True:
        # Display the resulting frame and save it as a JPEG
        cv2.imshow('Frame', frame)
        cv2.waitKey(1)  # give imshow a chance to render the window
        cv2.imwrite("frame%d.jpg" % count, frame)
        count += 1
        print(count)
    else:
        break

cap.release()
However, I get "Error opening video stream or file" at cap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4'),
and ret is always False.
My OpenCV version is 3.1.0
One of the following may fix the issue on your machine:
configure the video path
check the permission to access the file
install an additional codec
You might have installed OpenCV, but there are some prerequisites that need to be installed before you can read a .mp4 video file with OpenCV.
You can verify that by simply reading an .avi file and a .mp4 file
(it may read the .avi file but not the .mp4 file).
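A quick sketch of that check from Python (the file names here are placeholders); cv2.getBuildInformation() also reports whether your build includes FFmpeg:
import cv2

# Try one file of each container type (hypothetical sample files).
for name in ('sample.avi', 'sample.mp4'):
    cap = cv2.VideoCapture(name)
    print(name, 'opened:', cap.isOpened())
    cap.release()

# Look for "FFMPEG: YES" in the Video I/O section of the report.
print(cv2.getBuildInformation())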
To read a .mp4 file, install an ffmpeg package compiled with the H.264 codec:
H.264/MPEG-4 Part 10, or AVC (Advanced Video Coding), is a standard for video compression and is currently one of the most commonly used formats for the recording, compression, and distribution of high-definition video.
Ref : https://www.debiantutorials.com/how-to-install-ffmpeg-with-h-264mpeg-4-avc/
A few suggestions to make sure all prerequisites are available:
1. Check that an ffmpeg package compiled with H.264 is already installed on the machine using the command below:
ffmpeg -version
2. Installing OpenCV through Anaconda reduces the hassle of installing an ffmpeg package compiled with H.264.
3. Make sure that the user account on the machine has enough privileges to read and write in the relevant application directories.
a. Check the read and write permissions using the commands below:
ls -ld <folder-path>
or
namei -mo <folder-path>
b. Alter the access rights based on the privileges the user needs (sudo access is needed; otherwise, ask an admin to alter the permissions),
e.g.: sudo chmod -R 740 <folder-path> [recursive rwx for user, r for group]

Recording and Duplicating a webcam stream with VLC

I'm trying to record a webcam, save the recording, and stream it to a local network.
The problem is, I want to do this with two different levels of compression:
the stream for the local network should use less than 400 kbit/s, but the other copy, which is stored to a local file, should be uncompressed or use up to 10 Mbit/s.
So I tried two methods to solve this:
First I played around with the VLC GUI a bit. It is really easy to record the webcam, then transcode it and save it to a file and/or stream it to the internet. The command line looks like this:
vlc v4l2:///dev/video0 :v4l2-standard= :live-caching=300 :sout="#transcode{vcodec=WMV2,vb=380,fps=1,scale=Automatisch,acodec=none}:duplicate{dst=file{dst=stream.asf,no-overwrite},dst=http{dst=:8080/stream.wmv}}" :sout-keep
But I had the problem that both the internet stream and the file were compressed. So I changed the order of "duplicate" and "transcode" to:
vlc v4l2:///dev/video0 :v4l2-standard= :live-caching=300 :sout="#duplicate{dst=file{dst=stream.asf,no-overwrite}, dst="transcode{vcodec=WMV2,vb=380,fps=1,scale=Automatisch,acodec=none}:http{dst=:8080/stream.wmv}"}" :sout-keep
My thought: now I should have a compressed internet stream and the original file. But it doesn't stream anything to the internet.
So I tried another method: I wanted to stream the original stream to port 8080 and then use two other VLC instances to generate a compressed network stream on port 8008 and an original file. But I can't stream a stream...
So I would be really thankful if someone has another idea or a hint about where my problem is.
Sorry for my English.
Have a nice day.
You are double-quoting the :sout value. If you plan to use double quotes (") inside the value, then use apostrophes (') to enclose the whole argument, like:
:sout='#duplicate{dst=file{...}, dst="transcode{...}:http{dst=:8080/stream.wmv}"}'
If you add a -v (verbose output) at the end of your command, you'll see some other issues too, like no-overwrite not being recognized. Also, scale=Automatisch should be scale=auto.
Please note that I checked just the syntax and not your encoding parameters.
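Putting those fixes together, the second command from the question would become something like this (an untested sketch: apostrophes around the :sout value, scale=auto, and no-overwrite dropped):
vlc v4l2:///dev/video0 :v4l2-standard= :live-caching=300 :sout='#duplicate{dst=file{dst=stream.asf},dst="transcode{vcodec=WMV2,vb=380,fps=1,scale=auto,acodec=none}:http{dst=:8080/stream.wmv}"}' :sout-keep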

Play file with VLC using path from clipboard

So my scenario is that I've copied an HTTP link (that I want to stream with the VLC player) to the clipboard. I would like to write a simple script that plays the file located at the path on the clipboard. I've already tried
pbpaste | VLC -
pbpaste outputs the contents of the clipboard to stdout, and "VLC -" attempts to play what's on stdout, so I was hoping VLC would pick up the path, read it, and then fetch the file to play. Apparently, though, it expects an actual byte stream when you pipe things to it, not a string file path. I've tried something similar on Windows, which also failed, so I don't think this is OS-specific.
Any thoughts?
Thanks,
sh4d0w
Try this:
LOC=$(pbpaste); vlc -vvv $LOC
It should work as long as you've copied the "http://" part as well. In fact, it will work for any string, as described in this manual chapter.
This works nowadays for the URLs that I've been using:
$url = Get-Clipboard; Start-Process -FilePath "vlc" -ArgumentList $url,-f,vlc://quit
