How to use VLC to combine multiple videos with different encodings into one complete video

I tried to merge video files using the example from the wiki:
https://wiki.videolan.org/VLC_HowTo/Merge_videos_together/
/Applications/VLC.app/Contents/MacOS/VLC -vv a.flv b.mkv c.mp4 \
--sout-keep \
--sout '#gather:transcode{vcodec="h264",vb="1024",acodec="aac",ab="128"}:standard{mux="mp4",dst="test.mp4",access=file}' vlc://quit
But I found that if the input videos are not identical, the transcoding fails.
Terminal error:
[0000000131779030] main encoder debug: using encoder module "x264"
[000000013170b910] main mux error: cannot add a new stream (unsupported while muxing to this format). You can try increasing sout-mux-caching value
[000000013171dd80] stream_out_transcode stream out error: cannot add this stream
[0000000132919090] main decoder error: cannot continue streaming due to errors with codec h264
[000000013175c4d0] videotoolbox generic: Raising max DPB to 3
2022-12-17 12:28:25.577 VLC[12119:275251] Can't find app with identifier com.spotify.client
[0000000131707660] macosx interface debug: Continue to use IOKit assertion NoIdleSleepAssertion (39363)
[0000000132904f70] main stream output debug: adding a new sout input for `subt` (sout_input: 0x600001af0c40)
[0000000131721000] gather stream out debug: creating new output
[000000013171dd80] stream_out_transcode stream out debug: not transcoding a stream (fcc=`subt')
[000000013170b910] main mux error: cannot add a new stream (unsupported while muxing to this format). You can try increasing sout-mux-caching value
[0000000132904f70] main stream output warning: new sout input failed (sout_input: 0x600001af0c40)
[0000000132919850] main decoder error: cannot create packetizer output (subt)
[0000000132904f70] main stream output debug: adding a new sout input for `subt` (sout_input: 0x600001af0bd0)
[0000000131721000] gather stream out debug: creating new output
[000000013171dd80] stream_out_transcode stream out debug: not transcoding a stream (fcc=`subt')
[000000013170b910] main mux error: cannot add a new stream (unsupported while muxing to this format). You can try increasing sout-mux-caching value
[0000000132904f70] main stream output warning: new sout input failed (sout_input: 0x600001af0bd0)
[0000000132919c30] main decoder error: cannot create packetizer output (subt)
Merging the video files still fails even after re-encoding. How can I use VLC to merge media files regardless of their original encoding?
I'm truly grateful for any help.
I suspected the encoding parameters were inconsistent, so I set every configurable transcode parameter to the same values for all inputs, but the problem remains.
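A workaround that matches the log above (the MP4 muxer refuses to add new streams, and the subt track cannot be transcoded) is to normalise every input to an identical intermediate first and only then gather. This is a sketch, not a verified recipe: --no-sout-spu drops subtitle tracks so they never reach the muxer, and the channels/samplerate values and intermediate file names are my own assumptions.
# Step 1: transcode each input separately into identical H.264/AAC MPEG-TS files
for f in a.flv b.mkv c.mp4; do
  /Applications/VLC.app/Contents/MacOS/VLC -I dummy "$f" --no-sout-spu \
    --sout "#transcode{vcodec=h264,vb=1024,acodec=aac,ab=128,channels=2,samplerate=44100}:standard{mux=ts,dst=${f%.*}.ts,access=file}" \
    vlc://quit
done
# Step 2: now that the streams are uniform, gather them into one file
/Applications/VLC.app/Contents/MacOS/VLC -I dummy a.ts b.ts c.ts --sout-keep \
  --sout '#gather:standard{mux=mp4,dst=merged.mp4,access=file}' vlc://quit
If the MP4 muxer still complains in step 2, mux=ts with a .ts destination is more forgiving about appended streams.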

Related

tf hub.Module can't load ELMo; it used to work normally

When I run the following code in a Jupyter Notebook, it used to work normally, but suddenly it started raising an error.
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub  # needed for hub.Module below

tf.disable_v2_behavior()
tf.disable_eager_execution()

# Load the pre-trained ELMo model
elmo = hub.Module("https://tfhub.dev/google/elmo/1", trainable=True)
The error is:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc1 in position 149: invalid start byte
I ran into the same error before, and changing the ELMo version fixed it then, but this time that doesn't help either.
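If changing the module version no longer helps, one thing worth trying (a sketch, assuming the failure comes from a corrupted module cache, which is a common cause of this UnicodeDecodeError) is to wipe the TF Hub cache so the module is re-downloaded:
import os
import shutil
import tempfile

# TF Hub caches downloaded modules under $TFHUB_CACHE_DIR if set, otherwise
# under a "tfhub_modules" folder in the system temp directory. A truncated
# or corrupted cached copy makes hub.Module() choke while parsing it.
cache_dir = os.environ.get(
    "TFHUB_CACHE_DIR",
    os.path.join(tempfile.gettempdir(), "tfhub_modules"),
)
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)  # the next hub.Module(...) call downloads a fresh copy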

Reading an avro file with embedded schema from command line

I am trying to read an Avro file with embedded schema using the following command:
avro-tools tojson data.avro
I am getting the following exception though:
22/11/08 14:34:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.avro.AvroRuntimeException: java.io.EOFException
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:238)
at org.apache.avro.tool.DataFileReadTool.run(DataFileReadTool.java:98)
at org.apache.avro.tool.Main.run(Main.java:67)
at org.apache.avro.tool.Main.main(Main.java:56)
Caused by: java.io.EOFException
at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:542)
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:173)
at org.apache.avro.io.BinaryDecoder.readBytes(BinaryDecoder.java:332)
at org.apache.avro.io.ResolvingDecoder.readBytes(ResolvingDecoder.java:242)
at org.apache.avro.generic.GenericDatumReader.readBytes(GenericDatumReader.java:544)
at org.apache.avro.generic.GenericDatumReader.readBytes(GenericDatumReader.java:535)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:194)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:260)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:248)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:180)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:161)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:154)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:251)
at org.apache.avro.file.DataFileStream.next(DataFileStream.java:236)
... 3 more
The file is supposed to have the schema embedded and one object inside.
Conduktor is able to read the file, but avro-tools isn't.
The file was generated using the following code:
val outputStream = ByteArrayOutputStream()
val writer = AvroDataOutputStream<GenericRecord>(outputStream, { it }, data.schema, CodecFactory.nullCodec())
writer.write(data)
writer.flush()
writer.close()
return outputStream.toByteArray()
How can I view this file from the command line?
It would be nice to suppress or fix the avro-tools warning as well.
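Two command-line checks may help here, sketched under the assumption that the stock avro-tools jar is on the path: getschema reads only the file header, so it separates "bad record body" from "truncated container", and the native-Hadoop warning goes to stderr, so it can simply be discarded.
# Dump just the embedded schema; if even this fails with EOFException,
# the container itself is truncated, not merely a record inside it.
avro-tools getschema data.avro

# The NativeCodeLoader warning is harmless noise on stderr; discarding
# stderr silences it (crude: it hides real errors too).
avro-tools tojson data.avro 2>/dev/null
Given that the stack trace dies inside readBytes, a truncated byte field is a plausible suspect, e.g. if fewer bytes reached the file than the writer produced; the commands above at least narrow down where the file goes bad.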

TensorFlow.js (TFJS) error: The dtype of dict['image_tensor'] must be int32

I tried to run https://glitch.com/~tar-understood-exoplanet,
but the model fails to load and I can't enable the webcam.
Has anyone had the same issue?
While the program is running, in the console I get the following:
tfjs:2 Uncaught (in promise) Error: The dtype of dict['image_tensor'] provided in model.execute(dict) must be int32, but was float32
at Object.b [as assert] (tfjs:2)
at tfjs:2
at Array.forEach (<anonymous>)
at t.checkInputShapeAndType (tfjs:2)
at t.<anonymous> (tfjs:2)
at tfjs:2
at Object.next (tfjs:2)
at tfjs:2
at new Promise (<anonymous>)
at Zv (tfjs:2)
I'm on a MacBook Pro, and some people on Windows also had issues running the model. We also tried different browsers, Safari and Chrome.
SUCCESS! After switching to coco-ssd 2.0.2:
I added version 2.0.2 on line 62 as follows:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd@2.0.2"></script>
This is caused by the warmup run of coco-ssd, which uses a tf.zeros tensor. The default dtype for tf.zeros is 'float' in the recent TFJS release.
I have put out a new version with the fix. It should work if you use the latest version of coco-ssd (2.0.2) in the glitch example (index.html), as follows:
<!-- Load the coco-ssd model to use to recognize things in images -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd@2.0.2"></script>
Same error here; it just started occurring Friday night (04/03/2020).
The TF model worked well over the past few weeks.
I got the same error.
My scenario:
I trained a pre-trained model from the TensorFlow model zoo with transfer learning via the TensorFlow API, exported it as a SavedModel (model.pb file), and converted it to TFJS format (model.json plus sharded .bin files).
When I ran this model.json in JavaScript (on the web), it gave the error below:
Uncaught (in promise) Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32
When I tried someone else's working converted model (model.json and sharded .bin files) in my JavaScript, it worked.
Conclusion:
Something is wrong with my converted model. I converted it with tensorflowjs_converter; the original model.pb also works accurately in Python.
I'm still trying to convert my model.pb with different tensorflowjs_converter versions, as it appears to be a converter versioning issue.
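Independently of the converter version, the error itself can often be worked around on the calling side by making the input dtype explicit. A minimal sketch, assuming a loaded GraphModel and the input name from the error message ('image_tensor'; yours may be 'input_tensor'); videoElement is a placeholder for whatever pixel source the page uses:
// tf.browser.fromPixels yields int32 values; the explicit cast guards
// against upstream preprocessing having produced float32.
const raw = tf.browser.fromPixels(videoElement);
const input = tf.cast(tf.expandDims(raw, 0), 'int32');
const output = await model.executeAsync({image_tensor: input});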

"NV_ENC_ERR_INVALID_VERSION" error while using nvenc encoder

I am using the CUDA NVENC encoder to encode a YUV frame, and I want to stream the encoded H.264 data over RTSP. For that I need the SPS/PPS buffer, which I retrieve with "nvEncGetSequenceParams", called after "nvEncInitializeEncoder" as expected. I am getting the "NV_ENC_ERR_INVALID_VERSION" error, which means I am passing the wrong struct to this function, but I have checked multiple times and the struct I pass looks correct. I thought it could be a driver version problem; I have a Quadro K5000 GPU and have tried driver versions 331.82 and 337.88. The following is the code I am using.
NVENCSTATUS CNvEncoderH264::GetSPSPPSBUffer(char *SPSPPSBuffer)
{
    NVENCSTATUS nvSta = NV_ENC_SUCCESS;
    uint32_t size = 0;

    // m_spspps is of type NV_ENC_SEQUENCE_PARAM_PAYLOAD
    m_spspps.inBufferSize = 512;
    m_spspps.outSPSPPSPayloadSize = &size;
    SET_VER(m_spspps, NV_ENC_INITIALIZE_PARAMS);
    m_spspps.spsppsBuffer = SPSPPSBuffer;

    nvSta = m_pEncodeAPI->nvEncGetSequenceParams(m_hEncoder, &m_spspps);
    return nvSta;
}
You are setting the wrong version macro on the SPS/PPS structure. I don't have my NVIDIA code at hand, so I'll try to Google the right macro, but the rule of thumb is that each structure has its own specific version macro (and you are using NV_ENC_INITIALIZE_PARAMS for the SPS/PPS structure, which is definitely not right). I assume the type of m_spspps is NV_ENC_SEQUENCE_PARAM_PAYLOAD, so you should initialize it like this:
m_spspps.version = NV_ENC_SEQUENCE_PARAM_PAYLOAD_VER;
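Put together, a corrected version of the question's function would look like this (a sketch reusing the question's member names; the memset is my addition, following the usual NVENC pattern of zeroing a struct before setting its version):
NVENCSTATUS CNvEncoderH264::GetSPSPPSBUffer(char *SPSPPSBuffer)
{
    uint32_t size = 0;

    memset(&m_spspps, 0, sizeof(m_spspps));
    // Each NVENC struct has its own _VER macro; passing another struct's
    // macro is exactly what produces NV_ENC_ERR_INVALID_VERSION.
    m_spspps.version              = NV_ENC_SEQUENCE_PARAM_PAYLOAD_VER;
    m_spspps.inBufferSize         = 512;
    m_spspps.spsppsBuffer         = SPSPPSBuffer;
    m_spspps.outSPSPPSPayloadSize = &size;

    return m_pEncodeAPI->nvEncGetSequenceParams(m_hEncoder, &m_spspps);
}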

Informatica XML read error

In my mapping we are using XML files as our source. Our issue is that when we try to execute the mapping with large XML files (i.e. files larger than 300 MB) we get an error. The error message is:
'Error [Invalid Document Structure] occurred while parsing: [FATAL: Error at line 1, char 1]'
We have successfully executed the mapping with smaller files (size < 300 MB).
Is there any setting that can be changed to process such large files? If not, is there any workaround?
