How to make a video from sequence images with OpenCV

I am working on a small project that extracts video frames and then reassembles them into a video.
How can I turn the sequence of images back into a video?
Here is the part of my code that extracts the frames.
if (n_frame % 3 == 0)
{
    // Save an image
    sprintf(filename, "frame%.3d.jpg", n_save++);
    imwrite(filename, frame);
    cout << "save: " << filename << endl;
}
I named my images frame000.jpg, frame001.jpg, frame002.jpg, and so on.
I am using OpenCV 2.4.11.
Thanks a lot!

You can use FFmpegFrameRecorder (from JavaCV):
String path = Environment.getExternalStorageDirectory().getPath() + "/Video_images";
File folder = new File(path);
File[] listOfFiles = folder.listFiles();
if (listOfFiles.length > 0) {
    iplimage = new opencv_core.IplImage[listOfFiles.length];
    for (int j = 0; j < listOfFiles.length; j++) {
        String files = "";
        if (listOfFiles[j].isFile()) {
            files = listOfFiles[j].getName();
            System.out.println(" j " + j + listOfFiles[j]);
        }
        // strip the extension from the file name
        String[] tokens = files.split("\\.(?=[^\\.]+$)");
        String name = tokens[0];
        iplimage[j] = cvLoadImage(path + "/" + name + ".jpg");
    }
}
recorder = new FFmpegFrameRecorder(Constn.SS, 480, 480); // Constn.SS is the output file path
try {
    recorder.setVideoCodec(13);    // 13 = CODEC_ID_MPEG4 in the FFmpeg enum
    recorder.setFrameRate(0.4d);   // 0.4 fps = one image every 2.5 seconds
    recorder.setPixelFormat(0);    // 0 = PIX_FMT_YUV420P
    recorder.setVideoQuality(1.0d);
    recorder.setVideoBitrate(4000);
    startTime = System.currentTimeMillis();
    recorder.start();
    for (int i = 0; i < iplimage.length; i++) {
        long t = 1000 * (System.currentTimeMillis() - startTime); // timestamps are in microseconds
        if (t < recorder.getTimestamp()) {
            t = recorder.getTimestamp() + 1000;
        }
        recorder.setTimestamp(t);
        recorder.record(iplimage[i]);
    }
    recorder.stop();    // finalize the file; without this the video may be unplayable
    recorder.release();
} catch (Exception e) {
    e.printStackTrace();
}

You need VideoWriter - http://docs.opencv.org/trunk/dd/d9e/classcv_1_1VideoWriter.html
Once you construct it with the desired file type and path, you feed it Mat objects containing frames via the << operator, i.e.:
cv::Mat frame = cv::imread("somePicture.png");
cv::VideoWriter writer("out.avi", cv::VideoWriter::fourcc('M','J','P','G'), 24, frame.size());
writer << frame;
writer.release();
The code above reads a frame from a file and feeds it into a video file with 24 fps, MJPG encoding, and an AVI container; release() then closes the writer. (On OpenCV 2.4.x, which the question uses, the fourcc helper is the CV_FOURCC macro instead of VideoWriter::fourcc.)
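The same class is also exposed by OpenCV's own Java bindings, which may be handier if you are working from Java as in the first answer. Below is a minimal sketch, assuming the OpenCV 3.x Java API (org.opencv.videoio) and the asker's frame%03d.jpg naming; out.avi and the 24 fps rate are placeholders:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoWriter;

public class FramesToVideo {
    public static void main(String[] args) {
        // the native OpenCV library must be on java.library.path
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoWriter writer = null;
        for (int i = 0; ; i++) {
            Mat frame = Imgcodecs.imread(String.format("frame%03d.jpg", i));
            if (frame.empty()) {
                break; // no more numbered frames on disk
            }
            if (writer == null) {
                // open the writer lazily so the frame size comes from the first image
                writer = new VideoWriter("out.avi",
                        VideoWriter.fourcc('M', 'J', 'P', 'G'),
                        24, new Size(frame.cols(), frame.rows()));
            }
            writer.write(frame);
        }
        if (writer != null) {
            writer.release();
        }
    }
}

Opening the writer lazily avoids hard-coding dimensions that may not match the extracted frames; every frame fed to the writer must have the size the writer was opened with.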

Related

JavaCV Display image in color from video capture

I have trouble displaying the image from the camera. I am using VideoCapture; when I display the image in grayscale it works perfectly, but when I display it in color I get something like the linked screenshot.
Part of my source code:
public void CaptureVideo()
{
    VideoCapture videoCapture = new VideoCapture(0);
    Mat frame = new Mat();
    while (videoCapture.isOpened() && _canWorking)
    {
        videoCapture.read(frame);
        if (!frame.empty())
        {
            Image img = MatToImage(frame);
            videoView.setImage(img);
        }
        try { Thread.sleep(33); } catch (InterruptedException e) { e.printStackTrace(); }
    }
    videoCapture.release();
}

private Image MatToImage(Mat original)
{
    BufferedImage image = null;
    int width = original.size().width(), height = original.size().height(), channels = original.channels();
    byte[] sourcePixels = MatToBytes(original, width, height, channels);
    if (original.channels() > 1)
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
    }
    else
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    }
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(sourcePixels, 0, targetPixels, 0, sourcePixels.length);
    return SwingFXUtils.toFXImage(image, null);
}

private byte[] MatToBytes(Mat mat, int width, int height, int channels)
{
    byte[] output = new byte[width * height * channels];
    UByteRawIndexer indexer = mat.createIndexer();
    int i = 0;
    for (int j = 0; j < mat.rows(); j++)
    {
        for (int k = 0; k < mat.cols(); k++)
        {
            output[i] = (byte) indexer.get(j, k);
            i++;
        }
    }
    return output;
}
Can anyone tell me what I am doing wrong? I'm new to image processing and I don't understand why it's not working.
OK, I resolved this. The bug was in MatToBytes: it copied only one byte per pixel, dropping the remaining colour channels, so the indexer has to be read per channel as well.
Solution:
byte[] output = new byte[mat.size().width() * mat.size().height() * mat.channels()];
UByteRawIndexer indexer = mat.createIndexer();
int index = 0;
for (int i = 0; i < mat.rows(); i++)
{
    for (int j = 0; j < mat.cols(); j++)
    {
        for (int k = 0; k < mat.channels(); k++)
        {
            output[index] = (byte) indexer.get(i, j, k);
            index++;
        }
    }
}
return output;

AudioConverter#FillComplexBuffer returns -50 and does not convert anything

I'm closely following this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.
The sample works great, but in my project the FillComplexBuffer method returns error -50 and the InputData event is never triggered, so nothing is converted.
The error only appears when testing on a device; on the emulator everything works and I get a properly encoded AAC file at the end.
I tried a lot of things today, and I don't see any difference between my code and the sample code. Do you have any idea where this may come from?
I don't know if this is in any way related to Xamarin; it doesn't seem to be, since the Xamarin sample works fine.
Here's the relevant part of my code:
protected void Encode(string path)
{
    // In class setup. File at TempWavFilePath has DecodedFormat as format.
    //
    // DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
    // AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
    // {
    //     Format = AudioFormatType.MPEG4AAC,
    //     SampleRate = DecodedFormat.SampleRate,
    //     ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
    // };
    // AudioStreamBasicDescription.GetFormatInfo (ref encodedFormat);
    // EncodedFormat = encodedFormat;

    // Setup converter
    AudioStreamBasicDescription inputFormat = DecodedFormat;
    AudioStreamBasicDescription outputFormat = EncodedFormat;

    AudioConverterError converterCreateError;
    AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
    if (converterCreateError != AudioConverterError.None)
    {
        Console.WriteLine("Converter creation error: " + converterCreateError);
    }
    converter.EncodeBitRate = 192000; // AAC 192kbps

    // get the actual formats back from the Audio Converter
    inputFormat = converter.CurrentInputStreamDescription;
    outputFormat = converter.CurrentOutputStreamDescription;

    /*** INPUT ***/

    AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));

    // init buffer
    const int inputBufferBytesSize = 32768;
    IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);

    // calc number of packets per read
    int inputSizePerPacket = inputFormat.BytesPerPacket;
    int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;
    AudioStreamPacketDescription[] inputPacketDescriptions = null;

    // init position
    long inputFilePosition = 0;

    // define input delegate
    converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
    {
        // how much to read
        if (numberDataPackets > inputBufferPacketSize)
        {
            numberDataPackets = inputBufferPacketSize;
        }

        // read from the file
        int outNumBytes;
        AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
        if (readError != 0)
        {
            Console.WriteLine("Read error: " + readError);
        }

        // advance input file packet position
        inputFilePosition += numberDataPackets;

        // put the data pointer into the buffer list
        data.SetData(0, inputBufferPtr, outNumBytes);

        // add packet descriptions if required
        if (dataPacketDescription != null)
        {
            if (inputPacketDescriptions != null)
            {
                dataPacketDescription = inputPacketDescriptions;
            }
            else
            {
                dataPacketDescription = null;
            }
        }

        return AudioConverterError.None;
    };

    /*** OUTPUT ***/

    // create the destination file
    var outputFile = AudioFile.Create(NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFlags);

    // init buffer
    const int outputBufferBytesSize = 32768;
    IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
    AudioBuffers buffers = new AudioBuffers(1);

    // calc number of packet per write
    int outputSizePerPacket = outputFormat.BytesPerPacket;
    AudioStreamPacketDescription[] outputPacketDescriptions = null;

    if (outputSizePerPacket == 0)
    {
        // if the destination format is VBR, we need to get max size per packet from the converter
        outputSizePerPacket = (int)converter.MaximumOutputPacketSize;

        // allocate memory for the PacketDescription structures describing the layout of each packet
        outputPacketDescriptions = new AudioStreamPacketDescription[outputBufferBytesSize / outputSizePerPacket];
    }
    int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;

    // init position
    long outputFilePosition = 0;

    long totalOutputFrames = 0; // used for debugging

    // write magic cookie if necessary
    if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
    {
        outputFile.MagicCookie = converter.CompressionMagicCookie;
    }

    // loop to convert data
    Console.WriteLine("Converting...");
    while (true)
    {
        // create buffer
        buffers[0] = new AudioBuffer()
        {
            NumberChannels = outputFormat.ChannelsPerFrame,
            DataByteSize = outputBufferBytesSize,
            Data = outputBufferPtr
        };

        int writtenPackets = outputBufferPacketSize;

        // LET'S CONVERT (it's about time...)
        AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
        if (converterFillError != AudioConverterError.None)
        {
            Console.WriteLine("FillComplexBuffer error: " + converterFillError);
        }

        if (writtenPackets == 0) // EOF
        {
            break;
        }

        // write to output file
        int inNumBytes = buffers[0].DataByteSize;

        AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
        if (writeError != 0)
        {
            Console.WriteLine("WritePackets error: {0}", writeError);
        }

        // advance output file packet position
        outputFilePosition += writtenPackets;

        if (FlowFormat.FramesPerPacket != 0)
        {
            // the format has constant frames per packet
            totalOutputFrames += (writtenPackets * FlowFormat.FramesPerPacket);
        }
        else
        {
            // variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
            for (var i = 0; i < writtenPackets; ++i)
            {
                totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
            }
        }
    }

    // write out any of the leading and trailing frames for compressed formats only
    if (outputFormat.BitsPerChannel == 0)
    {
        Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
        WritePacketTableInfo(converter, outputFile);
    }

    // write the cookie again - sometimes codecs will update cookies at the end of a conversion
    if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
    {
        outputFile.MagicCookie = converter.CompressionMagicCookie;
    }

    // Clean everything
    Marshal.FreeHGlobal(inputBufferPtr);
    Marshal.FreeHGlobal(outputBufferPtr);
    converter.Dispose();
    outputFile.Dispose();

    // Remove temp file
    File.Delete(TempWavFilePath);
}
I have already seen this SO question, but its brief C++/Obj-C answer doesn't seem to fit my problem.
Thanks!
I finally found the solution!
I just had to set the AVAudioSession category before converting the file.
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
AVAudioSession.SharedInstance().SetActive(true);
Since I also use an AudioQueue to render offline, I actually have to set the category to AVAudioSessionCategory.PlayAndRecord so that both the offline rendering and the audio conversion work.

OpenCv image unstable while running AES decryption

I am trying to capture video from a webcam using OpenCV and transmit it over TCP. In addition, I want to encrypt the video using AES. But whenever I run the AES decrypt function, the video becomes unstable.
I am using the OpenCV-over-TCP example and an AES example.
Whenever I run this function:
img->imageData = aes_decrypt(&de, img->imageData, &imgsize);
my video becomes unstable.
I have attached the code segment where I call the function.
/* start receiving images */
while (1)
{
    /* get raw data */
    for (i = 0; i < imgsize; i += bytes) {
        if ((bytes = recv(sock, sockdata + i, imgsize - i, 0)) == -1) {
            quit("recv failed", 1);
        }
    }

    pthread_mutex_lock(&mutex);
    for (i = 0, k = 0; i < img->height; i++) {
        for (j = 0; j < img->width; j++) {
            ((uchar*)(img->imageData + i * img->widthStep))[j] = sockdata[k++];
        }
    }
    img->imageData = aes_decrypt(&de, img->imageData, &imgsize);
    is_data_ready = 1;
    pthread_mutex_unlock(&mutex);

    /* have we terminated yet? */
    pthread_testcancel();

    /* no, take a rest for a while */
    usleep(1000);
}
This is my first post; sorry for my bad English and for the formatting.

Create a Bitmap from an Image

I have an Image object, a JPG picture taken by the camera, and I need to create a Bitmap from it.
Is there any way to do this besides using the BMPGenerator class? I'm working on a commercial project and I don't think I can use it because of the GPLv3 license.
So far this is the code I have. Can I do something with it?
FileConnection file = (FileConnection) Connector.open("file://" + imagePath, Connector.READ_WRITE);
InputStream is = file.openInputStream();
Image capturedImage = Image.createImage(is);
I also tried the following, but I wasn't able to get the correct file path and the image stays null:
EncodedImage image = EncodedImage.getEncodedImageResource(filePath);
byte[] array = image.getData();
capturedBitmap = image.getBitmap();
You can use videoControl.getSnapshot(null) and then Bitmap myBitmap = Bitmap.createBitmapFromBytes(raw, 0, raw.length, 1) to get a bitmap from the camera.
videoControl is obtained from player.getControl("VideoControl"), and player comes from Manager.createPlayer().
By the way, what kind of Image do you have? If it is an EncodedImage, you can just call getBitmap() on it.
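Put together, the flow looks roughly like this; a minimal sketch, assuming a standard MMAPI capture player on BlackBerry (in a real app the viewfinder must also be initialised with initDisplayMode and shown before taking a snapshot):

import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;
import net.rim.device.api.system.Bitmap;

private Bitmap snapshotFromCamera() throws Exception {
    Player player = Manager.createPlayer("capture://video");
    player.realize();
    VideoControl videoControl = (VideoControl) player.getControl("VideoControl");
    // NOTE: videoControl.initDisplayMode(...) and a visible viewfinder are assumed here
    player.start();
    byte[] raw = videoControl.getSnapshot(null); // null = default image type
    player.close();
    return Bitmap.createBitmapFromBytes(raw, 0, raw.length, 1);
}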
Fixed!
Well, almost.
I used the following method, but the image comes out rotated 90 degrees.
Going to fix that with this.
public Bitmap loadIconFromSDcard(String imgname) {
    FileConnection fcon = null;
    Bitmap icon = null;
    try {
        fcon = (FileConnection) Connector.open(imgname, Connector.READ);
        if (fcon.exists()) {
            byte[] content = new byte[(int) fcon.fileSize()];
            int readOffset = 0;
            int readBytes = 0;
            int bytesToRead = content.length - readOffset;
            InputStream is = fcon.openInputStream();
            while (bytesToRead > 0) {
                readBytes = is.read(content, readOffset, bytesToRead);
                if (readBytes < 0) {
                    break;
                }
                readOffset += readBytes;
                bytesToRead -= readBytes;
            }
            is.close();
            EncodedImage image = EncodedImage.createEncodedImage(content, 0, content.length);
            icon = image.getBitmap();
        }
    } catch (Exception e) {
    } finally {
        // Close the connections
        try { if (fcon != null) fcon.close(); }
        catch (Exception e) {}
    }
    return icon;
}

Clip sound from running sound clip

Does anybody know how to cut a clip out of a running sound file? I am working on a BlackBerry application; if anybody has sample code or a link, please share it.
Thanks
Regards
V Singh
It is possible to cut out part of a sound file, but you have to study the sound file format and deal with its binary structure.
No, I don't have sample code, but you can write it yourself after studying the format.
// Sample code to cut a sound file (AMR)
// capture these with player.getMediaTime() at the start and end of the region to cut
long startTime = player.getMediaTime();
long endTime = player.getMediaTime();

private void cutByTimeDuration(long startTime, long endTime) {
    byte[] byte1 = readDSoundFile("hello.amr"); // custom method; reads the original file
    // getMediaTime() is in microseconds; an AMR frame is 20 ms (20000 us) and,
    // in the 12.2 kbps mode, 32 bytes long; the file starts with a 6-byte "#!AMR\n" header
    int noFramesStart = (int) (startTime / 20000);
    long noBytesStart = (noFramesStart * 32) + 6;
    int noFramesEnd = (int) (endTime / 20000);
    long noBytesEnd = (noFramesEnd * 32) + 6;
    byte[] byte2 = new byte[(int) (noBytesEnd - noBytesStart + 6)];
    System.arraycopy(byte1, 0, byte2, 0, 6); // copy the AMR header
    System.arraycopy(byte1, (int) noBytesStart, byte2, 6,
            (int) (noBytesEnd - noBytesStart)); // copy the selected frames
    try {
        FileConnection file = (FileConnection) Connector.open(filePath
                + "/" + "xyz.amr", Connector.READ_WRITE);
        if (file.exists())
            file.delete();
        file.create();
        OutputStream out = file.openOutputStream();
        out.write(byte2, 0, byte2.length);
        out.flush();
        out.close();
        file.close();
    } catch (Exception e) {
    }
}
