Does anybody know how to cut a sound clip out of a sound file while it is playing? I am working on a BlackBerry application. If anybody has sample code or a link, please share it.
Thanks
Regards
V Singh
It is possible to cut out a part of a sound file, but you have to study the sound file format and work directly with its binary structure.
I don't have sample code, but you can write it yourself after studying the format.
// Sample code to cut a section out of an AMR sound file.
// Capture the start and end positions (in microseconds of media time)
// from the Player, e.g. on two separate button presses:
long startTime = player.getMediaTime(); // captured at the start of the cut
long endTime = player.getMediaTime();   // captured again at the end of the cut
private void cutByTimeDuration(long startTime, long endTime) {
    byte[] byte1 = readDSoundFile("hello.amr"); // custom method that reads the original file
    // AMR-NB at 12.2 kbps: each frame covers 20 ms (20000 us of media time)
    // and occupies 32 bytes; the file starts with the 6-byte "#!AMR\n" header.
    int noFramesStart = (int) (startTime / 20000);
    long noBytesStart = (noFramesStart * 32) + 6;
    int noFramesEnd = (int) (endTime / 20000);
    long noBytesEnd = (noFramesEnd * 32) + 6;
    byte[] byte2 = new byte[(int) (noBytesEnd - noBytesStart + 6)];
    System.arraycopy(byte1, 0, byte2, 0, 6); // copy the AMR header
    System.arraycopy(byte1, (int) noBytesStart, byte2, 6,
            (int) (noBytesEnd - noBytesStart)); // copy the selected frames
    try {
        FileConnection file = (FileConnection) Connector.open(filePath
                + "/" + "xyz.amr", Connector.READ_WRITE);
        if (file.exists())
            file.delete();
        file.create();
        OutputStream out = file.openOutputStream();
        out.write(byte2, 0, byte2.length);
        out.flush();
        out.close();
        file.close();
    } catch (Exception e) {
        e.printStackTrace(); // at minimum, log the failure
    }
}
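The readDSoundFile helper used above isn't shown in the original post; a minimal sketch of how it could read the whole file into a byte array over FileConnection (the SDCard path prefix is an assumption, adjust to where the file lives):

private byte[] readDSoundFile(String fileName) {
    byte[] data = null;
    try {
        FileConnection fc = (FileConnection) Connector.open(
                "file:///SDCard/" + fileName, Connector.READ);
        InputStream in = fc.openInputStream();
        data = new byte[(int) fc.fileSize()];
        // read() may return fewer bytes than requested, so loop until full
        int offset = 0;
        while (offset < data.length) {
            int read = in.read(data, offset, data.length - offset);
            if (read < 0) break;
            offset += read;
        }
        in.close();
        fc.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return data;
}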
I want to upload a video file to my cloud storage, but I want to check the size of the video file before I upload it. How can I achieve this?
If you have the path of the file, you can use dart:io
var file = File('the_path_to_the_video.mp4');
You can either use:
print(file.lengthSync());
or
print(await file.length());
Note: The size returned is in bytes.
// Function that formats the size in human-readable units.
// Needs: import 'dart:io'; and import 'dart:math'; (for log and pow)
Future<String> getFileSize(String filepath, int decimals) async {
  var file = File(filepath);
  int bytes = await file.length();
  if (bytes <= 0) return "0 B";
  const suffixes = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"];
  var i = (log(bytes) / log(1024)).floor();
  return ((bytes / pow(1024, i)).toStringAsFixed(decimals)) + ' ' + suffixes[i];
}
// How to call the function (await it, since it is async):
print(await getFileSize('file-path-will-be-here', 1));
// Output will be a string like:
// 97.5 KB
This worked for me. I hope it helps you as well. Thanks a lot for asking the question.
This works for me:
final file = File('pickedVid.mp4');
int sizeInBytes = file.lengthSync();
double sizeInMb = sizeInBytes / (1024 * 1024);
if (sizeInMb > 10) {
  // the file is larger than 10 MB
}
If you are using the latest ImagePicker, which returns an XFile, then
final XFile? photo = await _picker.pickImage(source: ImageSource.camera);
print(File(photo!.path).lengthSync());
will return the file size in bytes.
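Note that pickImage returns null when the user cancels the picker, so a guarded version avoids the ! operator; a minimal sketch:

final XFile? photo = await _picker.pickImage(source: ImageSource.camera);
if (photo != null) {
  // photo is promoted to non-null inside this block
  final sizeInBytes = await File(photo.path).length();
  print('Picked file is $sizeInBytes bytes');
}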
import 'dart:io';
// Method to get the file size in MB:
double getFileSize(File file) {
  int sizeInBytes = file.lengthSync();
  double sizeInMb = sizeInBytes / (1024 * 1024);
  return sizeInMb;
}
// Alternatively, as an extension on File:
extension FileUtils on File {
  double get size {
    int sizeInBytes = lengthSync();
    double sizeInMb = sizeInBytes / (1024 * 1024);
    return sizeInMb;
  }
}
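Usage of the extension then looks like this (the file name is illustrative):

final sizeInMb = File('picked_video.mp4').size;
print('${sizeInMb.toStringAsFixed(2)} MB');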
I am making a small project that extracts video frames and reassembles them into a video.
How do I turn the sequence of images back into a video?
Here is part of my frame-extraction code.
if (n_frame % 3 == 0)
{
//Save an image
sprintf(filename, "frame%.3d.jpg", n_save++);
imwrite(filename, frame);
cout << "save: " << filename << endl;
}
I named my images frame000, frame001, frame002, etc.
I am using OpenCV 2.4.11.
Thanks a lot!
You can use FFmpegFrameRecorder (from JavaCV):
String path = Environment.getExternalStorageDirectory().getPath() + "/Video_images";
File folder = new File(path);
File[] listOfFiles = folder.listFiles();
if (listOfFiles.length > 0) {
    iplimage = new opencv_core.IplImage[listOfFiles.length];
    for (int j = 0; j < listOfFiles.length; j++) {
        String files = "";
        if (listOfFiles[j].isFile()) {
            files = listOfFiles[j].getName();
            System.out.println(" j " + j + listOfFiles[j]);
        }
        // strip the extension from the file name
        String[] tokens = files.split("\\.(?=[^\\.]+$)");
        String name = tokens[0];
        iplimage[j] = cvLoadImage(path + "/" + name + ".jpg");
    }
    recorder = new FFmpegFrameRecorder(Constn.SS, 480, 480);
    try {
        recorder.setVideoCodec(13);   // 13 = AV_CODEC_ID_MPEG4
        recorder.setFrameRate(0.4d);  // one frame every 2.5 seconds
        recorder.setPixelFormat(0);   // 0 = AV_PIX_FMT_YUV420P
        recorder.setVideoQuality(1.0d);
        recorder.setVideoBitrate(4000);
        startTime = System.currentTimeMillis();
        recorder.start();
        for (int i = 0; i < iplimage.length; i++) {
            // keep recorder timestamps monotonically increasing
            long t = 1000 * (System.currentTimeMillis() - startTime);
            if (t < recorder.getTimestamp()) {
                t = recorder.getTimestamp() + 1000;
            }
            recorder.setTimestamp(t);
            recorder.record(iplimage[i]);
        }
        recorder.stop();
        recorder.release();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
You need VideoWriter: http://docs.opencv.org/trunk/dd/d9e/classcv_1_1VideoWriter.html
Once you construct it with the desired file type and path, you feed it Mat objects containing the frames using the << operator, i.e.:
cv::Mat frame = cv::imread("somePicture.png");
// In OpenCV 2.4.x use the CV_FOURCC macro (cv::VideoWriter::fourcc is 3.x only)
cv::VideoWriter writer("out.avi", CV_FOURCC('M','J','P','G'), 24, frame.size());
writer << frame;
writer.release();
The code above reads a frame from a file and feeds it into a video file with 24 fps, MJPG encoding, and an AVI container; the release() call then closes the writer.
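To write the whole frame000.jpg, frame001.jpg, ... sequence back out, one option is to let cv::VideoCapture read the numbered images (it accepts a printf-style pattern); a minimal sketch, assuming the frames sit in the working directory and 24 fps is the desired rate:

#include <opencv2/opencv.hpp>

int main()
{
    // "%03d" matches frame000.jpg, frame001.jpg, ... in the working directory
    cv::VideoCapture sequence("frame%03d.jpg");
    if (!sequence.isOpened())
        return 1;

    cv::Mat frame;
    sequence >> frame; // first frame, used to size the writer
    if (frame.empty())
        return 1;

    cv::VideoWriter writer("out.avi", CV_FOURCC('M', 'J', 'P', 'G'),
                           24, frame.size());
    while (!frame.empty())
    {
        writer << frame;
        sequence >> frame;
    }
    writer.release();
    return 0;
}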
I was trying to implement a function that stretches the playback speed of a sound without changing its pitch.
My approach is to set the channel's frequency to slow down or speed up playback, and then use FMOD_DSP_PITCHSHIFT to correct the pitch back to the default.
I built and tested the function using WAV sound files.
Now I'm trying to integrate the production resources, whose sound files are encoded as MP3, and the PITCHSHIFT DSP doesn't work on an MP3 sound channel. The console log looks fine, with no exceptions or errors.
The same project and settings all work fine in the iOS Simulator.
After some research and experiments, the results indicate that even m4a works fine on an iOS device.
I wonder: is this some kind of bug, or did I miss something in the configuration?
The sample code below is based on the FMOD "Play Stream" example.
/*==============================================================================
Play Stream Example
Copyright (c), Firelight Technologies Pty, Ltd 2004-2015.
This example shows how to simply play a stream such as an MP3 or WAV. The stream
behaviour is achieved by specifying FMOD_CREATESTREAM in the call to
System::createSound. This makes FMOD decode the file in realtime as it plays,
instead of loading it all at once which uses far less memory in exchange for a
small runtime CPU hit.
==============================================================================*/
#include "fmod.hpp"
#include "common.h"
int FMOD_Main()
{
FMOD::System *system;
FMOD::Sound *sound, *sound_to_play;
FMOD::Channel *channel = 0;
FMOD_RESULT result;
FMOD::DSP * pitch_shift;
unsigned int version;
void *extradriverdata = 0;
int numsubsounds;
Common_Init(&extradriverdata);
/*
Create a System object and initialize.
*/
result = FMOD::System_Create(&system);
ERRCHECK(result);
result = system->getVersion(&version);
ERRCHECK(result);
if (version < FMOD_VERSION)
{
Common_Fatal("FMOD lib version %08x doesn't match header version %08x", version, FMOD_VERSION);
}
result = system->init(32, FMOD_INIT_NORMAL, extradriverdata);
ERRCHECK(result);
result = system->createDSPByType(FMOD_DSP_TYPE_PITCHSHIFT, &pitch_shift);
ERRCHECK(result);
/*
The original example uses an FSB file, which is a preferred pack format for FMOD containing multiple sounds;
here it is exchanged for a single m4a/mp3/wav file, in which case getSubSound isn't needed.
Because getNumSubSounds is called here, the example works with both types of sound file (packed vs. single).
*/
result = system->createSound(Common_MediaPath("aaa.m4a"), FMOD_LOOP_NORMAL | FMOD_2D, 0, &sound);
ERRCHECK(result);
result = sound->getNumSubSounds(&numsubsounds);
ERRCHECK(result);
if (numsubsounds)
{
result = sound->getSubSound(0, &sound_to_play);
ERRCHECK(result);
}
else
{
sound_to_play = sound;
}
/*
Play the sound.
*/
result = system->playSound(sound_to_play, 0, false, &channel);
ERRCHECK(result);
result = channel->addDSP(0, pitch_shift);
ERRCHECK(result);
float pitch = 1.f;
result = pitch_shift->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);
ERRCHECK(result);
result = pitch_shift->setActive(true);
ERRCHECK(result);
float defaultFrequency;
result = channel->getFrequency(&defaultFrequency);
ERRCHECK(result);
/*
Main loop.
*/
do
{
Common_Update();
if (Common_BtnPress(BTN_ACTION1))
{
bool paused;
result = channel->getPaused(&paused);
ERRCHECK(result);
result = channel->setPaused(!paused);
ERRCHECK(result);
}
if (Common_BtnPress(BTN_DOWN)) {
/* valuestr/valuestrlen are optional outputs; pass null instead of an
   uninitialized char and length */
result = pitch_shift->getParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, &pitch, 0, 0);
ERRCHECK(result);
pitch += 0.1f;
pitch = pitch > 2.0f ? 2.0f : pitch;
result = pitch_shift->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);
ERRCHECK(result);
result = channel->setFrequency(defaultFrequency / pitch);
ERRCHECK(result);
}
if (Common_BtnPress(BTN_UP)) {
result = pitch_shift->getParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, &pitch, 0, 0);
ERRCHECK(result);
pitch -= 0.1f;
pitch = pitch < 0.5f ? 0.5f : pitch;
result = pitch_shift->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);
ERRCHECK(result);
result = channel->setFrequency(defaultFrequency / pitch);
ERRCHECK(result);
}
result = system->update();
ERRCHECK(result);
{
unsigned int ms = 0;
unsigned int lenms = 0;
bool playing = false;
bool paused = false;
if (channel)
{
result = channel->isPlaying(&playing);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
result = channel->getPaused(&paused);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
result = channel->getPosition(&ms, FMOD_TIMEUNIT_MS);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
result = sound_to_play->getLength(&lenms, FMOD_TIMEUNIT_MS);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
}
Common_Draw("==================================================");
Common_Draw("Play Stream Example.");
Common_Draw("Copyright (c) Firelight Technologies 2004-2015.");
Common_Draw("==================================================");
Common_Draw("");
Common_Draw("Press %s to toggle pause", Common_BtnStr(BTN_ACTION1));
Common_Draw("Press %s to quit", Common_BtnStr(BTN_QUIT));
Common_Draw("");
Common_Draw("Time %02d:%02d:%02d/%02d:%02d:%02d : %s", ms / 1000 / 60, ms / 1000 % 60, ms / 10 % 100, lenms / 1000 / 60, lenms / 1000 % 60, lenms / 10 % 100, paused ? "Paused " : playing ? "Playing" : "Stopped");
Common_Draw("Pitch %02f",pitch);
}
Common_Sleep(50);
} while (!Common_BtnPress(BTN_QUIT));
/*
Shut down
*/
result = sound->release(); /* Release the parent, not the sound that was retrieved with getSubSound. */
ERRCHECK(result);
result = system->close();
ERRCHECK(result);
result = system->release();
ERRCHECK(result);
Common_Close();
return 0;
}
After some more experiments, I can get where I need by switching the sound file format to m4a on the iOS device.
MP3 is still not working.
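As a diagnostic (not in the original post), querying what FMOD actually decoded can help narrow down per-codec differences like this; a minimal sketch using the same variables as the program above:

FMOD_SOUND_TYPE type;
FMOD_SOUND_FORMAT format;
int channels, bits;
result = sound_to_play->getFormat(&type, &format, &channels, &bits);
ERRCHECK(result);
/* On an iOS device, MP3/M4A may be decoded through Apple's codec
   (FMOD_SOUND_TYPE_AUDIOQUEUE) rather than FMOD's software MPEG codec,
   which could explain device-vs-simulator differences. */
printf("sound type %d, format %d, %d ch, %d bits\n", type, format, channels, bits);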
I'm closely following this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.
The sample works great, but when implemented in my project, the FillComplexBuffer method returns error -50 and the InputData event is not triggered even once, so nothing is converted.
The error only appears when testing on a device. When testing in the simulator, everything goes fine and I get a good encoded AAC file at the end.
I tried a lot of things today, and I don't see any difference between my code and the sample code. Do you have any idea where this may come from?
I don't know whether this is in any way related to Xamarin; it doesn't seem so, since the Xamarin sample works fine.
Here's the relevant part of my code:
protected void Encode(string path)
{
// In class setup. File at TempWavFilePath has DecodedFormat as format.
//
// DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
// AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
// {
// Format = AudioFormatType.MPEG4AAC,
// SampleRate = DecodedFormat.SampleRate,
// ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
// };
// AudioStreamBasicDescription.GetFormatInfo (ref encodedFormat);
// EncodedFormat = encodedFormat;
// Setup converter
AudioStreamBasicDescription inputFormat = DecodedFormat;
AudioStreamBasicDescription outputFormat = EncodedFormat;
AudioConverterError converterCreateError;
AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
if (converterCreateError != AudioConverterError.None)
{
Console.WriteLine("Converter creation error: " + converterCreateError);
}
converter.EncodeBitRate = 192000; // AAC 192kbps
// get the actual formats back from the Audio Converter
inputFormat = converter.CurrentInputStreamDescription;
outputFormat = converter.CurrentOutputStreamDescription;
/*** INPUT ***/
AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));
// init buffer
const int inputBufferBytesSize = 32768;
IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);
// calc number of packets per read
int inputSizePerPacket = inputFormat.BytesPerPacket;
int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;
AudioStreamPacketDescription[] inputPacketDescriptions = null;
// init position
long inputFilePosition = 0;
// define input delegate
converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
{
// how much to read
if (numberDataPackets > inputBufferPacketSize)
{
numberDataPackets = inputBufferPacketSize;
}
// read from the file
int outNumBytes;
AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
if (readError != 0)
{
Console.WriteLine("Read error: " + readError);
}
// advance input file packet position
inputFilePosition += numberDataPackets;
// put the data pointer into the buffer list
data.SetData(0, inputBufferPtr, outNumBytes);
// add packet descriptions if required
if (dataPacketDescription != null)
{
if (inputPacketDescriptions != null)
{
dataPacketDescription = inputPacketDescriptions;
}
else
{
dataPacketDescription = null;
}
}
return AudioConverterError.None;
};
/*** OUTPUT ***/
// create the destination file
var outputFile = AudioFile.Create (NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFlags);
// init buffer
const int outputBufferBytesSize = 32768;
IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
AudioBuffers buffers = new AudioBuffers(1);
// calc number of packet per write
int outputSizePerPacket = outputFormat.BytesPerPacket;
AudioStreamPacketDescription[] outputPacketDescriptions = null;
if (outputSizePerPacket == 0) {
// if the destination format is VBR, we need to get max size per packet from the converter
outputSizePerPacket = (int)converter.MaximumOutputPacketSize;
// allocate memory for the PacketDescription structures describing the layout of each packet
outputPacketDescriptions = new AudioStreamPacketDescription [outputBufferBytesSize / outputSizePerPacket];
}
int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;
// init position
long outputFilePosition = 0;
long totalOutputFrames = 0; // used for debugging
// write magic cookie if necessary
if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
{
outputFile.MagicCookie = converter.CompressionMagicCookie;
}
// loop to convert data
Console.WriteLine ("Converting...");
while (true)
{
// create buffer
buffers[0] = new AudioBuffer()
{
NumberChannels = outputFormat.ChannelsPerFrame,
DataByteSize = outputBufferBytesSize,
Data = outputBufferPtr
};
int writtenPackets = outputBufferPacketSize;
// LET'S CONVERT (it's about time...)
AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
if (converterFillError != AudioConverterError.None)
{
Console.WriteLine("FillComplexBuffer error: " + converterFillError);
}
if (writtenPackets == 0) // EOF
{
break;
}
// write to output file
int inNumBytes = buffers[0].DataByteSize;
AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
if (writeError != 0)
{
Console.WriteLine("WritePackets error: {0}", writeError);
}
// advance output file packet position
outputFilePosition += writtenPackets;
if (outputFormat.FramesPerPacket != 0) {
// the format has constant frames per packet
totalOutputFrames += (writtenPackets * outputFormat.FramesPerPacket);
} else {
// variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
for (var i = 0; i < writtenPackets; ++i)
{
totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
}
}
}
// write out any of the leading and trailing frames for compressed formats only
if (outputFormat.BitsPerChannel == 0)
{
Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
WritePacketTableInfo(converter, outputFile);
}
// write the cookie again - sometimes codecs will update cookies at the end of a conversion
if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
{
outputFile.MagicCookie = converter.CompressionMagicCookie;
}
// Clean everything
Marshal.FreeHGlobal(inputBufferPtr);
Marshal.FreeHGlobal(outputBufferPtr);
converter.Dispose();
outputFile.Dispose();
// Remove temp file
File.Delete(TempWavFilePath);
}
I already saw this SO question, but its not-very-detailed C++/Obj-C answer doesn't seem to fit my problem.
Thanks!
I finally found the solution!
I just had to set the AVAudioSession category before converting the file:
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
AVAudioSession.SharedInstance().SetActive(true);
Since I also use an AudioQueue to render offline, I must in fact set the category to AVAudioSessionCategory.PlayAndRecord so that both the offline rendering and the audio conversion work.
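Putting it together, the final configuration described above would look like this (a sketch; call it once before Encode runs):

AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.PlayAndRecord);
AVAudioSession.SharedInstance().SetActive(true);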
I have an Image object, which is a JPG picture taken by the camera, and I need to create a Bitmap from it.
Is there any way to do this besides using the BMPGenerator class? I'm working on a commercial project and I don't think I can use it, due to the GPLv3 license.
So far this is the code I have. Can I do something with it?
FileConnection file = (FileConnection) Connector.open("file://" + imagePath, Connector.READ_WRITE);
InputStream is = file.openInputStream();
Image capturedImage = Image.createImage(is);
I tried this as well, but I wasn't able to get the correct file path, and the image stays null:
EncodedImage image = EncodedImage.getEncodedImageResource(filePath);
byte[] array = image.getData();
capturedBitmap = image.getBitmap();
You can use videoControl.getSnapshot(null) and then Bitmap myBitmap = Bitmap.createBitmapFromBytes(raw, 0, raw.length, 1) to get a bitmap from the camera.
videoControl is obtained from player.getControl("VideoControl"), and player is obtained from Manager.createPlayer().
By the way, what kind of Image do you have? If we are talking about an EncodedImage, you can just use getBitmap() on it.
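A minimal sketch of that snapshot route (hypothetical wiring; a real app must initialize and display the viewfinder via videoControl.initDisplayMode(...) before the snapshot will succeed):

// Needs javax.microedition.media.*, javax.microedition.media.control.VideoControl,
// and net.rim.device.api.system.Bitmap.
private Bitmap captureBitmap() throws Exception {
    Player player = Manager.createPlayer("capture://video");
    player.realize();
    VideoControl videoControl = (VideoControl) player.getControl("VideoControl");
    // initialize and show the viewfinder field here before taking the snapshot
    player.start();
    byte[] raw = videoControl.getSnapshot(null); // JPEG-encoded bytes by default
    Bitmap bitmap = Bitmap.createBitmapFromBytes(raw, 0, raw.length, 1);
    player.close();
    return bitmap;
}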
Fixed!
Well, almost.
I used the following method, but the image comes out rotated 90 degrees.
Going to fix that with this.
public Bitmap loadIconFromSDcard(String imgname) {
    FileConnection fcon = null;
    Bitmap icon = null;
    try {
        fcon = (FileConnection) Connector.open(imgname, Connector.READ);
        if (fcon.exists()) {
            byte[] content = new byte[(int) fcon.fileSize()];
            int readOffset = 0;
            int readBytes = 0;
            int bytesToRead = content.length;
            InputStream is = fcon.openInputStream();
            // read() may return fewer bytes than requested, so loop until full
            while (bytesToRead > 0) {
                readBytes = is.read(content, readOffset, bytesToRead);
                if (readBytes < 0) {
                    break; // EOF
                }
                readOffset += readBytes;
                bytesToRead -= readBytes;
            }
            is.close();
            EncodedImage image = EncodedImage.createEncodedImage(content, 0, content.length);
            icon = image.getBitmap();
        }
    } catch (Exception e) {
        e.printStackTrace(); // log instead of silently swallowing the error
    } finally {
        // Close the connection
        try { if (fcon != null) fcon.close(); } catch (Exception e) { }
    }
    return icon;
}
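Usage then looks like this (the path is illustrative):

Bitmap icon = loadIconFromSDcard("file:///SDCard/BlackBerry/pictures/capture.jpg");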