Playing a Vector CANoe BLF/ASC file using CAPL

I'd like to play a Vector CANoe BLF/ASC file from my CAPL script. For some reason the Replay Block cannot be used.
Is it possible to play a CANoe BLF/ASC file from a CAPL script?

Just use the CAPL command to replay a file:
dword StartReplayFile(char fileName[]);
Starts playing the replay file with the name fileName.
(Taken from CANoe documentation).
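A minimal CAPL sketch of how this might be wired up, assuming a hypothetical file name and key binding (StartReplayFile itself is quoted above from the CANoe documentation):
on key 'r'
{
  dword result;
  // "capture.blf" is a placeholder; use the path of your own log file
  result = StartReplayFile("capture.blf");
  write("StartReplayFile returned %d", result);
}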

Log files are a digitized record of your network traffic, stored on the computer's hard disk for future reference. They are typically used in two ways: to understand what happened during a measurement (offline analysis), or to recreate the traffic that occurred during a measurement (the Replay block). Beyond these two use cases there is little need to play log files, which is why CAPL does not support playing them to the application outside of these two mechanisms.

Related

How does one allocate memory in an Audio Unit extension (realtime process) for audiofile playback?

I am working on creating an Audio Unit v3. I have made decent progress up to this point: I made a host app that loads all my third-party plugins, and I created an AUv3 plugin that processes audio and can be loaded by other hosts.
I now want to make an AU that loads audio data from disk and scans the data at random positions with sample precision (timestretch, granular stuff, etc.). I thought it would be a cool sample-playback addition to contribute to AudioKit.
So this would mean making something at the level of AKSampler in the AudioKit framework. While looking through the AK source, I feel like I am missing something.
While browsing github, I ended up at places like here:
https://github.com/AudioKit/AudioKit/tree/118affee65f4c9b8d4780ef1a03a6d03004bbcee/AudioKit/Common/Nodes/Playback/Samplers
then I looked in here:
https://github.com/AudioKit/AudioKit/blob/118affee65f4c9b8d4780ef1a03a6d03004bbcee/AudioKit/Common/Nodes/Playback/Samplers/Disk%20Streamer/AKDiskStreamerAudioUnit.mm
which brought me here:
https://github.com/AudioKit/AudioKit/blob/d69dabf090a5e78d4495d938bf6c0aea9f672630/AudioKit/Common/Nodes/Playback/Samplers/Disk%20Streamer/AKDiskStreamerDSPKernel.hpp
and then eventually here:
https://github.com/AudioKit/AudioKit/blob/d69dabf090a5e78d4495d938bf6c0aea9f672630/AudioKit/Core/Soundpipe/modules/wavin.c
I am not looking for info about AKSampler specifically, just how audio files are loaded in general and how that squares with the realtime nature of the AU extension process.
I couldn't find any IPC/XPC code anywhere, so I am guessing it's not about circular buffers connecting to other processes or something like that.
Does AudioKit allocate memory in the realtime process for audio-file playback? This would seem to go against all the warnings from experienced audio programmers (articles like http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing), but I can't figure out what is being done in AudioKit and generally in iOS.
What am I just not understanding or finding? :D
Opening files and allocating memory for file reads should be done outside the real-time audio context, perhaps during UI file selection, never inside an Audio Unit callback.
One way to get random access to samples inside an AU callback is to memory map the file (mmap C API), and then touch every sample in the memory map before passing the memory pointer (unsafe raw etc.) and file length (mapped bounds) to the audio unit. Then you can do virtual random access file reads inside the callback with a fixed latency.
One way to touch every sample is to compute a checksum over the mapped data (and perhaps discard the result later). This memory read is required to get the iOS virtual memory system to page the mapped file blocks into RAM, so that storage-system reads won't happen in the real-time context.
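A minimal sketch of that idea using the POSIX mmap API; the file path, the byte-wise checksum, and the return convention are illustrative assumptions, not AudioKit's actual implementation:
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstddef>

// Map an audio file read-only and fault every page into RAM before the
// pointer is handed to the real-time render code.
static const uint8_t *mapAndWarmFile(const char *path, size_t *outLength) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
    void *base = mmap(nullptr, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                              // the mapping stays valid after close
    if (base == MAP_FAILED) return nullptr;
    // Touch every byte with a throwaway checksum so the virtual memory system
    // pages the whole file in now, outside the audio callback.
    const uint8_t *bytes = (const uint8_t *)base;
    volatile uint64_t checksum = 0;
    for (size_t i = 0; i < (size_t)st.st_size; ++i) checksum += bytes[i];
    (void)checksum;
    *outLength = (size_t)st.st_size;        // pass pointer + length to the AU
    return bytes;
}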

iOS Audio Sampler with Volume Envelope

My goal is to create a sampler instrument for iPhone/iOS.
The instrument should play back sound files at different pitches/notes and it should have a volume envelope.
A volume envelope means that the sound's volume fades in when it starts to play.
I have tried countless ways of creating that. The desired way is to use an AVAudioEngine's AVAudioPlayerNode and then process the individual samples of that node in realtime.
Unfortunately I have had no success with that approach so far. Could you give me some pointers on how this works in iOS?
Thanks,
Tobias
PS: I have not learned the Core Audio framework. Maybe it is possible to access an AVAudioNode's Audio Unit to do this job, but I have not had the time to read into the framework yet.
A lower-level way is to read the audio from the file and process the audio buffers yourself.
You store the ADSR envelope in an array or, better, as a mathematical function that returns the envelope value for the sample index you pass it (using interpolation), so that the envelope maps onto any sound's duration.
Then you multiply each audio sample by the returned envelope value to get the shaped sample.
One way would be to use the AVAudioNode and link a processing node to it.
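A rough, framework-agnostic sketch of that multiply-by-envelope idea; the envelope table, the linear interpolation, and the function names are all assumptions for illustration:
#include <vector>
#include <cstddef>

// Look up the envelope value for a given playback position (0..1) using
// linear interpolation, so one table can be stretched over any sound's duration.
static float envelopeAt(const std::vector<float> &table, double position01) {
    if (table.empty()) return 1.0f;
    if (position01 <= 0.0) return table.front();
    if (position01 >= 1.0) return table.back();
    double x = position01 * (table.size() - 1);
    size_t i = (size_t)x;
    if (i + 1 >= table.size()) return table.back();
    double frac = x - (double)i;
    return (float)(table[i] * (1.0 - frac) + table[i + 1] * frac);
}

// Shape a buffer of samples in place; playedSoFar/totalLength give the
// position of this buffer within the whole sound.
static void applyEnvelope(float *samples, size_t count, size_t playedSoFar,
                          size_t totalLength, const std::vector<float> &table) {
    for (size_t n = 0; n < count; ++n) {
        double position01 = (double)(playedSoFar + n) / (double)totalLength;
        samples[n] *= envelopeAt(table, position01);
    }
}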
I looked at another post of yours; I think AUSampler - Controlling the Settings of the AUSampler in Real Time is what you're looking for.
I haven't used AVAudioUnitSampler yet, but I believe it is just a wrapper for the AUSampler. To configure an AUSampler you must first create and export a preset file on your Mac using AU Lab. This file is a plist which contains file references and sampler settings such as decay, volume, pitch, cutoff, and all of the good stuff the AUSampler is built for. Then this file is put into your app bundle. You then create a directory named "Sounds", copy all of the referenced audio samples into that folder, and put it in your app bundle as well (as a folder reference). Then you create your audio graph (or in your case AVAudioEngine) and sampler, and load the preset from the preset file in your app bundle. It's kind of a pain. The links I'm providing are what I used to get up and running, but they are a little dated; if I were to start now I would definitely look into AVAudioUnitSampler first to see if there are easier ways.
To get AU Lab, go to Apple's developer downloads and select "Audio Tools for Xcode". Once downloaded, just open the DMG and drag the folder anywhere (I drag it to my Applications folder). Inside is AU Lab.
Here is a technical note describing how to load presets, another technical note on how to change parameters (such as attack/decay) in real time, and here is a WWDC video that walks you through the whole thing, including the creation of your preset using AU Lab.
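If it helps, loading an exported .aupreset at runtime essentially means reading the plist and handing it to the sampler unit as its ClassInfo property. A rough sketch, where the path handling and error checks are simplified assumptions:
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdio>
#include <vector>

// Load an AU Lab-exported .aupreset plist into an already-created AUSampler
// audio unit by setting it as the unit's ClassInfo (preset) property.
static OSStatus loadSamplerPreset(AudioUnit samplerUnit, const char *presetPath) {
    // Read the raw plist bytes with plain stdio.
    FILE *f = std::fopen(presetPath, "rb");
    if (!f) return -1;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<UInt8> bytes((size_t)size);
    std::fread(bytes.data(), 1, (size_t)size, f);
    std::fclose(f);

    CFDataRef data = CFDataCreate(kCFAllocatorDefault, bytes.data(), (CFIndex)size);
    CFPropertyListRef plist = CFPropertyListCreateWithData(
        kCFAllocatorDefault, data, kCFPropertyListImmutable, NULL, NULL);
    CFRelease(data);
    if (!plist) return -1;

    // kAudioUnitProperty_ClassInfo accepts the preset dictionary.
    OSStatus status = AudioUnitSetProperty(
        samplerUnit, kAudioUnitProperty_ClassInfo, kAudioUnitScope_Global, 0,
        &plist, sizeof(plist));
    CFRelease(plist);
    return status;
}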

Read audio file, perform filters (i.e. reverb), and then write audio file without playback on iOS

I'm working on an app which has a requirement for running some basic audio filters (such as normalisation and reverb) on a file. The idea is to take an existing audio file, add the filters, and then write the data to a new file. Crucially, this must be done without any playback and should be fast (i.e. on a 60 second audio file I should be able to add reverb in under a second).
I've looked at several solutions such as The Amazing Audio Engine and AudioBox but these all seem to rely on you playing back any audio in realtime rather than writing it to a file.
Does anybody have examples, or can anyone point me in the right direction, for simply taking a file and applying a basic audio filter without listening to it? I'm sure I must be missing something simple somewhere, but my searches have turned up nothing.
In general, the steps are:
Set up an AUGraph like this - AudioFilePlayer -> Reverb/Limiter/EQ/etc. -> GenericOutput
Open the input file and schedule it on the AudioFilePlayer.
Create an output file and repeatedly call AudioUnitRender on the GenericOutput unit, writing the rendered buffers to the output file.
I'm not sure about the speed of this, but it should be acceptable.
There is a comprehensive example of offline rendering in this thread that covers the setup and rendering process.
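A rough sketch of step 3, assuming the AUGraph is already initialized, the input file is scheduled on the AudioFilePlayer, and outFile was created with ExtAudioFileCreateWithURL using a client format matching the graph's output; buffer management and error checking are simplified:
#include <AudioToolbox/AudioToolbox.h>
#include <cstdlib>

// Pull rendered audio through the graph from the GenericOutput unit and
// append it to the output file, one slice at a time.
static void renderOffline(AudioUnit genericOutputUnit, ExtAudioFileRef outFile,
                          UInt32 channels, UInt32 totalFrames) {
    const UInt32 framesPerSlice = 512;

    // Non-interleaved float buffers, one per channel, reused for every slice.
    AudioBufferList *bufferList = (AudioBufferList *)
        malloc(sizeof(AudioBufferList) + (channels - 1) * sizeof(AudioBuffer));
    bufferList->mNumberBuffers = channels;
    for (UInt32 ch = 0; ch < channels; ++ch) {
        bufferList->mBuffers[ch].mNumberChannels = 1;
        bufferList->mBuffers[ch].mDataByteSize = framesPerSlice * sizeof(Float32);
        bufferList->mBuffers[ch].mData = calloc(framesPerSlice, sizeof(Float32));
    }

    AudioTimeStamp timeStamp = {};
    timeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    timeStamp.mSampleTime = 0;

    for (UInt32 rendered = 0; rendered < totalFrames; rendered += framesPerSlice) {
        AudioUnitRenderActionFlags flags = 0;
        for (UInt32 ch = 0; ch < channels; ++ch)
            bufferList->mBuffers[ch].mDataByteSize = framesPerSlice * sizeof(Float32);

        // Ask the GenericOutput unit to pull one slice through the whole graph...
        AudioUnitRender(genericOutputUnit, &flags, &timeStamp, 0,
                        framesPerSlice, bufferList);
        // ...and append the rendered slice to the output file.
        ExtAudioFileWrite(outFile, framesPerSlice, bufferList);

        timeStamp.mSampleTime += framesPerSlice;
    }

    for (UInt32 ch = 0; ch < channels; ++ch) free(bufferList->mBuffers[ch].mData);
    free(bufferList);
}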

Transcode/remux FLV and stream on the fly

I'm trying to teach myself a bit about video streaming and transcoding, with some Roku app development on the side. I have a number of video files (mostly in FLV format (H.264/AAC)) that I would like to stream to a client, which in this case is a Roku box (that accepts MP4 (H.264/AAC) and HTTP Live Streaming (HLS)). I'm wondering if it is possible to transcode/remux the FLV files and stream them to the client on the fly, perhaps over HLS?
I have tried using ffmpeg to remux the files and serve them immediately during the transcoding process, but they are unplayable until the write process is complete. I can get the Roku to play my completed MP4 files just fine via Apache/Rails.
But I'm wondering... is it possible to set up a server to transcode/remux a file and immediately have the output file (from ffmpeg/whatever tool I'm using) streamed to the client? If so, what tools are required to accomplish this? Is it possible to use a media file segmenter to chop up a file as it's being transcoded or remuxed?
I'm well aware that the transcoding process is CPU-intensive, but I'm not so much worried about the practicality of transcoding and streaming on the fly, since this is simply a personal education project (and I have an idle system that is capable of handling this).
Apologies if I'm way off base here, just trying to hack my way through this.
Thanks!
The trick to getting HLS served as soon as a TS segment has been completed is getting the playlist to update dynamically as the data arrives on disk.
What you are trying to do is essentially stream a live event over HLS, which absolutely can be done; it just takes coordination between the tools.
The open-source segmenter is able to do this. The trick is to have ffmpeg write out a single, unsegmented MPEG-TS stream to a named pipe (or the equivalent for your OS), then have the segmenter read from this named pipe and write the segment files to a directory within your shared web space.
The segmenter repeatedly updates the M3U8 file on disk while processing, so it can be used as a "live" stream until the task is finished.
When ffmpeg closes its output, the segmenter writes the end tag into the M3U8 and the stream becomes "VOD".
The segmenter can be downloaded here
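Schematically, the pipeline looks something like the following; the file names and codec options are placeholders, and the segmenter's actual arguments depend on which segmenter build you download:
$ mkfifo /tmp/live.ts
$ ffmpeg -i input.flv -c:v libx264 -c:a aac -f mpegts -y /tmp/live.ts &
$ segmenter ...   # read /tmp/live.ts, write the .ts segments and the .m3u8 into the web root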

Stream video from ffmpeg and capture with OpenCV

I have a video stream coming in over RTP to ffmpeg and I want to pipe this to my OpenCV tools for live stream processing. The RTP linkage is working, because I am able to send the incoming data to a file and play it (or play it via ffplay). My OpenCV implementation is functional as well, because I am able to capture video from a file and also from a webcam.
The problem is the streaming to OpenCV. I have heard that this may be done using a named pipe. First I could stream the ffmpeg output to the pipe and then have OpenCV open this pipe and begin processing.
What I've tried:
I make a named-pipe in my cygwin bash by:
$ mkfifo stream_pipe
Next I use my ffmpeg command to pull the stream from rtp and send it to the pipe:
$ ffmpeg -f avi -i rtp://xxx.xxx.xxx.xxx:1234 -f avi -y out.avi > stream_pipe
I am not sure if this is the right way to go about sending the stream to the named pipe, but it seems to accept the command and work, because the output from ffmpeg gives me bitrates, fps, and such.
Next I use the named pipe in my OpenCV capture function:
$ ./cvcap.exe stream_pipe
where the code for cvcap.cpp boils down to this:
cv::VideoCapture *pIns = new cv::VideoCapture(argv[1]);
The program seems to hang when reaching this one line, so I am wondering if this is the right way of going about this. I have never used named pipes before and I am not sure if this is the correct usage. In addition, I don't know if I need to handle the named pipe differently in OpenCV--change the code around to accept this kind of input. Like I said, my code already accepts files and camera inputs; I am just hung up on a stream coming in. I have only heard that named pipes can be used for OpenCV--I haven't seen any actual code or commands!
Any help or insights are appreciated!
UPDATE :
I believe named pipes may not be working in the way I intended. As seen on this cygwin forum post:
The problem is that Cygwin's implementation of fifos is very buggy. I wouldn't recommend using fifos for anything but the simplest of applications.
I may need to find another way to do this. I have tried piping the ffmpeg output into a normal file and then having OpenCV read it at the same time. This works to some extent, but I imagine it can be dangerous to read and write from a file concurrently--who knows what would happen!
Hope it's not too late to answer, but I tried the same thing some time ago, and here is how I did it.
The video-decoding backend for OpenCV is actually ffmpeg, so all of its facilities are available in OpenCV as well. Not all of the interface is exposed, and that adds some difficulties, but you can pass the RTP stream address to OpenCV directly:
cap.open("rtp://xxx.xxx.xxx.xxx:1234");
Important: OpenCV is not able to access password-protected RTP streams. To do that you would need to provide the username and the password, but there is no API exposed for it.
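For context, a minimal capture loop built on that call could look like this; the stream address is the same placeholder as above and error handling is kept to a minimum:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture cap;
    // OpenCV hands the URL straight to its ffmpeg backend.
    if (!cap.open("rtp://xxx.xxx.xxx.xxx:1234")) {
        std::fprintf(stderr, "Could not open the RTP stream\n");
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {          // blocks until the next frame arrives
        cv::imshow("rtp stream", frame);
        if (cv::waitKey(1) == 27)      // press Esc to stop
            break;
    }
    return 0;
}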
