Saving videos with meshcat? - drake

What is the standard way of saving videos using MeshcatVisualizer? I know the following works for wrappers of PyPlotVisualizer:
visualizer.start_recording()
simulator.AdvanceTo(T)
ani = visualizer.get_recording_as_animation()
But those two methods are not available on MeshcatVisualizer, there don't seem to be any examples in the repo that create videos with it, and none of the methods the class does have look like promising candidates. Failing that, is there another way to save videos of 3D visualizations?

Meshcat has an animation tool (https://github.com/rdeits/meshcat-python/blob/master/animation_demo.ipynb), and you can access MeshcatVisualizer's meshcat.Visualizer instance via MeshcatVisualizer.vis. However, MeshcatVisualizer doesn't currently have a function like MeshcatVisualizer.convert_to_video that hooks into this animation tool, so the easier route for now is probably screen recording.

I don't believe that meshcat offers its own recording functionality, which means the recommended workflow would be to just use your favorite screen-recording software. I've forwarded this to a few meshcat experts in case they have something better to recommend.
Update: In addition to rdeits' answer above, he had a few more details by email:
There's a built-in animation API with recording support in meshcat-python (see "Recording an Animation" in https://github.com/rdeits/meshcat-python/blob/master/animation_demo.ipynb), but AFAICT Drake's MeshcatVisualizer isn't hooked up to it. It might not be that hard to do so. The basic idea is that you can use at_frame to get a representation of a single frame of an animation that behaves like a meshcat.Visualizer. You can call set_transform on that frame, and rather than moving anything in the viewer it will instead record that action into an animation track. Then you can send the whole animation at once to the visualizer and let the browser side handle replaying and recording it.
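For reference, the recording workflow from that notebook looks roughly like the sketch below (plain meshcat-python, not Drake; the box geometry, path name, and frame numbers are just placeholders):

import meshcat
import meshcat.geometry as g
import meshcat.transformations as tf
from meshcat.animation import Animation

vis = meshcat.Visualizer()
vis["box"].set_object(g.Box([0.1, 0.1, 0.1]))

anim = Animation()
# at_frame yields an object that behaves like a Visualizer, but set_transform
# calls on it are recorded into an animation track instead of moving the scene.
with anim.at_frame(vis, 0) as frame:
    frame["box"].set_transform(tf.translation_matrix([0.0, 0.0, 0.0]))
with anim.at_frame(vis, 30) as frame:
    frame["box"].set_transform(tf.translation_matrix([0.0, 1.0, 0.0]))

# Send the whole animation at once; the browser side replays it, and the
# viewer's animation controls can record the playback.
vis.set_animation(anim)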

Related

What kind of drum sampling options does AudioKit have?

I'm working in AudioKit and am looking to understand how people have incorporated drums. Obviously the sampler is an option, but I'm wondering if there is a built-in option similar to some of the basic synthesis options.
There are a few options. I personally like AppleSampler/MidiSampler as in the example, but instead of using audio files you can create an EXS Sampler instrument in Logic where you can assign notes for different velocities. AppleSampler can also load AUPresets made in GarageBand and SoundFonts (SF2). The DunneAudioKit Sampler is an option if you are working with SFZ files, but I think that might be a work in progress in AudioKit 5. Loading WAV files directly into AppleSampler is also a good option if you just want one-shot sounds.
I'm assuming you're mostly talking about playback of samples, not recording.
The best built-in option I've seen (other than AppleSampler/MidiSampler) is AudioPlayer, which lets you load in a sample and play it back on demand (from an on-screen pad, etc.). MIDIListener can then help you respond to external MIDI events, etc. It works (I have a pretty big branch in my app where I tried it), but I'm not sure it works well.
I wouldn't recommend DunneAudioKit Sampler for drums. There is no one-shot playback (so playing the same note in quick succession will cut off the previous note, even if you mess with the release). If you're trying to build a complex/realistic acoustic drum instrument, you'll also want round-robins so that variations of the same hit can be played, which Dunne also doesn't have. It can load SFZ files, but only a very limited subset of SFZ's opcodes (so again, it's missing things like round robins, mute groups, one-shot, etc).
Having gone down all those roads, I would suggest starting with AppleSampler, and I would build the EXS or aupreset file in Logic or Mainstage rather than trying to build something programmatically.
If your needs are really simple, the examples in AudioKit's recently released drum pad playground are a great place to start, loading single samples onto specific notes on AppleSampler.

Flutter, iOS: how to prevent recording

Currently there is a Flutter app, and it needs to be protected from screenshots and screen recording, but when I searched, there seems to be no official way to implement this.
It seems there are some tricks, though (something like 60fps? I know the concept but I don't know how to implement it).
You can also see a black screen when you record video in Netflix (they prevent it in some way).
How could I achieve this? Thanks.
There is a package called window manager which does just the thing you are asking for: it restricts external apps from recording and does not allow screenshots to be taken.
A detailed tutorial is given in this article.

How to write code to read a .fla file?

I've wondered for a long time how people analyze the structure of each file format (opening one in Notepad is of course unreadable).
For example, I want to write a program that can read everything from a .fla file, like the timeline, movie clips, the position of each MC, and all the motion tween values, and also get the images embedded in it. (I'm planning to use Flash as an IDE for another project.)
(The reason I'm trying to read a proprietary format is that I want to utilize Flash's awesome editor. What I actually want to do is make an iOS game with cocos2d. There is code to move things around in cocos2d, but there is no decent editor, so I'd like to use Flash as the editor and then convert the motion to Objective-C cocos2d code by reading the .fla file.)
If you would like to be able to import timeline animation from Flash into cocos2d, this tool might help. More information in this thread.
The grapefrukt-exporter might also help, as it can export keyframe data and various other formats for animation.
Instead of creating the tool yourself, it might be much easier (and time saving) to use one of these and integrate it into your workflow :)
Finally, if none of the above works, how about just exporting the flash animation as an animated GIF or a movie file?
I'm assuming you want to write a decompiler. This is possible, and there are several available on the internet; prices vary.
It is not possible to achieve this in Flash itself. Most programs are built in a native language such as C, "native" meaning they can run independently on their own without first setting up an environment to support them.
Flash is not independent enough to have this much power.
Try looking at C++ or C#, as this would be possible there; these languages are also a lot more powerful.

ADSR in iOS, sample code?

I've been searching for examples that show how to do ADSR in iOS using audio samples (preferably WAV files with loop points, but that's secondary). I guess most people who write a sampler/synth app use an Audio Unit for this. Does anyone know a good code example that shows ADSR in any iOS audio library?
In the new iOS 5.0 SDK there's now a Sampler Audio Unit, which can do ADSR envelopes.
The presets demo shows how to use the sampler:
http://developer.apple.com/library/ios/#samplecode/LoadPresetDemo/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011214
If you want to load different sound formats to play, this article is helpful:
https://developer.apple.com/library/mac/#technotes/tn2283/_index.html
And here's the iOS documentation reference:
http://developer.apple.com/library/ios/#documentation/AudioUnit/Reference/AUComponentServicesReference/Reference/reference.html#//apple_ref/doc/uid/TP40007291
You can find a (very basic) one in Apple's SinSynth sample. That is an AU, but it should demonstrate how one would apply an envelope to an audio buffer. I don't remember exactly; it may only be an ASR, but adding a fourth stage is simple once you have understood the existing program. The implementation is right in the note's render.
Envelope Generators are not platform specific.
musicdsp.org will be a better resource if you want more than a push in the right direction.
MusicDSP has source code for an example envelope follower with attack/release. If you understand this, then sustain/decay should be pretty logical. ;)
But an ADSR envelope is basically just a matter of applying gain to your output signal with a state machine. Each state has a starting value, an ending value, and a duration. Calculating the slope of that line and the value of each point along it was covered in your algebra class back in high school. ;) If you want to be really fancy, you can implement other types of curves, but the concept remains the same.
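To make that concrete, here is a minimal, platform-agnostic sketch in Python (the function name and parameters are just illustrative, and a real sampler would start the release stage on note-off rather than after a fixed sustain length):

def adsr_gain(n, attack, decay, sustain_level, sustain_len, release):
    # Attack: ramp gain from 0 to 1 over `attack` samples.
    if n < attack:
        return n / attack
    n -= attack
    # Decay: ramp from 1 down to sustain_level over `decay` samples.
    if n < decay:
        return 1.0 + (sustain_level - 1.0) * (n / decay)
    n -= decay
    # Sustain: hold sustain_level.
    if n < sustain_len:
        return sustain_level
    n -= sustain_len
    # Release: ramp from sustain_level down to 0 over `release` samples.
    if n < release:
        return sustain_level * (1.0 - n / release)
    return 0.0

# Applying the envelope is just per-sample multiplication:
# out[i] = in[i] * adsr_gain(i, attack, decay, sustain_level, sustain_len, release)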

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack my own low-level sound engine using AudioUnits & specifically RemoteIO so that I manually mix all the sounds and populate the final output buffer by hand and hence can save said buffer to a file. This will be able to be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I wrote tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. What's more, they're not that hard to set up and work with. If, however, you are looking for a library to do most of the work for you, you should look for one written on top of Core Audio, not OpenAL.
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
