I’m currently writing some custom Core Image filters using Metal. For the sake of structure, I want to put the different kernels into different .metal files with some common includes, like you would with “normal” source files.
However, when the metallib tool bundles the different .air files created by the Metal compiler into one .metallib file, only the kernel functions defined in the first input .air file given to metallib are visible. Functions from the other .air files don’t seem to be included. What’s the reason for this?
I thought that (as is the default compilation behavior for Metal files) all Metal sources get compiled into one library, which every custom CIFilter class then uses to instantiate its internal CIKernel with the function it needs.
I now ended up compiling a .metallib file for each custom filter with custom build rules and copying all of them into my framework using a custom build phase. This doesn't seem to be the intended way…
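For reference, each filter currently loads its kernel roughly like this (a minimal sketch; MyFilter and myKernelFunction are placeholder names, not the actual ones):

import CoreImage
import Foundation

// Minimal sketch of how each filter loads its kernel (placeholder names).
// With one .metallib per filter, every filter points at its own library file;
// with a single shared library they would all load the same data.
func makeKernel(functionName: String, libraryName: String) throws -> CIKernel {
    // Bundle.main is used for brevity; a framework target would use its own bundle.
    guard let url = Bundle.main.url(forResource: libraryName, withExtension: "metallib") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let data = try Data(contentsOf: url)
    return try CIKernel(functionName: functionName, fromMetalLibraryData: data)
}

// Hypothetical usage inside a custom CIFilter subclass:
// let kernel = try makeKernel(functionName: "myKernelFunction", libraryName: "MyFilter")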
The idea is to have a Utilities.metal shader with helper functions (say, color conversions like rgb2hsv()) that I want to reuse in many projects. I'd like to access those helper functions from the local project's Metal shader files while sharing the utilities as a Swift package.
Goals:
Having a Utilities.metal shader file (or files) in a Swift package that I can add to other projects.
Calling that rgb2hsv() function from local shaders. I want to access the functions in that Utilities.metal file from the shaders of the local project's executable Metal library (the one you create in the local project with MTLDevice.makeDefaultLibrary()), for example by doing #include "Utilities.h" in the local .metal files.
I want the package to contain the .metal files, not just the metallib, so I can update and commit them from any project (using it as a local package in tandem with the app).
What I don't want:
Building the bundle's library with MTLDevice.makeDefaultLibrary(bundle:) and using it inside the package, or locally as a standalone library. I want to link against the functions and have direct access to them.
If possible, hacky workarounds like converting the shader files to strings and using MTLDevice.makeLibrary(source:options:).
Do I need to use MTLDynamicLibrary? The problem is that it isn't supported on many devices. Any help is welcome.
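For clarity, the package I have in mind would look roughly like this (a sketch with placeholder names; the open question is how the local .metal shaders could #include the utilities it contains):

// swift-tools-version:5.5
// Package.swift - sketch of the shared shader package (all names are placeholders).
import PackageDescription

let package = Package(
    name: "ShaderUtilities",
    products: [
        .library(name: "ShaderUtilities", targets: ["ShaderUtilities"])
    ],
    targets: [
        // Sources/ShaderUtilities/ would hold Utilities.metal and Utilities.h.
        // By default SwiftPM compiles any .metal files in the target into the
        // module's resource bundle as default.metallib - i.e. the standalone
        // bundle-library route described above that I'd rather avoid.
        .target(name: "ShaderUtilities")
    ]
)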
I want to use only a few filters from GPUImage2 in my Swift project. How can I tailor GPUImage2 so that it includes only the few filters I need?
I am not familiar with the code base, and I don't see any documentation on this.
P.S. My concern is mostly about app size; if including everything doesn't bloat the app size, I am OK with importing GPUImage as a whole.
This is a common question for people who want to shrink their binary size by only bringing over the operations they need, so I'll see if I can provide a canonical reference.
The easiest way to do this is to remove the dependency on GPUImage from your project and instead manually copy into your project just the files necessary to build the core components of the framework. The platform-independent core files are these:
CameraConversion.swift
SerialDispatch.swift
BasicOperation.swift
Color.swift
FillMode.swift
Matrix.swift
OpenGLContext_Shared.swift
Timestamp.swift
OpenGLRendering.swift
ShaderProgram.swift
ShaderUniformSettings.swift
Framebuffer.swift
FramebufferCache.swift
Position.swift
Size.swift
Pipeline.swift
ImageOrientation.swift
The following files also need to come over, but they have platform-specific (Mac, iOS, or Linux) variants, so you'll either need to choose the ones for your specific platform target or selectively include them in each of your various targets:
PictureInput.swift
PictureOutput.swift
MovieInput.swift
MovieOutput.swift
Camera.swift
OpenGLContext.swift
RenderView.swift
With those files, you should be able to build a project that can perform image processing in the same manner as GPUImage, but without the long list of operations. If you have one or two operations you want to bring over, you can selectively copy those files into your project. You might need to copy over one or two dependencies if they are subclassed from another operation.
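As a quick sanity check once the files are copied over, a minimal pipeline built from just the core components plus one copied operation could look something like this (a sketch; SaturationAdjustment stands in for whichever operation file you actually brought over):

import UIKit

// Sketch: a minimal GPUImage2-style pipeline using only the copied core files
// plus a single operation (SaturationAdjustment is a placeholder example here;
// remember to also copy that operation's file and any shaders it depends on).
func filteredImage(from input: UIImage) -> UIImage? {
    let picture = PictureInput(image: input)
    let filter = SaturationAdjustment()
    let output = PictureOutput()

    var result: UIImage?
    output.imageAvailableCallback = { image in
        result = image
    }

    // Wire the pipeline together with GPUImage2's --> operator.
    picture --> filter --> output
    picture.processImage(synchronously: true)
    return result
}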
Is there a way to share types across fsx files?
When using #load to load the same file containing a type from multiple .fsx files, the types seem to be prefixed into a different FS_00xx namespace each time, which means you can't pass them around.
Are there any ways around this behaviour without resorting to compiling into an assembly?
As the MSDN documentation (http://msdn.microsoft.com/en-us/library/dd233169.aspx) puts it:
[.fsx files are] used to include informal testing code in F# without adding the test code to your application, and without creating a separate project for it. By default, script files are not included in the build of a project even when they are part of a project.
This means that if you have a project with enough structure to be having such dependency problems, you should not use .fsx files; instead, write modules/namespaces in .fs files. That is, you really should compile those types into an assembly.
The F# Interactive interpreter generates an assembly for each loaded file. If you load a file twice, the bytecode is generated twice, and the types are different even if they have the same definition and the same name. This means there is no way to share types between two .fsx files unless one of them loads the other.
When you #load a file that defines the same types as ones already present in your environment, the F# Interactive interpreter can use one of two strategies:
refuse to load the file if conflicts with existing names arise (complaining that some things are already defined)
put the names in an FS_00xx namespace (so that they are actually different types from the ones you already loaded), and then open the resulting namespace so that the names are available in the interactive session.
Since .fsx files are meant for informal testing, it is more user-friendly to use the second approach (there are also technical reasons why the second approach is used, mainly related to the .NET VM's type system and the fact that existing types cannot be changed at runtime).
[Note: This is a more specific answer to a more specific question that is a duplicate of this one.]
I don't think there is a nice and easy solution for this. The one solution I have been using in some projects (like the F# snippets web site) is to have only one top-level .fsx file that loads a number of .fs files. For example, see app.fsx.
So, you would have common.fs, intMapper.fs and stringMapper.fs that would be loaded from caller.fsx as follows:
#load "common.fs"
#load "stringMapper.fs"
#load "intMapper.fs"
open Common
Inside stringMapper.fs and intMapper.fs, you do not load common.fs. The common types will already have been loaded by caller.fsx, so things will work.
The only issue with this is that intMapper.fs now isn't a standalone script file - and if you want to get autocomplete in an editor, you need to add an .fsproj file that specifies the file order. In the F# snippets project, there is a project file which specifies the order in which the editor should see and load the files.
Have all the #load directives and open statements in the file you actually run from fsi.exe (C in the example below), and make sure the loaded files themselves do not #load their own dependencies:
Files A.fsx, B.fsx, C.fsx. B depends on A. C depends on B and A.
B contains
//adding the code below would cause the types defined in A to be loaded twice
//#load "A.fsx"
//open A
C contains
#load "A.fsx"
#open A
#load "B.fsx"
#open B
Unfortunately, this makes all the files hard to edit from Visual Studio - the editor doesn't know about their dependencies and shows all sorts of errors.
Therefore this is a bit of a hack, and the recommended way seems to be to have a single .fsx file and compile everything else into a .dll:
// file1.fsx
#r "MyAssembly.dll"
https://msdn.microsoft.com/en-us/library/dd233175.aspx
I'm building up some Dart code that I would like to use in an app where it is essentially a library for the JavaScript. I'm wondering how I can specify which Dart files in the project I'd like to be part of the library. For example, there's Foo.dart and Bar.dart. How can I have the compiled product include both Foo.dart and Bar.dart in one file? I'm also concerned about tree shaking, since none of the classes are instantiated in Dart.
There's also a Baz.dart, and I would like to have a different build that compiles Foo.dart and Baz.dart into a single file (though this is less important, as I can accomplish it with separate projects and some symlinking).
Thanks!
This use case (build a JavaScript library with Dart) isn't supported yet.
The reworked js-interop package is supposed to allow doing that, but I don't know about its current state.
I swear I have looked exhaustively for the answer to this question, but I have found no real solution to my problem. And what problem is that?
Well I am new to DirectX and shaders. There are a few things about shaders that I still don't get.
1 - How do I make a shader? Do I have to create an .fx file in the project? Sometimes it is done that way, but in some examples I can't find any .fx file. And how do I make this file? My version of Visual Studio can't directly create .fx files; I have to "force" the file to be .fx.
2 - How do I compile them? Are they compiled at the same time I compile the solution, or do they have a special way of being compiled?
3 - Is there a nice tutorial around? I have been looking for a shader bible, but mostly I found vague and short tutorials that explain only a few things, and never in depth.
1. New to shaders?
For an introduction to shaders, use the free Shazzam shader editor (http://shazzam-tool.com/) to create simple shaders with its interactive drawing tools. Try playing with the different options, then compare the automatically generated HLSL (.fx) code for a better understanding. Once you have a feel for how shader code is written, get a standard book or online tutorial and practice writing your own code for your requirements.
2. Common methods of compilation:
a. D3DXCreateEffectFromFile: Write the shader code, save it with the .fx extension, and compile the code dynamically with D3DXCreateEffectFromFile. The compiled code can then be used in your core module through the effect (ID3DXEffect) interface.
b. Explicit compilation: Write the shader code, save it with the .fx extension, and explicitly compile it using fxc.exe (you can find it in the DirectX SDK Utilities folder).
Example:
fxc.exe /Tfx_2_0 /Fo file.fxo file.fx
Once the binary file has been created, proceed as follows:
1. Create a buffer and load the generated binary file (.fxo) via a file stream.
2. Call D3DXCreateEffect and pass the buffer contents as an input parameter.
3. As in method a, use the effect (ID3DXEffect) interface to interact with the shader code.
3. Introduction Tutorial:
http://rbwhitaker.wikidot.com/hlsl-tutorials
Shader files are just normal text files that you put your shader code in. You don't need to add them to your project if you compile the shader at runtime with the D3D functions.
There are two ways to do this. One is to put your shader code in .fx or .hlsl files and then compile the shader at runtime using the D3D library functions (D3DCompileFromFile). However, Microsoft no longer suggests this, because D3DCompileFromFile won't work in Metro-style apps. The other way is to use fxc.exe to compile your shaders at build time. Visual Studio 2012 made this process part of the usual build, so you can add your .hlsl files to your project and they will be built when you build the project. This also lets you see any errors/warnings in the shader at compile time.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb509633(v=vs.85).aspx
Hope that helps.