In the WWDC 2015 fox demo, there is a SCN file representing the 3D fox. If you want to incorporate the fox in a different app, you import the fox's SCN file and its texture maps.
But if you have 3D characters made in an authoring program like Cinema 4D (https://www.maxon.net/en/products/cinema-4d/overview/), how do you generate similar SCN files for the different characters? Cinema 4D cannot export SCN files directly, so what do you do?
And does the process change if the characters are animated?
I'm using C4D R12, but I imagine the process is the same for later releases.
One option is to create a separate file for each character. Pay attention to the organization in the Object Manager: the hierarchy of objects listed there becomes the scene graph of nodes in your imported scene file. This includes nulls, which end up as container nodes in SceneKit. The names of your objects and nulls in C4D become the names of the SCNNodes in the scene file. When you have this set up as desired, save via File > Export... > COLLADA (*.dae).
Alternatively, you could create all your characters in one file and then pick them out in SceneKit using the unique name of each character's container node (previously a "container" null in C4D).
Xcode supports COLLADA (.dae) files. You can import them into your assets folder and convert them to .scn files, or Xcode will convert them automatically when you compile your app.
Collada files can also contain animation data, and can be exported from most 3D authoring programs.
Related
I'm trying to practice transfer learning myself.
I'm trying to count the number of each cat and dog files (each 12500 pictures for cat and dog with the total of 25000 pictures).
Here is my code and the path to my picture folder.
I thought this was simple code, but I still can't figure out why I keep getting (0, 0) when it should be (12500 cat files, 12500 dog files).
Use os.path.join() inside glob.glob(). Also, if all your images have a particular extension (say, jpg), you could replace '*.*' with '*.jpg', for example.
Solution
import os, glob
files = glob.glob(os.path.join(path,'train/*.*'))
As a matter of fact, you might as well just use the os library alone, since you are not selecting any particular file extension.
import os
files = os.listdir(os.path.join(path,'train'))
Some Explanation
The method os.path.join() joins multiple path components into one path. This works whether you are on a Windows, Mac, or Linux system, even though the path separator is \ on Windows and / on Mac/Linux; not using os.path.join() could therefore create a path the OS cannot resolve. I would use glob.glob when I am interested in getting specific types (extensions) of files, but glob.glob(path) requires a valid path to work with. In my solution, os.path.join() creates that path from the path components and feeds it into glob.glob().
For more clarity, I suggest you see documentation for os.path.join and glob.glob.
Also, see pathlib module for path manipulation as an alternative to os.path.join().
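As a concrete illustration of the pathlib alternative, here is a minimal sketch that counts cat and dog images by filename prefix. It builds a tiny stand-in dataset in a temporary folder so it is self-contained; the `train` folder name and the `cat.*`/`dog.*` prefixes follow the usual Kaggle cats-vs-dogs layout and are assumptions you would adapt to your own paths:

```python
from pathlib import Path
import tempfile

# Build a tiny stand-in for the real dataset so the sketch is self-contained.
root = Path(tempfile.mkdtemp())
train = root / "train"
train.mkdir()
for i in range(3):
    (train / f"cat.{i}.jpg").touch()
for i in range(2):
    (train / f"dog.{i}.jpg").touch()

# Path.glob replaces glob.glob + os.path.join in one step.
cat_files = sorted(train.glob("cat.*.jpg"))
dog_files = sorted(train.glob("dog.*.jpg"))
print(len(cat_files), len(dog_files))  # 3 2 here; 12500 12500 on the full dataset
```

On the real dataset you would point `root` at your own folder instead of the temporary directory.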
So I have an OBJ 3D model with its associated MTL file. The MTL file contains all the textures. However, when I convert the file to the USDZ format, the textures are not applied. Here is the command I use:
xcrun usdz_converter /Users/SaiKambampati/Downloads/Models/object.obj /Users/SaiKambampati/Downloads/Models/object.usdz
The USDZ file is created but the attributes and textures are not applied. Is there any way to include the MTL file when converting the OBJ model to USDZ model?
To convert OBJ to USDZ, I'd recommend using GLB as an intermediate format.
You can convert OBJ to GLB using Blender, by importing the OBJ, and exporting as GLB.
Then, Spase has a GLB to USDZ converter available at https://spase.io/converter that does the conversion quickly (and for free), powered by a Google USDZ library. It's a drag-and-drop tool, and the USDZ can be downloaded immediately after conversion.
For what it's worth, I created a model in Blender intended for USDZ and used regular texture and UV mapping to color it. When I exported the OBJ I also got a .mtl file but did not need it. When I passed the texture .png to usdz_converter as the color_map parameter, the texture showed up in Quick Look on iOS 12.
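If you end up scripting this, the texture is passed to usdz_converter as a flag rather than via the MTL file. Here is a minimal sketch that assembles the invocation as an argument list (the file names are placeholders; `-color_map` is the Xcode 10 usdz_converter option referred to above):

```python
import shlex

def build_usdz_cmd(obj_path, usdz_path, color_map=None):
    """Assemble the xcrun usdz_converter invocation as an argument list."""
    cmd = ["xcrun", "usdz_converter", obj_path, usdz_path]
    if color_map:
        # The texture goes in as a converter flag, not via the .mtl file.
        cmd += ["-color_map", color_map]
    return cmd

cmd = build_usdz_cmd("object.obj", "object.usdz", color_map="texture.png")
print(shlex.join(cmd))  # pass cmd to subprocess.run(cmd) to actually convert
```

The sketch only builds and prints the command; on a Mac with Xcode installed you would hand the list to subprocess.run() to perform the conversion.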
Xcode's Command Line Converter for USDZ doesn't understand associated material MTL files for OBJ 3D models at the moment.
For texturing converted USDZ models in Xcode 10/11/12/13 you first need to save the UV-mapped textures in Autodesk Maya as JPEG or PNG files (for OBJ, ABC, FBX, USD or DAE models) and then assign these UV textures in Xcode. Or, you can use the Maya 2020 / 2022 USD plug-in to generate a USDZ model with textures.
For further details, read about Pixar's USD file format.
Does anyone know whether it is possible to get a model file from the doctor after a 3D ultrasound of a pregnant woman? I mean something like a DICOM (.dcm) file or an .stl file, something I can work with and eventually print on a 3D printer.
Thanks a lot.
A quick search for "dicom 3d ultrasound sample" turned up one that you might be able to use for internal testing. You can get the file from here.
Hello,
The first problem you will face is the file format.
Because of the way the images are generated, 3D ultrasound data have voxels that are expressed in a spherical system. DICOM (as it stands now) only supports voxels in a Cartesian system.
So the manufacturers have a few choices:
They can save the data in a proprietary format (e.g. Kretzfile for GE, MVL for Samsung).
They can save the data in private tags inside a DICOM file (GE, Hitachi, Philips).
They can re-format the voxels to be Cartesian, but then the data has been transformed, and nobody likes that. In any case, since they also need to save the original (untransformed) data, the companies that do offer Cartesian voxels usually save them in the same way as the original, so they end up not in normal DICOM tags but in a proprietary variant.
This is why most of the standard software that can do 3D from CT or MR will not be able to cope with these data files.
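To make the coordinate-system mismatch concrete, here is a hedged sketch of the spherical-to-Cartesian mapping that each voxel position would have to go through before the data could live in ordinary Cartesian DICOM tags. The angle conventions vary by scanner geometry; this uses the standard physics convention (polar angle from the z axis, azimuth in the xy plane):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Map a voxel at radius r, polar angle theta, azimuth phi to (x, y, z)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)

# A voxel on the polar axis stays on the z axis:
print(spherical_to_cartesian(1.0, 0.0, 0.0))  # (0.0, 0.0, 1.0)
```

Resampling a whole volume this way also requires interpolating onto a regular grid, which is the transformation step the answer says vendors (and their customers) prefer to avoid.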
Then the second problem is noise. Ultrasound datasets are inherently very noisy! Again, standard 3D reconstruction software was designed for CT or MR and has problems with this.
I do have a product that will read most of the 3D ultrasound files and create an STL model directly from the datasets (spherical or Cartesian). It is called baby SliceO (http://www.tomovision.com/products/baby_sliceo.html)
Unfortunately, it is not free, but you can try it without a license. Give it a try and let me know if you like it...
Yves
I am using Metaio's Creator to create an AR event, with a model the client purchased from TurboSquid.com. Every time I try to convert the .3DS file to an .MD2 file I get an error that there are too many polygons.
Is there a program that can automatically convert the .3DS or .OBJ to an .MD2 without lowering the polygon count, or that removes polygons automatically without risking the integrity of the model?
MD2 inherently supports only 4096 polygons. As @0r10n said, you have to reduce the number of polygons to make it work with MD2. For conversion, I had the best experience using the QTip plugin for 3ds Max: http://qtipplugin.com/
Very easy to use and very powerful.
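Before reaching for a converter, you can check whether a model even fits under the 4096-triangle limit. A minimal sketch that counts face records in Wavefront OBJ text (it assumes the mesh is already triangulated; quads would each need to be counted as two triangles):

```python
def count_obj_faces(obj_text):
    """Count 'f' (face) records in Wavefront OBJ source text."""
    return sum(1 for line in obj_text.splitlines()
               if line.strip().startswith("f "))

MD2_MAX_TRIS = 4096  # hard limit of the MD2 format

# A one-triangle OBJ as a self-contained example:
sample = """\
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
faces = count_obj_faces(sample)
print(faces, faces <= MD2_MAX_TRIS)  # 1 True
```

For a real model you would read the .obj file from disk and compare the count against the limit before attempting the MD2 conversion.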
If the model has too many polygons you can import it into a 3DCC-tool like Max, Maya or Blender and use their tools to reduce the polygon-count.
For example, in Blender 2.49 you can use the PolyReducer script, which preserves UV coordinates.
How do I export a .3ds file from 3ds Max 2010 for use in RenderMonkey?
When I look into Stream Mapping in RenderMonkey,
there are POSITION, NORMAL, TEXCOORD, TANGENT, BINORMAL, TESSFACTOR etc.
I want to know how to export that information so it can be sent to the vertex shader as stream data.
Thanks in advance.
Which streams do you want/need? A .3ds file usually contains POSITION, NORMAL, and TEXCOORD. If you export from 3ds Max to an OBJ, you have more control over exactly what gets exported. TANGENT and BINORMAL are used by shaders such as bump mapping and are generated by RenderMonkey. I don't know how TESSFACTOR is used. Just make sure that your VS input struct uses the correct semantic (pink text in RenderMonkey) for each input stream and RM will populate them appropriately.
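For reference, TANGENT and BINORMAL are not stored in the model at all; they are derived per triangle from positions and texture coordinates. A hedged sketch of the standard derivation (the vectors are left unnormalized, and per-vertex averaging across triangles is omitted for brevity):

```python
def triangle_tangent_binormal(p0, p1, p2, uv0, uv1, uv2):
    """Tangent/binormal for one triangle from positions and UVs (unnormalized)."""
    e1 = [p1[i] - p0[i] for i in range(3)]  # position edge vectors
    e2 = [p2[i] - p0[i] for i in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]  # UV edge deltas
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)  # assumes non-degenerate UVs
    tangent  = [r * (dv2 * e1[i] - dv1 * e2[i]) for i in range(3)]
    binormal = [r * (du1 * e2[i] - du2 * e1[i]) for i in range(3)]
    return tangent, binormal

# A triangle in the xy plane with UVs aligned to x/y yields the obvious axes:
t, b = triangle_tangent_binormal((0, 0, 0), (1, 0, 0), (0, 1, 0),
                                 (0, 0), (1, 0), (0, 1))
print(t, b)  # [1.0, 0.0, 0.0] [0.0, 1.0, 0.0]
```

This is the same math any tool generating tangent-space vectors performs, which is why the exporter only needs to supply POSITION, NORMAL, and TEXCOORD.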
You can use the DirectX Exporter for 3ds Max to export your scenes to an .x file containing texcoords, normals and binormals. The latest release can even convert a standard material with multiple UV coordinates to a DirectX material (.fx) and generate shader code such as tangent-space normal mapping.