I'm trying to render a 3D model .obj file downloaded from the web, but I'm having issues applying the textures to the model. The .mtl, .obj and texture .jpg files are all in the same folder (e.g. car.obj, car.mtl, car.jpg, carDark.jpg).
Are the textures supposed to be applied automatically by Model I/O? How am I supposed to use Model I/O to import textures?
Did you check the file path to the .mtl file in the .obj file?
If this path is incorrect then the materials will not load.
OBJ and MTL files are text files so you can use a text editor to open them.
The path to the .mtl file should be at the top of the .obj file:
mtllib mymtlfile.mtl
If they are in the same folder you can just strip the path.
If this path is okay then you should check the paths to the textures in the .mtl file. Look for lines starting with map_. For instance:
map_Kd mydiffusetexture.png
map_Ka /path/to/myambienttexture.tga
map_bump mybumptexture.jpg
If you strip all the paths then the file import should work.
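As a quick way to apply the advice above, here is a small stdlib-only Python sketch (the file contents are placeholders) that strips directory prefixes from `mtllib` and `map_*` lines so the loader looks for everything in the same folder:

```python
import re
from pathlib import Path

def strip_obj_mtl_paths(text):
    """Replace any path in mtllib/map_* lines with the bare file name."""
    def fix(match):
        keyword, target = match.group(1), match.group(2)
        return f"{keyword} {Path(target).name}"
    # Matches lines like "mtllib models/car.mtl" or "map_Kd /tex/car.jpg"
    return re.sub(r"^(mtllib|map_\w+)\s+(\S+)", fix, text, flags=re.MULTILINE)

obj_text = "mtllib models/car.mtl\nv 0 0 0\n"
mtl_text = "newmtl body\nmap_Kd /textures/car.jpg\n"
print(strip_obj_mtl_paths(obj_text))  # the mtllib line now reads "mtllib car.mtl"
print(strip_obj_mtl_paths(mtl_text))  # the map_Kd line now reads "map_Kd car.jpg"
```

Run it over both the .obj and the .mtl file and write the results back before importing.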
Add your texture image files to art.scnassets.
Select your scene, then assign your textures in the Material properties.
I am currently in the process of building a Brachiograph plotter. I am 75 years old and have a minor disability with my hands. I would like to find out if anybody can tell me how I can output Turtle Graphics to a file that can be read by the Brachiograph plotter. I believe that the linedraw.py converts a .svg to a .json file that is read by the Brachiograph. I would like to create some fractal image files and print them with the Brachiograph.
Thank you for any help that you can offer with this project.
Dick Burkhartzmeyer
Welcome to Stack Overflow. In your OP you state "I believe that the linedraw.py converts a .svg to a .json file[...]". Looking at the documentation, it appears that linedraw.py converts a bitmap to an SVG file and a JSON file.
From the documentation:
The main use you will have for linedraw is to take a bitmap image file
as input, and save the vectorised version as:
an SVG file, to check it
a JSON file, to draw with the BrachioGraph.
The way I would approach this is to create an SVG with Turtle, then convert that to a PNG in your Python script. linedraw.py can then convert the PNG to your BrachioGraph JSON file. I found a solution for the conversion in another SO thread; the answer there uses Cairosvg to convert the SVG to PNG format.
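Since the goal is fractal images, one stdlib-only alternative sketch (not BrachioGraph's or linedraw's own API, just plain SVG output) is to generate the fractal points yourself and write the SVG directly; the resulting file can then go through the Cairosvg/linedraw pipeline described above:

```python
import math

def koch_segment(p1, p2, depth):
    """Recursively subdivide a segment into the Koch-curve pattern."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)          # one third along
    b = (x1 + 2 * dx, y1 + 2 * dy)  # two thirds along
    # Peak of the equilateral bump: the middle third rotated by -60 degrees
    px = a[0] + dx * math.cos(-math.pi / 3) - dy * math.sin(-math.pi / 3)
    py = a[1] + dx * math.sin(-math.pi / 3) + dy * math.cos(-math.pi / 3)
    pts = []
    for s, e in [(p1, a), (a, (px, py)), ((px, py), b), (b, p2)]:
        pts.extend(koch_segment(s, e, depth - 1)[:-1])
    return pts + [p2]

def koch_svg(depth=3, size=600):
    """Write a Koch curve as a single-polyline SVG string."""
    points = koch_segment((0, size * 0.7), (size, size * 0.7), depth)
    path = " ".join(f"{x:.1f},{y:.1f}" for x, y in points)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            f'<polyline points="{path}" fill="none" stroke="black"/></svg>')

with open("koch.svg", "w") as f:
    f.write(koch_svg())
```

This sidesteps Turtle entirely, which may be easier if fine motor control with a mouse is an issue: the depth parameter is the only knob.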
What is the significance of the andy.obj file in the ARCore Sample?
Let's say we replace the andy.png with a new image; how can we generate an .obj file for the new image?
The OBJ file describes the geometry; the PNG file is the texture to "stretch" over this 3D object. You have to use a 3D modelling program like Blender to create a new model.
This is how you export OBJ files in Blender: https://blender.stackexchange.com/questions/121/how-do-i-export-a-model-to-obj-format
The sample code can only handle the simplest OBJ models, those with a single texture file.
More complex OBJ models usually come with an MTL file that refers to several different texture files. Handling those requires some extra work on the existing code. If you are interested, check the code I implemented for this case: https://github.com/JohnLXiang/arcore-sandbox. Specifically, take a look at ObjectRenderer.createOnGlThread().
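To illustrate the extra step involved (the linked repo does this in Java; this is only a Python sketch of the idea), the MTL file has to be parsed into a mapping from material name to its texture maps, so each mesh group can bind the right texture:

```python
def parse_mtl(text):
    """Map each material name to its texture maps (map_Kd, map_bump, ...)."""
    materials, current = {}, None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "newmtl":
            current = parts[1]          # start a new material entry
            materials[current] = {}
        elif parts[0].startswith("map_") and current is not None:
            materials[current][parts[0]] = parts[-1]  # texture file name
    return materials

mtl = """newmtl body
map_Kd car.jpg
newmtl glass
map_Kd carDark.jpg
"""
print(parse_mtl(mtl))  # one dict entry per material, each with its texture files
```

The renderer then uploads one GL texture per referenced file instead of assuming a single texture for the whole model.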
To export a texture as image in Blender do the following:
Select your object and enter edit mode. Select all vertices/faces (press 'a'), then start the UV mapping (press 'u') and select one of the UV-mapping options. You must test which option works best for your model; I'm not sure which UV-mapping option ARCore uses.
Then go to the UV/Image Editor:
Choose Export UV Layout from the menu, and save your image.
To create a new .obj model for your AR app you need to use 3D authoring software like Autodesk Maya, Autodesk 3dsMax, Blender, SideFX Houdini, Cinema 4D, etc. These applications can help you create a high-quality polygonal model with a corresponding .mtl material file.
But you should know that Sceneform supports 3D assets not only in OBJ format (where animations aren't supported) but also in FBX (with animations) and in glTF (animations not supported).
.obj
.fbx
.glTF
Sceneform's ASCII and binary asset definitions are also supported:
.sfa
.sfb
Supported material files (aka textures for your 3D assets) have the following extensions: MTL, BIN, PNG, JPG and native Sceneform's SFM.
.mtl
.bin
.png
.jpg
.sfm
Hope this helps.
I'm making changes to an existing project that uses fontello.
And I would like to add some icons to the project's font file.
What is the easiest way to add those new icons? Can I create a 2nd font file in Fontello and then somehow merge the two font files?
When you download a Fontello pack it includes a config.json file, which is a mapping of the characters included in your custom font. If you want to add more characters to your font, start by uploading this config file to fontello.com, change your selected characters, and then download a new pack. Make sure to replace all the font and CSS files (so that the new characters start working) as well as the config file (for the next time you want to change the set of characters included).
You can merge fontello collections using a text editor by opening the config.json file of one collection and copying the glyphs to the config.json file of the other collection. After you do that, drag (upload) the new merged config.json file to fontello.com and you can then download your new merged collection.
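The copy-and-paste merge described above can also be scripted. A hedged Python sketch (it assumes the config.json layout has a top-level "glyphs" list whose entries carry a "css" name, which is what Fontello packs use, but verify against your own file):

```python
import json

def merge_fontello_configs(path_a, path_b, out_path):
    """Merge the glyph lists of two Fontello config.json files,
    de-duplicating by CSS name, and write the merged config."""
    with open(path_a) as f:
        base = json.load(f)
    with open(path_b) as f:
        extra = json.load(f)
    seen = {g["css"] for g in base["glyphs"]}
    for glyph in extra["glyphs"]:
        if glyph["css"] not in seen:
            base["glyphs"].append(glyph)
            seen.add(glyph["css"])
    with open(out_path, "w") as f:
        json.dump(base, f, indent=2)
```

Upload the resulting file to fontello.com as usual to download the merged collection.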
To add new icons, simply drag (upload) the SVG images to fontello.com. You may need to fix the paths (Fontello will only accept files with a single path) in your SVG files before uploading them. You can do this with the free tool Inkscape by following these steps:
Open file
Select all
Object -> Ungroup
Path -> Union
Path -> Combine
File -> Vacuum Defs
Save as -> Plain SVG
If the SVG file has multiple paths that you cannot remove by combining paths, you can use Edit > XML Editor to remove them. You should have only one path when you are done.
You can also check the SVG file output in a text editor. If saved correctly, you should see a single path element and an empty defs element.
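That text-editor check can be automated. A small stdlib-only Python sketch that counts the path elements and verifies the defs elements are empty:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def check_single_path(svg_text):
    """Return (number of path elements, True if every defs element is empty)."""
    root = ET.fromstring(svg_text)
    paths = root.findall(f".//{SVG_NS}path")
    defs = root.findall(f".//{SVG_NS}defs")
    return len(paths), all(len(d) == 0 for d in defs)

svg = '<svg xmlns="http://www.w3.org/2000/svg"><defs/><path d="M0 0 L10 10"/></svg>'
print(check_single_path(svg))  # (1, True) means the file is ready for Fontello
```

Anything other than a path count of 1 means the Inkscape steps above need another pass.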
Just drag and drop your SVG font onto the Fontello home page. Your previous font will be displayed; you can add additional icons to it and download the new font files.
It seems that most 3D authoring applications use Z as the 'Up' axis. While SceneKit uses Y as the 'Up' axis. SceneKit allows you to load scenes as Collada .DAE files. When loading a Scene via either:
SCNScene(named: String?, inDirectory: String?, options: [NSObject : AnyObject]?)
or
SCNSceneSource(URL url: NSURL!, options: [NSObject : AnyObject]!)
You can specify options including SCNSceneSourceConvertToYUpKey and SCNSceneSourceConvertUnitsToMetersKey.
Setting these accordingly, I expected the various nodes to be transformed and scaled when I added them to my own scene constructed from Nodes in the loaded scene. But these options appear to have no effect.
let myScene = SCNScene(named: "Scene.dae", inDirectory: nil, options: [SCNSceneSourceConvertToYUpKey:true, SCNSceneSourceConvertUnitsToMetersKey:25.4])
Have I misunderstood the meaning of these option parameters?
SceneKit does not directly load DAE (or ABC) files on iOS -- it loads scenes from a private Apple format, which Xcode automatically converts to when you include scene files in your project. Part of this conversion is the option to transform the up axis.
I don't believe that option is exposed when you simply include the DAE file as a bundle resource. (That might be a good bug to file.) However, it's a good idea to use the new SceneKit Asset Catalog feature instead, anyway -- put your DAE files and whatever external resources (textures) into a folder with a .scnassets extension, and Xcode will process them together to optimize for the target device when you build. Then, when you select that folder in the Xcode navigator, you'll get an editor for scene building options:
All the boxes there are good to check. :) (Since the first one doesn't come with an explanation: interleaving means organizing the vertex data for a geometry so you get better GPU-memory locality for fewer cache misses during vertex processing, which is important for performance on embedded devices.)
Hm, I don't see anything about units in there, though. Might be another good bug to file.
There's another option, too -- all SceneKit objects implement the NSSecureCoding protocol, so you can load and preprocess your scene on OS X, then use NSKeyedArchiver to write it out. Include the resulting file in your iOS project as a bundle resource, and Xcode won't preprocess it (it's already as compressed and optimized as it can get) -- and if you name it with an .scn extension, you can use all the SCNScene and SCNSceneSource methods for loading it just like you would a (preprocessed) DAE file.
I'm relatively new to programming and currently trying to learn more about three.js, a JavaScript 3D library. Many things are relatively easy to understand, but I am having a hard time saving a geometry and its material.
I have built a simple cube, and an image is projected onto it whenever a picture is loaded, like this:
$('#picture')[0].onload = function() {
    var texture = new THREE.Texture(this, null);
    texture.needsUpdate = true;
    cube.material = new THREE.MeshBasicMaterial({ map: texture });
    render();
};
My goal is to save the cube and its material. Ideally I would like to save it directly as a .dae file, since another program into which I would like to import my cube only takes .dae files.
However, I cannot find a Collada exporter for three.js. I therefore searched for other exporters that produce a file format I can open in e.g. Blender or MeshLab and save as .dae from there. Unfortunately, I have not been able to save both geometry and material/picture with these exporters:
GeometryExporter.js, OBJExporter.js, SceneExporter.js
I also looked into the combination of OBJ and MTL. I did find OBJMTLLoader.js, but I lack the knowledge to rewrite OBJMTLLoader.js into an OBJMTLExporter.js.
Can anyone help me find a way to get from a cube and its (picture) material in THREE.js to a .dae file?
For a very simple use case such as this, you could write the DAE manually, or even simpler, modify existing DAE on the fly. It's just XML if you change the file extension.
Create the simple cube with material in Blender, export it to DAE, and use it as a template. Simple DAE files are not very hard to read in a text editor; you could find the relevant parts and just search & replace them in JavaScript (the texture reference, material properties, and maybe UVs).
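To make the template idea concrete, here is a sketch in Python rather than in-browser JavaScript (the same string replacement works either way). It assumes the image file name lives in the COLLADA init_from elements, which is where Blender's exporter puts texture references, but check your own template:

```python
import xml.etree.ElementTree as ET

COLLADA_NS = "http://www.collada.org/2005/11/COLLADASchema"

def swap_dae_texture(dae_text, new_image):
    """Replace the image referenced by every init_from element in a COLLADA file."""
    ET.register_namespace("", COLLADA_NS)  # keep the default namespace on output
    root = ET.fromstring(dae_text)
    for node in root.iter(f"{{{COLLADA_NS}}}init_from"):
        node.text = new_image
    return ET.tostring(root, encoding="unicode")
```

Geometry and UVs come from the Blender-exported template unchanged; only the texture reference is swapped per cube.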
This might not be exactly what you are looking for, but could work. Not many formats have proper support for materials, and I doubt you have much success finding a working, fully featured Three.js exporter for such a thing (not sure though).