Converting OBJ with textures to USDZ using Reality Converter - augmented-reality

I'm trying to convert an .obj file with several .png texture files into .usdz through Reality Converter, but it isn't working. It accepts the object, but when I put the .png files under the material folder, nothing happens.
Any suggestions?
I end up with a blank/white object.

I encountered the same issue myself, but don't worry. The main preview doesn't reflect the texture after you add a png file, but the small preview on the left side (click the Models button) displays the texture correctly. So if you export, the .usdz file is written correctly, with the texture.

Reality Converter
An OBJ model comes with two companion files: a texture file (usually .jpg or .png) and an .mtl file. The latter is an auxiliary ASCII file containing definitions of materials that the OBJ file references. This definitions file must be in the same directory as the OBJ and its texture. My .mtl file (generated in Autodesk Maya) contains the following lines:
newmtl initialShadingGroup
illum 4
Kd 0.00 0.00 0.00
Ka 0.00 0.00 0.00
Tf 1.00 1.00 1.00
map_Kd texture.jpeg
Ni 1.00
As you can see, everything is displayed correctly.
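Since a missing or mislocated texture is the usual cause of a blank model, it can help to verify the map_Kd references before converting. A minimal sketch (file names are placeholders; it resolves paths relative to the .mtl itself, which is where OBJ loaders expect them):

```python
from pathlib import Path

def texture_paths(mtl_file):
    """Return the texture files referenced by map_Kd lines in an .mtl.

    Paths are resolved relative to the .mtl's own directory, since the
    OBJ, the .mtl, and the textures are expected to sit side by side.
    """
    mtl = Path(mtl_file)
    refs = []
    for line in mtl.read_text().splitlines():
        parts = line.split()
        if parts and parts[0] == "map_Kd":
            refs.append(mtl.parent / parts[-1])
    return refs
```

For example, `[p for p in texture_paths("model.mtl") if not p.exists()]` lists any textures a converter would fail to find.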

Related

Converting AI to EPS / JPEG without "Use artboards"

I'm trying to read(preview) AI (adobe illustrator) file in my web application. my web app is on Linux machine and mainly uses Python.
I couldn't find any native Python code that can preview an AI file, so I kept searching for a solution and found Ghostscript, which can convert AI to JPG/PNG, and those formats I have no problem previewing.
The issue I have is that I need the preview to include the whole document and not just the artboard. In Illustrator this is possible by unchecking "Use artboards" when saving; see screenshot: https://helpx.adobe.com/content/dam/help/en/illustrator/how-to/export-svg/_jcr_content/main-pars/image0/5286-export-svg-fig1.jpg
but when I try to export with Ghostscript, I can't make it work...
From my understanding, it's best to first convert to EPS and then from that to JPG/PNG, but I failed doing that as well: the items outside the artboard are not showing.
On Linux, these are the commands I basically tried, after installing Ghostscript:
gs -dNOPAUSE -dBATCH -sDEVICE=eps2write -sOutputFile=out.eps input.ai
gs -dNOPAUSE -dBATCH -sDEVICE=jpeg -r300 -sOutputFile=out.jpeg input.ai
gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r300 -sOutputFile=out.png input.ai
If it's not possible with Ghostscript and I need ImageMagick instead, I don't mind using it... I tried it for 10 minutes and just got a bunch of errors, so I left it....
AI file for example: https://drive.google.com/open?id=1UgyLG_-nEUL5FLTtD3Dl281YVYzv0mUy
Jpeg example of the output I want: https://drive.google.com/open?id=1tLT2Uj1pp1gKRnJ8BojPZJxMFRn6LJoM
Thank you
Some updates on the topic: I've found this:
https://gist.github.com/moluapple/2059569
This is an AI PGF extractor which should theoretically help extract the additional data from the PDF. It seems quite old and is written for Win32, so I cannot test it at the moment, but it's at least some kind of lead.
Firstly, Adobe Illustrator native files are not technically supported by Ghostscript at all. They often work, because they are normally either PostScript or PDF files with custom bits that can be ignored for the purposes of drawing the content, but that's not guaranteed.
Secondly: no, do not convert the file through a chain of intermediate formats! That's a piece of cargo-cult mythology that's been doing the rounds for ages. There are sometimes reasons for doing so, but in general it will simply magnify problems, not solve them. Really, don't do that.
You haven't quoted the errors you are getting and you haven't supplied any files to look at, so it's not really possible to tell what your problem is. I have no clue what an 'artboard' is, and a picture of the Illustrator dialog doesn't help.
Perhaps if you could supply an example file, and maybe a picture of what you expect, it might be possible to figure it out. My guess is that your '.ai' file is a PDF file, and that it has a MediaBox (which is what Ghostscript uses by default) and an ArtBox which is what you actually want to use. Or something like that. Hard to say without more information.
Edit
Well, I'm afraid the answer here is that you can't easily get what you want from that file without using Illustrator.
The file is a PDF file (if you rename input.ai to input.pdf then you can open it with a PDF reader). But Illustrator doesn't use most of the PDF file when it opens it. Instead the PDF file contains a '/PieceInfo' key, which is a key in the Page dictionary. That points to a dictionary which has a /Private key, which (finally!) points to a dictionary with a bunch of Illustrator stuff:
52 0 obj
<<
/AIMetaData 53 0 R
/AIPrivateData1 54 0 R
/AIPrivateData10 55 0 R
/AIPrivateData11 56 0 R
/AIPrivateData2 57 0 R
/AIPrivateData3 58 0 R
/AIPrivateData4 59 0 R
/AIPrivateData5 60 0 R
/AIPrivateData6 61 0 R
/AIPrivateData7 62 0 R
/AIPrivateData8 63 0 R
/AIPrivateData9 64 0 R
/ContainerVersion 11
/CreatorVersion 23
/NumBlock 11
/RoundtripStreamType 1
/RoundtripVersion 17
>>
endobj
That's the actual saved file format of the Illustrator file. You can think of the PDF file as a 'preview' wrapped around the Illustrator native file. Illustrator reads the PDF file to find its own data, then throws the PDF file away and uses the native file format stored within it instead.
The problem is that the PDF part of the file simply doesn't contain the content you want to see. That's stored in the Illustrator native data. Ghostscript just renders what's in the PDF file, it doesn't look at the Illustrator native file.
Looking at the Illustrator private data, some of it is uncompressed, but most is compressed. It doesn't say how it is compressed, but applying the FlateDecode filter produces a good old-fashioned Illustrator PostScript file, one that will work with Ghostscript.
But you would have to manually parse the PDF file, extract all the compressed AIPrivateData streams, concatenate them together, apply the FlateDecode filter to decompress them, and only then send the resulting output to Ghostscript with the -dEPSCrop switch set. That will result in the output you want.
But neither Ghostscript nor ImageMagick (which generally uses Ghostscript to render PDF files) will do any of that for you, you would have to do it all yourself.
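As a rough illustration of the concatenate-then-inflate step (extracting the raw stream bytes from the /Private dictionary under /PieceInfo is left to a PDF library of your choice, and the chunks must be taken in numeric order, AIPrivateData1, AIPrivateData2, ..., not the alphabetical order they are listed in):

```python
import zlib

def inflate_ai_private_data(chunks):
    """Join the raw /AIPrivateDataN stream bytes and inflate them.

    The chunks must be concatenated *before* decompression: together
    they form one Flate stream that was split across several PDF
    objects. The result should be an Illustrator PostScript program.
    """
    return zlib.decompress(b"".join(chunks))

# The inflated output can then be rendered by Ghostscript, e.g.:
#   gs -dEPSCrop -dNOPAUSE -dBATCH -sDEVICE=pngalpha \
#      -sOutputFile=out.png recovered.ps
# (Per the answer above, some AIPrivateData blocks are stored
# uncompressed; those would be passed through without inflating.)
```

This is only a sketch of the procedure the answer describes, not a turnkey tool: you still have to walk the PDF object graph yourself to collect the chunk bytes.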

How to convert OBJ with MTL file to USDZ format

So I have an OBJ 3D model with its associated MTL file. The MTL file references all the textures. However, when I convert the file to the USDZ format, the textures are not applied. This is the command I use.
xcrun usdz_converter /Users/SaiKambampati/Downloads/Models/object.obj /Users/SaiKambampati/Downloads/Models/object.usdz
The USDZ file is created but the attributes and textures are not applied. Is there any way to include the MTL file when converting the OBJ model to USDZ model?
To convert OBJ to USDZ, I'd recommend using GLB as an intermediate format.
You can convert OBJ to GLB using Blender, by importing the OBJ, and exporting as GLB.
Then Spase has a GLB to USDZ converter available at https://spase.io/converter, which rapidly does the conversion (for free), powered by a Google USDZ library. It's a drag-and-drop tool, and after conversion the USDZ can be downloaded instantly.
For what it is worth, I created a model in Blender intended for usdz and used regular Texture and UV mapping to color it. When I output the OBJ I too got a .mtl file but did not need it. When I passed the texture .png to usdz_converter as the color_map param the texture showed up in Quick Look on iOS 12.
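Based on the color_map workflow described above, the invocation can be assembled programmatically. A sketch (file names are placeholders; -color_map is the Xcode 10-era usdz_converter flag the answer refers to):

```python
import subprocess

def usdz_convert(obj_path, usdz_path, color_map=None, run=False):
    """Build (and optionally run) an xcrun usdz_converter command.

    Because the converter ignores the OBJ's .mtl file, the texture has
    to be supplied explicitly via the -color_map parameter.
    """
    cmd = ["xcrun", "usdz_converter", obj_path, usdz_path]
    if color_map:
        cmd += ["-color_map", color_map]
    if run:
        # Requires macOS with the Xcode 10/11 command-line tools.
        subprocess.run(cmd, check=True)
    return cmd
```

Returning the argument list also makes it easy to log or inspect the exact command before running it.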
Xcode's Command Line Converter for USDZ doesn't understand associated material MTL files for OBJ 3D models at the moment.
For texturing converted USDZ models in Xcode 10/11/12/13 you first need to save UV-mapped textures in Autodesk Maya as JPEG or PNG files (for OBJ, ABC, FBX, USD or DAE models) and then assign these UV textures in Xcode. Or you can use the Maya 2020 / 2022 USD plug-in to generate a USDZ model with textures.
For further details read about Pixar USD File Format HERE.

SceneKit: import 3D characters inside authoring software into scene?

In the WWDC 2015 fox demo, there is a SCN file representing the 3D fox. If you want to incorporate the fox in a different app, you import the fox's SCN file and its texture maps.
But if you have 3D characters made in an authoring program like Cinema 4D (https://www.maxon.net/en/products/cinema-4d/overview/), how do you generate similar SCN files for the different characters? Cinema4D cannot export SCN files like this so what do you do?
And does the process change if the characters are animated?
I'm using C4D r12, I imagine the process should be the same for later releases.
One option is to create a separate file for each character. Pay attention to the organization in the Object Manager: the hierarchy of objects as listed there will be the scene graph of nodes in your imported scene file. This includes nulls, which will end up as container nodes in SceneKit. The names of your objects and nulls in C4D will be the names of the SCNNodes in the scene file. When you have this set up as desired, save via File > Export... > COLLADA (*.dae)
Alternatively, you could create all your characters with one file and then parse them in SceneKit using the unique name of that character's container node (previously a "container" null in C4D).
Xcode supports COLLADA (.dae) files. You can import them into your assets folder and convert them to .scn files, or Xcode will automatically convert them when you compile your app.
Collada files can also contain animation data, and can be exported from most 3D authoring programs.

Is it possible to generate a favicon.ico file under 5,430 bytes?

Considering Google can't even do it, I'm assuming the answer is "No"?
I just went through the basic suggestions from audreyr's "Favicon Cheat Sheet" and created a favicon.ico file consisting of two optimized png files using ImageMagick like so:
$ convert favicon-16.png favicon-32.png favicon.ico
My favicon-16.png file was 137 bytes after optimizing with optipng and my favicon-32.png file was 144 bytes after optimization.
So you can understand my surprise when the combined favicon.ico file created by ImageMagick ended up being 5,430 bytes. Coincidentally, that's the exact same size as Google's official favicon.ico file.
Is 5,430 bytes the absolute minimum size for any true image/x-icon file?
That seems a little excessive when realistically every single browser accessing my favicon.ico file will be extracting the 144 byte 32x32 png version.
If the source favicon-16.png and favicon-32.png are truecolor (RGB888 or RGBA8888), ImageMagick will write an uncompressed 5,430-byte ICO file. However, if they are indexed-color (i.e., in PNG8 format) or grayscale, the ICO may be smaller (I observe 3,638-byte ICO files in these cases).
The images are stored within the ICO in BMP format, not PNG (only 256x256 images get stored in PNG format inside the ICO).
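Those byte counts fall straight out of the ICO/BMP layout: a 6-byte ICONDIR, 16 bytes per directory entry, and, for each image, a 40-byte BITMAPINFOHEADER, a 256-entry palette for 8-bit images, the pixel rows, and a 1-bit AND mask with rows padded to 32 bits. A sketch of the arithmetic:

```python
def ico_size(sizes, bits_per_pixel=32):
    """Predict the byte size of an uncompressed ICO storing BMP images.

    sizes is a list of square icon dimensions, e.g. [16, 32]. Each
    image contributes a BITMAPINFOHEADER, an optional palette (8-bit
    only), its color (XOR) rows, and its 1-bit AND-mask rows.
    """
    total = 6 + 16 * len(sizes)                        # ICONDIR + entries
    for s in sizes:
        palette = 1024 if bits_per_pixel == 8 else 0   # 256 RGBQUADs
        row = ((s * bits_per_pixel + 31) // 32) * 4    # color rows, padded
        mask_row = ((s + 31) // 32) * 4                # 1-bit mask rows
        total += 40 + palette + row * s + mask_row * s
    return total
```

`ico_size([16, 32])` returns 5430 and `ico_size([16, 32], 8)` returns 3638, matching the observed files, so the 5,430-byte figure is simply the cost of truecolor BMP storage inside the ICO, not a hard minimum.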

Merge MDAT atoms of MP4 files

I have a series of MP4 files (H.264 video, AAC audio, 16 kHz). I need to merge them together programmatically (Objective-C, iOS), but the final file will be too large to hold in memory, so I can't use AVFoundation to do this for me.
I have written code which does the merge and takes care of all of the MP4 atoms (STBL, STSZ, STCO, etc.), based on just concatenating the contents of the respective MDATs. The problem I have is that while the resultant file plays, the audio gradually gets out of sync with the video. What seems to be happening is that there is a disparity between the audio and video length in each file, which gets worse the more files I concatenate.
I've used MP4Box to generate a file from the command line and it is 'similar but different' to my output. A notable difference is that the length of the MDAT has changed and the chunk offsets have also changed (though sample sizes remain consistent).
I've recently read that AAC encoding introduces padding at the beginning and end of a stream so wonder if this is something I need to handle.
Q: Given two MDAT atoms containing H.264 encoded data and AAC audio, is my basic method sound, or do I need to introspect the MDAT data in some way?
Thanks for the pointer, Niels.
So it seems that the approach is perfectly reasonable; however, each individual MP4 file has marginal differences between the audio length and video length due to differences in sampling frequency. The MP4s include an EDTS.ELST combination which corrects this issue for that file. I was failing to consider the EDTS when I merged files; merging the EDTS data has fixed the issue.
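When debugging this kind of merge, it helps to be able to walk the atom structure of each file, for example to confirm that every trak really does carry an EDTS/ELST pair. A minimal box walker, assuming the plain ISO Base Media File Format layout (big-endian 32-bit size + 4-char type, with the 64-bit largesize escape):

```python
import struct

def iter_atoms(data, offset=0, end=None):
    """Yield (type, payload_offset, payload_size) for one nesting level.

    Container atoms such as moov, trak and edts can be walked
    recursively by calling iter_atoms again over their payload range.
    """
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, kind = struct.unpack(">I4s", data[offset:offset + 8])
        header = 8
        if size == 1:  # 64-bit largesize follows the type field
            size = struct.unpack(">Q", data[offset + 8:offset + 16])[0]
            header = 16
        if size < header:  # malformed, or size==0 ("extends to EOF")
            break
        yield kind.decode("latin-1"), offset + header, size - header
        offset += size
```

This only inspects structure; it does not rewrite STCO offsets or ELST entries, which the merge itself still has to handle.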
