Using offline collada2gltf converter to convert collada models with textures - gltf

My attached files are available here: https://drive.google.com/file/d/0B0rh99fSDowtUERnWEhVUUFwOUk/view?usp=sharing
I have 3D models in COLLADA format ('dhqg part 2.dae') with matching textures (JPG images in the 'dhqg part 2' folder).
My command line for the conversion was: ./collada2gltf -f 'filename.dae' -e 'TextureFolderName'.
I used the '-e' option to embed the texture folder, but the resulting glTF file did not contain the textures and was not usable.
If anyone knows the answer to my problem, please let me know.
Many thanks!

The -e flag does not take an argument. If you execute it like this...
./collada2gltf -f filename.dae -e
...the textures will be embedded in the resulting .gltf file. I tested that, and the textures were definitely there: the file was 117 MB and contained entries for the textures. Unfortunately, my browser tab crashes when trying to view it. :)
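For reference, with -e the converter inlines each texture as a base64 data URI inside the glTF JSON instead of referencing the external image files. The exact layout depends on the converter and glTF version, but an embedded image entry looks roughly like this (the image name and the truncated base64 payload are illustrative):
"images": {
    "texture_image_0": {
        "uri": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
    }
}
That also explains the 117 MB file size: every JPG ends up base64-encoded inside the JSON.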

Related

Edit Photos via Photoshop on a server

I want to create a web app where a user enters certain data via a form and then receives a custom rendered image. The image comes from a smart object in a PSD. It's kind of like a mock-up, which definitely requires some Photoshop filters to be rendered properly.
This should all happen in real time, and from my understanding it should be doable, since rendering a single image doesn't need much computing power.
I've done some research and haven't really found a solution that matches my problem. Is it necessary to run Photoshop on a server, remotely run a Photoshop script, and then upload the generated image somewhere else?
I've used Dataclay's Templater plugin for After Effects in the past, which offers similar functionality, but for video.
Looking forward to hearing your ideas.
Thanks
You can use the Dataclay plugin to handle still image exports out of After Effects. Make a single-frame duration composition in After Effects and rig the layers with the Templater plugin. Then use the PNG Sequence output module to render out a single frame.
From Dataclay's forums:
Exporting
A few extra steps are required to correctly render a project file as a PNG sequence using Templater. By default, a file rendered as a PNG sequence will have the frame number appended to the end of the file name, i.e.:
filename.png00000, filename.png00001, filename.png00002, etc.
In order to designate where in the filename the frame number should be added, we’ll need to use the output column. First, add a column named output to your data source. Next, add a filename with a set of brackets with five # signs to designate where the frame numbering should be added. For example:
filename[#####] would result in filename00001.png
or
[#####]filename would result in 00001filename.png
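As a concrete illustration, a CSV data source for Templater could get an output column like this (the other column names are hypothetical stand-ins for whichever layers you rigged):
headline,subtext,output
First variant,Rendered with Templater,mockup-one[#####]
Second variant,Rendered with Templater,mockup-two[#####]
Following the rule quoted above, the single rendered frame for the first row would come out as something like mockup-one00001.png.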

OBJ File for ARCore Application Development

What is the significance of the andy.obj file in the ARCore Sample?
Let's say we replace andy.png with a new image; how can we generate an .obj file for the new image?
The OBJ file describes the geometry; the PNG file is the texture that gets "stretched" over this 3D object. You have to use a 3D modelling program like Blender to create a new model.
This is how you export OBJ files in Blender: https://blender.stackexchange.com/questions/121/how-do-i-export-a-model-to-obj-format
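For reference, an OBJ file is plain text; a minimal hand-written example (not the actual andy.obj) describing one textured triangle looks like this:
# positions, UV coordinates, and a normal
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
# faces index vertex/uv/normal triplets
f 1/1/1 2/2/1 3/3/1
The vt lines are the UV coordinates that decide how the PNG texture is stretched over the geometry.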
The sample code can only handle the simplest OBJ models, ones with a single texture file.
More complicated OBJ models usually come with an MTL file that refers to several different texture files. To handle those, you need to do some extra work on the existing code. If you are interested, please check the code I implemented for this case: https://github.com/JohnLXiang/arcore-sandbox . Specifically, you can take a look at ObjectRenderer.createOnGlThread().
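For context, an MTL file is itself just text that defines named materials and points each one at its texture maps via statements like map_Kd; a simplified, made-up example with two materials:
newmtl body
map_Kd body_diffuse.png

newmtl face
map_Kd face_diffuse.png
Handling a model like this means loading every referenced image and binding the right texture for each material group, which is the extra work mentioned above.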
To export a texture as image in Blender do the following:
Select your object and enter Edit Mode. Select all vertices/faces (press 'a'). Then start the UV mapping by pressing 'u' and select one of the UV mapping options. You will have to test which option works best for your model; I'm not sure which UV mapping option ARCore uses.
Then go to the UV/Image Editor, choose Export UV Layout from the menu, and save your image.
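If you prefer to script those steps, a rough Blender Python equivalent looks like this (written against the 2.7x-era API; operator names and the need for the bundled "UV Layout" add-on can vary between Blender versions, so treat it as a sketch):
import bpy

# Assumes your model is the active object in the scene.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Unwrap; Smart UV Project is just one of the mapping options mentioned above.
bpy.ops.uv.smart_project()

# Save the UV layout as an image (same as Export UV Layout in the UV/Image Editor).
bpy.ops.uv.export_layout(filepath="/tmp/uv_layout.png", size=(1024, 1024))

bpy.ops.object.mode_set(mode='OBJECT')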
For creating a new .obj model for your AR app you need to use 3D authoring software like Autodesk Maya, Autodesk 3ds Max, Blender, SideFX Houdini, Cinema 4D, etc. These applications can help you create a high-quality polygonal model with a corresponding .mtl material file.
But you should know that Sceneform supports 3D assets not only in OBJ format (where animations aren't supported) but also in FBX (with animations) and in glTF (animations not supported).
.obj
.fbx
.glTF
Sceneform's ASCII and Binary Asset Definitions are also supported:
.sfa
.sfb
Supported material files (i.e. textures for your 3D assets) have the following extensions: MTL, BIN, PNG, JPG, and Sceneform's native SFM.
.mtl
.bin
.png
.jpg
.sfm
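If you go the Sceneform route, the Sceneform Gradle plugin is what turns one of the source formats above into the .sfa/.sfb pair at build time. A typical app/build.gradle entry looks roughly like this (paths and model name are illustrative):
apply plugin: 'com.google.ar.sceneform.plugin'

sceneform.asset('sampledata/models/andy.obj',  // source .obj/.fbx/.gltf
        'default',                             // material to apply
        'sampledata/models/andy.sfa',          // generated ASCII asset definition
        'src/main/assets/andy')                // generated binary .sfb used at runtime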
Hope this helps.

LibTiff.net - Save Directory

I have a massive TIFF file that contains 8 directories (resolutions). It is also tiled.
I can cycle through the directories and get the resolution of each. I want to save the 4th directory to a new TIFF file. I think it's possible, but I can't quite get my hands on how.
Basically I want to do this:
using (LibTiff.Classic.Tiff image = LibTiff.Classic.Tiff.Open(file, "r"))
{
    if (image.NumberOfDirectories() > 4)
    {
        image.SetDirectory(4);
        image.WriteDirectory("C:\\Temp\\Test.tif");
    }
}
It would be so nice if that were possible, but I know I have to create an output image and copy the rows of data into it. I'm not sure how yet. Any help would be much appreciated.
There are no built-in methods in the LibTiff.Net library that can be used to copy one directory into a new file.
The task is quite complex, and the best place to start is to look at the TiffCP utility's source code.
The utility can not only copy images, it can also extract directories.
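For orientation, here is a rough, untested sketch (in the spirit of what TiffCP does) of copying a single tiled directory into a new file with LibTiff.Net; ExtractDirectory is just an illustrative helper name, only a handful of tags are copied, and the output is written uncompressed, so treat it as a starting point rather than a drop-in solution:
using BitMiracle.LibTiff.Classic;

static void ExtractDirectory(string inputPath, string outputPath, short directoryIndex)
{
    using (Tiff input = Tiff.Open(inputPath, "r"))
    using (Tiff output = Tiff.Open(outputPath, "w"))
    {
        // Directory indexes are zero-based, so the 4th directory is index 3.
        input.SetDirectory(directoryIndex);

        // Copy the tags the new image needs (a real copy, like TiffCP, transfers many more).
        output.SetField(TiffTag.IMAGEWIDTH, input.GetField(TiffTag.IMAGEWIDTH)[0].ToInt());
        output.SetField(TiffTag.IMAGELENGTH, input.GetField(TiffTag.IMAGELENGTH)[0].ToInt());
        output.SetField(TiffTag.BITSPERSAMPLE, input.GetFieldDefaulted(TiffTag.BITSPERSAMPLE)[0].ToInt());
        output.SetField(TiffTag.SAMPLESPERPIXEL, input.GetFieldDefaulted(TiffTag.SAMPLESPERPIXEL)[0].ToInt());
        output.SetField(TiffTag.PHOTOMETRIC, input.GetField(TiffTag.PHOTOMETRIC)[0].ToInt());
        output.SetField(TiffTag.PLANARCONFIG, input.GetFieldDefaulted(TiffTag.PLANARCONFIG)[0].ToInt());
        output.SetField(TiffTag.TILEWIDTH, input.GetField(TiffTag.TILEWIDTH)[0].ToInt());
        output.SetField(TiffTag.TILELENGTH, input.GetField(TiffTag.TILELENGTH)[0].ToInt());
        output.SetField(TiffTag.COMPRESSION, Compression.NONE);

        // Copy the pixel data tile by tile: read each tile decompressed,
        // then let the output re-encode it (here: uncompressed).
        byte[] buffer = new byte[input.TileSize()];
        for (int tile = 0; tile < input.NumberOfTiles(); tile++)
        {
            int read = input.ReadEncodedTile(tile, buffer, 0, buffer.Length);
            if (read > 0)
                output.WriteEncodedTile(tile, buffer, read);
        }

        output.WriteDirectory();
    }
}
With that in place, something like ExtractDirectory(file, "C:\\Temp\\Test.tif", 3) would pull out the 4th directory.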

Combine multi-page PDFs into one PDF with ImageMagick

I am trying to use ImageMagick (6.8.0) to combine several multi-page PDFs into a single PDF. This command:
$ convert multi-page-1.pdf multi-page-2.pdf merged.pdf
Returns merged.pdf, which contains the first page of multi-page-1.pdf and the first page of multi-page-2.pdf.
This command:
$ convert multi-page-1.pdf[2] multi-page-2.pdf[2] merged.pdf
Returns merged.pdf, which contains the third page of multi-page-1.pdf and the third page of multi-page-2.pdf.
I would like merged.pdf to contain all of the pages of each multi-page PDF. I have so far not found a way of telling the convert command to use a range of pages, although I have tried adding [0-1] and [0,1] at the end of the filenames.
Interestingly, this Ghostscript command (which I found via Stack Overflow but cannot find again) does work as I would like it to:
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=merged.pdf multi-page-1.pdf multi-page-2.pdf
The problem is that the ImageMagick 'convert' command takes URLs as inputs and Ghostscript does not, and I need my program to take URL input rather than file paths.
Is it possible to get the result of the above ghostscript command using ImageMagick?
Why don't you use pdfunite?
Example:
$ pdfunite 1.pdf 2.pdf 3.pdf merged.pdf
I asked this question on an internal company forum, and the conclusion was that there is no way to do the type of document merging we would like to do with ImageMagick without first downloading the file to the local filesystem.
For those of you using Heroku, we are taking advantage of the Heroku 'tmp' directory in order to save the file "locally" on staging and production: https://devcenter.heroku.com/articles/read-only-filesystem
Once we save the file in 'tmp', we iterate through each page of the PDF and save each page separately. We find the number of PDF pages using the 'pdf-reader' gem.
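As a rough sketch of that approach (paths and output names are illustrative, and it assumes ImageMagick's convert is available on the PATH):
require 'pdf-reader'

input = '/tmp/multi-page-1.pdf'                  # already downloaded to tmp
page_count = PDF::Reader.new(input).page_count   # count pages with pdf-reader

page_count.times do |i|
  # convert takes a zero-based page index in brackets after the file name
  system('convert', "#{input}[#{i}]", "/tmp/multi-page-1-page-#{i}.pdf")
end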
EDIT:
Here is the custom paperclip processor I wrote to deal with this (all files are pulled down to the tmp directory beforehand):
https://gist.github.com/jessieay/5832466

Does speed of tar.gz file listing depend on tar size?

I am using tar's tf option to list the contents of a tar.gz file. It is pretty large, ~1 GB. There are around 1000 files organized in a year/month/day directory structure.
The listing operation takes quite a bit of time. It seems like a listing should be fast. Can anyone enlighten me on the internals?
Thanks -
Take a look at Wikipedia, for example, to verify that each file inside a tar is preceded by a header. To list all the files inside the tar, it is necessary to read the whole archive.
There is no "index" at the beginning of the tar describing its contents.
Tar has a simple file structure: if you want to list the files, you must parse the whole archive.
If you only want to find one file, you can stop processing once you find it, but you must be sure the archive contains only one version of that file. That is typically the case for packed (compressed) archives, because appending to them is unsupported.
For example, you can do something like this:
tar tvzf somefile.gz | grep 'something-to-find' | \
while read file; do foundfile="$file"; break; done
This way the loop breaks as soon as a match is found and the pipeline does not read everything, only from the start of the archive up to the position of the matching file.
If you need to do more with the listing, save it to a temporary file. You can gzip that file to save space if needed:
tar tvzf somefile.gz | gzip > temporary_filelist.gz
