I am new to pybullet and I was just trying to render a table.
I used the one given in the KUKA arm example. What I wanted to do here is resize it, so I edited the v values in the .obj file, but that didn't change the result, and scaling the mesh in the URDF isn't giving me any results either. Is there any other way to scale it?
I suggest referring to the soccerball example in the pybullet repo:
https://github.com/bulletphysics/bullet3/blob/master/examples/pybullet/examples/soccerball.py
When loading a URDF you can set the globalScaling parameter.
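For instance, a minimal sketch (assuming the table/table.urdf asset that ships with pybullet_data):

import pybullet as p
import pybullet_data

p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

# globalScaling uniformly scales the whole model; 0.5 loads the table at half size.
table = p.loadURDF("table/table.urdf", globalScaling=0.5)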
I'm working on an older project that uses D3D9 for rendering 3D environments.
I have a texture file loaded into memory that I'm applying to a simple 3D model for rendering. I'm loading this file using the D3DXCreateTextureFromFileInMemory function (MS Docs function link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemory), and everything works okay.
However, instead of reading & loading the entire texture file, I want to read & load only a square portion of it (a sub-texture of sorts). I have a pair of UV coordinates for the square portion (one UV coordinate for the top-left corner, one for the bottom-right), relative to the main texture file, but I can't find a D3D9 function that does such a thing (I believe the correct term for this is a "Texture Atlas", but I've only heard it a couple of times and I'm not sure).
Looking over the MS Docs for the D3D9 texture functions, there is also D3DXCreateTextureFromFileInMemoryEx (MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemoryex), a supposed upgrade of D3DXCreateTextureFromFileInMemory; however, it only accepts "width" and "height" parameters, not any sort of positional parameter pair. There are also alternative functions that use "Resources" instead of files in memory, but they do not appear to accept positional parameters either (such as D3DXCreateTextureFromResourceEx, MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromresourceex).
There are also several functions for a "UV Atlas" present in the MS Docs archives (https://learn.microsoft.com/en-us/windows/win32/direct3d9/dx9-graphics-reference-d3dx-functions-uvatlas), however I do not think those would be helpful to me.
Is what I'm trying to achieve here even possible using D3D9? Are there any functions that I may be missing that could help me achieve this goal?
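One possible approach, sketched below under stated assumptions: D3DX has no loader that crops on read, but you can load the whole file into a temporary texture and then copy the UV-defined rectangle into a new texture with D3DXLoadSurfaceFromSurface. LoadSubTexture is a hypothetical helper name, and the sketch ignores mipmaps and any power-of-two rounding D3DX may apply on load.

#include <d3dx9.h>

// Hypothetical helper: load a file from memory, then extract the sub-rectangle
// described by top-left (u0, v0) and bottom-right (u1, v1) UVs into its own texture.
HRESULT LoadSubTexture(IDirect3DDevice9* device,
                       const void* fileData, UINT fileSize,
                       float u0, float v0, float u1, float v1,
                       IDirect3DTexture9** outTexture)
{
    IDirect3DTexture9* full = NULL;
    HRESULT hr = D3DXCreateTextureFromFileInMemory(device, fileData, fileSize, &full);
    if (FAILED(hr)) return hr;

    // Convert the UVs into a pixel-space RECT on the loaded texture.
    D3DSURFACE_DESC desc;
    full->GetLevelDesc(0, &desc);
    RECT src = { (LONG)(u0 * desc.Width), (LONG)(v0 * desc.Height),
                 (LONG)(u1 * desc.Width), (LONG)(v1 * desc.Height) };

    hr = D3DXCreateTexture(device, src.right - src.left, src.bottom - src.top,
                           1, 0, desc.Format, D3DPOOL_MANAGED, outTexture);
    if (SUCCEEDED(hr))
    {
        IDirect3DSurface9* srcSurf = NULL;
        IDirect3DSurface9* dstSurf = NULL;
        full->GetSurfaceLevel(0, &srcSurf);
        (*outTexture)->GetSurfaceLevel(0, &dstSurf);
        // Copies (and converts, if formats differ) just the sub-rectangle.
        hr = D3DXLoadSurfaceFromSurface(dstSurf, NULL, NULL,
                                        srcSurf, NULL, &src,
                                        D3DX_FILTER_NONE, 0);
        dstSurf->Release();
        srcSurf->Release();
    }
    full->Release();
    return hr;
}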
I am using Maxima and I have a lot of resulting plots that I want to save to disk for other uses (making GIFs, etc.).
Is there any code that can autosave the plots instead of having to save them manually one by one?
Thank you in advance.
Well, one approach is to specify a file name in the arguments of plot2d. Then the plot is output directly to the file and it doesn't show up in the GUI. E.g.,
plot2d (sin(x), [x, 0, 10], [png_file, "mysinplot.png"]);
plot2d recognizes png_file, pdf_file, ps_file and svg_file. In each case, ? png_file, etc., will show some info about that option.
Note that there isn't any file output flag for GIF output. The closest thing is PNG which is similar to GIF.
I think draw also recognizes different file formats but I don't know about that without searching the documentation.
If you are generating a lot of plots, it might be convenient to automatically generate file names via sconcat, e.g. sconcat("myplot", i, ".png") produces "myplot10.png" when i is equal to 10.
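Putting that together, a small sketch (the sin(k*x) family and file names are just illustrative):

for k : 1 thru 5 do
    plot2d (sin(k*x), [x, 0, 10],
            [png_file, sconcat("myplot", k, ".png")]);

This writes myplot1.png through myplot5.png without opening the GUI for each plot.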
I want to use NiftyNet to implement deep learning for medical image processing. However, there is one thing I haven't figured out regarding the data input: how does it join the multi-modality images? I saw the demo of BRATS2017; they seem to use 4 different modalities, and in the configuration file they just included the directory of the images and claim it will "concatenate" the images. But I want to know more: as those images are 3D, how are they concatenated? [slice1-30]:[slice1-30]... or [slice1, slice1, slice1...]:[slice2, slice2, slice2...]?
And can we control the data organization part? If so, which file should I modify?
Any suggestion would be greatly appreciated!
In this case, the 3D images are concatenated in an additional dimension. You control the order they're concatenated in by specifying the order of files to load in the *.ini files.
However, as long as you're consistent, it shouldn't matter what order the modalities go in.
The images are concatenated in the channel dimension. For 2D images, the dimensions are NSSC: batch size, 2 spatial dimensions, then channel. For 3D images, the dimensions are NSSSC: batch size, 3 spatial dimensions, then channel.
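A small numpy illustration of what "concatenated in the channel dimension" means (shapes and array names are made up; each modality stays one whole volume, and slices are never interleaved):

import numpy as np

# Four hypothetical co-registered 3D modalities, each of shape (D, H, W).
t1    = np.zeros((30, 240, 240))
t1ce  = np.zeros((30, 240, 240))
t2    = np.zeros((30, 240, 240))
flair = np.zeros((30, 240, 240))

# Stacking adds a trailing channel axis; each modality occupies one channel.
volume = np.stack([t1, t1ce, t2, flair], axis=-1)
print(volume.shape)  # (30, 240, 240, 4)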
I am using jsPDF to generate a PDF from multiple images; the issue is that I get the same image in all the generated PDFs. Any idea, please?
I had a similar problem when using multiple canvases to generate a multi-page PDF document. I was originally using the default format (PNG), so after several hours going through my code I decided to change the format to JPEG and, what do you know, the problem went away. Here is the call:
doc.addImage(canvas.toDataURL("image/jpeg"), "JPEG", 0, 0, canvas.width, canvas.height);
Have a look at the parameter list of addImage():
jsPDFAPI.addImage = function(imageData, format, x, y, w, h, alias, compression, rotation)
If you add multiple different images but somehow set alias to the same for all, jsPDF will reuse the first of those images. This is intended behaviour and reduces the output size.
I recommend always setting alias to something unique for unique images. If alias is not set, jsPDF will calculate a hash, and for large images this can be quite expensive.
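For instance, a sketch using the signature above (the images array and A4 page size are made up):

var doc = new jsPDF();
images.forEach(function (dataUrl, i) {
  if (i > 0) doc.addPage();
  // A unique alias per image keeps jsPDF from reusing the first one.
  doc.addImage(dataUrl, "PNG", 0, 0, 210, 297, "img" + i);
});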
[Edit, as I can't comment directly on marwen web's answer below:
addImage() has no option split, so I do not know what you mean. Perhaps you can give an example in case other users have the same problem?]
Thank you for your answer. Actually, the problem was caused by an option added in the call of the function: the "split" option. I use the PNG format without any problem.
I bought a model from TurboSquid in 3DS format and am trying to load it into an XNA project.
I've exported to FBX and turned ON the "Tangents and Binormals" export options.
If I do not set basicEffect.TexturesEnabled, it renders but without textures. If I turn on TexturesEnabled, though, I have problems:
If I turn off "Generate Tangent Frames" in the content processor, I get "The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing." at runtime.
If I turn on "Generate Tangent Frames" in the content processor, I get "Required Vertex Channel TextureCoordinate0 not found" at build time.
So, the question is how to take a model in 3DS, export it so I can use it as an FBX model in XNA and get all of the UV mapping and normals correct. Even the VS2012 FBX preview can render it properly, so it seems like it should have all it needs, but no.
This can be caused by a number of things. If the model was made using a 3rd-party plugin or non-standard materials, this will cause the UVs not to align.
My suggestion is to make sure that the materials are in a standard format. Ungroup the entire model if necessary. Lastly, if the model is not rigged, make sure it's an editable poly.
From there you can try and export the model again. Are there any other formats that XNA can import?
If this doesn't help, please go to Support.TurboSquid.com and create a support ticket. We can try our best to help.
Christopher Briere
TurboSquid Product Support