Does /gun/polarization 0. 0. -1. in histo.mac of the Pol01 example represent the photon spin?

On page 348 of the Geant4 User's Guide and Applications Manual (see the following link)
http://ftp.tku.edu.tw/Linux/Gentoo/distfiles/BookForAppliDev-4.10.03.pdf
it states:
"Pol01 - interaction of polarized beam (e.g. circularly polarized photons) with polarized target"
Lines 25 and 26 of the histo.mac file in the Pol01 example contain the following two instructions:
/gun/polarization 0. 0. -1.
/gun/particle gamma
The direction of this gamma beam is along the z-axis, so, assuming the code is correct, the first line cannot be describing the polarization state of the electric field. Am I to take it, then, that in this context the first line defines the photon spin projection, and is therefore defining a circularly polarized photon, either left- or right-handed depending on which convention Geant4 uses?

Reading the code, Geant4 treats the three-vector you pass to /gun/polarization for a photon as the (S1, S2, S3) components of a Stokes vector, used in the calculations described in the Wikipedia article below.
https://en.wikipedia.org/wiki/Stokes_parameters
A (0, 0, 1) vector therefore represents circularly polarized light.
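As a minimal illustration (plain Python, not Geant4 code) of how such a Stokes-style three-vector is conventionally read, with S3 as the circular component -- so (0, 0, -1) is fully circularly polarized with the opposite handedness to (0, 0, 1), and which sign means "left" or "right" is a matter of convention:

def describe_stokes(s1, s2, s3):
    # (s1, s2) describe linear polarization; s3 is the circular component.
    if abs(s3) == 1.0:
        return "fully circularly polarized, sign %+d" % int(s3)
    if s3 == 0.0 and (s1, s2) != (0.0, 0.0):
        return "purely linearly polarized"
    return "elliptically or partially polarized"

print(describe_stokes(0., 0., -1.))  # the histo.mac setting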

PrismaticJoint limits not enforced

I'm creating a system of bodies, with radially expanding bodies connected by PrismaticJoints, and finding that, although I initialized each joint with position limits, the joints easily pass these limits under external forces such as gravity. A plot of some joints' translations over time shows them passing the lower and upper limits at 3.5 and 4.2.
What am I missing? My call to create the joints looks like this:
const multibody::Joint<double>& joint =
    plant_->AddJoint<drake::multibody::PrismaticJoint>(
        shpere_name + "_joint",
        center_body, std::nullopt,
        connect_body, std::nullopt,
        unitVlist()[j], r_low, r_upp, 0);
where
*_body are bodies,
unitVlist() returns a list of unit vectors to pull from,
r_low and r_upp are doubles corresponding to the lower and upper limits.
Currently, joint limits in Drake are enforced only by the discrete solver, i.e. what you get if you supply a time step in the MultibodyPlant constructor. Our continuous integrators don't see the limits yet. We are aware of that, but I couldn't actually find a GitHub issue complaining about it -- would you mind filing one? You can do it here (select "New Issue").
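A minimal sketch of that constructor choice (using pydrake rather than the C++ API above; the time step value is just an example):

from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
# time_step > 0 selects the discrete solver, which enforces joint limits;
# time_step = 0 selects continuous integration, which currently ignores them.
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)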

How to Implement Integrated Random Walk Trend Component in Tensorflow Probability

I'm running tensorflow 2.1 and tensorflow_probability 0.9. I have fit a Structural Time Series Model with a seasonal component.
I wish to implement an integrated random walk in order to smooth the trend component, as per Time Series Analysis by State Space Methods: Second Edition by Durbin & Koopman. The integrated random walk is obtained by setting the level-component variance to zero.
Is implementing this constraint possible in Tensorflow Probability?
Further to this, Durbin & Koopman discuss higher-order random walks. Could these be implemented as well?
Thanks in advance for your time.
If I understand correctly, an integrated random walk is just the special case of LocalLinearTrend in which the level simply integrates the randomly evolving slope component (i.e., it has no independent source of variation). You could patch this in by subclassing LocalLinearTrend and fixing level_scale = 0. in the models it builds:
from tensorflow_probability import sts

class IntegratedRandomWalk(sts.LocalLinearTrend):

  def __init__(self,
               slope_scale_prior=None,
               initial_slope_prior=None,
               observed_time_series=None,
               name=None):
    super(IntegratedRandomWalk, self).__init__(
        slope_scale_prior=slope_scale_prior,
        initial_slope_prior=initial_slope_prior,
        observed_time_series=observed_time_series,
        name=name)
    # Remove 'level_scale' from the list of trainable model parameters.
    del self._parameters[0]

  def _make_state_space_model(self,
                              num_timesteps,
                              param_map,
                              initial_state_prior=None,
                              initial_step=0):
    # Fix `level_scale` to zero, so that the level cannot change
    # except by integrating the slope.
    param_map['level_scale'] = 0.
    return super(IntegratedRandomWalk, self)._make_state_space_model(
        num_timesteps=num_timesteps,
        param_map=param_map,
        initial_state_prior=initial_state_prior,
        initial_step=initial_step)
(It would be mathematically equivalent to build a LocalLinearTrend with a level_scale_prior concentrated at zero, but that constraint makes inference difficult, so it's generally better to just ignore or remove the parameter entirely, as done here.)
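For instance, a usage sketch under assumptions (the random series stands in for your data, and the monthly seasonal component with num_seasons=12 stands in for whichever seasonal component you already fit):

import numpy as np
from tensorflow_probability import sts

series = np.random.randn(120).astype(np.float32)  # stand-in for your data

# Combine the smoothed trend with a seasonal effect, as in a standard
# STS model; both components are initialized from the observed series.
trend = IntegratedRandomWalk(observed_time_series=series)
seasonal = sts.Seasonal(num_seasons=12, observed_time_series=series)
model = sts.Sum([trend, seasonal], observed_time_series=series)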
By higher-order random walks, do you mean autoregressive models? If so, sts.Autoregressive might be relevant.

How to hit the texel cache in WebGL?

What I'm doing is GPGPU on WebGL, and I don't know whether the access pattern I'll be describing applies to general graphics and gaming programs as well. In our code we frequently come across data which needs to be summarized or reduced per output texel. A very simple example is matrix multiplication, during which, for every output texel, you return a value which is the dot product of a row of one input and a column of the other input.
This has been the sore point of our performance, not so much because of the computation but because of the multiplied data accesses. So I've been trying to find a pattern of reads, or a data layout, that would expedite this operation, and I have been completely unsuccessful.
I will describe some assumptions and some schemes below. The sample code for all of these is under https://github.com/jeffsaremi/webgl-experiments
Unfortunately, due to size, I wasn't able to use the 'snippet' feature of Stack Overflow. NOTE: all examples write to the console, not the HTML page.
Base matmul implementation: Example: [2,3]x[3,4]->[2,4]. In a simplistic form this produces two textures of (w:3, h:2) and (w:4, h:3). For each output texel I read along the X axis of the left texture but along the Y axis of the right texture. (see webgl-matmul.html)
Assuming that the GPU accesses data similarly to a CPU -- that is, block by block -- if I read along the width of the texture I should be hitting the cache pretty often.
For this, I'd lay out both textures so that I'd be doing dot products of corresponding rows only (along the texture width). Example: [2,3]x[4,3]->[2,4]. Note that the data for the right texture is now transposed, so that for each output texel I do a dot product of one row from the left and one row from the right. (see webgl-matmul-shared-alongX.html)
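As a CPU-side illustration of that layout idea (NumPy, not WebGL; shapes match the example above): transposing the right matrix once up front makes every output element a dot product of two rows, so both reads traverse contiguous memory:

import numpy as np

A = np.random.rand(2, 3)        # left input
B = np.random.rand(3, 4)        # right input
Bt = np.ascontiguousarray(B.T)  # one-time re-layout: rows of Bt are columns of B

C = np.empty((2, 4))
for i in range(2):
    for j in range(4):
        # Both A[i, :] and Bt[j, :] are contiguous row reads.
        C[i, j] = np.dot(A[i, :], Bt[j, :])

assert np.allclose(C, A @ B)  # same result as the untransposed product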
To check that the above assumption was indeed at work, I also created a negative test. In this test I read along the Y axis of both the left and right textures, which should have the worst performance of all. The data is pre-transposed so that the results make sense. Example: [3,2]x[3,4]->[2,4]. (see webgl-matmul-shared-alongY.html)
So I ran these -- and I hope you will too, to see for yourself -- and I found no evidence for or against the existence of such caching behavior. You need to run each example a few times to get consistent results for comparison.
Then I came across this paper http://fileadmin.cs.lth.se/cs/Personal/Michael_Doggett/pubs/doggett12-tc.pdf which, in short, claims that the GPU caches data in blocks (or tiles, as I call them).
Based on this promising lead, I created a version of matmul (or dot product) which uses 2x2 blocks to do its calculation. Prior to using it, of course, I had to rearrange my inputs into such a layout. The cost of that rearrangement is not included in my comparison; let's say I could do it once and then run my matmul many times. Even this scheme contributed nothing to performance, if it didn't take something away. (see webgl-dotprod-tiled.html)
At this point I am completely out of ideas, and any hints would be appreciated.
Thanks.

Element type error being observed

I am using C3D8R elements and creating section points across elements. However, when I do so, I get the error below:
"ELEMENT TYPE C3D8R HAS NO OUTPUT AT SECTION POINT 1. SECTION POINT REMOVED"
Can anyone let me know how I should address this error?
You cannot define section points in this way for solid elements such as C3D8R, since they use a completely different formulation. Even a casual reading of the documentation will make that clear.
However, you can discretize 3D geometry using "continuum shell" elements (in Abaqus: SC6R, SC8R). These elements have displacement DOFs only, and the thickness is defined directly by the nodal geometry. Be sure to read the docs and become familiar with their usage before trusting your analysis results.
As far as I'm aware, you can define continuum shell section properties in the same way as for conventional shell elements, with a couple of caveats. For example, you cannot specify an offset of the reference surface from the element midsurface when the section properties are specified by one or more material layers.
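A minimal Abaqus/CAE scripting sketch of that setup (the model, part, and material names, the thickness, and the number of section points are all assumptions, not taken from the question; this only runs inside Abaqus/CAE's Python):

from abaqus import mdb
from abaqusConstants import SC8R, STANDARD
import mesh

model = mdb.models['Model-1']
# Shell-type section with 5 section points through the thickness; for
# continuum shells the actual thickness comes from the nodal geometry.
model.HomogeneousShellSection(name='CShellSection',
                              material='Steel',
                              thickness=2.0,
                              numIntPts=5)
part = model.parts['SolidPart']
# Swap the solid mesh over to continuum shell elements.
elem_type = mesh.ElemType(elemCode=SC8R, elemLibrary=STANDARD)
part.setElementType(regions=(part.cells,), elemTypes=(elem_type,))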

Why does ELKI need the db.in file in addition to the distance matrix? Also, what should the db.in file contain?

I tried to follow this tutorial on using ELKI with pre-computed distances for clustering.
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
I used the following set of command line options:
-dbc.filter FixedDBIDsFilter -dbc.startid 0 -algorithm clustering.OPTICS
-algorithm.distancefunction external.FileBasedDoubleDistanceFunction
-distance.matrix /path/to/matrix -optics.minpts 5 -resulthandler ResultWriter
ELKI fails with a configuration error saying a db.in file is needed for the computation:
The following configuration errors prevented execution:
No value given for parameter "dbc.in":
Expected: The name of the input file to be parsed.
No value given for parameter "parser.distancefunction":
Expected: Distance function used for parsing values.
My question is: what is the db.in file? Why should I provide it in addition to the distance matrix file, given that the pairwise distance matrix completely specifies all the information about the point cloud? (I also don't have access to any information other than the pairwise distances.)
What should I do about db.in? Should I override it, specify some dummy information, etc.? Kindly help me understand.
Thank you.
This is documented in the ELKI HowTos:
http://elki.dbs.ifi.lmu.de/wiki/HowTo/PrecomputedDistances
Using without primary data
-dbc DBIDRangeDatabaseConnection -idgen.count 100
However, there is a bug (a patch is on the HowTo page, and will be in the next release), so right now you can't fully use this; as a workaround, you can use a text file that enumerates the objects.
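A minimal sketch of generating such a workaround file (the file name, the one-label-per-line format, and the count of 100 -- matching the -idgen.count example above -- are assumptions; adjust them to your distance matrix):

# Write one placeholder object label per line; ELKI then has a primary
# relation to attach the precomputed distances to.
with open("db.in", "w") as f:
    for i in range(100):
        f.write("object%d\n" % i)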
The reason for this is that ELKI is designed to work on multi-relational data; it's not just processing matrices. Some algorithms may, for example, need a geographic representation of an object, some measurements for that object, and a label for evaluation. That is three relations.
What the DBIDRange data source essentially does is create a single "fake" relation that consists of just the DBIDs 0 to 99. For algorithms that don't need actual data, only distances (e.g. LOF, DBSCAN, or OPTICS), object IDs and a distance matrix are sufficient.
