Defining a solution dependent variable for an Abaqus subroutine

I am new to writing Abaqus user subroutines. I am trying to figure out whether there is a way to define a solution dependent variable in an Abaqus user subroutine. I am trying to implement leakoff for the cohesive element as a time-dependent parameter, and I am planning to use the UFLUIDLEAKOFF subroutine to implement this functionality. The leakoff model is shown below.
Leakoff = C * f(t, τ)
where
C = constant,
t = current time,
τ = time at which the cohesive element was damaged.
τ is a solution dependent parameter which is not defined if the cohesive element is not damaged, i.e. SDEG = 0. Thus, τ is different for each element and has to be updated for the elements which are damaged in this step, left unchanged for the elements which were damaged prior to this step, and left undefined for the elements which are still intact. The leakoff in the subroutine will be computed as follows:
If τ is NOT defined, we use a user-defined constant leakoff.
If τ is defined, we compute the value of the leakoff from the function f(t, τ).
I believe we can define a state variable associated with the cohesive elements and then somehow read that in the user subroutine and choose the leakoff value based on that.
I would appreciate any help in this matter. Thanks.

A state variable or SVAR is definitely the best approach. The number of SVARs you need, though, depends on the element type and on how many variables you need to keep track of during each time increment.
For instance, if you have a four-node element and you need to keep track of three variables at each node during each time increment, then you will have 4 × 3 = 12 SVARs for each element.
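As a rough sketch of how such a state variable could drive the leakoff (this is deliberately not the real UFLUIDLEAKOFF argument list, which you should take from the Abaqus documentation; statev(1) holding τ with a negative value meaning "not yet defined", the names sdeg, c_const and c_leak, and the Carter-type 1/sqrt(t - τ) placeholder for f(t, τ) are all assumptions here):

function leakoff(time, sdeg, statev, c_const, c_leak) result(q)
    ! Standalone sketch of the branching logic only, not an Abaqus interface.
    implicit none
    double precision, intent(in)    :: time, sdeg, c_const, c_leak
    double precision, intent(inout) :: statev(*)
    double precision :: q, tau

    ! Element damaged for the first time in this increment: record tau = t once.
    if (statev(1) < 0.0d0 .and. sdeg > 0.0d0) statev(1) = time

    if (statev(1) < 0.0d0) then
        ! tau still undefined (element intact): constant, user-defined leakoff.
        q = c_const
    else
        ! tau defined: time-dependent model C * f(t, tau).
        tau = statev(1)
        q = c_leak / sqrt(max(time - tau, 1.0d-12))   ! placeholder for f(t, tau)
    end if
end function leakoff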

Related

Is it possible to express rising/falling edge operators as SAT/SMT formula?

I am working on a satisfiability check for transition conditions of GRAFCET diagrams (which are used to model the behaviour of a programmable logic controller). For this purpose I am using the Z3 SMT solver.
In addition to normal operators (AND, OR, NOT and EQUALITY) the GRAFCET specification allows RISING and FALLING EDGE operators in its conditions.
Example: ↑a (RISING EDGE)
Explanation: The condition is satisfied if the variable a changes its value from FALSE to TRUE.
My first thought would be to check whether there is a variable combination that satisfies a and also a variable combination that satisfies NOT(a). This way I could prove that the RISING EDGE could possibly occur.
[Q]: Is it possible to translate these operators directly into propositional logic or something similar, so that satisfiability can be checked in one formula?
Rising/falling edges suggest change over time. In a SAT/SMT context, variables do not change. To model what you want, you'll have to capture the value at successive points in time in different variables and check that the first is False and the second is True for a rising edge, etc.
You can also use an array indexed by an integer to represent the value. It all depends on how you translate these diagrams to SAT. In any case, the value of each variable will be constant in the model. (That is, checking a and Not(a) at the same time will always be unsatisfiable.)
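For example, with the Z3 Python bindings, a rising edge of a can be encoded over two Boolean variables standing for two successive evaluation points (the names a_prev and a_now are just illustrative):

from z3 import Bools, And, Not, Solver, sat

# Two copies of the signal: its value at the previous and at the current point.
a_prev, a_now = Bools("a_prev a_now")

rising_a  = And(Not(a_prev), a_now)    # rising edge:  False -> True
falling_a = And(a_prev, Not(a_now))    # falling edge: True -> False

s = Solver()
s.add(rising_a)              # the transition condition containing the edge
print(s.check() == sat)      # True: a_prev = False, a_now = True satisfies it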

Trying to understand this profiling result from F# code

A quick screenshot with the point of interest:
There are 2 questions here.
This happens in a tight loop. The 12.8% code is this:
{
this with Side = side; PositionPrice = position'; StopLossPrice = sl'; TakeProfitPrice = tp'; Volume = this.Volume + this.Quantity * position'
}
This object is passed around a lot and has 23 fields so it's not tiny. It looks like immutability is great for stable code, but it's horrible for performance.
Since this recursive loop is run in parallel, I need to store its context variables in an object.
I am looking for a general idea of what makes sense, not something specific to that code because I have a few tight loops with a bunch of math which I need to profile as well. I am sure I'll find the same pattern in several places.
The flaw here is that I store both the context for the calculations and its variables in a single type that gets passed through the loop. As the variable fields get updated, the whole object has to be recreated.
What would make sense here (in general for this type of situation)?
make the fields that can change mutable. In this case, that means keeping the type as is (23 fields) and making some fields mutable (only 5 fields get changed regularly)
move the mutable fields to their own type, so there is a general context object and one holding all the variables. In this case, that means having a context with 23 - 5 = 18 fields and a separate 5-field type
make the mutable fields plain variables and move them out of the type. In this case, these 5 fields would be passed as variables in the recursive loop
and for the second question:
I have no idea what the 10.0% line with get_Tag is. I have nothing called 'Tag' in the code, so I assume that's a dotnet internal thing.
I have a type called Side and there is a field with the same name used in the loop, but what is the 'Tag' part?
What I would suggest is not to modify your existing immutable type at all. Instead, create a new type with mutable fields that is only used within your tight loop. If the type leaves that loop, convert it back to your immutable type (assuming you don't need a copy to go through the rest of your program with every iteration).
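A minimal sketch of that idea, using a few of the field names from the snippet in the question (the ScratchState name, the field types, and the reduced field count are assumptions; the real type has 23 fields):

// Immutable record used by the rest of the program (sketch; types are placeholders).
type State =
    { Side: int
      PositionPrice: float
      StopLossPrice: float
      TakeProfitPrice: float
      Volume: float
      Quantity: float }

// Mutable scratch copy that lives only inside the tight loop, so updating a
// field no longer recreates the whole object.
type ScratchState(s: State) =
    member val Side = s.Side with get, set
    member val PositionPrice = s.PositionPrice with get, set
    member val StopLossPrice = s.StopLossPrice with get, set
    member val TakeProfitPrice = s.TakeProfitPrice with get, set
    member val Volume = s.Volume with get, set
    member val Quantity = s.Quantity with get, set
    // Convert back to the immutable record once the loop is finished.
    member this.ToState() =
        { Side = this.Side
          PositionPrice = this.PositionPrice
          StopLossPrice = this.StopLossPrice
          TakeProfitPrice = this.TakeProfitPrice
          Volume = this.Volume
          Quantity = this.Quantity }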
get_Tag in this case is likely the auto-generated get-only property on a discriminated union; it's just how the F# compiler represents this sort of type in the CLR. The property is most easily seen when looking at F# code from C#; here's a great page on F# decompiled:
https://fsharpforfunandprofit.com/posts/fsharp-decompiled/#unions
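For instance, a two-case union like the Side type mentioned in the question (the case names Buy and Sell are assumed here) compiles to a class with an int Tag property, and pattern matches become reads of that property, which is what shows up as get_Tag in the profiler:

type Side =
    | Buy
    | Sell

let describe side =
    // Compiled roughly as a switch on side.Tag (0 for Buy, 1 for Sell).
    match side with
    | Buy  -> "long"
    | Sell -> "short"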
For the performance issues I can only offer some suggestions:
If you can constrain the context object to your code only, then try making a mutable version and see what effect it has.
You mention that the context object is quite large; is it possible to split it up?

How to dynamically define multiple polynomials inside a loop in Maxima

So...I want to create five different polynomials inside a loop in order to make a Sturm sequence, but I don't seem to be able to dynamically name a set of polynomials with different names.
For example:
In the first iteration it would define p1(x):whatever
Then, in the second iteration it would define p2(x):whatever
Lastly, in the Nth iteration it would define pn(x):whatever
So far, I have managed to simply store them in a list and call them one by one by their position. But surely there is a more professional way to accomplish this?
Sorry for the non-technical language :)
I think a subscripted variable is appropriate here. Something like:
for k:1 thru 5 do
p[k] : make_my_polynomial(k);
Then p[1], ..., p[5] are your polynomials.
When you assign to a subscripted variable e.g. something like foo[bar]: baz, where foo hasn't been defined as a list or array already, Maxima creates what it calls an "undeclared array", which is just a lookup table.
EDIT: You can refer to subscripted variables without assigning them any values. E.g. instead of x^2 - 3*x + 1 you could write u[i]^2 - 3*u[i] + 1 where u[i] is not yet assigned any value. Many (most?) functions treat subscripted variables the same as non-subscripted ones, e.g. diff(..., u[i]) to differentiate w.r.t. u[i].
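For the Sturm sequence itself, the subscripted variables can be built up with the same kind of loop. A sketch (the starting polynomial is only an example; for a general polynomial you would stop as soon as a remainder becomes zero):

p[1] : x^4 + x^3 - x - 1;      /* example polynomial */
p[2] : diff(p[1], x);
/* each further term is minus the polynomial remainder of the two previous ones */
for k : 3 thru 5 do
    p[k] : -remainder(p[k-2], p[k-1], x);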

When do FORTRAN subprograms save data, and when not?

I have a pretty simple function for taking the name of a month ("Jan", "Feb", etc.) and converting it to the number of the month:
function month_num(month_str)
    character*(*) :: month_str
    character*3 :: month_names(12)
    integer :: ipos(1), location(12)
    data month_names/'Jan','Feb','Mar','Apr','May','Jun','Jul','Aug', &
                     'Sep','Oct','Nov','Dec'/
    where (month_names == month_str) location = 1
    ipos = maxloc(location)
    month_num = ipos(1)
end function
And OK, yes, I know it's dangerous to not define "location" before using it.
Problem: During execution of the function, if the input is OK, some element of "location" will be set to 1. And, to my surprise, when the function is called again, that value will still be equal to 1. And this, of course, really messes things up. So I figured I would fix it with a new line
data location/12*0/
And I got the same problem.
Finally, I put in
location = 0
just before the "where" statement, and that fixed everything.
So, I thought FORTRAN subprograms would not save data unless the variables were declared with the "SAVE" attribute. Also, with many compilers, you can invoke some sort of "static" option that will keep everything saved. I did neither of these here, but the "location" array was saved just the same. Can someone enlighten me on the rules of when FORTRAN saves data and when not? Thanks.
The value of a variable local to a procedure is preserved across invocations (i.e. it is SAVEd) in one of two ways:
The programmer specifies the SAVE attribute when declaring the variable, for example:
REAL, SAVE :: var1
The programmer initialises the variable upon declaration, for example
REAL :: var1 = 3.1415
This second, implicit, behaviour is one of those features of Fortran which seem designed to catch out the programmer, and not just beginners. Note that the value the variable has upon re-invocation is not, in the second example, 3.1415, but whatever value it had when the last invocation exited.
It is common for compilers to behave as if a variable were SAVEd when the programmer has not exercised either of these options, perhaps because the memory locations used by one invocation of a procedure are not overwritten before the next invocation. But this behaviour is not to be relied on.
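A minimal example of the second, implicit form (the function name is only illustrative): because n is initialised in its declaration it is implicitly SAVEd, so it keeps counting across calls instead of starting from 0 each time.

integer function call_count()
    implicit none
    integer :: n = 0   ! initialised in the declaration => implicitly SAVEd
    n = n + 1          ! the value persists from the previous invocation
    call_count = n     ! returns 1, 2, 3, ... on successive calls
end function call_count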
The situation is slightly different for variables declared in modules. Again any variable with the SAVE attribute is saved but any other variable only retains its value while the module is use-associated with a program unit which has started executing but not finished. Again some compilers, and some executions of some programs, may behave as if the value of a module variable is preserved despite the module going out of scope but this is non-standard behaviour and not to be relied on.
This behaviour is scheduled to change in Fortran 2008 when variables defined in modules will acquire the SAVE attribute implicitly.
Personally I like to explicitly SAVE variables even when I am sure that they would get the attribute implicitly, it makes the code just a bit easier to understand next time round.
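For example, a module variable can be given the attribute explicitly (the names here are only illustrative); under the Fortran 2008 rule mentioned above it would acquire it implicitly anyway:

module counters
    implicit none
    integer, save :: ncalls = 0   ! explicit SAVE: the value is guaranteed to persist
end module counters

subroutine bump()
    use counters
    ncalls = ncalls + 1           ! keeps its value between calls and between scopes
end subroutine bump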

Am I missing something, or do we have a couple inconsistencies / misnomers?

Mathematicians typically count starting with 1, and call the counting variable n (i.e. "the nth term of the sequence"). Computer scientists typically count starting with 0, and call the counting variable i (i.e. "the ith index of the array"). Which is why I was confused to learn that Seq.nth actually returns the "n+1 term of the sequence".
Doubly confusing is that Seq.iteri does as its name implies and traverses a sequence supplying the current index.
Am I missing something? Is there a rationale / history for this misnomer / inconsistency? Or was it just a mistake (which likely can't be fixed since we have a commercial release now)?
Edit
Admittedly my claim about conventional use of i and n is not strictly true and I should have been more careful about such an assertion. But I think it is hard to deny that the most popularly used languages do start counting at 0 and that i and j are most certainly extremely popular choices for index variable names. So, when I am familiar with using Seq.iteri and Seq.mapi and then come across Seq.nth I think it is reasonable to think, "Oh, this function counts differently, probably the other way things are counted, starting with 1."
And, as I pointed out in the comments, the summaries for Seq.iteri, Seq.mapi, and Seq.nth only served to reinforce my assumption (note that IntelliSense only gives you the summaries; it does not give you the description of each parameter, which you have to find on MSDN):
Seq.iteri
Applies the given function to each element of the collection. The integer passed to the function indicates the index of element.
Seq.mapi
Creates a new collection whose elements are the results of applying the given function to each of the elements of the collection. The integer index passed to the function indicates the index (from 0) of element being transformed.
Seq.nth
Computes the nth element in the collection.
Note the emphasis on "nth" (the emphasis is not mine), as if everyone knows what the nth element of the sequence is, as opposed to the ith element.
Talking of history, nth is zero-based in Lisp, which is probably what the F# function is named after. See the Common Lisp spec for nth.
I haven't found your statement about i and n in mathematics to be true; usually n is the number of something rather than an index. In this case, it was the number of times to call cdr to get the next element.
Arrays are indexed starting from 0 in many computer languages, but some languages start from 1 and some allow a choice. Some allow the index range to be set as part of the array declaration, or even to be changed at runtime.
Mathematicians are as likely to start at zero as one, sometimes use other index ranges, and attach no particular meaning to the letters 'n' and 'i'.
The functions Seq.nth and Seq.iteri are poorly named.
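A quick illustration of the zero-based behaviour (in later F# versions Seq.nth was deprecated in favour of Seq.item, which keeps the same zero-based indexing):

let xs = [ "a"; "b"; "c" ]

// "nth" here really means "the element at index n", counting from 0:
Seq.nth 0 xs |> printfn "%s"                          // a
Seq.nth 2 xs |> printfn "%s"                          // c

// consistent with the index supplied by iteri/mapi:
xs |> Seq.iteri (fun i x -> printfn "%d -> %s" i x)   // 0 -> a, 1 -> b, 2 -> c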
