EDIT: It turned out that this was not what was causing the slow import after all. Nevertheless, the answer given explains a better way to implement different densities with one material, so I'll leave the question up. (The slow import was caused by running the scripts from the Abaqus PDE instead of using 'Run script' from the File menu. Special thanks to droooze for finding the problem.)
I'm trying to optimize the porosity distribution of a certain material. To that end, I'm performing Abaqus FEA simulations with ~500 different materials in one part. The simulation itself only takes about 40 seconds, but reading the input file takes more than 3 minutes. (I used a Python script to generate the .inp file.)
I'm using these commands to generate my materials in the input file:
*SOLID SECTION, ELSET = ES_Implant_MAT0 ,MATERIAL=Implant_material0
*ELSET, ELSET=ES_Implant_MAT336
6,52,356,376,782,1793,1954,1984,3072
*MATERIAL, NAME = Implant_material0
*DENSITY
4.43
*ELASTIC
110000, 0.3
Any idea why this is so slow, and is there a more efficient way to do this so as to reduce the input file loading time?
If your ~500 materials are all of the same kind (e.g. all linear elastic isotropic with a mass density), then you can collapse them all into one material and define distribution tables which distribute the property values directly onto the instance element labels.
Syntax:
(somewhere in the Part definition, under section)
*SOLID SECTION, ELSET = ES_Implant_MAT0 ,MATERIAL=Implant_material0
(somewhere in the Assembly definition; part= should reference the name of the part above)
**
**
** ASSEMBLY
**
*Assembly, name=Assembly
**
*Instance, name=myinstance, part=mypart
*End Instance
**
*Elset, elset=ES_Implant_MAT0, instance=myinstance
1,2,...
(somewhere in the Materials definition; see Abaqus Keywords Reference Guide for the keywords *DISTRIBUTION TABLE and *DISTRIBUTION)
**
** MATERIALS
**
*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_ELASTIC_TABLE
MODULUS,RATIO
*DISTRIBUTION, NAME=Implant_material0_elastic, LOCATION=element, TABLE=IMPLANT_MATERIAL0_ELASTIC_TABLE
** The first data line (with an empty element label) sets the default value.
,110000,0.3
** Syntax: instance name [dot] instance element label. These elements currently use the
** material properties assigned to ELSET=ES_Implant_MAT0. You can define the material
** properties belonging to other element sets in this same table, as long as you
** reference the element labels correctly.
myinstance.1,110000,0.3
myinstance.2,110000,0.3
...
*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_DENSITY_TABLE
DENSITY
*DISTRIBUTION, NAME=Implant_material0_density, LOCATION=element, TABLE=IMPLANT_MATERIAL0_DENSITY_TABLE
** The first data line sets the default value.
,4.43
myinstance.1,4.43
myinstance.2,4.43
...
*Material, name=Implant_material0
*Elastic
** The data line references the distribution name.
Implant_material0_elastic
*Density
** The data line references the distribution name.
Implant_material0_density
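Since your input deck is already generated from Python, here is a minimal sketch of how the collapsed deck could be written out. All names, element labels and property arrays below are hypothetical placeholders; substitute your own:

# Hypothetical per-element data: element i has modulus E_vals[i] and density rho_vals[i].
element_labels = [1, 2, 3]
E_vals = [110000.0, 95000.0, 80000.0]
rho_vals = [4.43, 3.90, 3.41]

with open("implant_materials.inp", "w") as f:
    # Elastic distribution: one default line, then one line per element label.
    f.write("*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_ELASTIC_TABLE\nMODULUS,RATIO\n")
    f.write("*DISTRIBUTION, NAME=Implant_material0_elastic, LOCATION=element, TABLE=IMPLANT_MATERIAL0_ELASTIC_TABLE\n")
    f.write(",110000,0.3\n")
    for label, E in zip(element_labels, E_vals):
        f.write("myinstance.%d,%g,0.3\n" % (label, E))
    # Density distribution, same pattern.
    f.write("*DISTRIBUTION TABLE, NAME=IMPLANT_MATERIAL0_DENSITY_TABLE\nDENSITY\n")
    f.write("*DISTRIBUTION, NAME=Implant_material0_density, LOCATION=element, TABLE=IMPLANT_MATERIAL0_DENSITY_TABLE\n")
    f.write(",4.43\n")
    for label, rho in zip(element_labels, rho_vals):
        f.write("myinstance.%d,%g\n" % (label, rho))
    # One material referencing the two distributions.
    f.write("*Material, name=Implant_material0\n*Elastic\nImplant_material0_elastic\n*Density\nImplant_material0_density\n")

This writes one material and two distributions instead of ~500 separate *MATERIAL blocks.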
I am following the introductory Pyro tutorial on forecasting. When trying to access the learned parameters after training the model, I get different results using different access methods for some of them (while getting identical results for others).
Here is the stripped-down reproducible code from the tutorial:
import torch
import pyro
import pyro.distributions as dist
from pyro.contrib.examples.bart import load_bart_od
from pyro.contrib.forecast import ForecastingModel, Forecaster
pyro.enable_validation(True)
pyro.clear_param_store()
pyro.__version__
# '1.3.1'
torch.__version__
# '1.5.0+cu101'
# import & prepare the data
dataset = load_bart_od()
T, O, D = dataset["counts"].shape
data = dataset["counts"][:T // (24 * 7) * 24 * 7].reshape(T // (24 * 7), -1).sum(-1).log()
data = data.unsqueeze(-1)
T0 = 0  # beginning
T2 = data.size(-2) # end
T1 = T2 - 52 # train/test split
# define the model class
class Model1(ForecastingModel):
    def model(self, zero_data, covariates):
        data_dim = zero_data.size(-1)
        feature_dim = covariates.size(-1)
        bias = pyro.sample("bias", dist.Normal(0, 10).expand([data_dim]).to_event(1))
        weight = pyro.sample("weight", dist.Normal(0, 0.1).expand([feature_dim]).to_event(1))
        prediction = bias + (weight * covariates).sum(-1, keepdim=True)
        assert prediction.shape[-2:] == zero_data.shape
        noise_scale = pyro.sample("noise_scale", dist.LogNormal(-5, 5).expand([1]).to_event(1))
        noise_dist = dist.Normal(0, noise_scale)
        self.predict(noise_dist, prediction)
# fit the model
pyro.set_rng_seed(1)
pyro.clear_param_store()
time = torch.arange(float(T2)) / 365
covariates = torch.stack([time], dim=-1)
forecaster = Forecaster(Model1(), data[:T1], covariates[:T1], learning_rate=0.1)
So far so good; now I want to inspect the learned latent parameters stored in the ParamStore. It seems there is more than one way to do this. Using the get_all_param_names() method:
for name in pyro.get_param_store().get_all_param_names():
    print(name, pyro.param(name).data.numpy())
I get
AutoNormal.locs.bias [14.585433]
AutoNormal.scales.bias [0.00631594]
AutoNormal.locs.weight [0.11947815]
AutoNormal.scales.weight [0.00922901]
AutoNormal.locs.noise_scale [-2.0719821]
AutoNormal.scales.noise_scale [0.03469057]
But using the named_parameters() method:
pyro.get_param_store().named_parameters()
gives the same values for the location (locs) parameters, but different values for all the scales parameters:
dict_items([
('AutoNormal.locs.bias', Parameter containing: tensor([14.5854], requires_grad=True)),
('AutoNormal.scales.bias', Parameter containing: tensor([-5.0647], requires_grad=True)),
('AutoNormal.locs.weight', Parameter containing: tensor([0.1195], requires_grad=True)),
('AutoNormal.scales.weight', Parameter containing: tensor([-4.6854], requires_grad=True)),
('AutoNormal.locs.noise_scale', Parameter containing: tensor([-2.0720], requires_grad=True)),
('AutoNormal.scales.noise_scale', Parameter containing: tensor([-3.3613], requires_grad=True))
])
How is this possible? According to the documentation, the ParamStore is a simple key-value store, and there are only these six keys in it:
pyro.get_param_store().get_all_param_names() # .keys() method gives identical result
# result
dict_keys([
'AutoNormal.locs.bias',
'AutoNormal.scales.bias',
'AutoNormal.locs.weight',
'AutoNormal.scales.weight',
'AutoNormal.locs.noise_scale',
'AutoNormal.scales.noise_scale'])
so there is no way that one method accesses one set of items and the other a different one.
Am I missing something here?
pyro.param() returns parameters transformed to the constrained space, in this case to the positive reals for the scales.
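A minimal sketch to verify this, using the parameter names from the question and assuming the model has already been fitted:

import torch

ps = pyro.get_param_store()
constrained = pyro.param("AutoNormal.scales.bias")                     # user-facing, positive
unconstrained = dict(ps.named_parameters())["AutoNormal.scales.bias"]  # internal, log space
print(torch.allclose(constrained.log(), unconstrained))                # expected: True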
Here is the situation, as revealed in the GitHub thread I opened in parallel with this question...
The ParamStore is no longer just a simple key-value store; it also performs constraint transformations. Quoting a Pyro developer from the above link:
here's some historical background. The ParamStore was originally just a key-value store. Then we added support for constrained parameters; this introduced a new layer of separation between user-facing constrained values and internal unconstrained values. We created a new dict-like user-facing interface that exposed only constrained values, but to keep backwards compatibility with old code we kept the old interface around. The two interfaces are distinguished in the source files [...] but as you observe it looks like we forgot to mark the old interface as DEPRECATED.
I guess in clarifying docs we should:
clarify that the ParamStore is no longer a simple key-value store
but also performs constraint transforms;
mark all "old" style interface methods as DEPRECATED;
remove "old" style interface usage from examples and tutorials.
As a consequence, it turns out that, while pyro.param() returns the results in the constrained (user-facing) space, the older method named_parameters() returns the unconstrained (i.e. for internal use only) values, hence the apparent discrepancy.
Indeed, it's not difficult to verify that the scales values returned by the two methods above are related by a logarithmic transformation:
import numpy as np
items = list(pyro.get_param_store().named_parameters()) # unconstrained space
i = 0
for name in pyro.get_param_store().keys():
    if 'scales' in name:
        temp = np.log(
            pyro.param(name).item()  # constrained space
        )
        print(temp, items[i][1][0].item(), np.allclose(temp, items[i][1][0].item()))
    i += 1
# result:
-5.027793402915326 -5.0277934074401855 True
-4.600319371162187 -4.6003193855285645 True
-3.3920585732532835 -3.3920586109161377 True
Why does this discrepancy affect only the scales parameters? That's because scales (i.e. essentially standard deviations) are by definition constrained to be positive; that doesn't hold for locs (i.e. means), which are unconstrained, hence the two representations coincide for them.
As a result of the question above, a new bullet has now been added to the ParamStore documentation, giving a relevant hint:
in general parameters are associated with both constrained and unconstrained values. for example, under the hood a parameter that is constrained to be positive is represented as an unconstrained tensor in log space.
as well as in the documentation of the named_parameters() method of the old interface:
Note that, in the event the parameter is constrained, unconstrained_value is in the unconstrained space implicitly used by the constraint.
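So, in practice, if you want the constrained (user-facing) values, stick to the new-style interface. A minimal sketch of the access patterns (assuming the same fitted model as above; per the developer quote, the dict-like interface exposes only constrained values):

ps = pyro.get_param_store()
for name in ps.keys():
    print(name,
          pyro.param(name).detach().numpy(),  # constrained value
          ps[name].detach().numpy())          # dict-style access, also constrained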
A program to run a Schmid-Leiman transformation using SPSS's Matrix language was published in 2005 by Woolf & Preising in Behavior Research Methods (volume 37, pages 48-58). It is probably not important for you to know what a Schmid-Leiman transformation is, but I'll explain in comments if you feel it is necessary.
In modifying the program for my own data, I'm getting an error I can't figure out:
Error # 12302 in column 12. Text: ,
Syntax error.
Execution of this command stops.
Error in RIGHT HAND SIDE of COMPUTE command.
The MATRIX statement skipped.
Here is the beginning of the code. The error is reported as occurring on line 6:
* Encoding: UTF-8.
* Schmid-Leiman Solution for 2 level higher-order Factor analysis.
Matrix.
* ENTER YOUR SPECIFICATIONS HERE.
* Enter first-order pattern matrix.
Compute F1={.461, .253, -.058, -.069;
.241, .600, .143, .033;
.582, .047, -.077, -.125;
.327, .297, -.120, -.166;
.176, .448, -.240, -.099;
.680, .069, -.036, -.138;
.415, .228, -.091, -.153;
.
.
.
.390, .205, .002, -.098;
.164, .369, -.170, -.047
}.
As shown above, the text generating the error is reported as a comma (,), but the actual text in column 12 (following the COMPUTE statement) is an opening brace ({). So I have no idea what is going on. Can someone help?
For reference, the original code as proposed by Woolf & Preising (2005) is found here; the Woolf & Preising article itself is found here.
PS: The sample program given in the link above does run on my copy of SPSS. Here's the beginning of that code:
* Schmid-Leiman Solution for 2 level higher-order Factor analysis.
Matrix.
* ENTER YOUR SPECIFICATIONS HERE.
* Enter first-order pattern matrix.
Compute F1={0.099, 0.5647, -0.1521;
0.0124, 0.9419, -0.1535;
-0.1501, 0.6177, 0.4218;
0.7441, -0.0882, 0.1425;
0.6241, 0.2793, -0.1137;
0.8693, -0.0331, 0.0289;
-0.0154, -0.2706, 0.6262;
-0.0914, 0.0995, 0.7216;
0.1502, 0.0835, 0.398}.
I have a SOAP node that retrieves information from a URL in a tree structure.
Then I have a Compute node to assign each environment variable from each namespaced variable of the SOAP response.
And finally, I have a Mapping node to move the content into my message assembly structure in XML.
The error it gives me comes from the Compute node.
I have a structure like this:
ListDocs
    Description
    DocType
    ListTypes
        Attribute
        Lenght
        Description
        Nature
        Required
ListDocs
    Description
    DocType
    ListTypes
        Attribute
        Lenght
        Description
        Nature
        Required
ListDocs
    Description
    DocType
    ListTypes
        Attribute
        Lenght
        Description
        Nature
        Required
The problem is that when I define the variables, I do it like the code below, in the Compute node:
WHILE I < InputRoot.SOAP.Body.ns:obterTiposDocProcessosResponse.ns:return.ns75:processo.ns75:listaTiposDocumentos DO
    SET Environment.Variables.XMLMessage.return.process.listDocs.description = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:description;
    SET Environment.Variables.XMLMessage.return.process.listDocs.tipoDocumento = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:DocType;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.attribute = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:listTypes.ns75:atribbute;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.lenght = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:listTypes.ns75:lenght;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.description = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:listTypes.ns75:description;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.nature = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:listTypes.ns75:nature;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.required = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs.ns75:listTypes.ns75:required;
    SET I = I+1;
END WHILE;
But in my final XML structure, it only prints the values of my first listDocs, and I want to print all of my listDocs structures.
Note: with the WHILE like this, it doesn't even work; I have to remove the WHILE to print the first listDocs, as I said above.
Any help? I need a way to loop over the structures, with a WHILE or something.
You should try to use the following syntax:
DECLARE I INTEGER 1;
DECLARE J INTEGER;
SET J = CARDINALITY(InputRoot.SOAP.Body.ns:obterTiposDocProcessosResponse.ns:return.ns75:processo.ns75:listaTiposDocumentos[]);
WHILE I <= J DO
    SET Environment.Variables.XMLMessage.return.process.listDocs.description = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:description;
    ....
    SET I = I + 1;
END WHILE;
You only missed the CARDINALITY function to get the number of elements, the [] to reference the whole list, and the [I] subscript when accessing the elements.
Note: in my sample above, the environment is overwritten at each iteration of the loop, so only the last record will be printed. You can use [I] in the output as well if you want to build a list in the output, or you can use the following statement to push each message to the output terminal (meaning you have one message in input and three messages coming out of the output terminal):
PROPAGATE TO TERMINAL 'Out';
So for example, based on your code, if you want to generate 3 messages from your input containing multiple elements:
DECLARE I INTEGER 1;
DECLARE J INTEGER;
SET J = CARDINALITY(InputRoot.SOAP.Body.ns:obterTiposDocProcessosResponse.ns:return.ns75:processo.ns75:listaTiposDocumentos[]);
WHILE I <= J DO
    SET Environment.Variables.XMLMessage.return.process.listDocs.description = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:description;
    SET Environment.Variables.XMLMessage.return.process.listDocs.tipoDocumento = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:DocType;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.attribute = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:listTypes.ns75:atribbute;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.lenght = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:listTypes.ns75:lenght;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.description = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:listTypes.ns75:description;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.nature = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:listTypes.ns75:nature;
    SET Environment.Variables.XMLMessage.return.process.listDocs.listTypes.required = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:listTypes.ns75:required;
    PROPAGATE TO TERMINAL 'Out';
    SET I = I + 1;
END WHILE;
RETURN FALSE;
For your general information, RETURN TRUE is the instruction that pushes the message built in the ESQL code to the output terminal. If you use the PROPAGATE instruction (same effect), you should RETURN FALSE to avoid sending an empty message after looping over your records. Another way to do it is to propagate to another terminal (e.g. 'Out1') and keep the RETURN TRUE. In that case, all your records would come out of the Out1 terminal, and one message would go out of the output terminal (due to the RETURN TRUE) once all the messages have been propagated (this can be useful in many situations).
So the key to understanding IIB and ESQL is that you are looking at in-memory Trees built from Nodes.
Each Node has pointers/REFERENCEs to PARENT, NEXTSIBLING, PREVSIBLING, FIRSTCHILD and LASTCHILD Nodes.
Nodes also have FIELDNAME, FIELDNAMESPACE, FIELDTYPE and FIELDVALUE attributes.
And last but not least, you are building Output Trees by navigating Input Trees. The Environment Tree, which you are using, is a special long-lasting Tree that you can both read from and write to.
So in your code, InputRoot.SOAP.Body.ns75:processo.ns75:listDocs can be thought of as shorthand for instructions to navigate to the ns75:listDocs Node. The dots '.' tell the ESQL interpreter the name of the child Node of the current Node. If you were telling someone how to navigate the Nodes, it would go something like this:
Start at InputRoot. InputRoot is a special Node that is automatically available to you in your ESQL module's code.
Navigate to the first child Node of InputRoot that has the name SOAP.
Navigate to the first child Node of SOAP that has the name Body.
Navigate to the first child Node of Body that has the name processo and is in the ns75 namespace.
Navigate to the first child Node of processo that has the name listDocs and is in the ns75 namespace.
In the absence of a subscript, ESQL assumes you want the first Node that matches the specified name; ns75:listDocs and ns75:listDocs[1] both refer to the same Node.
This explains what was happening in your code. You were always navigating to the same listDocs[1] node in the InputRoot and Environment Trees.
@Jerem's code improves on what you were doing by at least navigating across the listDocs Nodes in the Input tree.
For each iteration of the loop the subscript [I] gets incremented, and thus it chooses a different listDocs Node. The listDocs Nodes are siblings, and so the code will access the first, second and third instances of the listDocs Nodes.
InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[1] <-- Iteration I=1
InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[2] <-- Iteration I=2
InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[3] <-- Iteration I=3
To correct @Jerem's answer, you'd need to use subscripts on the left-hand side of the statement as well. Picking the description field as an example, you'd need to change your code as follows:
SET Environment.Variables.XMLMessage.return.process.listDocs[I].listTypes.description = InputRoot.SOAP.Body.ns75:processo.ns75:listDocs[I].ns75:listTypes.ns75:description;
Using subscripts is regarded as a performance no-no. Imagine you had 10,000 listDocs: each and every iteration of the loop would walk down the tree over the InputRoot, SOAP, Body and ns75:processo Nodes and then across the listDocs sibling Nodes until it found the ns75:listDocs[I] Node.
This means that by the time we get round to processing ns75:listDocs[10000], it will have repeatedly walked over all the other listDocs Nodes time and time again. In fact, we can calculate that it would have walked over (4 x 10,000) + ((10,000 x (10,000 + 1)) / 2) = 50,045,000 Nodes.
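As a quick sanity check of that arithmetic, under the simple cost model of 4 ancestor Nodes per lookup plus the cumulative walk across siblings (Python used here purely for the arithmetic):

n = 10000
visits = 4 * n + n * (n + 1) // 2  # ancestor walks + cumulative sibling walks
print(visits)                      # 50045000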
So it's REFERENCEs to the rescue, and they are also the answer to your question. Try a loop like this:
DECLARE ns75 NAMESPACE 'http://something.or.other.from.your.wsdl';
DECLARE InListDocsRef REFERENCE TO
InputRoot.SOAP.Body.ns75:processo.ns75:listDocs;
WHILE LASTMOVE(InListDocsRef) DO
    DECLARE EnvListDocsRef REFERENCE TO Environment;
    CREATE LASTCHILD OF Environment.Variables.XMLMessage.return.process AS EnvListDocsRef NAME 'listDocs';
    SET EnvListDocsRef.description = InListDocsRef.ns75:description;
    SET EnvListDocsRef.tipoDocumento = InListDocsRef.ns75:DocType;
    SET EnvListDocsRef.listTypes.attribute = InListDocsRef.ns75:listTypes.ns75:atribbute;
    SET EnvListDocsRef.listTypes.lenght = InListDocsRef.ns75:listTypes.ns75:lenght;
    SET EnvListDocsRef.listTypes.description = InListDocsRef.ns75:listTypes.ns75:description;
    SET EnvListDocsRef.listTypes.nature = InListDocsRef.ns75:listTypes.ns75:nature;
    SET EnvListDocsRef.listTypes.required = InListDocsRef.ns75:listTypes.ns75:required;
    MOVE InListDocsRef NEXTSIBLING REPEAT NAME;
END WHILE;
The code above only walks over 4 + 10,000 Nodes i.e. 10 thousand Nodes vs 50 million Nodes.
A couple of other useful things to know about setting references are:
To point to the last element you can use a subscript of [<]. So to point to the last ListItem in the aggregate MyList, you would code Environment.MyList.ListItem[<].
You can use an asterisk * to set a reference to an element in the tree whose name you don't know, e.g. Environment.MyAggregate.* points to the first child of MyAggregate regardless of its name.
You can also use asterisks * to choose an element regardless of its namespace, e.g. InListDocsRef.*:listTypes.*:description.
For anonymous namespaced elements use *:*, but be very careful: * and *:* are not the same thing. The first means any element in no namespace; the second means any element in any namespace.
To process lists in reverse combine the [<] subscript with the PREVIOUSSIBLING option of MOVE.
So a chunk of code for reversing a list might go something like:
DECLARE MyReverseListItemWalkingRef REFERENCE TO Environment.MyList.ListItem[<];
WHILE LASTMOVE(MyReverseListItemWalkingRef) DO
    CREATE LASTCHILD OF OutputRoot.ReversedList.Item NAME 'Description' VALUE MyReverseListItemWalkingRef.Desc;
    MOVE MyReverseListItemWalkingRef PREVIOUSSIBLING REPEAT NAME;
END WHILE;
Learn how to use REFERENCEs; they are extremely powerful and one of your simplest options when it comes to performance.
Let's say I have a simple Java program including 2 classes:
Example, Example2
and another class that uses both classes:
ExamplesUsage
and I have corresponding Bazel build targets of kind java_library:
example, example2, examples_usage
so example and example2 need to be compiled before examples_usage is built.
I want to accumulate information from all three targets using the Bazel aspect propagation technique. How do I go about doing that?
Here's an example for accumulating the number of source files in this build chain:
def _counter_aspect_impl(target, ctx):
    sources_count = len(ctx.rule.attr.srcs)
    print("%s: own amount - %s" % (target.label.name, sources_count))
    for dep in ctx.rule.attr.deps:
        sources_count = sources_count + dep.count
    print("%s: including deps: %s" % (target.label.name, sources_count))
    return struct(count = sources_count)

counter_aspect = aspect(
    implementation = _counter_aspect_impl,
    attr_aspects = ["deps"],
)
If we run it on the hypothetical Java program, we get the following output:
example2: own amount - 1
example2: including deps: 1
example: own amount - 1
example: including deps: 1
examples_usage: own amount - 1
examples_usage: including deps: 3
As you can see, the dependency targets' aspects were run first, and only then was the dependent target's aspect run.
Of course, in order to actually utilize the information, ctx.action or ctx.file_action needs to be called to persist the gathered data.
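For instance, here is a minimal sketch of persisting the count with the legacy ctx.new_file / ctx.file_action API of the same era as the aspect above; the output file name and content format are my own choices, not anything prescribed by Bazel:

def _counter_aspect_impl(target, ctx):
    sources_count = len(ctx.rule.attr.srcs)
    for dep in ctx.rule.attr.deps:
        sources_count = sources_count + dep.count

    # Write the accumulated count to a per-target file so it outlives the analysis phase.
    count_file = ctx.new_file(target.label.name + ".count")
    ctx.file_action(output = count_file, content = str(sources_count))

    return struct(count = sources_count)

counter_aspect = aspect(
    implementation = _counter_aspect_impl,
    attr_aspects = ["deps"],
)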
DISCLAIMER: This question is only for those who have access to the econometrics toolbox in Matlab.
The Situation: I would like to use Matlab to simulate N observations from an ARIMA(p, d, q) model using the econometrics toolbox. What's the difficulty? I would like the innovations to be simulated with deterministic, time-varying variance.
Question 1) Can I do this using the built-in Matlab simulate function without altering it myself? As near as I can tell, this is not possible. From my reading of the docs, the innovations can either be specified to have a constant variance (i.e. the same variance for each innovation), or be specified to be stochastically time-varying (e.g. a GARCH model), but they cannot be deterministically time-varying, where I, the user, choose their values (except in the trivial constant case).
Question 2) If the answer to question 1 is "No", then does anyone see any reason why I can't edit the simulate function from the econometrics toolbox as follows:
a) Alter the preamble such that the function won't throw an error if the Variance field in the input model is set to a numeric vector instead of a numeric scalar.
b) Alter line 310 of simulate from:
E(:,(maxPQ + 1:end)) = Z * sqrt(variance);
to
E(:,(maxPQ + 1:end)) = (ones(NumPath, 1) * sqrt(variance)) .* Z;
where NumPath is the number of paths to be simulated, and it can be assumed that I've included an error trap to ensure that the (input) deterministic variance path stored in variance is of the right length (i.e. equal to the number of observations to be simulated per path).
Any help would be most appreciated. Apologies if the question seems basic; I just haven't ever edited one of MathWorks' own functions before and didn't want to do something foolish.
UPDATE (2012-10-18): I'm confident that the code edit I've suggested above is valid, and I'm mostly confident that it won't break anything else. However it turns out that implementing the solution is not trivial due to file permissions. I'm currently talking with Mathworks about the best way to achieve my goal. I'll post the results here once I have them.
It's been a week and a half with no answer, so I think I'm probably okay to post my own answer at this point.
In response to my question 1): no, I have not found any way to do this with the built-in Matlab functions.
In response to my question 2): yes, what I have posted will work. However, it was a little more involved than I imagined due to Matlab file permissions. Here is a step-by-step guide:
i) Somewhere in your Matlab path, create the directory #arima_Custom.
ii) In the command window, type edit arima. Copy the text of this file into a new m-file and save it in the directory #arima_Custom with the filename arima_Custom.m.
iii) Locate the econometrics toolbox on your machine. Once found, look for the directory #arima in the toolbox. This directory will probably be located (on a Linux machine) at something like $MATLAB_ROOT/toolbox/econ/econ/#arima (on my machine, $MATLAB_ROOT is at /usr/local/Matlab/R2012b). Copy the contents of #arima to #arima_Custom, except do NOT copy the file arima.m.
iv) Open arima_Custom for editing, ie edit arima_Custom. In this file change line 1 from:
classdef (Sealed) arima < internal.econ.LagIndexableTimeSeries
to
classdef (Sealed) arima_Custom < internal.econ.LagIndexableTimeSeries
Next, change line 406 from:
function OBJ = arima(varargin)
to
function OBJ = arima_Custom(varargin)
Now, change line 993 from:
if isa(OBJ.Variance, 'double') && (OBJ.Variance <= 0)
to
if isa(OBJ.Variance, 'double') && (sum(OBJ.Variance <= 0) > 0)
v) Open the simulate.m located in #arima_Custom for editing (we copied it there in step iii). It is probably best to open this file by navigating to it manually in the Current Folder window, to ensure the correct simulate.m is opened. In this file, alter line 310 from:
E(:,(maxPQ + 1:end)) = Z * sqrt(variance);
to
%Check that the input variance is of the right length (if it isn't scalar)
if isscalar(variance) == 0
    if size(variance, 2) ~= 1
        error('Deterministic variance must be a column vector');
    end
    if size(variance, 1) ~= numObs
        error('Deterministic variance vector is incorrect length relative to number of observations');
    end
else
    variance = variance(ones(numObs, 1));
end
%Scale innovations using deterministic variance
E(:,(maxPQ + 1:end)) = sqrt(ones(numPaths, 1) * variance') .* Z;
And we're done!
You should now be able to simulate with deterministically time-varying variance using the arima_Custom class, for example (for an ARIMA(0,1,0)):
ARIMAModel = arima_Custom('D', 1, 'Variance', ScalarVariance, 'Constant', 0);
ARIMAModel.Variance = TimeVaryingVarianceVector;
[X, e, VarianceVector] = simulate(ARIMAModel, NumObs, 'numPaths', NumPaths);
Further, you should also still be able to use Matlab's original arima class, since we didn't alter it.