Rxweb validator in angular
When I compare two inputs with a condition (if x has a value, then y should have required validation), giving a value for x and no value for y still leaves the form valid.
When I give a value to x and a value to y and then delete y's value, it shows the required error.
I need the required validation to appear on the y field automatically as soon as x has a value.
When I try to use NVIDIA apex.amp O1 to accelerate my training, it reports an error in my code at logits = einsum('b x y d, r d -> b x y r', q, rel_k):
RuntimeError: expected scalar type Half but found Float
It means that rel_k should be a torch.HalfTensor.
rel_k is defined as follows: self.rel_height = nn.Parameter(torch.randn(height * 2 - 1, dim_head) * scale)
But when I specify the type of rel_k to be torch.HalfTensor, it reports an error saying that I should not specify the dtype manually:
RuntimeError: Found param encoder.layers.0.blocks.0.attn.rel_pos_emb.rel_height with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you do not need to call .half() on your model before passing it, no matter what optimization level you choose.
What should I do to use amp O1 correctly in my code?
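One common way to satisfy amp O1 (a sketch, not from the original post; the module and names are illustrative) is to leave the parameter in float32, as amp requires, and cast it to the query's dtype at the point of use, so the einsum operands always match:

```python
import torch
import torch.nn as nn

class RelPosEmb(nn.Module):
    """Minimal sketch of a relative-position embedding; names are illustrative."""
    def __init__(self, height, dim_head, scale):
        super().__init__()
        # Keep the parameter as float32: amp.initialize expects FloatTensor params.
        self.rel_height = nn.Parameter(torch.randn(height * 2 - 1, dim_head) * scale)

    def forward(self, q):
        # Cast at the point of use so the dtype matches whatever amp made q
        # (a HalfTensor under O1 on GPU); on plain float32 input this is a no-op.
        rel_k = self.rel_height.to(q.dtype)
        return torch.einsum('b x y d, r d -> b x y r', q, rel_k)

emb = RelPosEmb(height=4, dim_head=8, scale=0.1)
q = torch.randn(2, 3, 3, 8)  # under amp O1 on GPU this would be a HalfTensor
out = emb(q)                 # shape (2, 3, 3, 7), where 7 = height * 2 - 1
```

This way the parameter stays a FloatTensor for amp's bookkeeping, while the einsum itself always sees matching dtypes.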
I'm performing some tests, and I'm getting a ValueError while splitting.
This error is coming because your X and Y don't have the same length (which is what train_test_split requires), i.e., X.shape[0] != Y.shape[0].
Try this:
>>> X.shape
>>> Y.shape
And then to fix this:
You have to remove the extra list from inside of np.array() when defining X, or remove the extra dimension afterwards with X = X.reshape(X.shape[1:]).
Then transpose X by running X = X.transpose() to get an equal number of samples in X and Y.
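As a sketch of that fix (the shapes here are made up for illustration):

```python
import numpy as np

# An accidental extra pair of brackets gives X an extra leading dimension:
X = np.array([np.zeros((3, 100))])  # shape (1, 3, 100)
Y = np.zeros(100)                   # shape (100,)

X = X.reshape(X.shape[1:])  # drop the extra leading dimension -> (3, 100)
X = X.transpose()           # -> (100, 3): one row per sample

# Now X.shape[0] == Y.shape[0], which is what train_test_split requires.
assert X.shape[0] == Y.shape[0]
```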
I am modeling an LSTM model that contains multiple features and one target value. It is a regression problem.
I suspect that my data preparation for the LSTM is erroneous, mainly because the model learns nothing but the average of the target value.
The following code I wrote is for preparing the data for the LSTM:
# df is a pandas data frame that contains the feature columns (f1 to f5) and the target value named 'target'
# all columns of the df are time series data (including the 'target')
# seq_length is the sequence length
def prepare_data_multiple_feature(df):
    X = []
    y = []
    for x in range(len(df)):
        start_id = x
        end_id = x + seq_length
        one_data_point = []
        if end_id + 1 <= len(df):
            # prepare X
            for col in ['f1', 'f2', 'f3', 'f4', 'f5']:
                one_data_point.append(np.array(df[col].values[start_id:end_id]))
            X.append(np.array(one_data_point))
            # prepare y
            y.append(np.array(df['target'].values[end_id]))
    assert len(y) == len(X)
    return X, y
Then, I reshape the data as follows:
X, y = prepare_data_multiple_feature(df)
X = X.reshape((len(X), seq_length, 5)) #5 is the number of features, i.e., f1 to f5
is my data preparation method and data reshaping correct?
As @isp-zax mentioned, please provide a reprex so we can reproduce the outcome and see where the problem lies.
As an aside, you could use for col in df.columns instead of listing all the column names.
As a minor optimisation, the first loop should be executed as for x in range(len(df) - seq_length); otherwise, at the end you execute the loop seq_length - 1 times without actually processing any data.
Also, df.values[a:b] will not include the element at index b, so if you want the window ending at the last row to be included in your X, end_id can be equal to len(df), i.e. you could execute your inner condition (prepare and append) for if end_id <= len(df):
Apart from that I think it would be simpler to read if you sliced the dataframe across columns and rows at the same time, without using one_data_point, i.e.
to select seq_length rows without the (last) target column, simply do:
df.values[start_id:end_id, :-1]
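Putting that together, a compact version of the windowing could look like this (a sketch with a tiny made-up frame; column names follow the question):

```python
import numpy as np
import pandas as pd

seq_length = 3
df = pd.DataFrame(np.arange(60.0).reshape(10, 6),
                  columns=['f1', 'f2', 'f3', 'f4', 'f5', 'target'])

X, y = [], []
for start_id in range(len(df) - seq_length):
    end_id = start_id + seq_length
    X.append(df.values[start_id:end_id, :-1])  # rows and feature columns in one slice
    y.append(df['target'].values[end_id])      # target right after the window
X, y = np.array(X), np.array(y)

# X is already (samples, seq_length, features); no reshape needed.
assert X.shape == (len(df) - seq_length, seq_length, 5)
assert y.shape == (len(df) - seq_length,)
```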
In the CausalImpact package, the supplied covariates are independently selected with some prior probability M/J, where M is the expected model size and J is the number of covariates. However, on page 11 of the paper, they say they get the values by "asking about the expected model size M." I checked the documentation for CausalImpact but was unable to find any more information. Where is this done in the package? Is there a parameter I can set in a function call to choose my desired M?
You are right, this is not directly possible through the CausalImpact interface, but it can be done. CausalImpact uses bsts behind the scenes, and that package allows you to set this parameter. So you have to define your model using bsts first, set the parameter there, and then provide the model to your CausalImpact call like this (modified example from the CausalImpact manual):
post.period <- c(71, 100)
post.period.response <- y[post.period[1] : post.period[2]]
y[post.period[1] : post.period[2]] <- NA
ss <- AddLocalLevel(list(), y)
bsts.model <- bsts(y ~ x1, ss, niter = 1000, expected.model.size = 4)
impact <- CausalImpact(bsts.model = bsts.model,
post.period.response = post.period.response)
I am following this demo:
https://github.com/torch/demos/blob/master/linear-regression/example-linear-regression.lua
feval = function(x_new)
   -- set x to x_new, if different
-- (in this simple example, x_new will typically always point to x,
-- so the copy is really useless)
if x ~= x_new then
x:copy(x_new)
end
-- select a new training sample
_nidx_ = (_nidx_ or 0) + 1
if _nidx_ > (#data)[1] then _nidx_ = 1 end
local sample = data[_nidx_]
local target = sample[{ {1} }] -- this funny looking syntax allows
local inputs = sample[{ {2,3} }] -- slicing of arrays.
dl_dx:zero()
local loss_x = criterion:forward(model:forward(inputs), target)
model:backward(inputs, criterion:backward(model.output, target))
return loss_x, dl_dx
end
I have a few doubts about this function:

1. Where is the argument x_new (or its copy x) used in the code?
2. What does _nidx_ = (_nidx_ or 0) + 1 mean?
3. What is the value of _nidx_ when the function is first called?
4. Where is dl_dx updated? Ideally it should have been just after local loss_x is updated, but it isn't written explicitly.
EDIT:
My point #4 is very clear now. For those who are interested:
(source- deep learning, oxford, practical 3 lab sheet)
Where is the argument x_new (or its copy x) used in the code?
x is the tensor of parameters of your model. It was previously acquired via x, dl_dx = model:getParameters(). model:forward() and model:backward() automatically use this parameter tensor. x_new is a new set of parameters for your model and is provided by the optimizer (SGD). If it is ever different from your model's parameter tensor, your model's parameters will be set to these new parameters via x:copy(x_new) (in-place copy of tensor's x_new values to x).
What does _nidx_ = (_nidx_ or 0) + 1 mean?
It increases the value of _nidx_ by 1 ((_nidx_) + 1) or sets it to 1 ((0) + 1) if _nidx_ was not yet defined.
What is the value of _nidx_ when the function is first called?
It is never set before that function. Variables which were not yet set have the value nil in lua.
Where is dl_dx updated? Ideally it should have been just after local loss_x is updated, but it isnt written explicitly
dl_dx is the model's tensor of gradients. model:backward() computes the gradient per parameter given a loss and adds it to the model's gradient tensor. As dl_dx is the model's gradient tensor, its values will be increased. Notice that the gradient values are added, which is why you need to call dl_dx:zero() (which sets the values of dl_dx in-place to zero); otherwise your gradient values would keep increasing with every call of feval.
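The accumulation behaviour is easy to demonstrate; here is a small PyTorch sketch of the same mechanism (illustrative only, not part of the original Lua demo):

```python
import torch

w = torch.ones(3, requires_grad=True)
(w * 2.0).sum().backward()
first = w.grad.clone()       # grad of 2*sum(w) w.r.t. w is 2 everywhere
(w * 2.0).sum().backward()   # a second backward ADDS to the existing grad
assert torch.equal(w.grad, first * 2)  # gradients accumulated: now 4 everywhere
w.grad.zero_()               # the analogue of dl_dx:zero() in the Lua code
assert torch.all(w.grad == 0).item()
```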
x is a global variable, see line 126. The function only seems to update it, not to use it.
This is a common lua idiom: you set things to a parameter or a default value if it is not present. Typical use in functions:
function foo(a, b)
local a = a or 0
local b = b or "foo"
end
The idea is that an expression using and or or evaluates to the first or the second argument, according to their values. x and y yields y if x is truthy (neither nil nor false) and yields x otherwise.
x or y yields y if x is absent (nil or false) and yields x otherwise. Therefore, or is used for default arguments.
The two can be rewritten the following way:
-- x and y
if x then
return y
else
return x
end
-- x or y
if x then
return x
else
return y
end
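For comparison, a rough Python equivalent of the same default-value idiom (illustrative only; note that Python's or also treats 0, "" and [] as falsy, whereas Lua treats only nil and false that way):

```python
def foo(a=None, b=None):
    # mimics Lua's `local a = a or 0`: fall back to a default if no value given
    a = a or 0
    b = b or "foo"
    return a, b

print(foo())          # defaults used
print(foo(5, "bar"))  # explicit values kept
```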
You have _nidx_ = (_nidx_ or 0) + 1, so at the first call of the function _nidx_ is nil, since it has been defined nowhere. After that, it is (globally) set to 1 (0 + 1).
I'm not sure what you mean exactly. It is reset in line 152 and returned by the function itself. It is a global variable, so maybe there is an outer use for it?