Using a stationary solution as the initial conditions in a time-dependent model - COMSOL

Is there a way to use the stationary solution obtained in COMSOL 4.2 as the initial conditions in a time-dependent model? I want to run a 3D stationary simulation to find a solution (u) and its first derivative (ux), save this information to a file, and then use that file to provide the initial conditions in a time-dependent model.

This is for COMSOL 5.2, but should be similar for 4.2:
1. Create the stationary study. Leave "Values for dependent variables" in the study step settings at the default ("Physics-controlled" in 5.2). This uses the initial conditions you specified in your physics settings (usually 0).
2. Create the time-dependent step or study. In the physics settings, set the initial conditions to the appropriate dependent model variable names rather than the default 0. In the study step settings, set "Values for dependent variables" to User controlled > Solution > Your stationary study.
3. Solve the stationary study, then the time-dependent study. If you have both as steps in the same study, solve that study.


Time Series Variable Analysis

The big picture of the problem is the following: predicting engine failure by predicting the temperature of the engine, since high temperature is intuitively the main cause of failure. The first thing I want to do is check which other variables influence the temperature, such as the torque of the engine, the functioning mode of the engine, etc. Why? Because those are variables we can change in real life, and thus avoid high temperatures and, in turn, failure.
So, my question is how to find which variables the temperature depends on, and how much. Since the temperature depends on time, we are dealing with a time series problem, but not all the variables depend on their past values, so I am not sure that auto-regressive models would work.
The first thing that came to mind was to check whether there is a linear relationship. But reflecting on how a temperature physically evolves, I am fairly sure the dependence is exponential, so perhaps taking the natural logarithm transforms it into a linear problem, to which we can then apply a linear regression. The problem is that this won't capture the time dependency of the temperatures. I looked into autoregressive models, but I'm not sure they will work. For now, all I want is to see which variables have an impact on the temperature, not to predict the temperature.
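The log-transform idea can be sketched quickly on synthetic data. Everything below is made up for illustration (variable names, ranges, and the true coefficients); the point is only that ordinary least squares on the log of the temperature recovers each variable's influence when the dependence is exponential:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical driver variables (made-up ranges, not real engine data).
torque = rng.uniform(50, 300, n)            # N*m
mode = rng.integers(0, 3, n).astype(float)  # functioning mode as a numeric code

# Synthetic temperature with an exponential dependence on the drivers:
# T = 20 * exp(0.004*torque + 0.1*mode), times multiplicative noise.
temp = 20 * np.exp(0.004 * torque + 0.1 * mode) * rng.lognormal(0.0, 0.02, n)

# Taking the log turns the exponential relation into a linear one,
# so least squares gives each variable's per-unit effect on log-temperature.
A = np.column_stack([np.ones(n), torque, mode])
coef, *_ = np.linalg.lstsq(A, np.log(temp), rcond=None)

print(coef[1:])  # close to the true effects [0.004, 0.1]
```

The fitted coefficients are directly comparable as "percent change in temperature per unit of the driver", which answers the "which variables, and how much" part without yet modeling the time dependence.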

Applying multiple weights

I have a data set that has weighted scores based on gender and age profiles. I also have region data broken up into 7 states. I basically want to exclude one of those states and apply additional weights based on state to come up with a new "overall" score.
Manual Excel calculations are the only way I can think of doing this.
I need to take scores that already have a variable weight applied and add an additional weight dependent on region.
SPSS Statistics only allows a single weight variable to be applied at any one time, so Kevin Troy's comments are correct: you'll have to combine things into a single weight. If the data are properly combined into a single file you may find the Rake Weights extension that's installed with the Python Essentials useful, as you can specify multiple variables as inputs to the overall weighting scheme and have the weights calculated for you. If you're not familiar with the theory behind this, look up raking or rim weighting.
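If the data are in one file, combining the weights outside SPSS is just a per-case multiplication, after which the single combined weight can be applied as usual. A minimal sketch (state names, factors, and the rescaling convention are all hypothetical):

```python
import numpy as np

# Hypothetical respondent-level data: an existing demographic (gender/age)
# weight, and each respondent's state.
demo_weight = np.array([1.2, 0.8, 1.0, 1.1, 0.9])
state = np.array(["NSW", "VIC", "QLD", "NSW", "TAS"])

# Per-state adjustment factors; the excluded state gets a factor of 0.
state_factor = {"NSW": 1.1, "VIC": 0.95, "QLD": 1.05, "TAS": 0.0}

# SPSS accepts only one weight variable, so multiply the two together.
combined = demo_weight * np.array([state_factor[s] for s in state])

# Optionally rescale so the weights sum to the retained sample size.
retained = combined > 0
combined[retained] *= retained.sum() / combined[retained].sum()

print(combined)
```

The zero weight drops the excluded state from all weighted statistics, and the rescaling keeps the effective sample size interpretable.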

User behavior prediction/analysis

I am trying to apply machine learning methods to predict and analyze user behavior. The data I have is in the following format:
(screenshot of the data, showing a time column and an activity column)
I am new to machine learning, so I am trying to understand whether what I am doing makes sense. In the activity column I have two possibilities, which I represent as 0 or 1. In the time column, I have time of day mapped cyclically to the range 0-24. At a certain time (one-hot encoded), the user performs an activity. If I use the activity column as the target column and try to predict whether at a certain time the user will perform one activity or the other, does that make sense?
The reason I am trying to predict activity is that if my model gives me an activity prediction and in real time the user does something else (which he has not been doing over the last week or so), I want to treat that as a deviation from normal behavior.
Am I on the right track? Any suggestion will be appreciated. Thanks.
I think your idea is valid, but machine learning models are not 100% accurate all the time; that is why accuracy is measured for a model.
If you want high-performance predictive models, consider deep learning models, since their performance improves as the size of the training data set grows.
I think this is a great use case for a classification problem. Since you have only a few columns (features) in your dataset, I would say start with a simple boosted decision tree classifier.
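Before training any classifier, the deviation check itself can be prototyped with simple per-hour frequencies, which makes a useful baseline to compare a model against. A minimal sketch on made-up data (the hours, activities, and threshold are all hypothetical):

```python
from collections import defaultdict, Counter

# Hypothetical history: (hour, activity) pairs, activity encoded as 0 or 1.
history = [(7, 0), (7, 0), (7, 1), (12, 1), (12, 1), (22, 0), (22, 0), (22, 0)]

# Estimate P(activity | hour) from the recent observations.
counts = defaultdict(Counter)
for hour, act in history:
    counts[hour][act] += 1

def deviation(hour, act, threshold=0.25):
    """Flag the observation if that activity was rare (or unseen) at that hour."""
    total = sum(counts[hour].values())
    p = counts[hour][act] / total if total else 0.0
    return p < threshold

print(deviation(22, 1))  # activity 1 never seen at hour 22 -> True
print(deviation(7, 0))   # common at hour 7 -> False
```

A trained classifier plays the same role: instead of raw frequencies, you flag a deviation when the model assigns low probability to the activity actually observed.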
Your thinking is correct; that is basically how fraud-detection AI works in some cases. One option to pursue is a decision tree model, which may help it scale dynamically.
I was working on the same project but in a different direction, have a look maybe it can help :) https://github.com/dmi3coder/behaiv-java.

Hidden Markov model: next state depends only on the previous state? What about the previous n states?

I am working on a prototype framework.
Basically I need to generate a model or profile for each individual's lifestyle based on some sensor data about him/her, such as GPS, motions, heart rate, surrounding environment readings, temperature etc.
The proposed model or profile is a knowledge representation of an individual's lifestyle pattern. Maybe a graph with probabilities.
I am thinking of using a Hidden Markov Model to implement this, as the states in the HMM can be Working, Sleeping, Leisure, Sport, etc., and the observations can be a set of various sensor readings.
My understanding of HMMs is that the next state S(t) depends only on the previous state S(t-1). However, in reality a person's activity might depend on the previous n states. Is it still a good idea to use an HMM, or should I use some other, more appropriate model? I have seen some work on second-order and higher-order Markov chains; does that also apply to HMMs?
I really appreciate if you can give me a detailed explanation.
Thanks!!
What you are talking about is a first-order HMM, in which your model only has knowledge of the previous state. In an order-n Markov model, the next state depends on the previous n states; maybe that is what you are looking for?
You are right that, as far as simple HMMs are concerned, the next state depends only upon the current state. However, it is also possible to build an mth-order HMM by defining the transition probabilities as shown in this link. As the order increases, though, so does the overall size of your matrices and hence the complexity of your model, so it's really up to you whether you're up for the challenge and willing to put in the requisite effort.
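The standard trick for the hidden-state chain is to expand the state space so the model becomes first-order again: take pairs (or n-tuples) of states as the new states. A sketch for a toy second-order chain (the states and probabilities are made up):

```python
import numpy as np

# Toy second-order chain over two activities, 0 = Work and 1 = Sleep:
# P2[(prev2, prev1)] gives the distribution of the next state.
P2 = {
    (0, 0): [0.7, 0.3],
    (0, 1): [0.4, 0.6],
    (1, 0): [0.8, 0.2],
    (1, 1): [0.1, 0.9],
}

# Reduce to first order by treating pairs (prev, current) as the new states.
# A pair (a, b) can only move to a pair of the form (b, c).
pairs = list(P2.keys())
T = np.zeros((len(pairs), len(pairs)))
for i, (a, b) in enumerate(pairs):
    for j, (b2, c) in enumerate(pairs):
        if b2 == b:
            T[i, j] = P2[(a, b)][c]

# Each row of the expanded transition matrix is still a distribution.
print(T.sum(axis=1))  # [1. 1. 1. 1.]
```

The cost is visible here: with K states and order n, the expanded model has K^n states, which is why higher-order HMMs get expensive quickly.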

Reinforcement Learning With Variable Actions

All the reinforcement learning algorithms I've read about are usually applied to a single agent that has a fixed number of actions. Are there any reinforcement learning algorithms for making a decision while taking into account a variable number of actions? For example, how would you apply an RL algorithm in a computer game where a player controls N soldiers, and each soldier has a random number of actions based on its condition? You can't formulate a fixed number of actions for a global decision maker (i.e. "the general") because the available actions are continually changing as soldiers are created and killed. And you can't formulate a fixed number of actions at the soldier level, since a soldier's actions are conditional on its immediate environment. If a soldier sees no opponents, it might only be able to walk, whereas if it sees 10 opponents, it has 10 new possible actions: attacking one of the 10 opponents.
What you describe is nothing unusual. Reinforcement learning is a way of finding the value function of a Markov Decision Process. In an MDP, every state has its own set of actions. To proceed with reinforcement learning application, you have to clearly define what the states, actions, and rewards are in your problem.
If you have a number of actions for each soldier that are available or not depending on some conditions, then you can still model this as selection from a fixed set of actions. For example:
- Create a "utility value" for each action in the full action set for each soldier
- Choose the highest-valued action, ignoring those actions that are not available at a given time
If you have multiple possible targets, then the same principle applies, except this time you model your utility function to take the target designation as an additional parameter, and run the evaluation function multiple times (one for each target). You pick the target that has the highest "attack utility".
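That masking scheme can be sketched in a few lines (the action names and utility values below are hypothetical; in practice the utilities would come from your learned value function, evaluated once per candidate target):

```python
import numpy as np

# Hypothetical utility scores for one soldier's full action set.
actions = ["walk", "hide", "attack_0", "attack_1", "attack_2"]
utility = np.array([0.2, 0.5, 0.9, 0.4, 0.7])

# Availability mask: suppose opponents 1 and 2 are not visible right now.
available = np.array([True, True, True, False, False])

# Pick the highest-valued action among the available ones by sending
# unavailable actions to -infinity before the argmax.
masked = np.where(available, utility, -np.inf)
best = actions[int(np.argmax(masked))]
print(best)  # "attack_0"
```

Because unavailable actions are masked rather than removed, the action space stays fixed-size from the algorithm's point of view, which is what lets standard RL methods apply.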
In continuous action spaces, the policy network often outputs the mean and/or the variance of a distribution, from which you then sample the action, assuming it follows that distribution.
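A minimal sketch of that sampling step, assuming a Gaussian policy whose network has already produced a mean and a log-standard-deviation for each action dimension (the numbers are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose the policy network's head outputs, for a 2-D continuous action,
# a mean vector and a log-std vector (log-std keeps the scale positive).
mean = np.array([0.3, -1.2])
log_std = np.array([-0.5, -0.5])

# Sample the action from N(mean, exp(log_std)^2), elementwise.
action = mean + np.exp(log_std) * rng.standard_normal(2)
print(action.shape)  # (2,)
```

Sampling (rather than always taking the mean) is what provides exploration during training.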
