How to understand the memory issue of breadth-first search in branch and bound

I was recently confused by the branch and bound method. There are three search strategies in branch and bound: depth-first search, breadth-first search, and best-first search. All the books and papers state that breadth-first and best-first search use more of the computer's memory. How should I understand this? Take a binary tree as an example: when a node (the parent node) is taken from the live-node list to be processed, two child nodes are generated and inserted into the live-node list, while the parent node is deleted; so the list grows by only one node per expansion. From this point of view, all three search strategies use the same amount of memory.
Am I right? This has confused me for a long time. Could anyone give me some advice?

Well, you could think about the data structures:
Breadth-first search is implemented with a queue. When you expand a node (the parent node), you append its children to the queue and delete the parent.
Let's take an example.
Expand 45: we put 20 and 70 in the queue and delete 45, so:
20 | 70
Expand 20: we dequeue the first node and enqueue its children:
70 | 10 | 28
Expand 70: we dequeue the first node and enqueue its children:
10 | 28 | 60 | 85
And so on...
As you can see, the space complexity is exponential: O(b^d) (b = branching factor; d = depth, initially 0).
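Here is a minimal R sketch (mine, not from the original answer; it reuses the example values above) that makes the growth visible: every expansion removes one node but adds b, so at depth d the queue holds b^d nodes.

queue <- c(45)
expand_bfs <- function(queue, children) c(queue[-1], children)  # dequeue the front, enqueue its children
queue <- expand_bfs(queue, c(20, 70))  # 20 | 70
queue <- expand_bfs(queue, c(10, 28))  # 70 | 10 | 28
queue <- expand_bfs(queue, c(60, 85))  # 10 | 28 | 60 | 85
print(queue)  # 4 nodes: the whole depth-2 level (2^2) is in memory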
Depth-first search is implemented with a stack:
Expand 45: we push 20 and 70 onto the stack and delete 45, so:
20 | 70
Expand 20: we pop the node on top of the stack and push its children:
10 | 28 | 70
Expand 10: we pop the node on top of the stack and push its children:
1 | 18 | 28 | 70
And so on...
Now the space complexity is linear in the depth: O(d) (more precisely O(b*d), since up to b - 1 unexpanded siblings are stored per level). The time complexity is O(b^d) for both algorithms.
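The same sketch for the stack version shows why the memory stays small: the stack only ever holds the current path plus the unexpanded siblings along it.

stack <- c(45)
expand_dfs <- function(stack, children) c(children, stack[-1])  # pop the top, push its children
stack <- expand_dfs(stack, c(20, 70))  # 20 | 70
stack <- expand_dfs(stack, c(10, 28))  # 10 | 28 | 70
stack <- expand_dfs(stack, c(1, 18))   # 1 | 18 | 28 | 70
print(stack)  # roughly b - 1 leftover siblings per level: O(b*d)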
Best-first search sorts the queue according to a heuristic evaluation function f(n) and expands the node with the best f(n). Since it keeps all generated nodes in the (priority) queue, its space complexity is exponential in the worst case, O(b^d), just like breadth-first search; that is why the books group it with breadth-first search as memory-hungry.
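A tiny sketch of the only mechanical difference from breadth-first search, namely that the frontier is ordered by f(n) (the f-values below are hypothetical):

frontier <- data.frame(node = c(20, 70), f = c(3, 1))  # hypothetical f-values
frontier <- frontier[order(frontier$f), ]              # best (lowest) f comes first
frontier$node[1]                                       # 70 would be expanded next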
Hope this helps.

Related

machine learning model different inputs

I have a dataset that consists of: Date, ID (the id of the event), number_of_activities, and running_sum (the running sum of activities by id).
This is a part of my data:
date | id (id of the event) | number_of_activities | running_sum |
2017-01-06 | 156 | 1 | 1 |
2017-04-26 | 156 | 1 | 2 |
2017-07-04 | 156 | 2 | 4 |
2017-01-19 | 175 | 1 | 1 |
2017-03-17 | 175 | 3 | 4 |
2017-04-27 | 221 | 3 | 3 |
2017-05-05 | 221 | 7 | 10 |
2017-05-09 | 221 | 10 | 20 |
2017-05-19 | 221 | 1 | 21 |
2017-09-03 | 221 | 2 | 23 |
My goal is to predict the future number of activities for a given event. My question: can I train my model on the whole dataset (all the events) to predict the next value, and if so, how? There are inequalities in the inputs (the number of rows per event differs). And is it possible to exploit the date data as well?
Sure you can, but a lot more information is needed, which you know best yourself.
I guess we are talking about time series here, as you want to predict the future.
You might want to have a look at recurrent neural nets and LSTMs.
A recurrent layer takes a time series as input and outputs a vector that contains the compressed information about the whole series. So let's take event 156, which has 3 timesteps with the activity values 1, 1, 2.
The event is your sample, and each event has a different number of timesteps (rows). To solve this, just use the maximum sequence length that occurs and add a padding value (most often simply zero) so that all sequences have the same length, as in the sketch below. Then you have a shape that is suitable for a recurrent neural net (where LSTMs are currently a good choice).
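For illustration, a base-R sketch (my own; the sequences are the number_of_activities values per event from the sample table above) of zero-padding to a common length:

# Per-event activity sequences of different lengths
seqs <- list(`156` = c(1, 1, 2),
             `175` = c(1, 3),
             `221` = c(3, 7, 10, 1, 2))
maxlen <- max(lengths(seqs))
# Zero-pad each sequence at the front so all share the same length,
# giving the rectangular shape a recurrent layer expects
padded <- t(sapply(seqs, function(s) c(rep(0, maxlen - length(s)), s)))
print(padded)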
Update
You said in the comments that padding is not an option for you; let me try to convince you. LSTMs are good in situations where the sequence lengths differ. However, for this to work you also need sequences long enough for the model to learn patterns from. What I want to say is: when some of your sequences have only a few timesteps, like 3, but others have 50 or more, the model might have difficulties predicting these correctly, as you have to specify which timesteps you want to use. So either you prepare your data differently for a clearer question, or you dig deeper into the topic using sequence-to-sequence learning, which is very good at handling sequences of different lengths. For this you will need to set up an encoder-decoder network.
The encoder squashes the whole sequence, whatever its length, into a single vector that contains the compressed information about the entire sequence.
The decoder then learns to use this vector to predict the next outputs of the sequence. This is a well-known technique for machine translation, but it is suitable for any kind of sequence-to-sequence task. So I would recommend creating such an encoder-decoder network, which should improve your results. Have a look at this tutorial, which might help you further; a minimal starting point is sketched below.
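Here is a minimal sketch (my own, assuming the keras R package; the layer sizes are arbitrary) of the padded-input variant, where a masking layer tells the LSTM to skip the zero padding:

library(keras)
model <- keras_model_sequential() %>%
  layer_masking(mask_value = 0, input_shape = c(5, 1)) %>%  # 5 = padded length, 1 feature per timestep
  layer_lstm(units = 32) %>%                                # compresses the sequence into one vector
  layer_dense(units = 1)                                    # predicts the next number of activities
model %>% compile(optimizer = "adam", loss = "mse")

An encoder-decoder network follows the same API, with the encoder's final state feeding a second, decoding LSTM.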

Minimum number of states in DFA

What is the minimum number of states in a DFA accepting strings (in base 3, i.e., ternary form) congruent to 5 modulo 6?
I have tried but couldn't do it.
At first sight it seems to need 6 states, but then it can be minimised further.
Let's first build the state transition table. Reading one more ternary digit d takes a number n to 3n + d, so state qi goes to q((3i + d) mod 6) on digit d:

state | on 0 | on 1 | on 2
------|------|------|-----
q0    | q0   | q1   | q2
q1    | q3   | q4   | q5
q2    | q0   | q1   | q2
q3    | q3   | q4   | q5
q4    | q0   | q1   | q2
q5    | q3   | q4   | q5

Here the states q0, q1, q2, ..., q5 correspond to remainders 0, 1, 2, ..., 5 modulo 6. q0 is our initial state, and since we need remainder 5, our final state is q5.
A few observations drawn from the above state transition table:
states q0, q2 and q4 have exactly the same transitions
states q1, q3 and q5 have exactly the same transitions
States that make transitions to the same states on the same inputs can be merged into a single state.
Note: final and non-final states can never be merged.
Therefore we can merge q0, q2, q4 together and q1, q3 together, leaving q5 on its own (it is final, so it cannot join q1 and q3).
The final minimal DFA therefore has 3 states:

state                      | on 0 | on 1 | on 2
---------------------------|------|------|-----
A = {q0, q2, q4} (initial) | A    | B    | A
B = {q1, q3}               | B    | A    | C
C = {q5} (final)           | B    | A    | C
Let's look at a few strings in the language:
12 = 1*3 + 2 = 5 ≡ 5 (mod 6)
102 = 1*9 + 0*3 + 2 = 11 ≡ 5 (mod 6)
122 = 1*9 + 2*3 + 2 = 17 ≡ 5 (mod 6)
212 = 2*9 + 1*3 + 2 = 23 ≡ 5 (mod 6)
1002 = 1*27 + 0*9 + 0*3 + 2 = 29 ≡ 5 (mod 6)
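A quick R sketch (mine) that reproduces these checks; reading one more ternary digit d turns a value n into 3n + d, so the remainder can be updated digit by digit:

ternary_mod6 <- function(s) {
  r <- 0
  for (d in as.integer(strsplit(s, "")[[1]]))
    r <- (3 * r + d) %% 6  # one more digit: multiply by the base, add the digit
  r
}
sapply(c("12", "102", "122", "212", "1002"), ternary_mod6)  # all 5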
We notice that all the strings end in 2. This makes sense since 6 is a multiple of 3 and the only way to get 5 from a multiple of 3 is to add 2. Based on this, we can try to solve the problem of strings congruent to 3 modulo 6:
10 = 3
100 = 9
120 = 15
210 = 21
1000 = 27
There's not a real pattern emerging, but consider this: every base-3 number ending in 0 is definitely divisible by 3. The ones that are even are also divisible by 6; so the odd numbers whose base-3 representation ends in 0 must be congruent to 3 mod 6. Because all the powers of 3 are odd and a digit 2 contributes an even amount (2*3^k), the number is odd exactly when the number of 1s in the string is odd.
So, back in the original language (congruent to 5 modulo 6), our conditions are:
the string does not begin with a 0 (no leading zeros);
the string has an odd number of 1s;
the string ends with 2;
the string can contain any number of 2s and 0s.
To get the minimum number of states in such a DFA, we can use the Myhill-Nerode theorem, beginning with the empty string:
the empty string can be followed by any string in the language. Call its equivalence class [e].
the string 0 cannot be followed by anything, since valid base-3 representations don't have leading 0s. Call its equivalence class [0].
the string 1 must be followed by a string that has an even number of 1s and ends with a 2. Call its equivalence class [1].
the string 2 can be followed by anything in the language. Indeed, you can verify that putting a 2 at the front of any string in the language gives another string in the language. However, it can also be followed by strings beginning with 0. Therefore its class is new: [2].
the string 00 can't be followed by anything to fix it; its class is the same as that of its prefix 0: [0]. The same goes for the string 01.
the string 10 can be followed by any string with an even number of 1s that ends in a 2; it is therefore in the class [1].
the string 11 can be followed by any string in the language whatsoever; indeed, you can verify that prepending 11 to any string in the language gives another string in the language. However, it can also be followed by strings beginning with 0. Therefore its class is the same as [2].
12 can be followed by any string with an even number of 1s ending in 2, as well as by the empty string (since 12 is in fact in the language). This is a new class, [12].
21 is equivalent to 1; class [1].
22 is equivalent to 2; class [2].
20 is equivalent to 2; class [2].
120 is indistinguishable from 1; its class is [1].
121 is indistinguishable from 2; its class is [2].
122 is indistinguishable from 12; its class is [12].
We have seen no new equivalence classes on the strings of length 3, so we know we have found all the equivalence classes. They are the following:
[e]: any string in the language can follow this
[0]: no string can follow this
[1]: a string with an even number of 1s ending in 2 can follow this
[2]: same as [e] but also strings beginning with 0
[12]: same as [1] but also the empty string
This means that a minimal DFA for our language has five states. Here is the DFA:
[0]
^
|
0
|
----->[e]--2-->[2]<-\
| ^ |
| | |
1 __1__/ /
| / /
| | 1
V V |
[1]--2-->[12]
^ |
| |
\___0___/
(transitions not pictured are self-loops on the respective states).
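To double-check the construction, here is a small R sketch (mine) that encodes the five classes as a transition table and runs the earlier example strings through it:

states <- c("e", "0", "1", "2", "12")
delta <- matrix(c(
  # on '0'  on '1'  on '2'
    "0",    "1",    "2",    # from [e]
    "0",    "0",    "0",    # from [0] (dead state)
    "1",    "2",    "12",   # from [1]
    "2",    "1",    "2",    # from [2]
    "1",    "2",    "12"),  # from [12]
  ncol = 3, byrow = TRUE, dimnames = list(states, c("0", "1", "2")))
accepts <- function(s) {
  q <- "e"
  for (d in strsplit(s, "")[[1]]) q <- delta[q, d]
  q == "12"  # [12] is the only accepting class
}
sapply(c("12", "102", "122", "212", "1002", "10"), accepts)  # TRUE except for "10"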
Note: I expected this DFA to have 6 states, as Welbog pointed out in the other answer, so I might have missed an equivalence class. However, the DFA seems right after checking a few examples and thinking about what it's doing: you can only get to accepting state [12] by seeing a 2 as the last symbol (definitely necessary) and you can only get to state [12] from state [1] and you must have seen an odd number of 1s to get to [1]…
The minimum number of states for almost all modulus problems is the modulus itself. The general strategy is one state for every residue, as transitions between residues are independent of which digits were previously read. For example, if you're in state r4 (representing x ≡ 4 (mod 6)) and you read a 1 as your next input digit, the new value is 4*3 + 1 = 13 ≡ 1 (mod 6), since reading one more base-3 digit multiplies the current value by 3 and adds the digit; so the transition from r4 on input 1 is to r1. You'll find that the start state and r0 can be merged, for a total of 6 states.
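The merging is easy to see if you tabulate the transition (3r + d) mod 6 for every residue r and digit d (a one-line R sketch):

outer(0:5, 0:2, function(r, d) (3 * r + d) %% 6)
# rows 0, 2 and 4 are identical, as are rows 1, 3 and 5,
# which is exactly the merging used in the other answer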

missing data in time series

I am new to this field. I am trying to explore the data of a time series: find the missing values, count them, study the distribution of their lengths, and fill in these gaps. I have, let's say, 10 .txt files, and for each file I have 2 columns as follows:
C1 C2
944 0
920 1
920 2
928 3
912 7
920 8
920 9
880 10
888 11
920 12
944 13
and so on, let's say up to 100; the 10 files do not necessarily have the same number of observations.
Here, for example, the missing values (which do not necessarily appear in all the files) are 4, 5 and 6 in C2, together with the corresponding first column C1 (measured in milliseconds, so the value of 928 ms is not a time neighbour of 912 ms). I want to find those gaps (the total number of missing values over all 10 files) and show a histogram of their lengths.
I wrote a piece of code in R, but the problem is that I don't get the exact total number of missing values that I should have:
path <- "files path"
file.names <- dir(path, pattern = ".txt")
out.file <- data.frame()              # start empty; a dummy first row would distort the gap counts
for (i in seq_along(file.names)) {
  file <- cbind(read.table(file.path(path, file.names[i]),  # prepend the directory to the file name
                           header = FALSE,
                           sep = "\t",
                           stringsAsFactors = FALSE),
                file.names[i])
  colnames(file) <- c('TS', 'Index', 'File')
  out.file <- rbind(out.file, file)
}
# Compute gap lengths per file, so that the index reset between two
# files is not counted as a gap; a jump of d in the index skips d - 1 values.
gap.lengths <- unlist(lapply(split(out.file$Index, out.file$File),
                             function(idx) { d <- diff(idx); d[d > 1] - 1 }))
misDa <- sum(gap.lengths)             # total number of missing values over all files
hist(gap.lengths)                     # distribution of the gap lengths
It is hard to give specific hints without a more extensive example of your data that contains some of the actual NAs.
If you are using R (as it seems), the naniar and imputeTS packages offer nice functions for missing-data visualization: naniar is especially good for multivariate data, while imputeTS is especially good for time series data; both provide plot examples in their documentation.

Extract Patterns from the device log data

I am working on a project in which we have to extract patterns (user behaviour) from device log data. The device log contains different device actions with a timestamp, for example when a device was switched on or off.
For example:
When a person enters a room, he first switches on the light and then the fan; or whenever the temperature is less than 20 °C, he switches off the AC.
I am thinking of using Bayesian networks to extract these patterns:
Learn a Bayes network from the data (using Weka or Netica).
The arcs in the Bayes network will give the patterns/dependencies among the different devices.
Is this the right approach?
Edit: the chronological order between the devices matters.
Is this the right approach?
There are many possible approaches, but here is a very simple and effective one that fits the domain.
Given the nature of the application, chronological order doesn't really matter; it doesn't matter if the fan gets turned on before the light, for example.
Also, given that you can have, e.g., a motion sensor trigger a routine that reads the sensors, and perhaps a periodic temperature check, you can use the breakdown below to act upon the extracted patterns (no need to complicate things further with chronological order and event tracking; we extract data to act upon, and event order in this domain isn't interesting).
For example: when a person enters a room, he first switches on the light and then the fan; or whenever the temperature is less than 20 °C, he switches off the AC.
The raw device log might look something like this, T/F being True/False:
Person in room | Temperature | Light | Fan | AC
-----------------------------------------------
T | 20 | T | T | T
T | 19 | T | T | F
F | 18 | F | F | F
With sufficient samples you can train a model on the above. Naive Bayes, for example, is not very sensitive to irrelevant features/inputs, so if you take the first raw table above, which includes all the variables, and try to predict AC, with sufficient data the model will learn that some inputs are not very important or completely irrelevant. A sketch follows below.
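As a sketch of that (my own; it assumes the e1071 package, and the three rows from the table above are of course far too few for real training), fitting Naive Bayes on the raw log and querying it:

library(e1071)
dev_log <- data.frame(person    = c("T", "T", "F"),
                      temp_ge20 = c("T", "F", "F"),  # temperature discretized at 20 C
                      light     = c("T", "T", "F"),
                      fan       = c("T", "T", "F"),
                      ac        = c("T", "F", "F"),
                      stringsAsFactors = TRUE)
model <- naiveBayes(ac ~ ., data = dev_log)
predict(model, dev_log[1, -5])  # predicted AC state for the first observation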
Alternatively, if you know beforehand what the Light, Fan and AC depend on, e.g. we know the Light isn't going to depend on the Temperature, and the Fan and AC don't care whether the Light is turned on (they can operate even if the person is sleeping, for example), you can break it down as below:
Person in Room | Light
----------------------
T | T
F | F
Person in Room | Temperature | Fan
----------------------------------
T | 20 | T
F | 25 | F
Person in room | Temperature | AC
---------------------------------
T | 20 | T
T | 19 | F
F | 20 | F
F | 19 | F

Logical Addresses & Page numbers

I just started learning memory management and have an idea of pages, frames, virtual memory and so on, but I don't understand the procedure for converting logical addresses to their corresponding page numbers.
Here is the scenario:
Page size = 100 words (8000 bits?)
The process generates these logical addresses:
10 11 104 170 73 309 185 245 246 434 458 364
The process takes up two page frames, and none of its pages are resident (in page frames) when the process begins execution.
Determine the page number corresponding to each logical address and fill them into a table with one row and 12 columns.
I know the answer is :
0 0 1 1 0 3 1 2 2 4 4 3
But can someone explain how this is done? Is there an equation or something? I remember seeing something with a table and converting things to binary and putting them in the page table, like 00100 in page 1, but I am not really sure. Graphical representations of how this works would be much appreciated. Thanks.
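For what it's worth, the answer listed above is reproduced by integer division of each address by the page size (100 words); a minimal R sketch:

# With a page size of 100 words, the page number is the integer quotient
# of the logical address by the page size (the offset is the remainder).
addresses <- c(10, 11, 104, 170, 73, 309, 185, 245, 246, 434, 458, 364)
addresses %/% 100  # page numbers: 0 0 1 1 0 3 1 2 2 4 4 3
addresses %% 100   # the offsets within each page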
