Least Nodes Travelled Weighted Graph With Petrol Stops - graph-algorithm

What kind of algorithm would allow me to find the least number of steps (traversing 1 edge takes 1 day) needed to reach a certain vertex, given that if the car runs out of petrol you have to rest for 1 day at that place?
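No answer was posted for this one, but one common way to model the constraint (my assumption, not the asker's solution) is a breadth-first search over (vertex, fuel-remaining) states, where resting to refuel costs an extra day. A minimal Python sketch, assuming each edge consumes one unit of petrol and resting refills the tank:

```python
from collections import deque

def fewest_days(graph, start, goal, tank_size):
    """BFS over (vertex, fuel-left) states.

    Assumptions (not from the question): every edge costs 1 day and 1 unit
    of petrol, and resting for 1 day at any vertex refills the tank.
    `graph` is an adjacency dict: {vertex: [neighbour, ...]}.
    """
    queue = deque([(start, tank_size, 0)])   # (vertex, fuel, days elapsed)
    seen = {(start, tank_size)}
    while queue:
        vertex, fuel, days = queue.popleft()
        if vertex == goal:
            return days
        # Option 1: rest one day to refill the tank.
        if fuel < tank_size and (vertex, tank_size) not in seen:
            seen.add((vertex, tank_size))
            queue.append((vertex, tank_size, days + 1))
        # Option 2: drive to a neighbour if there is petrol left.
        if fuel > 0:
            for nxt in graph[vertex]:
                state = (nxt, fuel - 1)
                if state not in seen:
                    seen.add(state)
                    queue.append((nxt, fuel - 1, days + 1))
    return None  # goal unreachable
```

Because every transition (drive or rest) costs exactly one day, plain BFS already returns the minimum number of days; with unequal edge costs you would switch to Dijkstra over the same state space.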

Related

Calculate the InfluxDB average

I want to process values from InfluxDB in Grafana.
The final requirement is to show how many miles the current vehicle has traveled in a certain time frame.
I plan to use the formula: average velocity * time.
Does anyone have a good method for this?
What I'm thinking is: I use the mean function to get the average speed over a fixed period of time and the corresponding mileage, and then I add all the mileage together. How do I do that?
What if you only use SQL?
1.) InfluxDB uses InfluxQL, not SQL.
2.) Your approach of average velocity * time is inaccurate.
3.) Use suitable InfluxDB functions; I would say INTEGRAL() is the best function for this case, plus some basic arithmetic (see the sketch below). Don't expect 100% accuracy. Accuracy depends heavily on the metric sampling, e.g. with 1-minute sampling: what if the vehicle drives for 59 seconds and is not moving during the one second when the sample is taken? So don't be surprised when even 10-second sampling is inaccurate.
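As a rough illustration of point 3 (my sketch, not part of the answer), an INTEGRAL()-based query sent through the influxdb-python client could look like the following; the database, measurement, and field names are invented:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="telemetry")

# INTEGRAL("speed_kmh", 1h) approximates the area under the speed curve;
# with speed in km/h and a 1-hour unit, the result is distance in km.
query = (
    'SELECT INTEGRAL("speed_kmh", 1h) AS distance_km '
    'FROM "vehicle" WHERE time > now() - 7d'
)
for point in client.query(query).get_points():
    print(point["distance_km"])
```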

Anomaly detection with machine learning without labels

I am tracing multiple signals for a certain period of time and associating them with a timestamp, like the following:
t0 1 10 2 0 1 0 ...
t1 1 10 2 0 1 0 ...
t2 3 0 9 7 1 1 ... // pressed a button to change the mode
t3 3 0 9 7 1 1 ...
t4 3 0 8 7 1 1 ... // pressed button to adjust a certain characteristic like temperature (signal 3)
where t0 is the time stamp, 1 is the value for signal 1, 10 the value for signal 2, and so on.
The data captured during that period of time should be considered the normal case. Now significant deviations from the normal case should be detected. By significant deviation I do NOT mean that one signal value simply changes to a value that has not been seen during the tracing phase, but rather that a lot of values change in a combination that has not yet been seen together. I do not want to hardcode rules, since in the future more signals might be added or removed, and other "modes" with other signal values might be implemented.
Can this be achieved with a certain machine learning algorithm? If a small deviation occurs, I want the algorithm to first treat it as a minor change to the training set, and if it occurs multiple times in the future it should be "learned". The major goal is to detect the bigger changes / anomalies.
I hope I could explain my problem in enough detail. Thanks in advance.
You could just calculate the nearest neighbor in your feature space and set a threshold for how far it is allowed to be from your test point before counting as an anomaly.
Let's say you have 100 values in your "certain period of time",
so you use a 100-dimensional feature space for your training data (which doesn't contain anomalies).
If you get a new dataset you want to test, you find the (k) nearest neighbor(s) and calculate the (e.g. Euclidean) distance in your feature space.
If that distance is larger than a certain threshold, it's an anomaly.
What you have to do to optimize this is find a good k and a good threshold, e.g. by grid search; a sketch follows after the notes below.
(1) Note that something like this probably only works well if your data has a fixed starting and ending point. Otherwise you would need a huge amount of data, and even then it will not perform as well.
(2) Note that it may be worth creating a separate detector for every "mode" you mentioned in your question.
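A minimal sketch of that nearest-neighbour check using scikit-learn; the window layout (one 100-dimensional row per traced period), the file names, and the threshold value are assumptions, not from the thread:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# X_train: normal windows only, shape (n_windows, 100) -- one row per
# "certain period of time", flattened into a 100-dimensional feature vector.
X_train = np.load("normal_windows.npy")       # hypothetical file
X_test = np.load("candidate_windows.npy")     # windows to score

k = 3            # tune k and the threshold, e.g. via grid search
threshold = 5.0  # maximum allowed distance before flagging an anomaly

nn = NearestNeighbors(n_neighbors=k).fit(X_train)
distances, _ = nn.kneighbors(X_test)          # Euclidean distance by default

# Score each test window by its mean distance to the k nearest normal windows.
scores = distances.mean(axis=1)
is_anomaly = scores > threshold
print(list(zip(scores, is_anomaly)))
```

To "learn" recurring small deviations, you could periodically append windows that were scored as normal (or repeatedly seen) back into X_train and refit.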

Throughput vs. Latency Confusion

According to this article about throughput and latency:
"When you go to buy a water pipe, there are two completely independent parameters that you look at: the diameter of the pipe and its length."
But I think these two parameters are related. Throughput is measured per unit time, so a long latency will affect throughput: if the droplets are fast, more of them will pass through the pipe in one second.
Can anyone help me understand this?
EDIT:
The confusion originated from counting queuing time as part of latency, which we should not. Once a request is being handled, the latency is independent of throughput.
Let me give you another analogy. Think of a car travelling on a single-lane road from location A to location B. The time taken by that car to travel from A to B is your latency, and the number of cars travelling at a given interval, each maintaining that latency, is your throughput.
The factors that matter here are your medium of travel, i.e. the road, and the number of lanes on the road.
You're thinking about frequency. Say you have a window into the water pipe at some given point, and you send water droplets at some constant interval (say 1 droplet every second). You count how often you see a single droplet pass by and take the inverse (1/seconds). So if you count 1 second of elapsed time between droplets being observed, then you have a frequency of 1 Hz.
Now say that you keep this frequency constant (1Hz), but you elongate the pipe. You send one droplet down and count how much time elapses before it reaches the end of the pipe. So say it takes 2 seconds for a single drop to travel from the beginning to the end of the pipe, then you have a latency of 2 seconds.
Now say that you widen the diameter of the pipe, and now you are able to send 2 droplets with a frequency of 1Hz. At the end of the pipe you will count 2 droplets coming out every second. So your throughput will be 2 droplets per second.
Here is my bit in a language which I can understand
When you go to buy a water pipe, there are two completely independent parameters that you look at: the diameter of the pipe and its length. The diameter determines the throughput of the pipe and the length determines the latency, i.e., the time it will take for a water droplet to travel across the pipe. The key point to note is that the length and diameter are independent; thus, so are the latency and throughput of a communication channel.
More formally, throughput is defined as the amount of water entering or leaving the pipe every second, and latency is the average time required for a droplet to travel from one end of the pipe to the other.
Let’s do some math:
For simplicity, assume that our pipe is a 4inch x 4inch square and its length is 12inches. Now assume that each water droplet is a 0.1in x 0.1in x 0.1in cube. Thus, in one cross section of the pipe, I will be able to fit 1600 water droplets. Now assume that water droplets travel at a rate of 1 inch/second.
Throughput: Each set of droplets will move into the pipe in 0.1 seconds. Thus, 10 sets will move in 1 second, i.e., 16000 droplets will enter the pipe per second. Note that this is independent of the length of the pipe.
Latency: At one inch/second, it will take 12 seconds for droplet A to get from one end of the pipe to the other regardless of the pipe's diameter. Hence the latency will be 12 seconds.
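The same arithmetic, spelled out as a tiny Python script (my addition, not part of the answer) so you can vary the pipe dimensions yourself:

```python
# Reproduces the worked example: a 4in x 4in pipe, 12in long,
# 0.1in droplets moving at 1 inch/second.
pipe_width = pipe_height = 4.0   # inches
pipe_length = 12.0               # inches
droplet_size = 0.1               # inches
speed = 1.0                      # inches per second

droplets_per_cross_section = (pipe_width / droplet_size) * (pipe_height / droplet_size)
sets_per_second = speed / droplet_size            # cross-sections entering per second

throughput = droplets_per_cross_section * sets_per_second   # droplets per second
latency = pipe_length / speed                                # seconds

print(throughput)  # 16000.0 -- depends only on the diameter, not the length
print(latency)     # 12.0    -- depends only on the length, not the diameter
```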

Number Generator wave cycle to graph output

I'm looking to generate a waveform produced by a cycle of numbers that increase and then decrease at a given rate. The frequency can vary between 1 and 40 per minute and the amplitude varies between 100 and 3000. The idea is to form a breathing-like pattern for "breaths per minute" (1-40) and an inhaled volume per breath (100-3000).
I'm new here and I can only find random number generators. I have looked at NSTimer and UIGraphs from the iOS-Developer Tesla tutorial app.
Could anyone point me in the right direction?
Many Thanks.
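No answer was posted for this one, but one simple approach (my assumption, not the asker's solution) is to sample a shifted cosine whose frequency is the breathing rate and whose peak is the inhaled volume. Sketched in Python for illustration; the same formula could drive an NSTimer callback:

```python
import math

def breathing_wave(breaths_per_minute, volume, duration_s, sample_rate_hz=10):
    """Generate a breathing-like waveform.

    breaths_per_minute: 1-40; volume: 100-3000 (peak inhaled volume per breath).
    Returns a list of (time_in_seconds, instantaneous_volume) samples.
    """
    freq_hz = breaths_per_minute / 60.0
    samples = []
    for i in range(int(duration_s * sample_rate_hz)):
        t = i / sample_rate_hz
        # Shifted cosine so the curve rises from 0 (exhaled) up to `volume` (inhaled).
        value = volume * 0.5 * (1 - math.cos(2 * math.pi * freq_hz * t))
        samples.append((t, value))
    return samples

# Example: 12 breaths per minute, 500 units per breath, over 10 seconds.
for t, v in breathing_wave(12, 500, 10)[::10]:
    print(f"{t:5.1f}s  {v:7.1f}")
```

For a less perfectly sinusoidal (more natural) shape, you could skew the curve so inhalation is faster than exhalation, or add a small amount of jitter to the rate per cycle.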

Genetic Algorithm CrossOver

I have a GA of population X.
After I run the genes and get the result for each gene, I do some weighted multiplication of the genes (so the better-ranked genes get multiplied the most).
I end up with either x*2 or x*2 + (x*10/100) genes. The 10% are random new genes; they may or may not be added, depending on the mutation rate.
The problem is, I don't know the best approach to reduce the population back to X.
If the genes are in a list, should I just use list[::2] (i.e. take every even-indexed item from the list)?
What is a common practice when crossing genes?
EDIT:
Example of my GA with a population of 100:
Run the 100 genes through the fitness function and get the result. Current Population: 100
Add 10% new random genes. Current Population: 110
Duplicate top 10% genes. Current Population: 121
Remove 10% worst genes. Current Population: 108
Combine all possible gene pairs (no duplicates). Current Population: 5778
Remove genes from genepool until Population = 100. Current Population: 100
Restart the fitness function
What I want to know is: how should I do the last step? Currently I have a list with 5778 items and I take one every '58', or, expressed as a formula, len(list)/startpopulation-1.
Or should I use a 'while True' loop that randomly deletes items until len(list) == 100?
Should the new random genes be added before or after the crossover?
Is there a way to do a Gaussian-style multiplication of the top- to lowest-rated items? (See the sketch below.)
E.g.: the top-rated gene is multiplied by n, the second best by (n-1), the third by (n-2), ..., and the worst-rated is multiplied by (n-n).
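A small sketch of that rank-weighted duplication plus the reduction back to the start population (linear rank rather than truly Gaussian; the function and variable names are made up):

```python
import random

def rank_weighted_pool(ranked_genes, target_size):
    """Duplicate genes by rank (best-first input), then thin the pool back
    to `target_size` by uniform random sampling.

    Sketch only: copies = n - rank, so the best of n genes is copied n
    times and the worst once; subtract 1 instead if the worst gene should
    get zero copies, as described in the question.
    """
    n = len(ranked_genes)
    pool = []
    for rank, gene in enumerate(ranked_genes):   # rank 0 = best gene
        pool.extend([gene] * (n - rank))
    return random.sample(pool, target_size)
```

With 100 ranked genes this pool holds 5050 entries, so sampling 100 of them both applies the weighting and replaces the "take every 58th item" step.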
I do not really know why you are performing the GA like that; could you give some references?
In any case, here is my typical approach to implementing a functional GA:
1) Run the 100 genes through the fitness function and get the result.
2) Randomly choose 2 genes based on the normalized fitness function (consider this the probability of each gene being chosen from the pool) and cross them over. Repeat this step until you have 90 new genes (45 times in this case). Save the top 5 without modification and duplicate them. Total genes: 100.
3) For the 90 new genes and the 5 duplicates in the new pool, allow them to mutate based on your mutation probability (typically 1%). Total genes: 100.
4) Repeat from 1) to 3) until convergence, or for X iterations.
Note: you always want to keep the best genes unchanged, so that the best solution never gets worse from one iteration to the next. A compact sketch of the whole loop follows.
Good luck!
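A compact Python sketch of that loop; the gene representation (a list of floats), the single-point crossover, and the placeholder fitness function are my assumptions, not part of the answer:

```python
import random

POP_SIZE = 100
ELITE = 5              # kept unchanged each generation (step 2)
MUTATION_RATE = 0.01   # typical 1% per-value mutation (step 3)

def fitness(gene):
    # Placeholder fitness: replace with the real evaluation (step 1).
    return -sum((g - 0.5) ** 2 for g in gene)

def crossover(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))   # single-point crossover
    return parent_a[:cut] + parent_b[cut:]

def mutate(gene):
    return [random.random() if random.random() < MUTATION_RATE else g
            for g in gene]

def next_generation(population):
    scored = sorted(population, key=fitness, reverse=True)
    # Shift fitness values so they can serve as selection weights.
    worst = fitness(scored[-1])
    weights = [fitness(g) - worst + 1e-9 for g in scored]
    # Step 2: fitness-proportional selection + crossover for 90 children,
    # plus the top 5 elites and 5 mutable copies of them.
    children = [crossover(*random.choices(scored, weights=weights, k=2))
                for _ in range(POP_SIZE - 2 * ELITE)]
    elites = scored[:ELITE]
    # Step 3: mutate everything except the untouched elites.
    return elites + [mutate(g) for g in children + elites]

population = [[random.random() for _ in range(10)] for _ in range(POP_SIZE)]
for _ in range(50):   # step 4: iterate until convergence or a fixed budget
    population = next_generation(population)
print(max(fitness(g) for g in population))
```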
