Is it possible to attach a micro:bit board to a rock, and make it count the number of times it has been thrown up into the air?

See the headline
I can easily get gestures to work. But what if I were to attach the board to a rock, and make it register how many times the rock has been thrown up into the air - and caught again? I am wondering if it is possible to utilize the accelerometer for this?
This code works: if I just flip the board, it counts as described.
from microbit import *

score = 0
display.show(str(score))

while True:
    # was_gesture() reports each detected gesture only once
    if accelerometer.was_gesture('face down'):
        score += 1
        if score < 10:
            display.show(str(score))
        else:
            # scroll handles scores that no longer fit on the display
            display.scroll(str(score))

This code seems to work, so in theory your idea should work too. The Micro:bit Educational Foundation has a step-counter project on their website (https://microbit.org/projects/make-it-code-it/step-counter/?editor=python), which is similar to what you're trying to do.
You might want to get a smashproof case for your micro:bit though!
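For detecting the throw itself, one option worth trying is the built-in 'freefall' gesture instead of 'face down': while the rock is in the air it is in free fall, so the combined acceleration drops towards zero and the runtime fires that gesture. A minimal sketch (untested on an actual rock, so the counting may need tuning):

from microbit import *

score = 0
display.show(str(score))

while True:
    # 'freefall' fires when the overall acceleration drops near 0 g,
    # which is what happens while the rock is airborne
    if accelerometer.was_gesture('freefall'):
        score += 1
        if score < 10:
            display.show(str(score))
        else:
            display.scroll(str(score))

If a short toss turns out to be too brief to trigger the gesture, polling accelerometer.get_values() and checking the magnitude against your own threshold gives finer control.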

Related

need help detecting game lighting in my lua script

I am trying to make a script, in Lua, that detects the level of lighting in my game. I am pretty new to Lua, so as far as I know lighting in the game is a number (e.g. 2, 10, or 5).
if game:GetService("Lighting").Brightness < 1 then
game.StarterGui.ScreenGui.output.Text = "You are getting cold";
end
I am not sure if it is just not detecting game:GetService("Lighting").Brightness as a number, or if the game does not know what game.StarterGui.ScreenGui.output.Text is. I've tested this multiple times, making small alterations every time, but most of the time I got no error. With the code as it is now there is no error message.
Do not use game.StarterGui.
StarterGui holds the template GUI that is copied into each player's PlayerGui when they join. Players do NOT see StarterGui itself.
Instead:
game.Players.PlayerAdded:Connect(function(plr)
    -- PlayerGui is filled from StarterGui shortly after the player joins,
    -- so wait for the copy rather than indexing it immediately
    local gui = plr:WaitForChild("PlayerGui"):WaitForChild("ScreenGui")
    if game:GetService("Lighting").Brightness < 1 then
        gui.output.Text = "You are getting cold"
    end
end)
(player).PlayerGui is the GUI the player actually sees. Changing StarterGui after players have joined will not automatically update the GUIs of players already in the game.
If you have any more questions feel free to ask them with a comment! ^-^
Roblox Lua documentation:
https://www.developer.roblox.com/en-us

Training Snake to eat food in a specific number of steps, using reinforcement learning

I am trying my hand at Reinforcement/Deep-Q learning these days, and I started with the basic game of Snake.
With the help of this article: https://towardsdatascience.com/how-to-teach-an-ai-to-play-games-deep-reinforcement-learning-28f9b920440a
I successfully trained it to eat food.
Now I want it to eat food in a specific number of steps, say 20: not more, not fewer. How would the reward system and policy have to change for this?
I have tried many things, with little to no result.
For example I tried this:
def set_reward(self, player, crash):
    self.reward = 0
    if crash:
        self.reward = -10
        return self.reward
    if player.eaten:
        self.reward = 20 - abs(player.steps - 20) - player.penalty
        if player.steps == 10:
            self.reward += 10  # -abs(player.steps - 20)
        else:
            player.penalty += 1
            print("Penalty:", player.penalty)
    return self.reward
Thank you.
Here is the program:
https://github.com/maurock/snake-ga
I would suggest this approach is problematic because, despite changing your reward function, you haven't included the number of steps in the observation space. The agent needs that information in the observation space to be able to differentiate at what point it should bump into the goal. As it stands, if your agent is next to the goal and all it has to do is turn right, but all it has done so far is five moves, that is exactly the same observation as if it had done 19 moves. The point is that you can't feed the agent the same state and expect different actions: the agent doesn't see your reward function, it only receives a reward based on the state. You are therefore rewarding contradictory actions for identical states.
Think of when you come to testing the agent's performance. There is no longer a reward; all you are doing is passing the network a state, and you are expecting it to choose different actions for the same state.
I assume your state space is some kind of 2D array. It should be straightforward to alter the code so that the state also contains the number of steps. The reward function would then be something like if observation[num_steps] == 20: reward = 10.
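As a rough sketch of what I mean (the names base_state and steps and the normalisation constant are assumptions for illustration, not code from the linked repo):

import numpy as np

MAX_STEPS = 40  # assumed upper bound; pick one that fits your board

def state_with_steps(base_state, steps):
    # Flatten the state and append a normalised step counter so that
    # '5 moves so far' and '19 moves so far' are different observations.
    flat = np.asarray(base_state, dtype=float).ravel()
    return np.append(flat, steps / MAX_STEPS)

def shaped_reward(eaten, crash, steps, target=20):
    # Peak reward when the food is eaten at exactly `target` steps.
    if crash:
        return -10
    if eaten:
        return 10 - abs(steps - target)
    return 0

With the step count in the state, the network at least has a chance of learning a policy that times its approach to the food.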
Ask if you need more help coding it

No solutions found in MT StrategyTester optimization mode when using INIT_FAILED or INIT_PARAMETERS_INCORRECT

I'm trying to optimize my current EA, which has approximately 40 different inputs, with the MetaTrader genetic algorithm.
The inputs have constraints such as I1 < I2 < I3, I24 > 0, ... for a total of about 20 constraints.
I tried to filter out the solutions that do not respect the constraints with the following code:
int OnInit() {
   if(I1 >= I2 || I2 >= I3) {
      return(INIT_FAILED);   // reject parameter sets that break I1 < I2 < I3
   }
   ...
}
The problem is then the following: no viable solutions are found after the first 512 iterations and the optimization stops (the same happens with the non-genetic optimizer).
If I remove the constraints, the algorithm runs and optimizes the solutions, but those solutions then do not respect the constraints.
Has anyone already faced similar issues? Currently I think I'll have to use an external tool to optimize, but this does not feel right.
If, as Daniel recommended yesterday, you short-circuit inside the OnInit(){...} handler, the Genetic-mode optimiser will, and has to, give up, as it has not seen any progress on the evolutionary journey across some recent number of population modifications/mutations down the road.
What has surprised me is that the fully-meshed mode (going across the whole Cartesian parameterSetSPACE) refused to test each and every parameterSetSPACE-vector, one after another. Having spent remarkable hundreds of machine-years in this very sort of testing, this sounds strange compared to my prior MT4 [ Strategy Tester ] experience.
One more trick:
Let the tested code pass through OnInit(){...}, but make the constraint conditions short-circuit the OnTick(){...} event handler, returning straight away upon entering it. This is a trick we invented so that our code could simulate delayed starts (an internal time-based iterator, for a sliding-window location in the flow of time) of the actual trading-under-test. This way one may simulate the adverse effect of "wrong" parameterSet-vectors, and the Genetics may evolve further, even finding as a side-effect which types of parametrisation get penalised :o)
A SearchSpace having 40+ parameters? ... The performance!
If this is your concern, your next level of performance gets delivered once you start using a distributed-computing testing farm, where many machines perform tests upon a centrally managed distribution of parameterSet-vectors and report back the results.
This was indeed a performance booster for our Quant R&D.
After some time, we also implemented a "standalone" farm for (again, distributed-computing) off-platform Quant R&D prototyping and testing.

Solitaire card game - how to program a resume-game function?

I've been programming a solitaire card game and all has been going well so far. The basic engine works fine, and I've even programmed features like auto-move on click, auto-complete when won, unlimited undo/redo etc. But now I've realised the game cannot be fully resumed, i.e. saved so as to continue from the exact position it was in the last time it was open.
I'm wondering how an experienced programmer would approach this, since it doesn't seem as simple as with other games, where just saving various numbers, like the level number, is sufficient for resuming the game.
The way it is now, all game objects are created on a new game: the cards, the slots for foundations, tableaus etc., and then the cards are shuffled and dealt out. This is random, but the way I see it, the game needs to remember this random deal to resume the game, and deal it again exactly the same when the game is resumed. Then all moves that were executed would have to be executed again as they were. So it looks like the game is as it was the last time it was played, but in fact all moves have been executed from the beginning again. I'm not sure if this is the best way to do it, but I am interested in other ways if there are any.
I'm wondering if any experienced programmers could tell me how they would approach this and perhaps give some tips/advice etc.
(I am going to assume this is standard Klondike Solitaire.)
I would recommend designing a save structure. Each card should have a suit and a value variable, so I would write out:
[DECK_UNTURNED]
H 1
H 10
S 7
C 2
...
[DECK_UNTURNED_END]
[DECK_TURNED]
...
[DECK_TURNED_END]
etc
I would do that for each location where cards can be stacked: the unrevealed deck cards, the revealed deck cards, each of the seven main slots, and the four winning slots (I believe you called them foundations). Make sure that however you read them in and out, they end up in the same order, of course.
When you go to read the file, a simple way is to read the entire file into a vector of strings. Then you iterate through the vector until you find one of your blocks.
if( vector[ iter ] == "[DECK_UNTURNED]" )
Now you go into another loop, using the same vector and iter, and keep reading in those cards until you reach the associated end block.
while( vector[ iter ] != "[DECK_UNTURNED_END]" )
read cards...
++iter
This is how I generally do all my save files. Create [DATA] blocks, and read in until you reach the end block. It is not very elaborate, but it works.
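A minimal sketch of that block format (in Python for brevity; the block names and the suit/value card encoding are just the ones suggested above):

def save_pile(f, name, cards):
    # cards: list of (suit, value) tuples, e.g. [("H", 1), ("S", 7)]
    f.write("[%s]\n" % name)
    for suit, value in cards:
        f.write("%s %d\n" % (suit, value))
    f.write("[%s_END]\n" % name)

def load_pile(lines, name):
    cards = []
    it = iter(lines)
    for line in it:
        if line.strip() == "[%s]" % name:
            break  # found the start of our block
    for line in it:
        if line.strip() == "[%s_END]" % name:
            break  # reached the end block
        suit, value = line.split()
        cards.append((suit, int(value)))
    return cards

# Usage sketch:
# with open("save.txt") as f:
#     lines = f.readlines()
# unturned = load_pile(lines, "DECK_UNTURNED")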
Your idea of replaying the game up to a point is good too: just save the undo information and redo it at load time.
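A sketch of that replay approach (the names here are illustrative, not from your engine): seed the shuffle RNG with a stored value and keep a move log, then re-deal and re-apply the moves on load.

import json
import random

def new_game(seed=None):
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)   # deterministic shuffle source
    deck = list(range(52))
    rng.shuffle(deck)           # same seed => identical deal
    return seed, deck

def save_game(path, seed, move_log):
    with open(path, "w") as f:
        json.dump({"seed": seed, "moves": move_log}, f)

def resume_game(path, apply_move):
    # apply_move is a hypothetical callback into your engine that
    # re-executes one recorded move against the game state.
    with open(path) as f:
        state = json.load(f)
    seed, deck = new_game(state["seed"])  # re-deal the same cards
    for move in state["moves"]:
        apply_move(deck, move)            # replay each recorded move
    return seed, deck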

How do I find the required maxima in acceleration data obtained from an iPhone?

I need to find the number of times the accelerometer value stream attains a maximum. I made a plot of the accelerometer values obtained from an iPhone against time, using CoreMotion's device-motion updates. While the data was being recorded, I shook the phone 9 times (where each extremity was one of the highest points of acceleration).
I have marked the 18 (i.e. 9*2) times when the acceleration attained a maximum in red boxes on the plot.
But, as you can see, there are some local maxima that I do not want to consider. Can someone direct me towards an idea that will help me detect only the maxima of importance to me?
Edit: I think I have to use a low pass filter. But, how do I implement this in Swift? How do I choose the frequency of cut-off?
Edit 2:
I implemented a low-pass filter, passed the raw motion data through it, and obtained the graph shown below. This is a lot better. I still need a way to avoid the insignificant maxima that can be observed. I'll work in depth with the filter and probably fix it.
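For reference, a basic single-pole low-pass filter looks something like the sketch below (in Python for brevity; the same few lines translate directly to Swift). The smoothing factor comes from the cutoff frequency fc and the sample interval dt, both of which you would choose for your data:

import math

def low_pass(samples, fc, dt):
    # Single-pole IIR low-pass: y[i] = a*x[i] + (1 - a)*y[i-1],
    # with a derived from the cutoff frequency fc and sample interval dt.
    if not samples:
        return []
    rc = 1.0 / (2.0 * math.pi * fc)
    a = dt / (rc + dt)
    out = []
    y = samples[0]
    for x in samples:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out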
Instead of trying to find the maxima, I would look for cycles. In particular, we note that the (main) minima seem to be a lot more consistent than the maxima.
I am not familiar with Swift, so I'll lay out my idea in pseudocode. Suppose we have our values in v[i] and the derivative in dv[i] = v[i] - v[i - 1]. You can use any other differentiation scheme if you get a better result.
I would try something like this:
cycles = []  # list of (start, end) pairs
cstart = -1
cend = -1
v_threshold = 1.8  # completely guessing these figures by looking at the plot
dv_threshold = 0.01

for i in range(1, len(v)):
    if cstart < 0 and v[i] > v_threshold and dv[i] < dv_threshold:
        # cycle is starting here
        cstart = i
    elif cstart >= 0 and v[i] < v_threshold and dv[i] < dv_threshold:
        # cycle ended
        cend = i
        cycles.append((cstart, cend))
        cstart = -1
        cend = -1
Now, you note in the comments that the user should be able to shake with different force and you should still be able to recognise the motion. I would start with simple hard-coded thresholds like the ones above and see if you can get it to work sufficiently well. There are a lot of things you could try to get a variable threshold, but you will nevertheless always need one. However, from the data you show, I strongly suggest at least limiting yourself to looking at the minima rather than the maxima.
Also: the code I suggested is written assuming you have the full data set, whereas you will want to run this in real time. That is no problem, and the idea will still work, but you'll have to code it somewhat differently.
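A sketch of what the streaming version might look like (the thresholds are the same guesses as above; feed it one sample at a time as CoreMotion delivers them):

class CycleCounter:
    def __init__(self, v_threshold=1.8, dv_threshold=0.01):
        self.v_threshold = v_threshold
        self.dv_threshold = dv_threshold
        self.prev = None       # previous sample, for the derivative
        self.in_cycle = False
        self.count = 0

    def feed(self, v):
        # Process one sample; returns the number of completed cycles so far.
        if self.prev is not None:
            dv = v - self.prev
            if not self.in_cycle and v > self.v_threshold and dv < self.dv_threshold:
                self.in_cycle = True    # cycle started
            elif self.in_cycle and v < self.v_threshold and dv < self.dv_threshold:
                self.in_cycle = False   # cycle ended
                self.count += 1
        self.prev = v
        return self.count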
