Does Zenbot actually trade?

I've been working on this for a few weeks now. Every night I run the genetic_backtester script for about twelve hours. Each morning I replace the running zenbot trade command with the options from the last backtester CSV that had the best result. I still have exactly 0 trades from Zenbot, and I cannot figure out what I'm doing wrong.
Could it be because Kraken (the exchange I prefer) has so much skew in its local time? I do notice that from one day to the next I get more or less the same algorithm with more or less the same options, so I would expect at least a few trades... Any hints?
EDIT re comment:
For the genetic algorithm, I'm running scripts/genetic_backtester/darwin.js --selector=kraken.XXBT-ZEUR --days 3 --population 100. I leave that running during the night and use the latest result, first line, to trade live from another instance. Every morning I stop the current live trader and start a new one with the latest result.
If I run a sim with the settings I used over the last 24 hours, I get 2 trades (with a profit, I might add). If I do the same with the settings from the previous day and sim over the last two days, I get 3 trades.
I have not run it with trade --paper, but I can do that for today.
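For reference, the nightly hand-off is roughly the following. This is only a minimal sketch: it assumes the backtester writes its results CSV under simulations/ with the best result on the first data row, and that the option string lives in a column I'm calling params here; both names are placeholders for whatever your darwin.js build actually produces.
import csv
import glob
import os
import subprocess

# Pick the newest results CSV written by the genetic backtester (the path is an assumption).
latest_csv = max(glob.glob('simulations/*.csv'), key=os.path.getmtime)

with open(latest_csv, newline='') as f:
    best = next(csv.DictReader(f))  # first data row = best result, as sorted by the backtester

# 'params' is a placeholder for whichever column holds the option string.
cmd = ['./zenbot.sh', 'trade', 'kraken.XXBT-ZEUR'] + best['params'].split()
print('restarting live trader with:', ' '.join(cmd))
subprocess.run(cmd)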
EDIT after a day
So the trade --paper bought, while the live one, running with the same parameters, did not. Here's a paste from the trade --paper:
2017-12-30 12:52:00 11099.70 XXBT-ZEUR -4.73% 439 -- 41 +28.6341 41 0.00000000 XXBT 1000.00 ZEUR +0.00% +12.23%
2017-12-30 13:44:00 10650.00 XXBT-ZEUR -4.06% 1073 -- 39 +66.9083 39 buy 0.00000000 XXBT 1000.00 ZEUR +0.00% +16.97%
2017-12-30 13:44:13 10550.80 XXBT-ZEUR -0.94% 12 -- 38 +79.5885 38 0.00000000 XXBT 1000.00 ZEUR +0.00% +18.07%
buy order completed at 2017-12-30 13:44:24:
0.09383797 XXBT at 10554.84 ZEUR
total 990.445 ZEUR
0.0450% slippage (orig. price 10550.10 ZEUR)
execution: a few seconds
2017-12-30 14:36:00 10680.00 XXBT-ZEUR +0.28% 434 -- 39 +73.9094 39 bought 0.09359399 XXBT 10.00 ZEUR +0.95% +17.76%
2017-12-30 15:28:00 10896.40 XXBT-ZEUR +2.02% 332 -- 41 +50.2788 41 +3.23% 0.09359399 XXBT 10.00 ZEUR +2.98% +17.73%
The same period from the live version; it doesn't seem to even try to make a trade:
2017-12-30 12:52:00 11099.70 XXBT-ZEUR -4.73% 439 -- 41 +28.6341 41 0.00083170 XXBT 74.60 ZEUR +68.90%+106.94%
2017-12-30 13:44:00 10650.00 XXBT-ZEUR -4.06% 1066 -- 39 +66.9083 39 buy 0.00083170 XXBT 74.60 ZEUR +68.14%+114.72%
2017-12-30 14:36:00 10680.00 XXBT-ZEUR +0.28% 434 -- 39 +73.9094 39 0.00083170 XXBT 74.60 ZEUR +68.19%+114.18%
2017-12-30 15:28:00 10896.40 XXBT-ZEUR +2.02% 332 -- 41 +50.2788 41 0.00083170 XXBT 74.60 ZEUR +68.56%+110.38%
Any ideas? Not unimportant to add: this is using the "taker" method, not "maker".

Try placing a manual trade in Zenbot by running ./zenbot.sh trade --manual, then place a taker order simply by typing S or B, and see whether the order actually gets placed. In my experience Zenbot reports that the sell/buy command executed, but Kraken never sees it. I can't get Zenbot to place trades on Kraken, but I can on GDAX and Binance. Also, in the config file you must set the Kraken TOS option to agree. Did you ever solve this?
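One way to confirm whether Kraken ever receives the order is to query the exchange directly right after the manual buy/sell. A rough sketch using the krakenex Python client; the kraken.key file path is this sketch's convention, not Zenbot's.
import krakenex

# Ask Kraken itself whether the order Zenbot claims to have placed actually exists.
api = krakenex.API()
api.load_key('kraken.key')  # text file: API key on line 1, secret on line 2

open_orders = api.query_private('OpenOrders')
closed_orders = api.query_private('ClosedOrders')

print('open orders   :', list(open_orders.get('result', {}).get('open', {})))
print('recent closed :', list(closed_orders.get('result', {}).get('closed', {}))[:5])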


I'm trying to implement an alert to get notifications when there is downtime on a transaction panel

I want to set up an alert to get notifications when there is downtime, but I don't know how to go about setting the conditions. [screenshot](https://i.stack.imgur.com/mFCd9.jpg)
Below are the response codes:
Response Code (RC): a code that defines the disposition of a transaction.
00  Approved or completed successfully
01  Refer to card issuer
02  Refer to card issuer, special condition
03  Invalid merchant
04  Pick-up card
05  Do not honor
06  Error
07  Pick-up card, special condition
08  Honor with identification
09  Request in progress
10  Approved, partial
11  Approved, VIP
12  Invalid transaction
13  Invalid amount
14  Invalid card number
15  No such issuer
16  Approved, update track 3
17  Customer cancellation
18  Customer dispute
19  Re-enter transaction
20  Invalid response
21  No action taken
22  Suspected malfunction
23  Unacceptable transaction fee
24  File update not supported
25  Unable to locate record
26  Duplicate record
27  File update edit error
28  File update file locked
29  File update failed
30  Format error
31  Bank not supported
32  Completed partially
33  Expired card, pick-up
34  Suspected fraud, pick-up
35  Contact acquirer, pick-up
36  Restricted card, pick-up
37  Call acquirer security, pick-up
38  PIN tries exceeded, pick-up
39  No credit account
40  Function not supported
41  Lost card
42  No universal account
43  Stolen card
44  No investment account
51  Not sufficient funds
52  No check account
53  No savings account
54  Expired card
55  Incorrect PIN
56  No card record
57  Transaction not permitted to cardholder
58  Transaction not permitted on terminal
59  Suspected fraud
60  Contact acquirer
61  Exceeds withdrawal limit
62  Restricted card
63  Security violation
64  Original amount incorrect
65  Exceeds withdrawal frequency
66  Call acquirer security
67  Hard capture
68  Response received too late
75  PIN tries exceeded
77  Intervene, bank approval required
78  Intervene, bank approval required for partial amount
90  Cut-off in progress
91  Issuer or switch inoperative
92  Routing error
93  Violation of law
94  Duplicate transaction
95  Reconcile error
96  System malfunction
98  Exceeds cash limit
99  Invalid reference number

Why is there something written in the data section of an ICMPv4 echo ping request?

(My question differs from this one.) I am connected to an AP in a wireless network and I've sent a simple ping request to www.google.com. When I analyze the packet in Wireshark, I can see that there are 48 bytes written in the data section of the ICMP packet. After 8 bytes of what look like junk values, the values increase sequentially from 0x10 to 0x37. Is there any particular reason why ICMPv4 fills in these values instead of just using a bunch of zeroes?
The hexdump of the ICMPv4 data section:
0030 09 d9 0d 00 00 00 00 00 10 11 12 13 14 15 .Ù............
0040 16 17 18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 .......... !"#$%
0050 26 27 28 29 2a 2b 2c 2d 2e 2f 30 31 32 33 34 35 &'()*+,-./012345
0060 36 37                                           67
After 8 bytes of trash values
First of all, these are not trash values. In some implementations of ping, the first 8 bytes may represent a timestamp.
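If you want to check, you can try decoding those 8 bytes yourself; whether they really are a timestamp, and in what width and byte order, depends entirely on the sender's ping implementation, so treat this as a guess rather than a fact.
import struct

# The first 8 payload bytes from the capture above.
data = bytes.fromhex('09d90d0000000000')

# One common layout is a struct timeval (seconds, microseconds);
# try both byte orders and see whether either looks plausible.
print(struct.unpack('<II', data))   # little-endian
print(struct.unpack('>II', data))   # big-endian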
As @ross-jacobs mentioned, RFC 792 describes the ICMP Echo Request/Reply packets. For clarity, these two packets are described, in relevant part, as follows:
Echo or Echo Reply Message
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |           Checksum            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Identifier          |        Sequence Number        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Data ...
+-+-+-+-+-
...
Description
The data received in the echo message must be returned in the echo
reply message.
Here you can see that the contents of the Data field are not described at all; therefore an implementation is free to use whatever data it wishes, including none at all.
Now, since ping is a network test tool, one of the things it can help test is fragmentation/reassembly. Every ping implementation I'm aware of allows the user to specify the size of the payload, and if you exceed the MTU, you should see the ICMP packet fragmented/reassembled. If you examine the payload of the first fragment, you can tell where the second fragment should start just by looking at the sequence of bytes in the payload of the first fragment. If the data was all 0's, it wouldn't be possible to do this. Similarly, if an ICMP packet wasn't reassembled properly, not only would the checksum likely be wrong, but you would most likely be able to tell exactly where the reassembly failed due to a gap or other inconsistency in the payload. This is just one example of why a payload with a sequence of bytes instead of all 0's is more useful to the user.
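To make that concrete, here is a small illustration (not any particular ping implementation) that builds a Linux-style payload, an 8-byte timestamp followed by sequentially increasing filler like the 0x10..0x37 run in your capture, and shows how the known pattern lets you spot a break after reassembly.
import struct
import time

# 8-byte timestamp followed by 40 sequentially increasing filler bytes: 48 bytes total.
now = time.time()
timestamp = struct.pack('<II', int(now), int((now % 1) * 1_000_000))
payload = timestamp + bytes(range(0x10, 0x38))

def find_gap(filler):
    # Return the index where the sequence breaks, or None if it is contiguous.
    for i in range(1, len(filler)):
        if filler[i] != (filler[i - 1] + 1) & 0xFF:
            return i
    return None

print(find_gap(payload[8:]))  # None for an intact payload; an index if reassembly mangled it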

Ceph too many pgs per osd: all you need to know

I am getting both of these errors at the same time. I can't decrease the pg count and I can't add more storage.
This is a new cluster, and I got these warnings when I uploaded about 40 GB to it. I guess it's because radosgw created a bunch of pools.
How can Ceph have too many pgs per osd, yet have more objects per pg than average, together with a too-few-pgs suggestion?
HEALTH_WARN too many PGs per OSD (352 > max 300);
pool default.rgw.buckets.data has many more objects per pg than average (too few pgs?)
osds: 4 (2 per site 500GB per osd)
size: 2 (cross site replication)
pg: 64
pgp: 64
pools: 11
Using rbd and radosgw, nothing fancy.
I'm going to answer my own question in hopes that it sheds some light on the issue or similar misconceptions of ceph internals.
Fixing HEALTH_WARN too many PGs per OSD (352 > max 300) once and for all
When balancing placement groups you must take into account:
Data we need
pgs per osd
pgs per pool
pools per osd
the crush map
reasonable default pg and pgp num
replica count
I will use my set up as an example and you should be able to use it as a template for your own.
Data we have
num osds : 4
num sites: 2
pgs per osd: ???
pgs per pool: ???
pools per osd: 10
reasonable default pg and pgp num: 64 (... or is it?)
replica count: 2 (cross site replication)
the crush map
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
root ourcompnay
site a
rack a-esx.0
host prdceph-strg01
osd.0 up 1.00000 1.00000
osd.1 up 1.00000 1.00000
site b
rack a-esx.0
host prdceph-strg02
osd.2 up 1.00000 1.00000
osd.3 up 1.00000 1.00000
Our goal is to fill in the '???' above with what we need to serve a HEALTH_OK cluster. Our pools are created by the rados gateway when it initialises.
We have a single default.rgw.buckets.data pool where all the data is stored; the rest of the pools are administrative and internal to Ceph's metadata and bookkeeping.
PGs per osd (what is a reasonable default anyway???)
The documentation would have us use this calculation to determine our pg count per osd:
(osd * 100)
----------- = pgs, rounded UP to the nearest power of 2
replica count
It is stated that rounding up is optimal. So with our current setup it would be:
(4 * 100)
----------- = (200 to the nearest power of 2) 256
2
osd.1 ~= 256
osd.2 ~= 256
osd.3 ~= 256
osd.4 ~= 256
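That arithmetic is easy to reproduce; here is a tiny sketch of the docs' rule of thumb, nothing Ceph-specific:
# Rule of thumb from the docs: (OSDs * 100) / replica count, rounded UP to a power of 2.
def pg_target(num_osds, replica_count):
    raw = num_osds * 100 / replica_count
    power = 1
    while power < raw:
        power *= 2
    return power

print(pg_target(4, 2))   # (4 * 100) / 2 = 200 -> 256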
This is the recommended max number of pgs per osd. So... what do you actually have currently? And why isn't it working? And if you set a 'reasonable default' and understand the above, WHY ISN'T IT WORKING!!! >=[
Likely, for a few reasons. We have to understand what those 'reasonable defaults' above actually mean, how Ceph applies them, and where. One might misunderstand from the above that I could create a new pool like so:
ceph osd pool create <pool> 256 256
or I might even think I could play it safe and follow the documentation, which states that for < 5 osds you can use 128 pgs:
ceph osd pool create <pool> 128 128
This is flat-out wrong, because it in no way explains the relationship or balance between these numbers and what Ceph is actually doing with them.
Technically, the correct answer is:
ceph osd pool create <pool> 32 32
And let me explain why:
If, like me, you provisioned your cluster with those 'reasonable defaults' (128 pgs for < 5 osds), then as soon as you tried to do anything with rados it created a whole bunch of pools and your cluster spazzed out.
The reason is that I misunderstood the relationship between everything mentioned above.
pools: 10 (created by rados)
pgs per pool: 128 (recommended in docs)
osds: 4 (2 per site)
10 * 128 / 4 = 320 pgs per osd
This ~320 could be the number of pgs per osd on my cluster, but Ceph may distribute them differently. That is exactly what's happening, and it is way over the 256 max per osd stated above. My cluster's HEALTH_WARN is HEALTH_WARN too many PGs per OSD (368 > max 300).
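The back-of-the-envelope figure is just as easy to reproduce. Keep in mind it is only a rough estimate: the real per-OSD count also depends on the replica count and on how CRUSH places the copies, which is why the actual warning says 368.
# Rough estimate of PGs per OSD when every pool uses the same pg_num.
def est_pgs_per_osd(num_pools, pgs_per_pool, num_osds):
    return num_pools * pgs_per_pool / num_osds

print(est_pgs_per_osd(10, 128, 4))   # 320.0, already past the ~256 ceiling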
Using a command that tallies placement groups per OSD and per pool, we can see the relationship between these numbers more clearly (one possible way to build such a tally is sketched after the table):
pool :17 18 19 20 21 22 14 23 15 24 16 | SUM
------------------------------------------------< - *total pgs per osd*
osd.0 35 36 35 29 31 27 30 36 32 27 28 | 361
osd.1 29 28 29 35 33 37 34 28 32 37 36 | 375
osd.2 27 33 31 27 33 35 35 34 36 32 36 | 376
osd.3 37 31 33 37 31 29 29 30 28 32 28 | 360
-------------------------------------------------< - *total pgs per pool*
SUM :128 128 128 128 128 128 128 128 128 128 128
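One possible way to build such a tally yourself is to parse ceph pg dump. This is only a sketch: the JSON key names ("pg_map", "pg_stats", "pgid", "up") vary between releases, so adjust them to whatever your version emits.
import json
import subprocess
from collections import defaultdict

# Count PG replicas per OSD and PGs per pool from `ceph pg dump`.
dump = json.loads(subprocess.check_output(['ceph', 'pg', 'dump', '--format', 'json']))
pg_stats = dump.get('pg_map', dump).get('pg_stats', [])

per_osd = defaultdict(int)
per_pool = defaultdict(int)
for pg in pg_stats:
    pool_id = pg['pgid'].split('.')[0]   # pgid looks like "17.3f"
    per_pool[pool_id] += 1
    for osd in pg['up']:
        per_osd[osd] += 1

print('pgs per osd :', dict(per_osd))
print('pgs per pool:', dict(per_pool))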
There's a direct correlation between the number of pools you have and the number of placement groups that are assigned to them.
I have 11 pools in the snippet above and they each have 128 pgs and that's too many!! My reasonable defaults are 64! So what happened??
I was misunderstanding how the 'reasonable defaults' were being used. When I set my default to 64, you can see that Ceph has taken my crush map into account, where
I have a failure domain between site a and site b. Ceph has to ensure that everything on site a is at least accessible on site b.
WRONG
site a
osd.0
osd.1 TOTAL of ~ 64pgs
site b
osd.2
osd.3 TOTAL of ~ 64pgs
We needed a grand total of 64 pgs per pool, so our reasonable defaults should actually have been set to 32 from the start!
If we use ceph osd pool create <pool> 32 32, the relationship between our pgs per pool, our pgs per osd, those 'reasonable defaults', and the recommended max pgs per osd starts to make sense.
So you broke your cluster ^_^
Don't worry, we're going to fix it. The procedure here, I'm afraid, may vary in risk and time depending on how big your cluster is. The only way to get around altering this is to add more storage, so that the placement groups can redistribute over a larger surface area, OR to move everything over to newly created pools.
I'll show an example of moving the default.rgw.buckets.data pool:
old_pool=default.rgw.buckets.data
new_pool=new.default.rgw.buckets.data
create a new pool, with the correct pg count:
ceph osd pool create $new_pool 32
copy the contents of the old pool to the new pool:
rados cppool $old_pool $new_pool
remove the old pool:
ceph osd pool delete $old_pool $old_pool --yes-i-really-really-mean-it
rename the new pool to 'default.rgw.buckets.data'
ceph osd pool rename $new_pool $old_pool
Now it might be a safe bet to restart your radosgws.
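If several pools need the same treatment, the four commands above can be wrapped in a small loop. This is only my procedure restated as a sketch; I would stop the radosgw instances first so the pools aren't written to mid-copy (that caution is mine, not part of the steps above).
import subprocess

def migrate(pool, pg_num=32):
    # Recreate the pool with the desired pg count and copy its contents across,
    # mirroring the manual create / cppool / delete / rename steps above.
    tmp = 'new.' + pool
    subprocess.run(['ceph', 'osd', 'pool', 'create', tmp, str(pg_num)], check=True)
    subprocess.run(['rados', 'cppool', pool, tmp], check=True)
    subprocess.run(['ceph', 'osd', 'pool', 'delete', pool, pool,
                    '--yes-i-really-really-mean-it'], check=True)
    subprocess.run(['ceph', 'osd', 'pool', 'rename', tmp, pool], check=True)

migrate('default.rgw.buckets.data')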
FINALLY CORRECT
site a
osd.0
osd.1 TOTAL of ~ 32pgs
site b
osd.2
osd.3 TOTAL of ~ 32pgs
As you can see, the pool numbers have incremented, since pools are added by pool id and these are new copies. And our total pgs per osd is well under the ~256, which gives us room to add custom pools if required.
pool : 26 35 27 36 28 29 30 31 32 33 34 | SUM
-----------------------------------------------
osd.0 15 18 16 17 17 15 15 15 16 13 16 | 173
osd.1 17 14 16 15 15 17 17 17 16 19 16 | 179
osd.2 17 14 16 18 12 17 18 14 16 14 13 | 169
osd.3 15 18 16 14 20 15 14 18 16 18 19 | 183
-----------------------------------------------
SUM : 64 64 64 64 64 64 64 64 64 64 64
Now you should test your Ceph cluster with whatever is at your disposal. Personally, I've written a bunch of Python over boto that tests the infrastructure and returns bucket stats and metadata rather quickly. It has assured me that the cluster is back in working order, without any of the issues it suffered from previously. Good luck!
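For what it's worth, the smoke test can be as simple as something along these lines (boto3 pointed at the RGW S3 endpoint; the endpoint URL and credentials are placeholders, and my own scripts do quite a bit more than this):
import boto3

# List every bucket and sample a few keys from each, just to prove the gateway answers.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.local:7480',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

for bucket in s3.list_buckets()['Buckets']:
    listing = s3.list_objects_v2(Bucket=bucket['Name'], MaxKeys=5)
    print(bucket['Name'], listing.get('KeyCount', 0), 'objects sampled')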
Fixing pool default.rgw.buckets.data has many more objects per pg than average (too few pgs?) once and for all
This quite literally means you need to increase the pg and pgp num of your pool. So... do it, with everything mentioned above in mind. When you do this, however, note that the cluster will start backfilling; you can watch the progress with watch ceph -s in another terminal window or screen.
ceph osd pool set default.rgw.buckets.data pg_num 128
ceph osd pool set default.rgw.buckets.data pgp_num 128
Armed with the knowledge and confidence in the system provided in the above segment we can clearly understand the relationship and the influence of such a change on the cluster.
pool : 35 26 27 36 28 29 30 31 32 33 34 | SUM
----------------------------------------------
osd.0 18 64 16 17 17 15 15 15 16 13 16 | 222
osd.1 14 64 16 15 15 17 17 17 16 19 16 | 226
osd.2 14 66 16 18 12 17 18 14 16 14 13 | 218
osd.3 18 62 16 14 20 15 14 18 16 18 19 | 230
-----------------------------------------------
SUM : 64 256 64 64 64 64 64 64 64 64 64
Can you guess which pool id is default.rgw.buckets.data? haha ^_^
In Ceph Nautilus (v14 or later), you can turn on "PG Autotuning". See this documentation and this blog entry for more information.
I accidentally created pools with live data that I could not migrate to repair the PGs. It took some days to recover, but the PGs were optimally adjusted with zero problems.

Indy 10 IdTCPServer ReadBytes scrambling data

I am trying to use Indy 10's ReadBytes() in Delphi 2007 to read a large download consisting of a series of data segments, each formatted as a [#bytes]\r\n header followed by the data, where #bytes indicates the number of bytes. My algorithm is:
Use ReadBytes() to get the [#]\r\n text, which is normally 10 bytes.
Use ReadBytes() to get the specified # data bytes.
Go to step 1 if more data segments need to be processed, i.e., # is negative.
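For illustration only, here is the same framing logic sketched in Python rather than Delphi/Indy; the 10-byte [#bytes]\r\n header and the negative-count "more segments follow" convention are taken from the description above.
def read_exact(sock, count):
    # Loop until exactly `count` bytes have been received.
    buf = b''
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise ConnectionError('peer closed mid-segment')
        buf += chunk
    return buf

def read_segments(sock):
    # usage: for segment in read_segments(connected_socket): ...
    while True:
        header = read_exact(sock, 10)                 # e.g. b'[-08019]\r\n'; "normally 10 bytes",
                                                      # a robust version would read up to \r\n instead
        size = int(header.strip(b'[]\r\n').decode())  # e.g. -8019
        yield read_exact(sock, abs(size))             # the data bytes themselves
        if size >= 0:                                 # negative means more segments follow
            break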
This works well but frequently I don't get the expected text at step 1. Here's a short example after 330 successful data segments:
Data received from last step 2 ReadBytes(). NOTE embedded Step 1 [-08019]\r\n text.
Line|A033164|B033164|C033164|D033164|E033164|F033164|G033164|H033164|EndL\r|Begin
Line|A033165|B033165|C033165|D033165|E033165|F033165|G033165|H033165|EndL\r|Begin
Line|A033166|B033166|C033166|D033166|E033166|F033166|G033166|H033166|EndL\r[-08019]
\r\n|Begin
Line|A033167|B033167|C033167|D033167|E033167|F033167|G033167|H033167|EndL\r|Begin
Line|A033168|B033168|C033168|D033168|E033168|F033168|G033168|H033168|EndL\r|Begin
Socket data captured by Wireshark.
0090 30 33 33 31 36 36 7c 42 30 33 33 31 36 36 7c 43 033166|B033166|C
00a0 30 33 33 31 36 36 7c 44 30 33 33 31 36 36 7c 45 033166|D033166|E
00b0 30 33 33 31 36 36 7c 46 30 33 33 31 36 36 7c 47 033166|F033166|G
00c0 30 33 33 31 36 36 7c 48 30 33 33 31 36 36 7c 45 033166|H033166|E
00d0 6e 64 4c 0d ndL.
No. Time Source Destination Protocol Length Info
2837 4.386336000 000.00.247.121 000.00.172.17 TCP 1514 40887 > 57006 [ACK] Seq=2689776 Ack=93 Win=1460 Len=1460
Frame 2837: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits) on interface 0
Ethernet II, Src: Cisco_60:4d:bf (e4:d3:f1:60:4d:bf), Dst: Dell_2a:78:29 (f0:4d:a2:2a:78:29)
Internet Protocol Version 4, Src: 000.00.247.121 (000.00.247.121), Dst: 000.00.172.17 (000.00.172.17)
Transmission Control Protocol, Src Port: 40887 (40887), Dst Port: 57006 (57006), Seq: 2689776, Ack: 93, Len: 1460
Data (1460 bytes)
0000 5b 2d 30 38 30 31 39 5d 0d 0a 7c 42 65 67 69 6e [-08019]..|Begin
0010 20 4c 69 6e 65 7c 41 30 33 33 31 36 37 7c 42 30 Line|A033167|B0
0020 33 33 31 36 37 7c 43 30 33 33 31 36 37 7c 44 30 33167|C033167|D0
0030 33 33 31 36 37 7c 45 30 33 33 31 36 37 7c 46 30 33167|E033167|F0
Does anyone know why this happens? Thanks
More information: we do socket reading from a single thread and don't call Connected() while reading. Here's the relevant code snippet:
AClientDebugSocketContext.Connection.Socket.ReadBytes(inBuffer, byteCount, True);
numBytes := Length(inBuffer);
Logger.WriteToLogFile('BytesToString: ' + BytesToString(inBuffer, 0, numBytes), 0);
Move(inBuffer[0], Pointer(Integer(Buffer))^, numBytes);
Embedded data like you describe, especially at random times, usually happens when you read from the same socket in multiple threads at the same time without adequate synchronization between them. One thread may receive a portion of the incoming data, and another thread may receive another portion, and they end up storing their data in the InputBuffer in the wrong order. It's hard to say for sure whether that is your problem, since you did not provide any code. The best option is to make sure you never read from the same socket in multiple threads at all. That includes any calls to Connected(), since it performs a read operation internally. You should do all of your reading within a single thread. If that is not an option, then at least wrap your socket I/O with some kind of inter-thread lock, such as a critical section or mutex.
Update: You are accessing a TIdContext object via your own AClientDebugSocketContext variable. Where is that code being used exactly? If it is not in the context of the server's OnConnect, OnDisconnect, OnExecute, or OnException events, then you are reading from the same socket across multiple threads, because TIdTCPServer internally calls Connected() (which does a read) in between calls to the OnExecute event for that TIdContext object.

MIDI file parsing, unrecognised event type

I have a problem trying to parse a MIDI file. I am trying to parse the notes files used by the Frets on Fire game (it just uses MIDI files, so I don't think this is relevant), if any of you are familiar with it; the problem I am having is a general MIDI problem. I have a file with a track called "guitar part"; the hex, as viewed in a hex editor, is as follows:
4D 54 72 6B 00 00 1E 74 00 FF 03 0B 50 41 52 54 20 47 55 49 54 41 52 A9 20 90 61 40 9A 20 61 00 83 60 63 40 BC
My program parses this fine as follows:
4D M
54 T
72 R
6B K
00 < --
00 size of
1E track part
74 -- >
00 time of this event
FF event type (this is meta)
03 meta event type
0B length of data
50 "P"
41 "A"
52 "R"
54 "T"
20 " "
47 "G"
55 "U"
49 "I"
54 "T"
41 "A"
52 "R"
A9 time of event (variable length) 10101001
20 time of event (variable length) 00100000
90 event,channel (non-meta) 1001=note on,channel=0000
61 note on has 2 params this is the first
40 this is the second
9A variable time 10011010
20 variable time 00100000
This is where my problem lies: there is no event that has event type 0x6. Since 0x61 is 01100001, we have to assume it's non-meta; therefore the event type should be 6 (0110) and the channel (0001), but the MIDI specification contains no identification for this event. I've added a few of the bytes after this in case they are somehow relevant, but obviously at the moment my program hits the next byte, doesn't recognise the event, and bombs out.
61
00
83
60
63
40
BC
If anyone thinks they could shed any light on where my parsing logic has gone wrong, I'd be most appreciative. Sorry for the formatting; I couldn't think of a better way to illustrate my problem.
I have been using this site as a reference: http://www.sonicspot.com/guide/midifiles.html and it hasn't led me wrong so far. I figured this might be something directly relating to Frets on Fire, but it doesn't seem to be, as I downloaded another notes file for the game and that file did not contain this event.
Thanks in advance.
It's called running status. If an event is of the same type as the previous event, the MIDI status byte can be eliminated. So if the first byte after the timing info is < $80, use the previous status. In the case of your $61 byte, the previous status was $90, so it's Note On, channel 0. Which makes sense since the previous event was note number $61 velocity $40. This event is note number $61 velocity 0 (releasing the previously played note). The next event is note number $63 velocity $40.
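To make that concrete, a parser has to remember the last status byte and reuse it whenever the next event byte is below $80. Here is a minimal sketch of that logic in Python (an illustration only: it handles meta events and two-parameter channel events such as note-on, not the full MIDI spec).
def read_varlen(data, pos):
    # Read a MIDI variable-length quantity; return (value, new_pos).
    value = 0
    while True:
        byte = data[pos]
        pos += 1
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:
            return value, pos

def parse_track_events(data):
    pos = 0
    running_status = None
    events = []
    while pos < len(data):
        delta, pos = read_varlen(data, pos)
        if data[pos] >= 0x80:            # a real status byte
            status = data[pos]
            pos += 1
        else:                            # running status: reuse the previous status byte
            status = running_status
        if status == 0xFF:               # meta event: type, length, payload
            meta_type = data[pos]
            length, pos = read_varlen(data, pos + 1)
            events.append((delta, 'meta', meta_type, data[pos:pos + length]))
            pos += length
        else:                            # channel event, e.g. 0x9n = note on
            running_status = status
            param1, param2 = data[pos], data[pos + 1]
            pos += 2
            events.append((delta, status >> 4, status & 0x0F, param1, param2))
    return events

# The track bytes from the question (after the MTrk header and length):
track = bytes.fromhex('00FF030B5041525420475549544152A9209061409A20610083606340')
for event in parse_track_events(track):
    print(event)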
