Is there a project that uses two different token burning methods at the same time?

I'm wondering whether it is beneficial or harmful to use both a "token burning function" (which decreases the total supply) AND sending tokens to a burn address (which decreases the circulating supply).
Does a project that uses both methods exist?
I have done quite a bit of research, but haven't found anything meaningful so far.

Sending tokens to an unused address, such as the zero address, is a way to burn tokens when the smart contract does not implement a token-burning mechanism: you throw the tokens away and consider them burned. Decreasing the total supply is cleaner than throwing tokens away like this, though. So if you are developing the contract, I recommend implementing a burning mechanism that decreases the total supply; that way you can easily track how many tokens are actually out there.
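If you control the contract, the difference between the two methods can be sketched with a minimal in-memory ledger (Python pseudocode for the idea, not Solidity; `BURN_ADDRESS` is a hypothetical stand-in for the zero address):

```python
class Token:
    """Minimal in-memory token ledger (illustrative only, not a real contract)."""

    BURN_ADDRESS = "0x0"  # hypothetical stand-in for an unspendable address

    def __init__(self, supply, owner):
        self.total_supply = supply
        self.balances = {owner: supply}

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def burn(self, src, amount):
        # "Real" burn: tokens disappear and total supply shrinks.
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.total_supply -= amount

    def circulating_supply(self):
        # Tokens sent to the burn address still count toward total_supply,
        # but not toward circulating supply.
        return self.total_supply - self.balances.get(self.BURN_ADDRESS, 0)
```

With a real burn, `total_supply` stays an honest count; with the burn-address trick, anyone auditing the contract must know to subtract the dead address's balance.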

Related

How is IOTA on the Tangle quantum-proof?

I do understand that the Tangle has a graph-based data structure, i.e. it forms a directed acyclic graph rather than a linear chain like a typical blockchain. But I could not figure out whether this structure makes it quantum-proof or not. Are the absence of mining and peer verification enough to make a distributed ledger quantum-proof?
I asked a very similar thing here https://bitcoin.stackexchange.com/questions/55202/iota-quantum-resistance
The way the ledger is organized, as a linked list (blockchain) or as a DAG (Tangle), certainly has no impact. There is still some sort of PoW (when you submit a new transaction), but that is also irrelevant.
Basically, with a quantum computer, cryptographic one-way hash functions (like SHA-2, SHA-3, BLAKE2) are still OK with a few caveats, and the same goes for block ciphers (like AES). Traditional public-key cryptography (RSA, DSA, Diffie-Hellman and their elliptic-curve versions) is, however, NOT secure anymore. So you can't have signatures (which are quite necessary for cryptocurrencies). There are some complicated workaround constructions, but the simplest is one based on hash functions (Lamport OTS). More references are in my question. Note that I still don't know how exactly IOTA does this; basically I got stuck reading about their Curl hash function.
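As a rough sketch of the hash-based approach mentioned above, here is a minimal Lamport one-time signature in Python (illustrative only: each key pair must sign exactly one message, and real deployments such as IOTA's Winternitz variant are more involved):

```python
import hashlib
import secrets

def _h(data):
    return hashlib.sha256(data).digest()

def keygen():
    # Secret key: 256 pairs of random preimages, one pair per digest bit.
    # Public key: the hashes of those preimages.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(sk, message):
    # For each bit of the message digest, reveal one of the two preimages.
    digest = _h(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, message, sig):
    digest = _h(message)
    for i, preimage in enumerate(sig):
        bit = (digest[i // 8] >> (i % 8)) & 1
        if _h(preimage) != pk[i][bit]:
            return False
    return True
```

Security rests only on the one-way hash function, which is why this survives quantum attacks that break RSA/ECC; the catch is that reusing a key reveals more preimages and destroys its security, hence "one-time".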

Why is the CAP theorem interesting?

Is there anything mathematically interesting about the CAP theorem? Looking at the proof, there seem to be four different cases: two different statements across two formalisms. The CAP theorem holds in the three trivial cases and not in the fourth, and all of them use extremely roundabout proof techniques to say something extraordinarily simple.
3.1 Thm 1. If two machines have no communication whatsoever, they cannot contain consistent data.
3.1 Corollary 1.1 If two machines are not allowed to wait to receive messages from each other and the communication line between them is arbitrarily slow, you get an inconsistent result if you write to one and then immediately query the other.
4.2 Thm 2. If two machines that are allowed to wait-with-timeout have no connection whatsoever, they still cannot contain consistent data.
... but if the communication line between them has guarantees about worst-case transmission time, then you can just wait for the timeout each time you perform a write, and the CAP theorem doesn't apply.
Am I missing something here? The proof techniques used in the paper seem to be more like the kind of thing you find in the generals-on-a-hill problem (which IS nontrivial) where the generals can set a time to coordinate their attack and agree they're going to do it, but they can't agree that they agree. But I just can't see how that applies here.

What's the overhead for gen_server calls in Erlang/OTP?

This post on Erlang scalability says there's an overhead for every call, cast or message to a gen_server. How much overhead is it, and what is it for?
The cost being referenced is the cost of a (relatively blind) function call to an external module. This happens because everything in the gen_* abstractions is a callback to externally defined functions (the functions you write in your callback module), not function calls that can be optimized by the compiler within a single module. Part of that cost is the resolution of the call (finding the right code to execute, for the same reason each "dot" in `a.long.function.or.method.call` in Python or Java raises the cost of resolution), and another part is the actual call itself.
BUT
This is not something you can calculate as a simple per-call quantity and multiply out to get a meaningful answer about the cost of operations across your system.
There are too many variables, points of constraint, and unexpectedly cheap elements in a massively concurrent system like Erlang where the hardest parts of concurrency are abstracted away (scheduling related issues) and the most expensive elements of parallel processing are made extremely cheap (context switching, process spawn/kill and process:memory ratio).
The only way to really know anything about a massively concurrent system, which by its very nature will exhibit emergent behavior, is to write one and measure it in actual operation. You could write exactly the same program in pure Erlang once and then again as an OTP application using gen_* abstractions and measure the difference in performance that way -- but the benchmark numbers would only mean anything to that particular program and probably only mean anything under that particular load profile.
All this taken in mind... the numbers that really matter when we start splitting hairs in the Erlang world are the reduction budget costs the scheduler takes into account. Lukas Larsson at Erlang Solutions put out a video a while back about the scheduler and details the way these costs impact the system, what they are, and how to tweak the values under certain circumstances (Understanding the Erlang Scheduler). Aside from external resources (iops delay, network problems, NIF madness, etc.) that have nothing to do with Erlang/OTP the overwhelming factor is the behavior of the scheduler, not the "cost of a function call".
In all cases, though, the only way to really know is to write a prototype that represents the basic behavior you expect in your actual system and test it.
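The point about measuring rather than calculating also applies to the call-resolution cost itself. As a loose analogy (in Python rather than Erlang, since the answer mentions Python's attribute resolution), one can measure the gap between a dynamically resolved call and a pre-resolved one; the numbers only mean anything on your machine, under your load:

```python
import timeit

class Server:
    def handle(self):
        return 1

srv = Server()
bound = srv.handle  # resolve the attribute once, outside the timed loop

# Each call below pays an attribute lookup, then the call itself.
resolved_each_time = timeit.timeit("srv.handle()",
                                   globals={"srv": srv}, number=1_000_000)
# Here only the call remains; resolution was paid once up front.
pre_resolved = timeit.timeit("bound()",
                             globals={"bound": bound}, number=1_000_000)
print(f"lookup+call: {resolved_each_time:.3f}s, call only: {pre_resolved:.3f}s")
```

The same caveat from the answer applies: a micro-benchmark like this tells you nothing about how a concurrent system behaves under a real load profile.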

I'm looking for suggestions on how to store things of value in my db

I have a requirement (pending) that allows the user to buy "credits" and then to trade those credits for real goods or services.
This is the first time I have had to do this in an application and I am concerned it would paint a target on my database's back. A hacker could (in theory) change the amount of credits a user has and then "spend" those credits, or convert them to cash.
I'm building the solution in ruby/rails, but I am not limited to this tech. I can even use an outside provider if it's more practical.
Does anyone have any suggestions on how to do this? Would you encrypt your DB? Would that be enough?
There are a lot of different things to consider. When it comes to matters of security, there is never a silver bullet (anyone who suggests otherwise is likely selling snake oil); rather, security often involves many different steps to mitigate and manage risk, plus redundant layers of protection so that you remain protected even if some subset of those layers fails for whatever reason.
In terms of storing monetary transactions, there are often a number of legal regulations that need to be followed, so I suggest consulting those in addition to taking other security measures.

In terms of encrypting the database, there are many different ways to apply encryption: one can encrypt the database as a whole, or individual rows using different keys. If you just encrypt the database as a whole, it won't provide much protection against anyone who has access to the key (which also raises the question of who has the key, where it is stored, etc.).

What you probably want, in addition to the database itself, is an append-only log of transactions with some sort of checksumming, so that you can be assured of its authenticity. That gives you an audit trail that is independent of the database (and from which the database could be reconstructed in the event of a breach).

In addition, you'll want to ensure that only authorized applications (such as the production instance of your frontend server) can talk to and decrypt the database, and that only a very limited number of people you trust can deploy new versions of those applications, so that no one can deploy malicious versions that abuse that access. If it's possible to independently encrypt individual rows with different keys (e.g. to encrypt each per-user row with key material derived from that user's login credentials), that is highly advisable (though not always possible, such as when you need to process a row even while the user is not actively interacting with your application).

I'm sure there are other things I have not thought of, which is why you'll also want to conduct regular penetration testing: not only fix anything you discover that way, but also use it to inform processes that prevent similar vulnerabilities in the future.
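A checksummed, append-only log of the kind described can be sketched by hash-chaining entries, so that tampering with any past entry breaks every hash after it (illustrative only; a production system would also sign entries and store the log on separate infrastructure):

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry embeds the hash of the previous
    entry, so modifying any record invalidates the rest of the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, record):
        entry = {"record": record, "prev": self._last_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(payload).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self):
        prev = self.GENESIS
        for entry in self.entries:
            body = {"record": entry["record"], "prev": prev}
            payload = json.dumps(body, sort_keys=True).encode()
            if entry["prev"] != prev or \
               hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Periodically anchoring the latest hash somewhere outside the attacker's reach (a separate service, a printed report) makes even truncation of the log detectable.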
In addition to the security considerations, monetary transactions are one of the few cases where "eventual consistency" doesn't really work; you'll need to be careful in your programming to make transactions appropriately atomic. That is, you wouldn't want the number of credits to decrease at a separate time step from the dispensing of dollars, as that would allow the same credits to be spent twice; the decrease in credits and the increase in dollars (or vice versa) must happen together. For this and other reasons, thorough testing and good code-review practices are a good idea.
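The atomicity requirement can be sketched with SQLite (the schema and `redeem` helper here are stand-ins I made up, not part of any framework): both balance changes commit together, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances "
             "(user TEXT PRIMARY KEY, credits INTEGER, dollars INTEGER)")
conn.execute("INSERT INTO balances VALUES ('alice', 100, 0)")
conn.commit()

def redeem(conn, user, credits, dollars):
    """Atomically swap credits for dollars."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE balances SET credits = credits - ?, dollars = dollars + ? "
            "WHERE user = ? AND credits >= ?",
            (credits, dollars, user, credits))
        if cur.rowcount != 1:
            raise ValueError("insufficient credits")
```

Putting the balance check in the same `UPDATE` statement (`credits >= ?`) also closes the race where two concurrent requests both read a sufficient balance and then both spend it.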

How to find the abnormal IDs among so many IDs

We run an affiliate program. Users who sign up can gain points when they successfully recruit other users. However, spammers are abusing this program, and automatically signing up large numbers of accounts. We want to prevent this from happening by closing down clearly machine-generated accounts. My idea for this is to write a program to identify machine-generated account names, or at least select a subset for manual inspection.
So far, we have found that there are two types of abnormal ids:
The first is that some IDs look very similar to others, such as:
wss12345
wss12346
wss12347
test1
test2
...
The second is that some IDs look randomly generated, without any pattern, such as:
MiDjiSxxiDekiE
NiMjKhJixLy
DAFDAB7643
...
For the first type, I use the Levenshtein (edit) distance. This method can find some of the IDs illustrated in type 1. (I have done this, and it performs well.)
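For reference, the edit-distance approach might look like this (a naive O(n²) pairwise scan; for large numbers of IDs you'd want blocking or sorted-neighborhood tricks instead):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def near_duplicates(ids, max_dist=2):
    """Flag IDs whose pairwise edit distance is small (O(n^2) sketch)."""
    flagged = set()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if levenshtein(a, b) <= max_dist:
                flagged.update({a, b})
    return flagged
```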
For the second type, I can calculate the probability of an ID, like this:
id = "DAFDAB7643"
p(id) = p(D)*p(A|D)*p(F|A)*p(D|F)*...*p(3|4)
So I can use the probability to filter out the abnormal ids. (Just an idea; I haven't tried it out.)
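The bigram-probability idea might look like this as a sketch: train character-transition probabilities on known-good IDs, then flag IDs whose average log-likelihood is low. The `^` start marker, the smoothing constant, and length normalization are my own choices, not from the question:

```python
import math
from collections import Counter

def train_bigram_model(ids):
    """Estimate character-transition counts from known-good IDs."""
    pair_counts, char_counts = Counter(), Counter()
    for s in ids:
        s = "^" + s.lower()            # "^" marks the start of an ID
        for a, b in zip(s, s[1:]):
            pair_counts[(a, b)] += 1
            char_counts[a] += 1
    return pair_counts, char_counts

def log_likelihood(model, s, smoothing=1e-6):
    """Average log p(next char | current char); low values look random."""
    pair_counts, char_counts = model
    s = "^" + s.lower()
    total = 0.0
    for a, b in zip(s, s[1:]):
        p = pair_counts[(a, b)] / char_counts[a] if char_counts[a] else smoothing
        total += math.log(max(p, smoothing))
    return total / (len(s) - 1)       # normalize so length doesn't dominate
```

Thresholding the score (or ranking IDs by it and inspecting the tail manually) then gives the filter described above.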
Can anyone give me other suggestions about this topic? How else could I approach this problem? Can you see flaws or omissions in my attempts?
Assuming that these new accounts refer back to the recruiter's ID, I'd look at the rate and/or sheer number of new accounts associated with a given recruiter.
Some analysis on IP addresses or similar may also indicate if multiple users are coming from the same computer.
I'd use a dictionary of words and do something like the reverse of detecting poor passwords: human user names tend to contain dictionary words or personal names, lack punctuation, avoid long runs of repeated characters, be mostly lower case, etc.
Sort of going back to 1. above: if a recruiter has an anomalously tight cluster of IDs, using the features you've already identified, that would be a good flag. I think this is essentially what #larsmans' comment directly under the question suggests.
I'd be curious to know if re-purposing password checking algorithms (item 3) provides any benefit.
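Item 3, the "reverse password check", could be prototyped roughly like this (the word list is a tiny stand-in for a real dictionary, and the thresholds are arbitrary choices for illustration):

```python
COMMON_WORDS = {"test", "john", "mary", "smith", "admin", "user"}  # stand-in

def looks_human(username):
    """Crude heuristic: human-chosen names tend to contain dictionary
    words, carry little digit noise, and avoid erratic capitalization."""
    if not username:
        return False
    name = username.lower()
    has_word = any(w in name for w in COMMON_WORDS)
    digit_ratio = sum(c.isdigit() for c in name) / len(name)
    erratic_case = username != username.lower() and username != username.upper()
    return has_word and digit_ratio < 0.5 and not erratic_case
```

Like any single heuristic it will misfire on its own; it is most useful as one feature fed into the clustering and rate checks above.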
You're not telling us what sort of site you are running, so this is a bit on the speculative side; but consider Stack Overflow as a prime example of successfully promoting good behavior through the use of a user reputation system, and weeding out many kinds of unwanted behaviors.
A quick, hackish fix might be to progressively deduct from the score as the number of dormant recruit accounts grows, but a more rewarding and compelling fix is to award higher reputation scores for actually contributing to the site's content. However, this depends on the type of site you have; a stock-market tips site, say, obviously works quite differently from a technical discussion forum.