Is it possible to burn from dead address in bsc network? - token

I sent my tokens to the dead address (0x000000000000000000000000000000000000dead).
I originally wanted to burn all my tokens, so I sent them to the dead address using MetaMask.
Now I can see that my token's (https://bscscan.com/address/0x0083a5a7e25e0Ee5b94685091eb8d0A32DfF11D4) total supply hasn't been reduced, and the dead address shows up as a holder of the token. How can I fix this?
I actually want to remove all of the tokens minted from my token contract.

I'm afraid you have misunderstood the concept of burning coins. Burning does not destroy coins. It sends them to an address/wallet/account that can only receive but cannot send (or spend) them, making them effectively lost forever as this is recorded in the immutable ledger.
This means that the supply of tokens in circulation (those tokens that can still be used to make transactions) is reduced, but not the total supply. So actually, everything that happened in your case is completely expected.
Here is one among many internet resources that explains the concept of burning coins:
https://www.investopedia.com/tech/cryptocurrency-burning-can-it-manage-inflation/

I see that you used the regular transfer() method to send your tokens to the dead address (link).
Your contract implements the burn() function that effectively reduces the total supply as well.
Expanding on Marko's answer: In this particular case, you should use the burn() function instead of just regular transfer. However, different token contracts might use different function names or not implement a burn mechanism at all - it all depends on the token contract implementation.
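To make the transfer-vs-burn distinction concrete, here is a minimal sketch in plain Python (not Solidity) of an ERC-20-style ledger. The class and method names are illustrative, but the behavior mirrors what happened here: a transfer to the dead address leaves the total supply untouched, while a true burn() reduces it.

```python
DEAD = "0x000000000000000000000000000000000000dead"

class Token:
    """Toy ledger illustrating transfer-to-dead vs. a real burn."""

    def __init__(self, supply, owner):
        self.total_supply = supply
        self.balances = {owner: supply}

    def transfer(self, sender, to, amount):
        # Moves tokens between balances; total_supply is untouched,
        # so sending to the dead address only parks them there forever.
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

    def burn(self, sender, amount):
        # Destroys tokens: the balance AND total_supply both shrink,
        # which is what a block explorer reflects as reduced supply.
        self.balances[sender] -= amount
        self.total_supply -= amount
```

For example, after `Token(1000, "me").transfer("me", DEAD, 400)` the total supply is still 1000 with the dead address holding 400; only `burn()` brings the total supply down.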


Unspoofing phone calls

So I get spoofed calls and they're annoying; however from a little reading it seems like all the spoofing is only done in the Caller ID field, but that there are additionally 2-3 ANI fields that generally are used for carrier billing that are much more difficult to spoof. I also have both legitimate friends and spam calls that use blocked numbers, and again it seems it's just Caller ID being left blank and ANI still being submitted. (please correct if this assumption is wrong/there's a better value to use instead)
My end goal is to set up a "public" number that I give out to people, and that "public" number would read the ANI data, "fix" the caller ID, then forward the call to my actual number/send me a text/notification with the real number. My understanding is this is possible if I internally forwarded the call to an 800 number I own first (then forward the number back to a non-800 number to avoid charges) but I haven't seen this mentioned in any Twilio/Bandwidth.com/etc APIs - they mention a 'from' field but not how that field is determined. I've seen products that do this like Trapcall so I know it's possible somehow, but would prefer not to forward all my calls to a number I don't control.
How do I do this? If I forward a call with a fake/blocked caller ID to an 800 number on Twilio/Bandwidth will the from number of that forwarded call be automatically corrected/unblocked? (And would I be able to compare the from of the original call to the from of the 800 call, where a mismatch would mean a spoofed number?) Or is there some specific way the 800 number has to be setup for this/the 800 numbers off of Twilio don't work at all/etc?
I also read that ANI is not very reliable on VOIP calls, and VOIP calls are more or less anonymous. Is there any way to find out whether an incoming call is being made from a VOIP service or from an actual landline/mobile? I know there's the Caller ID lookup, but if we assume that data is unreliable can we find out just from data made available during the call itself?
Figured it out: it does work from toll-free numbers; it's just that Twilio specifically didn't work for this. It worked with other providers.

How can you verify that a blockchain/DLT has not been tampered with?

I understand how a distributed ledger ensures integrity using a hash-chained data model, whereby each block links back to its predecessor (and thus, transitively, to all previous blocks).
I also understand how, in a PoW/PoS/PoET/(insert any trustless consensus mechanism here) context, the verification process makes it difficult for a malicious individual (or group) to tamper with a block, because they would not have the necessary resources to broadcast a modified instance of the ledger to all network members and have them adopt the wrong version.
My question is: if, let's say, someone does manage to change a block, does an integrity-checking process ever happen? Is it an actual part of the verification mechanism, and if so, how far back in history does it go?
Is it ever necessary to verify the integrity of, say, block number 500 out of a total of 10000, and if so, how do I do that? Do I need to start from block 10000 and verify all blocks from there back to block 500?
My question is: if, let's say, someone does manage to change a block, does an integrity-checking process ever happen?
Change the block where? If you mean change my copy of the block in my computer, how would they do that? By breaking into my computer? And how would that affect anyone else?
Is it an actual part of the verification mechanism, and if so, how far back in history does it go? Is it ever necessary to verify the integrity of, say, block number 500 out of a total of 10000, and if so, how do I do that? Do I need to start from block 10000 and verify all blocks from there back to block 500?
The usual rule for most blockchains is that every full node checks every single block it ever receives to ensure that it follows every single system rule for validity.
While you could re-check every block you already checked to ensure that your particular copy hasn't been tampered with, this generally serves no purpose. Someone who could tamper with your local block storage could also tamper with your local checking code. So this kind of check doesn't usually provide any additional security.
To begin with, tampering with a block is not made almost impossible by a resource shortage for broadcasting the wrong ledger to all nodes. Broadcasting is not necessarily resource-intensive; it is a chain reaction that you only have to trigger.
The real challenge of tampering with a chained block arises from the difficulty of recomputing valid hashes (meeting the block difficulty level) for all the successive blocks, i.e., the blocks that come after the one being tampered with. Altering a block alters its hash, which in turn invalidates the previous-hash field stored in the next block, and so on up to the latest block. Say the latest block index is 1000 and you tamper with the 990th block: you would have to re-mine (recalculate a valid hash by varying the nonce) blocks 990 through 1000. That in itself is very difficult. And even if you somehow managed it, by the time you broadcast your updated blockchain, other miners would have mined further blocks (say index 1001, 1002), so yours would not be the longest valid blockchain and would therefore be rejected.
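The hash-chain check that makes this tampering detectable can be sketched in a few lines of Python. This is a simplified illustration (no proof-of-work difficulty target, just the linkage), with hypothetical field names; it shows why altering any block breaks every link after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically (the stored
    # "hash" field itself is excluded from the digest).
    payload = json.dumps(
        {k: block[k] for k in ("index", "prev_hash", "data", "nonce")},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, prev_hash, data, nonce=0):
    b = {"index": index, "prev_hash": prev_hash, "data": data, "nonce": nonce}
    b["hash"] = block_hash(b)
    return b

def verify_chain(chain):
    # Walk the whole chain: each block's stored hash must match its
    # recomputed contents, and each block must point at the previous
    # block's hash. Any tampered block fails one of these checks.
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Changing the data in any block makes its stored hash stale, so `verify_chain` rejects the chain without needing to know which block was touched; this is the check every full node effectively performs as blocks arrive.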
According to this article, when a new block is broadcast to the network, the integrity of its transactions is validated and verified against the transaction history. Trouble arises only if a malicious, longer blockchain is broadcast to the network: in that case, the protocol forces the nodes to accept the longest chain. Note that this longest chain maintains its own internal integrity and verifies correctly, but only against its own version of the truth. Mind you, this is only possible if the attacker controls at least 51% of the network's hashing power, and that kind of power is assumed to be practically unattainable. https://medium.com/coinmonks/what-is-a-51-attack-or-double-spend-attack-aa108db63474

EOS custom token transfer?

I know this isn't the best place for a question like this, but I have to design a project on a short timeline and would greatly appreciate a quick answer.
According to walnutown (https://github.com/walnutown) in issue https://github.com/EOSIO/eos/issues/4173, you'd be charged RAM for transfers of custom EOS tokens. I just need to know whether this is true.
Thanks in advance, enjoy : )
Yes, but the amount of RAM spent depends on whether the recipient of the custom token already has an accounts table entry.
The token::transfer(...) action invokes token::add_balance(..., ram_payer), and the third argument, ram_payer, is the sender.
If the recipient already has an accounts table (i.e., already holds the custom token), the transfer consumes only 128 bytes of the sender's RAM; otherwise, the transfer consumes 368 bytes to allocate a new accounts table and add a new item (the recipient's balance for the custom token).
Yes. RAM is used to store changes to the contract's state. The balance of a token for a particular account is saved in RAM. As per the default eosio.token contract, this state is saved against the RAM of the "from" user who is pushing the transaction. In the issuing case, too, the issuer's RAM is consumed.
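The sender-pays accounting described above can be sketched in plain Python. This is a toy model, not eosio.token code: the 128- and 368-byte figures are taken from this answer rather than measured from the contract source.

```python
def transfer(ram_usage, balances, sender, recipient, amount):
    """Toy model of token::transfer with ram_payer = sender."""
    # A first-time recipient has no accounts-table row yet, so the
    # transfer must allocate one; the figures below come from the
    # answer above (368 bytes new row, 128 bytes otherwise).
    first_time = recipient not in balances
    balances[sender] = balances.get(sender, 0) - amount
    balances[recipient] = balances.get(recipient, 0) + amount
    # The SENDER's RAM quota absorbs the storage cost either way.
    ram_usage[sender] = ram_usage.get(sender, 0) + (368 if first_time else 128)
```

Running two transfers to the same recipient shows the asymmetry: the first costs the sender 368 bytes, every later one only 128.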

What happens if two nodes with the same NAME claim the same address in J1939?

If two nodes with the same NAME claim the same address in J1939, what happens? Will one of the nodes claim the address, or will an error occur?
My copy of the specification is dated, but I'm sure this rule hasn't changed since 2003 (SAE J1939-81):
"Manufacturers of ECUs and integrators of networks must assure that the NAMEs of all CAs intended to transmit on a particular network are unique."
That being said, it is of course possible to put devices with the same NAME on the same set of wires, whether through ignorance or malicious intent.
I personally haven't tried it, but in theory, if your device has the exact same NAME as another, your address claim will exactly overlap the other's: neither device would be aware of the other's presence, the message would go through successfully, and each device would assume it was the one that sent it.
I may be wrong, but I think the only odd thing a CA might see is a message coming in from an address it thought it had claimed, a problem it may not even be checking for.
From the network's standpoint, there is no way to tell the nodes apart, since they identify themselves as the same entity. What would happen is that the first request will be addressed and the second will be ignored. In other words, this is a race condition, because only one message is processed at a time on the datalink. By the time the second node tries to claim the same address, the address table is already occupied, and the late-requesting node won't receive notification that the address was assigned to it. Remember that each node has its own internal state/configuration.
J1939-81 says
"Repeated Collisions Occur, devices go BUS OFF CAs should retry using
a Pseudo-random delay before attempting to reclaim and then revert to
Figures 2 and 3."
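The arbitration rule that this failure mode defeats can be sketched in plain Python. This is a hedged illustration, not J1939 stack code: during normal address claiming, the contender with the numerically lower NAME wins a contested address, which is exactly why identical NAMEs leave the outcome undefined.

```python
def resolve_claim(name_a, name_b):
    """Toy J1939 address-claim arbitration between two contenders
    for the same address, based on their 64-bit NAME values."""
    # Lower NAME value has higher priority and keeps the address.
    if name_a < name_b:
        return "A"
    if name_b < name_a:
        return "B"
    # Identical NAMEs: neither device can distinguish the other's
    # claim from its own -- the failure mode discussed above.
    return "undefined"
```

With distinct NAMEs, the loser must claim a different address (or go address-less); with equal NAMEs, the tie-break never fires, and both devices silently proceed as if unopposed.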

Could the Valence API be used to handle realtime user account creation and enrollment for large numbers?

Valence allows you to do this, but I wondered whether there are limitations to automating user account setup and enrollment this way. We're a relatively small institution, with perhaps hundreds of enrollments in a given term scattered over several weeks, so I don't think there'd be a problem with real-time events. But I wondered what the implications might be for a larger university with thousands of enrollments updating all the time. There would certainly be spikes of activity as a term reached its official start.
The issue of performance is a complex one, with many interdependent factors. I can't comment on the impact of the hardware hosting your back-end LMS, but obviously, higher-performance hardware and network deployment will result in higher-performance interaction between clients and the LMS.
If your needs are "hundreds of creations scattered over several weeks", that is certainly within comfortable expected performance of a back-end service.
At the moment, each user-creation request must be done separately, and if you want to pre-provide a password for a user, then it takes two calls (you can, in one call, nudge the LMS to send a "password set" email to the new user, but if you want to manually set the password for the user, you have to do that in a separate call after the user record gets created). The individual calls are pretty light-weight, being a simple HTTP POST with response back to provide the created user record's details.
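The two-call flow described above can be sketched as request builders in Python. The route paths and body field names below reflect my understanding of the Valence Learning Platform API (POST to the users collection, then PUT to the user's password route); treat them as assumptions and verify them against the current Valence API reference before relying on them.

```python
LP_VERSION = "1.9"  # assumption; use whichever LP version your LMS supports

def create_user_request(base_url, first, last, username, role_id):
    """Build call 1: create the user record (no password can be set here)."""
    url = f"{base_url}/d2l/api/lp/{LP_VERSION}/users/"
    body = {
        "FirstName": first,
        "LastName": last,
        "UserName": username,
        "RoleId": role_id,
        "IsActive": True,
        # False: we will set the password ourselves in a second call,
        # instead of nudging the LMS to send a "password set" email.
        "SendCreationEmail": False,
    }
    return "POST", url, body

def set_password_request(base_url, user_id, password):
    """Build call 2: set the password for the user created in call 1,
    using the UserId returned in the creation response."""
    url = f"{base_url}/d2l/api/lp/{LP_VERSION}/users/{user_id}/password"
    return "PUT", url, {"Password": password}
```

Each pair of calls is lightweight (a simple HTTP request and JSON response), which is why hundreds of creations scattered over weeks sit comfortably within expected performance; the requests would still need Valence's usual authentication signing before being sent.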
There are roadmapped plans to expand support for batch operations, and batched user-creation is certainly a logical possibility for improvement.