EOS custom token transfer? - token

I know this isn't the best place to ask a question like this, but I have to design a project on a short timeline and would greatly appreciate a quick answer.
According to walnutown (https://github.com/walnutown) in issue https://github.com/EOSIO/eos/issues/4173, you'd be charged RAM for transfers of custom EOS tokens. I just need to know if this is true.
Thanks in advance, enjoy : )

Yes, but the amount of RAM spent depends on whether the recipient of the custom token has an accounts table or not.
The token::transfer(...) action invokes token::add_balance(..., ram_payer), where the third argument, ram_payer, is the sender.
If the recipient already has an accounts table (i.e., already holds the custom token), the transfer consumes only 128 bytes of the sender's RAM; otherwise it consumes 368 bytes to allocate a new accounts table and add a new item (the recipient's balance for the custom token).
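As a rough illustration of how the ram_payer billing works, here is a toy in-memory Python model (the real eosio.token contract is C++; the byte costs are the figures quoted above, and all names are illustrative):

```python
# Toy model of eosio.token's transfer/add_balance RAM billing.
# Byte costs are the figures quoted above; the real contract is C++.
RAM_NEW_TABLE = 368   # recipient had no accounts table for this token
RAM_EXISTING = 128    # recipient's accounts table already exists

def transfer(balances, ram_used, sender, recipient, amount):
    """Move `amount` and bill any RAM for the recipient's row to `sender`,
    who is passed as ram_payer in the real contract."""
    balances[sender] = balances.get(sender, 0) - amount
    if recipient in balances:
        ram_used[sender] = ram_used.get(sender, 0) + RAM_EXISTING
    else:
        ram_used[sender] = ram_used.get(sender, 0) + RAM_NEW_TABLE
    balances[recipient] = balances.get(recipient, 0) + amount

balances = {"alice": 100}
ram_used = {}
transfer(balances, ram_used, "alice", "bob", 25)  # bob is new: 368 bytes
transfer(balances, ram_used, "alice", "bob", 10)  # bob exists: 128 bytes
print(ram_used["alice"])  # 496
```

Either way, it is the sender's RAM quota that shrinks, which is exactly the behavior reported in the linked issue.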

Yes. RAM is used to store changes in the contract's state. The balance of a token for a particular account is saved in RAM. With the default eosio.token contract, this state is saved in the RAM of the "from" user who pushes the transaction. Likewise, in the issuing case, the issuer's RAM is consumed.

Related

Is it possible to burn from dead address in bsc network?

I sent my token to the dead address (0x000000000000000000000000000000000000dead).
At first I was trying to burn all my tokens, so I sent them to the dead address using MetaMask.
Now I can see that my token's (https://bscscan.com/address/0x0083a5a7e25e0Ee5b94685091eb8d0A32DfF11D4) total supply isn't reduced, and the dead address is a holder of the token. How can I fix this?
Actually, I want to remove all minted tokens from my token's supply.
I'm afraid you have misunderstood the concept of burning coins. Burning does not destroy coins. It sends them to an address/wallet/account that can only receive but cannot send (or spend) them, making them effectively lost forever as this is recorded in the immutable ledger.
This means that the supply of tokens in circulation (those tokens that can still be used to make transactions) is reduced, but not the total supply. So actually, everything that happened in your case is completely expected.
Here is one among many internet resources that explains the concept of burning coins:
https://www.investopedia.com/tech/cryptocurrency-burning-can-it-manage-inflation/
I see that you used the regular transfer() method to send your tokens to the zero address (link).
Your contract implements the burn() function that effectively reduces the total supply as well.
Expanding on Marko's answer: in this particular case, you should use the burn() function instead of a regular transfer(). However, different token contracts might use different function names or not implement a burn mechanism at all; it all depends on the token contract's implementation.
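To make the distinction concrete, here is a minimal Python model (illustrative only, not your actual Solidity contract) of why a transfer to the dead address leaves total supply untouched while burn() reduces it:

```python
DEAD = "0x000000000000000000000000000000000000dead"

class Token:
    """Minimal ERC-20-style supply model for illustration."""
    def __init__(self, supply, owner):
        self.total_supply = supply
        self.balances = {owner: supply}

    def transfer(self, src, dst, amount):
        # Moving tokens, even to the dead address, never touches total_supply;
        # the dead address simply becomes another holder.
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def burn(self, src, amount):
        # burn() destroys the tokens AND shrinks total_supply.
        self.balances[src] -= amount
        self.total_supply -= amount

t = Token(1000, "me")
t.transfer("me", DEAD, 400)
print(t.total_supply)   # still 1000: dead address is just a holder
t.burn("me", 600)
print(t.total_supply)   # 400
```

This mirrors what you see on BscScan: the transfer only changed holder balances, so the reported total supply stayed the same.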

Are there mechanisms preventing DeviceCheck timestamps from acting as fingerprints?

iOS 11 adds DeviceCheck, a mechanism that allows app developers to store a small amount of data (2 bits, along with a timestamp) that stays on the device, surviving deletion of the app. This is meant to identify, for example, whether a user has already participated in a free trial, or serve similar purposes, while preventing unique identification (fingerprinting) of the user. Documentation link
My question is: couldn't this theoretically be abused by developers to store much more data (potentially uniquely fingerprinting the user) by using a unique timestamp? Is there any mechanism keeping developers from doing this? If not, this could be a significant privacy concern, defeating the point of this feature.
I could easily see malicious developers either storing the timestamp on their server to later uniquely identify the user, or simply waiting to store the data until a particular timestamp arrives and encoding data in (the lower few bits of) the timestamp itself. Is this an actual risk?
According to the documentation it looks like the last_update_time timestamp that Apple gives you is in the YYYY-MM format. If you have more than a handful of users it's probably safe to assume that month-level granularity on the timestamp isn't enough to uniquely identify any user.
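A quick back-of-the-envelope calculation shows how little identifying information a month-granularity timestamp can carry (the window size and user count below are arbitrary assumptions for illustration):

```python
# Over a 2-year window a YYYY-MM timestamp has only 24 possible values,
# so by the pigeonhole principle many users must share each value.
import math

months = 24            # distinct YYYY-MM values in a 2-year window
users = 1_000_000
bits = math.log2(months)
print(round(bits, 1))  # ~4.6 bits of "entropy" at best
print(users // months) # at least this many users share some month
```

A few bits per user is nowhere near the ~20+ bits needed to single out one user in a million, which is presumably why Apple coarsens the timestamp.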

FOREX live data feed provider - why so expensive?

I am developing an application for a customer where I would need to get live rates for EUR/USD, XAU/USD, etc.
After a quick Google search, I concluded that I won't be able to get what I am looking for. The few clean JSON data solutions are updated once per hour and allow around 100,000 requests per month (far fewer than the ~2.6M seconds in a month).
Surprisingly to me, even these "poor" services cost several hundred bucks per month.
I am not opening this just to complain, but to ask whether somebody could help me export data from the MetaTrader Terminal to XML or any other format; any broker provides that info, updated every second, on demo accounts without delays.
With that I would be able to create my own API and use the data in my application. If that works correctly, I am willing to create an endpoint open to anyone. It's a shame that in 2017 there are no free live feeds for at least the main markets; I am sure it would help many developers.
FOREX? Each data feed is Broker / LP-provider specific (i.e., different).
There is no "globally universal" XAUUSD stream; there are many streams, each valid under the respective Broker / LP-provider's Terms & Conditions.
While technically doable (re-publishing the received stream of events), some T&Cs explicitly and legally bind you not to re-distribute the data from any feed you receive from a FOREX trade execution venue.
Why so expensive?
A principal business understanding is needed here. If one produces some service (a Market Data Feed in this case), such an undertaking is to be justified by some exchange of value if another is willing to receive that product.
Except for some extremely altruistic exceptions (which are typically very well funded indirectly, from some other business domain, and thus do not require the Product/Service consumers to share and pay a fair portion of the accumulated costs of that Product/Service's creation + operations + maintenance), and except for some ultra-idealistic illusions (that did not manage to prove their ability to survive on their own in the real world),
there is no such thing as a free lunch (as Britons are cited to say).
So, given that some Product / Service represents a non-zero value, it is fair to pay some price to exchange one's funds for such a Product / Service, which allows the Consumer to benefit from the value received in exchange for her/his funds.
Meaning: if such a Product / Service is not considered absolutely useless (as a synthetic RNG or a demo account merely mimicking a real Market Data Feed might appear to be in the real world's FOREX Market Data Feed context), then there is a price to pay for it.
Surprised? Why?
If one pays nothing, why would one expect to receive anything more than the symmetrically equivalent value of that zero paid (a value of zero, or less, if costs are taken into account)?
Trying to exchange for zero typically brings nothing useful or reliable - by principle.
The world just works this way. There is no exception, no excuse. Even "non-profit" NPOs know very well that the real world works this way, and they share the price to pay in exchange for the externally provided goods and services they consume inside their NPO programmes.
You can open a demo or real account (it does not matter) at any MT4 broker and consume quotes natively using the MetaQuotes datafeed.
Do have a look for an example, published at: https://github.com/CPlugin/CPlugin.PlatformWrapper.MetaTrader4DataFeed
I am using rtfxd.com to get high-quality forex data at 1-second frequency. They cover over 40 currency pairs and the price is currently peanuts. You can purchase a login (currently only $3), connect via SSH, and start receiving data instantly.

DynamoDB auto incremented ID & server time (iOS SDK)

Is there an option in DynamoDB to store an auto-incremented ID as the primary key in tables? I also need to store the server time in tables as "created at" fields (e.g., user created at). But I can't find any way to get the server time from DynamoDB or any other AWS service.
Can you guys help me with:
Working with auto-incremented IDs in DynamoDB tables
Storing server time in tables for "created at"-like fields.
Thanks.
Actually, there are very few features in DynamoDB, and this is precisely its main strength: simplicity.
There is no way to automatically generate IDs or UUIDs.
There is no way to auto-generate a date.
For the "date" problem, it should be easy to generate it on the client side. May I suggest you use the ISO 8601 date format? It's both programmer- and computer-friendly.
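For example, generating a "created at" value client-side in Python (variable and field names here are illustrative, since DynamoDB has no server-side clock you can read):

```python
# Client-side "created at" timestamp in ISO 8601; UTC keeps values
# comparable across clients in different time zones.
from datetime import datetime, timezone

created_at = datetime.now(timezone.utc).isoformat(timespec="seconds")
print(created_at)  # e.g. 2014-05-18T12:34:56+00:00

# ISO 8601 strings sort lexicographically in chronological order,
# so they work directly as a DynamoDB range key.
earlier = "2014-05-18T12:34:56+00:00"
later = "2015-01-01T00:00:00+00:00"
print(earlier < later)  # True
```
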
Most of the time, there is a better way than using automatic IDs for items. This is often a bad habit carried over from the SQL or MongoDB world. For instance, an e-mail or a login makes a perfect ID for a user. But I know there are specific cases where IDs might be useful.
In those cases, you need to build your own system. In this SO answer and this article from the DynamoDB-Mapper documentation, I explain how to do it. I hope it helps.
Rather than working with auto-incremented IDs, consider working with GUIDs. You get higher theoretical throughput and better failure handling, and the only thing you lose is the natural time-order, which is better handled by dates.
Higher throughput because you don't need to ask Dynamo to generate the next available IDs (which would require some resource somewhere obtaining a lock, getting some numbers, and making sure nothing else gets those numbers). Better failure handling comes in when you lose your connection to Dynamo (Dynamo goes down, or your application is bursty and doing more work than the currently provisioned throughput allows). A write-only application can continue "working", generating data complete with IDs and queueing it up to be written to Dynamo, without ever worrying about ID collisions.
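A minimal sketch of the GUID approach in Python's standard library:

```python
# Client-side GUIDs require no coordination with DynamoDB, so writes can
# be queued offline without risking ID collisions.
import uuid

item_id = str(uuid.uuid4())  # 122 random bits in canonical form
print(len(item_id))          # 36 characters, e.g. '9f1c...-...-...'

# Collision odds are negligible: you would need to generate on the
# order of 2**61 IDs before a 50% chance of any collision.
ids = {str(uuid.uuid4()) for _ in range(1000)}
print(len(ids))              # 1000 - all distinct
```

Pair the GUID with a client-generated ISO 8601 date attribute if you still need time ordering.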
I've created a small web service just for this purpose. See this blog post, that explains how I'm using stateful.co with DynamoDB in order to simulate auto-increment functionality: http://www.yegor256.com/2014/05/18/cloud-autoincrement-counters.html
Basically, you register an atomic counter at stateful.co and increment it every time you need a new value, through RESTful API.

Gift Card/Debit Card Activation

General Problem
How do retail establishments constrain activation for gift cards, or those pre-paid phone/debit cards?
They must have a system in place that keeps you from calling in to activate cards that haven't been scanned through the register, and I assume there must be a standard solution built into the retail ERP/accounting systems. It probably involves web services or EDI.
Specific Problem
I ask all this because one of my clients wants me to develop a product that you get into by purchasing a $30 card at a retail store. The card has a unique number on it. Once you've purchased a card and activated it via a web site, coupons for restaurants and so on are emailed to you periodically.
However, if someone were to steal a bunch of cards or figure out the numbering sequence, we don't want the cards to work.
Presumably, this is a solved problem because retailers are doing this with the products above (pre-paid phone cards, etc).
I can think of a number of ways to solve this problem, however I need to provide the "standard" solution that the retailers expect, so that the product will snap into their infrastructure in the normal way.
Thanks a lot!
I've worked on a few of these types of systems, and they all basically work the same way. The card has a # encoded in the magnetic stripe (it could also be a barcode); that's usually all that's on the card itself. Cards are then activated at the time of purchase.
Here's the basic flow:
Customer comes in and purchases a card:
The card is swiped and/or scanned.
A call is made to an on-line system (usually via some type of webservice call). It includes the card #, the amount being activated, maybe a bit of additional information (e.g., an invoice #), and possibly something like the previous transaction #.
If the call is successful, you get back a transaction ID #.
If the call fails, there is usually some protocol you are supposed to follow (sometimes handled during the daily settlement process), such as retrying the activation or running a query to determine whether the last transaction went through.
If it was successful, the card is now active.
So basically, the card is worthless until it's activated. At that point it becomes "live" and has money associated with it. That is, back on some server somewhere is a database that has this card #, when/where it was activated, amounts, etc.
There is usually some functionality to generate an "end of day" transaction report to help you reconcile your numbers (what your system says vs what they have recorded).
Since cards are centrally managed it becomes easy for them to flag cards if they were stolen (not that it matters since they have $0 value until they have been activated).
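The flow above can be sketched as a toy in-memory model (Python; class and field names are hypothetical, not any real processor's API):

```python
# Toy model of the activation flow: a card number is worthless until
# the register's webservice call marks it active.
class CardServer:
    def __init__(self, card_numbers):
        # Cards exist in the database but carry no value yet.
        self.cards = {n: {"active": False, "balance": 0} for n in card_numbers}
        self.next_txn = 1

    def activate(self, number, amount, invoice):
        card = self.cards.get(number)
        if card is None or card["active"]:
            return None                       # failure: follow retry protocol
        card["active"] = True
        card["balance"] = amount
        txn, self.next_txn = self.next_txn, self.next_txn + 1
        return txn                            # transaction ID for the register

    def redeem(self, number, amount):
        card = self.cards.get(number)
        # A stolen, never-activated card has $0 value and is refused here.
        if card is None or not card["active"] or card["balance"] < amount:
            return False
        card["balance"] -= amount
        return True

srv = CardServer(["1111", "2222"])
print(srv.redeem("2222", 10))             # False: unactivated (stolen?) card
print(srv.activate("1111", 30, "inv-7"))  # 1 (transaction ID)
print(srv.redeem("1111", 10))             # True
```

The end-of-day report mentioned above would then just be a dump of the activations and redemptions recorded by this central server.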
I found out through other sources that there are about eight card processing services that integrate with the various retail locations.
Each retail location uses one. When a card is scanned through the register, the retailer notifies the card processing service (unlocking the PIN so that the card can be activated), and then presumably the card processing service notifies us via an API call.
Then, when the customer goes to activate their card, we can tell which ones have been scanned through the register (because they are unlocked). In this way, we get around the problems of stolen cards or guessed PINs.
The names of a few of these networks are:
Blackhawk Networks
InCom
Coin Star
I had the joy of working on one of these systems right out of college. Depending on how they handle their processing, whether end-of-day batch or weekly report, quite a lot of problems could arise. One of the things I saw: if the person who had the card, whether legit or not, managed to make a bunch of purchases that were each less than the day's starting balance, but in total greater than the starting balance by the end of the day, all the purchases would go through. Not very fun when the company had to swallow upwards of 100 dollars per user a day.
In terms of security, make the company that you interface with responsible for purchases. That is the best way of handling this from what I have seen, because that is what they are there for. Hope that helps in some roundabout way.
You have to be careful. I have seen this play out in retail: a customer in front of you is asked by the cashier if they have a loyalty card. The customer says no, but the customer behind them offers their card and it gets swiped, collecting points for something they did not buy.
That customer gets points at the expense of the other, and the purchase skews/distorts the recorded shopping history of the actual cardholder in the system's database.
In short, there is no foolproof way around that other than asking for a retina scan or fingerprint that identifies the customer. Some customers would be cautious about joining a club with regard to their privacy; that is another thing to keep in mind. Not all of them will have a loyalty card.
Hope this helps,
Best regards,
Tom.
I believe that the most secure solution possible is to have a server that generates and prints (or exports) the card numbers. When a customer purchases a gift card, it is scanned at the register, and the register notifies the server that the card has been approved (probably with the credentials of the cashier).
Then when input on your website, the website checks with the card server to see if the card number is valid and approved.
Then, stolen cards are never approved. If someone figures out the numbering scheme, you may be screwed, so it is recommended that the numbers be random, with enough digits to make guessing unreasonable (perhaps combined with something similar to a CV2 code).
This is similar to how debit cards work: card number/CV2 generated ("server") -> shipped to customer -> customer activates via phone ("register", with the "credentials" being their SSN or similar) -> customer then uses at a store and the store contacts the card company's server
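A hedged sketch of such number generation in Python, using the standard secrets module (the digit counts are illustrative choices, not any card-network standard):

```python
# Unguessable card numbers: enough random digits that probing the
# keyspace is infeasible, plus a short code playing the CV2 role.
import secrets

def new_card():
    number = "".join(secrets.choice("0123456789") for _ in range(16))
    cv2 = "".join(secrets.choice("0123456789") for _ in range(3))
    return number, cv2

number, cv2 = new_card()
print(len(number), len(cv2))  # 16 3
```

With 16 random digits there are 10**16 possible numbers; even with a million cards issued, an attacker probing a million numbers a day would expect roughly 10,000 days per hit, long before the rate-limiting mentioned below kicks in.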
I know that Intuit Quickbooks Point of Sale offers a service like this (complete with an API), you could look them up.
I like HalfBrain's solution. I'd also imagine they have certain security measures in mind, such as flagging a single IP address (or some other criterion) with more than some number of failed activation attempts as apparently probing the system.
