How to grab the token count in Solana Web3

Hey, I want to fetch the token amount. How do I do that? I have reached this point but want to hold just the value.
I used this so far: console.log("Signature: ", hash.meta.postTokenBalances);

It's up to you what you need. Let's use SOL as an example, which has 9 decimal places, such that 1 SOL = 1_000_000_000 lamports.
amount is the total amount of lamports, so 90_000_000_000 in your example
uiAmountString is the formatted amount of SOL, so 90.000000000 in your example
If you need to do calculations, you should hold on to amount to do the math, and decimals to display it. If you only need to display, then uiAmountString will be easiest.
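If you just need the value itself rather than the whole postTokenBalances array, here is a minimal sketch, assuming the transaction is fetched with connection.getParsedTransaction; SIGNATURE and MY_MINT are placeholders for your own transaction signature and mint address.
import { Connection, clusterApiUrl } from "@solana/web3.js";
// Minimal sketch: fetch a transaction and pull one token amount out of
// meta.postTokenBalances. SIGNATURE and MY_MINT are placeholders.
const connection = new Connection(clusterApiUrl("mainnet-beta"));
const SIGNATURE = "..."; // your transaction signature
const MY_MINT = "...";   // the mint address you care about
async function logTokenAmount(): Promise<void> {
  const tx = await connection.getParsedTransaction(SIGNATURE, {
    maxSupportedTransactionVersion: 0,
  });
  const balances = tx?.meta?.postTokenBalances ?? [];
  const entry = balances.find((b) => b.mint === MY_MINT);
  if (!entry) return;
  const { amount, decimals, uiAmountString } = entry.uiTokenAmount;
  console.log("raw amount (smallest units):", amount); // e.g. "90000000000"
  console.log("decimals:", decimals);                  // e.g. 9
  console.log("display amount:", uiAmountString);      // e.g. "90"
}
logTokenAmount();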

Related

Needing formula for a compounding number multiplied by a value (not annually)

Looking for very simple compounding math.
I have a number, for example 5000. This number increases by a percentage; for simplicity's sake, let's say it increases by 100%, and it does that 3 times. The final result should be 40000: 5000*2, then *2, then *2.
The question is, how do I make this happen with math in a spreadsheet, preferably Google Sheets? Something I can use variables in for the percentage and the number of times it increases.
This is not for annual compounding interest or any of that. I just need plain and simple compounding numbers.
Most likely you seek something as simple as:
=A22*2^3
which could be also written as:
=A22*2*2*2
in terms of a percentage it would be:
=A22*(1+B22)^3
where B22 holds the percentage increase (100% gives a factor of 2 per step).
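If you want to sanity-check the spreadsheet result outside Sheets, here is a small sketch of the same arithmetic; the function name and argument layout are my own, not from the question.
// Compound a starting value by a percentage increase over a number of periods.
// compound(5000, 1.0, 3) === 40000, matching the example in the question.
function compound(start: number, pctIncrease: number, periods: number): number {
  return start * Math.pow(1 + pctIncrease, periods);
}
console.log(compound(5000, 1.0, 3)); // 40000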

What should I do to maintain the performance of a mobile app that uses a database?

I'm building an app that uses a database.
I have a words table, and every time the user types something, the app records the word and updates the database.
The frequency field is automatically incremented whenever the user enters a matching word.
The trouble is that the user types day after day, and I'm afraid search performance will degrade over time and that the Int frequency field will someday reach its maximum value.
So I limit the database to fewer than about 50,000 records.
I delete less-used records after a certain time.
But I don't know how to deal with the frequency Int field of each word.
How can I track the usage frequency of each word accurately without the field growing forever?
I recommend that you use a logarithmic scale for the frequency values. That's what is often done in situations like this. See Wikipedia to learn about logarithmic scales.
For example, if you have a word MAN that has a frequency of 15, the value you store in the database would be log(15) ~= 1.17609125906.
If you then find 4 new occurrences of MAN, you want the count to grow by 4. You cannot simply add 4 to the stored value, because the field holds a logarithm: log(x)+log(y)=log(x*y), not log(x+y). (See the Logarithm Rules section of this article for more information on log rules.)
Instead -- assuming you use a base 10 logarithm, you would use this formula:
SET frequency = log(10^frequency+4)
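A sketch of the same update outside SQL, assuming a base-10 logarithm (the helper name is mine):
// Sketch of the log-scale update described above: the stored value is
// log10 of the true count, so to record k new occurrences we convert back
// to a count, add k, and take the log again.
function addOccurrences(storedLogFreq: number, k: number): number {
  return Math.log10(Math.pow(10, storedLogFreq) + k);
}
let freq = Math.log10(15);      // word seen 15 times -> ~1.1761
freq = addOccurrences(freq, 4); // 4 new occurrences -> log10(19) ~ 1.2788
console.log(freq);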
Compared with the storage for the words themselves, the few bytes for the frequency hardly matter. With an unsigned four-byte integer you can count up to more than four billion, which is far more words than the user could type in their whole lifespan.
You may want to go for two or three bytes instead, but the savings would be negligible.
Anyway, there are the following approaches for preventing overflow:
You can detect it, undo the operations, scale everything down by some factor of two, and then redo them.
You can periodically check all your numbers and do the scaling when approaching the limit.
You can do a probabilistic update like below.
Probabilistic update
Instead of simply incrementing the frequency by one every time, you increment it only with a probability that gets lower and lower as the counter grows. For example, you can do the increment with a probability of 1.0 / (oldValue + 1) or 2 ** -oldValue. The latter leads to logarithmic growth but, unlike the idea in the other answer, it works.
There are obviously some disadvantages due to the randomness and precision loss, but when all you care about is the relative frequency, it should be good enough.
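A minimal sketch of such a probabilistic counter (a Morris-style counter with p = 2^-oldValue); the names are illustrative:
// Sketch of a probabilistic (Morris-style) counter as described above.
// With p = 2^-value the stored value grows roughly like log2 of the true
// count, so even a small integer field will effectively never overflow.
function probabilisticIncrement(value: number): number {
  const p = Math.pow(2, -value); // probability of actually incrementing
  return Math.random() < p ? value + 1 : value;
}
// Estimated true count for a stored value (the expected count is 2^v - 1).
function estimateCount(value: number): number {
  return Math.pow(2, value) - 1;
}
let v = 0;
for (let i = 0; i < 1_000_000; i++) v = probabilisticIncrement(v);
console.log(v, "~", estimateCount(v)); // v around 20, estimate around 1M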

Interpreting results from contiki's powertrace app

I am using Contiki's powertrace app (which in turn uses Energest) to get power consumption. The formula I came across is: power = (rxon * RXi * Vcc) / (cpu + lpm),
where rxon, cpu and lpm are obtained from powertrace (i.e. the times the mote spends in those states), and RXi (current) and Vcc (voltage) come from the datasheet.
My question: if I want the total current consumption instead, do I just remove Vcc, or do I need to remove Vcc and also divide the whole thing by RTIMER_ARCH_SECOND? I read somewhere that powertrace reports times in rtimer ticks.
Thank you,
Avijit
If your formula is the calculation of the average total power consumption, where (cpu + lpm) is the whole period, then you do not have to convert the time values into real seconds. The formula is a ratio: if you divide the numerator by RTIMER_ARCH_SECOND, you have to divide the denominator equally, which brings you to the same result.
The following link explains in detail and with examples how to use powertrace. It provides the formula that you need:
http://thingschat.blogspot.de/2015/04/contiki-os-using-powertrace-and.html
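As a sanity check of that cancellation argument, here is a small sketch with made-up tick counts and datasheet values (all numbers are placeholders, not measurements from a real mote):
// Average radio-receive current over a powertrace interval. rxon, cpu and
// lpm are tick counts straight from powertrace; because the expression is a
// ratio of times, RTIMER_ARCH_SECOND cancels and the ticks never need to be
// converted to seconds.
const RTIMER_ARCH_SECOND = 32768; // typical value; check your platform
const rxon = 6_000;   // ticks with the receiver on (placeholder)
const cpu = 20_000;   // ticks in active CPU mode (placeholder)
const lpm = 140_000;  // ticks in low-power mode (placeholder)
const RXi = 0.0188;   // receive current from the datasheet, in amps (placeholder)
const Vcc = 3.0;      // supply voltage, in volts (placeholder)
const avgCurrent = (rxon * RXi) / (cpu + lpm);     // amps, with Vcc dropped
const avgPower = (rxon * RXi * Vcc) / (cpu + lpm); // watts, the original formula
// Same current computed with everything converted to seconds -- identical result.
const avgCurrentSeconds =
  ((rxon / RTIMER_ARCH_SECOND) * RXi) / ((cpu + lpm) / RTIMER_ARCH_SECOND);
console.log(avgCurrent, avgCurrentSeconds, avgPower); // current ~0.000705 A either way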

Average numbers in the format "Minutes:seconds.fractions_of_seconds" in Google Docs

I've tried doing a custom format, but haven't had any luck; I always get a divide-by-zero error. Ideally, I'd be able to average "1:00.0, 1:30.00, and 2:00.00" and have the function return "90.0" or "1:30.0".
Can someone help me convert the times into values that the average function can understand?
This is in Google Sheets
Assuming 1:00.0 is in A1, 1:30.00 in A2, etc. A value mixing : and . like this seems not to be accepted as a number format for data entry, so your data is presumably stored as strings. These first need to be converted to numbers, which might be achieved with something like:
=ArrayFormula(value("0:"&A1:A100))
in B1. It not only assumes you have 100 rows of data but also that your data points do not individually equal or exceed one hour unless expressed as 60 minutes or more.
You may want to format the results with Format – Number, More Formats, Custom number format…:
mm:ss.000
though in this format hours are not shown.
In that format the result of:
=average(B1:B3)
should be:
01:30.000
Might not work in old Google Sheets.
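If you ever need the same conversion outside Sheets, here is a rough sketch of the parsing logic; the helper name and the hard-coded sample values are mine, not part of the answer above.
// Parse "m:ss.fff" strings (e.g. "1:30.00") into seconds and average them.
function toSeconds(time: string): number {
  const [minutes, seconds] = time.split(":");
  return Number(minutes) * 60 + Number(seconds);
}
const times = ["1:00.0", "1:30.00", "2:00.00"];
const avg = times.map(toSeconds).reduce((a, b) => a + b, 0) / times.length;
console.log(avg); // 90 seconds, i.e. 1:30.0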

Lookup table size reduction

I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store that amount of data in memory; the data has to live on an embedded system, so I am very limited in space. I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers I mean 32-bit values.
Basically the idea is to use some compression method to reduce the amount of memory without losing much precision. This needs to run in hardware, so the computational overhead cannot be very high.
In my algorithm I have to access one value of the table, do some operations with it, and afterwards update the value. In the end what I need is a function I can pass an index to get a value back, and another function to write a value into the table.
I found one method called tile coding, which is based on several lookup tables. Does anyone know of any other methods?
Thanks.
I'd look at the types of numbers you need to store and pull out the information that's common for many of them. For example, if they're tightly clustered, you can take the mean, store it, and store the offsets. The offsets will have fewer bits than the original numbers. Or, if they're more or less uniformly distributed, you can store the first number and then store the offset to the next number.
It would help to know what your key is to look up the numbers.
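As a rough sketch of the clustered case (a shared base value plus small offsets): the class name and the choice of 16-bit offsets below are assumptions for illustration, not requirements.
// Store tightly clustered 32-bit values as a shared base plus small offsets.
// Here the offsets live in an Int16Array, halving the storage; widen or
// narrow the offset type to match how spread out the data really is.
class OffsetTable {
  private base: number;
  private offsets: Int16Array;
  constructor(values: number[]) {
    this.base = Math.round(values.reduce((a, b) => a + b, 0) / values.length);
    this.offsets = new Int16Array(values.map((v) => v - this.base));
  }
  read(index: number): number {
    return this.base + this.offsets[index];
  }
  write(index: number, value: number): void {
    this.offsets[index] = value - this.base; // assumes it still fits in 16 bits
  }
}
const table = new OffsetTable([100_000, 100_017, 99_950, 100_003]);
console.log(table.read(1)); // 100017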
I need more detail on the problem. If you cannot store the real value of the integers but only an approximation, that means you are going to throw away some of the data (detail), correct? I think you are looking for a hash, which can be an art form in itself. For example, say you have 32-bit values: one hash would be to take the 4 bytes and xor them together, resulting in a single 8-bit value, reducing your storage by a factor of 4 but also losing the real value of the original data. Typically you could go further and only use a few of those 8 bits, say the lower 4, to reduce the size further.
I think the real question is that either you need the data or you don't: if you need it, you have to compress it or find more memory to store it; if you don't, use a hash of some sort to reduce the number of bits until you fit in the amount of memory you have for storage.
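A small sketch of the xor-fold idea described above (lossy by design):
// Fold a 32-bit value down to 8 bits by xor-ing its four bytes together.
// Lossy: many different inputs map to the same 8-bit result.
function xorFold8(n: number): number {
  return (n & 0xff) ^ ((n >>> 8) & 0xff) ^ ((n >>> 16) & 0xff) ^ ((n >>> 24) & 0xff);
}
console.log(xorFold8(0x12345678)); // 0x12 ^ 0x34 ^ 0x56 ^ 0x78 = 8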
Read http://www.cs.ualberta.ca/~sutton/RL-FAQ.html
"Function approximation" refers to the
use of a parameterized functional form
to represent the value function
(and/or the policy), as opposed to a
simple table."
Perhaps that applies. Also, update your question with additional facts -- don't merely answer in the comments.
Edit.
A bit array can easily store a bit for each of your millions of numbers. Let's say you have numbers in the range of 1 to 8 million. In a single megabyte of storage you can have a 1 bit for each number in your set and a 0 for each number not in your set.
If you have numbers in the range of 1 to 32 million, you'll require 4 MB of memory for a big table covering all 32M distinct numbers.
See my answer to Modern, high performance bloom filter in Python? for a Python implementation of a bit array of unlimited size.
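Along the same lines, a minimal bit-array sketch (written from scratch here in TypeScript rather than taken from that Python answer):
// A fixed-size bit array: one bit per possible value, so membership for
// numbers in the range 0..8,000,000 fits in about 1 MB.
class BitArray {
  private words: Uint32Array;
  constructor(size: number) {
    this.words = new Uint32Array(Math.ceil(size / 32));
  }
  set(n: number): void {
    this.words[n >>> 5] |= 1 << (n & 31);
  }
  has(n: number): boolean {
    return (this.words[n >>> 5] & (1 << (n & 31))) !== 0;
  }
}
const present = new BitArray(8_000_000);
present.set(123_456);
console.log(present.has(123_456), present.has(123_457)); // true false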
If you are merely looking for the presence of the number in question, a Bloom filter might be what you are looking for. Honestly, though, your question is fairly vague and confusing. It would help to explain what the Q values are and what you do with them once you find them in the table.
If your set of integers is homogeneous, then you could try a hash table, because there is a trick you can use to cut the size of the stored integers, in your case, in half.
Assume the integer n, because its set is homogeneous, can serve as its own hash. Assume you have 0x10000 (65,536) buckets. Each bucket index is iBucket = n & 0xFFFF. Each item in a bucket then only needs to store the remaining 16 bits, since the low 16 bits are implied by the bucket index. The other thing you have to do to keep the data small is to store a count of the items in each bucket and use an array to hold them; a linked list would be too large and slow. When you iterate the array looking for a match, remember you only need to compare the 16 bits that are stored.
So assume a bucket is a pointer to the array plus a count. On a 32-bit system this is 64 bits at most. If the number of ints were small enough, we might be able to do some fancy things and use 32 bits per bucket. 65,536 buckets * 8 bytes = 512 KB, and 2 million 16-bit entries = 4 MB. So this gets you a way to look up the ints with roughly 40% compression.
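A sketch of that bucket scheme: the low 16 bits select the bucket, so each entry only has to store the high 16 bits. In a real embedded implementation the buckets would be packed 16-bit arrays; plain arrays are used here only to show the indexing.
// 65,536 buckets keyed by the low 16 bits of each value; a bucket stores
// only the high 16 bits of its members, roughly halving the per-item cost.
class BucketedIntSet {
  private buckets: number[][] = Array.from({ length: 0x10000 }, (): number[] => []);
  add(n: number): void {
    const bucket = this.buckets[n & 0xffff];
    const high = n >>> 16;
    if (!bucket.includes(high)) bucket.push(high);
  }
  has(n: number): boolean {
    return this.buckets[n & 0xffff].includes(n >>> 16);
  }
}
const ints = new BucketedIntSet();
ints.add(0x12345678);
console.log(ints.has(0x12345678), ints.has(0x12340000)); // true false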

Resources