Right way to get Ethereum ERC-20 token information

Can somebody tell me the right way to get the info (decimals count, name, and symbol) of an ERC-20 token from the Ethereum blockchain?
I assumed it could be done by calling the appropriate functions decimals(), name(), and symbol() on the token's contract address via its ABI (for example, using the web3.js library). In many cases this works, but unfortunately not in all of them. For example, there's a token at this address:
0xb5a5f22694352c15b00323844ad545abb2b11028
If we read the contract info for this token on Etherscan's page, there is no information in the contract's public variables name, symbol, and decimals:
https://etherscan.io/address/0xb5a5f22694352c15b00323844ad545abb2b11028#readContract
But Etherscan knows the name and symbol of this token (ICON (ICX)). Moreover, there's another website that can retrieve the decimals count for this token:
https://api.ethplorer.io/getTokenInfo/0xb5a5f22694352c15b00323844ad545abb2b11028?apiKey=freekey
So the question is: is there any universal way to get the decimals, name, and symbol of a given token (e.g. 0xb5a5f22694352c15b00323844ad545abb2b11028) via blockchain calls, or not? And how do Etherscan and ethplorer.io manage to get this information?
Hope somebody can help with my question. Thanks in advance!

The ERC-20 token standard says that those three methods (name(), symbol(), decimals()) are optional, which means you can't expect every token contract to implement them. You will need workarounds for contracts that are not fully compatible with the standard.
For https://etherscan.io/address/0xb5a5f22694352c15b00323844ad545abb2b11028#readContract, if you look at the verified source code, the contract is declared as contract IcxToken; that's at least one way to identify it.
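If it helps, here is a minimal web3.js sketch of the straightforward path plus a fallback for non-compliant tokens (the RPC endpoint is a placeholder, and catching the failed calls is just one possible workaround):

```typescript
import Web3 from "web3";

// Minimal ABI for the optional ERC-20 metadata functions.
const erc20MetadataAbi: any[] = [
  { constant: true, inputs: [], name: "name", outputs: [{ name: "", type: "string" }], type: "function" },
  { constant: true, inputs: [], name: "symbol", outputs: [{ name: "", type: "string" }], type: "function" },
  { constant: true, inputs: [], name: "decimals", outputs: [{ name: "", type: "uint8" }], type: "function" },
];

// Placeholder endpoint: use your own mainnet RPC provider.
const web3 = new Web3("https://mainnet.example-rpc-provider.io");

async function getTokenInfo(tokenAddress: string) {
  const token = new web3.eth.Contract(erc20MetadataAbi, tokenAddress);
  // Each call may revert (or fail to decode) on tokens that don't implement
  // the optional metadata functions, so fall back to null instead of throwing.
  const name = await token.methods.name().call().catch(() => null);
  const symbol = await token.methods.symbol().call().catch(() => null);
  const decimals = await token.methods.decimals().call().catch(() => null);
  return { name, symbol, decimals };
}

getTokenInfo("0xb5a5f22694352c15b00323844ad545abb2b11028").then(console.log);
```

Some older tokens also return bytes32 instead of string for name and symbol; for those you would need a second ABI variant or an off-chain source, which is presumably what sites like Etherscan and Ethplorer fall back to (curated token lists).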


Problem making a BSC token (Low level interactions)

I made a token in Remix but get an error under Low level interactions:
(Both 'receive' and 'fallback' functions are not defined)
Please help me, thanks.
In the "Deploy & Run Transactions" tab, there's a selectbox allowing you to chose which of the contracts you want to deploy as the main contract. It's sorted alphabetically, so the Address (one of the contracts listed in the linked .sol file) comes as the first and default option.
You most likely wanted to deploy the COMF contract instead.
The Address contract is a helper library. It doesn't have any public or external functions, so the list of available functions to call is empty.
It also doesn't have the receive() and fallback() functions, so the low-level call with empty payload fails with the "Both 'receive' and 'fallback' functions are not defined" message.

STRIPE integration

I'm using the REST Debugger as a first try at working with the Stripe API. I can log in and perform some basic creation and listing tasks.
When creating a customer, the (postal) address elements are described as parameters of type 'dictionary'. The docs (https://stripe.com/docs/api/customers/create) refer to them as 'child parameters' with the notation address.line1, address.city, etc. I'm lost as to what this means in terms of Delphi-friendly syntax. Anyone got any clues to move me forward? Many thanks.
I took a look at the PHP client sources, and I found that the parameters were flattened with a key[subkey] scheme before being sent. So you should use the following parameter names for the address:
address[line1]
address[city]
address[country]
...
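To illustrate the flattening (not Delphi, but the wire format is the same), here is a TypeScript sketch that posts a form-encoded body with bracketed keys to the customers endpoint; the secret key and address values are placeholders:

```typescript
// Sketch: Stripe expects application/x-www-form-urlencoded bodies, with
// nested "dictionary" parameters flattened into key[subkey] names.
async function createCustomer(): Promise<void> {
  const body = new URLSearchParams({
    name: "Tom Foolery",                 // placeholder values
    "address[line1]": "123 Main St",
    "address[city]": "Springfield",
    "address[country]": "US",
  });

  const response = await fetch("https://api.stripe.com/v1/customers", {
    method: "POST",
    headers: {
      Authorization: "Bearer sk_test_your_key_here", // placeholder secret key
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body,
  });

  console.log(await response.json());
}

createCustomer();
```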

Decoding YouTube's error 500 page

This is YouTube's 500 page. Can anyone help decode this information?
500 Internal Server Error
Sorry, something went wrong.
A team of highly trained monkeys has been dispatched to deal with this situation.
If you see them, show them this information:
AB38WENgKfeJmWntJ8Y0Ckbfab0Fl4qsDIZv_fMhQIwCCsncG2kQVxdnam35
TrYeV6KqrEUJ4wyRq_JW8uFD_Tqp-jEO82tkDfwmnZwZUeIf1xBdHS_bYDi7
6Qh09C567MH_nUR0v93TVBYYKv9tHiJpRfbxMwXTUidN9m9q3sRoDI559_Uw
FVzGhjH5-Rd1GPLDrEkjcIaN_C3xZW80hy0VbJM3UI5EKohX35gZNK2aNi_8
Toi9z3L8lzpFTvz5GyHygFFBFEJpoRRJSu3CbH5S2OxXEVo4HgaaBTV7Dx_1
Zs1HZvPqhPIvXg9ifd4KZJiUJDFS8grPLE7bypFsRamyZw-OCVyUHsGQKBwu
77pTtRwpF3hOxYLxM4KnAyiY1N6yrASSWyaeumRDENAoEEe8i8MRxzifqHuR
leatvNMiwsg1pbSl7IIiaKljZaD9UkRms4Kvz1uYUNk4AwXnJ9-Wq44ufMPl
syiHp_LwaeqyuxXykJMl-SA9p05VrJc4kCETUW3Ybp0yTYvVrqggo56A0ofC
OiyAmifQA9pdYVGeumrQtbFlFyDyG9VKNpzn5lqutxFZPsS8xjiILfF3bETD
H4aUb5fT4iERFsEL7S-ClsXiA4yAJdAcNH-OhGg9ipAaIxRRTOR5P1MYx6s6
-OrqgpT5VEaEx2hMpS1afaMd2_F21sxvcz2d8sCpEceHHSfsntTth6talYeD
4l63aUTbbCKV1lHxKWxdUjACFKRobeAvIpcJPcdHSN3CNQI-LlIWIx9jeyBU
tDcL6S6GpRG_Z2of9fmw0LHpVU5hKlQ3lCPd4pVP6J02yrsBi0S9OLoE9jmM
T2FfCvU1sWUCsrZu4-UPflXMyRnFK8aN8DYiwWWE8OvnLQ-LIaRDhjp29u9a
LT6Lh4KxEmWF5XeZTwrzJxtuDLVomxVD5mpwFvK0YSoaz9dnPGXb0Fm2txSL
BvGssSrWBJ4FeR6eEEkd_UkQ-aUnPv2W-POox17n54wzTwLugYjslRenMzmk
I4_jlXcx9NpKmUg7Pa0qJuaElt-ZymPv6h0cXRUlyZtS0iT9-CQOHWLYMi3I
kKrYa6bKUCAj058JEderSnbXqGEMvwBeZ_xgJpAjJiSgMOxJPokhbS6ezIv4
1JNr_dvQyvu4vh-YQNZ37fNTqQcoDZtYflBsJjuGrJlmIcqBYufB9g6nUaOE
xPAKjPdvZ_z1Rn_8sWVf8NHNBBKGe5lgDgBxypsV0kIwVa9QOlehivOaieBI
tmqHNdQIfdob0XUTEBPSeLj9hmw3Bqplc3gqUfFhIvpHml6dOTbjBhfkq0TE
5yCRHL2VSe2Xt9_i8SPQA2yCtJVO8HP6pnohmxqlBWSTE8Xj87PI6quX7f9i
0W6PdtkMYaGJsd_Ly_4Ag-KmGNHN585tF9eC5HeQ8Gz-vHZWOUiM4OQAG9UA
31ENOAjHtYb--ketbUcdX_FdjGiPtI_GxYeBqEShICotcd-S-E3bEGO-77M2
CuUUdB1AUYVDZR81XejVG5kSWsrz-p1qZ-6sSpSHCp114C6PheQPCwRHEr_1
AS-DkZfIuZ-w8XAo6pHIwvnv0dORSo-hPFgw1rw2VE4aKsgeMc7ZoPUxby1d
Zr-o-0X4ZMgxoQHw_Ub27rTTHxS5Czt_vgBPq7k5OK5dm6b7JCs6Dbn2dsIA
AakPL26t4smr8IiPAnqNC2sn7vxSiAe9mTJ670eNc6C9dCSGwqzqSURiLHmT
kFyLhNSOdipttECmSSA1qh_E0K4LUhiOq7MFDEzg9CLD8kuJrqpEGgltYpD-
8lk7KEpyjMqbWFs-qeD8uJpsVfY2ac1C67OmyGzkERVoC245-YXuNCP8KUZH
LGzRm9jXwUP_piDETX0N5xj34VOCfUTffT1WlWHmB9WRPhwjIsYYy_kgR-uT
kIEDQ23NVUEGgDoryl-ymysIfwifjq-lPB3e85dz1PajNxawsCrKNeR_4hhq
zE_E4ete1EgXeAYoeH4UIgrPGXDD-KfoNoB6viNs0GzNU9czD9Avr-tDtARO
HBLSLIVRYq8caMA-jvpplTOMoDdmUMUWytf4Y_F5tKTpNtLPaAe1py1IgZBl
lfAGY9L_k5slelh_9gUBEkURxS2oMGf2gdSeDdRBxKKx5tF1b-cuMLK6JYZJ
vbGFYSsSENOkHrHEo9NdTwTi7NON9ZgRJgh7OaENK4TFCXrhKc4C6cyJs-V_
HZ0Q-B8XDyjL0qudg_0rJbjTNpNZajT_1WGsnhsTTAgMCGtTsj1T8vNx2LuX
lPQV30nUKpukdCP3zuiE9_aeJQ-nzf3dMQ-KnZU5APmGcIP_u2be6blieMWH
qVax1asKmuIjslh49ceM6lRt3Ia2bHUB8b1TMSjU4I79KPqc3clDnD8quNnU
cRkgfJ_8LCEoH7jml_2TNV0fLuH_9IOXF3jKjhT9K5f-e5N06GmPQLzdqzeQ
MnEtHuDcf4IizyKnB5GUXoNfQxbScQEzztQ_nHMYfF-E8KqoxxlK-Z0wfEDv
dJpL3mcNfFu_vz-_LJ0oI4dE0-vthsxbpTxVQkdI0E5XSi4nYfLqhXompk4j
gpxcHBjsXVbWcnelWhhQP15gCApj6Gz_ddRtk_uxiyiqZ44oUUDcl1KeWMTf
yhKDj7jgGNzTOkUsXZRPb9M77-ZYPuL2wR68E3b9PC_mS6HBHiUxQ7pXvkwS
Bi2CoFgd9SqBXk2O5I_BPaEoA8Aorazw4OvDrmTQrCk4OkGPKRukE4Ci2RMq
TZIYbBz-v3QxmOKHJoMXPNOfj93TRWpmlAd6iHCH6BVlSdfgfjdbHeD0b0ct
qXC_-S5fr1XFBuaZwaUTrBPxU-3IxWLp-dx7wpKcFykqKnByYpkzR3twKEXc
z--CZV79Qk3ZTMY9ATia4HbyhoAqY_hV9GKAHQdU_C-9qwYt0rliNUcizlBc
RHcwzoMyx30ciwbE8e9QsEH_AMa3E2ezuhjTqQlAG33_Gy5Bwe7fj6zNR0ud
jjpcNVf-wprWHEYxMcKwjCQvEHBtv6TnCHkgi_AOtPzzm3aYkMc_ysdNAnI7
DE_T9S5Mkcs6VdT2DWgUN_UC-oAw27xej0aTIn0GckXPDcLBgvrUhUPU3FRn
lW65syvFvxFmBOiCAEHD1q6Is1XhIf8vE6y0FMdNEWSMUW5rQG8f3KP_pjqG
XlUHnGrQPQysylrczHOj4E3WTT918xg2vrXVraDJbCYnJpaWp8m94iqZw2gJ
I_0UWOAZJMCYWz5jYf2DanCOBaGeZIO-UsWorP6YV3yHehirZ_Lc6KtaUorb
bm_BnnCqGVZypL4k6cDy-4GyO2GNohXzN-VWqbAIUQIat9w6RsDpzpS2DIap
96aMBDg24D73RhFTEgCSunPpbbGrDVU3GFkuTGFFBQWNfAU_F22XtoPr_ZB0
1zZPrBVXrEhebvrzp0Z31_sIT8zLop_oaSRvykbyJKKxucARfPee-d0xlgWN
WwKKtl49WVMhhu0OfDScH3knAVdv0LDAyt1fo3WF8jxdp7J9Hn3OWF3rcn0p
zw0gt6YV_6FRy_UbZmpLBvEhZQKUfKuxp6LK-SfHOilT29ERg9LJZhnyluTV
HELRtkJIcHzvphXupCIaIgZispYxHNSmAfze2cshWBYizGTBSKXWgrJeo7Q6
kEjim72yKaJ8JaLzMQFPxtQxhtvHRw94dCuXcajg3nE_r_9t7D8RicqF-CVV
tvp-rHMPhhizlgfixHWXrPB7reTtftT64pOSl5vUop8gTlbeW5Kg7WQAPNfp
zUH8YcAo0xDLJHA-FgTM4mYGih41rKaKKteWRFGU-fIyEzeO1s35tbGzlZ7R
btUG_fCpIbaJmucMZK9OzVBSfBgTBtFSesqKq6hIc8HctGcj5LPUfP9DRqqe
CrBi6bPjTlzrjaxJoU6oRq4ZtiBG38skOAaCUk61tpjilkq1fmWe2ByvLXhp
O2furZoiwNrizYmUmAW3ak3iSneScA64M-9apdZwhhEgpqyw5mUMYNT5SOOf
xZePlgXxhlL81t3KlofdbzT0w6tlbbT0NSbj9Q_zNkeZ8ar5aeMgTR-pJACg
baB20YVezziX-yboCF-uIptCTFNV
(Source: this post on HN)
The debug information contained in the (urlsafe-)base64 blob is likely encrypted.
Think about it from Google's perspective: you would want to display a stack trace, relevant headers of the HTTP request, and possibly some internal state of the user session to help a developer debug the situation. On the other hand, all that information might contain sensitive data that you don't want the general public to see, or that might endanger the user if they copy and paste it into a public support forum.
If I were to take a guess at the format, I would imagine:
A public identifier of the key used for encryption (so that their servers can use different keys)
The debug data encrypted using an authenticated encryption scheme
Additional data for error correction when OCR has to be used
For statistical analysis of the format, it would be interesting to sample a lot of these error messages and see if some parts of the message are less random than you would expect from encrypted data (symmetrically encrypted data should follow a uniform distribution).
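As a rough sketch of that kind of check (Node/TypeScript; the blob string is a placeholder for the text copied from the error page), high per-byte entropy is what you would expect from encrypted or compressed data:

```typescript
// Decode the URL-safe base64 blob and estimate how uniform its byte
// distribution is; values close to 8 bits/byte suggest encrypted or
// compressed content rather than readable text.
const blob = "AB38WENgKfeJmWntJ8Y0Ckbf"; // placeholder: paste the full blob here
const bytes = Buffer.from(blob.replace(/-/g, "+").replace(/_/g, "/"), "base64");

const counts = new Array<number>(256).fill(0);
for (const b of bytes) counts[b]++;

let entropy = 0;
for (const c of counts) {
  if (c === 0) continue;
  const p = c / bytes.length;
  entropy -= p * Math.log2(p);
}

console.log(`${bytes.length} bytes, ~${entropy.toFixed(2)} bits of entropy per byte`);
```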
It looks like you are not the only one looking for secret messages in the YouTube error page. It seems that you can decode it using Base64.
Here is how:
http://www.cambus.net/decoding-youtube-http-error-500-message/
In a nutshell:
Sadly, contrary to my expectations, there doesn't seem to be any hidden message… Screw you, highly trained monkeys!
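The blog post's check amounts to something like this small sketch (the blob is again a placeholder for the page's text); the decoded bytes come out as binary gibberish rather than a readable message:

```typescript
// Convert from URL-safe base64, decode, and try to read it as text.
const blob = "AB38WENgKfeJmWntJ8Y0Ckbf"; // placeholder: paste the full blob here
const decoded = Buffer.from(blob.replace(/-/g, "+").replace(/_/g, "/"), "base64");
console.log(decoded.toString("utf8")); // mostly unprintable characters, no hidden message
```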
I guess it is just another Easter egg, similar to the 'Goats Teleported' performance counter Google Chrome had:
https://plus.google.com/+RobertPitt/posts/PrqAX3kVapn
But I guess unless you look like this, you can't be 100% sure.
It's entirely possible that this is random padding to avoid the "friendly" IE error pages that show if your error page does not contain more than 512 bytes of HTML. It would be base64 encoded if it were simply random bytes.
IMHO this is all about customer care.
Actually, there would be no need to send the error/debug message to the customer because, I guess, it's already handled internally.
So:
why do we see this?
and why do they encrypt it?
and is there really no hidden message for us?
Although the error might be handled and resolved internally, this does not necessarily satisfy a customer who is not able to use the product. They most likely encrypt it for a good reason, as this debug message might reveal more than a typical admin is used to seeing.
And also there is no need to hide a message for us. Why? Because we NEVER stop until we find something.
I think:
internally the error is dealt with
external users have something in hand to give a technician if necessary, and in return can get an approximation of the ongoing problem
All in all, nothing special about it, and I think linking it to, e.g., the infinite monkey theorem is a bit over-speculative...
Error 500 means Google has a problem it cannot resolve. When reporting a bug, the most important thing is to prepare reproduction steps, so I tried to find an answer to the question "When does this happen?"
I found this post in reddit: https://www.reddit.com/r/youtube/comments/40k858/is_youtube_giving_you_500_internal_server_errors/?utm_source=amp&utm_medium=comment_list
In summary:
It happens on the desktop site (www...), while the mobile version (m...) works fine
It happens for authenticated users; for anonymous users it works fine
The problem is resolved after cookies are cleared
So I would suggest a direction: try to find the key in the session cookie. I hope my 2 cents help.

Access Control: Database Fortify

We ran the Fortify scan and had some Access Control: Database issues. The code is getting the textbox value and setting it to a string variable. In this case, it's passing the value from the TextBox to the stored procedure in a database. Any ideas on how I can get around this Access Control: Database issue?
Without proper access control, the method ExecuteNonQuery() in DataBase.cs can execute a SQL statement on line 320 that contains an attacker-controlled primary key, thereby allowing the attacker to access unauthorized records.
Source: Tool.ascx.cs:591 System.Web.UI.WebControls.TextBox.get_Text()
rptItem.FindControl("lblClmInvalidEntry").Visible = false;
ToolDataAccess.UpdateToolData(strSDN, strSSNum, strRANC, strAdvRecDate, strAdvSubDate, strClmRecDate, strClmAuth, strClmSubDate, strAdvAuth, txtNoteEntry.Text);
Sink: DataBase.cs:278
System.Data.SqlClient.SqlParameterCollection.Add()
// Add parameters
foreach (SqlParameter parameter in parameters)
cmd.Parameters.Add(parameter);
The point of "Access Control: Database" is where it isn't being specific enough in the query and so could potentially allow a user to see information that they're not supposed to.
An easy example of this vulnerability would be a payroll database where there is a textbox that says the ID of the employee and gives their salary, this could potentially allow the user to change the ID and see the salary of other employees.
Another example, where this is often intended functionality, is a website URL where the product ID is used as a parameter, meaning a user could go through every product you have on your site. But as this only allows users to see information they're supposed to be able to see, it's not really a security issue.
For instance:
"SELECT account_balance FROM accounts WHERE account_number = " + $input_from_attacker + ";"
// even if we safely build the query above, preventing change to the query structure,
// the attacker can still send someone else's account number, and read Grandma's balance!
As this is pretty context-based, it's difficult to determine statically, so there are lots of cases where Fortify may flag this even though it's actually intended functionality. That's not to say the tool is broken; it's just one of the limitations of static analysis, and depending on what your program is supposed to be doing, the behaviour may or may not be intended.
If it is intended to work like this, then I would suggest auditing it as not an issue or suppressing the issue.
If you can see that this is definitely an issue and users can see information that they shouldn't be able to, then the stored procedure needs to be more specific so that users can only see information they should be able to see. However, SCA will likely still pick this up in a later scan, so you would then need to audit it as fixed and no longer an issue.
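The usual remediation, whatever the stack, is to bind the lookup to the authenticated principal rather than only to the attacker-controllable ID. A minimal sketch of the idea (TypeScript with node-postgres; all table, column, and parameter names are illustrative):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// The query is both parameterised (no string concatenation) and scoped to the
// caller: even if noteId is attacker-controlled, only rows owned by the
// authenticated user can come back.
async function getNote(noteId: string, authenticatedUserId: string) {
  const result = await pool.query(
    "SELECT note_text FROM notes WHERE note_id = $1 AND owner_id = $2",
    [noteId, authenticatedUserId]
  );
  return result.rows[0] ?? null;
}
```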

Getting Invalid Token response when trying to create D2L user account

I am trying to modify a D2L database from within a 3rd party application using their Valence API. I've gotten some operations to work but am stuck trying to create a new user account. I have been told that the account I am working under is authorized to do this.
I’ve defined a JSON object to hold the values I want:
{
    "OrgDefinedId": "XX000TEST",
    "FirstName": "Tom",
    "MiddleName": "",
    "LastName": "Foolery",
    "ExternalEmail": "tom#something.com",
    "UserName": "Tom.Foolery",
    "RoleId": "78",
    "IsActive": "true",
    "SendCreationEmail": "false"
}
I copied the above text to the HTTP post buffer and then called the following link:
/d2l/api/lp/1.0/users/?
The parameter string contains the IDs and signatures (x_a, x_b, etc) as specified in the Valence docs. I assume the authorization values are correct, since I'm getting correct results when using the same algorithms on other Valence queries.
Any suggestions on how to get past the "Invalid Token" message would be appreciated.
--stein
If you're getting a 403 "Invalid Token" message then you are not, for some reason, forming your x_a, x_b, x_c, or x_d authentication tokens correctly. Common problems we have seen in the past are:
Trying to re-use x_c and/or x_d signatures generated for one API call with another
Getting the tokens swapped around: x_a is App ID, x_c is App Sig, x_b is User ID, and x_d is User Sig
Generating the signatures using the wrong HTTP method (the method is one of the components of the base string for the signatures)
Not using all upper case letters for the HTTP method in the base string (the component should be GET not get)
Not using all lower case letters for the API route in the base string, or including incorrect characters: for example, in your question, you seem to imply that you're passing in the ? as part of the route; you shouldn't do this. In this case, your base string for creating the signatures should be POST&/d2l/api/lp/1.0/users/&1234567, where 1234567 is replaced with the timestamp you generate and also pass in as x_t (see the sketch after this list)
Using the API route with API version component provided, but when calling, using another version component (i.e. generating with /d2l/api/lp/1.0/... but calling with /d2l/api/lp/1.1/...)
Using an incorrect/mismatching timestamp value in the base string (the timestamp you use for the basestring should be in seconds, and be the same stamp as the x_t value)
If calls previously worked but suddenly none of them do, returning a 403 invalid token result: the user tokens could have expired and you need to re-authenticate the user
If calls previously worked against a test instance but don't work when you move to a different LMS (a prod instance, for example): perhaps the App ID/Key pair hasn't shown up on this new LMS, or you're trying to use the user ID/Key pair from one LMS to generate signatures on a different LMS
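For reference, a rough sketch of how those pieces fit together (Node/TypeScript; the ID and key values are placeholders, and the exact hash and encoding should be confirmed against the Valence docs or SDKs, which this assumes to be HMAC-SHA256 with unpadded URL-safe Base64 output):

```typescript
import { createHmac } from "crypto";

// Sign the base string with a key and return an unpadded URL-safe Base64 digest.
function sign(key: string, baseString: string): string {
  return createHmac("sha256", key)
    .update(baseString)
    .digest("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

const timestamp = Math.floor(Date.now() / 1000);               // seconds, also sent as x_t
const baseString = `POST&/d2l/api/lp/1.0/users/&${timestamp}`; // upper-case method, lower-case route, no "?"

const x_a = "YOUR_APP_ID";                      // placeholder App ID
const x_b = "YOUR_USER_ID";                     // placeholder User ID
const x_c = sign("YOUR_APP_KEY", baseString);   // App signature (placeholder key)
const x_d = sign("YOUR_USER_KEY", baseString);  // User signature (placeholder key)

console.log({ x_a, x_b, x_c, x_d, x_t: timestamp });
```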
Also, notice that your JSON object is, strictly speaking, not correctly formed: the IsActive and SendCreationEmail properties should have the values true and false respectively, not "true" and "false", although it's possible that the LMS parser on the server side will be forgiving about that.
If none of these points assist you, please feel free to open an issue in our issue tracker, or contact our Valence support email address, and we can try to help you out through this issue.
NOTE: invalid tokens will get you a 403 (with a message like "Invalid Token" or "Expired Token" or similar). If your tokens are correctly generated but your calling user context is not allowed to create a user, you'll also get a 403, but this time the message will be "Not Permitted" or "Not Authorized" or similar. Make sure you double-check what sort of 403 you're getting back.
In this particular case, the permissions around creating a user are a bit tricky; not only must you have a permission to create a user, you must also have permission to modify the properties that you will be passing into the API in the CreateUserData structure (OrgDefinedId, Email, and so on), and you must also be able to see all those fields in the User Information Privacy settings, and you must have permission to enroll the user role you have provided at the organization level... those last two bits have tripped up some of our clients in the past.
