Arweave seems to be a way to store NFT data 'permanently', but when I look at the URIs of NFTs that link to Arweave, I see {ID}.arweave.net/{Other_ID}. What puzzles me is how we can call this permanent when it resolves a domain through DNS, which could be redirected to an arbitrary website at some point in the future.
I can see the merit of paying once and storing permanently, but if I wrote an ERC721 URI function (unchangeable in the future) that points to some .net domain, I would be concerned that the domain might later redirect somewhere else, since it is controlled by some organization. I am not very experienced with networking and DNS, so take my question with a grain of salt, but I am happy to hear whether this is a valid concern or not.
Basically, my simple question is:
Can we claim that the URL is persistent?
I'm making a Rails polling site, which should have results that are very accurate. Users vote using POST links. I've taken pains to make sure users only vote once, and know exactly what they're voting for.
But it occurred to me that third parties with an interest in the results could put up POST links on their own websites, that point to my voting paths. They could skew my results this way, for example by adding a misleading description.
Is there any way of making sure that the requests can only come from my domain? So a link coming from a different domain wouldn't run any of the code in my controller.
There are various things that you'll need to check. First is request.referer, which tells you the page that referred the request to your site. If it's not your own site, you should reject the request.
if URI(request.referer).host != my_host
  raise ArgumentError, "Invalid request from external domain"
end
However, this only protects you from web clients (browsers) that accurately populate the HTTP referer header. And that's assuming that it came from a web page at all. For instance, someone could send a link by email, and an email client is unlikely to provide a referer at all.
In the case of no referer, you can check for that, as well:
if request.referer.blank?
  raise ArgumentError, "Invalid request from unknown domain"
elsif URI(request.referer).host != my_host
  raise ArgumentError, "Invalid request from external domain"
end
It's also very easy with simple scripting to spoof the HTTP referer, so even if you do get a valid domain, you'll need other checks to ensure that it's a legitimate POST. Script kiddies do this sort of thing all the time, and with a dozen or so lines of Ruby, Python, Perl, curl, or even VBA, you can simulate interaction by a "real user".
You may want to use something like a request/response key mechanism. In this approach, the link served from your site includes a unique key (that you track) for each visit to the page, and that only someone with that key can vote.
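A minimal sketch of that idea in a Rails controller might look like the following; the controller name, the SecureRandom token, and the session storage are illustrative assumptions rather than a complete solution:

# Sketch only: hand each rendered voting page a one-time token and
# require it back on the POST.
class VotesController < ApplicationController
  def new
    # Generate a unique token for this visit and remember it server-side.
    @vote_token = SecureRandom.hex(32)
    session[:vote_token] = @vote_token
    # The form template would include @vote_token as a hidden field.
  end

  def create
    # Reject the POST unless it carries the exact token we handed out.
    unless params[:vote_token].present? && params[:vote_token] == session[:vote_token]
      return head(:forbidden)
    end
    session.delete(:vote_token) # one-time use
    # ... record the vote ...
    head :ok
  end
end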
How you identify voters is important, as well. Passive identification techniques are good for non-critical activities, such as serving advertisements or making recommendations. However, this approach regularly fails a measurable percentage of the time when used across the general population. When you also consider that people actively want to corrupt voting activities, it's very easy to become a target for everyone with a bright idea for "beating the system" and some spare time on their hands.
Build in as much security as possible early on, because you'll need far more than you expect. During the 2012 Presidential Election, I was asked to pre-test 41 online voting sites, and was able to break 39 of them within the first 24 hours (6 of them within 1 hour). Be overly cautious. Know how attackers can get in, not just using "normal" mechanisms. Don't publish information about which technologies you're using, even in the code. Seeing "Rails-isms" anywhere in the HTML or JavaScript code (or even the URL pathnames) will immediately give the attacker an enormous edge in defeating your safety mechanisms. Use obscurity to your advantage, and use security everywhere that you can.
NOTE: Checking the request.referer is like putting a padlock on a bank vault: it'll keep out those who are easily dissuaded, but won't even slow down a determined individual.
What you are trying to prevent here is basically cross-site request forgery. As Michael correctly pointed out, checking the Referer header will buy you nothing.
A popular counter-measure is to give each user an individual one-time token that is sent with each form and stored in the user's session. If, on submit, the submitted value and the stored value do not match, the request is discarded. Luckily for you, RoR seems to ship such a feature. Looks like a one-liner indeed.
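That one-liner is Rails' built-in CSRF protection; enabling it in the application controller looks roughly like this (the with: :exception option exists in Rails 4 and later, older versions just call protect_from_forgery with no arguments):

class ApplicationController < ActionController::Base
  # Rails embeds an authenticity token in every form it renders and verifies
  # it on non-GET requests; requests without a matching token are rejected.
  protect_from_forgery with: :exception
end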
I've searched for this a bit on Stack Overflow, but I cannot find a definitive answer for HTTPS, only solutions that somehow involve HTTP or unencrypted parameters, which are not present in my situation.
I have developed an iOS application that communicates with MySQL via Apache HTTPS POSTs and PHP.
Now, the server runs with a valid certificate, is only open for traffic on port 443, and all POSTs are made to https://thedomain.net/obscurefolder/obscurefile.php
If someone knew the correct parameters to POST, anyone from anywhere in the world could mess up the database completely, so the question is: is this method secure? Let it be known that nobody has access to the source code and none of the iPads that run this software are jailbroken or otherwise compromised.
Edit in response to answers:
There are several PHP files, each of which supports only one specific operation and depends on very strict input formatting and a correct license key (retrieved by SQL on every query). They do not respond to input at all unless it is 100% correct and includes a proper license (e.g. password). There is no actual website, only PHP files that respond to POSTs given the correct input, as mentioned above. The web server has been scanned by a third-party security company and contains no known vulnerabilities.
Encryption is necessary but not sufficient for security. There are many other considerations beyond encrypting the connection. With server-side certificates, you can confirm the identity of the server, but you can't (as you are discovering) confirm the identity of the clients (at least not without client-side certificates, which are very difficult to protect by virtue of them being on the client).
It sounds like you need to take additional measures to prevent abuse (a rough sketch follows the list below), such as:
Only supporting a sane, limited, well-defined set of operations on the database (not passing arbitrary SQL input to your database but instead having a clear, small list of URL handlers that perform specific, reasonable operations on the database).
Validating that the inputs to your handler are reasonable and within allowable parameters.
Authenticating client applications as best you are able (e.g. with client IDs or other tokens) to restrict capabilities on a per-client basis and detect anomalous usage patterns for a given client.
Authenticating users to ensure that only authorized users can make the appropriate modifications.
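The asker's backend is PHP, but to keep the snippets in this thread in one language, here is a rough Ruby sketch of what the first three points can look like in a single handler; the table, the allowed values, and the valid_client_token? check are all made up for illustration:

# Illustrative sketch: one narrowly scoped operation, strict input
# validation, bound SQL parameters, and a per-client token check.
require 'mysql2'

ALLOWED_STATUSES = %w[open closed].freeze

def update_ticket_status(db, client_token, ticket_id, status)
  # Authenticate the client before touching the database (hypothetical check).
  raise 'unauthorised client' unless valid_client_token?(client_token)

  # Validate inputs against a strict format / small whitelist.
  raise 'bad ticket id' unless ticket_id.to_s.match?(/\A\d{1,10}\z/)
  raise 'bad status'    unless ALLOWED_STATUSES.include?(status)

  # Never interpolate user input into SQL; use prepared statements.
  statement = db.prepare('UPDATE tickets SET status = ? WHERE id = ?')
  statement.execute(status, ticket_id.to_i)
end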
You should also probably get a security expert to review your code and/or hire someone to perform penetration testing on your website to see what vulnerabilities they can uncover.
Sending plain POST requests is not, by itself, a secure way of communicating with a server. Even without access to the code or a valid device, it leaves an easy way to access and manipulate the database once the link is discovered.
I would not suggest relying on plain POST requests alone. You can try other ways of sending and fetching data from the server. Encrypting the parameters can also help here, though it would add a bit of code for the encryption/decryption logic.
It's good that your app communicates over HTTPS. Make sure the app checks the server's certificate during the communication phase.
You can also make use of tokens (not device tokens) during transactions. This is a bit more complex, but it offers more safety.
The possible solutions here are broad, and not every one can be covered. You might want to try out a few yourself to get an idea, though I suggest going with some encryption/decryption at a basic level.
Hope this helps.
I understand that when we use MVC, once we get the user's password it should go through some layers until it is ready to be stored in the database. So, should I encrypt the password immediately after getting it from the web form, or can I wait and encrypt it at the database layer?
I apologize if this is a stupid question, but I'm just starting out with this.
This might be a good starting point for reading on the topic: https://crackstation.net/hashing-security.htm. As others have mentioned, I highly recommend that you don't reinvent the wheel of security. Use existing libraries, and try to understand the process before writing any code.
To answer your question directly: if the person is creating a new account/password, I would immediately hash the password upon receiving it and store the hash in the database. There probably wouldn't be any 'in-between' steps; there's no real reason for the password (hashed or otherwise) to float around the application. Make sure to follow the principles outlined in the article I linked, though.
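As a concrete, minimal sketch with the widely used bcrypt gem, hashing on receipt and checking at login look roughly like this (the example password stands in for whatever the form submits):

require 'bcrypt'

# On signup: hash the submitted password immediately; store only the digest.
submitted_password = 'correct horse battery staple'    # e.g. from the web form
digest = BCrypt::Password.create(submitted_password)   # a random salt is generated and embedded

# On login: rebuild the Password object from the stored digest and compare.
BCrypt::Password.new(digest) == 'correct horse battery staple'  # => true
BCrypt::Password.new(digest) == 'wrong guess'                   # => false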
Not covered in that article: make sure to also implement SSL/TLS. This is another topic that will require some reading. I don't know where your server is located (internal to a company or public to the world), but people tend to use the same password for different services, and if you don't protect their passwords as they travel from the browser to the server, they could be intercepted and used with their other accounts.
In conclusion - this is a big topic that really warrants some reading before writing any code. Security is hard. Make sure to take some time to read up front and understand what needs to be done.
I would like to protect my S3 documents behind my Rails app, such that if I go to:
www.myapp.com/attachment/5 the request should authenticate the user prior to displaying/downloading the document.
I have read similar questions on stackoverflow but I'm not sure I've seen any good conclusions.
From what I have read there are several things you can do to "protect" your S3 documents.
1) Obfuscate the URL. I have done this. I think this is a good thing to do so no one can guess the URL. For example, it would be easy to "walk" the URLs if your S3 URLs are obvious: https://s3.amazonaws.com/myapp.com/attachments/1/document.doc. Having a URL such as:
https://s3.amazonaws.com/myapp.com/7ca/6ab/c9d/db2/727/f14/document.doc seems much better.
This is great to do but doesn't resolve the issue of passing around URLs via email or websites.
2) Use an expiring URL as shown here: Rails 3, paperclip + S3 - Howto Store for an Instance and Protect Access
For me, however, this is not a great solution because the URL is exposed (even if only for a short time) and another user could conceivably grab and reuse the URL within that window. You have to tune the expiry to allow for the download without leaving too much time for copying. It just seems like the wrong solution.
3) Proxy the document download through the app. At first I tried to just use send_file: http://www.therailsway.com/2009/2/22/file-downloads-done-right but the problem is that send_file only works for static/local files on your server, not files served from another site (S3/AWS). I can, however, use send_data to load the document into my app and immediately serve it to the user. The problem with this solution is obvious: twice the bandwidth and twice the time (to load the document into my app and then send it back to the user).
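For what it's worth, a rough sketch of option 3 with the current aws-sdk-s3 gem might look like this; the bucket name, the Attachment model, and the authenticate_user! filter are assumptions standing in for whatever the app already has:

# Sketch of proxying the download: the app fetches the object from S3 and
# streams it to the user, so the S3 URL is never exposed.
class AttachmentsController < ApplicationController
  before_action :authenticate_user!   # whatever authentication you already use

  def show
    attachment = Attachment.find(params[:id])        # hypothetical model
    s3 = Aws::S3::Client.new(region: 'us-east-1')
    object = s3.get_object(bucket: 'myapp-attachments', key: attachment.s3_key)
    send_data object.body.read,
              filename: attachment.file_name,
              disposition: 'attachment'
  end
end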
I'm looking for a solution that provides the full security of #3 but does not require the additional bandwidth and time for loading. It looks like Basecamp is "protecting" documents behind their app (via authentication) and I assume other sites are doing something similar but I don't think they are using my #3 solution.
Suggestions would be greatly appreciated.
UPDATE:
I went with a 4th solution:
4) Use Amazon bucket policies to control access to the files based on referrer:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?UsingBucketPolicies.html
UPDATE AGAIN:
Well, #4 can easily be worked around via a browser's developer tools. So I'm still in search of a solid solution.
You'd want to do two things:
Make the bucket and all objects inside it private. The naming convention doesn't actually matter; the simpler the better.
Generate signed URLs, and redirect to them from your application. This way, your app can check whether the user is authenticated and authorized, then generate a fresh signed URL and redirect to it with a temporary redirect (e.g. 303 See Other), so the short-lived URL isn't cached by the browser. The file never goes through your servers, so there's no load or bandwidth on your end. Here are the docs for presigning a get_object request:
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html
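A minimal sketch of that approach with the v3 Ruby SDK; the bucket name, the attachments association, and the current_user helper are placeholders for whatever the app already provides:

require 'aws-sdk-s3'

class AttachmentsController < ApplicationController
  def show
    # Authorisation check first: only serve attachments this user owns.
    attachment = current_user.attachments.find(params[:id])
    url = Aws::S3::Presigner.new.presigned_url(
      :get_object,
      bucket: 'myapp-attachments',   # placeholder bucket
      key: attachment.s3_key,        # placeholder column
      expires_in: 60                 # seconds; keep it short
    )
    # Temporary redirect so the short-lived URL isn't cached by the browser.
    # (allow_other_host: true is needed for external redirects on Rails 7+.)
    redirect_to url, status: :see_other, allow_other_host: true
  end
end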
I would vote for number 3; it is the only truly secure approach, because once you hand the user an S3 URL it is valid until its expiration time, and a crafty user could exploit that hole. The only question is: would that affect your application?
Perhaps you could set the expire time to be lower which would minimise the risk?
Take a look at an excerpt from this post:
Accessing private objects from a browser
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated url for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default authenticated urls expire 5 minutes after they were generated. Expiration options can be specified either with an absolute time since the epoch with the :expires option, or with a number of seconds relative to now with the :expires_in option:
I have been trying to do something similar for quite some time now. If you don't want to use the bandwidth twice, then the only way to do this is to let S3 handle it. Now, I am totally with you about the exposed URL. Were you able to come up with any alternative?
I found something that might be useful in this regard - http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempFederationTokenRuby.html
Once a user logs in, an AWS session with the user's IP as part of the policy can be created, and that session can then be used to generate the signed URLs. So even if somebody else grabs the URL, the request will be denied because it comes from a different IP. Let me know if this makes sense and is secure enough.
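A rough sketch of that idea with the v3 Ruby SDK, hedged heavily: the bucket, key, policy, and durations are illustrative, and IP-scoping only helps if your users aren't behind shared NATs or rotating proxies:

require 'json'
require 'aws-sdk-core'  # provides Aws::STS
require 'aws-sdk-s3'

# Sketch: issue temporary credentials whose policy only allows GETs from the
# logged-in user's IP, then presign the download URL with those credentials.
def ip_scoped_presigned_url(user_ip, key)
  policy = {
    'Version' => '2012-10-17',
    'Statement' => [{
      'Effect' => 'Allow',
      'Action' => 's3:GetObject',
      'Resource' => 'arn:aws:s3:::myapp-attachments/*',   # placeholder bucket
      'Condition' => { 'IpAddress' => { 'aws:SourceIp' => user_ip } }
    }]
  }.to_json

  creds = Aws::STS::Client.new.get_federation_token(
    name: 'doc-download',
    policy: policy,
    duration_seconds: 900
  ).credentials

  s3 = Aws::S3::Client.new(
    credentials: Aws::Credentials.new(creds.access_key_id,
                                      creds.secret_access_key,
                                      creds.session_token)
  )
  Aws::S3::Presigner.new(client: s3).presigned_url(
    :get_object,
    bucket: 'myapp-attachments',   # placeholder bucket
    key: key,
    expires_in: 300
  )
end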
I've been trying to collect analytics for my website and realized that Google Analytics was not set up to capture data for visitors to www.example.com (it was only set up for example.com). I noticed that many sites redirect me to www.example.com when I type only example.com. However, Stack Overflow does exactly the opposite (it redirects www.stackoverflow.com to just stackoverflow.com).
So, I've decided that in order to get accurate analytics, I should have my web server redirect all users to either www.example.com or example.com. Is there a reason to pick one or the other, or is it purely personal preference? What's the deal with www? I never type it when entering domains in my browser.
History lesson.
There was a time when the Web did not dominate the Internet. An organisation with a domain (e.g. my university, aston.ac.uk) would typically have several hostnames set up for various services: gopher.aston.ac.uk (Gopher was a precursor to the World Wide Web), news.aston.ac.uk (for NNTP Usenet), ftp.aston.ac.uk (FTP, including anonymous FTP archives). They were just the obvious names for accessing those services.
When HTTP came along, the convention became to give the web server the hostname "www". The convention was so widespread, that some people came to believe that the "www" part actually told the client what protocol to use.
That convention remains popular today, and it does make some amount of sense. However it's not technically required.
I think Slashdot was one of the first web sites to decide to use a www-less URL. Their head man Rob Malda refers to "TCWWW" - "The Cursed WWW" - when press articles include "www" in his URL. I guess that for a site like Slashdot, which is first and foremost a web site, the "www" in the URL is redundant.
You may choose whichever you like as the canonical address. But do be consistent. Redirecting from other forms to the canonical form is good practice.
Also, skipping the “www.” saves you four bytes on each request. :)
It's important to be aware that if you don't use a www (or some other subdomain), then all cookies will be submitted to every subdomain, and you won't be able to have a cookie-less subdomain for serving static content, which reduces the amount of data sent back and forth between the browser and the server. That is something you might later come to regret.
(On the other hand, authenticating users across subdomains becomes harder.)
It's just a subdomain based on tradition, really. There's no point to it if you don't like it, and it wastes typing time as well. I like http://somedomain.com more than http://www.somedomain.com for my sites.
It's primarily a matter of establishing indirection for hostnames. If you want to be able to change where www.example.com points without affecting where example.com points, this matters. This was more likely to be useful when the web was younger, and the "www" helped make it clear why the box existed. These days, many, many domains exist largely to serve web content, and the example.com record all but has to point to the HTTP server anyway, since people will blindly omit the www. (Just this week I was horrified when I tried going to a site someone had mentioned, only to find that it didn't work when I omitted the www, or when I accidentally added a trailing dot after the TLD.)
Omitting the "www" is very Web 2.0 Adoptr Gamma... but with good reason. If people only go to your site for the web content, why keep re-adding the www? In general, I'd drop it.
http://no-www.org/
Google Analytics should work just fine with or without a www subdomain, though. Plenty of sites use GA successfully without forcing one or the other.
It is the third-level domain (see Domain name). There was a time when it designated a physical server: some sites used URLs like www1.foo.com, www3.foo.com and so on.
Now it is more virtual (different third-level domains pointing to the same server, or the same URL handled by different servers), but it is often used to handle sub-domains, and with some tricks you can even handle an infinite number of sub-domains: see, precisely, Wikipedia, which uses this level for the language (en.wikipedia.org, fr.wikipedia.org and so on), or other sites that give friendly URLs to their users (e.g. my page http://PhiLho.deviantART.com).
So the www. isn't just there for decoration; it has a purpose, even if the vast majority of sites just stick to this default and, when it isn't typed, supply it automatically. I have known sites that forgot to set up the redirect, giving an error if you omitted the www even though they advertised the www-less URL: they expected users to add it themselves!
Besides, the URL already specifies which protocol is to be used, so "www." really serves no purpose there.
As far as I remember, in earlier times services like www and ftp were located on different machines, so using DNS's natural feature of subdomains was more or less necessary at the time.