Firebase security rules operating fees - firebase-realtime-database

I've recently been considering various security issues with my Firebase services and ran into an interesting question about Firebase pricing. The question is simple:
If Realtime Database security rules read data from the database itself as part of validating a request, do those server-side reads count toward any part of RTDB billing? For example, if a rule needs a "role" value from the database's JSON tree, is that validation read exempt from the download fee ($1/GB) and the simultaneous-connection quota (200,000 connections)? It might well be free, since the validation has to read the data anyway to determine whether the request complies with the rules.
Similarly, if Cloud Firestore security rules read data from Firestore itself during validation, are those reads subject to Firestore's read-operation fee ($0.036 per 100,000 documents in the LA location)? For example, to validate a line like allow read: if resource.data.visibility == 'public', the rule has to retrieve the data just as a mobile client would read it without any security rule.
Hope this question reaches the gurus in the community! Thank you in advance (:

firebaser here
There is no charge for reads done inside Realtime Database security rules. They are also not counted as persistent connections.
Accessing the current resource or the future request.resource data in your Firestore security rules is not charged. Additional document reads (with get(), getAfter(), exists(), and existsAfter()) that you perform are charged.
For more on the latter, see the Firebase documentation on access calls and pricing in Firestore security rules.
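To make the billing difference concrete, here is a minimal Swift sketch. The visibility rule in the comment is the one from the question; the get()-based rule and the collection/document names are illustrative assumptions, not something from the question:

import FirebaseFirestore

// Rule that only inspects the document being read -> no extra charge:
//   allow read: if resource.data.visibility == 'public';
// Rule that fetches another document with get() -> one additional billed read
// per evaluation, on top of the client's own read:
//   allow read: if get(/databases/$(database)/documents/roles/$(request.auth.uid)).data.admin == true;
let db = Firestore.firestore()
db.collection("posts").document("somePost").getDocument { snapshot, error in
    // This call is billed as one document read either way; only the get() in the
    // second rule adds a second, server-side read to the bill.
    print(snapshot?.data() ?? [:], error as Any)
}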

Related

How to ensure the integrity of data sent to the database from my application?

I am currently creating an iOS application with Swift. For the database I use Firebase Realtime Database, where I store, among other things, information about the user and the requests that the user sends me.
It is very important for my application that the data in the database is not corrupted.
To that end I have disabled data persistence so that requests are not stored locally on the device. But I was wondering whether a user could directly modify the values of variables while my application is running and still send erroneous requests.
For example, suppose the user has a number of coins: could he access the application's memory, modify the number of coins, return to the application, and thereby send an erroneous request without having to modify the request itself?
If that is the case, is it really more secure to disable data persistence, or is this a misconception?
Also, does blocking access from jailbroken devices solve my problem? I've heard that a normal user can still modify the locally saved requests before they are sent.
To summarize, I would like to know whether my understanding is correct. Is it really useful to prevent requests from being saved locally, or can a malicious user modify variable values directly at runtime anyway, even without a jailbreak?
I would also like to find a solution so that the data in my database is reliable.
Thank you for your attention :)
PS: I have also set the security rules of the database so that a logged-in user can read and write only in his own area.
You should treat the server-side data as the only source of truth, and consider all data coming from the client to be suspect.
To protect your server-side data, you should implement Firebase's server-side security rules. With these you can validate data structures and ensure all read/writes are authorized.
Disabling client-side persistence, or write queues as in your previous question, is not all that useful and not necessary once you follow the two rules above.
As an added layer of security you can enable Firebase's new App Check, which works with a so-called attestation provider on your device (DeviceCheck on iOS) to detect tampering, and lets you accept requests only from untampered devices.
By combining App Check and Security Rules you get both broad protection from abuse, and fine-grained control over the data structure and who can access what data.
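As a rough illustration, here is a minimal Swift sketch of enabling App Check with the DeviceCheck provider before Firebase is configured; the setup shape follows the Firebase iOS docs, but treat it as a sketch rather than a drop-in implementation:

import UIKit
import FirebaseCore
import FirebaseAppCheck

class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        // Register the App Check provider factory before configuring Firebase,
        // so SDK requests carry an App Check token.
        AppCheck.setAppCheckProviderFactory(DeviceCheckProviderFactory())
        FirebaseApp.configure()
        return true
    }
}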

GOOGLE FIRESTORE: Are these security rules safe?

So I am going through the security rules documentation of firestore right now in an effort to make sure the data users put in my app will be okay. As of right now, all I need users to be able to do is to read data (really only the 'get', but 'read' is fine too), and create data. So, my security rules for the firestore data right now are:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /jumpSpotAnnotations/{id} {
      // 'get' instead of 'read' would work too
      allow read, create;
    }
  }
}
I have the exact same 'allow read, create;' for my storage data too. Will this be okay upon release or is this dangerous? In the documentation, they write:
"As you set up Cloud Firestore, you might have set your rules to allow open access during development. You might think you're the only person using your app, but if you've deployed it, it's available on the internet. If you're not authenticating users and configuring security rules, then anyone who guesses your project ID can steal, modify, or delete the data."
This text precedes an example where the rules are 'allow read, write;', as opposed to my 'allow read, create;'. Do my rules also leave the data open to deletion or modification? I used create because I assume it only lets people create data, not delete or modify it.
Final part of this question: how could a user even guess my project ID? Wouldn't they have to sign in to my Google account to be able to manually delete, modify, or steal data? I'm not sure how that works. My app's interface only lets the user create data or read data, nothing else. So could some random person still somehow get into this database online and mess with it?
Thanks for any help.
Your rule allows anyone with an internet connection to read and create documents in the jumpSpotAnnotations collection. We don't know if that's "safe" for your app. You have to determine for yourself if that situation is safe. If you're OK with someone anonymously loading up that collection with documents, and you're OK with paying for that behavior, then it's safe.
Your project ID is baked into your app before you publish it. All someone has to do is download and decompile your app to find it. It's not hard. Your project ID is not private information.
No, your rules are not secure. To understand how someone can guess your project ID and steal data, first understand that Firebase provides a simple REST API for accessing stored data. In the Realtime Database all data is stored as JSON, so a publicly readable database can be fetched simply by requesting the database URL with ".json" appended; Cloud Firestore exposes a REST API as well.
As for how someone can guess your project ID: there are many tools that let you set up a proxy on your network and analyze every request passing through it. Since Firebase uses a REST API, the API endpoints (which include your project ID) can easily be discovered by intercepting HTTP requests, and if your rules are not secure your data can be compromised.
Now for the solution: how to protect your data. There are many ways; Firebase itself provides plenty of ways to secure data, so read its documentation on database security. But there is also something you can do on your side so that even if your data is exposed, nobody can actually read it.
You can prevent apps from reading the data in plaintext: use public-key algorithms to encrypt the data, and keep the private key only on the systems that need to read it. The app then cannot read the data in plaintext. Note that this does not prevent manipulation or deletion of the data.
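As an illustration of that approach, here is a minimal Swift sketch using CryptoKit: the app ships only the server's public key and encrypts each payload with a key derived from an ephemeral key agreement, so only the holder of the server's private key can decrypt it. All names here are assumptions for the sketch, not part of the question:

import Foundation
import CryptoKit

// Encrypt a payload so that only the server (holder of the matching private key) can read it.
func encrypt(_ plaintext: Data,
             for serverPublicKey: Curve25519.KeyAgreement.PublicKey) throws -> (ephemeralPublicKey: Data, ciphertext: Data) {
    // Fresh ephemeral key pair per message.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()
    let sharedSecret = try ephemeral.sharedSecretFromKeyAgreement(with: serverPublicKey)
    // Derive a symmetric key from the agreed secret and seal the payload with AES-GCM.
    let symmetricKey = sharedSecret.hkdfDerivedSymmetricKey(using: SHA256.self,
                                                            salt: Data(),
                                                            sharedInfo: Data(),
                                                            outputByteCount: 32)
    let sealed = try AES.GCM.seal(plaintext, using: symmetricKey)
    // Store or send the ephemeral public key next to the ciphertext so the server
    // can derive the same symmetric key and decrypt.
    return (ephemeral.publicKey.rawRepresentation, sealed.combined!)
}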

What is the "scope" of a CKServerChangeToken?

As described in https://developer.apple.com/reference/cloudkit/ckserverchangetoken, the CloudKit servers return a change token as part of the CKFetchRecordZoneChangesOperation callback response. For what set of subsequent record fetches should I include the given change token in my fetch calls?
only fetches to the zone we fetched from?
or would it apply to any fetches to the db that that zone is in? or perhaps the whole container that the db is in?
what about app extensions? (App extensions have the same iCloud user as the main app, but have a different "user" as returned by fetchUserRecordIDWithCompletionHandler:, at least in my testing) Would it be appropriate to supply a change token from the main app in a fetch call from, say, a Messages app extension? I assume not, but would love to have a documented official answer.
I, too, found the scope of CKServerChangeToken a little unclear. However, after reviewing the documentation, both CKFetchDatabaseChangesOperation and CKFetchRecordZoneChangesOperation provide and manage their own server change tokens.
This is particularly useful if you decide to follow the CloudKit workflow Dave Browning outlines in his 2017 WWDC talk when fetching changes (around the 8 minute mark).
The recommended approach is to:
1) Fetch changes for a database using CKFetchDatabaseChangesOperation. Upon receiving the updated token via changeTokenUpdatedBlock, persist this locally. This token is 'scoped' to either the private or shared CKDatabase the operation was added to. The public database doesn't offer change tokens.
2) If you receive zone IDs via the recordZoneWithIDChangedBlock in the previous operation, this indicates there are zones with changes you can fetch using CKFetchRecordZoneChangesOperation. This operation takes its own unique server change token via its rather cumbersome initializer parameter, CKFetchRecordZoneChangesOperation.ZoneConfiguration. This token is 'scoped' to that particular CKRecordZone. So, again, when you receive an updated token via recordZoneChangeTokensUpdatedBlock, it needs persisting locally (perhaps with a key that relates to its CKRecordZone.ID).
The benefit here is that it probably minimises the number of network calls. Fetching database changes first prevents making calls for each record zone if the database doesn't report any changed zone ids.
Here's a code sample from the CloudKit team which runs through this workflow. Admittedly a few of the APIs have since changed and the comments don't explicitly make it clear the 'scope' of the server change tokens.
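If it helps, here is a minimal Swift sketch of that two-step workflow, persisting one token per scope. The token-storage helpers are hypothetical stand-ins, and the completion blocks shown are the 2017-era API referenced above:

import CloudKit

// Hypothetical in-memory stand-in for real token persistence (e.g. archived to disk).
var tokenStore: [String: CKServerChangeToken] = [:]
func loadToken(forKey key: String) -> CKServerChangeToken? { tokenStore[key] }
func saveToken(_ token: CKServerChangeToken?, forKey key: String) { tokenStore[key] = token }

func fetchChanges(in database: CKDatabase) {
    // Step 1: database-scoped token.
    let dbOp = CKFetchDatabaseChangesOperation(previousServerChangeToken: loadToken(forKey: "database"))
    var changedZoneIDs: [CKRecordZone.ID] = []

    dbOp.recordZoneWithIDChangedBlock = { zoneID in changedZoneIDs.append(zoneID) }
    dbOp.changeTokenUpdatedBlock = { token in saveToken(token, forKey: "database") }
    dbOp.fetchDatabaseChangesCompletionBlock = { token, _, error in
        guard error == nil else { return }
        saveToken(token, forKey: "database")
        guard !changedZoneIDs.isEmpty else { return }

        // Step 2: one zone-scoped token per changed zone.
        var configs: [CKRecordZone.ID: CKFetchRecordZoneChangesOperation.ZoneConfiguration] = [:]
        for zoneID in changedZoneIDs {
            let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
            config.previousServerChangeToken = loadToken(forKey: zoneID.zoneName)
            configs[zoneID] = config
        }
        let zoneOp = CKFetchRecordZoneChangesOperation(recordZoneIDs: changedZoneIDs,
                                                       configurationsByRecordZoneID: configs)
        zoneOp.recordZoneChangeTokensUpdatedBlock = { zoneID, token, _ in
            saveToken(token, forKey: zoneID.zoneName)
        }
        database.add(zoneOp)
    }
    database.add(dbOp)
}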

HTTPS POST Security level

I've searched for this a bit on Stack, but I cannot find a definitive answer for https, only for solutions that somehow include http or unencrypted parameters which are not present in my situation.
I have developed an iOS application that communicates with MySQL via Apache HTTPS POSTS and php.
Now, the server runs with a valid certificate, is only open for traffic on port 443 and all posts are done to https://thedomain.net/obscurefolder/obscurefile.php
If someone knew the correct parameters to post, anyone from anywhere in the world could mess up the database completely, so the question is: is this method secure? Let it be known that nobody has access to the source code, and none of the iPads that run this software are jailbroken or otherwise compromised.
Edit in response to answers:
There are several php files, each of which supports only one specific operation and depends on very strict input formatting and a correct license key (retrieved via SQL on every query). They do not respond to input at all unless it is 100% correct and includes a proper license (e.g. password). There is no actual website, only php files that respond to POSTs, given the correct input, as mentioned above. The webserver has been scanned by a third-party security company and contains no known vulnerabilities.
Encryption is necessary but not sufficient for security. There are many other considerations beyond encrypting the connection. With server-side certificates, you can confirm the identity of the server, but you can't (as you are discovering) confirm the identity of the clients (at least not without client-side certificates, which are very difficult to protect by virtue of them being on the client).
It sounds like you need to take additional measures to prevent abuse such as:
Only supporting a sane, limited, well-defined set of operations on the database (not passing arbitrary SQL input to your database but instead having a clear, small list of URL handlers that perform specific, reasonable operations on the database).
Validating that the inputs to your handler are reasonable and within allowable parameters.
Authenticating client applications as best you are able (e.g. with client IDs or other tokens, as sketched after this list) to restrict the capabilities on a per-client basis and detect anomalous usage patterns for a given client.
Authenticating users to ensure that only authorized users can make the appropriate modifications.
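As a rough illustration of the client-authentication point, here is a minimal Swift sketch that signs each POST body with a per-client secret so the PHP endpoint can reject requests that were not produced by a legitimate client. The header names, parameter handling, and secret management are assumptions for the sketch, not something from the question:

import Foundation
import CryptoKit

func send(parameters: [String: String], clientID: String, clientSecret: SymmetricKey) {
    var request = URLRequest(url: URL(string: "https://thedomain.net/obscurefolder/obscurefile.php")!)
    request.httpMethod = "POST"
    // Note: a real client would percent-encode the parameter values.
    let body = parameters.map { "\($0.key)=\($0.value)" }.joined(separator: "&")
    request.httpBody = body.data(using: .utf8)

    // Sign the exact bytes being sent; the server recomputes the HMAC with the same secret
    // and rejects the request if the signatures do not match.
    let signature = HMAC<SHA256>.authenticationCode(for: request.httpBody!, using: clientSecret)
    request.setValue(clientID, forHTTPHeaderField: "X-Client-ID")
    request.setValue(signature.map { String(format: "%02x", $0) }.joined(),
                     forHTTPHeaderField: "X-Signature")

    URLSession.shared.dataTask(with: request) { _, _, error in
        // Handle the response or error as usual.
    }.resume()
}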
You should also probably get a security expert to review your code and/or hire someone to perform penetration testing on your website to see what vulnerabilities they can uncover.
Sending plain POST requests on its own is not a secure way of communicating with a server. Even without access to the code or to valid devices, it still leaves an open way to access and manipulate the database once the endpoint is discovered.
I would not suggest relying on POST alone. Consider other ways of communicating when you send or fetch data from the server. Encrypting the parameters can also be helpful here, though it adds some code for the encryption/decryption logic.
It's good that your app communicates over HTTPS. Make sure the app verifies the server certificate during communication, for example by certificate pinning.
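A minimal Swift sketch of that certificate check (pinning) might look like the following; the class name and the pinned-hash placeholder are illustrative assumptions:

import Foundation
import Security
import CryptoKit

// Illustrative certificate-pinning delegate; replace the placeholder with the real
// base64-encoded SHA-256 hash of your server certificate.
final class PinningDelegate: NSObject, URLSessionDelegate {
    private let pinnedCertificateHash = "REPLACE_WITH_BASE64_SHA256_OF_SERVER_CERT"

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              let certificate = SecTrustGetCertificateAtIndex(trust, 0) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        // Compare the presented certificate's hash against the hash shipped with the app.
        let certData = SecCertificateCopyData(certificate) as Data
        let hash = Data(SHA256.hash(data: certData)).base64EncodedString()
        if hash == pinnedCertificateHash {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}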
You can also make use of tokens (not device tokens) for each transaction. This is a bit more complex, but offers more safety.
The possible solutions here are broad and cannot all be covered, so you might want to try a few yourself to get an idea. I do suggest at least some basic encryption/decryption.
Hope this helps.

How can I use Delphi to create a visual challenge / response for restoring access to an application?

I'm interested in creating a challenge / response type process in Delphi. The scenario is this...we have 2 computers...1 belongs to the user and 1 belongs to a support technician.
The user is locked out of a certain program, and in order to gain 1 time access, I want:
The user to be presented with a challenge phrase, such as "28394LDJA9281DHQ" or some type of reasonably unique value
The user will call support staff and read this challenge (after the support staff has validated their identity)
The support person will type this challenge value into a program on their system, which will generate a response that is equally unique, such as "9232KLSDF92SD"
The user types in the response and the program determines whether or not this is a valid response.
If it is, the user is granted 1 time access to the application.
Now, my question is how to do this. I will have 2 applications that will not have networked access to one another. Is there any functionality within Windows that can help me with this task?
I believe that I can use some functionality within CryptoAPI, but I really am not certain where to begin. I'd appreciate any help you could offer.
I would implement an MD5-based challenge-response authentication.
From wikipedia http://en.wikipedia.org/wiki/CRAM-MD5
Protocol
Challenge: In CRAM-MD5 authentication, the server first sends a challenge string to the client.
Response: The client responds with a username followed by a space character and then a 16-byte digest in hexadecimal notation. The digest is the output of HMAC-MD5 with the user's password as the secret key, and the server's original challenge as the message.
Comparison: The server uses the same method to compute the expected response. If the given response and the expected response match, then authentication was successful.
This provides three important types of security.
First, others cannot duplicate the hash without knowing the password. This provides authentication.
Second, others cannot replay the hash, because it depends on the unpredictable challenge. This is variously called freshness or replay prevention.
Third, observers do not learn the password. This is called secrecy.
The two important features of this protocol that provide these three security benefits are the one-way hash and the fresh random challenge.
Additionally, you may add some application-identification into the challenge string, for a double check on the sender of the challenge.
Important: it has some weaknesses; evaluate carefully how they may affect you.
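Even though the question is about Delphi, the compute-and-compare step is easy to show compactly. Here is an illustrative Swift/CryptoKit sketch of it; the shared secret, challenge value (reused from the question), and variable names are assumptions, and a Delphi implementation would call an equivalent HMAC routine in the same way:

import Foundation
import CryptoKit

// Shared secret known to both the support tool and the locked application
// (assumption: a per-installation password or license key).
let sharedSecret = SymmetricKey(data: Data("per-installation-secret".utf8))

// The challenge the locked application displays and the user reads over the phone.
let challenge = Data("28394LDJA9281DHQ".utf8)

// Support side: compute the response from the challenge and the shared secret.
let response = HMAC<Insecure.MD5>.authenticationCode(for: challenge, using: sharedSecret)
let responseHex = response.map { String(format: "%02X", $0) }.joined()

// Application side: recompute the expected response and compare it with what the user typed.
let typedResponse = responseHex   // stand-in for the value the user keyed in
let expected = HMAC<Insecure.MD5>.authenticationCode(for: challenge, using: sharedSecret)
let expectedHex = expected.map { String(format: "%02X", $0) }.joined()
let accessGranted = (expectedHex == typedResponse)

A modern implementation would prefer HMAC-SHA256 over HMAC-MD5, which relates to the weaknesses mentioned above.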
Regarding the verbal challenge/response strategy: We used this approach to license a niche application on five thousand workstations world-wide for more than ten years. Our support team called it the "Missile Launch Codes" because of its similarity to the classic missile launch authentication process seen on old movies.
This is an extremely time-consuming way to protect your program. It consumed enormous amounts of our staff's and customers' time reading the codes back and forth. They all hated it.
Your situation/context may be different. Perhaps you won't be using it nearly as frequently as we did. But here are some suggestions:
Carefully consider the length and contents of the code: most users (and support staff) resent typing lots of characters. Many users are bad typists. Consider whether a long string and including punctuation marks and case sensitivity unduly burdens them compared to the amount of security added.
After years of using a verbal challenge/response implementation, we left it in place (as a fall-back) but added a simple automated system. We chose to use FTP rather than a more sophisticated web approach so that we didn't have to have any software running on our in-house server (or deal with our IT staff!)
Basically, we use FTP files to do the exchange that was previously done on the phone. The server places a file on the FTP server containing the challenge phrase. The file's name is the customer's name. Our support staff have a program that automatically creates this file on our ftp site.
The customer is instructed by our staff to hit a hot key that reads the FTP file, authenticates it, and places a response file back on the server.
Our support staff's software polls, waiting for the customer's software to create the response file. When it sees the file, it downloads it, confirms its contents, and deletes it from the server.
You can of course have this exchange happen as many times and in either direction as you need in a given session in order to accomplish your goals.
The data in the files can have the same MD5 keys that you would use verbally, so that it is as secure as you'd like.
A weakness in this system is that the user has to have FTP access. We've found that the majority of our users (all businesses) have FTP access available. (Of course, your customer base may not...) If our application in the field is unable to access our FTP site, it clearly announces the problem so that our customer can go to their IT staff to request that they open the access. Meanwhile, we just fall back to the verbal codes.
We used the plain vanilla Indy FTP tools with no problem.
No doubt there are some weaknesses in this approach (probably including some that we haven't thought of.) But, for our needs, it has been fantastic. Our support staff and customers love it.
Sorry if none of this is relevant to you. Hope this helps you some.
