The web site I'm working on currently provides an option for the user to download their data in an encrypted zip file. Standard zip file encryption is nearly worthless (so I've read), so I'm looking to replace that with something that uses AES encryption but still has a self-extracting format. There are a couple of issues with doing this, which I am sure someone has worked through before:
I don't know what platform the user is on (Mac or Windows or Linux) so I can't just make a self-extracting .exe file and assume that will work. I suppose I will need to ask. (I am already asking for a password.)
My web site is running on Linux, and I suspect that most programs that produce self-extracting encrypted .exe files expect to be run (to make the .exe) on a Windows machine. I suppose I could set up a virtual machine running Windows, and have my Linux server send that virtual machine a request (and the data) to make the .exe, but that sounds complicated.
The "ZIP encryption is rubbish" controversy dates from a long time ago (see here) and concerns the legacy ZipCrypto scheme. The main issue with modern ZIP encryption is that although it uses a 128-bit AES cipher, it still derives the key from a user-supplied password. The key-derivation scheme is public, so when a user-entered password has low entropy (i.e. a simple password) it becomes very easy to brute-force the key and open the file. If you assign a long, truly random password, it is considered very secure.
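To make the entropy point concrete, here is a small Python sketch of the derivation step (stdlib only; the salt, iteration count, and key length are illustrative of WinZip-style AES zips, not taken from any specific tool). With a low-entropy password, a dictionary attack recovers it almost instantly:

```python
import hashlib

# Stretch a password into an AES key the way AES-zip tools typically do
# (PBKDF2; salt, iteration count and key length here are illustrative).
def derive_key(password: str, salt: bytes, iterations: int = 1000) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations, dklen=16)

salt = b"\x00" * 8                       # fixed salt, demonstration only
target = derive_key("letmein", salt)     # key derived from a weak password

# Dictionary attack: with low entropy the search space is tiny.
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]
recovered = next(p for p in wordlist if derive_key(p, salt) == target)
print(recovered)  # → letmein
```

The same derivation with a long random password forces the attacker through an infeasible search space, which is why the format is considered secure with strong passwords.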
If I am to make an online backup using the neo4j-admin backup tool remotely, as is advised by Neo4J, I have to open a public IP and the backup port on my Neo4J application.
However, I don't see neo4j-admin asking for any login credentials, basically making it possible for anybody to access the server and copy all the data while the port is opened.
There is no setting inside the neo4j.conf that would only accept backup requests from a certain address.
So what does this mean? When online backups are done remotely, as advised, the database may be vulnerable to somebody else simply copying all the data.
I didn't find anything in Neo4J documentation that addresses this flaw (only a warning) and it looks like in more than 7 years that this feature has been available as a part of the commercial enterprise version there has not been any solution offered for this.
What do you do to protect the DB then? At the moment the only option seems to be not backing it up remotely, but that puts additional load on the server and is not ideal. Besides, the online backup is not stable when done locally for large DBs. Another option could be to open the port only briefly, via some kind of API call to the server, but that may still be exploited if somebody figures out the time frame in which the backup is made.
The documentation states that neo4j-admin must be invoked as the neo4j user. That is the user that owns the neo4j executables and the databases. So security is handled by the OS login, and file permissions should be set to prevent unauthorised access to the neo4j directories/files, including the neo4j-admin executable.
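The file-permission side of this can be sketched in Python (a temporary directory stands in for the real Neo4j data directory, which would typically live under a path like /var/lib/neo4j and be owned by the neo4j user):

```python
import os
import stat
import tempfile

# Stand-in for the Neo4j data directory; in production this would be
# the real installation path, owned by the neo4j user.
data_dir = tempfile.mkdtemp(prefix="neo4j-data-")

# Owner-only access: no group/other read, write, or traverse.
os.chmod(data_dir, 0o700)

mode = stat.S_IMODE(os.stat(data_dir).st_mode)
print(oct(mode))  # → 0o700
```

With permissions like this, only the neo4j account (and root) can reach the database files or run tooling against them, which is the protection the documentation is relying on.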
I am writing a script that holds personal information such as "User Id", "Password", "Server detail", and so on, and I want to secure all of this personal data.
As you know, a script stored on a NodeMCU is not secure at all: anybody can download the script and make a copy of my project.
So, I want to encrypt the script uploaded to the NodeMCU so that others cannot decrypt or read it.
Is it possible in NodeMCU?
I am using NodeMCU V3 (written on the back side of the NodeMCU board).
Initial Details :
NodeMCU custom build by frightanic.com
branch: 1.5.4.1-final
commit: b9436bdfa452c098d5cb42a352ca124c80b91b25
SSL: false
modules: file,gpio,mqtt,net,node,rtctime,tmr,uart,wifi
build created on 2019-09-21 17:56
powered by Lua 5.1.4 on SDK 1.5.4.1(39cb9a32)
lua: cannot open init.lua
It is possible to achieve a high security level, but not 100%. NodeMCU stores data in external flash, which is not protected from read-out, even if the data is encrypted.
At minimum you need a firmware with the standard crypto and TLS modules for basic encryption. Without TLS encryption (as a module for net communication) you are vulnerable even without anyone touching your device.
Better still is to use a modified firmware with custom encryption/decryption functionality that uses the internal unique chip ids as part of the key, making it harder to break.
Some interesting ideas: https://bbs.espressif.com/viewtopic.php?t=936
To protect your scripts, compile in binary form without storing original scripts: https://nodemcu.readthedocs.io/en/master/compiling/
Edit:
In module crypto you can add a modified version of crypto_encdec() as encryptintern/decryptintern with predefined/calculated key and iv.
To get device specific id for key calculation you can use MAC address with wifi_get_macaddr() and flash id with spi_flash_get_id() as suggested: https://bbs.espressif.com/viewtopic.php?t=1303
To encrypt/decrypt compiled scripts you can modify luaL_loadfile (require uses it too) to decrypt files, and luac.c for encryption on your host.
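The device-bound key idea can be illustrated off-device in Python (the MAC address, flash id, and build secret below are all made-up placeholders for what the firmware would read via wifi_get_macaddr() and spi_flash_get_id() in C):

```python
import hashlib

# Hypothetical device identifiers, standing in for what the firmware
# would read via wifi_get_macaddr() and spi_flash_get_id().
mac_addr = bytes.fromhex("5ccf7f123456")
flash_id = (0x1640EF).to_bytes(4, "little")

# Mix a build-time secret with the per-device ids so a key dumped from
# one device does not unlock scripts copied from another.
build_secret = b"per-build-secret"   # hypothetical, baked into the firmware
key = hashlib.sha256(build_secret + mac_addr + flash_id).digest()[:16]
print(len(key))  # → 16
```

Because the ids differ per chip, cloning the flash of one board does not yield a key that works elsewhere; it raises the per-device effort without being unbreakable.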
Note that nothing will help against an even halfway determined person. It's trivial to dump the contents of flash, find keys, and decrypt everything. Without hardware support (there are cheap crypto chips out there), it is impossible to secure these devices.
Depending on your situation, there are alternatives; for example, for my home usage I'm planning to set up a separate WiFi network that's low security (no access to the internet, just IoT devices) once I start deploying ESP8266-based devices. Yes, people can easily get the credentials, but they'll only be connected to a mostly useless network.
Security is very situational. What kind of attackers are you protecting against? How valuable is what you are protecting? It's hard to give advice without knowing more about that.
We need to protect customer data and are using FirebirdSQL 2.5(.8) with Delphi 7.
It is also essential to make regular backups to a "secondary" PC, or to pen drives, in case the "master" fails.
For that we used this method, calling Gbak.exe and 7z.exe with stdin/out.
We realized that was a bad idea, because it's very easy to see the parameters (passwords) passed on the command line while the process runs, even with a simple task manager.
Is there a more secure way to do it?
(Using standard Interbase components OR UIB)
Upgrade to Firebird 3, which added a Database Encryption capability. If you don't want to, or cannot, I believe you might run the GBAK tool from your application with the STDOUT option, but instead of using 7-Zip for compression you would read that output in your application and encrypt it with some encryption library on the fly.
I believe you can find many examples of how to run an application and read its standard output (here is something related to start with), so the rest is about finding a way to encrypt a stream on the fly, or simply capturing STDOUT in one stream and encrypting it in another.
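The read-STDOUT-and-encrypt-on-the-fly pattern looks like this in outline (sketched in Python rather than Delphi; the producer command is a stand-in for the real gbak ... stdout invocation, and the XOR transform is only a placeholder for a real cipher):

```python
import subprocess
import sys

# Stand-in for the real backup command, e.g. gbak -b ... stdout
producer_cmd = [sys.executable, "-c",
                "import sys; sys.stdout.write('backup-bytes' * 4)"]

def encrypt_chunk(chunk: bytes) -> bytes:
    # Placeholder only: a real implementation would feed an AES stream
    # cipher here. XOR with a fixed byte just marks the data transformed.
    return bytes(b ^ 0x5A for b in chunk)

proc = subprocess.Popen(producer_cmd, stdout=subprocess.PIPE)
encrypted = bytearray()
for chunk in iter(lambda: proc.stdout.read(4096), b""):
    encrypted.extend(encrypt_chunk(chunk))   # encrypt chunk by chunk
proc.wait()

# Round-trip check: XOR is its own inverse.
print(bytes(encrypt_chunk(bytes(encrypted))).decode())
```

In Delphi the same loop would read the pipe handle from the spawned process and feed each chunk to the encryption library, so the plaintext backup never touches the disk or the command line.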
Firebird folks on the SQL.ru forum say that it is actually possible to use the Services API to get the backup stream remotely.
That does not mean that IBX or UIB or any other library readily supports it, though. Maybe it does, maybe not.
They suggested reading the Release Notes for Firebird 2.5.2, or Part 4 of the doc\README.services_extension.txt file in a Firebird 2.5.2+ installation.
Below is a small excerpt from the latter:
The simplest way to use this feature is fbsvcmgr. To backup database
run approximately the following:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_backup -dbname some.fdb -bkp_file stdout >some.fbk
and to restore it:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_restore -dbname some.fdb -bkp_file stdin <some.fbk
Please notice - you can't use "verbose" switch when performing backup
because data channel from server to client is used to deliver blocks
of fbk files. You will get appropriate error message if you try to do
it. When restoring database verbose mode may be used without
limitations.
If you want to perform backup/restore from your own program, you
should use services API for it. Backup is very simple - just pass
"stdout" as backup file name to server and use isc_info_svc_to_eof in
isc_service_query() call. Data, returned by repeating calls to
isc_service_query() (certainly with isc_info_svc_to_eof tag) is a
stream, representing image of backup file.
Restore is a bit more tricky. Client sends new spb parameter
isc_info_svc_stdin to server in
isc_service_query(). If service needs some data in stdin, it returns
isc_info_svc_stdin in query results, followed by 4-bytes value -
number of bytes server is ready to accept from client. (0 value means
no more data is needed right now.) The main trick is that client
should NOT send more data than requested by server - this causes an
error "Size of data is more than requested". The data is sent in next
isc_service_query() call in the send_items block, using
isc_info_svc_line tag in traditional form: isc_info_svc_line, 2 bytes
length, data. When the server needs next portion, it once more returns
non-zero isc_info_svc_stdin value from isc_service_query().
A sample of how services API should be used for remote backup and
restore can be found in source code of fbsvcmgr.
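The flow-control rule in the restore description above (the client must never send more than the server asked for) can be sketched as pure logic, independent of the actual isc_service_query() API; the request sizes here are made up:

```python
# Sketch of the restore-side flow: the server repeatedly announces how
# many bytes it will accept (isc_info_svc_stdin), and the client must
# never exceed that, or it gets "Size of data is more than requested".
def send_backup(data: bytes, requested_sizes):
    sent = []
    pos = 0
    for limit in requested_sizes:          # each isc_info_svc_stdin value
        if pos >= len(data) or limit == 0: # 0 means "no data needed now"
            break
        chunk = data[pos:pos + limit]      # never exceed the server's limit
        sent.append(chunk)
        pos += len(chunk)
    return sent

chunks = send_backup(b"0123456789", [4, 4, 4])
print(chunks)  # → [b'0123', b'4567', b'89']
```

The real client additionally wraps each chunk in an isc_info_svc_line item (tag, 2-byte length, data), but the pacing logic is the part that trips people up.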
I've noticed that several tutorials for most of the major social-network players have examples where an API key tied to your account is embedded (usually in plain text) in the source code. For example, the Google Maps APIs Premium Plan. This key is used to bill your company.
I found a similar question in Is it safe to put private API keys in your .m files when exporting to the appstore? - Of note, anyone with a jailbroken phone can see the unencrypted executable.
Is this practice actually safe, and if so, why?
Embedding API keys in an app is not secure and generally not a good practice, but obtaining them does require a substantial work factor; it is not trivial. There is no tool to decrypt the executable other than the OS itself, at execution time.
RE: "anyone with a jailbroken phone can see the unencrypted executable" - this is not really true. Jailbreaking alone will not decrypt the app binary; it is only decrypted as the binary is loaded into RAM for execution (the decryption happens in hardware, in the DMA path), and the key is not available before that. One needs to add debugging tools and catch the binary after it has been loaded into memory for execution.
You need to determine who the attacker is, how much skill and time the attacker will spend and the cost to you.
There is no 100% secure solution, only increasing the work factor.
An alternative is to obtain the API keys on first run at login to a server and then move them to the Keychain. But this is also just an increase in work factor because as above the executable can be examined at run time when it is sent to the service.
As long as the key has to be in the app memory during any part of execution it is vulnerable.
Putting the API keys in the source may therefore meet your security needs.
Background:
I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory.
So, in order to do this, the app is required to have the key stored in memory.
The question is, is there any good best practices for this? Securing the key in memory.
A few ideas:
Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
Splitting the key over multiple memory locations.
Encrypting the key. With what, and how to keep the...key key.. secure?
Loading the key from file each time it's required (slow, and if the evildoer can read our memory he can probably read our files too)
Some scenarios on why the key might leak: evildoer getting hold of mem dump/core dump; bad bounds checking in code leading to information leakage;
The first one seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices?
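Idea (1) can be sketched with ctypes on Linux (using mlock(2) rather than shmctl(2), but with the same "don't swap this page" intent; the lock may legitimately fail without CAP_IPC_LOCK or enough RLIMIT_MEMLOCK, so the sketch checks the result instead of assuming success), together with explicit zeroization:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]    # full 64-bit addresses
libc.munlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

# Keep the key in a buffer we control, so it can be locked and wiped.
key = ctypes.create_string_buffer(b"0123456789abcdef0123456789abcdef", 32)
addr, size = ctypes.addressof(key), ctypes.sizeof(key)

# Idea (1): ask the kernel never to swap this page out. Can fail
# (nonzero return) without CAP_IPC_LOCK or enough RLIMIT_MEMLOCK.
locked = libc.mlock(addr, size) == 0

# ... use the key ...

# Wipe before the buffer is freed; ctypes.memset cannot be optimized
# away the way a plain C memset sometimes is.
ctypes.memset(addr, 0, size)
if locked:
    libc.munlock(addr, size)
print(key.raw == b"\x00" * 32)  # → True
```

Locking plus wiping addresses the core-dump/swap leak scenarios above; it does nothing against an attacker who can already read live process memory.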
Thanks for any input!
It all depends on the level of your paranoia and the sensitivity of the key/data. In the extreme case, as soon as you have an unencrypted key in memory, one can retrieve it using cold-boot techniques. There is an interesting development at frozencache that tries to defeat that. I only read about it casually and did not try it in practice, but it seems like an interesting approach.
With the tinfoil hat off, though, (1), (2), and (3) do seem reasonable. (4) won't cut it, precisely for the reason you mentioned. (Not only is it slow, but assuming you read into the stack, with different stack depths the key might become visible more than once.)
Assuming the decrypted data is worth it and might end up in swappable memory, you should definitely encrypt the swap itself as well. The root and /tmp partitions should also be encrypted. This is a fairly standard setup, readily covered in most guides for these OSes.
And then, of course, you want to ensure a high level of physical security for the machine itself and minimize the functions it performs - the less code that runs, the less the exposure. You might also want to minimize the possibilities for remote access to this machine - i.e. use key-based SSH only, blocked by an ACL controlled from another host. Port knocking can be used as an additional authentication factor before being able to log in to that second host. To make it harder to get the data out if the host is compromised, ensure this host has no directly routable connection to the internet.
In general, the more painful you make it to get to the sensitive data, the less chance someone is going to get there; however, this is also going to make life painful for the regular users, so there needs to be a balance.
If the application is serious and the stakes are high, it is best to build a more explicit overall threat model, see which attack vectors you can foresee, and verify that your setup effectively handles them (and don't forget to include the human factor :-)
Update: and indeed, you might use specialized hardware to deal with the encryption/decryption. Then you don't have to deal with storing the keys - see Hamish's answer.
If you are serious about security then you might consider a separate cryptographic subsystem. Preferably one that is FIPS 140-2/3 certified (list of certified modules).
Then the key is held in tamper-proof memory (non-extractable) and all cryptographic operations are performed inside the crypto boundary.
Expensive but for some applications necessary.
Also don't forget the threat of core dumps and your memory being swapped out!
On both POSIX (e.g. Linux) and Windows systems, there are techniques to prevent that from happening if you're working in C - see this section of the CERT Secure Coding Standards:
MEM06-C. Ensure that sensitive data is not written out to disk
The big problem is the program has to read the key from somewhere. Unless you accept direct keyboard input each time the server reboots, it pretty much has to exist on disk somewhere.
In general you have to assume the evildoer doesn't have access to the root-level operating system or the hardware, because when that's the case they'll eventually manage to get the key even if it's only in RAM.
So you assume the server's OS is secure. But suppose somebody can steal the hard drive, so that starting the server from it would give them the key. Then let the server ask another server for half of the key; the remote server validates the request (using IP checks, private/public key pairs) and supplies its half. Your server then has the complete key, while the remote server never holds more than half. That seems to me an improved level of protection.
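That two-server half-key scheme can be sketched as a simple XOR split (Python; with XOR, each share alone is statistically random, so a stolen disk containing only one share reveals nothing about the key):

```python
import secrets

def split_key(key: bytes):
    share_a = secrets.token_bytes(len(key))               # kept on the local disk
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # held by the remote server
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    # XOR is its own inverse: (key ^ a) ^ a == key
    return bytes(x ^ y for x, y in zip(share_a, share_b))

key = secrets.token_bytes(16)
a, b = split_key(key)
print(combine(a, b) == key)  # → True
```

The remote server's request validation then gates when the two shares can ever meet, which is where the real protection lives.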
I'd be looking at what
openssh,
openssl,
GnuPG (see related sub-projects via the project-root dropdown), and
GnuTLS
do when handling keys. They're sufficiently paranoid about such security matters...
Use of hardware-protected, "super super user" memory is ideal. Macs with a T2 security chip (and all Apple silicon Macs) have this Secure Enclave memory area, which also includes AES decryption in hardware, such that the application and the operating system never have access to the raw private key. When the machine boots, a password is (optionally) typed in, and the Secure Enclave decrypts its cold-flash encrypted version of the key into its own RAM area, which is not accessible by the main operating system.
Nice side effect is the hardware accelerated encryption: I benchmarked 600 MB/sec writes to my PCIe storage on a freshly formatted encrypted disk.
In the cloud, Amazon offers AWS Key Management Service (KMS), a managed service that makes it easy to create and control the encryption keys used to encrypt your data, and that uses FIPS 140-2 validated hardware security modules to protect them: https://aws.amazon.com/kms/