We all know there are situations where you cannot go open source and freely distribute software - and I am in one of those situations.
I have an app that consists of a number of binaries (compiled from C sources) and Python code that wraps it all into a system. The app used to run as a cloud solution, so users could access its functions over the network but had no way to touch the actual server where the binaries and code are stored.
Now we want to deliver a "local" version of our system. The app will be running on PCs that our users physically own. We know that any protection can be broken, but we at least want to protect the app from copying and reverse engineering as much as possible.
I know that Docker is a wonderful deployment tool so I wonder: is it possible to create encrypted Docker containers where no one can see any data stored in the container's filesystem? Is there a known solution to this problem?
Also, maybe there are well known solutions not based on Docker?
The root user on the host machine (where the docker daemon runs) has full access to all the processes running on the host. That means the person who controls the host machine can always get access to the RAM of the application as well as the file system. That makes it impossible to hide a key for decrypting the file system or protecting RAM from debugging.
Using obfuscation on a standard Linux box, you can make it harder to read the file system and RAM, but you can't make it impossible - otherwise the container couldn't run at all.
If you can control the hardware running the operating system, then you might want to look at the Trusted Platform Module which starts system verification as soon as the system boots. You could then theoretically do things before the root user has access to the system to hide keys and strongly encrypt file systems. Even then, given physical access to the machine, a determined attacker can always get the decrypted data.
What you are asking about is called obfuscation. It has nothing to do with Docker and is a very language-specific problem; for data you can always do whatever mangling you want, but while you can hope to discourage the attacker it will never be secure. Even state-of-the-art encryption schemes can't help since the program (which you provide) has to contain the key.
C is usually hard enough to reverse engineer; for Python, you can try pyobfuscate and similar tools.
For data, I found this question (keywords: encrypting files game).
If you want a completely secure solution, you're searching for the 'holy grail' of confidentiality: homomorphic encryption. In short, you want to encrypt your application and data, send them to a PC, and have this PC run them without its owner, OS, or anyone else being able to snoop on the data.
Doing so without a massive performance penalty is an active research area. There has been at least one project that managed it, but it still has limitations:
It's Windows-only
The CPU has access to the key (i.e., you have to trust Intel)
It's optimised for cloud scenarios. If you want to install this on multiple PCs, you need to provide the key in a secure way (i.e. just go there and type it in yourself) to one of the PCs on which you're going to install your application, and that PC should be able to securely propagate the key to the other PCs.
Andy's suggestion on using the TPM has similar implications to points 2 and 3.
Sounds like Docker is not the right tool, because it was never intended to be used as a full-blown sandbox (at least based on what I've been reading). Why aren't you using a more full-blown VirtualBox approach? At least then you're able to lock up the virtual machine behind logins (as much as a physical installation on someone else's computer can be locked up) and run it isolated, with encrypted filesystems and the whole nine yards.
You can either go lightweight and open, or fat and closed. I don't know that there's a "lightweight and closed" option.
I have exactly the same problem. What I have been able to discover so far is below.
A. Asylo (https://asylo.dev)
Asylo requires programs/algorithms to be written in C++.
The Asylo library is integrated with Docker, and it seems feasible to create a custom Docker image based on Asylo.
Asylo depends on several less common technologies like protocol buffers and Bazel, so to me it seems the learning curve will be steep, i.e. the person creating the Docker images/programs will need a lot of time to understand how to do it.
Asylo is free of charge
Asylo is brand new, with all the advantages and disadvantages that come with that.
Asylo is produced by Google but it is NOT an officially supported Google product according to the disclaimer on its page.
Asylo promises that data in the trusted environment can be protected even from a user with root privileges. However, documentation is lacking, and it is currently not clear how this could be implemented.
B. Scone (https://sconedocs.github.io)
It is tied to Intel SGX technology, but there is also a simulation mode (for development).
It is not free; only a small set of its functionality is available without paying.
It seems to support a lot of security functionality.
It is easy to use.
They seem to have more documentation and instructions on how to build your own Docker image with their technology.
For the Python part, you might consider using PyInstaller. With appropriate options, it can pack your whole Python app into a single executable file, which end users can run without a Python installation. It effectively runs a Python interpreter on the packaged code, but it has a cipher option which allows you to encrypt the bytecode.
Yes, the key will be somewhere around the executable, and a very savvy customer might have the means to extract it, thus unraveling code that is not very readable anyway. It's up to you to know if your code contains some big secret you need to hide at all costs. I would probably not do it if I wanted to charge big money for any bug fixing in the deployed product. I could use it if the client has good compliance standards and is not a potential competitor, nor is expected to pay for more licenses.
While I've done this once, I honestly would avoid doing it again.
Regarding the C code: if you compile it into executables and/or shared libraries, they can be included in the executable generated by PyInstaller.
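As a minimal sketch of what the PyInstaller invocation with the cipher option might look like (the entry-point script name and the 16-character key here are placeholders, and note that the bytecode cipher is only available in older PyInstaller releases):

pyinstaller --onefile --key=0123456789abcdef myapp.py

This produces a single executable under dist/ with the bundled .pyc files encrypted using the given key.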
I recently read about decompilation of iOS apps and I'm now really concerned about it. As stated in the following posts (#1 and #2) it is possible to decompile an iOS app which is distributed to the App Store. This can be done with a jailbreak and, I think, also by copying the app from memory to disk. With some tools it is possible to
read out strings (strings tools)
dump the header files
reverse engineer to assembly code
It seems NOT to be possible to reverse engineer to Cocoa code.
As security is a feature of the software I create, I want to prevent bad users from reconstructing my security functions (encryption with key or log in to websites). So I came up with the following questions:
Can someone reconstruct my saving and encryption or login methods with assembly? I mean, can they understand what exactly is going on (what is saved to which path at which time, which key is used, with what credentials a login to which website is performed)? I have no understanding of assembly; it looks like the Matrix to me...
How can I securely use NSStrings which cannot be read out with strings or read in assembly? I know one can obfuscate strings - but that is still not secure, is it?
This is a problem that people have been chasing for years, and any sufficiently motivated person with skills will be able to find whatever information you don't want them to find, if that information is ever stored on a device.
Without jailbreaking, it's possible to disassemble apps by using the purchased or downloaded binary. This is static inspection and is facilitated with standard disassembly tools, although you need a tool which is good enough to add symbols from the linker and understand method calls sufficiently to be able to tease out what's going on. If you want to get a feel for how this works, check out Hopper; it's a really good disassembly/reverse-engineering tool.
Specifically to your secure log in question, you have a bigger problem if you have a motivated attacker: system-based man-in-the-middle attacks. In this case, the attacker can shim out the networking code used by your system and see anything which is sent via standard networking. Therefore, you can't depend on being able to send any form of unencrypted data into a "secure" pipe at the OS or library level and expect it not to be seen. At a minimum you'll need to encrypt before getting the data into the pipe (i.e. you can't depend on sending any plain text to standard SSL libraries). You can compile your own set of SSL libraries and link them directly in to your App, which means you don't get any system performance and security enhancements over time, but you can manually upgrade your SSL libraries as necessary. You could also create your own encryption, but that's fraught with potential issues, since motivated hackers might find it easier to attack your wire protocol at that point (publicly-tested protocols like SSL are usually more secure than what you can throw together yourself, unless you are a particularly gifted developer with years of security/encryption experience).
However, all of this assumes that your attacker is sufficiently motivated. If you remove the low-hanging fruit, you may be able to prevent a casual hacker from making a simple attempt at figuring out your system. Some things to avoid:
storing plain-text encryption keys for either side of the encryption
storing keys in specifically named resources (a file named serverkey.text or a key stored in a plist with a name which contains key are both classics)
avoid simple passwords wherever possible
But, most important is creating systems where the keys (if any) stored in the application itself are useless without information the user has to enter themselves (directly, or indirectly through systems such as OAUTH). The server should not trust the client for any important operation without having had some interaction with a user who can be trusted.
Apple's Keychain provides a good place to store authentication tokens, such as the ones retrieved during an OAUTH sequence. The API is a bit hard to work with, but the system is solid.
In the end, the problem is that no matter what you do, you're just upping the ante on the amount of work that it takes to defeat your measures. The attacker gets to control all of the important parts of the equation, so they will eventually defeat anything on the device. You are going to need to decide how much effort to put into securing the client, vs securing the server and monitoring for abuse. Since the attacker holds all of the cards on the device, your better approach is going to be methods that can be implemented on the server to enhance your goals.
I have used dxdiag before, but I would prefer to point potential users to some tool that's a bit simpler, that they can just run and email me the output.
As well as obvious things like CPU, RAM, graphics, DirectX version and Windows version, I also need to know if ExpressCard (a laptop standard) is supported.
I know this isn't quite a programming question, but it's critical to establishing a way to tell users if their hardware supports our software before we deploy it.
System Information tool (msinfo32). Comes standard with the OS; supports a bunch of command-line switches as well, allowing you to automate it.
In particular, you can tell the users to just run the following command and email you the resulting .txt file:
msinfo32 /report "%USERPROFILE%\desktop\configuration.txt"
Or if you want a subset, just filter it out based on the categories.
Not sure where the info about ExpressCard is in it, but it should be in there somewhere.
All this information will be available through the Windows Management Instrumentation (WMI).
There are some Microsoft-provided examples available here.
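If you'd rather collect a subset of this programmatically instead of parsing the msinfo32 report, here is a rough sketch using the third-party Python wmi package (the WMI classes and properties shown are just examples of what's available; pick the ones you actually need):

import wmi  # pip install wmi; Windows only

c = wmi.WMI()
for cpu in c.Win32_Processor():
    print(cpu.Name, cpu.NumberOfCores)        # CPU model and core count
for osinfo in c.Win32_OperatingSystem():
    print(osinfo.Caption, osinfo.Version)     # Windows edition and version
for system in c.Win32_ComputerSystem():
    print(system.TotalPhysicalMemory)         # installed RAM in bytes

The same queries can be issued from VBScript or PowerShell if you don't want a Python dependency on end-user machines.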
FastCGI is old but it still seems like it must be the right answer in some cases.
It seems like the preferred deployment of Perl/Catalyst web applications is with FastCGI.
FastCGI was popular with Rails but seems to no longer be. (Why?)
The Java world doesn't seem to have anything to do with FastCGI. Is something like Tomcat way better than Apache+FastCGI?
Is choosing FastCGI still a good idea or just a lingering technology?
Ted
Since it depends a lot on your setup and requirements, I'll let the "Is X still a right answer?" up to you. However, by looking at different architectures, you can come up with a list of questions to ask to determine if it still is a right answer given specific circumstances.
Concerns of frequent interest
The questions you'll want to ask are usually related to security and flexibility. For security, you'll want to follow the principle of least privilege. For flexibility, you'll want to know if you can run multiple frameworks, multiple versions of the framework and how easily you can delegate work to other tasks.
Other concerns
For a simple web front-end to a database-backed application, not all of these questions are important. You also need to keep in mind that some of the recommendations have nothing to do with what's outlined here. Many web frameworks will recommend whatever architecture is easiest to setup with their framework. They do this because it helps get new users trying out the framework with minimal fuss and without flooding the mailing list. Also, the Java community tends to stick to a common denominator rather than take full advantage of the platform at hand, so they'll often recommend an all-Java solution.
Popular architectures
Single process architectures
From a pure performance point of view, a single process (probably threaded) with an embedded framework probably gives the most performance, as it reduces most of the communication overhead between whatever receives the request and whatever produces a response.
Security: a single process must have all of the permissions required to perform every single task it is handed. In simple applications, this might not be a problem. However, it's possible you might be serving multiple services.
Flexibility: you probably can't run multiple versions of the same framework (e.g. code for different parts of your website requires different versions of Java, Rails, Python, etc.). Moreover, changing your setup to serve some work on different machines becomes painful (less difficult when split up on virtual hosts).
Sub-process based architectures
Under the CGI model, you have to pay the price of spawning a new process for each request. Even on UNIX machines where spawning a process is considered cheap, 600 requests a second will kill your server if you spawn a process for each.
Security: to spawn child processes under different user accounts, your gateway probably runs under quite high privileges.
Flexibility: additional flexibility for the multiple frameworks, multiple versions, multiple languages approach, but you're still stuck on the same machine.
Distributed architectures
The FastCGI/SCGI approach tried to solve the CGI process management problem in a clean way. Just keep the process alive. Have the gateway talk to that process to serve the request.
Security: Because the gateway doesn't spawn the processes that serve requests, the gateway can run with far less privileges enabled. Actually, if it only serves as a gateway and doesn't do any work itself, it can run with hardly any privileges at all.
Flexibility: you get even better flexibility than the CGI model because you can forward the request to any machine on the network.
Conclusion
I like FastCGI because it gives me high flexibility at a price (i.e. requests forwarded through a socket) I can afford to pay. It's not my full-time job to administer systems. I don't develop all the apps I host. This means I look for the easiest solution for hosting whatever I try to host. FastCGI is popular enough to be supported by major web servers and popular web frameworks. Adding another app usually just boils down to installing it and mapping the desired URL to the application over FastCGI.
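To make the "long-lived process behind a socket" idea concrete, here is a minimal sketch of a WSGI application served over FastCGI with the Python flup package (the bind address and port are arbitrary placeholders; the front-end web server would be configured to forward FastCGI requests to that socket):

# Minimal WSGI app served over FastCGI using flup (pip install flup).
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    # Trivial handler: every request gets the same plain-text response.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a long-lived FastCGI process\n']

if __name__ == '__main__':
    # Apache/nginx/lighttpd forwards requests to this TCP socket.
    WSGIServer(app, bindAddress=('127.0.0.1', 9000)).run()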
How can I write a cloud-aware application, i.e. an application that takes advantage of being deployed in the cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take if I am to migrate an application to being cloud-aware?
Also, I am about to implement a web application idea which needs features like security, performance and caching, and more importantly has to be free. I have been comparing some frameworks and found that Django has the least RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks that I have seen/know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails....
So I would leave this to your opinion as well: suggest a good free framework based on my needs.
Finally thanks for reading the essay ;)
I feel a matrix moment coming on... "what is the cloud? The cloud is all around us, a prison for your program..." (what? the FAQ said bring your sense of humour...)
OK, so seriously, what is the cloud? It depends on the implementation, but the usual features include scalable computing resources and charges per CPU-hour, storage used, etc. So yes, it is a bit like developing on your VPS/a normal server.
As I understand it, Google App Engine allows you to consume as much as you want. The back-end resource management is done by Google and billed to you and you pay for what you use. I believe there's even a free threshold.
Amazon EC2 exposes an API that actually allows you to add virtual machine instances (someone correct me please if I'm wrong) having pre-configured them, deploy another instance of your web app, talk between private IP ranges if you wish (slicehost definitely allow this). As such, EC2 can allow you to act like a giant load balancer on the front-end passing work off to a whole number of VMs on the back end, or expose all that publicly, take your pick. I'm not sure on the exact detail because I didn't build the system but that's how I understand it.
I have a feeling (but I know least about Azure) that on Azure, resource management is done automatically, for you, by Microsoft, based on what your app uses.
So, in summary, the cloud is different things depending on which particular cloud you choose. EC2 seems to expose an API for managing resource, GAE and Azure appear to be environments which grow and shrink in the background based on your use.
Note: I am aware there are certain constraints developing in GAE, particularly with Java. In a minute, I'll edit in another thread where someone made an excellent comment on one of my posts to this effect.
Edit as promised, see this thread: Cloud Agnostic Architecture?
As for a choice of framework, it really doesn't matter as far as I'm concerned. If you are planning on deploying to one of these platforms you might want to check framework/language availability. I personally have just started Django and love it, having learnt python a while ago, so, in my totally unbiased opinion, use Django. Other developers will probably recommend other things, based on their preferences. What do you know? What are you most comfortable with? What do you like the most? I'd go with that. I chose Django purely because I'm not such a big fan of PHP, I like Python and I was comfortable with the framework when I initially played around with it.
Edit: So how do you write cloud-aware code? You design your software in such a way it fits on one of these architectures. Again, see the cloud-agnostic thread for some really good discussion on ways of doing this. For example, you might talk to some services on GAE which scale. That they are on GAE (example) doesn't really matter, you use loose coupling ideas. In essence, this is just a step up from the web service idea.
Also, another feature of the cloud I forgot to mention is the idea of CDNs being provided for you - some cloud implementations might move your data around the globe to make it more efficient to serve, or just because that's where they've got space. If that's an issue, don't use the cloud.
I cannot answer your question - I'm not experienced in such projects - but I can tell you one thing... both CakePHP and CodeIgniter are designed for PHP4 - in other words, for really old technology. And it seems nothing is going to change in their case. Symfony (especially the 2.0 version, which is still in heavy beta) is worth considering, but as I said at the very beginning - I cannot back this up with my own experience.
For designing applications for deployment to the cloud, the main thing to consider is recoverability. If your server is terminated, you may lose all of your data. If you're deploying on Amazon, I'd recommend putting all data that you need persisted onto an Elastic Block Storage (EBS) device. This would be data like user-generated content/files, the database files and logs. I also use the EBS snapshot on a 5 day rotation so that's backed up itself. That said, I've had a cloud server up on AWS for over a year without any issues.
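As a rough illustration of the snapshot part, something like the following (using the boto3 AWS SDK for Python; the region, volume ID and description are placeholders) could be run from cron on whatever rotation you choose:

# Sketch: snapshot an EBS volume that holds the persistent application data.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
response = ec2.create_snapshot(
    VolumeId='vol-0123456789abcdef0',  # hypothetical data volume
    Description='Nightly backup of application data volume',
)
print(response['SnapshotId'])

Old snapshots can be pruned with describe_snapshots and delete_snapshot to keep the rotation bounded.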
As for frameworks, I'm giving Grails a try at the minute and I'm quite enjoying it. Built to be syntactically similar to Rails but runs on the JVM. It means you can take advantage of all the Java goodness, like threading, concurrency and all the great libraries out there to build your web application.
I was recently approached by a network-engineer co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there's a lot of small adjustments being made on a daily basis.
I am thinking it would be helpful to write him a winform app to manage the 32 Cisco devices on-site. I'd like to initially provide functionality which could modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN... adding more to the list as it's deemed valuable.
My initial thought was to emulate a telnet session with the network device; utilizing my network-engineer's familiarity with the command-line / IOS interaction. Minimal time would be required to learn Cisco IOS conventions, myself.
Though while searching for solutions, it appears that most people favor SNMP. That, or their specific circumstances pushed them in the direction of SNMP.
I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?
SNMP is great for getting information out of a Cisco device, but is not very useful controlling the device. (although technically, you can push a new config to a Cisco IOS device using a combination of SNMP and TFTP. But sending a whole new config is a pretty blunt instrument for controlling your router or switch).
One of the other commenters mentioned the Cisco IOS XR XML API. It's important to note that the IOS XR XML API is only available on devices that run IOS XR. IOS XR is only used on a few of Cisco's high end carrier class devices, so for 99% of all Cisco routers and switches the IOS XR XML API is not an option.
Other possibilities are SSH or HTTP (many Cisco routers, switches, AP, etc. have an optional web interface). But I'd recommend against either of those. To my knowledge, the web interface isn't very consistent across devices, and a rather surprising number of Cisco devices don't support SSH, or at least don't support it in the base license.
Telnet is really the only way to go, unless you're only targeting a small range of device models. To give you something to compare against, Cisco's own CiscoWorks network management software uses Telnet to connect to managed devices.
I wouldn't use SNMP; instead, look at a little language called 'expect'. It makes for a very nice expect/response processor for these routers.
I have done a reasonable amount of real world SNMP programming with Cisco switches and find Python on top of Net-SNMP to be quite reasonable. Here is an example, via Google books, of uploading a new Cisco configuration via Net-SNMP and Python: Cisco Switch Upload via Net-SNMP and Python. I should disclose I was the co-author of the book referenced in the link.
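To give a feel for what the Net-SNMP Python bindings look like, here is a minimal read-only sketch (the hostname and community string are placeholders; the configuration-upload example in the book linked above builds on the same Session/Varbind objects):

import netsnmp  # Net-SNMP Python bindings

# Query sysDescr.0 from a switch over SNMP v2c.
session = netsnmp.Session(DestHost='switch1.example.com',
                          Version=2,
                          Community='public')
varlist = netsnmp.VarList(netsnmp.Varbind('sysDescr', '0'))
print(session.get(varlist))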
Everyone's mileage may vary, but I personally do not like using expect, and prefer to use SNMP because it was actually designed to be a "Simple Network Management Protocol". In a pinch, expect is OK, but it would not be my first choice. One of the reasons some companies use expect is that a developer just gets used to using expect. I wouldn't necessarily write off SNMP just because there is an example of someone automating telnet or ssh. Try it out for yourself first.
There can be some truly horrible things that happen with expect that may not be obvious. Because expect waits for input, under the right conditions there can be very subtle problems that are difficult to debug. This doesn't mean a very experienced developer can't write reliable code with expect, but it is something to be aware of.
One of the other things you may want to look at is an example of using the multiprocessing module to write non-blocking SNMP code. Because this is my first post to Stack Overflow I cannot post more than one link, but if you google for it you can find it, or another one on using IPython and Net-SNMP.
One thing to keep in mind when writing SNMP code is that it involves reading a lot of documentation and doing trial and error. In the case of Cisco, the documentation is quite good though.
SNMP isn't bad but it may not be able to do everything you need it to do. Depending on the library you use and how it hides the details of interacting with SNMP you may have a hard time finding the correct parts of the MIB to change and even knowing what or how to change them to do what you want.
One reason not to use SNMP is that you can do all the configuration you need using the IOS XR XML API. It could be a lot easier to bundle up the commands you want to send to the devices using that than to interact with SNMP.
I've found SNMP to be a pain for management. If you just need to grab a little data it's great; if you need to change things or use it heavily it can be very time consuming. In my case I'm comfortable with the CLI, so a Telnet approach works well. I've written some Python scripts to perform administrative tasks on various pieces of network gear using telnetlib.
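For completeness, a bare-bones sketch of that telnetlib approach (the address, credentials, prompt strings and command are placeholders; prompts vary by device and IOS version, and note that telnetlib has been removed from the most recent Python releases):

import telnetlib

HOST = '192.0.2.10'  # hypothetical switch management address
tn = telnetlib.Telnet(HOST)
tn.read_until(b'Username: ')
tn.write(b'admin\n')
tn.read_until(b'Password: ')
tn.write(b'secret\n')
tn.write(b'terminal length 0\n')  # disable paging so output isn't chunked
tn.write(b'show vlan brief\n')
tn.write(b'exit\n')
print(tn.read_all().decode('ascii'))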
SNMP has quite a significant CPU hit on the devices in question compared to telnet; I'd recommend telnet wherever possible. (As stated in a previous answer, the IOS XR XML API would be nice, but as far as I know IOS XR is only deployed on high-end carrier grade routers).
In terms of existing configuration management systems, two commercial players are HP Opsware, and EMC Voyence. Both will probably do what you need. I'm not aware of many open source solutions that actually support deploying changes. (RANCID, for example, only does configuration monitoring, not pre-staging and deploying config changes).
If you are going to roll your own solution, one thing I would recommend is sitting down with your network admin and coming up with a best-practice deployment model for the service he's providing (e.g. standardised ACL, QoS queue, and VLAN names; similar entries in ACLs that have the same function for different customers, etc.). Ensure that all the existing deployed config complies with this BP before you start your design, it will make the problem much more manageable. Best of luck.
Sidenote: before you reinvent the wheel writing another service provisioning system/network management system, try looking for existing ones. I know quite a lot of commercial solutions of various degrees of flexibility/functionality, but I am sure there are quite a few open-source ones as well.
Cisco has included menu options for helpdesk applications. Basically you telnet to the box and it presents a nice clean menu (press 1, 2, 3). For more info check this link:
http://www.cisco.com/en/US/docs/ios/12_2/configfun/command/reference/frf001.html#wp1050026
Another vote for expect.
Also, you don't want to allow configuration of your firewalls via either telnet or SNMP - ssh is the only way to go. The reason is that ssh encrypts its payload, and will not expose the privileged management credentials to potential interception.
If for some reason you cannot use ssh directly, consider connecting up an ssh-enabled serial console server to the firewall's console port and configuring it that way.