I'm really confused. Is software fault isolation the same as sandboxing, or are they different? Everywhere I read that sandboxing means we can run untrusted code without affecting other programs or the host. OK, but how? Is it done by memory isolation? I mean, when a program can only access its own memory and can't touch other processes' memory, do we call that a sandbox?
Sandboxing lets you run untrusted code, as you said, and there are different levels of sandbox. Memory isolation, which the operating system already enforces between processes, can be treated as one kind of sandbox. However, processes can still share the same libraries and files, so there is Docker, which isolates those as well. But two processes in separate Docker containers still run on the same kernel and the same physical machine, so there is virtualization, which can be treated as a yet higher level of sandbox.
A sandbox is about isolating whatever runs inside it from the host system and from other applications running on that host.
"Software fault isolation" (SFI), strictly speaking, is itself a sandboxing technique: a compiler or binary rewriter inserts checks so that an untrusted module can only read, write, and jump within its own region of the address space, giving memory isolation without separate OS processes. The similar-sounding phrase "fault isolation" is also used in debugging, where it means narrowing a problem down: the practice of producing the smallest possible code that reproduces a bug, so the engineer fixing it can quickly find the root cause, because there are fewer possibilities once the problem is seen in isolation.
Sandboxes may be used to isolate problems in that debugging sense too, especially when working on dangerous code such as viruses, or on systems that could destabilize the host you are debugging from.
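To make the OS-level kind of sandbox concrete, here is a minimal sketch using Linux seccomp strict mode. This is one illustration among several mechanisms, assuming a reasonably recent Linux kernel:

```cpp
// Minimal sketch: Linux seccomp "strict mode" as a kernel-enforced sandbox.
// After the prctl() call, the process may only use read(), write(), _exit()
// and sigreturn(); any other system call kills it with SIGKILL.
#include <fcntl.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");            // kernel without seccomp support
        return 1;
    }
    const char msg[] = "write() is still allowed\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    open("/etc/passwd", O_RDONLY);  // violates the sandbox: killed here
    return 0;                       // never reached
}
```

Real-world sandboxes (browsers, container runtimes) use the more flexible seccomp-BPF filters plus namespaces, but the principle is the same: the kernel, not the untrusted code, decides which operations are allowed.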
There are many articles and videos comparing containers, virtual machines, and physical machines. However, almost all of the information is theoretical: containers are fast, VMs are secure, and so on. I could not find descriptions of specific use cases, or guidance on when to choose virtual machines or physical machines instead of containers. So currently I cannot imagine a situation in which somebody would recommend not using containers.
Questions:
Could you please list specific applications or solutions for which you would recommend using VMs, but not containers?
Could you please list specific applications or solutions for which you would recommend running the OS on bare metal, rather than containers or VMs?
Here is an example of the kind of answer I would appreciate (note that I am not sure whether this information is correct):
Use case 1: Edge Router
An edge router is a router that connects an organization's network to the Internet. In this case, assume the vendor provides the router not as a device but as a software package (a virtualized router).
An edge router will most probably be a target of attacks, so security requirements come first.
Containers are not recommended in this case. By default, containers provide a mediocre level of security. Strong security can be achieved with complex configuration (what configuration?), but this is more difficult than with a VM or bare metal. In addition, a high security level may require a specially hardened Linux kernel, and container technology does not allow adjusting the kernel configuration, since all containers share the host's kernel.
Virtual machines would be a good choice if the router vendor ships the software as a VM image, or when the organization has many edge routers (for example, many offices with Internet access points) and has, or is ready to create, a well-established process for preparing VM images. In this case, VMs will simplify rollout, updates, and healing of the virtualized edge router. A VM also provides a high level of security; nevertheless, it is still recommended to place such a VM on a dedicated server rather than sharing a server with other applications/VMs, to avoid cross-VM attacks.
A physical machine would be a good choice if the router vendor ships the software as an application package (not a VM image), such as an .rpm, and the rollout, update, and healing processes are not expected to take much effort. This might be the case when the company has only a few routers (so updates can be performed manually or automated with tools like Ansible) and a couple of hours of planned or unplanned downtime is acceptable.
Use case 2: ...
Thank you in advance.
The question is a bit vague, so I'll try my best:
You'd usually allocate work to containers when you have a few separate applications, limited physical resources, and a need to run each application in its own environment (different runtime version, architecture, dependencies); managing all of that on one machine (physical or virtual) would be cumbersome.
You'd use a VM when you specifically want a feature that containers can't provide, or that would be a headache to set up with them, while a quick and simple VM could solve it (and again, you have limited resources you'd like to share between use cases).
And finally, a physical machine when performance is of the essence, e.g. I/O request throughput and the latency around it.
You can also mix and match to fit each tier's needs:
Say we need to run many applications for which a VM each would be too much overhead, and containers would make their handling more automated and streamlined, so we use containers with k8s; but on the other hand, we want the local storage offered to those containers to be very fast, so we run the k8s cluster on physical machines.
If recoverability were of the essence, we would have used VMs, due to the option of snapshotting VM state over time.
It's all a big LEGO set you can mix and match depending on your use case and needs.
We're working on an application meant to run on an embedded system, in a moderately harsh environment (a controller for a heating system in a residential building).
That application should run for years without needing to reboot the system. It runs on an embedded PC running Linux. The program instantiates several classes whose lifetime is the same as the application's.
Should I worry about memory becoming corrupt over such a long lifetime? Does it make sense to periodically check the class invariants to detect any such memory corruption? Or does modern hardware make such corruption astronomically unlikely?
I have seen my share of cheap SD cards on boards; they can die on you easily.
A few months ago I was dealing with one maker board where, under high data throughput, the SD card was unable to react in time: some IRQ failure messages popped up and the whole partition blew up.
If it's not intended for mass production, I would definitely suggest choosing good, well-reviewed storage.
But really, I cannot recall memory corruption issues (besides ROM). I would worry about memory leaks instead; those are the nastiest problems for an embedded system that must run for a long time without a reboot.
You have to be really careful: leaks can happen in userspace or in kernel space, and even software you have always had confidence in may leak, depending on the build version. Choose your Linux distribution carefully; where there is no dedicated kernel team, this work is usually outsourced to companies that build stable systems, in which every included package is tested and confirmed not to leak.
In the end, a few cycles of stress testing are definitely needed; if there are problems with memory, you will notice.
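If you do decide to check class invariants periodically, here is a minimal sketch of the idea, assuming a checksum over members that should only change through known code paths. The class, member names, and the FNV-1a checksum are illustrative, not from the question:

```cpp
// Sketch: keep a checksum of members that only change via known setters,
// and verify it from the main loop to catch stray-pointer corruption.
#include <cstddef>
#include <cstdint>
#include <cstdlib>

class HeatingController {          // illustrative name
public:
    HeatingController() { reseal(); }

    void setTargetTemp(double t) {
        target_temp_ = t;
        reseal();                  // every legitimate mutation re-seals
    }

    // False if members changed without going through a setter.
    bool invariantHolds() const { return checksum() == checksum_; }

private:
    double target_temp_ = 20.0;
    double max_temp_    = 90.0;
    std::uint64_t checksum_ = 0;

    std::uint64_t checksum() const {
        // FNV-1a over the protected members.
        std::uint64_t h = 14695981039346656037ull;
        auto mix = [&](const void* p, std::size_t n) {
            const unsigned char* b = static_cast<const unsigned char*>(p);
            for (std::size_t i = 0; i < n; ++i) { h ^= b[i]; h *= 1099511628211ull; }
        };
        mix(&target_temp_, sizeof target_temp_);
        mix(&max_temp_, sizeof max_temp_);
        return h;
    }

    void reseal() { checksum_ = checksum(); }
};

int main() {
    HeatingController c;
    for (;;) {                     // main control loop (sketch)
        if (!c.invariantHolds()) std::abort();  // fail fast, let a watchdog restart
        // ... control work, sleep, etc. ...
        break;                     // keep the sketch terminating
    }
}
```

The value of such a check is less about cosmic-ray bit flips (rare, and ECC RAM is the proper answer there) and more about catching stray-pointer bugs early, so a watchdog can restart the system into a known-good state.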
As far as I know, Erlang provides advanced features for error handling and process isolation.
I'm building a system that allows users to submit their own code to be executed in a shared server environment, and I need to make it safe.
Requirements are:
limit CPU and memory usage individually for each user process;
forbid a user process from communicating with other processes (except certain processes specifically designed for that purpose);
forbid access to all system resources (shell, file system, ...);
terminate a user process in case of errors or high resource consumption.
Is it possible to do all of this with Erlang while keeping it performance-efficient?
In general, Erlang doesn't provide the means to sandbox code that a user can inject. You can try writing your own protection layer, but that is rather hard.
A better choice would probably be a language like Safe Haskell:
http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/safe-haskell.html
which is specifically built for this kind of thing.
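If you stay with Erlang, one common workaround is to run each user job as a separate OS process (spawned through a port) and lean on kernel-enforced limits rather than the BEAM. A minimal sketch of such a wrapper on Linux; the executable path and the limit values are illustrative:

```cpp
// Sketch: fork, apply kernel resource limits, then exec the untrusted
// program. The Erlang side would talk to this child through a port.
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {                            // child: the untrusted code
        rlimit cpu; cpu.rlim_cur = 5;          cpu.rlim_max = 10;
        rlimit mem; mem.rlim_cur = 256u << 20; mem.rlim_max = 256u << 20;
        setrlimit(RLIMIT_CPU, &cpu);   // SIGXCPU at 5 s of CPU, SIGKILL at 10 s
        setrlimit(RLIMIT_AS,  &mem);   // allocations past 256 MiB fail
        execl("/srv/jobs/user_prog", "user_prog", (char*)nullptr); // illustrative path
        _exit(127);                    // exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);          // reap; inspect status for crashes/kills
    return 0;
}
```

This covers the CPU/memory and termination requirements; forbidding file system and network access would need something extra on top, such as seccomp filters, namespaces, or a chroot.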
The isolation provided by Erlang is not intended to protect against malicious modules being injected. In fact, there is no such protection in the distributed case either: as soon as two machines are connected, there is no limit to what one can do to the other.
There has been work done on Safe Erlang in the past and you can find several papers about it.
The ErlHive project addresses the problem in an interesting way.
Does the Windows Mobile operating system protect processes' memory from one another?
Can one badly written application crash another application simply by mistakenly writing over the first one's memory?
Windows Mobile, at least in all current incarnations, is built on Windows CE 5.0 and therefore uses CE 5.0's memory model (which is the same as it was in CE 3.0). The OS doesn't actually do a lot to protect process memory, but it does enough to generally keep processes from interfering with one another. It's not hard and fast, though.
CE processes run in "slots", of which there are 32. The currently running process gets swapped into slot zero, and its addresses are re-based to zero (so all memory in the running process effectively has two addresses: the slot-0 address and its non-zero slot address). These addresses are protected, though there's a simple API call to cross the boundary. This means that pointer corruption and the like will not step on other apps, but if you want to, you still can.
CE also has the concept of shared memory. All processes have access to this area, and it is 100% unprotected. Your app may be using shared memory without specifically asking for it, since the memory manager can hand you a shared address depending on the allocation and its size. If you have shared memory, then yes, any process can access that data, including corrupting it, and you will get no error or warning in either process.
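As a Win32-style illustration of why that shared area is dangerous (CE also supports named file mappings; the mapping name here is made up): any process that opens the same named mapping sees, and can overwrite, the same bytes, with no fault raised in either process.

```cpp
// Sketch: a named mapping is reachable from any process that knows the
// name, and writes to it are not protected in any way.
#include <windows.h>

int main() {
    HANDLE h = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                 0, 4096, TEXT("SharedBlockDemo"));
    if (h == NULL) return 1;
    char* p = (char*)MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (p != NULL) {
        p[0] = 42;   // every process mapping "SharedBlockDemo" sees this,
                     // and any of them can scribble over it just as easily
        UnmapViewOfFile(p);
    }
    CloseHandle(h);
    return 0;
}
```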
Does the Windows Mobile operating system protect processes' memory from one another?
Yes.
Can one badly written application crash another application simply by mistakenly writing over the first one's memory?
No (but it might do other things like use up all the 'disk' space).
Even if you're a device driver, there's an API that you must invoke explicitly to get permission to write to memory owned by a different process.
While ChrisW's answer is technically correct, my experience of Windows Mobile is that it is much easier to crash the entire device from an application than it is on the desktop. I can guess at a few reasons why this is the case:
The operating system is often much more heavily OEMed than desktop Windows; that is, the amount of manufacturer-specific low-level code can be very high, which leads to manufacturer-specific bugs at a level that can cause bad crashes. On many devices it is common to see a new firmware revision every month or so, where the revisions fix such bugs.
Resources are scarcer, and an application that exhausts all available resources is liable to cause a crash.
The protection mechanisms and architecture vary quite a bit. The device I'm currently working with is SH4-based, while you mostly see ARM, x86, and the odd MIPS CPU.
Can anyone give a concise set of real-world considerations that would drive the choice of whether or not to use inetd to manage a program that acts as a network server?
(If inetd is used, I think it alters the requirements around networking code in the program, so I think it's definitely programming-related and not general IT)
The question is based on an implementation I've seen that uses a control program, managed by inetd, to start a network listener that then runs forever and takes constant, heavy load. That didn't seem like a good fit with the textbook inetd usage profile (on-demand, infrequently used, lightweight), and it got me interested in the more general question.
It depends on the usage pattern for your service. If the startup time of your daemon is low and you expect it to be used infrequently, then inetd might be a good fit. It reduces, or even eliminates, the need to write any additional networking code.
If your daemon is more heavyweight or more frequently used, you're probably better off writing it standalone. You can just as easily write an init.d script and some conf.d configuration to go with it, and it will be no harder for an admin to manage. Most programming languages these days have easy-to-use socket libraries, so in many cases the networking code may not even be that difficult.
I've found in my experience that few admins these days are familiar with inetd. Most daemons just provide their own init script. In fact, of the few hundred systems which I manage I can't think of a single one that launches anything through inetd at all. That's something worth considering.
Hooking into inetd will make your service slightly easier to manage from an operational point of view, since inetd lets a sysadmin control practically every aspect of how network communication with your program happens. However, it will require a few code changes to your program, and it may not be as efficient as making your program run as a daemon to begin with.
EDIT: I personally never use inetd and always opt to write server processes as standalone daemons.
I think another factor worth considering when deciding whether to use inetd is how much memory the process handling a request consumes on average. If this is fairly high, then under heavy load you risk running out of memory, since inetd forks a process per connection. The same server might be implementable in a multithreaded or select/poll manner, possibly allowing higher load and less memory per connection.
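For comparison, a minimal sketch of the select/poll style mentioned above: one process, one event loop, no fork per connection. The port and echo behavior are illustrative, and error handling is trimmed for brevity:

```cpp
// Sketch: single-process echo server using poll(), so memory per idle
// connection is one pollfd entry instead of a whole forked process.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstddef>
#include <vector>

int main() {
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);                 // illustrative port
    bind(lsock, (sockaddr*)&addr, sizeof addr);
    listen(lsock, 16);

    std::vector<pollfd> fds{{lsock, POLLIN, 0}}; // slot 0 is the listener
    for (;;) {
        poll(fds.data(), fds.size(), -1);        // wait for any activity
        for (std::size_t i = 0; i < fds.size(); ++i) {
            if (!(fds[i].revents & POLLIN)) continue;
            if (fds[i].fd == lsock) {            // new client connecting
                int c = accept(lsock, nullptr, nullptr);
                if (c >= 0) fds.push_back({c, POLLIN, 0});
            } else {                             // data (or EOF) from a client
                char buf[512];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) { close(fds[i].fd); fds[i].fd = -1; }
                else write(fds[i].fd, buf, n);   // echo it back
            }
        }
        for (std::size_t i = fds.size(); i-- > 1;)   // drop closed entries
            if (fds[i].fd == -1) fds.erase(fds.begin() + i);
    }
}
```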
You can use inetd to give TCP/UDP capabilities to simple programs that operate over stdin/stdout.
Without inetd, your program will need to manage a slew of concerns, including network interfaces, sockets, forking, resource limits, etc.
What alternative strategy are you considering?
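To make that concrete: under inetd, the "networking code" can shrink to plain stdin/stdout. A minimal sketch of an echo-style service; the service name and path in the comment are illustrative:

```cpp
// Sketch: inetd accepts the TCP connection and hands it to this program
// as stdin/stdout, so a plain line filter becomes a network service.
// An /etc/inetd.conf entry for it might look like (illustrative):
//   myecho  stream  tcp  nowait  nobody  /usr/local/bin/myecho  myecho
// (with "myecho" also assigned a port in /etc/services)
#include <iostream>
#include <string>

int main() {
    std::string line;
    while (std::getline(std::cin, line)) {        // bytes from the client
        std::cout << line << '\n' << std::flush;  // echoed straight back
    }
    return 0;
}
```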
inetd is a good way of ensuring your server starts when the OS boots into the appropriate run level. Even if you do design your server with some other management mechanism, inetd could still wrap the commands quite simply; it's only shell scripts, after all.