libcontainer - a system programmer's perspective - docker

I am a newbie Go programmer with a system programming background, trying to dissect libcontainer. I am pretty familiar with namespaces and control groups. I am interested in knowing how exactly libcontainer leverages these features to create a container.
Logically speaking, someone has to call the clone system call with the CLONE_NEW* flags. But I can't find where this clone system call is being made!
The documentation says that one has to use the factory interface to create a container. I see that it simply validates the id and config and creates a directory with 0700 permissions.
container.Start, which is supposed to create the new namespaces, does not call the clone system call either.
If someone can tell me how container creation works in terms of system calls, it would be very helpful.

I too am interested in this, and have only just started looking at the code in depth.
I believe what you are looking for is done in nsexec.c, which reads the config for the namespace setup (or rather gets it passed over a Unix socket as netlink-style messages) and then calls clone() twice.
In the child process, I believe it calls setns() to enter the configured namespaces; note that setns() joins existing namespaces, while it is clone() with the CLONE_NEW* flags that creates new ones.
The whole thing is not entirely clear to me, but from what I seem to understand so far, the process using libcontainer execs itself with an arg of "init", which becomes PID 1 in the new container, and it looks like this new process does a few things in C as well as Go to set up the container.
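To make this concrete, here is a minimal sketch of my own (not libcontainer's actual code; its real implementation lives in nsexec.c) showing the clone(2) call that actually creates the namespaces. Linux-only, must run as root; compile with g++, which enables the needed GNU extensions:

    #include <sched.h>      // clone() and the CLONE_NEW* flags
    #include <signal.h>     // SIGCHLD
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    static char child_stack[1024 * 1024];    // stack for the cloned child

    static int child_main(void *) {
        // Thanks to CLONE_NEWPID, this process is PID 1 in its own world.
        std::printf("pid as seen inside the container: %d\n", (int)getpid());
        execl("/bin/sh", "sh", (char *)NULL);
        return 1;                            // only reached if execl fails
    }

    int main() {
        // This one call is where the new namespaces are born.
        int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
        pid_t pid = clone(child_main, child_stack + sizeof(child_stack), flags, NULL);
        if (pid == -1) { std::perror("clone"); return 1; }
        waitpid(pid, NULL, 0);
        return 0;
    }

Everything else a container runtime does (cgroups, pivot_root, dropping capabilities) is layered on top of this one call.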

Related

A completely closed-source Docker container

I was wondering if it is possible to offer Docker images but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide but can't dig into any of the code within the containers.
Call it a way to obfuscate the source code, but also a way to offer a service (the software) on the basis of the container instead of offering the software itself. Something like "Container as a Service", with the main advantage that the developer can use these container(s) for local development too, yet with no access to the underlying code within the containers.
My first thought is that whoever controls the Docker instances controls everything, down to root access. So no, it isn't possible. But I am new to Docker and am not aware of all of its possibilities.
Is this idea in any way possible?
An obfuscation-only solution would not be enough, as "Encrypted and secure docker containers" details.
You would need full control of the host your containers run on in order to prevent any "poking". And that is not the case in your scenario, where the developer does have access to the host (i.e. his/her local development machine) where said container would run.
What is sometimes done is to run some piece of "core" code in a remote location (a remote server, a USB device), in such a way that this external piece of code can do some client authentication, but also, more importantly, run some core business logic, guaranteeing that the externally located code has to execute for the work to get done. If it were only a check rather than actual core code, a cracker could simply patch it out and avoid calling it on the client side. But if the code is genuinely required to run, the software cannot finish its processing without it. Of course there is an overhead to all of this, both in complexity and probably in computation time, but it is one way to deploy something that will unfailingly be required to contact your server or external device.
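As a purely hypothetical sketch (the host core.example.com, port 9000 and the one-line "protocol" are all invented for illustration), the piece that ships with the client could be as small as this, while the real transformation only ever exists on your server:

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Resolve the (placeholder) server that holds the actual core code.
        addrinfo hints = {}, *res = NULL;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("core.example.com", "9000", &hints, &res) != 0) return 1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
        // Ship the input out; only the finished result ever comes back.
        const char *input = "work item\n";
        write(fd, input, strlen(input));
        char result[256];
        ssize_t n = read(fd, result, sizeof(result) - 1);
        if (n > 0) { result[n] = '\0'; std::printf("server result: %s", result); }
        close(fd);
        freeaddrinfo(res);
        return 0;
    }

However the client binary is inspected or patched, the algorithm behind the socket never leaves your machine.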
Regards,
Eduardo

How to make ACE/TAO service setup more user-friendly?

The standard way of setting up a network of applications communicating over the ACE/TAO CORBA framework has always been:
run the naming service
run the event channel
run your applications
I'd like to spare my end-users from having to spawn multiple background services by hand, and am looking for a clean solution. I'd also like to have my networks as plug 'n' play as possible. For context: we synchronize various hardware components with the help of a central controller instance. Each of these pairings makes up an (isolated) network, so we can have several of them in one environment and don't want any interference between them.
My idea was to just spawn a naming service and an event service on the controller's initialization, but I haven't yet found a nice way to spawn both processes (tao_cosnaming, tao_rtevent) as child processes, so that they are really tied to the controller instance and don't keep running if the controller crashes, for example. Is there already a mechanism inside TAO that allows this?
The Implementation Repository could do this for you. Another option is to simply link the Naming Service and Event Channel into your controller, so that a single process also delivers these services.
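A rough sketch of the second option, hosting the Naming Service in-process through TAO's orbsvcs library (untested; the header path and the init_with_orb call are from memory and may differ between TAO versions, so treat the exact names as assumptions):

    #include <orbsvcs/Naming/Naming_Server.h>  // from TAO's orbsvcs library
    #include <tao/ORB.h>

    int main(int argc, char *argv[]) {
        CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);
        // Host the Naming Service inside this process instead of spawning
        // tao_cosnaming: it now lives and dies with the controller.
        TAO_Naming_Server naming;
        if (naming.init_with_orb(argc, argv, orb.in()) == -1)
            return 1;
        // ... set up the event channel and the controller logic here ...
        orb->run();
        return 0;
    }

As far as I know the Event Channel can be embedded the same way, which also removes the "orphaned service after a controller crash" problem entirely.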

How to deploy a full (web) application using Docker, if each process must be a container?

I'm trying to understand how Docker is supposed to be used.
It's not clear whether I should put everything I need in a single Dockerfile. I've read some people saying that the current best practice is to have one container per process; e.g. a web server, a database, and a language interpreter would make three containers.
But how do I pack all those containers together? Does that responsibility belong to Docker, or do I have to use something else? To get started, I could write a simple bash script that installs all the containers I need. Is that the way to go?
Another question (maybe I should open a separate thread for this): what's the most common practice? To use the default registry for "docker push", or to host your own?
First, your second question. A good reason to use a private repository is if your images are, well... private. The most common practice, I guess, is that people who do not have a private repository use the public index, simply because it's easy. If you want to open-source something, by all means use the public index. But if you have a private project, that would be the time to set up a private registry.
Concerning your first question: I think you're heading the right way. Yes, it is logical to use Docker to establish a separation of concerns by setting up a container for as many of the blocks in your (UML) architecture diagram as possible. Docker is efficient enough to make this feasible, and it ensures that you can deploy your containers on different hosts later, even though you might not need that initially.
Indeed, the communication between those containers is not the responsibility of Docker, although it provides linking, for instance (but linking is not much more than setting a couple of environment variables, which you can do in other ways as well).
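To illustrate that point (the alias "db" and port 5432 are assumptions for this example): after starting a container with something like "docker run --link db:db ...", the link is visible inside it as ordinary environment variables, which a program in any language can read:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Docker's --link injects <ALIAS>_PORT_<port>_<proto>_* variables.
        const char *addr = std::getenv("DB_PORT_5432_TCP_ADDR");
        const char *port = std::getenv("DB_PORT_5432_TCP_PORT");
        std::printf("database reachable at %s:%s\n",
                    addr ? addr : "(unset)", port ? port : "(unset)");
        return 0;
    }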
As for me: I go for the bash script approach that you mention; packing containers together is not Docker's responsibility.
Good luck.

Erlang-Pid control

I have written a simple chat server in Erlang (without any sockets or ports, just to pass messages among multiple shells), but when I try to simulate it I run into a problem.
Almost every client function (like pm, say_to_all) in my implementation needs the chat server's process ID.
If I open the chat_server and a client in one shell, I can easily bind chat_server's process ID to a variable and access it when necessary, but the problem comes up when I want to open another shell for a client.
See the picture: http://s018.radikal.ru/i501/1308/ee/a194aa8486ae.png
How can I access the process from the first shell (chat_server) in the second shell (chat_client)?
You could register your server globally under a certain name, e.g. global:register_name(chat_server, Pid) (http://erlang.org/doc/man/global.html#register_name-2). That way you can access the server from any shell within your chat system.
Don't forget that you need to connect the shells (they are separate Erlang nodes) with net_adm:ping first, to let them know about globally registered names.
And I can really recommend looking into gen_server (http://www.erlang.org/doc/man/gen_server.html), since it really helps when organizing a client-server structure.
Edit:
Sorry, maybe you also want an explanation of your problem.
It occurs because every Erlang shell has its own environment with its own variables, etc. That means a second shell does not know about any variables bound in other shells.

Exposing the same Out-of-Process COM server from multiple copies of the same executable

I have a media application (written in Delphi 2010, but I am not sure that's entirely relevant) and it only allows one instance (enforced via a mutex).
One of my customers would like to run two instances of the app by duplicating its installation and all of its application data, as this will allow him to send the output to two different sound cards, giving him two audio zones.
Now, I can allow the second instance via a command-line switch, thus creating a differently named mutex, and even allow him to send controls to either instance of the application via command-line switches or Windows message passing.
My application also exposes a COM interface for automation purposes; obviously this provides a much richer interface than the command line and makes it much easier to get information out of the application.
So my problem is that, as far as I am aware, I can only expose the COM interface from one executable. Now I know that makes sense, but I am wondering if anyone can think of a workaround.
I had a quick try at duplicating the registry keys for my HKLM\Software\Classes\AppID, thus making an "AppIDv2", and got as far as it launching the other copy of my app, but I guess it all came unstuck when it hit the more specific GUIDs for the TypeLib etc. Mind you, I know I overstepped the bounds of my knowledge!
My thought is that if I can create a different AppID string and ultimately target the exe sitting in a different location, then we'd at least be able to do some automation via COM scripting, but I suspect that the requirement for unique GUIDs is ultimately going to let me down.
Another option may be to move my COM server in-process and then have multiple compiled versions of my application, each exposing an instance of the main interface via a new AppID, but that gets messy when you want the DLL to know all about the running instance of your application.
Any ideas welcome. Thanks in advance.
It sounds like you want to register yourself in the Running Object Table (ROT).
I'm likening your problem to that of multiple copies of Excel running. COM has a mechanism to allow someone to find the running instances of Excel and connect to one of them.
Out-of-process COM servers are expected to register themselves with the ROT. Callers can then use GetActiveObject to find your instance:
To automate an Office application that is already running, you can use the GetActiveObject() API function to obtain the IDispatch pointer for the running instance. Once you have this IDispatch pointer for the running instance, you can use the methods and the properties of the running instance.
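For illustration, a bare-bones client along those lines (the ProgID "MyApp.Application" is a placeholder, not something from the question; link against ole32 and oleaut32):

    #include <windows.h>
    #include <oleauto.h>

    int main() {
        CoInitialize(NULL);
        CLSID clsid;
        IUnknown *unk = NULL;
        // Look up the class, then ask the ROT for a running instance of it.
        if (SUCCEEDED(CLSIDFromProgID(L"MyApp.Application", &clsid)) &&
            SUCCEEDED(GetActiveObject(clsid, NULL, &unk))) {
            IDispatch *disp = NULL;
            if (SUCCEEDED(unk->QueryInterface(IID_IDispatch, (void **)&disp))) {
                // ... drive the running instance through IDispatch here ...
                disp->Release();
            }
            unk->Release();
        }
        CoUninitialize();
        return 0;
    }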
You might not like it, but I believe the solution is that there is one automation interface, and that "first" application acts as the gateway to the other "instances" of your application (i.e. it is your automation server).
I'm not an expert in out-of-process COM automation, but I think I've read enough to believe that's the (unfortunate) answer.
See also
Registering the Active Object with API Functions
How To Attach to a Running Instance of an Office Application
You do indeed need the Running Object Table (IRunningObjectTable). Ian's answer is largely correct.
http://msdn.microsoft.com/en-us/library/ms695276(v=VS.85).aspx
However, it is possible to have two distinguishable instances in the ROT, allowing both copies of your app to be reached, because their monikers tell them apart.
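A sketch of how each copy could register itself under its own moniker (the instance name passed in is whatever distinguishes the two installs; "!" is the conventional item-moniker delimiter):

    #include <windows.h>

    // Each copy registers under its own item moniker, e.g. "!MyApp.instance1"
    // vs "!MyApp.instance2", so an automation client can pick a specific one.
    HRESULT RegisterInstance(IUnknown *app, const wchar_t *name, DWORD *cookie) {
        IRunningObjectTable *rot = NULL;
        HRESULT hr = GetRunningObjectTable(0, &rot);
        if (FAILED(hr)) return hr;
        IMoniker *moniker = NULL;
        hr = CreateItemMoniker(L"!", name, &moniker);
        if (SUCCEEDED(hr)) {
            hr = rot->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, app, moniker, cookie);
            moniker->Release();
        }
        rot->Release();
        return hr;
    }

Remember to call IRunningObjectTable::Revoke with the returned cookie on shutdown.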
Martyn
