I am using Docker Compose, which will run on a Linux tablet in production. I have a container serving up a web GUI. The user will click a "print" button in the GUI, which will result in some kind of request (probably HTTP to Flask in another container, which will maybe forward it to some other container), and that request will result in some data being sent to the printer.
My first step, I can only imagine, is to be able to send data to the printer from inside a Docker container. Any Docker container. I can then use that knowledge, of how to send something to the printer from Docker, to incorporate the printing into my system.
So, that's the infrastructure I'm working with. It can be simplified to just "I want to print to a printer from a Docker container." I'm working on a Mac, and I can print from the Mac using lp, so I know the connection to the printer is working.
I've tried a few containers, including olbat/cupsd. lpstat -r pretty much always says the Scheduler is running, but lpstat -v always shows that no destinations are set up.
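For reference, this is roughly the kind of thing we have been trying; the printer address is a placeholder for our real one, and the lpadmin line is our guess at the missing step (the -m everywhere option assumes an IPP Everywhere capable printer):

    # start the CUPS container and expose the IPP port
    docker run -d --name cups -p 631:631 olbat/cupsd

    # scheduler is up, but no destinations are configured
    docker exec cups lpstat -r
    docker exec cups lpstat -v

    # presumably the missing step: register the printer as a destination...
    docker exec cups lpadmin -p office -E -v ipp://192.168.1.50/ipp/print -m everywhere
    # ...after which this should print
    docker exec cups lp -d office /etc/hostname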
My DevOps guy and I have been banging our heads against the wall all day on this. There are various articles and repos about setting up CUPS in Docker, but they all have holes somewhere, where they say "Use the fooglesplatter to connect to the printer" without telling you what a fooglesplatter is. Or (for a more concrete example) they'll talk about how to use the CUPS dashboard to add your printer on your local machine, and then say "Voila! You can print!" without telling you what to do in the container. Or they'll refer to a conf file that doesn't exist on my machine. Or something else that leaves us completely baffled.
Can someone who has accomplished this please post (or direct me to) a step-by-step guide that basically treats me like I've never touched a computer before? That assumes no knowledge whatsoever and spells out every step? We are wise Docker users, and my DevOps guy is a much smarter guy than I am, but we are both at a loss.
I know this is a crazy request. Maybe it's not an SO appropriate question. Close it if you must. But we are incredibly stuck and I really hope someone can help us.
I am about to decide on a programming language for the project.
The requirements are that some of our customers want to run the application on isolated servers without external internet access.
To do that, I need to distribute the application to them, so I cannot use a SaaS approach running on, for example, my own cloud (which is what I'd prefer to do...).
The problem is that if I decide to use Python for developing this, I would need to provide the customer with easily readable code, which is not really what I'd like to do (of course, I know about all those "do you really need to protect your source code" questions, but that's out of scope for now).
One of my colleagues told me about Docker. I can find dozens of answers about Docker container security. The problem is that they are all about protecting (isolating) the host from the code running in the container.
What I need to know is whether the Python source code shipped in a Docker image and running in a Docker container is secure from access: can a user in some way (it doesn't need to be easy) get at that Python code?
I know I can't protect everything, and I know it is possible to decompile/crack anything. I just want to know whether accessing my code inside Docker is hard enough that I can take the risk.
Docker images are an open and documented "application packaging" format. There are countless ways to inspect the image contents, including all of the Python source code shipped inside them.
Running an application inside a container isolates the host from the application, not the application from the host: it prevents the application from escaping the container to access the host, but it does not protect you from users on the host inspecting what is occurring inside the container.
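For example, anyone holding the image can unpack it with nothing but standard tooling; the image name here is illustrative, and the layout shown is the classic (pre-OCI) image format:

    # export the image to a tarball and unpack it
    docker save -o yourapp.tar yourapp:latest
    mkdir yourapp && tar -xf yourapp.tar -C yourapp

    # each layer is itself a tar archive of filesystem changes;
    # any shipped .py files are sitting there in plain view
    cd yourapp
    for layer in */layer.tar; do tar -tf "$layer" | grep '\.py$'; done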
Python programs are distributed as source code. If it can run on a client machine, then the code is readable on that machine. A Docker container only contains the application and its libraries, external binaries, and files, not a full OS. Since security can only be enforced at the OS level (or through encryption), and since the OS is under the client's control, the client can read any file in the Docker container, including your Python source.
If you really want to go that way, you should consider providing a full virtual machine to your client. In that case, the VM contains a full OS with its own account-based security (administrative account passwords on the VM can be different from those of the host). It is far from plain sailing, though, because it means the client will have to be able to set up or adapt networking on the VM, among other problems...
And you should be aware that the client's security officer could give a firm NO when it comes to running an uncontrolled VM on their network. I would never accept it myself.
Anyway, as the client has full access to the VM, really securing it will be hard, if possible at all (disabling booting from an additional device may not even be possible). It is a well-established principle in security that if the attacker has physical access, you have lost.
TL;DR: it is not the answer you expected, but just don't. If you sell your solution, you will have a legal contract with your customer, and that kind of problem should be handled at the legal level, not the technical one. You can try, and I have even given you a hint, but IMHO the risks are higher than the gain.
I know it's been more than 3 years, but... looking for the same kind of solution, I think that shipping compiled Python code (not your source code) inside the container would be a challenging obstacle for someone trying to get at your valuable source code.
If you run pyinstaller --onefile yourscript.py you will get a single bundled file that can be run as an executable. I have only tested it on a Raspberry Pi, but as far as I know it's the same for, say, Windows.
Of course anything can be reverse engineered, but hopefully it won't be worth the effort to the regular end user.
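A minimal sketch of how that can be combined with Docker via a multi-stage build, so that only the bundled binary ships in the final image; the base images, paths, and the binutils dependency are assumptions for illustration (and note the binary still embeds extractable bytecode):

    # build stage: bundle the script into a single executable
    FROM python:3.11-slim AS build
    RUN apt-get update && apt-get install -y binutils && pip install pyinstaller
    WORKDIR /src
    COPY yourscript.py .
    RUN pyinstaller --onefile yourscript.py   # output lands in /src/dist/

    # runtime stage: ship only the binary, no .py sources anywhere
    FROM debian:bookworm-slim
    COPY --from=build /src/dist/yourscript /usr/local/bin/yourscript
    ENTRYPOINT ["/usr/local/bin/yourscript"]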
Using a "container" to protect our code from people we don't want accessing it could be a solution, but the problem is that Docker is not a secure container in that sense. Since root on the host machine has full control over a Docker container, we have no way to protect the inside of the container from the host's root user.
I just have some ideas about what a secure container would need:
build the container from an init file (like a Dockerfile), and require a password to be set when the container is created;
once the container is built, require the password to access anything inside, including reading/copying/modifying files;
encrypt all the container's files stored on the host machine;
offer no "retrieve password" or "--skip-grant-" style recovery mode, meaning nobody can access the data inside the container if you lose the password.
If we had a trustworthy container like that in which to run a Tomcat or Django server, code obfuscation would not be necessary.
We are a small design company, and I'm the only one who "codes" (making small scripts/tools for the creatives).
I have a server on a local network.
On this server, I installed docker and docker-compose.
On this server I want to have a few containers running, one per service (gitlab, taiga, wiki.js, mattermost, wekan)
When setting up the docker-compose.yml, how should I manage ports (and/or any other settings) so that:
First (case study): (let's say I have just one container running) typing the host IP address into a web browser takes me to my service and displays, for example, /var/www/ if my service is a website
Second: typing subdomain.myhostname into a web browser takes me to one specific service
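To make it concrete, here is the kind of skeleton I'm starting from (images and ports are just examples):

    version: "3"
    services:
      gitlab:
        image: gitlab/gitlab-ce
        ports:
          - "8001:80"     # today: http://<host-ip>:8001
      wiki:
        image: requarks/wiki
        ports:
          - "8002:3000"   # today: http://<host-ip>:8002

What I'm after is http://<host-ip>/ and http://gitlab.myhostname/ reaching the right service without those port numbers.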
It's a very broad question, and the answer depends strongly on one's experience. From what I consider fast and reliable, as far as small environments are concerned, you may want to take Rancher for a spin.
It's super easy to start with. What's more, there's a range of services like Gitlab or DokuWiki you can start with just one click. On top of that, you can configure a load balancer that can perform the redirections you mentioned. I think it's one of the fastest options for getting a functional and scalable stack. Definitely not the most stable one compared to enterprise-grade OpenShift, but I think it'll do just fine.
I will not go through all the setup details, as I believe that's not what the question is about, but you can start by setting up a Rancher 1.6 Docker server, going step by step through the official doc guide. It's pretty straightforward: one bash command and you are up and running.
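For reference, the quick start from the v1.6 docs was along these lines (check the official guide for the exact current tag):

    # start the Rancher 1.6 management server; the UI comes up on port 8080
    sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server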
OpenShift is a platform competing with Rancher. To the best of my knowledge, it's harder to work with, especially if you have no prior experience. It's more stable, that's for sure, but it requires more effort in general.
I intentionally omitted a few options, as I assumed the OP wants this working ASAP while still being easily re-configurable, stable, and GUI-manageable.
-- edit a few years later --
Rancher and OpenShift are still actively developed and attract new users. Rancher has released a stable v2 since my original answer, so I no longer recommend looking at v1.6.
I am writing an application using Asterisk-Java. It is designed to run on a server that also runs Asterisk. So far, so good.
My application, which originates calls (using the AMI) and manages user input (using Asterisk-Java's FastAGI and an embedded AgiServer), works great on both my development server and the production server.
For deployment purposes, I am now asked to create a Docker container that would pack up Asterisk and my application, so that it could be easily deployed to other places without having to go through installations and configurations.
The thing is, my application does not behave the same way in the Docker container: on the development/production servers, using the getData function, I can get a DTMF code; in the Docker container, getData never seems to receive the DTMF data from Asterisk (I can stream a file, but the function eventually times out, which means it did not get anything).
I first thought of an unexposed port, but since this communication problem seems to be between the AGI server and Asterisk, which are both running in the container, I find that hard to believe.
I have no other ideas; please suggest something.
Check out the dtmfmode parameter for your SIP peer...
If you are using RFC 2833 (DTMF via RTP), unexposed media ports could very well be the reason.
You could try to adjust your port settings (that could be a lot of ports!).
Or try DTMF via SIP INFO as an alternative.
But that wouldn't fix any underlying media problems...
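A rough sketch of both options; the peer name, image name, and port range are examples, and the exact settings depend on your SIP setup:

    ; sip.conf inside the container
    [your-peer]
    dtmfmode=rfc2833   ; DTMF inside the RTP stream: media ports must reach the container
    ;dtmfmode=info     ; alternative: DTMF via SIP INFO, which bypasses the media path

    ; rtp.conf: narrow the default RTP range so it is practical to publish
    [general]
    rtpstart=10000
    rtpend=10100

    # then publish SIP plus the narrowed RTP range when running the container
    docker run -d -p 5060:5060/udp -p 10000-10100:10000-10100/udp your-asterisk-image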
I was wondering if it is possible to offer Docker images, but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide, but can't dig into any of the code within the containers.
Call it a way to obfuscate the source code, but also a way to offer a service (the software) to someone on the basis of the container, instead of offering the software itself. Something like "Container as a Service", with the main advantage that the developer can use these container(s) for local development too, yet with no access to the underlying code within them.
My first thought is that whoever controls the Docker instances controls everything, down to root access. So no, it isn't possible. But I am new to Docker and am not aware of all of its possibilities.
Is this idea in any way possible?
A solution based only on obfuscation would not be enough, as "Encrypted and secure docker containers" details.
You would need full control of the host your containers run on in order to prevent any "poking". And that is not the case in your scenario, where the developer does have access to the host (i.e. his/her local development machine) where said container would run.
What is sometimes done is to have some piece of "core" code run in a remote location (a remote server, a USB device), in such a way that this external piece of code can do some client authentication but also, more importantly, run some core business logic, in order to guarantee that the externally located code has to execute for the work to get done. If it were only a check rather than actual core code, a cracker could simply override it and avoid calling it on the client side. But if the code is genuinely required to run and it doesn't, the software cannot finish its processing. Of course there is an overhead to all of this, both in complexity and probably in computation time, but it is one way to deploy something that will unfailingly be required to contact your server/external device.
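As a toy illustration of that pattern (the endpoint, token, and payload are entirely hypothetical): the shipped code does not contain the core computation and must fetch the result from a vendor-controlled server:

    # client side: the core computation is NOT in the shipped code;
    # only an authenticated call to the vendor's server can produce it
    curl -s -X POST https://core.example-vendor.com/v1/compute \
      -H "Authorization: Bearer $LICENSE_TOKEN" \
      -d '{"job": "price-quote", "input": "..."}'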
I've found Koding on the interwebs and I really dig it. In fact, I dig it so much that I want to write my game server solely in Koding, since it is a reliable app on the net that lets me work from anywhere, anytime. But my problem is that when I want to try things from outside Koding (as the client), I cannot connect to the server. Unfortunately I haven't found the IP of my machine (I tried all the citruslee.kd.io variants I have [vm-0.], the ifconfig -a addresses, but nothing really worked). The question is: how can I get the somewhat public IP of my VM?
I hope I understood you correctly. If you want to see your VM in action, you can just access the URL citruslee.kd.io, and if you run some other server (other than Apache, which comes preinstalled on all VMs) on your VM, you can type the port after the URL. Keep in mind that your VM shuts down after ~15 mins. Hope I answered your question.
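For example (the port and command are illustrative):

    # on the VM: start something listening on a port
    python3 -m http.server 8000
    # then from outside, open the VM URL with the port appended:
    #   http://citruslee.kd.io:8000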
If you have any more questions you can always email us at support@koding.com