Needed: an example of docker run --user on a Windows server running Docker

On my Windows Server 2016 machine, I am trying to figure out the run command syntax to run a Docker image as a user in my LDAP. I read this article, but I am not following it very well (different environments).
Perhaps I am misunderstanding the concept altogether, but in the end I need to run the container as a specific user in our Active Directory.
Any links to well-documented docker run --user examples would be appreciated...
One of the things that is confusing is trying to figure out the user ID and such...

The answer depends on the use case, but maybe gMSA (group Managed Service Account) authentication would help? Basically, with gMSA authentication, you add the host OS to an AD domain, and containers running on it can share its privileges to use things like network drives. That way, you don't need to pass credentials every time you access them.
The Microsoft team has a good write-up on it here:
Active Directory Service Accounts for Windows Containers
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
Also, artisticcheese has a fantastic walkthrough:
Enabling integrated Windows Authentication in windows docker container
https://artisticcheese.wordpress.com/2017/09/09/enabling-integrated-windows-authentication-in-windows-docker-container/
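As a rough sketch of the gMSA flow from those two links (the account name WebApp01 and the image tag are just placeholders, and on Windows Server 2016 the container's hostname generally needs to match the gMSA name):
# generate a credential spec JSON on a domain-joined host (New-CredentialSpec comes from Microsoft's CredentialSpec PowerShell module)
New-CredentialSpec -AccountName WebApp01
# start the container with that credential spec; processes running as Network Service or Local System then authenticate on the network as the gMSA
docker run -d --security-opt "credentialspec=file://WebApp01.json" --hostname WebApp01 mcr.microsoft.com/windows/servercore:ltsc2016 ping -t localhost
Note that with this approach you don't pass --user at all; the container keeps its built-in service accounts, and the gMSA is what the domain sees.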
Hope this helps.

Related

Easy to set up docker-compose hosting

I am trying to find easy-to-set-up Docker hosting. What I have is a private git repository with an application that I can get running locally just by checking it out and running docker-compose up -d. I am not looking for a production-ready solution at the moment, just for a way to get it running somewhere so that a few of the potential customers can see the progress, play with the app a little, and suggest improvements. So any service where it is not too much hassle to get it running and accessible from the web would do.
Solution 1
You could use play-with-docker. This is a free online Docker environment accessible via the web. The docker-compose tool is also available. The only downside is that the environment expires after 4 hours. Another similar free online service is Katacoda.
Solution 2
Create an AWS account and deploy a Linux VM in the free tier. The free tier enables you to run a VM with limited resources for one year (a rough setup sketch follows after the list of solutions).
Solution 3
Prepare a VirtualBox VM with everything needed to run your application.
If you need them, I can provide further details about the above solutions.
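For Solution 2, a minimal sketch of getting the app up on a fresh Ubuntu free-tier instance could look like this (package names and the compose installation method vary by distro and change over time, so treat it as a starting point):
sudo apt-get update && sudo apt-get install -y docker.io docker-compose git
sudo usermod -aG docker $USER    # log out and back in so the group change applies
git clone <your-private-repo-url> app && cd app
docker-compose up -d
You would then open the application's published port in the instance's security group so it is reachable from the web.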

How to see the logs of an application inside a Docker container?

Say I am creating a Docker image for one of my applications and publishing it on Docker Hub.
This image is downloaded by many users, who run the application in their containers, and that generates application logs in a folder.
Now, as the developer, how can I see those application logs from my machine when the container is on a remote computer that I don't have access to?
If it were a virtual machine, I could SSH to that machine, go to that folder, and see the logs for that particular application, so how is this possible with Docker?
I am not talking about Docker event logs, but the logs generated by my Python application with the logging module. Could you please help me with how to handle this case with Docker?
I don't have much experience working with Docker.
docker exec can be used to run commands in a Docker container. But in your case the containers are running on a remote machine, not on your local machine. So you have two options.
1. SSH into the remote machine and then use the docker exec command to check the logs.
2. SSH directly into the Docker container (if it runs an SSH server).
But in both scenarios, the end users will need to give you SSH access to their remote machines.
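For option 1, something along these lines would work (the container name and the log path inside the container are placeholders for your application's actual setup):
ssh user@remote-host
docker ps                                    # find the container name or ID
docker exec <container> cat /path/to/app.log
docker logs <container>                      # if the app logs to stdout/stderr instead of a file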
I hope this helps.
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
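As a minimal end-to-end sketch (the app.log file name is an assumption about what your application actually writes under /app/logs):
mkdir -p logs
docker run -d -v "$PWD/logs:/app/logs" --name myapp you/yourimage
tail -f logs/app.log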
As the original application developer, you have no access to these logs. This is the same as with every other (non-SaaS) application: the end user installs software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as for anything else: when a user files a bug report, make sure they provide a sufficient reproduction, log files, and relevant configuration, and then reproduce the issue yourself locally.

Safest way to turn Docker CLI bash commands into an API for external applications in production use

I'm running a program that uses several Docker images and containers, all spawned and managed by the code. At the same time, I need to enter the containers with docker exec -it ... bash and execute some commands. These commands, however, can't be manual and must be made into an API. After extensive searching, the closest thing I found is the Docker remote API [https://blog.trifork.com/2013/12/24/docker-from-a-distance-the-remote-api]. However, I'm a bit scared of messing with the internals of Docker. I want the spawning and management to remain controlled by the program; I only need to run a limited number of commands against the Docker CLI. Is the Docker remote API the right way to go? Will it handle scale? My application may see ~27,000 mobile and web apps calling the APIs from different parts of the world. Tried and tested solutions would be preferred.
Any advice would be highly appreciated.
There’s not an easy answer to this. Since you include “safest” in the question title, I will suggest you probably need to do some redesign of your application architecture.
The first critical detail is this: being able to run any Docker command, or access the Docker API, implies unrestricted root access on the host. You can trivially docker run an image with writeable root-level access to the host’s filesystem and steal private keys and user passwords, give yourself sudo access, and so on. Using it as a core part of your workflow is incredibly dangerous. Turning on the Docker remote API at all is incredibly dangerous.
As a corollary to this, while docker exec is handy as a debugging tool, you can’t really use it as part of your core workflow. As you note, running commands by hand as a trusted administrator doesn’t scale. There are also dangers in shell quoting: you need to make sure an argument doesn’t look like foo; docker run -v /:/host ..., which would let a caller inadvertently (or maliciously) gain access to the host system.
In my mind your only real option here is to do this “properly”. Take whatever administrative commands you need to do and wrap them in some API, probably HTTP-based. Build a new service (or several) and add it to your Docker deployment. Maybe under the hood that launches a shell script as a subprocess, but the API wrapper has control over the arguments and can double-check things. The plus side is that this approach probably won’t be a choke point if your application does need to scale out.
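A very stripped-down sketch of what the layer underneath such an API might look like; the action names and the container name myapp-container are hypothetical placeholders, and the HTTP front end, authentication, and error handling are deliberately left out:
#!/bin/sh
# run-action.sh: only ever executes commands it knows about
case "$1" in
  reload-config)
    # hypothetical action: ask the app in the container to reload its config
    docker exec myapp-container kill -HUP 1 ;;
  show-version)
    # hypothetical action: read a version file baked into the image
    docker exec myapp-container cat /app/VERSION ;;
  *)
    echo "unknown action: $1" >&2
    exit 1 ;;
esac
The important property is that callers can only choose from a fixed menu of actions; no raw arguments are ever forwarded to the Docker CLI.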

Why doesn't Docker support multi-tenancy?

I watched this YouTube video on Docker, and at 22:00 the speaker (a Docker product manager) says:
"You're probably thinking 'Docker does not support multi-tenancy'...and you are right!"
But no explanation of why is ever actually given. So I'm wondering: what did he mean by that? Why doesn't Docker support multi-tenancy? If you Google "Docker multi-tenancy" you surprisingly get nothing!
One of the key features most people assume with a multi-tenancy tool is isolation between the tenants. They should not be able to see or administer each other's containers and/or data.
The docker-ce engine is a sysadmin-level tool out of the box. Anyone who can start containers with arbitrary options has root access on the host. There are third-party tools like Twistlock that connect with an authz plugin interface, but they only provide coarse access controls: each person is either allowed or disallowed an entire class of activities, like starting containers or viewing logs. Giving users access to either the TLS port or the Docker socket lumps them all into a single category; there's no concept of groups or namespaces for the users connecting to a Docker engine.
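To make the "root access on the host" point concrete, this is roughly all it takes for anyone who can reach the Docker socket to get a root shell on the host (alpine is just an arbitrary small image here):
docker run --rm -it -v /:/host alpine chroot /host /bin/sh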
For multi-tenancy, Docker would need to add a way to define users, place them in a namespace that is only allowed to act on specific containers and volumes, and restrict options that allow breaking out of the container, like changing capabilities or mounting arbitrary filesystems from the host. Docker's enterprise offering, UCP, does begin to add these features by using labels on objects, but I haven't had the time to evaluate whether this would provide a full multi-tenancy solution.
Tough question that others might know how to answer better than me. But here goes.
Let's take this definition of multi tenancy (source):
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers.
It's really hard to place Docker in this definition. It can be argued that it's both the instance and the application. And that's where the confusion comes from.
Let's break Docker up into three different parts: the daemon, the container and the application.
The daemon is installed on a host and runs Docker containers. The daemon does actually support multi-tenancy, as it can be used by many users on the same system, each of whom has their own configuration in ~/.docker.
Docker containers run a single process, which we'll refer to as the application.
The application can be anything. For this example, let's assume the Docker container runs a web application like a forum or something. The forum allows users to sign in and post under their name. It's a single instance that serves multiple customers. Thus it supports multi tenancy.
What we skipped over is the container and the question whether or not it supports multi tenancy. And this is where I think the answer to your question lies.
It is important to remember that Docker containers are not virtual machines. When using docker run [IMAGE], you are creating a new container instance. These instances are ephemeral and immutable. They run a single process, and exit as soon as the process exits. They are not designed to have multiple users connect to them and run commands simultaneously, which is what multi-tenancy would be. Instead, Docker containers are just isolated execution environments for processes.
Conceptually, echo Hello and docker run alpine echo Hello are the same thing in this example. They both execute a command in a new execution environment (a process vs. a container), neither of which supports multi-tenancy.
I hope this answer is readable and answers your question. Let me know if there is any part that I should clarify.

Docker and SSH for development with phpStorm

I am trying to set up a small development environment using Docker. The phpStorm team is working hard on getting Docker integrated for remote interpreters, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the one in this post, say it is not recommended. I have read others who say to have a dedicated SSH Docker container, which I don't get how to fit into this environment.
I am already creating a user docker-user (check the repo here) for certain tasks like running composer without root permissions. That user could easily be used for this SSH stuff by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public ssh key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
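In rough terms, and reusing the docker-user from your repo as the SSH user, the setup inside the container looks something like this (the paths and the sudoers approach are assumptions, and it presumes openssh-server and sudo are installed in the image):
mkdir -p /home/docker-user/.ssh
cat id_rsa.pub >> /home/docker-user/.ssh/authorized_keys
chown -R docker-user:docker-user /home/docker-user/.ssh
chmod 700 /home/docker-user/.ssh
chmod 600 /home/docker-user/.ssh/authorized_keys
echo 'docker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/docker-user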
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.
