I am learning Docker currently.
One thing I noticed is that whenever I create a new machine using the command:
docker-machine create --driver virtualbox default
and after running this command, if I start my docker-machine using:
docker start default
it will always prompt me with super user mode, i.e. I always see # instead of $ in my terminal.
I would like to know why this is happening.
Is there a particular requirement?
If possible, can I use normal user mode in this terminal?
Any inputs?
Thanks in advance.
That's the thing with docker: "security isn't the primary requirement".
So, the not-so-nice reality is: by default, when Docker does something, it happens with root permissions. You actually have to put in "work" in order to not use privileged containers and to not run processes as root.
In essence: one should do a good amount of research to understand the risks and solutions around "security of containers"; see here for example.
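As one concrete illustration, a minimal sketch of running a container process as an unprivileged user (the UID/GID 1000 and the alpine image are purely illustrative):
# run the container's main process as UID/GID 1000 instead of root
docker run --rm --user 1000:1000 alpine id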
I'm learning Podman, so I apologize for silly mistakes.
The Redis Docker image has a notorious problem: the database might fail to write if /proc/sys/vm/overcommit_memory inside the container is not set to 1 (the default value is 0).
I know that Podman doesn't use a daemon, and I thought that this could allow it to set specific host environment values per container (Docker doesn't allow this: one must change the host values, and all containers created afterwards will copy them; there's no way to apply a different value only to a specific container).
The documentation says that, for --env, "Any environment variables specified will override previous settings". But alas, I think it behaves identically to Docker and doesn't allow one to change host values per container. I tried podman run .... --env 'overcommit_memory=1' and it made no difference at all. I guess this approach only makes sense for general environment variables and not for specific vm ones.
But I was curious: is it possible at all to change the host env per container in Podman? And specifically, is there any way to change /proc/sys/vm/overcommit_memory per container?
EDIT: can podman play kube be of any help?
EDIT 2: one might wonder why I don't wrap the podman run command between an echo 1 > overcommit_memory beforehand and an echo 0 > overcommit_memory afterwards to revert it, but I need to use a Windows machine to develop this and I think that wouldn't be possible.
EDIT 3: Eureka! I found a solution [not really, see criztovyl's answer] to my original problem: I just need to create a dir (say, mkdir vm), add an overcommit_memory file to it with content equal to 1, and add -v vm:/proc/sys/vm:rw to the podman run instruction. This way a volume is bound to the container in rw mode and overrides the value of overcommit_memory. But I'm still curious as to whether there's a more straightforward way to change that setting.
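Roughly, the workaround from this edit looks like the following sketch (redis as the image is just an example, and as criztovyl's answer below explains, this only masks the warning):
mkdir vm
echo 1 > vm/overcommit_memory
podman run -d -v "$(pwd)/vm":/proc/sys/vm:rw redis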
EDIT 4: Actually, COPY init.sh is the best option so far: https://r-future.github.io/post/how-to-fix-redis-warnings-with-docker/ [again, not really, see criztovyl's answer below]
As Richard says, it does not seem to be currently possible to set vm.overcommit_memory per container; you must set it on the host via sysctl (not --sysctl).
This applies to both Podman and Docker: for the "actual" container they both ultimately rely on the same kernel APIs (cgroups).
Note that you say "changing the host env", which can be misinterpreted as changing the host's environment variables. Overriding environment variables is possible, as you tried with --env.
But memory overcommit is a kernel parameter, which you must set via sysctl, not via environment variables.
Certain sysctl options can be overridden via --sysctl, but as far as I can tell, vm.overcommit_memory is not one of them.
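As a rough sketch of the difference (net.core.somaxconn is just an example of a namespaced sysctl, and setting it per container may require a rootful setup):
# host-wide kernel parameter: set on the host, affects every container
sudo sysctl -w vm.overcommit_memory=1
# namespaced sysctls can be set per container via --sysctl; vm.* is not among them
podman run --rm --sysctl net.core.somaxconn=1024 alpine sysctl net.core.somaxconn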
Regarding your first edit: kube play is only a fancy way to "import" pods/containers described in Kubernetes YAML format; in the end it does not give you options you could not also set manually.
Regarding your second edit: I don't think you need to toggle it for development; it should be okay to just keep it enabled. Or keep it disabled altogether, because during development the warning and potentially failing writes should be acceptable.
Your option 3 only silences the warning, but the database writes can still fail: it only makes it appear to Redis that overcommitting is enabled, while overcommit is actually still disabled.
Your option 4 works with a privileged container, enabling overcommit for the whole host. As such it is a convenience option for when you can run privileged containers (e.g. during development), but it will fail when you cannot run privileged (e.g. in production).
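For completeness, a sketch of what that option boils down to (the alpine image is arbitrary; only do this where a privileged container is acceptable):
# a privileged container can write host-wide kernel parameters, since vm.* is not namespaced
podman run --rm --privileged alpine sysctl -w vm.overcommit_memory=1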
Suppose I have a Docker application (such as this one). The standard usage is to run docker run from the CLI; in this case, for macOS users, it would be:
docker run -it --rm bigdeddu/nyxt:2.2.1
Now, I would like to produce an app bundle or something similar so that users can double-click to launch this Docker application as a desktop application. It would be kind of a GUI shortcut to launch Docker.
How can I achieve that?
1 - Is there an existing solution for this? If so, which one?
2 - If not, what would a rough sketch of how to build one look like?
Thanks!
Docker was designed to encapsulate server processes. For servers, the CLI is a reasonable and often satisfactory interface.
If you want users to run a possibly interactive application, you may want to look at https://appimage.org/, although I am unsure whether that is available for macOS.
To get around these limitations, you could either think about creating an end-user-oriented GUI for Docker, or an implementation of AppImage for macOS.
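A low-tech sketch for macOS, assuming Docker Desktop is installed and running: a plain shell script with a .command extension is double-clickable in Finder (it still opens a Terminal window, so it is a shortcut rather than a real app bundle):
#!/bin/bash
# nyxt.command -- double-clickable launcher; make it executable with: chmod +x nyxt.command
docker run -it --rm bigdeddu/nyxt:2.2.1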
Let's assume a scenario where I'm using a set of docker run CLI commands to create a whole environment of containers and networks (bridge type in my case) and to connect containers to particular networks.
Everything works well as long as I want only one such environment on a single machine.
But what if I want, on the same machine, an environment similar to the one I've just created, but for a different purpose (testing)? I run into name collisions, since I can't create and start containers and networks with the same names.
So far I tried to start the second environment the same way I did the first, but prefixing all container and network names. That worked but had a flaw: in the application, all requests to URIs were broken, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
What I want to achieve is to have an exact copy of the first environment running on the same machine as a second environment that I could use to perform the application tests etc.
Is there any concept of namespaces or something similar to it in Docker?
A command that I could put before all the docker run etc. commands I use to create the environment, so that I'd have just two bash scripts that differ only by the namespace command at the beginning?
Can using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill, and will it add an additional set of troubles?
Perhaps there is a kind of --hostname option for the docker run command that would allow the container to be accessed from other containers by that name? Unluckily, --hostname only makes the container reachable by that name from the container itself, not from any other. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs (<scheme>://<magic-name>:<port-number>), so that creating a second environment with different container and network names would cause no problem, as long as that magic name is available in the environment's network.
My need for an exact copy of the environment comes from tests I want to run to check whether they also fail at the dependency level; I think this is a quite simple scenario in a continuous integration process. Are there any dedicated open-source solutions for what I want to achieve? I don't use Docker Compose, but a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using a virtual machine [...] be the solution to my problem? ... Isn't that overkill, and will it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname option for the docker run command that would allow the container to be accessed from other containers by that name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly better documented and affects a container that has already been created.
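A minimal sketch (the network, container, and alias names here are made up):
docker network create test-net
docker run -d --name test-web --network test-net --network-alias web nginx
# other containers on test-net can now reach this container as "web", whatever its real name is
docker run --rm --network test-net alpine ping -c 1 web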
Are there any dedicated open-source solutions for what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off your existing shell-script solution: it puts a name prefix on all of the networks and volumes it creates, and creates network aliases for each container matching its name in the YAML file. If your host volume mounts are relative to the current directory, then that content is fairly well isolated too. The one thing you can't easily do is launch each copy of the stack on separate host ports, so you have to resolve those conflicts.
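For example (a sketch; "dev" and "test" are arbitrary project names), two copies of the same docker-compose.yml can run side by side:
docker-compose -p dev up -d    # containers, networks and volumes get a "dev_" prefix
docker-compose -p test up -d   # an independent second copy with a "test_" prefix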
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.
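For reference, in Kubernetes that would look roughly like the following (environment.yaml is a placeholder for your stack's manifests):
kubectl create namespace test
kubectl apply -f environment.yaml --namespace test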
I'm new to Docker and was wondering if it was possible (and a good idea) to develop within a docker container.
I mean: create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container then becomes my main machine (for CLI-related work).
When I'm on the go (or when I buy a new machine), I can just push the container and pull it on my laptop.
This solves the problem of having to keep your dotfiles synchronized.
I haven't started using Docker yet, so is this realistic or something to avoid (disk space problems and/or push/pull timing issues)?
Yes, it is a good idea, with the correct set-up. You'll be running code as if it were in a virtual machine.
The Dockerfile configuration for creating a build system is not polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, after building your own image with the users and working environment you need, it won't be necessary to build it again; plus you can mount your own file system with the -v parameter of the run command, so you can have the files you are going to need both on your host and in the container. It's versatile.
> sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image>
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and instead mount the directory with the sources from the host into the container, as per Alex's suggestion. If you do so, beware of potential performance issues on Mac OS X with boot2docker.
I would not expect much from a workflow based on pushing images to sync between dev environments. IMHO, keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles in the same place, to build the image(s) and run the container(s).
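A rough sketch of that direction, with made-up names (a Dockerfile checked into the repo, plus the bind mount mentioned above):
docker build -t myproject-dev .   # build the dev image from the repo's Dockerfile
docker run -it --rm -v "$(pwd)":/src -w /src myproject-dev bash   # edit on the host, build and run inside the container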
I'm trying to automate the following loop with Docker: spawn a container, do some work inside of it (more than a single command), get some data out of the container.
Something along the lines of:
for ( i = 0; i < 10; i++ )
spawn a container
wget revision-i
do something with it and store results in results.txt
According to the documentation I should go with:
for ( ... )
docker run <image> <long; list; of; instructions; separated; by; semicolon>
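For concreteness, that would look something like this (the image name, URL, and processing step are placeholders):
for i in $(seq 0 9); do
  docker run <image> /bin/bash -c "wget http://example.com/revision-$i; ./process.sh revision-$i >> /home/results.txt"
done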
Unfortunately, this approach is neither attractive nor maintainable as the list of instructions grows in complexity.
Wrapping the instructions in a script as in docker run <image> /bin/bash script.sh doesn't work either since I want to spawn a new container for every iteration of the loop.
To sum up:
1. Is there any sensible way to run a complex series of commands as described above inside the same container?
2. Once some data are saved inside a container in, say, /home/results.txt, and the container returns, how do I get results.txt? The only way I can think of is to commit the container and tar the file out of the new image. Is there a more efficient way to do it?
Bonus: should I use vanilla LXC instead? I don't have any experience with it though so I'm not sure.
Thanks.
I eventually came up with a solution that works for me and greatly improved my Docker experience.
Long story short: I used a combination of Fabric and a container running sshd.
Details:
The idea is to use Fabric's local to spawn container(s) with sshd running, and then run commands on the containers using Fabric's run.
To give a (Python) example, you might have a Container class with:
1) a method to locally spawn a new container with sshd up and running, e.g.
local('docker run -d -p 22 your/image /usr/sbin/sshd -D')
2) set the env parameters needed by Fabric to connect to the running container - check Fabric's tutorial for more on this
3) write your methods to run everything you want in the container exploiting Fabric's run, e.g.
run('uname -on')
Oh, and if you like Ruby better you can achieve the same using Capistrano.
Thanks to #qkrijger (+1'd) for putting me on the right track :)
On question 2.
I don't know if this is the best way, but you could install SSH on your image and use that. For more information on this, you can check out this page from the documentation.
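As a sketch, assuming the image has sshd installed and SSH access configured (the host port 2222 and the file path are illustrative):
docker run -d -p 2222:22 your/image /usr/sbin/sshd -D
scp -P 2222 root@localhost:/home/results.txt .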
You posted 2 questions in one. Maybe you should put 2. in a different post. I will consider 1. here.
It is unclear to me whether you want to spawn a new container for every iteration (as you say first) or whether you want to "run a complex series of commands as described above inside the same container", as you say later.
If you want to spawn multiple containers, I would expect you to have a script on your machine handling that.
If you need to pass an argument to your container (like i): work is currently being done on passing arguments. See https://github.com/dotcloud/docker/pull/1015 (and https://github.com/dotcloud/docker/pull/1015/files for the documentation change, which is not online yet).