What is the difference between the lxc commands lxc start and lxc-start, etc.?

It's not clear from the documentation what the difference is between lxc start and lxc-start, and their --help output shows different usage and switches. Tutorials use both variants, but delving a bit deeper into the documentation, it seems that lxc-start is suited to running an application inside the container, whereas lxc start is for starting a container. Either way it's not clear, since the documentation doesn't explain whether the commands are links to one another or completely different. I am erring on the side of "they are different, as in different binaries and different code paths, but eventually converge under the hood with a few modifications".
Documentation: http://manpages.ubuntu.com/manpages/bionic/man1/lxc-start.1.html
$ lxc-start --help
Usage: lxc-start --name=NAME -- COMMAND
lxc-start start COMMAND in specified container NAME
Options :
-n, --name=NAME NAME of the container
-d, --daemon Daemonize the container (default)
-F, --foreground Start with the current tty attached to /dev/console
-p, --pidfile=FILE Create a file with the process id
-f, --rcfile=FILE Load configuration file FILE
-c, --console=FILE Use specified FILE for the container console
-L, --console-log=FILE Log container console output to FILE
-C, --close-all-fds If any fds are inherited, close them
If not specified, exit with failure instead
Note: --daemon implies --close-all-fds
-s, --define KEY=VAL Assign VAL to configuration variable KEY
--share-[net|ipc|uts]=NAME Share a namespace with another container or pid
Common options :
-o, --logfile=FILE Output log to FILE instead of stderr
-l, --logpriority=LEVEL Set log priority to LEVEL
-q, --quiet Don't produce any output
-P, --lxcpath=PATH Use specified container path
-?, --help Give this help list
--usage Give a short usage message
--version Print the version number
Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
See the lxc-start man page for further information.
Documentation: http://manpages.ubuntu.com/manpages/bionic/man7/lxc.7.html
$ lxc start --help
Usage: lxc start [<remote>:]<container> [[<remote>:]<container>...]
Start containers.
Options:
--debug (= false)
Enable debug mode
--force-local (= false)
Force using the local unix socket
--no-alias (= false)
Ignore aliases when determining what command to run
--stateful (= false)
Store the container state (only for stop)
--stateless (= false)
Ignore the container state (only for start)
--verbose (= false)
Enable verbose mode

Both LXC and LXD are implementations of Linux Containers.
LXC and LXD are related, both developed by the same team at https://linuxcontainers.org/
LXC predates LXD.
Both are based on the common liblxc library.
LXC is written in C while LXD is written in the Go language.
LXD comes with a hypervisor (container manager) that makes it more user-friendly for most users.
If you are a new user trying to decide which to use, go with LXD.
References: https://blog.simos.info/comparison-between-lxc-and-lxd/

The post at https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24 states:
lxd
lxd is the LXD daemon. For interacting with the daemon (to create and manage containers, for instance), you want to use the lxc command. You generally don’t want to invoke lxd directly – unless you need to run lxd init or something; check man lxd or lxd --help for more info on what you can do with running lxd directly, but once you get it running from your init system, you probably won’t need to invoke it directly again unless you are debugging LXD itself.
The lxc command is the LXD front-end (“LXD Client” is how I think of it).
However, if you’re trying to use LXD, you should avoid using any commands that start with lxc- (that’s lxc, followed by a short hyphen)! These commands are associated with LXC.
lxc
LXC commands start with lxc- (that’s lxc followed by a short hyphen). If there’s no hyphen, just the literal command lxc, that’s associated with LXD.

In very crude terms, 'LXC' is the container platform itself, managed by the 'lxc-*' tools, and provides a very basic set of functions.
By contrast, 'LXD' is an orchestration tool built on top of LXC.
Again, this is a very crude analogy and does not cover the nuances and specifics.
If all you need is a small set of persistent, isolated containers (a cloud instance, a gaming server) and the ability to finely tune the behavior of each long-running instance, LXC is likely all you will ever need.
If you want to create and destroy containers daily by the hundred, using templates and automation tools, look into LXD.
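To make the distinction concrete, here is a rough side-by-side sketch of the two workflows (the container name "web" is made up for illustration; exact flags can vary by version):
# LXC: one hyphenated, low-level tool per action
lxc-create -n web -t download    # create a container named "web" from the download template
lxc-start -n web                 # start it
lxc-attach -n web -- ps aux      # run a command inside it
# LXD: the single "lxc" client talking to the lxd daemon
lxc launch ubuntu:22.04 web      # create and start in one step
lxc exec web -- ps aux           # run a command inside it
lxc stop web                     # stop it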

Related

Can Podman change host environments per container or does it behave exactly like Docker?

I'm learning Podman so I apologize for silly mistakes.
Docker Redis has a notorious problem where the database might fail to write if in the container /proc/sys/vm/overcommit_memory is not set to 1 (the default value is 0).
I know that Podman doesn't use a daemon, and I thought this might allow it to set specific host values per container. Docker doesn't allow this: one must change the host variables, and all containers created afterwards will copy them. So if you set a different value for a host variable, it is applied to all subsequent containers; there's no way to apply it only to a specific one.
The documentation says that, for --env, "Any environment variables specified will override previous settings". But alas, I think it behaves identically to Docker and doesn't allow one to change host settings per container. I tried podman run ... --env 'overcommit_memory=1' and it made no difference at all. I guess this approach only makes sense for general environment variables, not for kernel vm ones.
But I was curious: is it possible at all to change host env per container in Podman? And in specific, is there any way to change /proc/sys/vm/overcommit_memory per container?
EDIT: can podman play kube be of any help?
EDIT2: one might wonder why I don't wrap the podman run command with echo 1 > overcommit_memory beforehand and revert with echo 0 > overcommit_memory afterwards, but I need to use a Windows machine to develop this and I think that wouldn't be possible
EDIT 3: Eureka! Found a solution [not really, see criztovyl's answer] to my original problem. I just need to create a dir (say, mkdir vm), add an overcommit_memory file to it with content equal to 1, and add -v vm:/proc/sys/vm:rw to the podman run instruction. This way a volume is bound to the container in rw mode and overrides the value of overcommit_memory. But I'm still curious whether there's a more straightforward way to change that setting
EDIT 4: Actually, COPY init.sh is the best option so far https://r-future.github.io/post/how-to-fix-redis-warnings-with-docker/ [again, not really, see criztovyl's answer below]
As Richard says, it does not seem to be currently possible to set vm.overcommit_memory per-container, you must set it at the host via sysctl (not --sysctl).
This applies to both Podman and Docker: for the "actual" container, both ultimately rely on the same kernel APIs (namespaces and cgroups).
Note that you say "changing the host env", which can be misinterpreted as changing the host's environment variables. Overriding environment variables is possible, as you tried with --env.
But memory overcommit is a kernel parameter, which you must set via sysctl, not via environment variables.
Certain sysctl options can be overridden per container via --sysctl, but vm.overcommit_memory is, as far as I can tell, not such an option.
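To illustrate the distinction, a short sketch (the namespaced key below is just one example of what --sysctl accepts; the commands are illustrative, not exhaustive):
# Set overcommit on the host; this affects all containers:
sudo sysctl vm.overcommit_memory=1
# Namespaced sysctls can be set per container:
podman run --rm --sysctl net.ipv4.ip_unprivileged_port_start=0 redis
# A vm.* key is rejected, because it is not namespaced:
podman run --rm --sysctl vm.overcommit_memory=1 redis    # fails with an error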
Regarding your first edit: kube play is only a fancy way to "import" pods/containers described in Kubernetes YAML format. In the end it does not offer any option you could not also set manually.
Regarding your second edit: I don't think you need to toggle it during development; it should be okay to just keep it enabled. Or keep it disabled altogether, since during development the warning and potentially failing writes should be acceptable.
Your option 3 only silences the warning; database writes can still fail. It merely makes it appear to Redis that overcommitting is enabled, while overcommit is actually still disabled.
Your option 4 works with a privileged container, enabling overcommit for the whole host. As such it is a convenience option for when you can run privileged containers (e.g. during development), but it will fail when you cannot run privileged (e.g. in production).

Docker namespace, docker on virtualbox, mirror environment

Let's assume a scenario where I'm using a set of CLI docker run commands to create a whole environment of containers and networks (bridge type in my case), and to connect containers to particular networks.
Everything works well as long as I want only one such environment on a single machine.
But if I want to have on the same machine a similar environment to the one I've just created, but for a different purpose (testing), I run into name collisions, since I can't create and start containers and networks with the same names.
So far I tried to start the second environment the same way I did the first, but prefixing all container and network names. That worked, but had a flaw: in the application, all requests to URIs broke, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
What I want to achieve is to have an exact copy of the first environment running on the same machine as the second environment that I could use to perform the application tests etc.
Is there any concept of namespaces or something similar to it in Docker?
A command that I could use before all the docker run etc. commands I use to create the environment, so that I could have just two bash scripts that differ only by the namespace command at their beginning?
Can using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill? Will it add an additional set of troubles?
Perhaps there is a kind of --hostname for the docker run command that will allow other containers to access the container by that name? Unluckily, --hostname only makes the container reachable under that name from the container itself, not from any other. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs (<scheme>://<magic-name>:<port-number>), so that creating a second environment with different container and network names causes no problem, as long as that magic-name is available on the environment's network.
My need for an exact copy of the environment comes from the tests I want to run, checking whether they also fail at the dependency level; I think this is quite a simple scenario in a continuous integration process. Are there any dedicated open-source solutions for what I want to achieve? I don't use Docker Compose, but a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using virtual machine [...] be the solution to my problem? ... Isn't that an overkill, will it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname for docker run command that will allow to access the container from other container by using this name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly more documented and affects a container that's already been created.
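As a quick sketch (the names here are made up for illustration): two copies of the same service on separate bridge networks, each reachable under the same alias from within its own network:
docker network create prod-net
docker network create test-net
# Unique container names, but the same alias on each network:
docker run -d --name prod-db --network prod-net --network-alias db postgres
docker run -d --name test-db --network test-net --network-alias db postgres
# A container attached to test-net resolves "db" to test-db:
docker run --rm --network test-net busybox nslookup db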
Are there any dedicated open source solutions to what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off your existing shell-script solution: it puts a name prefix on all of the networks and volumes it creates, and creates network aliases for each container matching its name in the YAML file. If your host volume mounts are relative to the current directory, then that content is fairly isolated too. The one thing you can't easily do is launch each copy of the stack on separate host ports, so you have to resolve those conflicts yourself.
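As a rough sketch (the service and image names are hypothetical), a single compose file can be instantiated twice under different project names, which isolates container names, networks, and aliases:
# docker-compose.yml
version: "3"
services:
  db:
    image: postgres
  app:
    image: my-app                       # hypothetical image
    environment:
      DB_URL: postgres://db:5432/app    # "db" resolves within each project's own network
Then two isolated copies of the same stack can be started with:
docker-compose -p prod up -d
docker-compose -p test up -d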
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.
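For a sense of scale, the Kubernetes version is a couple of commands (app.yaml is a hypothetical manifest): the same manifests deployed into two namespaces do not collide, and service DNS names are scoped per namespace:
kubectl create namespace prod
kubectl create namespace test
kubectl -n prod apply -f app.yaml
kubectl -n test apply -f app.yaml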

Intro to Docker for FreeBSD Jail User - How and should I start the container with systemd?

We're currently migrating our room server to the cloud for reliability, but our provider doesn't offer a FreeBSD option. Although I'm prepared to pay for and upload a custom system image for deployment, I nonetheless want to learn how to start an application system instance using Docker.
In a FreeBSD jail, what I did was extract an entire base.txz directory hierarchy as the system content into /usr/jail/app, then run pkg -r /usr/jail/app install apache24 php perl; then I configured /etc/jail.conf to start the /etc/rc script in the jail.
I followed the official FreeBSD Handbook, and this is generally what I've worked out so far.
But Docker is another world entirely.
To build a Docker image, there are two options: a) import from a tarball, or b) use a Dockerfile. The latter lets you specify a "CMD", which is the default command to run, but:
Q1. Why isn't it available from a)?
Q2. Where is information like "CMD" and "ENV" stored? In the image? In the container?
Q3. How do I start a GNU/Linux system in a container? Do I just run systemd and let it figure out the rest from its configuration? Do I need to pass it some special arguments or envvars?
You should think of a Docker container as packaging around a single running daemon. The ideal Docker container runs one process and one process only. systemd in particular is so heavyweight and invasive that it's actively difficult to run inside a Docker container; if you need multiple processes in a container, a lighter-weight init system like supervisord can work for you, but that's the exception more than the standard packaging.
Docker has an official tutorial on building and running custom images which is worth a read through; this is a pretty typical use case for Docker. In particular, best practice is to write a Dockerfile that describes how to build an image and check it into source control. Containers should avoid having persistent data if they can (storing everything in an external database is ideal); if you change an image, you need to delete and recreate any containers based on it. If local data is unavoidable then either Docker volumes or bind mounts will let you keep data "outside" the container.
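As a rough sketch of what the jail setup described in the question might look like as a Dockerfile (the base image, package names, and content path are illustrative assumptions, not a canonical recipe):
FROM debian:bookworm
RUN apt-get update && apt-get install -y --no-install-recommends \
        apache2 php libapache2-mod-php perl \
    && rm -rf /var/lib/apt/lists/*
# Site content is baked into the image; mutable data belongs in a volume or database.
COPY ./site/ /var/www/html/
EXPOSE 80
# Run the single daemon in the foreground as PID 1; no init system needed.
CMD ["apachectl", "-D", "FOREGROUND"]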
While Docker has several other ways to create containers and images, none of them are as reproducible. You should avoid the import, export, and commit commands; and you should only use save and load if you can't use or set up a Docker registry and are forced to move images between systems via tar files.
On your specific questions:
Q1. I suspect the reason the non-docker-build paths to create images don't easily let you specify things like CMD is just an implementation detail: if you look at the docker history of an image, you'll see the CMD winds up being its own layer. Don't worry about it and use a Dockerfile.
Q2. The default CMD, any set ENV variables, and other related metadata are stored in the image alongside the filesystem tree. (Once you launch a container, it has a normal Unix process tree, with the initial process being pid 1.)
Q3. You don't "start a system" in a container. Generally you run one process or service per container and manage their lifecycles independently.

How do I change the default docker container location? [duplicate]

This question already has answers here: How to change the docker image installation directory? (20 answers)
When I run docker, downloaded docker images seem to be stored somewhere under /var/lib/docker.
Since disk space is limited on this directory, and I'm provisioning docker to multiple machines at once: is there a way to change this default location, e.g. to /mnt/hugedrive/docker/?
Working solution as of Docker v18.03
I found @Alfabravo's comment to work in my situation, so credit to them, and upvoted.
However, I think it adds value to provide an answer here that elaborates on it:
Ensure docker is stopped (or not started in the first place, e.g. if you've just installed it)
(e.g. as root user):
systemctl stop docker
(or sudo systemctl stop docker if not root but your user is a sudoer, i.e. belongs to the sudo group)
By default, the daemon.json file does not exist, because it is optional; it is added to override the defaults (reference: see the answer to "Where's docker's daemon.json? (missing)").
So new installs of docker, and setups that have never modified it, won't have it, so create it:
vi /etc/docker/daemon.json
Then add the following to tell docker to put all its files in this folder, e.g.:
{
"graph":"/mnt/cryptfs/docker"
}
and save.
(Note: according to Stack Overflow user Alireza Mohamadi's comment beneath this answer: "graph option is deprecated in v17.05.0. Use data-root instead." I haven't tried this myself yet, but will update the answer when I have.)
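For newer Docker versions, the equivalent daemon.json would presumably use the data-root key instead (a sketch based on the note above; untested here):
{
"data-root": "/mnt/cryptfs/docker"
}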
Now start docker:
systemctl start docker
(if root or prefix with sudo if other user.)
And you will find that docker has now put all its files in the new location, in my case under /mnt/cryptfs/docker.
This answer from @Alfabravo is also supported by this answer to the problem: Docker daemon flags ignored.
Notes and thoughts on Docker versioning
My host platform that is running docker is Ubuntu Linux 16.04.4 LTS 64bit.
I would therefore assume that this solution applies from v18.03 onwards, as well as at the current time of writing. As with the other answers, there is the possibility that it might not work for some future version of Docker, if the Docker developers decide to change things in this area. But for now it works with v18.03, at least in my case; I hope you also find it works for you.
Optional Housekeeping tip:
If you had files in the original location /var/lib/docker and you know you definitely don't need them anymore (i.e. all the data within them, such as databases inside containers and files, is backed up or held in another form), you can delete them, to keep your machine tidy.
What did NOT work - other answers here (unfortunately):
Other solutions here did not work for my situation with the version of docker I am using (at the time of writing: Docker v18.03).
Also note (as @AlfaBravo correctly points out in their comment on my answer) that the other answers may well have worked for different or earlier versions of docker.
In all cases when attempting the other answers, I followed the process of stopping docker before applying the solution and starting it up again afterwards, as required:
https://stackoverflow.com/a/47604857/227926 - @Gerald Sabu M's solution to alter /lib/systemd/system/docker.service, changing the line to: ExecStart=/usr/bin/docker daemon -g /mnt/hugedrive/docker/ - Outcome for me: docker still put its files in the default, original location: /var/lib/docker
I tried @Fai's comment, but the file /etc/systemd/system/docker.service.d/exec_start.conf does not exist on my system, so it must be particular to their setup.
I also tried @Hatem Jaber's answer https://stackoverflow.com/a/32072042/227926 - but again, as with @Gerald Sabu M's answer, docker still puts the files in the original default location of /var/lib/docker.
(I would, of course, like to thank them for their efforts.)
Why I am changing the default docker location: encrypted file system for GDPR purposes:
As an aside, and perhaps useful to you: I'm running docker inside an encrypted file system (as part of a GDPR initiative), in order to provide encryption of the data-at-rest state (also known as encryption-at-rest) and of data in use (see definitions).
The process of defining a GDPR data map includes, among many other things, looking at the systems where sensitive data is stored (Reference 1: GDPR Data Map Template: An easy to use self-assessment tool for understanding how data moves through your organisation; Reference 2: Data mapping: Where to start for GDPR compliance). By encrypting the filesystem where the database and application code are stored, as well as the swap file, the risk of residual data being left behind when deleting or moving a VM can be eliminated.
I've made use of some of the steps defined in the following links, credit to them:
Encrypting Docker containers on a Virtual Server
How To: Linux Hard Disk Encryption With LUKS [cryptsetup Command]
I would note that a further step of encryption is recommended: encrypting the database fields themselves, at least the sensitive fields, i.e. user data. You can probably find out about the various levels of support for this in popular database systems. Field encryption provides defence against malicious intrusion and leakage of data while the web application is running.
Also, as another aside, to cover the 'Data-In-Motion' state of data, I am using free Let's Encrypt certificates.
The best solution is to start the docker daemon (dockerd) with the correct data root path. According to the official documentation, as of Feb 2019 there are no --graph or -g options; they were renamed to the single argument --data-root.
https://docs.docker.com/engine/reference/commandline/dockerd/
So you should modify your /lib/systemd/system/docker.service so that ExecStart takes that argument into account.
An example could be:
ExecStart=/usr/bin/dockerd --data-root /mnt/data/docker -H fd://
Then you should restart your docker daemon. (Keep in mind that you will no longer see your containers and images; copy the data from your old folder to the new one if you want to keep everything.)
service docker restart
Keep in mind that if you restart the docker daemon your containers will be stopped, and only those with a correct restart policy will be restarted.
Tested on Ubuntu 16.04.5 Docker version 18.09.1, build 4c52b90
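As an alternative to editing the packaged unit file directly (which a package upgrade may overwrite), the usual systemd approach is a drop-in override; a rough sketch (the drop-in file name is an arbitrary choice):
# /etc/systemd/system/docker.service.d/data-root.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --data-root /mnt/data/docker -H fd://
The empty ExecStart= line clears the packaged command before setting the new one. Then reload systemd and restart docker:
systemctl daemon-reload
systemctl restart docker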
You can start the Docker daemon using the -g option and the directory of your choice. This sets the root of the Docker runtime.
With version 1.8, it should be something like:
docker daemon -g /path/to/directory
With earlier versions, it would be:
docker -d -g /path/to/directory
From the man page:
-g, --graph=""
Path to use as the root of the Docker runtime. Default is /var/lib/docker.
You can perform the following steps to modify the default docker image location, i.e. /var/lib/docker:
Stop Docker
# systemctl stop docker
# systemctl daemon-reload
Change the ExecStart line in /lib/systemd/system/docker.service:
FROM:
ExecStart=/usr/bin/dockerd
TO:
ExecStart=/usr/bin/docker daemon -g /mnt/hugedrive/docker/
Create a new directory and rsync the current docker data to the new directory.
# mkdir /mnt/hugedrive/docker/
# rsync -aqxP /var/lib/docker/ /mnt/hugedrive/docker/
Now the Docker daemon can be started safely:
# systemctl start docker
In /etc/default/docker, or whatever location this file exists at on your system, change the following to something like this:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -g /drive/location"
If you have issues and it is ignored, apply this solution: Docker Opts in Etc Default Docker Ignored

Sandbox command execution with docker via Ajax

I'm looking for help in this matter: what options do I have if I want to sandbox the execution of commands that are typed into a website? I would like to create an online interpreter for a programming language.
I've been looking at Docker; how would I use it? Is this the best option?
codecube.io does this. It's open source: https://github.com/hmarr/codecube
The author wrote up his rationale and process. Here's how the system works:
A user types some code into a box on the website, and specifies the language the code is written in
They click “Run”, the code is POSTed to the server
The server writes the code to a temporary directory, and boots a docker container with the temporary directory mounted
The container runs the code in the mounted directory (how it does this varies according to the code’s language)
The server tails the logs of the running container, and pushes them down to the browser via server-sent events
The code finishes running (or is killed if it runs for too long), and the server destroys the container
The Docker container's entrypoint is entrypoint.sh, which inside a container runs:
prog=$1
<...create user and set permissions...>
sudo -u codecube /bin/bash /run-code.sh $prog
Then run-code.sh checks the extension and runs the relevant compiler or interpreter:
extension="${prog##*.}"
case "$extension" in
"c")
gcc $prog && ./a.out
;;
"go")
go run $prog
;;
<...cut...>
The server that accepts the code examples from the web and orchestrates the Docker containers was written in Go. Go turned out to be a pretty good choice for this, as much of the server relied on concurrency (tailing logs to the browser, waiting for containers to die so cleanup could happen), which Go makes joyfully simple.
The author also details how he implemented resource limiting and isolation, and his thoughts on security.
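As a rough sketch of the kind of invocation such a server might make per request (all flags, limits, and the image name are illustrative choices, not codecube's actual settings):
tmpdir=$(mktemp -d)
cp prog.c "$tmpdir"
# Run the untrusted code with no network, capped resources, and a hard timeout:
timeout 10 docker run --rm \
    --net=none \
    --memory=64m \
    --cpus=0.5 \
    --pids-limit=64 \
    -v "$tmpdir":/code \
    codecube-runner /code/prog.c    # hypothetical runner image
rm -rf "$tmpdir"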
