Installing python-pip in a Docker container doesn't work - docker

I'm trying to configure python-pip for my Docker container(s), but it gives me an error that I don't have permission. After I use sudo, it gives me another error.
I've tried using sudo for root permission. I also tried both the exec and run commands.
sudo docker container run davidrazd/discord-node-10 sudo apt-get install python-pip
sudo docker container exec davidrazd/discord-node-10 sudo apt-get install python-pip
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"sudo\": executable file not found in $PATH": unknown.
And without sudo:
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?

Here are three reasons I think you should reconsider your use of Docker and make sure you're getting the most out of containers.
1. You have an image called ...-node-10. You shouldn't need specific images for specific nodes: they should all run from the same image and, to the extent necessary, be configured at runtime (this should be kept to a minimum, usually things like discovery or dynamic config). If 10 is a version, you should be using tags to version it, not the image name itself.
2. There's a valid use case for a one-off package install in a running container via exec (knowing that the install will disappear when the container stops), but docker run ... apt-get install really doesn't make sense: the result is thrown away as soon as that command finishes. As @DavidMaze points out in a comment on the question, installs that are meant to persist should always be part of the Dockerfile. If you're installing packages into long-lived containers: don't install into running containers, and don't keep containers long-lived. The worst thing that happens to folks with Docker is that they treat containers as long-lived virtual machines, when they should treat them as ephemeral runtime environments that maintain minimal state, are essentially immutable (and therefore easily replaced by the next version), have images containing all their install-time dependencies, and store any long-term data on a separate Docker volume (which is hopefully itself backed up).
3. You're likely configuring a user for your application to run as, via USER in your Dockerfile, but then attempting to run privileged commands without switching the user back. So you reach for sudo, but sudo doesn't make much sense in the context of a container and typically isn't installed (and if it were, you'd have to somehow set up a user to sudo as, so it wouldn't make your life any easier, just harder). You can set your user to root at docker exec time with docker exec -u root .... But you shouldn't really need to do this: all setup should be done in the Dockerfile, and you should ship a new image to change versions (see the sketch at the end of this answer).
I'm a very heavy user of Docker myself, and build nearly all applications in it because it so vastly simplifies expressing their runtime requirements, so I think I speak from experience here. You won't get any of the value of Docker if your containers aren't immutable; you'll just have the same problems as you would without containers, but with less management tooling. Docker obviates most of the management at runtime: capitalize on that, don't fight it.
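As a minimal sketch of that Dockerfile-first approach (the base image and the application entry point here are assumptions, since the original Dockerfile isn't shown):
FROM node:10
# Install python-pip once, at build time, while the build still runs as root
RUN apt-get update \
    && apt-get install -y --no-install-recommends python-pip \
    && rm -rf /var/lib/apt/lists/*
# Drop privileges for the application itself
USER node
CMD ["node", "index.js"]
Build it with docker build -t davidrazd/discord-node:10 . so that the 10 lives in the tag rather than the image name; every container started from this image then has pip available without any install at runtime.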

Related

docker how to disable resetting data after restart

I'm new to Docker.
I ran:
docker pull *docker from dockerhub*
docker run *image*
sudo apt-get install nano
And when I restart this image, nano is not installed.
Is it possible to turn off resetting data in a Docker container?
The container filesystem is intrinsically temporary. If you docker run -d image twice, the two copies will each start from a fresh copy of the container filesystem and not share anything. There is no option to disable this.
Correspondingly, it is usually a mistake to install software in an interactive shell in a container, since that installation will be lost as soon as the container exits. It's usually unnecessary to install interactive editors like nano or vim, again since they can't make permanent changes. It's better to install your application and only the specific supporting programs it needs in a Dockerfile.
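For instance, a minimal Dockerfile (the base image here is an assumption) that bakes nano into the image, so it exists in every container started from that image:
FROM ubuntu:20.04
# Installed at build time: part of the image, not a change to one container
RUN apt-get update \
    && apt-get install -y --no-install-recommends nano \
    && rm -rf /var/lib/apt/lists/*
After docker build -t myimage ., every docker run -it myimage has nano available, no matter how often containers are stopped and recreated.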
(There is a Docker command that can create a new image from a container, but this is pretty much never considered a best practice. It's hard to specify things like the command the resulting image should run, a chain of images made this way will grow over time, and it's all but impossible to take security updates from the original image. You may also run afoul of licensing or corporate source-tracking requirements with this approach.)

Keycloak Docker image basic unix commands not available

I have set up my Keycloak identification server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many, many more) aren't found (the same goes for apt-get and yum, which I tried to use to install packages, without success).
According to this question, it seems that the underlying OS of the container (Redhat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use this command to do any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please propose any workaround for my case? I just need to use a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Just exec'ing into the container without --user will do so as the user jboss, which has fewer privileges.
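For example, a quick sketch (the package name vim-minimal is an assumption; check what the UBI repositories actually provide):
docker exec -u root -it <container-name> /bin/bash
# Now running as root inside the container, so microdnf can write its cache
microdnf install -y vim-minimal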
It looks like you are trying to use an approach from the non-Docker (old school) world in the Docker world, and that's not right. Usually you don't need to go into the container and edit any config file there; that change would very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
An example of how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true enables an option that lets you run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option: you would have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
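Putting both together, a docker run sketch (the port mapping and admin credentials are illustrative; KEYCLOAK_USER, KEYCLOAK_PASSWORD, and PROXY_ADDRESS_FORWARDING are environment variables documented on the image's Docker Hub page):
docker run -d --name keycloak \
  -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=change-me \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0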

How to convert a VM image to a Dockerfile?

For work purposes, I have an OVA file which I need to convert to a Dockerfile.
Does someone know how to do it?
Thanks in advance
There are a few different ways to do this. They all involve getting at the disk image of the VM. One is to mount the VDI, then create a Docker image from that (see other Stack Overflow answers). Another is to boot the VM and copy the complete disk contents, starting at root, to a shared folder. And so on. We have succeeded with multiple approaches. As long as the disk in the VM is compatible with the kernel underlying the running container, creating a Docker image that has the complete VM disk has worked.
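As a rough sketch of the copy-the-disk-contents route (the device and mount paths here are hypothetical), docker import can turn a root filesystem tarball into a base image:
# Mount the VM's root filesystem on the host (device path is hypothetical)
sudo mount /dev/sdb1 /mnt/vm-root
# Stream the whole filesystem into a new Docker image
sudo tar -C /mnt/vm-root -c . | docker import - converted-vm:latest
# Sanity check: open a shell in the imported image
docker run -it converted-vm:latest /bin/bash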
Yes, it is possible to use a VM image and run it in a container. Many of our customers have been using this project successfully: https://github.com/rancher/vm.git.
RancherVM allows you to create VMs that run inside of Kubernetes pods, called VM Pods. A VM pod looks and feels like a regular pod. Inside of each VM pod, however, is a container running a virtual machine instance. You can package any QEMU/KVM image as a Docker image, distribute it using any Docker registry such as DockerHub, and run it on RancherVM.
Recently this project has also been made compatible with Kubernetes. For more information: https://rancher.com/blog/2018/2018-04-27-ranchervm-now-available-on-kubernetes
Step 1
Install ShutIt as root:
sudo su -
(apt-get update && apt-get install -y python-pip git docker) || (yum update && yum install -y python-pip git docker which)
pip install shutit
The prerequisites are python-pip, git and docker. The exact names of these in your package manager may vary slightly (e.g. docker-io or docker.io) depending on your distro.
You may need to make sure the docker server is running too, e.g. with 'systemctl start docker' or 'service docker start'.
Step 2
Check out the copyserver script:
git clone https://github.com/ianmiell/shutit_copyserver.git
Step 3
Run the copy_server script:
cd shutit_copyserver/bin
./copy_server.sh
There are a couple of prompts: one to correct perms on a config file, and another to ask what Docker base image you want to use. Make sure you use one as close to the original server as possible.
Note that this requires a version of Docker that has the 'docker exec' option.
Step 4
Run the resulting image:
docker run -ti copyserver /bin/bash
You are now in a practical facsimile of your server within a docker container!
Source
https://zwischenzugs.com/2015/05/24/convert-any-server-to-a-docker-container/
In my opinion it's totally impossible. But you can create a Dockerfile with the same OS and mount your data.

How to run ArangoDB on OpenShift?

While different database images are available for OpenShift Container Platform users as explained here, others, including ArangoDB, are not yet available. I tried to install the official ArangoDB container from Docker Hub by running the following command via the OpenShift CLI:
oc new-app arangodb
but it does not run successfully throwing the following error:
chown: changing ownership of '/var/lib/arangodb3': Operation not permitted
It is related to permissions. By default, OpenShift runs containers using an arbitrarily assigned user ID and not as root, as documented in the Support Arbitrary User IDs section. In the Dockerfile, I tried to change the permissions of directories and files that may be written to by processes in the image, so that they are owned by the root group and readable/writable by that group:
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
This time it throws the following error:
FATAL cannot set uid 'arangodb': Operation not permitted
Looking at the script that initializes arangodb (the arangod script), arangodb runs as arangodb:arangodb, which should (or may!) be arangodb:0 in the case of OpenShift.
Now, I am really confused. I've read and searched a lot:
Getting any Docker image running in your own OpenShift cluster
User namespaces have arrived in Docker!
new-app fails on some official Docker images due to chown permissions
I also tried reverse engineering the mongodb image provided by OpenShift. But in the end, I got more confused.
I also do not want to ask cluster administrators to allow the project to run as root using:
# oadm policy add-scc-to-user anyuid -z default
The more I read, the more confused I get. Has anybody done this before who can provide a Docker container I can run on OpenShift?
With ArangoDB 3.4 the Docker image has been migrated to an Alpine-based image, and its core now shouldn't invoke chown/chgrp anymore when invoked in the right way.
This should be one of the requirements to get it working on Openshift.
If you still have problems running ArangoDB on OpenShift, use the GitHub issue tracker to report the specific problems you see. You may also want to propose changes to the Dockerfile so it can be improved.
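As a hedged starting point (the image tag and environment variable below are assumptions to verify against the official image documentation), pointing oc at the 3.4 image explicitly might look like:
oc new-app --docker-image=arangodb:3.4 \
  -e ARANGO_ROOT_PASSWORD=change-me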

Best practice to apply patch to a modified docker container?

So let's say we just spun up a Docker container and allow users to SSH into it by mapping port 22:22.
A user then installs some software like git or whatever they want. So that container is now polluted.
Later on, suppose I want to apply some patches to the container; what is the best way to do so?
Keep in mind that the user has modified contents in the container, including some system-level directories like /usr/bin, so I cannot simply replace the running container with another image.
To give you some real-life use cases: take Nitrous.io as an example. I saw they are using Docker containers to serve as users' VMs, so users can install things like global Node.js packages. How do they update/apply patches to containers like a pro? Similar platforms like Codeanywhere might work the same way.
I tried googling this but failed. I am not 100 percent sure whether this is a duplicate, though.
User then installed some software like git or whatever they want ... I want to apply some patch to the container, what is the best way to do so?
The recommended way is to plan your updates through the Dockerfile. However, if you are unable to achieve that, then any additional changes or new packages installed in the container should be committed before the container exits.
For example, below is a simple image that does not have vim installed.
$ docker images
REPOSITORY          TAG       IMAGE ID       CREATED         VIRTUAL SIZE
pingimg             1.5       1e29ac7353d1   4 minutes ago   209.6 MB
Start the container and check if vim is installed.
$ docker run -it pingimg:1.5 /bin/bash
root@f63accdae2ab:/#
root@f63accdae2ab:/# vim
bash: vim: command not found
Install the required packages, inside the container:
root@f63accdae2ab:/# apt-get update && apt-get install -y vim
Back on the host, commit the container with a new tag before stopping or exiting the container.
$ docker ps -a
CONTAINER ID   IMAGE         COMMAND       CREATED              STATUS              PORTS   NAMES
f63accdae2ab   pingimg:1.5   "/bin/bash"   About a minute ago   Up About a minute           modest_lovelace
$ docker commit f63accdae2ab pingimg:1.6
378e0359eedfe902640ff71df4395c3fe9590254c8c667ea3efb54e033f24cbe
$ docker stop f63accdae2ab
f63accdae2ab
Now docker images should show both tags (versions) of the image. Note that the updated image shows a larger size.
$ docker images
REPOSITORY          TAG       IMAGE ID       CREATED          VIRTUAL SIZE
pingimg             1.6       378e0359eedf   43 seconds ago   252.8 MB
pingimg             1.5       1e29ac7353d1   4 minutes ago    209.6 MB
Re-start from the recently committed tag, and you can see that vim is installed:
$ docker run -it pingimg:1.6 /bin/bash
root@63dbbb8a9355:/# which vim
/usr/bin/vim
Verify the contents of the previous version, and you should notice that vim is still missing:
$ docker run -it pingimg:1.5 /bin/bash
root@99955058ea0b:/# which vim
root@99955058ea0b:/# vim
bash: vim: command not found
Hope this helps!
There's a whole branch of software called configuration management that seeks to solve this issue, with solutions such as Ansible and Puppet. Whilst designed with VMs in mind, it is certainly possible to use such solutions with containers.
However, this is not the Docker way. Rather than patch a Docker container, throw it away and replace it with a new one. If you need to install new software, add it to the Dockerfile and build a new container as per @askb's solution. By doing things this way, we can avoid a whole set of headaches (similarly, prefer docker exec to installing ssh in containers).
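A minimal sketch of that replace-don't-patch cycle (the image, container, and volume names are illustrative):
# 1. Add the patch or new package to the Dockerfile, then rebuild under a new tag
docker build -t myapp:1.1 .
# 2. Swap the old container for one based on the new image; long-term data
#    lives on a named volume, so it survives the replacement
docker stop myapp && docker rm myapp
docker run -d --name myapp -v myapp-data:/data myapp:1.1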
