Install boto3 from a docker container

I am using Docker. In one of my containers I want to use boto3, so I ran this command from inside the container:
RUN apt-get install boto3
but it showed me an error:
bash: RUN: command not found
I also tried sudo apt-get install boto3, but it showed another error:
bash: sudo: command not found
So can someone tell me how to install a package in a docker container?
Update
When I run docker ps -a
I get this
CONTAINER ID   IMAGE       COMMAND                 CREATED       STATUS       PORTS                               NAMES
                                                                                                                  distracted_rubin
6a8b04e81122   odoo:11.0   "/entrypoint.sh odoo"   6 weeks ago   Up 4 hours   8071/tcp, 0.0.0.0:18069->8069/tcp   odoo
As you can see, my container id is 6a8b04e81122. I used this command to go inside the container:
docker exec -it 6a8b04e81122 /bin/bash

The odoo image by default runs as a user called odoo. This user doesn't have enough privileges to install a package.
So you have to create a container with a different user, i.e. root:
docker run -it --user root odoo:11 bash
Now you have created a container with the root user context.
You can install boto3 by issuing the commands below.
apt update
For python 2.x: apt install python-boto3
For python 3.x: apt install python3-boto3
Finally, commit the container to persist the changes.
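For example, a sketch using the container id from the question (the target tag odoo:11-boto3 is just an illustrative name):
docker commit 6a8b04e81122 odoo:11-boto3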
Update:
You can also open the existing container as a different user by issuing the command below.
docker exec -it --user root <container-id> bash
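Alternatively, since RUN is a Dockerfile instruction rather than a shell command (which is why bash complained "RUN: command not found"), you can bake the package into your own image. A minimal sketch, assuming the odoo:11.0 base image and the Python 3 package:
FROM odoo:11.0
USER root
RUN apt-get update && apt-get install -y python3-boto3
USER odoo
Build it with docker build -t odoo:11-boto3 . and run that image instead of odoo:11.0.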

Related

Installing Xdebug in an existing Docker container

I am trying to install xdebug on Docker. First, I would like to know whether I can install it in an existing Docker container, and if yes, how?
Sure you can. Simply interact with the container then perform the installation commands.
$ docker exec -it <containerName or ID> /bin/bash
Then you will usually be root inside the container:
# apt update && apt install <application>
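For xdebug specifically, assuming the container is based on an official php image (an assumption; adjust for your actual base image), the usual route is pecl:
# pecl install xdebug
# docker-php-ext-enable xdebug
Restart the container (or the PHP process) afterwards so the extension gets loaded.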

How to use root user from a container?

I'm new to Docker and Linux.
I'm using Windows 10 and got a GitHub example to create a container with CentOS and nginx.
I need to use the root user to change the nginx.config.
From Kitematic, I clicked on Exec to get a bash shell in the container and I tried sudo su - as below:
sh-4.2$ sudo su -
sh: sudo: command not found
So, I tried to install sudo by below command:
sh-4.2$ yum install sudo -y
Loaded plugins: fastestmirror, ovl
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/Installtid'
You need to be root to perform this command.
Then I ran su -, but I don't know the password! How can I set the password?
sh-4.2$ su -
Password:
Then, from powershell on my windows I also tried:
PS C:\Containers\nginx-container> docker exec -u 0 -it 9e8f5e7d5013 bash
but it appeared to hang with nothing happening, and I cancelled it with Ctrl+C after an hour.
Some additional information:
Here is how I created the container:
PS C:\Containers\nginx-container> s2i build https://github.com/sclorg/nginx-container.git --context-dir=examples/1.12/test-app/ centos/nginx-112-centos7 nginx-sample-app
From the bash shell in the container, I can get the OS information as below:
sh-4.2$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I would really appreciate it if you could help me fix these issues.
Thanks!
Your approach is generally wrong. You should prepare the file outside the container and then let Docker itself put it in place.
There are several ways to achieve this.
You can mount your file during startup:
docker run -v /your/host/path/to/config.cfg:/etc/nginx/config.cfg ...
You can copy the file into the container during the image build (inside the Dockerfile):
FROM base-name
COPY config.cfg /etc/nginx/
You can apply a patch to the config file (once again, in a Dockerfile):
FROM base-name
ADD config.cfg.diff /etc/nginx/
RUN ["patch", "-N", "/etc/nginx/config.cfg", "--input=/etc/nginx/config.cfg.diff"]
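To produce the config.cfg.diff used above, a unified diff from the pristine config (as shipped in the image) to your edited copy works (filenames are illustrative):
diff -u config.cfg.orig config.cfg.new > config.cfg.diff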
For each method, there are lots of examples on StackOverflow.
You should read Docker's official tutorial on building and running custom images. I rarely do work in interactive shells in containers; instead, I set up a Dockerfile that builds an image that can run autonomously, and iterate on building and running it like any other piece of software. In this context su and sudo aren't very useful because the container rarely has a controlling terminal or a human operator to enter a password (and for that matter usually doesn't have a valid password for any user).
Instead, if I want to do work in a container as a non-root user, my Dockerfile needs to set up that user:
FROM ubuntu:18.04
WORKDIR /app
COPY ...
# Create a non-root system user with /app as its home directory
RUN useradd -r -d /app myapp
# Run everything from here on (including the container itself) as that user
USER myapp
CMD ["/app/myapp"]
The one exception I've seen is if you have a container that, for whatever reason, needs to do initial work as root and then drop privileges to do its real work. (In particular, the official Consul image does this.) Such containers use a dedicated lighter-weight tool like gosu or su-exec. A typical Dockerfile setup there might look like:
# Dockerfile
FROM alpine:3.8
RUN addgroup myapp \
&& adduser -S -G myapp myapp
RUN apk add su-exec
WORKDIR /app
COPY . ./
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["/app/myapp"]
#!/bin/sh
# docker-entrypoint.sh
# Initially launches as root
/app/do-initial-setup
# Switches to the non-root user to run the real app
exec su-exec myapp:myapp "$@"
Both docker run and docker exec take a -u argument to indicate the user to run as. If you launched a container as the wrong user, delete it and recreate it with the correct docker run -u option. (This isn't one I find myself wanting to change often, though.)
I started the container locally, and it turns out you don't need sudo; you can do it with su, which comes by default on the debian image.
docker run -dit centos bash
docker exec -it 9e82ff936d28 sh
su
Also, you could try executing the following, which defaults you to root:
docker run -dit centos bash
docker exec -it 9e82ff936d28 bash
Nevertheless, you could create the nginx config outside the container and just copy it in using docker cp {file_path} {container_id}:{path_inside_container}.
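For example, using the container id from the question (the local path is illustrative):
docker cp ./nginx.conf 9e8f5e7d5013:/etc/nginx/nginx.conf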
Thanks everyone.
I think it's better to set up a VirtualBox VM with CentOS and play with nginx.
Then, when I'm ready and have a correct nginx.config, I can use a Dockerfile to copy my config file.
A VM is so slow, and I was hoping I could work in interactive shells in containers to learn and play instead of using a VM. Do you have any better idea than VirtualBox?
I tried
docker run -dit nginx-sample-app bash
docker exec -u root -it 9e8f5e7d5013 bash
And it didn't do anything; it just hangs with no output.
The same commands worked on the debian image but not on centos.

install/access executable for existing docker container

I want to run an executable and all of its libraries from within my container. How do I do that?
For my Ubuntu 14.04 server, I can do sudo apt-get install tetex-base tetex-bin
In this case, however, someone already set up a docker container for me, and I need to be able to run the program from within the container.
I got it working with
docker exec -it containerName apt-get install tetex-base tetex-bin
See the docker exec docs.

How to run a Docker host inside a Docker container?

I have a Jenkins container running inside Docker and I want to use this Jenkins container to spin up other Docker containers when running integration tests etc.
So my plan was to install Docker in the container but this doesn't seem to work so well for me. My Dockerfile looks something like this:
FROM jenkins
MAINTAINER xxxx
# Switch user to root so that we can install apps
USER root
RUN apt-get update
# Install latest version of Docker
RUN apt-get install -y apt-transport-https
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
RUN sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
RUN apt-get update
RUN apt-get install -y lxc-docker
# Switch user back to Jenkins
USER jenkins
The jenkins image is based on Debian Jessie. When I start a bash terminal inside a container based on the generated image and run, for example:
docker images
I get the following error message:
FATA[0000] Get http:///var/run/docker.sock/v1.16/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
I suspect that this could be because the Docker service is not started. But my next problem arises when I try to start the service:
service docker start
This gives me the following error:
mount: permission denied
I've tracked the error in /etc/init.d/docker to this line:
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
So my questions are:
How do I actually start a Docker host inside a container? Or is this something that should be avoided?
Is there something special I need to do if I'm running Mac and boot2docker?
Perhaps I should instead link to the Docker on the host machine as described here?
Update: I've tried the container as user root and jenkins. sudo is not installed.
A simpler alternative is to mount the docker socket and create sibling containers. To do this, install docker on your image and run something like:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock myimage
In the container you should now be able to run docker commands as if you were on the host. The advantage of this method is that you don't need --privileged and you get to use the image cache from the host. The disadvantage is that you can see all running containers, not just the ones created from inside the container.
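A minimal sketch of that setup for the jenkins image from the question (installing the Debian docker.io package is one way to get a Docker client; the exact package depends on your distribution):
FROM jenkins
USER root
# Install a Docker client; it talks to the host daemon through the mounted socket
RUN apt-get update && apt-get install -y docker.io
USER jenkins
Then start it with the socket mounted as shown above, and docker commands inside the container will run against the host daemon.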
1.- The first container you start (the one you launch the other ones inside) must be run with the --privileged=true flag.
2.- I don't think there is.
3.- Using the privileged flag you don't need to mount the docker socket as a volume.
Check this project to see an example of all this.
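As a side note, the official docker:dind image (which appeared after this question was asked) packages exactly this privileged daemon-in-a-container setup; a minimal sketch:
docker run --privileged -d --name docker-host docker:dind
docker exec -it docker-host docker ps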

I lose my data when the container exits

Despite Docker's interactive tutorial and FAQ, I lose my data when the container exits.
I have installed Docker as described here: http://docs.docker.io/en/latest/installation/ubuntulinux
without any problem on Ubuntu 13.04.
But it loses all data when it exits.
iman@test:~$ sudo docker version
Client version: 0.6.4
Go version (client): go1.1.2
Git commit (client): 2f74b1c
Server version: 0.6.4
Git commit (server): 2f74b1c
Go version (server): go1.1.2
Last stable version: 0.6.4
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:05:47 Unable to locate ping
iman@test:~$ sudo docker run ubuntu apt-get install ping
Reading package lists...
Building dependency tree...
The following NEW packages will be installed:
iputils-ping
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.1 kB of archives.
After this operation, 143 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main iputils-ping amd64 3:20101006-1ubuntu1 [56.1 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 56.1 kB in 0s (195 kB/s)
Selecting previously unselected package iputils-ping.
(Reading database ... 7545 files and directories currently installed.)
Unpacking iputils-ping (from .../iputils-ping_3%3a20101006-1ubuntu1_amd64.deb) ...
Setting up iputils-ping (3:20101006-1ubuntu1) ...
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:06:11 Unable to locate ping
iman@test:~$ sudo docker run ubuntu touch /home/test
iman@test:~$ sudo docker run ubuntu ls /home/test
ls: cannot access /home/test: No such file or directory
I also tested it with interactive sessions with the same result. Did I forget something?
EDIT: IMPORTANT FOR NEW DOCKER USERS
As @mohammed-noureldin and others said, actually this is NOT a container exiting. Every time it just creates a new container.
You need to commit the changes you make to the container and then run it. Try this:
sudo docker pull ubuntu
sudo docker run ubuntu apt-get install -y ping
Then get the container id using this command:
sudo docker ps -l
Commit changes to the container:
sudo docker commit <container_id> iman/ping
Then run the container:
sudo docker run iman/ping ping www.google.com
This should work.
When you use docker run to start a container, it actually creates a new container based on the image you have specified.
Besides the other useful answers here, note that you can restart an existing container after it exited and your changes are still there.
docker start f357e2faab77 # restart it in the background
docker attach f357e2faab77 # reattach the terminal & stdin
There are the following ways to persist container data:
Docker volumes (see the sketch after the walkthrough below)
Docker commit
a) Create a container from the ubuntu image and run a bash terminal.
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal install curl
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take a note of your container id by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
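For the first option (Docker volumes), a minimal sketch using a named volume (the volume name mydata is illustrative):
$ docker volume create mydata
$ docker run -it -v mydata:/data ubuntu:14.04 bash
# touch /data/hello
# exit
$ docker run --rm -v mydata:/data ubuntu:14.04 ls /data
hello
Anything written under /data lives in the volume rather than in any one container, so it survives no matter how many new containers docker run creates.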
In addition to Unferth's answer, it is recommended to create a Dockerfile.
In an empty directory, create a file called "Dockerfile" with the following contents.
FROM ubuntu
RUN apt-get install ping
ENTRYPOINT ["ping"]
Create an image using the Dockerfile. Let's use a tag so we don't need to remember the hexadecimal image number.
$ docker build -t iman/ping .
And then run the image in a container.
$ docker run iman/ping stackoverflow.com
There are really great answers above to the asked question. There might be no need for another answer, but I still want to give my personal opinion on the topic in the simplest words possible.
Here are some points about containers & images that will help us for a conclusion:
A docker image can be:
created-from-a-given-container
deleted
used-to-create-any-number-of-containers
A docker container can be:
created-from-an-image
started
stopped
restarted
deleted
used-to-create-any-number-of-images
A docker run command does this:
Downloads an image or uses a cached image
Creates a new container out of it
Starts the container
When a Dockerfile is used to create an image:
It is already well known that the image will eventually be used to run a docker container.
After issuing the docker build command, Docker behind the scenes creates a running container with a base file system and follows the steps inside the Dockerfile to configure that container as per the developer's needs.
After the container is configured with the specs of the Dockerfile, it is committed as an image.
The image gets ready to rock & roll!
Conclusion:
As we can see, a docker container is independent of a docker image.
A container can be restarted provided you have the unique ID of that container [use docker ps --all to get the id].
Any operation like making a new directory, creating files, installing tools, etc. can be done inside the container when it is running. Once the container is stopped, it persists all the changes. Container stopping and restarting is like rebooting a computer system.
An already created container is always available for a restart but when we issue docker run command, a new container is created out of an image and hence it is like a new computer system. The changes made inside the old container - as we can understand now - are not available in this new container.
A final note:
I guess it's now obvious why the data seems to be lost, yet it is always there... but in a different [old] container. So, take good note of the difference between the docker start and docker run commands, and never get confused between them.
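A minimal illustration of the difference (the container name demo is illustrative):
$ docker run --name demo ubuntu touch /marker
$ docker run ubuntu ls /marker
ls: cannot access /marker: No such file or directory
$ docker diff demo
A /marker
The second docker run creates a brand-new container, so /marker is missing there, while docker diff shows the change still sitting in the stopped demo container.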
I have got a much simpler answer to your question: run the following two commands.
sudo docker run -t -d --name mycontainername ubuntu /bin/bash
sudo docker ps -a
The above ps -a command returns a list of all containers. Take the name of the container which references the image name 'ubuntu'. Docker auto-generates names for the containers, for example 'lightlyxuyzx'; that's if you don't use the --name option.
The -t and -d options are important: the created container is detached and, thanks to the -t option, can be reattached as given below.
With --name option, you can name your container in my case 'mycontainername'.
sudo docker exec -ti mycontainername bash
and this above command helps you log in to the container with a bash shell. From this point on, any changes you make in the container are automatically saved by Docker.
For example - apt-get install curl inside the container
You can exit the container without any issues; Docker auto-saves the changes.
On the next usage, all you have to do is run these two commands every time you want to work with this container.
The command below will start the stopped container:
sudo docker start mycontainername
sudo docker exec -ti mycontainername bash
Another example with ports and a shared space given below:
docker run -t -d --name mycontainername -p 5000:5000 -v ~/PROJECTS/SPACE:/PROJECTSPACE 7efe2989e877 /bin/bash
In my case:
7efe2989e877 is the image id of a previously running container, which I obtained using
docker ps -a
You might want to look at Docker volumes if you want to persist the data in your container. Visit https://docs.docker.com/engine/tutorials/dockervolumes/. The Docker documentation is a very good place to start.
My suggestion is to manage Docker with Docker Compose. It's an easy way to manage all the containers for your project; you can pin versions and link different containers to work together.
The docs are very simple to understand, better than Docker's own docs.
Docker-Compose Docs
Best
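A minimal docker-compose.yml sketch that keeps data across container recreation with a named volume (service and volume names are illustrative):
version: '2'
services:
  app:
    image: ubuntu:14.04
    command: sleep infinity
    volumes:
      - appdata:/data
volumes:
  appdata:
docker-compose up -d may recreate the app container, but anything written under /data lives in the appdata volume and survives.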
a similar problem (one that no Dockerfile alone could fix) brought me to this page.
stage 0:
for everyone hoping a Dockerfile could fix it: until --dns and --dns-search appear in Dockerfile support, there is no way to integrate intranet-based resources.
stage 1:
after building the image using the Dockerfile (by the way, it's a serious glitch that the Dockerfile must be in the current folder), deploy the intranet-based content by running a docker run script. example:
docker run -d \
    --dns=${DNSLOCAL} \
    --dns=${DNSGLOBAL} \
    --dns-search=intranet \
    --name packbsp-cont \
    -t pack/bsp \
    bash -c " \
        wget -r --no-parent http://intranet/intranet-content.tar.gz && \
        tar -xvf intranet-content.tar.gz && \
        sudo -u ${USERNAME} bash --norc"
stage 2:
applying the docker run script in daemon mode, providing local DNS records, gives the container the ability to download and deploy intranet-based stuff.
important point: the run script should end with something like /usr/bin/sudo -u ${USERNAME} bash --norc to keep the container running even after the installation script finishes.
no, it's not possible to run the container in interactive mode for full automation purposes, as it would remain stuck at the internal shell command prompt until Ctrl-p Ctrl-q is pressed.
no, if an interactive bash is not executed at the end of the installation script, the container will terminate immediately after the script finishes, losing all installation results.
stage 3:
the container is still running in the background, but it's unclear whether the container has finished the installation procedure or not yet. use the following block to wait until the procedure finishes:
while ! docker container top ${CONTNAME} | grep "00[[:space:]]\{12\}bash --norc"
do
    echo "."
    sleep 5
done
the script will proceed further only after the installation has completed. this is the right moment to call commit, providing the current container id as well as the destination image name (it may be the same as in the build/run procedure, but appended with a tag for the local installation purposes). example: docker commit containerID pack/bsp:toolchained.
see this link on how to get the proper containerID
stage 4:
the container has been updated with the local installs and has been committed into the newly assigned image (the one with the purposes tag added). it's now safe to stop the running container. example: docker stop packbsp-cont
stage 5:
any time the container with the local installs needs to run, start it with the image previously saved.
example: docker run -d -t pack/bsp:toolchained
there is a brilliant answer here: How to continue a docker which is exited, from user kgs.
docker start $(docker ps -a -q --filter "status=exited")
(or in this case just docker start $(docker ps -ql) 'cos you don't want to start all of them)
docker exec -it <container-id> /bin/bash
That second line is crucial. So exec is used in place of run, and not on an image but on a container id. And you do it after the container has been started.
None of the answers address the point of this design choice. I think Docker works this way to prevent these two errors:
Repeated restart
Partial error
