install/access executable for existing docker container - docker

I want to run an executable and all of its libraries from within my container. How do I do that?
For my Ubuntu 14.04 server, I can do sudo apt-get install tetex-base tetex-bin
In this case, however, someone already set up a docker container for me, and I need to be able to run the program from within the container.

I got it working with
docker exec -it containerName apt-get install tetex-base tetex-bin
See docs.
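If the install fails with "Unable to locate package", the package index inside the container is probably stale; a variant that refreshes it first and skips the interactive prompt (assuming a Debian/Ubuntu-based image) is:
docker exec -it containerName bash -c "apt-get update && apt-get install -y tetex-base tetex-bin"
Keep in mind that packages installed this way live only in that container's writable layer and are lost if the container is recreated from its image.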

Related

Install package in running docker container

I've been using a docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that contains most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is: is there a way to install these packages without rebuilding the container? Downloading and building took quite a while, and rebuilding the container each time a new package is required seems suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages). And using apt will give:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx is just an example install).
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit the container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you now run docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME - this one will include your additional installations.
Answer based on the following tutorial: How to commit changes to docker image
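Put together, and using lsb_release from the question as the package to add (the image, container, and new image names are placeholders), the round trip looks roughly like this:
sudo docker run -d IMAGE_NAME
sudo docker exec -it CONTAINER_ID bash
Inside the container:
apt-get update && apt-get install -y lsb-release
exit
Back on the host:
sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
sudo docker run -it NEW_IMAGE_NAME bash
The -d flag just keeps the first container running in the background so you can exec into it from the same terminal; depending on the image you may also need to pass a command to keep it alive.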
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied in comments, so I cannot accept their answers).
What I did was edit the Dockerfile for any later rebuilds (which already happened), and use the exec command to install the needed dependencies from outside the container. Also remember to run
apt update
first, otherwise apt cannot find anything...
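The exec call in question can do the update and the install in one shot from the host; a sketch, with the container name and package list standing in for whatever your build container actually needs:
docker exec -it CONTAINER_NAME bash -c "apt update && apt install -y lsb-release"
For the longer-term fix, adding the same packages to the existing apt-get install -y list in the Dockerfile means the next rebuild will include them from the start.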
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container you just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.

Installing Xdebug in an existing Docker container

I am trying to install Xdebug on Docker. First I would like to know if I can install it in an existing Docker container, and if yes, how?
Sure you can. Simply get a shell in the container, then run the installation commands.
$ docker exec -it <containerName or ID> /bin/bash
Then you will usually be root inside the container:
# apt update && apt install <application>
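For Xdebug specifically, the package to install depends on the base image; assuming a Debian/Ubuntu-based PHP image, something along these lines should work (adjust the package name to your PHP version):
apt update && apt install -y php-xdebug
or, on images that ship PECL:
pecl install xdebug
Either way the change only lives in that container; commit it with docker commit or add the install to the image's Dockerfile if it needs to survive recreation.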

How to run a Dockerfile or Docker image to install the Python dependencies

Sorry for the basic question; I am new to Docker and I want to install the dependencies by using the Dockerfile, so please guide me on how to run this file on Ubuntu.
The author has written the dependencies in the Dockerfile for building OpenSfM.
GitHub Repository Link
FROM ubuntu:18.04
# Install apt-getable dependencies
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get install -y \
build-essential \
Can anyone guide me how to run the file and install the dependencies on Ubuntu?
You really should follow mchawre's advice and read the Docker get-started guide. However, I can try to point you in the right direction.
I want to install the dependencies by using the docker file
You have to understand that a Dockerfile compiles to a docker image, which can then be run as a docker container. You can think of a docker container as a lightweight virtual machine. With this in mind, your statement does not make sense, since you cannot install dependencies for your host system (the system on which you might want to start the docker container) with the help of a docker image. This is not how docker containers are supposed to work.
Instead, the Dockerfile allows you to create a virtualized (isolated) environment into which you can "ssh" (the docker way: docker exec -it <container_name> bash) and then build the respective application.
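As a concrete sketch of that workflow (the tag opensfm is just an example name), run this from the directory containing the Dockerfile:
docker build -t opensfm .
docker run -it opensfm bash
The build step executes the RUN apt-get install ... instructions and bakes the dependencies into the image; the run step then drops you into a shell inside a container created from that image.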
If you do not want to mess with docker at all and your system runs something close to ubuntu:18.04, you can also manually execute the instructions from the Dockerfile on your normal system in order to build your desired application there.

Install boto3 from a docker container

I am using Docker. In one of my containers I want to use boto3, so I ran this command from inside the container:
RUN apt-get install boto3
but it showed me:
bash: RUN: command not found
I also tried sudo apt-get install boto3 but it also showed an error:
bash: sudo: command not found
So can someone tell me how to install a package in a docker container?
Update
When I run docker ps -a I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
distracted_rubin
6a8b04e81122 odoo:11.0 "/entrypoint.sh odoo" 6 weeks ago Up 4 hours 8071/tcp, 0.0.0.0:18069->8069/tcp odoo
As you can see, my container id is 6a8b04e81122. I used this command to go inside the container:
docker exec -it 6a8b04e81122 /bin/bash
The odoo image by default runs as a user called odoo. This user doesn't have enough privileges to install a package.
So you have to create a container with a different user, i.e. root.
docker run -it --user root odoo:11 bash
Now you have created a container with a root user context.
You can install boto3 by issuing the commands below.
apt update
For python 2.x: apt install python-boto3
For python 3.x: apt install python3-boto3
Finally, commit the container to persist the changes.
Update:
You can also open a shell in the existing container as a different user by issuing the command below.
docker exec -it --user root <container-id> bash
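The commit step mentioned above would look something like this (the new image name is just an example):
docker commit 6a8b04e81122 odoo-with-boto3
docker images
Any container you start from odoo-with-boto3 afterwards will already have the package installed.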

docker within docker, post http error

I am trying to run Docker within Docker. The sole purpose is experimental; I am by no means trying to implement anything functional, I just want to check how Docker performs when it is run from within another Docker container.
I start docker through boot2docker on my Mac and then spin up a simple ubuntu image.
$ docker run -t -i ubuntu /bin/bash
I then go ahead and install docker as well as python.
root@aa9263c874e4: apt-get update
root@aa9263c874e4: apt-get install -y docker.io python2.7
It is able to connect to the internet, because the apt-get commands succeed. I then get the following error when I try to start a docker instance from within docker:
root@aa9263c874e4: sudo docker run -t -i ubuntu /bin/bash
2015/01/09 08:59:09 Post http:///var/run/docker.sock/v1.12/containers/create: dial unix /var/run/docker.sock: no such file or directory
Any idea what I missed? It seems weird that I get a POST error, because the container was clearly able to reach the internet via apt-get before.
I answered a similar question before on how to run a Docker container inside Docker.
Running docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting it with --privileged=true) and then install docker inside that container.
Check this blog post for more info: Docker-in-Docker.
An excellent use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
So, I believe that your POST problem has nothing to do with connecting to the internet since the container is trying to talk to the docker socket. To solve the problem, simply add the --privileged=true flag on the outer container when starting it, like this:
docker run --privileged=true ...
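Note that besides --privileged, a Docker daemon has to be running inside the inner container before docker run can work there; the missing /var/run/docker.sock in your error is exactly that daemon's socket. A rough sketch, assuming the docker.io package from the Ubuntu repositories and the daemon syntax of that era (docker -d; on current versions the daemon binary is dockerd):
docker run --privileged=true -t -i ubuntu /bin/bash
apt-get update && apt-get install -y docker.io
docker -d &
docker run -t -i ubuntu /bin/bash
The second and later commands run inside the privileged container; once the inner daemon is up, the socket exists and the POST error goes away.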
