Docker can't load config file, but container works fine

I'm using docker-machine with the generic driver to deploy containers on an existing remote host. But when I switch to the remote host and try to run a container, this happens:
$ docker-machine create --driver generic --generic-ip-address=$REMOTEIP \
--generic-ssh-key ~/.ssh/id_rsa --generic-ssh-user $REMOTEUSER vm
[works fine]
$ eval $(docker-machine env vm) #switch to remote host
[works fine]
$ docker run -it busybox sh
WARNING: Error loading config file:/home/user/.docker/config.json - open /home/user/.docker/config.json: permission denied
[Even with the warning, runs fine]
The container runs fine anyway, but I want to solve that warning message.
Given that user doesn't exist on the remote host, I guess that this file doesn't exist there. But..
1) Why does the engine search for it in the first place? Shouldn't it search for the config.json of $REMOTEUSER instead?
2) And why does the container run properly on the remote host anyway?

Docker is not searching for that file on the remote host but on the local host. It turns out that the file exists and is owned by root.
$ ls -lsa ~/.docker/config.json 4 -rw------- 1 root root \
95 dic 29 15:29 /home/user/.docker/config.json
That's why it says permission denied.
A simple chown fixes the issue:
$ sudo chown user:user /home/user/.docker -R
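To confirm the fix, a quick check (same path as above; after the chown the warning should no longer appear):
$ ls -l ~/.docker/config.json
-rw------- 1 user user 95 dic 29 15:29 /home/user/.docker/config.json
$ docker run -it busybox sh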

Related

Correct user configuration for aws-cli container

The AWS CLI v2 documentation presents an option and guide for installing/configuring the CLI via Docker. The guide is straightforward enough to follow, and the container works fine, the key items being:
mounting the local .aws directory to provide credentials to the container
mounting $pwd for any I/O work required
I'm using it for s3 and realized that any files I copy to my local drive from s3 show as owned by root.
>docker run --rm -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
download: s3://xxx/hello to ./hello
>ls -l
total 0
-rw-r--r-- 1 root root 0 Oct 2 09:43 hello
This makes sense, as the process is running as root in the container, but it isn't ideal. There isn't any other user in the container, so I can't just run "as" kirk.
>docker run --rm -u kirk -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
docker: Error response from daemon: unable to find user kirk: no matching entries in passwd file.
Is there a way to mount the volume "as" a user, or to delegate user access to the container? I don't care about (and am not sure I can control) the user inside the container, but I would like the process to run in the context of a user on the host system. What's the right approach here?
You can run a container as a user that doesn't exist inside the image by passing numerical values to -u ${UID}:${GID}. For example:
docker run --rm \
-u 1000:1000 \
-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
-e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
-v ${PWD}:/aws:rw \
amazon/aws-cli s3 cp s3://devops-example/lolz.gif .
... will copy the file as UID 1000, GID 1000.
Note: this uses the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to pass credentials instead of mounting the credentials file. The full list of supported environment variables is in the AWS CLI documentation.
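A variant of the same idea that picks up the invoking user's IDs automatically (a sketch; id -u and id -g are standard commands, and the bucket path is the illustrative one from above):
docker run --rm \
-u "$(id -u):$(id -g)" \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-v "${PWD}:/aws:rw" \
amazon/aws-cli s3 cp s3://devops-example/lolz.gif .
Passing -e VAR with no value forwards the variable's value from the host environment into the container.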

How can I fix the permissions using docker on a bluemix volume?

In a container, I am trying to start mysqld.
I was able to create an image and push it to the registry, but when I want to start it, the /var/lib/mysql volume can't be initialized, because the chown mysql I try to run on it is not allowed.
I checked docker specific solutions but for now I couldn't make any work.
Is there a way to set the right permissions on a bind-mounted folder from Bluemix? Or is the --volumes-from option supported? I can't seem to make it work.
The only solution I can see right now is running mysqld as root, but I would rather not.
Try with bind mount
I created a volume on Bluemix using cf ic volume create database.
I tried to run mysql_install_db on my db container to initialize its content:
docker run --name init_vol -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
mysql_install_db is supposed to populate /var/lib/mysql and set ownership to the user given in the --user option, but I get:
chown: changing ownership of '/var/lib/mysql': Permission denied.
I also tried the above in different ways, using sudo or a script. I tried mysql_install_db --user=root, which does set up my folder correctly, except that it is owned by the root user, and I would rather keep mysqld running as the mysql user.
Try with volumes-from data container
I create a data container with a volume /var/lib/mysql
docker run --name db_data -v /var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
I run my db container with the option --volumes-from
docker run --name db_srv --volumes-from=db_data registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'
docker inspect db_srv shows:
[{ "BluemixApp": null, "Config": {
...,
"WorkingDir": "",
... } ... }]
cf ic logs db_srv shows:
150731 15:25:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150731 15:25:11 [Note] /usr/sbin/mysqld (mysqld 5.5.44-0ubuntu0.14.04.1-log) starting as process 377 ..
/usr/sbin/mysqld: File './mysql-bin.index' not found (Errcode: 13)
150731 15:25:11 [ERROR] Aborting
which is due to --volumes-from not being supported, and to data created in the first run not persisting into the second.
In IBM Containers, the user namespace is enabled for the Docker engine. The "Permission denied" issue appears because the NFS is not allowing the mapped user from the container to perform the operation.
On my local setup, I mounted an NFS share (exported with the no_root_squash option) on the Docker host and attached the volume to a container using the -v option. When the container is spawned by an engine with the user namespace disabled, I am able to change the ownership of the bind mount from inside the container. But with a user-namespace-enabled engine, I get
chown: changing ownership of '/mnt/volmnt': Operation not permitted
The volume created by cf (cf ic volume create ...) is an NFS mount; to verify, just try mount -t nfs4 from the container.
When the user namespace is enabled for the Docker engine, the effective root inside the container is a non-root user outside the container process, and NFS does not allow that mapped non-root user to perform the chown operation on the volume inside the container.
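You can see the remapping yourself (a sketch, assuming an engine with user namespaces enabled): printing the UID map from inside a container shows container root mapped to a non-zero host UID.
docker run --rm busybox cat /proc/self/uid_map
# e.g. "0 100000 65536": container UID 0 maps to host UID 100000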
Here is the work-around, you may want to try
In the Dockerfile
1.1 Create the user mysql with UID 1010 (or any free ID) before the MySQL installation.
Other containers, or a new container, can then access the mysql data files on the volume through UID 1010.
RUN groupadd --gid 1010 mysql
RUN useradd --uid 1010 --gid 1010 -m --shell /bin/bash mysql
1.2 Install MySQL, but do not initialize the database:
RUN apt-get update && apt-get install -y mysql-server && rm -rf /var/lib/mysql && rm -rf /var/lib/apt/lists/*
In the entry point script
2.1 Create the mysql data directory under the bind mount as user mysql, then link it as /var/lib/mysql.
Suppose the volume is mounted at /mnt/db inside the container (ice run -v <volume name>:/mnt/db --publish 3306 ... or cf ic run --volume <volume name>:/mnt/db ...).
Define mountpath env var
MOUNTPATH="/mnt/db"
Add mysql to group "root"
adduser mysql root
Set permission for mounted volume so that root group members can create directory and files
chmod 775 $MOUNTPATH
Create mysql directory under Volume
su -c "mkdir -p /mnt/db/mysql" mysql
su -c "chmod 700 /mnt/db/mysql" mysql
Link the directory to /var/lib/mysql
ln -sf /mnt/db/mysql /var/lib/mysql
chown -h mysql:mysql /var/lib/mysql
Remove mysql from group root
deluser mysql root
chmod 755 $MOUNTPATH
2.2 The first time, initialize the database as user mysql
su -c "mysql_install_db --datadir=/var/lib/mysql" mysql
2.3 Start the mysql server as user mysql
su -c "/usr/bin/mysqld_safe" mysql
You have multiple questions here. I will try to address some. Perhaps that will get you a step further in the right direction.
--volumes-from is not supported yet in IBM Containers. You can get around that by using the same --volume (-v) option on the first and subsequent containers, instead of using -v on the first container creation command and --volumes-from on the subsequent ones, as in the sketch below.
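For example, instead of --volumes-from, both containers can reference the same named volume (a sketch reusing the database volume and image placeholder from the question):
cf ic run --name db_init --volume database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
cf ic run --name db_srv --volume database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'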
The --user option is also not supported by IBM Containers.
Your syntax for using --user (I suppose on local Docker) is not correct. All options for the docker run command must come before the image name; anything after the image name is considered the command to run inside the container. In this case, --user=mysql would be considered a command that the system attempts to run, and it would fail.
The last error message you shared shows that some file is not found in the working directory, which causes the app to abort. You may work around that by using a script as the command to run in the container, which changes to the right directory first; a sketch follows.
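A minimal sketch of such a wrapper script (the data directory is an assumption carried over from the answers above; adjust it to wherever your files actually live):
#!/bin/sh
# wrapper.sh: change to the data directory so relative paths like ./mysql-bin.index resolve
cd /var/lib/mysql
exec /usr/bin/mysqld_safe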

What's the best way to share files from Windows to Boot2docker VM?

I have my code ready on Windows, but I find it's not easy to share it with boot2docker.
I also find that boot2docker can't persist my changes. For example, if I create a folder /temp and then restart boot2docker, the folder disappears, which is very inconvenient.
What is your approach when you have some code on Windows, but you need to dockerize it?
---update---
I tried to update the setting in VirtualBox and restart boot2docker, but it's not working on my machine.
docker#boot2docker:/$ ls -al /c
total 4
drwxr-xr-x 3 root root 60 Jun 17 05:42 ./
drwxrwxr-x 17 root root 400 Jun 17 05:42 ../
dr-xr-xr-x 1 docker staff 4096 Jun 16 09:47 Users/
Boot2Docker is a small Linux VM running on VirtualBox. So before you can use your files (from Windows) in Docker (which is running in this VM), you must first share your code with the Boot2Docker VM itself.
To do so, you mount your Windows folder into the VM while it is shut down (here a VM name of default is assumed):
C:/Program Files/Oracle/VirtualBox/VBoxManage sharedfolder \
add default -name win_share -hostpath c:/work
(Alternatively you can also open the VirtualBox UI and mount the folder to your VM just as you did in your screenshot!)
Now ssh into the Boot2Docker VM from the Docker Quickstart Terminal:
docker-machine ssh default
Then perform the mount:
Make a folder inside the VM: sudo mkdir /VM_share
Mount the Windows folder to it: sudo mount -t vboxsf win_share /VM_share
After that, you can access C:/work inside your Boot2Docker VM:
cd /VM_share
Now that your code is present inside your VM, you can use it with Docker, either by mounting it as a volume to the container:
docker-machine ssh default
docker run --volume /VM_share:/folder/in/container some/image
Or by using it while building your Docker image:
...
ADD /my_windows_folder /folder
...
See this answer.
I have Windows 10 Home edition with Docker toolbox 1.12.2 and VirtualBox 5.1.6.
I was able to mount a folder under C:\Users successfully in my container without doing any extra steps such as docker-machine ssh default.
Example:
docker run -it --rm -v /c/Users/antonyj/Documents/code:/mnt ubuntu /bin/bash
So having your files under C:\Users probably is the simplest thing to do.
If you do not want to have your files under C:\Users, then you have to follow the steps in the accepted answer.
Using Docker Toolbox, the shared directory can only be under C:\Users; otherwise you get:
Invalid directory. Volume directories must be under your Users directory
The commands for Step 1 and Step 2 below can be run in the "Docker Quickstart Terminal":
# Step 1. VirtualBox: add the shared folder from the command line (if this errors, add it manually in the VirtualBox UI, as shown above).
"C:/Program Files/Oracle/VirtualBox/VBoxManage.exe" sharedfolder add default --name "E_DRIVE" --hostpath "e:\\" --automount
# Try 1. Only a temporary effect: the share is lost after a VM restart.
#docker-machine ssh default "sudo mkdir -p /e" # Create a mount point named after the Windows drive letter
#docker-machine ssh default "sudo mount -t vboxsf -o uid=1000,gid=50 E_DRIVE /e"
# Try 2. Modify /etc/fstab. Not a permanent mount either: /etc/fstab is reset on every restart.
#docker-machine ssh default "sudo sed -i '$ a\E_DRIVE /e vboxsf uid=1000,gid=50 0 0' /etc/fstab"
# Step 2. `C:\Program Files\Docker Toolbox\start.sh` https://github.com/docker/machine/issues/1814#issuecomment-239957893
docker-machine ssh default "cat <<EOF | sudo tee /var/lib/boot2docker/bootlocal.sh && sudo chmod u+x /var/lib/boot2docker/bootlocal.sh
#!/bin/sh
mkdir -p /e
mount -t vboxsf -o uid=1000,gid=50 E_DRIVE /e
EOF
"
Then restart the VM. Try this: docker run --name php-fpm --rm -it -v /e:/var/www/html php:7.1.4-fpm /bin/bash
References:
What's the best way to share files from Windows to Boot2docker VM?
http://hessian.cn/p/1502.html
Windows + Boot2Docker, How to add D:\ drive to be accessible from within docker?
In the System Tray, you should have the cute Docker whale swimming. Right-click it and select Settings.
Under Shared Drives, tick the drive you want to share (C in this example) and click Apply. This will bring up the Credentials dialog, where you need to provide your current Windows credentials. Ensure that you enter them correctly; I also suspect that you might need to be an administrator.
To mount our host directory (C:\data) in a container, we are going to use the -v (volume) flag while running the container. A sample run is shown here:
I have CentOS in my local Docker setup, so the example uses that image:
docker run -v c:/data:/data centos ls /data
Mount shared folder Windows guest with Linux host (vm name 'default'):
Shutdown 'default' VM:
cd "C:\Program Files\Oracle\VirtualBox"
VBoxManage controlvm default poweroff
Add shared folder command line:
./VBoxManage sharedfolder add default -name win_share -hostpath "C:\docker\volumes"
Start the VM (headless, command-line interface only):
./VBoxManage startvm default --type headless
Connect to ssh:
docker-machine ssh default
Create VM sharedfolder directory:
sudo mkdir /sharedcontent
Mount Windows folder to host VM:
sudo mount -t vboxsf win_share /sharedcontent

docker error: /var/run/docker.sock: no such file or directory

I am new to Docker. I have a shell script that loads data into Impala, and I want a Dockerfile that builds an image and runs the container.
I am on mac, installed boot2docker and have the DOCKER_HOST env set up.
bash-3.2$ docker info
Containers: 0
Images: 0
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.15.3-tinycore64
Debug mode (server): true
Debug mode (client): false
Fds: 10
Goroutines: 10
EventsListeners: 0
Init Path: /usr/local/bin/docker
Sockets: [unix:///var/run/docker.sock tcp://0.0.0.0:2375]
I am trying to just install a pre-built image using:
sudo docker pull busybox
I get this error:
sudo docker pull busybox
2014/08/18 17:56:19 Post http:///var/run/docker.sock/images/create?fromImage=busybox&tag=: dial unix /var/run/docker.sock: no such file or directory
Is something wrong with my docker setup?
When I do a docker pull busybox without sudo, it pulls the image and the download completes.
bash-3.2$ docker pull busybox
Pulling repository busybox
a9eb17255234: Download complete
fd5373b3d938: Download complete
d200959a3e91: Download complete
37fca75d01ff: Download complete
511136ea3c5a: Download complete
42eed7f1bf2a: Download complete
c120b7cab0b0: Download complete
f06b02872d52: Download complete
120e218dd395: Download complete
1f5049b3536e: Download complete
bash-3.2$ docker run busybox /bin/echo Hello Doctor
Hello Doctor
Am I missing something?
You don't need to run any docker commands as sudo when you're using boot2docker, as every command passed into the boot2docker VM runs as root by default.
You're seeing the error when running with sudo because sudo doesn't have the DOCKER_HOST env variable set; only your user does.
You can confirm this by doing a:
$ env
Then a
$ sudo env
And looking for DOCKER_HOST in each output.
As for having a docker file that runs your script, something like this might work for you:
Dockerfile
FROM busybox
# Copy your script into the docker image
ADD /path/to/your/script.sh /usr/local/bin/script.sh
# Run your script
CMD /usr/local/bin/script.sh
Then you can run:
docker build -t your-image-name:your-tag .
This will build your docker image, which you can see by doing a:
docker images
Then, to run your container, you can do a:
docker run your-image-name:your-tag
This run command will start a container from the image you created with your Dockerfile and your build command and then it will finish once your script.sh has finished executing.
You can quickly set up your environment using shellinit.
At your command prompt, execute:
eval "$(boot2docker shellinit)"
That will populate and export the environment variables and initialize other features.
docker pull will fail if the docker service is not running. Make sure it is running:
:~$ ps aux | grep docker
root 18745 1.7 0.9 284104 13976 ? Ssl 21:19 0:01 /usr/bin/docker -d
If it is not running, you can start it by
sudo service docker start
For Ubuntu 15.04 and above, use
sudo systemctl start docker
On my Mac, when I start the boot2docker VM in the terminal using
boot2docker start
I see the following
To connect the Docker client to the Docker daemon, please set:
export DOCKER_CERT_PATH=<my things>
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://<ip>:2376
After setting these environment variables I was able to run the build without the problem.
Update [2016-04-28]: if you are using a recent version of Docker, you can do
eval $(docker-machine env)
to set the environment (docker-machine env prints the export statements).
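Typically docker-machine env prints something like the following (IP and paths differ per machine; shown purely as an illustration):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval $(docker-machine env)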
I also got this error. Though I did not use boot2docker, I had just installed "plain" docker on Ubuntu (see https://docs.docker.com/installation/ubuntulinux/).
I got the error ("dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?") because the docker daemon was not running yet.
On Ubuntu, you need to start the service:
sudo service docker start
See also http://blog.arungupta.me/resolve-dial-unix-docker-sock-error-techtip64
For boot2docker on Windows, after seeing:
FATA[0000] Get http:///var/run/docker.sock/v1.18/version:
dial unix /var/run/docker.sock: no such file or directory.
Are you trying to connect to a TLS-enabled daemon without TLS?
All I did was:
boot2docker start
boot2docker shellinit
That generated:
export DOCKER_CERT_PATH=C:\Users\vonc\.boot2docker\certs\boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376
Finally:
boot2docker ssh
And docker works again
On Linux, first of all execute sudo service docker start in the terminal.
If you're using CentOS 7, and you've installed Docker via yum, don't forget to run:
$ sudo systemctl start docker
$ sudo systemctl enable docker
This will start the server, as well as re-start it automatically on boot.
To setup your environment and to keep it for the future sessions you can do:
echo 'export DOCKER_HOST="tcp://$(boot2docker ip 2>/dev/null):2375";' >> ~/.bashrc
Then:
source ~/.bashrc
And your environment will be set up in every session
The /var/run/docker.sock in the error message refers to that path inside your boot2docker virtual machine, not to a path on your Windows host.
You (maybe not the OP, but someone) may already have a directory called /var/run/docker.sock/ due to the many rounds of hacking and slashing it can take to get things right with Docker (especially for newcomers). Delete that directory and try again.
This helped me on my way to getting it to work on Centos 7.
I installed Docker using the offline method, and after a server restart Docker was not running.
So I executed the command below, and it worked for me:
/usr/bin/dockerd > /dev/null
Run the following commands (OS = CentOS / RHEL / Amazon Linux, etc.):
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker
chmod 777 /var/run/docker.sock

How to edit Docker container files from the host?

Now that I have found a way to expose host files to the container (the -v option), I would like to do kind of the opposite:
How can I edit files from a running container with a host editor?
sshfs could probably do the job, but since a running container's files already live somewhere in a host directory, I wonder if there is a portable way (across aufs, btrfs and device mapper) to do that?
The best way is:
$ docker cp CONTAINER:FILEPATH LOCALFILEPATH
$ vi LOCALFILEPATH
$ docker cp LOCALFILEPATH CONTAINER:FILEPATH
Limitation of $ docker exec: it can only attach to a running container.
Limitation of $ docker run: it will create a new container.
Whilst it is possible, and the other answers explain how, you should avoid editing files in the Union File System if you can.
Your definition of volumes isn't quite right - it's more about bypassing the Union File System than exposing files on the host. For example, if I do:
$ docker run --name="test" -v /volume-test debian echo "test"
The directory /volume-test inside the container will not be part of the Union File System and instead will exist somewhere on the host. I haven't specified where on the host, as I may not care - I'm not exposing host files, just creating a directory that is shareable between containers and the host. You can find out exactly where it is on the host with:
$ docker inspect -f "{{.Volumes}}" test
map[/volume_test:/var/lib/docker/vfs/dir/b7fff1922e25f0df949e650dfa885dbc304d9d213f703250cf5857446d104895]
If you really need to just make a quick edit to a file to test something, either use docker exec to get a shell in the container and edit directly, or use docker cp to copy the file out, edit on the host and copy back in.
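A minimal sketch of the docker exec route (the container name test comes from the example above; this assumes the image ships an editor such as vi, and the file path is hypothetical):
$ docker exec -it test sh
vi /volume-test/some-file # edit and save inside the container, then exit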
We can use another way to edit files inside a working container (this won't work if the container is stopped).
The logic is to:
- copy the file from the container to the host
- edit the file on the host using a host editor
- copy the file back into the container
We can do all these steps manually, but I have written a simple shell script to make this easy with one call.
/bin/dmcedit:
#!/bin/sh
set -e
CONTAINER=$1
FILEPATH=$2
BASE=$(basename "$FILEPATH")
DIR=$(dirname "$FILEPATH")
TMPDIR=/tmp/m_docker_$(date +%s)/
# Recreate the file's directory structure under the temp dir, then copy it out
mkdir -p "$TMPDIR$DIR"
cd "$TMPDIR"
docker cp "$CONTAINER:$FILEPATH" ".$DIR/$BASE"
# Edit the local copy, then copy it back into the container
mcedit ".$FILEPATH"
docker cp ".$FILEPATH" "$CONTAINER:$FILEPATH"
rm -rf "$TMPDIR"
echo 'END'
exit 0
Usage example:
dmcedit CONTAINERNAME /path/to/file/in/container
The script is very simple, but it works fine for me.
Any suggestions are appreciated.
There are two ways to mount files into your container. It looks like you want a bind mount.
Bind Mounts
This mounts local files directly into the container's filesystem. The container-side path and the host-side path both point to the same file. Edits made from either side will show up on both sides.
mount the file:
❯ echo foo > ./foo
❯ docker run --mount type=bind,source=$(pwd)/foo,target=/foo -it debian:latest
# cat /foo
foo # local file shows up in container
in a separate shell, edit the file:
❯ echo 'bar' > ./foo # make a hostside change
back in the container:
# cat /foo
bar # the hostside change shows up
# echo baz > /foo # make a containerside change
# exit
❯ cat foo
baz # the containerside change shows up
Volume Mounts
mount the volume
❯ docker run --mount type=volume,source=foovolume,target=/foo -it debian:latest
root@containerB:/# echo 'this is in a volume' > /foo/data
the local filesystem is unchanged
docker sees a new volume:
❯ docker volume ls
DRIVER VOLUME NAME
local foovolume
create a new container with the same volume
❯ docker run --mount type=volume,source=foovolume,target=/foo -it debian:latest
root@containerC:/# cat /foo/data
this is in a volume # data is still available
syntax: -v vs --mount
These do the same thing. -v is more concise, --mount is more explicit.
bind mounts
-v /hostside/path:/containerside/path
--mount type=bind,source=/hostside/path,target=/containerside/path
volume mounts
-v /containerside/path
-v volumename:/containerside/path
--mount type=volume,source=volumename,target=/containerside/path
(If a volume name is not specified, a random one is chosen.)
The documentation tries to convince you to use one in favor of the other instead of just telling you how each works, which is confusing.
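As a concrete illustration, the following two commands request the same bind mount in both notations (a sketch, reusing debian:latest from the examples above):
❯ docker run --rm -v "$PWD":/data debian:latest ls /data
❯ docker run --rm --mount type=bind,source="$PWD",target=/data debian:latest ls /data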
Here's the script I use:
#!/bin/bash
IFS=$'\n\t'
set -euox pipefail
CNAME="$1" # container name or ID
FILE_PATH="$2" # path of the file inside the container
# Copy the file out, edit the local copy, then stream it back in
TMPFILE="$(mktemp)"
docker exec "$CNAME" cat "$FILE_PATH" > "$TMPFILE"
$EDITOR "$TMPFILE"
cat "$TMPFILE" | docker exec -i "$CNAME" sh -c 'cat > '"$FILE_PATH"
rm "$TMPFILE"
and the gist for when I fix it but forget to update this answer:
https://gist.github.com/dmohs/b50ea4302b62ebfc4f308a20d3de4213
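Usage mirrors dmcedit above (the script name and file path here are just hypothetical examples):
./edit-in-container.sh mycontainer /etc/nginx/nginx.conf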
If you think of your volume as a "network drive", it is easier:
to edit a file located on this drive, you just need to turn on another machine that can connect to this network drive, and edit the file as normal.
How to do that purely with docker (without FTP/SSH ...)?
Run a container that has an editor (vi, Emacs). Search Docker Hub for "alpine vim".
Example:
docker run -d --name shared_vim_editor \
-v <your_volume>:/home/developer/workspace \
jare/vim-bundle:latest
Run the interactive command:
docker exec -it -u root shared_vim_editor /bin/bash
Hope this helps.
I use the SFTP plugin of my IDE:
Install an ssh server in your container and allow root access.
Run your docker container with -p localport:22.
Install an SFTP plugin in your IDE.
Example using sublime sftp plugin:
https://www.youtube.com/watch?v=HMfjt_YMru0
The way I do it is using Emacs with the docker package installed. I would recommend the Spacemacs distribution of Emacs. The steps:
1) Install Emacs (Instruction) and install Spacemacs (Instruction)
2) Add docker in your .spacemacs file
3) Start Emacs
4) Find file (SPC+f+f) and type /docker:<container-id>:/<path of dir/file in the container>
5) Now your emacs will use the container environment to edit the files
docker run -it --name YOUR_NAME IMAGE_ID /bin/bash
$> vi path_to_file
The following worked for me
docker run -it IMAGE_NAME /bin/bash
E.g., my image was called ipython/notebook:
docker run -it ipython/notebook /bin/bash
