Docker command equivalents - docker

What's the difference between the commands docker container run and docker run?
Also, what's the difference between docker image build and docker build?

Docker originally used top-level commands like docker run and docker build to run a container or build an image. In later versions the CLI was reorganized into management groups, as #tarun-lalwani's answer suggests, so there is no difference between docker run and docker container run (they are just aliases of each other).

If you just run the docker command:
$ docker
Usage: docker COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default "/Users/tarun.lalwani/.docker")
-D, --debug Enable debug mode
--help Print usage
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default "/Users/tarun.lalwani/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default "/Users/tarun.lalwani/.docker/cert.pem")
--tlskey string Path to TLS key file (default "/Users/tarun.lalwani/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Management Commands:
checkpoint Manage checkpoints
config Manage Docker configs
container Manage containers
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
volume Manage volumes
Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
deploy Deploy a new stack or update an existing stack
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
Run 'docker COMMAND --help' for more information on a command.
Now you have groups to manage multiple commands, like container and image. When you run one of these groups:
$ docker image
Usage: docker image COMMAND
Manage images
Options:
--help Print usage
Commands:
build Build an image from a Dockerfile
history Show the history of an image
import Import the contents from a tarball to create a filesystem image
inspect Display detailed information on one or more images
load Load an image from a tar archive or STDIN
ls List images
prune Remove unused images
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rm Remove one or more images
save Save one or more images to a tar archive (streamed to STDOUT by default)
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
So docker build is an alias of docker image build, just as docker run is an alias of docker container run. Docker initially had only first-level commands; later the team realized they needed a way to group them, and the old names were kept as aliases so that existing clients and scripts stay compatible.
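For example, each pair below is interchangeable; the image names and tags are just placeholders for illustration:
# these two commands are equivalent
docker run -d nginx:alpine
docker container run -d nginx:alpine
# and so are these two
docker build -t myapp:1.0 .
docker image build -t myapp:1.0 .
# likewise docker ps / docker container ls, docker images / docker image ls,
# and docker rmi / docker image rm
docker rmi myapp:1.0
docker image rm myapp:1.0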

Related

How to specify volume for docker container in CircleCI configuration?

I did not manage to find out how to mount a volume of a docker image in config.yml when integrating with CircleCI.
The official documentation describes variables for container usage, entry point, command, etc., but says nothing about volume mounting.
The scenario is: building my project requires two docker containers, the main container and another container for service foo. To use service foo, I need to expose some artifacts generated in earlier steps to the foo container and then run some further steps.
Anyone has idea whether I can do that?
As taken from the CircleCI documentation:
Mounting Folders
It's not possible to mount a folder from your job space into a container in Remote Docker (and vice versa). But you can use the docker cp command to transfer files between these two environments. For example, say you want to start a container in Remote Docker and use a config file from your source code for that:
- run: |
    # creating dummy container which will hold a volume with config
    docker create -v /cfg --name configs alpine:3.4 /bin/true
    # copying config file into this volume
    docker cp path/in/your/source/code/app_config.yml configs:/cfg
    # starting application container using this volume
    docker run --volumes-from configs app-image:1.2.3
In the same way, if your application produces some artifacts that need to be stored, you can copy them from Remote Docker:
- run: |
    # starting container with our application
    # make sure you're not using `--rm` option otherwise container will be killed after finish
    docker run --name app app-image:1.2.3
- run: |
    # once application container finishes we can copy artifacts directly from it
    docker cp app:/output /path/in/your/job/space
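For context, a minimal config.yml using this pattern might look like the sketch below; the job name, primary image, and file paths are placeholder assumptions, and the key step is setup_remote_docker, which points the docker CLI at the remote engine:
version: 2
jobs:
  build:
    docker:
      - image: cimg/base:stable   # placeholder primary container with the docker CLI available
    steps:
      - checkout
      - setup_remote_docker
      - run: |
          # hold the config in a volume on a dummy container, then share it with the app container
          docker create -v /cfg --name configs alpine:3.4 /bin/true
          docker cp app_config.yml configs:/cfg
          docker run --volumes-from configs app-image:1.2.3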

Docker container migration

I'm trying to use the experimental checkpoint feature to implement container migration between machines. I've found many examples of checkpointing and restoring on the same machine but I've only found this documentation about migrating checkpoints between different machines:
https://circleci.com/blog/checkpoint-and-restore-docker-container-with-criu/
However, the commands it uses are outdated and docker checkpoint restore is no longer available; the docker start --checkpoint syntax should be used instead. My use case is as follows:
Host 1: Has a docker container running, which I checkpoint to a location in $CHECKPOINT_FOLDER (a folder shared among the machines) with docker checkpoint create --checkpoint-dir=$CHECKPOINT_FOLDER $NAME checkpoint-$NAME, where $NAME is the name of the running container (one-13 in this case).
Host 2: Has access to the $CHECKPOINT_FOLDER folder and I can see the created checkpoint. I run docker start --checkpoint-dir $CHECKPOINT_FOLDER --checkpoint checkpoint-$NAME $NAME, where $NAME again is the name of the container that was running on host 1 (one-13). However, I get this error:
No such container: one-13
This makes me think that I have to create a container before restoring the checkpoint, but then how do I do so? Isn't it supposed to be created automatically from the checkpoint? If not, is there a way to pass the checkpoint to the docker create command? What's the workflow for this use case?
Thank you.
Before restoring the container on the destination host, you have to create a container:
sudo docker create --name $NAME <container-image>
Creating the container $NAME makes sure that the base image is downloaded and the disk space is allocated; you can check under /var/lib/docker/containers/.
Then you can restore it from the shared dump files in $CHECKPOINT_FOLDER:
sudo docker start --checkpoint=checkpoint-$NAME --checkpoint-dir=$CHECKPOINT_FOLDER $NAME
or, with the checkpoint name spelled out:
sudo docker start --checkpoint=checkpoint-one-13 --checkpoint-dir=$CHECKPOINT_FOLDER $NAME
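Putting both hosts together, the full flow looks roughly like this (the container name, checkpoint name, and shared folder come from the question above; checkpointing still requires the daemon's experimental mode and CRIU installed on both hosts):
# on host 1 (source): checkpoint the running container into the shared folder
sudo docker checkpoint create --checkpoint-dir=$CHECKPOINT_FOLDER one-13 checkpoint-one-13
# on host 2 (destination): create a container from the same image, then restore from the dump
sudo docker create --name one-13 <container-image>
sudo docker start --checkpoint-dir=$CHECKPOINT_FOLDER --checkpoint=checkpoint-one-13 one-13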

Name a docker image at build time, and how to retrieve it

I built a docker image. Then the docker images command displays:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> eaaf8e203bd4 1 min ago 253.2 MB
Is there a way in my Dockerfile to specify a name for this build? Or on the docker build . command line?
Another question: I want to upload the docker container via SFTP to my production server and run it. Where are the containers stored?
You should use the -t option of docker build:
docker build -t name:tag .
The tag is optional (it defaults to latest).
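For example (the image name and tag here are placeholders), after building with -t, the listing from the question shows a proper name instead of <none>:
docker build -t myapp:1.0 .
docker images
# REPOSITORY   TAG   IMAGE ID       CREATED     SIZE
# myapp        1.0   eaaf8e203bd4   1 min ago   253.2 MB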
I want to upload via SFTP the docker container on my production server and run it.
You should "upload" the image, and by that I mean push it to a docker registry running on your server.
You could also commit a running container into an intermediate image (which would freeze the running state of the container, but would not preserve volume data, if a volume was declared in that container), save that image to a tar archive with docker save, copy the archive over, and load it on the server with docker load.
See "How to move docker containers between different hosts".
Once the image is on the server, see "Where are docker images stored on the host machine?".
(/var/lib/docker/containers for containers)
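A minimal sketch of that save/load route over SFTP (the image and archive names are placeholders):
# on your build machine
docker save -o myapp.tar myapp:1.0
# copy myapp.tar to the production server with your SFTP client of choice
# on the production server
docker load -i myapp.tar
docker run -d myapp:1.0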

Docker: how to clone container and its data into a new one

Is there a way to clone a container and its data into a new one with different starting parameters?
At the moment I'm only able to start a new cloned container (from a custom image) WITHOUT the data.
Let me explain what I have to do: I started a "docker-jenkins" container with some starting parameters and then configured it, but now I've noticed that I forgot some important starting parameters, so I want to restart the same container with additional parameters...
The problem (if I understand correctly) is that I cannot modify the starting parameters of an existing running container, so my idea is to start a cloned one (data INCLUDED) with different parameters, but I don't understand how to do it...
Can someone help me?
1. Using volumes
If your sole goal is to persist your data, you need to use volumes.
A data volume is a specially-designated directory within one or more
containers that bypasses the Union File System. Data volumes provide
several useful features for persistent or shared data:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point,
that existing data is copied into the new volume upon volume
initialization. (Note that this does not apply when mounting a host
directory.)
Data volumes can be shared and reused among containers.
Changes to a data volume are made directly.
Changes to a data volume will not be included when you update an image.
Data volumes persist even if the container itself is deleted.
Source:
https://docs.docker.com/engine/tutorials/dockervolumes/
Essentially you map a folder on your machine to one inside your container.
When you kill the container and spawn a new instance (with modified parameters), your volume (with the existing data) is re-mapped.
Example:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
Source:
https://hub.docker.com/_/jenkins/
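So for the Jenkins case in this question, you can remove the old container and start a new one with the extra parameters while pointing at the same host folder; the data in /your/home survives (the container name and added ports below are only examples):
# remove the old container; the data lives in /your/home, not inside the container
docker rm -f my-jenkins
# start a new one with the parameters you forgot, reusing the same volume
docker run -d --name my-jenkins -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins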
2. Using commit to create snapshots
A different route is to make use of the docker commit command.
It can be useful to commit a container's file changes or settings into a new image. This allows you to debug a container by running an interactive shell, or to export a working dataset to another server.
Generally, it is better to use Dockerfiles to manage your images in a
documented and maintainable way.
The commit operation will not include any data contained in volumes
mounted inside the container.
https://docs.docker.com/engine/reference/commandline/commit/
$ docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago Up 25 hours
197387f1b436 ubuntu:12.04 /bin/bash 7 days ago Up 25 hours
$ docker commit c3f279d17e0a svendowideit/testimage:version3
f5283438590d
$ docker images
REPOSITORY TAG ID CREATED SIZE
svendowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB
It is also possible to commit with altered configuration:
docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' -c "EXPOSE 80" c3f279d17e0a svendowideit/testimage:version4
To clone a container in docker, you can use docker commit to create a snapshot of the container:
Use docker images to view the docker image REPOSITORY and TAG names.
Use docker ps -a to view the available containers and note the CONTAINER ID of the container of which a snapshot is to be created.
Use docker commit <CONTAINER ID> <REPOSITORY>:<TAG> to create a snapshot and save it as an image.
Use docker images again to view the saved image.
To access the saved snapshot, run:
docker run -i -t <IMAGE ID> /bin/bash
Alternatively, to get back into the original container rather than a new one created from the snapshot:
docker ps -a
docker start <CONTAINER ID>
docker exec -ti <CONTAINER ID> bash
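Applied to the docker-jenkins scenario from the question, the clone-and-restart flow would look roughly like this (the image tag and extra parameters are placeholders, and remember that docker commit does not capture data stored in volumes):
# snapshot the configured container into an image
docker commit docker-jenkins jenkins-configured:snapshot
# stop and remove the old container
docker stop docker-jenkins
docker rm docker-jenkins
# start a new container from the snapshot, this time with the missing parameters
docker run -d --name docker-jenkins -p 8080:8080 -p 50000:50000 jenkins-configured:snapshot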

Docker: How to save running instance?

I am running an instance of docker, and I would like to save my work - the docs just aren't 100% clear on how to do this, so I'm asking here. I opened the docker instance using:
docker run -it [public dockerhub name]
Now I would like to save all my work locally so that I can come back to it. I don't particularly want to check it into dockerhub, unless that's advisable.
Here's what I have done: I opened a new docker CLI tab and ran docker ps there to find the ID of the running docker instance. Then, in the same tab, I tried this:
docker commit <docker-id> me/myinstance
This gave me a commit hash.
Can I now safely exit the running docker instance? What command would I use to open it again - do I need to store the commit hash, or can I just do docker run -it me/myinstance?
As the docs mention:
You pull an image from Docker Hub.
You run that image in a container using docker run <image>.
When you make changes to a container, you're not changing the underlying image, so those changes are not persisted if the container is stopped. To persist the changes you've made to the container, you create a new image with docker commit <container_id>.
In the example that is on Docker docs:
# What containers are running on my system?
$ docker ps
ID IMAGE COMMAND CREATED
c3f279d17e0a ubuntu:12.04 /bin/bash 7 days ago
197387f1b436 ubuntu:12.04 /bin/bash 7 days ago
# Create a new image called svendowideit/testimage, tag it as "version3"
$ docker commit c3f279d17e0a svendowideit/testimage:version3
f5283438590d
# What images do I have on my system?
$ docker images
REPOSITORY TAG ID
svendowideit/testimage version3 f5283438590d
This way, you have persisted the changes made to container c3f279d17e0a in a new image called svendowideit/testimage:version3.
Now you have an image with your modification, so you can run it as many times as you want on a container:
$ docker run svendowideit/testimage:version3
Again, containers are stateless. Any change you make inside a container is lost when that container stops. One way to persist data even after a container exits is by using volumes. This way your container has access to a directory in the host filesystem that you can read and write.
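A minimal example of that volume approach, reusing the image name from the question (the host path is a placeholder):
# anything written under /work persists on the host even after the container is removed
docker run -it -v /path/on/host/work:/work me/myinstance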
Changes made inside a container are not lost when the container exits, and containers (container applications) are not stateless unless you have specifically separated the data storage from the application (by mounting folders from the host filesystem or sending data to a database outside of the container).
To see your changes persisted in a container, start the old container (docker start ~) instead of creating a new container (docker run ~).
This is easier to do if you name your containers.
For example:
docker run -it --name containerName imageName
# do stuff to your container
docker kill containerName
docker start containerName
You will see that your changes are persisted in that container.
You can also commit your container as an image, which can be pushed to a registry or exported to a file.
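To answer the original question directly: after the commit you can safely exit. To come back to your work, either run a fresh container from the committed image or restart the original container (me/myinstance and <docker-id> are the names used in the question):
# option 1: a fresh container from the committed image
docker run -it me/myinstance
# option 2: restart and reattach to the original container and its writable layer
docker start -ai <docker-id>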
