Does docker pull in PowerShell also install on the Windows system - docker

I executed "docker pull nginx" in Windows PowerShell.
The pull downloads an image that is a few MB in size.
I have Windows 10 Pro.
Then I ran nginx as below:
"docker run --name mynginx1 -P -d nginx"
Does the pull command also install nginx on my Windows machine?

No - the docker pull command doesn't install anything; it just downloads the Docker image locally. After the pull there is no container running on the host (which on Windows is actually a VM - the details differ slightly between Docker Desktop and docker-machine, but I won't get into the weeds here). The docker run command is what actually runs a container on the Docker host.
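As a sketch, the pull/run lifecycle from the question looks like this (it requires a running Docker daemon; the container name is just the example from the question):

```shell
# download the image only; nothing runs yet
docker pull nginx

# confirm the image is now stored locally
docker images nginx

# create and start a container from that image
docker run --name mynginx1 -P -d nginx

# the container (not the image) shows up here
docker ps
```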

Related

Docker - unknown command "run"

I am trying to run a Docker container on an Ubuntu WSL2 instance on Windows 10. The installed Docker version is 20.10.17. I have been able to run docker commands and build the image successfully.
The output of docker images:
When I try to run the container using the command docker run -p 5000:5000 test it gives the following error:
I have never had this issue with Docker before and am not sure why it thinks it's an npm command. Does anyone know why this is happening?

Docker Desktop on Ubuntu not showing containers that were built with sudo privileges

I built and ran a Docker container using sudo privileges. To do so I ran the commands below.
This command builds the container, and the build succeeded:
sudo docker build -t getting-started .
After that I ran the container using the command below:
sudo docker run -dp 3000:3000 getting-started
After running the container everything works fine, and I can see my container when I run:
sudo docker ps
But the problem is that I cannot see the container I just built and ran in Docker Desktop.
Note: If I build and run the container without sudo privileges, then I can see it in Docker Desktop.
So what should I do to manage containers from Docker Desktop that were built and are running with sudo privileges?
I have been facing the same problem for a while and haven't found anything that works for me yet. But in your case, if you enable Docker's rootless mode, or add your user to the docker group so you can use Docker without sudo, Docker Desktop may also be able to access those images built in sudo mode.
Try:
https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo
https://docs.docker.com/engine/install/linux-postinstall/
Another solution I thought of is to make Docker Desktop use the same context as the Docker engine - the default context, not the desktop-linux context it creates when starting up. Maybe that will let it see the containers you were using before.
Yet another option is to run Docker Desktop itself with sudo. I don't know how to do that yet, but it's worth a shot if you can find out how.
If your build succeeded, you can run
docker ps -a
to see all running and stopped containers, and you can run
docker logs --tail=50 container-name
to see the container's logs and start fixing the issue.
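If you go the docker-group route mentioned above, a minimal sketch follows (these are the standard Docker post-install steps from the linked docs; log out and back in for the group change to fully take effect):

```shell
# create the docker group if it doesn't exist yet
sudo groupadd docker

# add the current user to it
sudo usermod -aG docker $USER

# apply the new group membership in the current shell
newgrp docker

# verify docker now works without sudo
docker ps

# optionally, point the CLI at the default engine context
# instead of the desktop-linux one Docker Desktop creates
docker context use default
```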

Docker learn issue

I'm learning Docker with the get-started pages (https://docs.docker.com/get-started/part2/#sample-dockerfile) on the official Docker site. In part 2, "Build and run your image", I'm trying to run a Docker container with the following command:
docker run --publish 8000:8080 --detach --name bb bulletinboard:1.0
I then run docker ps, and no container is running. I tried docker start bb and there is still nothing. I am using Docker in an Ubuntu 20.04 virtual machine, and thus the localhost:8000 web page "can't be reached". The Docker build was successful (I got the message Successfully tagged bulletinboard:1.0).

How to Copy Files From Docker Ubuntu Container to Windows Host

I can't figure out how to copy files from a docker ubuntu container to a windows host, or vice versa.
My host is Windows 10. When I start Docker, I run the Ubuntu image using
docker run -it ubuntu bash
The documentation I've read says that the way to transfer files is with docker cp, but apparently that command doesn't exist in this Ubuntu image, i.e., bash: docker: command not found.
This must be a dumb oversight on my part. Can someone please give me a little help?
You need to run the docker cp command on the host machine, not inside the container.
The command template is:
docker cp <containerId>:<src_path_inside_container> <target_host_path>
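For example, assuming the container from docker ps is named mycontainer and the file paths are placeholders (run these from PowerShell on the Windows host, not inside the container):

```shell
# find the container name or ID
docker ps

# container -> Windows host
docker cp mycontainer:/root/notes.txt C:\Users\me\notes.txt

# the reverse direction also works: Windows host -> container
docker cp C:\Users\me\notes.txt mycontainer:/root/notes.txt
```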

gcloud docker not working on Compute Engine VM

I am trying to get docker images from Container Engine to run on a Compute Engine VM. On my laptop I can run gcloud docker pull gcr.io/projectid/image-tag
I just spun up a Debian VM on Compute Engine, but when I try to run any gcloud docker command I get ERROR: (gcloud.docker) Docker is not installed.
> gcloud --version
Google Cloud SDK 140.0.0
alpha 2017.01.17
beta 2017.01.17
bq 2.0.24
bq-nix 2.0.24
core 2017.01.17
core-nix 2017.01.17
gcloud
gsutil 4.22
gsutil-nix 4.22
> gcloud docker --version
ERROR: (gcloud.docker) Docker is not installed.
https://cloud.google.com/sdk/gcloud/reference/docker makes it seem like gcloud docker should work.
Am I supposed to install docker on the VM before running gcloud docker?
My intuition was to install Docker with sudo apt-get install docker, but I was wrong: the actual package name is docker.io. So I restarted the process, and it worked this way:
Install the docker.io package:
sudo apt-get install docker.io
Test that Docker is working:
sudo gcloud docker ps
Pull your image from the image repository, e.g. gcr.io. If you don't have a particular tag, use the latest one:
sudo gcloud docker -- pull gcr.io/$PROJECT_NAME/$APPLICATION_IMAGE_NAME:latest
Run your image. Remember to specify the port mapping correctly: the first port is the one exposed on the GCE instance, and the second is the one exposed internally by the Docker container, e.g. EXPOSE 8000. For instance, in the following example my app is configured to listen on port 8000, but it will be accessed by the public on the default HTTP port, 80:
sudo docker run -d -p 80:8000 --name=$APPLICATION_IMAGE_NAME \
--restart=always gcr.io/$PROJECT_NAME/$APPLICATION_IMAGE_NAME:latest
The --restart flag will make this container restart every time the instance restarts.
I hope it works for you.
Am I supposed to install docker on the VM before running gcloud docker?
Yes. The error message is telling you that Docker needs to be installed on the machine for gcloud docker to work.
You can either install docker manually on your Debian VM or you can launch a VM that has docker pre-installed onto the machine, such as the Container-Optimized OS from Google.
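As a sketch of the second option, a Container-Optimized OS VM can be created like this (the instance name and zone are placeholders; cos-stable/cos-cloud are Google's published image family and project for COS):

```shell
gcloud compute instances create my-docker-vm \
    --image-family cos-stable \
    --image-project cos-cloud \
    --zone us-central1-a
```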