I am trying to use a local Docker container registry as my imageRegistry.
I created a registry with:
docker run -d -p 5000:5000 --name registry registry:2
I have tagged and pushed my image to localhost:5000, and I can see it listed by docker images.
I have modified my launch.json as follows:
{
  "configurations": [
    {
      "name": "Run/Debug on Kubernetes",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}/skaffold.yaml",
      "watch": true,
      "cleanUp": true,
      "portForward": true,
      "imageRegistry": "localhost:5000"
    }
  ]
}
But when I do a Run on Kubernetes, I get an error:
waiting for rollout to finish: 0 of 1 updated replicas are available...
pod/serviceb-847d79694c-6lxbd: container server is waiting to start: localhost:5000/serviceb:latest#sha256:*** can't be pulled.
I'm guessing you are using a local cluster solution such as minikube, Docker's built-in Kubernetes cluster, kind, microk8s, or k3s. Although these clusters are running on your machine, they're running within a virtual machine and so their localhost is the VM, not your machine. So your private registry is not accessible within the cluster.
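If you're on minikube, one common workaround is to skip the registry entirely and build the image straight into the cluster's Docker daemon, so no pull is needed. A minimal sketch, assuming minikube and the serviceb image name from the error above:

# Point this shell's docker CLI at minikube's internal daemon
eval $(minikube docker-env)
# Build the image where the cluster can already see it
docker build -t serviceb .

minikube also ships a registry addon (minikube addons enable registry) that exposes a registry reachable from inside the cluster, if you'd rather keep pushing images.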
I once ran Portainer (the official image).
Now it always restarts after boot and after I stop it.
I changed the restart policy with docker update.
When I run docker inspect on the container, I see:
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
When I use docker update and explicitly set the RestartPolicy to "no", and then stop the container, it restarts anyway (a new container based on the same image) and the RestartPolicy is set to the above again.
I purged everything using docker system prune -a.
When stopping the container and immediately removing the image, it says it cannot delete because there is a container running. (Which is true, because it immediately restarts.)
I even uninstalled docker and removed ~/.docker and then reinstalled docker. Upon startup of the docker daemon, the container was running again.
I really don't understand what else I can do. But I don't need and don't want Portainer running anymore.
You will need to run this command to stop your container from restarting:
docker update --restart=no container-name
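If a brand-new container keeps being created even after docker update and docker rm, it may be managed by something above the engine, such as a Swarm service, which recreates its tasks regardless of the container's restart policy. A quick check, assuming the host is a swarm node (the service name portainer is a guess):

# List services; if portainer appears here, the service is recreating the container
docker service ls
# Remove the service itself instead of fighting the container
docker service rm portainer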
Docker beginner here.
I created a simple ASP.NET web application which, when run, shows me the application's default page.
Using the docker build command I create an image out of it, and then run it with docker run -d --name {containername} -p 81:8080 {imageid}. When I try to access the container over localhost in the browser, i.e. http://localhost:81/, I get a 'The site cannot be reached' error. I expected the application's default page to open on the exposed port 81.
My docker client is windows/amd64 and my docker server is linux/amd64. The Docker version I am using is 19.03.8.
Using docker inspect I could see
"PortBindings": {
"8080/tcp": [
{
"HostIp": "",
"HostPort": "81"
}
]
},
and "IPAddress": "" in networksettings.
(Screenshots of docker ps and docker ps -a output were attached.)
I would appreciate any help or suggestion.
From the screenshots attached, it seems your container is killed as soon as it's started. You should have a process running in the container to keep it up and running. Only then will you be able to access it via the host ip:port.
In this case http://localhost:81
In your docker ps -a output the status is Exited. Ideally it should look something like this when your container is up and running:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c01db0b339c ubuntu:12.04 bash 17 seconds ago Up 16 seconds
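To find out why the container exits, start with its logs; for an ASP.NET Core app the usual causes are the app process ending or the app listening on a different port than the one you mapped. A sketch, where the container name, image tag, publish folder, and DLL name are all placeholders:

# Why did the container exit?
docker logs containername

# A minimal ASP.NET Core Dockerfile: the ENTRYPOINT process must stay in the
# foreground, and the app must listen on the container port you map (8080 here)
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY ./publish .
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "MyWebApp.dll"]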
I know this question has been asked many times on the web, but I am not able to find a suitable solution for me.
I have one server (VM1) with sshfs installed that should provide remote file system storage. I have another server (VM2) where containers run; I would like these containers to use volumes hosted on VM1.
I followed this official Docker guide.
So in VM1 I ran:
docker plugin install vieux/sshfs DEBUG=1 sshkey.source=/home/debian/.ssh/
In VM2 I ran:
docker volume create --name remotevolume -d vieux/sshfs -o sshcmd=debian@vm1:/home/debian/sshfs300 -o IdentityFile=/home/csicari/data/Mega/lavoro/keys/vm-csicari.key -o allow_other -o nonempty
This is the inspect output:
[
  {
    "CreatedAt": "0001-01-01T00:00:00Z",
    "Driver": "vieux/sshfs:latest",
    "Labels": {},
    "Mountpoint": "/mnt/volumes/895d7f7679f69131500c786d7fe5fdc1",
    "Name": "remotevolume",
    "Options": {
      "IdentityFile": "/home/csicari/data/Mega/lavoro/keys/vm-csicari.key",
      "sshcmd": "debian@vm1:/home/debian/sshfs300"
    },
    "Scope": "local"
  }
]
In VM1 I also ran:
docker run -it -v remotevolume:/home -d ubuntu
But I got this error:
docker: Error response from daemon: VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer
). See 'docker run --help'.
This question was asked a while back, but maybe it will help other newbies. On the remote VM, check the PasswordAuthentication property in the /etc/ssh/sshd_config file.
If it is 'yes', you can create the volume with a password parameter instead of a key; otherwise, change it to 'yes' and restart the ssh or sshd service.
service ssh restart
service ssh status
PasswordAuthentication behaviour can also depend on your PAM configuration, so check that as well.
If it is an AWS instance, reset the password using the command passwd ubuntu (here ubuntu is the default user in an EC2 Ubuntu instance).
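With PasswordAuthentication enabled, the volume from the question could then be created with the plugin's password option instead of an identity file. A sketch ('secret' is a placeholder):

docker volume create -d vieux/sshfs \
  -o sshcmd=debian@vm1:/home/debian/sshfs300 \
  -o password=secret \
  remotevolume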
My final conclusion is that I wasn't able to use the /c/users/... location because the drive wasn't shared in Docker's settings (Settings > Shared Drives).
After this I was able to see the /c/users/.. directory in all my container instances. I was then able to use the -v flag with this directory on every instance, basically writing files to my host machine.
What I still don't get is that I don't think I'm actually using volumes at the moment... But it works...
I'm trying to have my Docker-hosted Redis instance persist its data, but the mounted volume doesn't seem to be used. I was using Docker with VirtualBox/boot2docker, where the composition worked; I have since moved to Docker for Windows, where the compose file still works, but I'm not sure about the volumes property.
My docker-compose.yml file:
vq-redis:
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - /c/users/r/.docker/data/redis/data:/data
It doesn't matter if I add or remove the volumes definition; docker inspect will always show something like this:
"Mounts": [
{
"Name": "40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f",
"Source": "/var/lib/docker/volumes/40791b26771b5d62778d85b0ef24e74e516f95d32cf217424232ce8f8a1b8c6f/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
Is the volumes property still working with Docker for Windows or am I missing a point?
Edit:
If I run...
docker run --name vq-redis -d -v //c/users/r/.docker/data/vq-redis:/data redis redis-server --appendonly yes
... I can see the container appearing in Kitematic, and with docker inspect I can see a mount going to my local folder. However, the local folder isn't shown in Kitematic...
If I add data to the Redis server hosted in the container, then stop the container and start it again the data is gone.
I tried setting the local folder manually in Kitematic. This seems to restart the container, but I'm unsure if the initial parameters are passed again. You say:
"If the volumes aren't networked on run"
I guess they were actually networked on run as seen in the console.
Still, I can add data to the Redis instance hosted in the container. But as soon as I restart the container it's gone...
It should work. I assume you didn't get any errors (e.g., permission issues, etc.) and that you are removing old builds before rebuilding. Does the "/var/lib/docker/volumes/4079..." directory get created?
You could try using double leading slashes on Windows, which was a work-around for some versions:
volumes:
  - //c/users/r/.docker/data/redis/data:/data
Redis wouldn't have anything to do with the volume not being created, but have you tried other services, or even a basic docker create -v ... or docker run -v ...?
UPDATE:
There may be some gaps in your understanding of how Docker works that may be getting in the way here.
If you do docker run --name some-redis -d redis redis-server --appendonly yes it will create a volume similar to the one you have in your docker inspect output. Clearly you don't have a /var/lib/docker/volumes/... directory on your Windows machine -- that's in the VM docker host (e.g., boot2docker). How you get to the Docker host volumes differs depending on a number of factors.
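For example, with a docker-machine/boot2docker setup you can SSH into the VM to look around. A sketch, assuming the machine is named default:

docker-machine ssh default
# now inside the VM, where the volumes actually live:
sudo ls /var/lib/docker/volumes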
If the volumes aren't networked on run, restarting won't help. Do docker stop some-redis && docker rm some-redis and re-run.
E.g., running this command:
docker run --name some-redis -d -v $(pwd)/data:/data redis redis-server --appendonly yes
should work as you expect.
ls ./data => appendonly.aof.
It will obviously be empty at first. Destroying the container and creating a new one with the same directory will show the data is still there:
docker exec some-redis redis-cli set bar baz
docker stop some-redis
docker rm some-redis
docker run --name some-redis2 -d -v $(pwd)/data:/data redis redis-server --appendonly yes
docker exec some-redis2 redis-cli get bar
=> "baz"
(the previous value for "bar" set in the destroyed container).
If this doesn't work for you, there could be some issues specific to your environment; perhaps try a Vagrant-based solution, the Docker beta, or a native Linux host.
I'm running Docker 1.11 on OS X and I'm trying to figure out where my local volumes are being written. I created a Docker volume by running docker volume create --name mysql. I then ran docker volume inspect mysql and it output the following:
[
  {
    "Name": "mysql",
    "Driver": "local",
    "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mysql/_data",
    "Labels": {}
  }
]
The issue is that /mnt/sda1/var/lib/docker/volumes/mysql/_data doesn't actually exist on my machine. I thought maybe it didn't get created until it was used by a container, so I started a container by running docker run --name mysql -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysql -P -d mysql:5.7 and then created a database in MySQL, but the mount point still doesn't exist. I even ran docker inspect mysql to ensure it's using the correct volume and got the following:
...
"Mounts": [
{
"Name": "mysql",
"Source": "/mnt/sda1/var/lib/docker/volumes/mysql/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": "rprivate"
}
],
...
At this point I'm completely lost as to where the data is being written. What am I missing?
Because Docker is based on Linux, it cannot run natively on Windows or OS X. Instead, it runs inside a VirtualBox virtual machine (a Docker Machine) that runs a Linux operating system. That's why, when you install Docker Toolbox, you see that VirtualBox gets installed.
To see files and folders inside this virtual machine, use
docker-machine ssh default
default is the name of the default Docker Machine.
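For example, to check the mysql volume from the question without an interactive session, you can pass the command directly (a sketch; the path comes from the docker volume inspect output above):

docker-machine ssh default "sudo ls /mnt/sda1/var/lib/docker/volumes/mysql/_data"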
Docker 19.03.8
I changed the mountpoint to a folder that I created on my Mac.
After that it worked.
$ mkdir volume_database
$ docker container run -it \
    --volume="$(pwd)/volume_database":/volume_database debian
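A quick way to confirm the bind mount works, building on the commands above (hello.txt is a placeholder file):

$ touch volume_database/hello.txt
$ docker container run --rm \
    --volume="$(pwd)/volume_database":/volume_database debian \
    ls /volume_database
hello.txt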