I have a fresh install of Docker on Windows 11. I've updated WSL2.
In PowerShell I've run the command to install Memgraph Platform:
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
PS C:\Users\mjones> wsl --update
Installing: Windows Subsystem for Linux
Windows Subsystem for Linux has been installed.
PS C:\Users\mjones> docker run -it -p 7687:7687 -p 7444:7444 -p 3000:3000 -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform
Unable to find image 'memgraph/memgraph-platform:latest' locally
latest: Pulling from memgraph/memgraph-platform
bbeef03cda1f: Pull complete
02daf11f09c6: Pull complete
5af8c9d8a2b2: Pull complete
0491e760b2fc: Pull complete
eb6fb9e064cc: Pull complete
d5d9807e2348: Pull complete
380fd2f22c95: Pull complete
1dfa183f30c6: Pull complete
f82d62f564a4: Pull complete
73b2e2343891: Pull complete
0a07ae5e7150: Pull complete
0ade360c640a: Pull complete
a5d9a38455ce: Pull complete
b334dbe05140: Pull complete
8d4b2a9136fb: Pull complete
f89e8fe0da1d: Pull complete
ab8f88c89515: Pull complete
1673e45f8514: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:d0c983be57497f098be23324b604cbdf5ad6fc29dbb90bdad909ada1da9f73ab
Status: Downloaded newer image for memgraph/memgraph-platform:latest
Memgraph Lab is running at localhost:3000
mgconsole 1.3
Connected to 'memgraph://127.0.0.1:7687'
Type :help for shell usage
Quit the shell by typing Ctrl-D(eof) or :quit
memgraph>
Is there a docker command I can use to check exactly which version of Memgraph Platform was installed? I can see that the tag is latest, but how can I tell which image it corresponds to?
I've taken a look at https://hub.docker.com/r/memgraph/memgraph-platform/tags and I still don't get it. I'm new to Docker. Is there a command that would say "latest is in fact 2.6.5-memgraph2.5.2-lab2.4.0-mage1.6"? I base this on the fact that those two images have the same DIGEST.
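A sketch of how this can be checked with standard Docker CLI commands, assuming the image has already been pulled locally:

# list local images together with their content digests
docker images --digests memgraph/memgraph-platform

# or print just the repo digest recorded for the pulled tag
docker image inspect --format '{{index .RepoDigests 0}}' memgraph/memgraph-platform:latest

Matching that digest against the tags page on Docker Hub shows which versioned tag latest currently points to.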
I'm working on Google Container-Optimized OS (COS), trying to pull an image from Google Container Registry using docker-compose. I completed the authentication using docker-credential-gcr.
Now docker pull gcr.io/projectname/nextjs works. However:
> docker-compose pull
Pulling nextjs ... error
ERROR: for nextjs unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials.
The problem was that the docker-compose alias did not support the GCR authentication.
The following steps fixed it:
1. Delete ~/.docker/config.json
2. Change the alias in .bashrc to:
alias docker-compose='docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v "$PWD:$PWD" -w="$PWD" cryptopants/docker-compose-gcr'
3. docker pull cryptopants/docker-compose-gcr
4. docker-credential-gcr configure-docker
After that, docker-compose pull works.
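For context, docker-credential-gcr configure-docker writes a credHelpers section into ~/.docker/config.json roughly like this (a sketch; the exact registry list varies with your configuration):

{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr"
  }
}

A containerized docker-compose fails on this because the docker-credential-gcr binary referenced here does not exist inside its container, which is presumably what the cryptopants/docker-compose-gcr image works around.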
I have 2 EC2 instances, one with GitLab EE installed, and another with Docker installed, running GitLab Runner and a Registry container.
GitLab Runner is working and picks up the commit to GitLab, shipping it to Docker for its build phase. However, during the build phase, when the Docker container attempts to log in to the Registry container, it errors with "http: server gave HTTP response to HTTPS client".
Docker login command:
docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
Troubleshooting done:
If I SSH into the server, I can log in with sudo docker login localhost:5000
The same error occurs whether the registry is referenced via $CI_REGISTRY, localhost, or the DNS name
I ensured CI_REGISTRY is set in gitlab.rb
I saw some mentions online about needing to use the --insecure-registry flag on the docker.service ExecStart line; I did that as well and got the same error.
This works if the docker installation is on the same server, but I'm trying to decouple the two applications from each other so they can be managed separately.
Software versions:
Docker Version: 19.03.6
Gitlab-ee Version: 12.8.1
Gitlab-Runner Version: 12.8.0
If anyone could help me on this, it would be greatly appreciated! I've been banging my head against it for 2 days.
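One thing worth checking: the usual way to allow a plain-HTTP registry is the insecure-registries key in /etc/docker/daemon.json rather than a flag on the service file, and it has to be set on every Docker daemon that actually talks to the registry (the runner host's daemon, and a dind service if the build uses one). A sketch, with the registry address as a placeholder:

{
  "insecure-registries": ["registry.example.com:5000"]
}

Then restart the daemon with sudo systemctl restart docker and retry the login.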
Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull?
I am blocked by the company firewall and proxy, and I can't get a hole through it.
My problem is that I cannot use Docker to get images; that is, docker save/pull and other Docker-supplied functions are blocked by the firewall.
Just an alternative - this is what I did in my organization for the couchbase image, where I was blocked by a proxy.
On my personal laptop (OS X)
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then, I uploaded the compressed tar ball to Dropbox and downloaded on my work machine. For some reason Dropbox was open :)
On my work laptop (CentOS 7)
$ docker load < couchbase.tar.xz
References
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
I just had to deal with this issue myself - downloading an image on a restricted machine with Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site:
Downloading Docker Images from Docker Hub without using Docker
With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:
download-frozen-image-v2.sh
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][@digest] ...
The image can then be imported with tar and docker load:
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~
and then load and use the image on the target host:
user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
I adapted a Python script to have an OS-independent solution:
docker-drag
Use it like this, and it will create a TAR archive that you can import using docker load:
python docker_pull.py hello-world
python docker_pull.py alpine:3.9
python docker_pull.py kalilinux/kali-linux-docker
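The resulting archive loads as usual; the file name below is an assumption, use whatever the script actually produced:

docker load -i hello-world.tar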
Use Skopeo. It is a tool made specifically for that purpose (among others).
After installing it, simply execute:
mkdir ubuntu
skopeo --insecure-policy copy docker://ubuntu ./ubuntu
Copy these files and import as you like.
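Skopeo can also write straight to a docker-archive tarball that docker load accepts; a sketch:

# copy the image into a tarball, tagged ubuntu:latest inside the archive
skopeo --insecure-policy copy docker://ubuntu:latest docker-archive:ubuntu.tar:ubuntu:latest
# load it on the target machine
docker load -i ubuntu.tar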
First, check if your Docker daemon is configured to use the proxy. With boot2docker and docker-machine, for instance, this is done on docker-machine create, with the --engine-env option.
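For example, a sketch with docker-machine (the proxy host and port are assumptions):

docker-machine create -d virtualbox \
    --engine-env HTTP_PROXY=http://proxy.example.com:3128 \
    --engine-env HTTPS_PROXY=http://proxy.example.com:3128 \
    --engine-env NO_PROXY=localhost,127.0.0.1 \
    proxied-machine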
If this is just a certificate issue (i.e., Firefox does access Docker Hub), try and install that certificate:
openssl s_client -connect index.docker.io:443 -showcerts </dev/null | openssl x509 -outform PEM > docker.pem
sudo cp docker.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
sudo systemctl restart docker
sudo docker run hello-world
The other workaround (not a recommended solution) would be to access Docker Hub without relying on the certificate, using --insecure-registry.
If the firewall is actively blocking any Docker pull, to the point you can't even access Docker Hub from Firefox, then you would need to docker save/docker load an image archive. Save it from a machine where you did access Docker Hub (and where the docker pull succeeded). Load it on your corporate machine (after approval of your IT system administrators, of course).
Note: you cannot easily "just" download an image, because it is often based on top of other images which you would need to download too. That is what docker pull does for you. And that is what docker save does too (create one archive composed of all the necessary images).
The OP Ephreal adds in the comments:
[I] didn't get my corp image to work either.
But I found that I could download the Dockerfile and recreate the image myself from scratch.
This is essentially the same as downloading the image.
So, by definition, a Docker pull client command actually needs to talk to a Docker daemon, because the Docker daemon assembles layers one by one for you.
Think of it as a POST request - it's causing a mutation of state, in the Docker daemon itself. You're not 'pulling' anything over HTTP when you do a pull.
You can pull all the individual layers over REST from the Docker registry, but that won't actually be the same semantics as a pull, because pull is an action that specifically tells the daemon to go and get all the layers for an image you care about.
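To make that concrete, here is a minimal sketch of those layer-level REST calls against the Docker Hub registry API (v2); it assumes curl and jq are available:

# obtain a pull token for the library/ubuntu repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" | jq -r .token)

# fetch the manifest, which lists the layer digests
curl -s -H "Authorization: Bearer $TOKEN" \
     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     "https://registry-1.docker.io/v2/library/ubuntu/manifests/latest"

# each layer can then be fetched as a blob by digest, e.g.:
# curl -sL -H "Authorization: Bearer $TOKEN" \
#      "https://registry-1.docker.io/v2/library/ubuntu/blobs/<digest>" -o layer.tar.gz

Assembling those blobs into a loadable image is exactly the bookkeeping that docker pull (or the scripts above) does for you.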
Another possibility might be an option for you if your company firewall (and policy) allows connecting to a remote SSH server. In that case, you can simply set up an SSH tunnel to route traffic to the Docker registry through it.
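A sketch of the simplest variant, with placeholder host names, forwarding a private registry's port through the SSH server:

# forward local port 5000 to a registry reachable from the SSH host
ssh -N -L 5000:registry.internal:5000 user@ssh.example.com &
# pull through the tunnel (localhost:5000 may need to be in insecure-registries)
docker pull localhost:5000/myimage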
The answer and solution to my original question was that I found I could download the Dockerfile and all the necessary support files, and recreate the image myself from scratch. This is essentially the same as downloading the image.
This solution has already been given in the answers and comments above; I'm just pinning it here.
This is no longer an issue for me, though, since my company has changed its policy and now allows docker pull commands to work.
Thanks @Ham Co for the answer.
I adapted a Go tool to have an OS-independent solution:
golang http pull docker image
./gopull download redis
This produces a Docker-importable archive, redis.tar.
References:
https://github.com/NotGlop/docker-drag
I have an Ubuntu server without a display which runs a VNC server, and I access it via VNC.
On this server I have tried to run the Chrome example from this blog post: https://blog.jessfraz.com/post/docker-containers-on-the-desktop/
but it failed like this:
>./runChrome.sh
Warning: '--cpuset' is deprecated, it will be replaced by '--cpuset-cpus' soon. See usage.
Unable to find image 'jess/chrome:latest' locally
latest: Pulling from jess/chrome
42b46c8b387a: Pull complete
9402e656a0ac: Pull complete
753b4bb947ba: Pull complete
9f3ad4f52cb2: Pull complete
c3374db106fe: Pull complete
0cdf8bc021c3: Pull complete
e1db72a1498b: Pull complete
fe339b19b201: Pull complete
7b966fb57da2: Already exists
Digest: sha256:65185c906ab67ca126ca49943cc5c4f05d2e6c9aac04a505fa3f5e6b183b72da
Status: Downloaded newer image for jess/chrome:latest
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
[1:1:0729/171614:ERROR:browser_main_loop.cc(185)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
No protocol specified
[1:1:0729/171614:ERROR:browser_main_loop.cc(231)] Gtk: cannot open display: unix:1
>cat runChrome.sh
docker run -it --net host --cpuset 0 --memory 512mb -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v $HOME/Downloads:/root/Downloads -v $HOME/.config/google-chrome/:/data --device /dev/snd --name chrome jess/chrome
Any idea how to fix this?
What might be going wrong?
This worked:
docker run -e DISPLAY -v $HOME/.Xauthority:/home/ghc/.Xauthority --net=host -ti 80d81a4ae162 /bin/bash
I got inspired by the comments here: http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/
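The "No protocol specified" line is the X server refusing the connection for lack of authorization. Besides mounting .Xauthority as above, a commonly used (less secure) alternative sketch is to allow local clients before starting the container:

# run inside the VNC session that owns the display
xhost +local:
./runChrome.sh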