I have a headless Ubuntu server that runs a vncserver, and I access it via VNC.
On this server I tried to run the Chrome example from this blog: https://blog.jessfraz.com/post/docker-containers-on-the-desktop/
but it failed like this:
>./runChrome.sh
Warning: '--cpuset' is deprecated, it will be replaced by '--cpuset-cpus' soon. See usage.
Unable to find image 'jess/chrome:latest' locally
latest: Pulling from jess/chrome
42b46c8b387a: Pull complete
9402e656a0ac: Pull complete
753b4bb947ba: Pull complete
9f3ad4f52cb2: Pull complete
c3374db106fe: Pull complete
0cdf8bc021c3: Pull complete
e1db72a1498b: Pull complete
fe339b19b201: Pull complete
7b966fb57da2: Already exists
Digest: sha256:65185c906ab67ca126ca49943cc5c4f05d2e6c9aac04a505fa3f5e6b183b72da
Status: Downloaded newer image for jess/chrome:latest
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
[1:1:0729/171614:ERROR:browser_main_loop.cc(185)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
No protocol specified
[1:1:0729/171614:ERROR:browser_main_loop.cc(231)] Gtk: cannot open display: unix:1
>cat runChrome.sh
docker run -it --net host --cpuset 0 --memory 512mb -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v $HOME/Downloads:/root/Downloads -v $HOME/.config/google-chrome/:/data --device /dev/snd --name chrome jess/chrome
Any idea how to fix this? What might be going wrong?
This worked:
docker run -e DISPLAY -v $HOME/.Xauthority:/home/ghc/.Xauthority --net=host -ti 80d81a4ae162 /bin/bash
I got the inspiration from the comments here: http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/
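For the record, a minimal sketch of how the same fix could be folded back into runChrome.sh (assuming the container runs Chrome as root, as the original /root/Downloads mount suggests, and that $DISPLAY on the host points at the VNC display, e.g. :1):
# Pass the host's DISPLAY through and mount the X authority file so the
# containerized Chrome is authorized to talk to the VNC X server.
docker run -it --net=host --memory 512mb \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/root/.Xauthority \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $HOME/Downloads:/root/Downloads \
  -v $HOME/.config/google-chrome/:/data \
  --device /dev/snd \
  --name chrome jess/chrome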
I'm seeing possibly incorrect behavior and a bad error message when I run an image while a linked container is not found:
# this works:
> docker run --rm -d --name natsserver nats
> docker run --rm -it --name hello-world --link natsserver hello-world
# now stop natsserver again...
> docker stop natsserver
When I run hello-world again with the same command, I don't understand the first part of the error output: why does Docker try to pull?
> docker run --rm -it --name hello-world --link natsserver hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Digest: sha256:b8ba256769a0ac28dd126d584e0a2011cd2877f3f76e093a7ae560f2a5301c00
Status: Image is up to date for hello-world:latest
docker: Error response from daemon: could not get container for natsserver: No such container: natsserver.
See 'docker run --help'.
And things get even worse if I try to run an image I have built locally:
> docker build -t nats-logger .
[...]
Successfully tagged nats-logger:latest
> docker run --rm -it --name nats-logger --link=natsserver nats-logger
Unable to find image 'nats-logger:latest' locally
docker: Error response from daemon: pull access denied for nats-logger, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
So my questions are:
a) Is Docker allowed to try to pull in this case, or is this bad behavior?
b) Is this really a bad error message, or did I miss something?
P.S.: I'm running Docker version 19.03.2, build 6a30dfc on Windows 10.
Is docker allowed to try to pull in this case?
Docker will pull an image if it is not available on the machine.
Unable to find image 'hello-world:latest' locally
This message is not due to linking; it appears because hello-world:latest does not exist among your local images. When you run docker run, Docker first looks for the image locally and then pulls from the remote registry if it is not found.
Now, first things first: it is better to use docker-compose instead of legacy container links.
You cannot link to a container that is not running. Verify that natsserver is running using docker ps; if it is running, then you can link to it:
docker run --rm -it --name hello-world --link natsserver:my_natserver_host hello-world
Once it is up, you can check the linking:
docker inspect hello-world | grep -A 1 Links
Legacy container links
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
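A minimal sketch of the user-defined network approach the warning recommends, reusing the container names from the question (the network name nats-net is illustrative):
# Containers attached to the same user-defined bridge network can reach
# each other by container name, with no --link required.
docker network create nats-net
docker run --rm -d --name natsserver --network nats-net nats
docker run --rm -it --name nats-logger --network nats-net nats-logger
docker-compose sets up such a network for its services automatically, which is one reason it is the recommended replacement for legacy links.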
Simply try "docker login".
Check whether your image name exists on Docker Hub.
Check that your docker build command is correct: docker build -t image-name .
Review the correctness of your Dockerfile.
I'm on Windows 10 Pro, using Docker for Windows with Linux containers.
I have logged in using docker login -u username -p password,
and I'm getting this issue a lot, not just with httpd but also with django, mysql, etc.
> docker container run -p 8080:80 -d --name n2 httpd
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
3d77ce4481b1: Downloading
73674f4d9403: Download complete
d266646f40bd: Download complete
ce7b0dda0c9f: Download complete
01729050d692: Download complete
014246127c67: Download complete
7cd2e04cf570: Download complete
docker: unauthorized: authentication required.
See 'docker run --help'.
To solve this issue, update Windows to version 1803 (the new April Spring Creators Update).
I have tried to install TFServing in a Docker container several times.
However, I still can't build it without errors.
I followed the installation steps on the official site, but I keep hitting compile errors during the build.
I suspect there is some flaw in the Dockerfile I wrote.
I have attached a screenshot of the error.
I found a Dockerfile on the net which meets my needs:
https://github.com/posutsai/DeepLearning_Docker/blob/master/Tensorflow-serving/Dockerfile-Tensorflow-serving-gpu
As of now (Oct 2019), official Docker images of TFServing for both CPU and GPU are available at https://hub.docker.com/r/tensorflow/serving.
To set up your tfserving image on Docker, just pull the image and start the container.
Pull the official image:
docker pull tensorflow/serving:latest
Start the container:
docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/path/to/model/dir,target=/models/inception --name tfserve -e MODEL_NAME=inception -t tensorflow/serving:latest
To use a GPU, pull the GPU-specific image and pass the appropriate parameters in the docker command:
docker pull tensorflow/serving:latest-gpu
docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/path/to/model/dir,target=/models/inception --name tfserve_gpu -e MODEL_NAME=inception --gpus all -t tensorflow/serving:latest-gpu --per_process_gpu_memory_fraction=0.001
Please note:
The --gpus all flag allocates all available GPUs (in case you have multiple on your machine) to the Docker container.
Use --gpus device=1 to select a specific GPU device (indices start at 0) if you need to restrict usage to a particular device.
The --per_process_gpu_memory_fraction flag restricts GPU memory usage by the tfserving Docker image. Pass its value according to your program's needs.
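Once the container is up, a quick way to sanity-check it is through the REST port published above; a sketch, assuming the model name inception from the commands above:
# Query the model's status over the REST API (port 8501):
curl http://localhost:8501/v1/models/inception
# Send a prediction request; the shape of the "instances" payload depends on
# your model's signature, so the body below is only a placeholder.
curl -X POST http://localhost:8501/v1/models/inception:predict \
  -d '{"instances": [...]}'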
I'm brand new to both TeamCity and Docker. I'm struggling to get a Docker container with TeamCity running and usable on my local machine. I've tried several things, to no avail:
I installed Docker for Mac per the instructions here. I then tried to run the following command, documented here, for setting up TeamCity in Docker:
docker run -it --name teamcity-server-instance \
-v c:\docker\data:/data/teamcity_server/datadir \
-v c:\docker\logs:/opt/teamcity/logs \
-p 8111:8111 \
jetbrains/teamcity-server
That returned the following error: docker: Error response from daemon: Invalid bind mount spec "c:dockerdata:/data/teamcity_server/datadir": invalid mode: /data/teamcity_server/datadir.
Taking a different tack, I tried to follow the instructions here and ran the following command:
docker run -it --name teamcity -p 8111:8111 sjoerdmulder/teamcity
The terminal indicated that it was starting up a web server, but I can't browse to it at localhost, nor at localhost:8111 (error ERR_SOCKET_NOT_CONNECTED without the port, and ERR_CONNECTION_REFUSED with the port).
Since the website with the docker run command says to install Docker via Docker Toolbox, I then installed that at the location they pointed to (here). I then tried the
docker-machine ip default
command they suggested, but it failed with the error "Host does not exist: "default"". That makes sense, since the website said the "default" VM would be created by running Docker Quickstart and I didn't do that, but they don't provide any link to Docker Quickstart, so I don't know what they are referring to.
To try to get the IP address the container was running on, I tried this command:
docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
That listed the names of the running containers, each followed by a hyphen, then nothing. I also tried
docker ps -a
That listed running containers also, but didn't give the IP. Also, the port column is blank, and the status says "exited (130) 4 minutes ago", so it doesn't seem like the container stayed alive after starting.
I also tried again with port 80, hoping that would make the site show at localhost:
docker run -it --name teamcity2 -p 80:80 sjoerdmulder/teamcity
So at this point, I'm completely puzzled and blocked: I can't start the server at all following the instructions on hub.docker.com, and I can't figure out how to browse to the site that does start up with the other instructions.
I'll be very grateful for any assistance!
JetBrains now provides official Docker images for TeamCity. I would recommend starting with those.
The example command for their TeamCity server image looks like this:
docker run -it --name teamcity-server-instance \
-v <path to data directory>:/data/teamcity_server/datadir \
-v <path to logs directory>:/opt/teamcity/logs \
-p <port on host>:8111 \
jetbrains/teamcity-server
That looks a lot like your first attempt. However, c:\docker\data is a Windows file path. You said you're running this on a Mac, so that's definitely not going to work; see the sketch after the list below for a macOS-style variant.
Once TeamCity starts, it should be available on port 8111. That's what the -p 8111:8111 part of the command does: it maps port 8111 on your machine to port 8111 in the VM that Docker for Mac creates to run your containers. ERR_CONNECTION_REFUSED could be caused by several things. The two most likely possibilities are:
TeamCity can take a little while to start up, and maybe you didn't give it enough time. The solution is to wait.
-it starts the TeamCity container in interactive mode. If you exit the terminal window where you ran the command, the container will probably terminate and become inaccessible. The solution is to keep that window open, or to run the container in detached mode.
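For example, a macOS-style variant of the official command, run detached so it survives closing the terminal (the host paths under $HOME are illustrative):
# Unix-style host paths instead of c:\docker\..., and -d instead of -it.
docker run -d --name teamcity-server-instance \
  -v $HOME/docker/teamcity/data:/data/teamcity_server/datadir \
  -v $HOME/docker/teamcity/logs:/opt/teamcity/logs \
  -p 8111:8111 \
  jetbrains/teamcity-server
After that, give it a minute or two and then browse to localhost:8111.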
There is a good overview of the differences between Docker for Mac and Docker Toolbox here: Docker for Mac vs. Docker Toolbox. You don't need both, and for most cases you'll want to use Docker for Mac for testing stuff out locally.
I created a registry mirror. Can I pull an image without Internet access? I created the mirror using this command:
docker run -d -p 5555:5000 -e STORAGE_PATH=/mirror -e STANDALONE=false -e MIRROR_SOURCE=https://registry-1.docker.io -e MIRROR_SOURCE_INDEX=https://index.docker.io -v /Users/v11/Documents/docker-mirror:/mirror --restart=always --name mirror registry
When I pull an image like hello-world:
docker pull hello-world
I can find the image in the local path I set ("/Users/v11/Documents/docker-mirror"). Does that mean I succeeded in creating the mirror? But then I disconnected from the Internet and deleted hello-world:
docker rmi hello-world
and pulled again, but it failed. I want to know whether I can only use the mirror with Internet access. If not, what am I doing wrong?
By the way, I started the Docker daemon with these options:
docker --insecure-registry 192.168.59.103:5555 --registry-mirror=http://192.168.59.103:5555 -d &
and 192.168.59.103 is my boot2docker ip.
In your configuration you specified a non-standalone setup using MIRROR_SOURCE_INDEX. Whenever you search for an image, that index is queried; the MIRROR_SOURCE gets queried for the image itself.
You might be able to pull an image offline if it was already pulled (and therefore cached by the mirror) while you were online. But you cannot issue docker search commands when the index is not available.
If you want to be completely independent from the public Docker registry, you would need to set up your own private registry.
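A minimal sketch of that approach with the official registry:2 image (the port and host path are illustrative):
# Run a standalone private registry, persisting its data on the host:
docker run -d -p 5000:5000 --restart=always --name private-registry \
  -v /Users/v11/registry-data:/var/lib/registry registry:2
# While online, tag and push the images you need into it:
docker pull hello-world
docker tag hello-world localhost:5000/hello-world
docker push localhost:5000/hello-world
# Later, with no Internet access, pull from the private registry instead:
docker pull localhost:5000/hello-world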