I have run the command docker context create PikaServer --docker "host=ssh://pika_node" and it seems to work just fine.
Image 1
I can see the right container and the right image from the server. However, when I use the Docker extension in VSCode, I get a "failed to connect" error. I am not sure why; it may not be a Docker issue at all, but a VSCode issue.
Image 2
From the terminal alone, using "host=ssh://pika_node" works just fine: I can access the remote images and containers. However, with the Docker extension in VSCode, it does not seem possible to use the SSH alias pika_node.
Image 3
How can I handle this error?
UPDATE
Be aware that I use an SSH jump host.
Image 4
Image 3 shows you the cause of your problem:
The "host" part in the Docker endpoint string (ssh://username#host:port) must be either a globally-resolvable DNS machine name, or an IP address. Docker extension will not be able to use host aliases defined in the SSH configuration file.
And in Image 4 you've shown how you're using a host alias pika_node.
To resolve this, use the IP address of the Docker host in your context:
host=ssh://xxx.xxx.xxx.xxx
instead of using the pika_node alias to refer to it.
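For example, recreating the context against the host's IP might look like the following; the IP 203.0.113.10 and the user name pika are placeholders for your own values:
# remove the context that points at the SSH alias
docker context rm PikaServer
# recreate it against a resolvable address (placeholder user and IP)
docker context create PikaServer --docker "host=ssh://pika@203.0.113.10"
docker context use PikaServer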
Related
I previously ran Jellyfin on my desktop computer and put a lot of work into the manual creation of collections (descriptions, folder pictures, etc.). Now I want to run Jellyfin on my brand-new Synology DS220+ NAS, which runs Jellyfin in Docker. If my understanding of Docker is correct, it is running an instance of the Jellyfin app, so while it is running I am not able to see Jellyfin's folders/files in my File Station browser.
So my question is: how can I force Jellyfin/Docker to use the existing Jellyfin collection data from my desktop PC (which is basically a set of .xml files)?
Thanks in advance!
You need to map filepaths for config, cache and media. From the jellyfin docs:
docker run -d -v /srv/jellyfin/config:/config -v /srv/jellyfin/cache:/cache -v /media:/media --net=host jellyfin/jellyfin:latest
Each -v flag defines a volume mapping: the path left of the colon (:) is on your machine, while the path on the right is what Jellyfin sees inside its container.
If you don't map a container's internal paths to paths on your machine, you cannot view those files from outside, and the data created inside is lost when the container is removed.
So basically, replace the config, cache and media paths in the command above with your own folder paths.
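On a Synology NAS the shared folders usually live under /volume1, so a rough sketch could be the following (the /volume1/docker/jellyfin/... and /volume1/video paths are only assumptions; point them at the folders you actually created in File Station):
# config holds Jellyfin's settings and the collection .xml files, cache is scratch data,
# and the last mapping exposes your media library to the container
docker run -d \
  -v /volume1/docker/jellyfin/config:/config \
  -v /volume1/docker/jellyfin/cache:/cache \
  -v /volume1/video:/media \
  --net=host \
  jellyfin/jellyfin:latest
Copying the collection .xml files from your desktop PC into the mapped config folder before the first start should make them visible to the new instance.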
The docker run hello-world suggested by @vanshaj is meant to be executed in the terminal window (SSH session), not inside a Docker container.
Another approach would be to install the Jellyfin package from SynoCommunity (synocommunity.com). This package does not use Docker and became available only a few days ago.
I'm new to Docker and trying to understand how images work. I ran the following command:
sudo docker search hello-world
and it returned this:
docker.io docker.io/carinamarina/hello-world-app This is a sample Python web application,
I then ran:
sudo docker run docker.io/carinamarina/hello-world-app
...and this was the output from the terminal:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I don't understand. How can the IP address be 0.0.0.0? I entered that into a browser and got nothing. I tried localhost:5000 and got nothing.
How does one get to see this webapp run?
tl;dr
You need to publish the container's port to the host network to see the application working.
long version:
Well, good for you for starting to work with Docker.
I will start by explaining a little bit about Docker, then explain what is happening here.
First of all, there is a difference between an "image" and a "container".
An image is the blueprint that containers are created from.
You write the image's definition (install this, copy that from the host, build that, etc.) in a Dockerfile, tell Docker to build the image, and then run containers from that image.
So if you have one image and run two containers from it, both start from the same definition.
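As a small illustration (using the public nginx image, which is not part of your question), two containers started from the same image share the blueprint but run independently:
# pull the blueprint once
docker pull nginx:latest
# start two independent containers from that one image
docker run -d --name web1 nginx:latest
docker run -d --name web2 nginx:latest
# both show up here, built from the same image
docker ps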
What happened in your case
When you invoke the run command, the first thing you will see is
Unable to find image 'carinamarina/hello-world-app:latest' locally
This means the local Docker daemon could not find an image (blueprint) named docker.io/carinamarina/hello-world-app locally, so it does the following:
it pulls the image from the remote registry,
then it extracts the layers of the image,
then it starts the container and shows the logs from inside the container.
Why it didn't work for you
The application is running inside the container on port 5000.
The container has a completely separate network from the host it runs on (a CentOS 7 machine in your case).
You have to set up port forwarding between the Docker network and the host network so you can use the application from the host.
You can read more about that here: Docker networking.
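A minimal sketch, assuming the app keeps listening on port 5000 as your log shows:
# map port 5000 of the container to port 5000 of the host
docker run -p 5000:5000 docker.io/carinamarina/hello-world-app
# now http://localhost:5000 on the host reaches the app inside the container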
I recommend the following places to start:
Let's play with Docker
Docker tutorial for beginners
How to configure a proxy for Docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it does not work for Docker containers; it seems to only take effect for commands like 'docker pull'.
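(Per that documentation, the drop-in has roughly this form; HostIP:8118 stands for my proxy, mentioned below:)
[Service]
Environment="HTTP_PROXY=http://HostIP:8118"
Environment="HTTPS_PROXY=http://HostIP:8118"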
Secondly,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx...' every time I start a container.
So I wondered whether there is a way to automatically load a global configuration when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine runs CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect it may be related to my Docker version, or to Docker being started by the systemd service, which is why ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers automatically get the proxy environment variables when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). Is there such a way? If so, I could make the containers use the host's proxy, since the proxy on my host works properly.
But I was wrong: I ran a container, then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I run 'wget facebook.com' I get 'Connecting to HostIP:8118... failed: No route to host.'. The host machine (CentOS 7) can execute the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
It is over,
OMG... I have run out of ideas; can anyone help me?
==============================
PS:
As you can see in the screenshots below, I actually want to install goa and goagen but get an error, probably for network reasons, so I want to enable the proxy and try again; that is how I ran into the problem above.
1. My go Docker container (screenshot: go docker wget)
2. My host (screenshot: my host wget)
You need Docker 17.07 or newer to automatically pass the proxy settings to containers you start via the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
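For reference, on a recent Docker client the relevant part of ~/.docker/config.json has roughly this shape (HostIP:8118 is taken from the question; the noProxy list is an example):
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
Containers started after that get http_proxy/https_proxy set automatically, without passing -e on every docker run.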
I'm currently working on a server written in C++ which uses cpprestsdk; cpprestsdk uses boost::asio. Much of my development is done on the Mac, not in a Docker container. When it comes time to deploy a new version, I build it to run in a Docker container, which is then run on an EC2 instance.
I recently added support for fetching config files from S3. To do this I use cpprestsdk's http_client. Within the code that sends requests, it does an asio async_resolve, which fails with the error
Host not found (authoritative)
I have determined that if I change the base image to ubuntu:16.04, it runs properly; busybox and alpine yield that error. I have also discovered that curl can download the data from the endpoint I'm using, but nslookup cannot resolve anything. For example, running nslookup on google.com yields
nslookup: can't resolve '(null)': Name does not resolve
I have tried a few things based on what I found online, such as using numeric_service with the query's constructor and using "http" as the service instead of "80" (I've tried the SSL variants too). I've also tried running the container with the host's DNS.
For example, this is one of the links I reviewed.
boost asio: "host not found (authorative)"
So far, I haven't been able to find out how to get this to work properly.
Note: as a fallback I can use ubuntu:16.04 as the base image; I would just prefer the smaller size of busybox or alpine.
I am using Docker swarm and would like to deploy a service with docker-compose. My service uses a custom image called myuser/myrepo:mytag that I have successfully pushed to a private repository on Docker Hub.
My docker-compose looks like this:
version: "3.3"
services:
myservice:
image: myuser/myrepo:mytag
ports:
- "8080:8080"
Before executing, I successfully pulled the image with: docker pull myuser/myrepo:mytag
When I run docker stack deploy -c docker-compose.yml myapp I always receive the error: "No such image: myuser/myrepo:mytag".
Interestingly, running the same file with just docker-compose up (i.e. without swarm mode) works fine and the service starts up.
I really don't understand why this is failing.
I've already tried cleaning up docker with docker system prune and then repull my image, no success.
Already found the solution.
My image is hosted on a private repository.
Besides the swarm manager (where I executed the commands), I had a running swarm worker.
When I ran docker stack deploy -c docker-compose.yml myapp, Docker deployed the service to the worker node (not the manager node, as I had assumed).
On the worker node, Docker had no credentials to pull the image from the private repository.
Hence, to fix this, either pass the flag --with-registry-auth (which forwards the registry credentials to the worker nodes) or make sure the service is deployed to a node where the image is already present.
See: https://docs.docker.com/engine/reference/commandline/deploy/
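With the flag, the deployment becomes:
# log in on the manager so it has credentials to forward
docker login
# --with-registry-auth sends those credentials along to the nodes running the tasks
docker stack deploy --with-registry-auth -c docker-compose.yml myapp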
I want to add another scenario that leads to the same outcome (error message) so that people won't bang their heads against the wall.
Another possibility is that you are trying to deploy an image from an insecure registry but forgot to edit daemon.json on the node pulling the image.
If that is the case, let this answer act as a reminder and save you some time.
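As a sketch, every node that has to pull the image needs an entry like this in /etc/docker/daemon.json (the registry address is just an example), followed by a restart of the Docker daemon:
{
  "insecure-registries": ["myregistry:5000"]
}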
I had a similar issue on a Mac when behind the corporate firewall.
I was able to resolve it only after connecting directly to the internet.
Just to add: while on the VPN I can access the internet without any proxy settings and can download Docker images just fine with docker run; the issue is only with docker-compose.
I did try changing the nameserver to 8.8.8.8 in resolv.conf in my VMs, but the issue was not resolved.
In my case, I struggled with an image I had pushed to a new registry configured in my swarm; I was updating the stack using Portainer.
I configured all the necessary certificates and logins on all the nodes and verified I had uploaded the image using the following commands:
curl -X GET https://myregistry:5000/v2/_catalog
curl -X GET https://myregistry:5000/v2/{image}/tags/list
No matter what I tried, I always had the "No such image" error displayed on the service instances.
In a last-ditch attempt I created a service (without the compose file) using exactly the same URL for my image as before, and it worked, i.e. Docker found the image and started the service! Further attempts using the compose file then worked properly for this and all other new images.
Weird.