My computer is behind a proxy
I built a Docker image with a Shiny app and these lines in the Dockerfile:
ENV http_proxy=myproxy.fr:5555
ENV https_proxy=myproxy.fr:5555
When I run the container, my Shiny app starts fine but stops two minutes later because it can't access the internet. The log file shows this error:
Warning in file(file, "rt") :
unable to connect to 'www.openstreetmap.org' on port 80.
Warning: Error in file: cannot open the connection
The Shiny app works fine outside of Docker, even behind the proxy.
It appears that the ENV variables are only set for the root user.
Any clue how to deal with this proxy issue in Docker?
Thanks
I added the proxy variables in
/etc/R/Renviron.site
while building the image and it works now ;-)
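For reference, a minimal sketch of that build step, assuming a Debian-based R/Shiny image and the proxy shown above; appending the variables to Renviron.site makes them visible to every R session, not just root:
RUN echo "http_proxy=myproxy.fr:5555" >> /etc/R/Renviron.site && \
    echo "https_proxy=myproxy.fr:5555" >> /etc/R/Renviron.site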
I have run the command docker context create PikaServer --docker "host=ssh://pika_node" and it seems to work just fine.
Image 1
I can see the right container and the right image from the server. When I use the Docker extension with VS Code, I get a "failed to connect" error. I'm not sure why I am getting the error, or whether it's a Docker issue at all; it might be a VS Code issue.
Image 2
From the terminal alone, using "host=ssh://pika_node" works just fine: I can access the remote images and containers. However, with the Docker extension in VS Code, it is just not possible to use the SSH alias pika_node.
Image 3
How can I handle this error?
UPDATE
Be aware that I use an SSH jump host.
Image 4
Image 3 shows you the cause of your problem:
The "host" part in the Docker endpoint string (ssh://username#host:port) must be either a globally-resolvable DNS machine name, or an IP address. Docker extension will not be able to use host aliases defined in the SSH configuration file.
And in Image 4 you've shown that you're using the host alias pika_node.
To resolve this, use the IP address of the Docker host in your context:
host=ssh://xxx.xxx.xxx.xxx
instead of using the pika_node alias to refer to it.
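For example, a hedged sketch of recreating the context, where 203.0.113.10 is a hypothetical stand-in for your Docker host's IP address:
docker context rm PikaServer
docker context create PikaServer --docker "host=ssh://203.0.113.10"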
I'm setting up a dev environment using Docker and WSL 2.
I don't have much experience with the topic, just basic knowledge of Linux terminal commands and the concept of Docker.
Now I'm trying to dockerize a Laravel application in an nginx container. The nginx image ships with a user called 'www-data', but my host user is 'ahmed'.
When I mount the application directory into the container, I get permission issues because the user inside the container has no rights to access the files owned by the user 'ahmed'.
Changing the ownership to www-data didn't work for me: the VS Code server (running remotely in WSL) can't update the files because they are now owned by another user, 'www-data'!
So what should I do in this case?
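For illustration only (this is not from the original post), one common workaround is to remap the container's www-data UID/GID at build time so they match the host user's IDs; a minimal Dockerfile sketch, assuming a Debian-based nginx image and the typical IDs of 1000:
# Hypothetical values; pass the output of `id -u` / `id -g` as build args instead.
ARG HOST_UID=1000
ARG HOST_GID=1000
# Make www-data's IDs match the host user so bind-mounted files stay writable
# from both the container and the host.
RUN usermod -u ${HOST_UID} www-data && groupmod -g ${HOST_GID} www-data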
I'm new to Docker and trying to understand how images work. I ran the following command:
sudo docker search hello-world
and it returned this:
docker.io docker.io/carinamarina/hello-world-app This is a sample Python web application,
I then ran:
sudo docker run docker.io/carinamarina/hello-world-app
...and this was the output from the terminal:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I don't understand. How can the IP address be 0.0.0.0? I entered that into a browser and got nothing. I tried localhost:5000 and got nothing.
How does one get to see this webapp run?
tl;dr
You need to publish the port to the host network to see the application working.
long version:
Well, good for you for starting to work with Docker.
I will start by explaining a little bit about Docker, then I will explain what is happening here.
First of all, there is a difference between an "image" and a "container".
An image is the blueprint that containers are created from.
You write the definition of the image (install this, copy that from the host, build that, etc.) in the image file, then you tell Docker to build this image, and then you RUN containers from that image.
So if you have one image and you run two containers from it, they will both have the same instructions (definition).
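For instance, a tiny sketch with hypothetical names (myapp, app1, app2), just to show one image producing several containers:
# build one image (blueprint) from the Dockerfile in the current directory
docker build -t myapp:1.0 .
# run two independent containers from that same image
docker run -d --name app1 myapp:1.0
docker run -d --name app2 myapp:1.0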
What happened in your case
When you invoke the run command, the first thing you will see is
Unable to find image 'carinamarina/hello-world-app:latest' locally
That means the local Docker daemon cannot find the image (blueprint) locally with the name docker.io/carinamarina/hello-world-app, so it will do the following steps:
it will start pulling the image from the remote registry,
then it will start extracting the layers of the image,
then it will start the container and show the logs from INSIDE the CONTAINER.
Why it didn't work for you
The application is running inside the container on port 5000.
The container has a whole different network from the host it's running on (a CentOS 7 machine in your case).
You will have to set up port forwarding between the Docker network and the host network so you can USE the application from the HOST.
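For example, a minimal sketch (assuming the app keeps listening on port 5000, as the log line above shows):
# map host port 5000 to container port 5000
sudo docker run -p 5000:5000 carinamarina/hello-world-app
With that, opening http://localhost:5000 on the host should reach the app.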
You can read more about that here: docker networking.
I recommend the following places to start:
let's play with docker
docker tutorial for beginners
I'm currently working on a server written in C++ which uses cpprestsdk, and cpprestsdk uses boost::asio. Much of my development is done on a Mac, not in a Docker container. When it comes time to deploy a new version, I build to run in a Docker container, which is then run on an EC2 instance.
I recently added support for fetching config files from S3. To do this I use cpprestsdk's http_client. Within the code that sends requests, it does an asio async_resolve. This fails with the error
Host not found (authoritative)
I have determined that if I change the base image to ubuntu:16.04, it runs properly; busybox and alpine yield that error. I have also discovered that if I use curl, it can download the data from the endpoint I'm using; however, if I use nslookup, it cannot find anything. For example, running nslookup on google.com yields
nslookup: can't resolve '(null)': Name does not resolve
I have tried a few things based on what I found online, such as using numeric_service with the constructor of the query and using "http" as the port instead of "80" (I've done the SSL versions too). I've also tried running the container with the host's DNS.
For example, this is one of the links I reviewed.
boost asio: "host not found (authorative)"
So far, I haven't been able to find out how to get this to work properly.
Note: as a fallback I can use ubuntu:16.04 as the base image; I would just prefer the smaller size of busybox or alpine.
I am currently trying to deploy a basic task queue and frontend using celery, rabbitmq and flower on Kubernetes (and minikube). I am following the example here:
https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq
I can get everything to work by following the instructions; however, when I run docker build on the Dockerfile in ./celery-app-add, push the image to my own repository, and replace endocode/celery-app-add with <mine>/celery-app-add, I can't get the example to run anymore. I am assuming that the Dockerfile in source control is wrong, because if I pull the endocode/celery-app-add image and run bash in it, it loads in as the root user (as opposed to the user with the <mine>/celery-app-add Dockerfile).
After booting up all of the containers and services, I can see the following in the logs:
2016-08-18T21:05:44.846591547Z AttributeError: 'ChannelPromise' object has no attribute '__value__'
The celery logs show:
2016-08-19T01:38:49.933659218Z [2016-08-19 01:38:49,933: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbit:5672//: [Errno -2] Name or service not known.
If I echo RABBITMQ_SERVICE_SERVICE_HOST within the container, it appears as the same host as indicated in the rabbitmq-service after running kubectl get services.
I am not really sure where to go from here. Any suggestions are appreciated. Also, I added USER root (won't run this in production, don't worry) to my Dockerfile and still ran into the same issues above. docker history endocode/celery-app-add hasn't been too helpful either.
It turns out the problem is based on this celery issue. Celery prefers the CELERY_BROKER_URL environment variable over anything that can be set in the app configuration. To fix this, I unset CELERY_BROKER_URL in the Dockerfile, and it picked up my configuration correctly.
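For illustration only, a hedged sketch of the kind of Dockerfile change being described; the exact ENV line and whether an empty value (rather than removing the line) is enough are assumptions, not something shown in this post:
# The base celery image exports a broker URL, roughly like:
#   ENV CELERY_BROKER_URL amqp://guest@localhost//
# "Unsetting" it in the derived image (removing or overriding that export) lets
# celery fall back to the broker configured in the application, which points at
# the rabbitmq service.
ENV CELERY_BROKER_URL=""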