How to run grunt-connect within a Docker container

I have Grunt running a node connect (grunt-contrib-connect) web server on localhost:8080 in a docker container. I run the container with this command:
$ docker run --rm -it -p 8080:8080 js1972/yeoman:v3
Inside my container I run the grunt connect task with grunt develop.
I'm on a Mac, so I'm using boot2docker. boot2docker ip says the host IP is 192.168.59.103, so I should be able to reach the connect server at http://192.168.59.103:8080 in my browser.
However, this doesn't work; I just get a "Safari can't connect to the server" message. (Note that the port forwarding works just fine when I use a simple Python web server, as in the examples on the Docker website.)
Any idea what's wrong here? The same process works perfectly well outside of Docker.
I've got a feeling it's something to do with Connect listening on localhost. I've tried various combinations of --add-host and -p localhost:8080:8080 and so on, to no avail...
If it helps here's my docker file and gruntfile:
https://dl.dropboxusercontent.com/u/7546923/Dockerfile
https://dl.dropboxusercontent.com/u/7546923/Gruntfile.js
Rgds, Jason.

Change localhost to 0.0.0.0.
At the moment the server inside the container is listening only on the container's loopback interface, so it accepts connections originating inside the container only. Binding to 0.0.0.0 tells it to listen on all interfaces, including the bridge interface that Docker's port mapping forwards traffic to.

Modify the hostname in the connect settings of the Gruntfile:
// The actual grunt server settings
connect: {
  options: {
    port: 8080,
    // Change this to '0.0.0.0' to access the server from outside.
    hostname: '0.0.0.0',
    livereload: 35729
  },
  // ...
},
Then forward the server and livereload ports through to the boot2docker VM:
boot2docker poweroff # vm needs to be off to modify the network settings
VBoxManage modifyvm "boot2docker-vm" --natpf1 "containergruntserver,tcp,,8080,,8080"
VBoxManage modifyvm "boot2docker-vm" --natpf1 "containergruntreload,tcp,,35729,,35729"
boot2docker up
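With those NAT rules in place, a quick sanity check from the Mac (a sketch, assuming the container is running and grunt develop is started inside it) is:

```shell
# Hit the grunt-connect server through the boot2docker host-only IP...
curl -I http://192.168.59.103:8080

# ...and, thanks to the natpf1 rules above, through localhost as well.
curl -I http://localhost:8080
```

curl -I fetches only the response headers; a 200 response means the whole chain (Mac browser → VM → container → connect) is working.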

Related

How to connect to dockerized Redis Server?

I am unable to connect (timeout) to a dockerized redis server1, which was started with:
docker run -it --rm -p 6379:6379 redis:alpine
I've tried starting the server with these config settings:
bind 0.0.0.0
protected-mode no
and also tried setting the same parameters through the CLI.
I am, however, able to connect to the redis server from another Docker container with:
docker run -it --rm redis:alpine redis-cli -h host.docker.internal -p 6379
When I try to connect from the host, the connection times out. I tried both the internal IP
172.17.0.x
and the internal domain name
host.docker.internal
to no avail. Note that I was able to connect to redis when it was installed directly on the host with
brew install redis
What am I missing? How can I connect from the host to a redis-server running inside a Docker container?
Environment details
OS: macOS Monterey Version: 12.6 (21G115)
Docker version 20.10.17, build 100c701
1 More specifically, I've tried both rdcli -h host.docker.internal in the Mac terminal and, on the application side, StackExchange.Redis.
The host.docker.internal is a DNS name to access the docker host from inside a container. It's a bad practice to use this inside one container to talk to another container. Instead you'd create a network, create both containers in that network, and use the container name to connect. When that's done, the port doesn't even need to be published.
From the host, that's when you connect to the published port. Since the container is deployed with -p 6379:6379, you should be able to access that port from the hostname of your host, or localhost on that host:
rdcli -h localhost
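As a sketch of the container-to-container approach described above (the network and container names here are made up for illustration):

```shell
# Create a user-defined bridge network; it provides DNS between containers.
docker network create mynet

# Run redis on that network. No -p is needed for container-to-container traffic.
docker run -d --rm --name redis --network mynet redis:alpine

# Another container on the same network reaches it by container name.
docker run -it --rm --network mynet redis:alpine redis-cli -h redis ping
```

Keep -p 6379:6379 on the redis container only if something running on the host itself still needs to connect to it.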

docker container is not accessible from other machines on host's network

I'm writing a script to turn my current setup, with nginx running directly on both my dev host and my server, into one where nginx runs in Docker on both, so that directories and configuration stay the same between them.
The problem is that any ports I expose on a docker container are only accessible on the host and not from any other machines on the host network.
When I browse to 192.168.0.2 from another machine such as 192.168.0.3, it just says the request took too long to respond, but browsing to 192.168.0.2 from 192.168.0.2 itself brings up the "Welcome to nginx" page?! The interesting part is that a Wireshark capture on en0, port 80, shows there are actually some packets coming through.
See pastebins of packet inspections:
LAN to docker: https://pastebin.com/4qR2d1GV
Host to docker: https://pastebin.com/Wbng9nDB
I've tried docker run -p 80:80 nginx/nginx, docker run -p 192.168.0.2:80:80 nginx/nginx, and docker run -p 127.0.0.1:80:80 nginx/nginx, but none of these seems to fix anything.
I should see the "Welcome to nginx" page when connecting from 192.168.0.3 to 192.168.0.2.
This is my dev environment, an OS X 10.13.5 system.
When I push this to my Ubuntu 16.04 server it works just fine: the containerized nginx is accessible from the WWW, and when I run nginx on the host without Docker I can also connect from external machines on the network.
Your description is a bit confusing, but the 127.0.0.1 in the port mapping binds the published port to localhost only, so you won't be able to reach the container from another machine. Remove the IP address from the -p option and you should be able to reach the container from outside localhost.
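To illustrate the difference between the two bindings (a sketch using the stock nginx image):

```shell
# Binds to all host interfaces: reachable from other machines on the LAN.
docker run -d -p 80:80 nginx

# Binds to loopback only: reachable from the host itself and nothing else.
docker run -d -p 127.0.0.1:80:80 nginx
```

You can confirm which address a published port is bound to with docker ps, or with netstat -an | grep LISTEN on the host.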

Access a docker container running on a VM machine from LAN connected PC

I want to run a Google Earth Engine Datalab on a server and access it from another PC on LAN. Server's OS is Windows Server 2012. So, following https://developers.google.com/earth-engine/python_install-datalab-local, I did:
Install Docker Toolbox
Define a Local Workspace
Create the container, but replacing -p "127.0.0.1:8081:8080" with -p "8081:8080". This is the full command (see the link above for the variable definitions):
docker run -it -p "8081:8080" -v "$WORKSPACE:/content" -e "PROJECT_ID=$GCP_PROJECT_ID" $CONTAINER_IMAGE_NAME
It works, and I can access it through 192.168.99.100:8081. But that only works on the server itself, so I followed this answer https://stackoverflow.com/a/36458215/2791453 and did all its steps; now I can open a browser on the server and access the Datalab through 196.168.0.55:8081 (the server's LAN address), but I still cannot access it from another computer connected to the LAN.
It seems like a firewall issue.
You can test that basic networking from the other host is working with ping:
ping 196.168.0.55
If that is OK, you can test whether the port is open with netcat:
nc -z 196.168.0.55 8081 -v
If it reports the port as open, the issue is inside the Docker container; if it times out or gives another error, it is firewall-related.

Docker: how to open ports to the host machine?

What could be the reason for Docker containers not being able to connect via ports to the host system?
Specifically, I'm trying to connect to a MySQL server that is running on the Docker host machine (172.17.0.1 on the Docker bridge). However, for some reason port 3306 is always closed.
The steps to reproduce are pretty simple:
Configure MySQL (or any service) to listen on 0.0.0.0 (bind-address=0.0.0.0 in ~/.my.cnf)
run
$ docker run -it alpine sh
# apk add --update nmap
# nmap -p 3306 172.17.0.1
That's it. No matter what I do it will always show
PORT STATE SERVICE
3306/tcp closed mysql
I've tried the same with an ubuntu image, a Windows host machine, and other ports as well.
I'd like to avoid --net=host if possible, simply to make proper use of containerization.
It turns out the IPs weren't correct. There was nothing blocking the ports and the services were running fine too. ping and nmap showed the IP as online but for some reason it wasn't the host system.
Lesson learned: don't rely on the route output inside the container to give you the correct host address. Instead, check ifconfig or ipconfig on the Linux or Windows host respectively, and pass that IP into the container via an environment variable.
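A minimal sketch of that pattern on a Linux host (the variable name DB_HOST is made up for illustration, and the IP lookup assumes hostname -I is available):

```shell
# Look up the host's LAN address on the host itself (first address reported).
HOST_IP=$(hostname -I | awk '{print $1}')

# Hand it to the container as an environment variable instead of guessing
# the bridge address from inside the container.
docker run --rm -e DB_HOST="$HOST_IP" alpine sh -c 'echo "MySQL host is $DB_HOST"'
```

The application inside the container then reads DB_HOST from its environment rather than hard-coding 172.17.0.1.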
Right now I'm transitioning to using docker-compose and have put all required services into containers, so the host system doesn't need to get involved and I can simply rely on Docker's DNS. This is much more satisfying.

docker running splash container but localhost does not load (windows 10)

I am following this tutorial to use splash to help with scraping webpages. I installed Docker Toolbox and ran these two commands:
$ docker pull scrapinghub/splash
$ docker run -p 5023:5023 -p 8050:8050 -p 8051:8051 scrapinghub/splash
I think it is running correctly, based on the startup messages in the Docker window.
However, when I open localhost:8050 in a web browser, it says that localhost is not working.
What might have gone wrong in this case? Thanks!
You have mapped the port to your docker host (the VM), but you have not port-forwarded that same port to your actual "localhost" (your Windows host).
You need to declare that port-forwarding in the Network settings of your VM (for instance "default"), or with VBoxManage controlvm commands.
Then and only then could you access that port (used by your VM) from your Windows host (localhost).
That or you can access that same port using the IP address of your boot2docker VM: see docker-machine ls.
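For the splash HTTP port that could look something like this (a sketch; "default" is the usual docker-machine VM name, and the rule name is made up):

```shell
# Forward splash's HTTP port from the Windows host into the running VM.
VBoxManage controlvm "default" natpf1 "splashhttp,tcp,,8050,,8050"

# Or skip the forward entirely: find the VM's IP and browse to it directly.
docker-machine ls
docker-machine ip default
```

After the forward, http://localhost:8050 should reach splash; without it, use http://<vm-ip>:8050 instead.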
@user3768495, when you use http://192.168.99.100:8050/, you are actually using the docker-machine IP, and this IP is available only on your machine, not on the network. To map it to localhost, you do need to port-forward the same port to your localhost. I was having the same issue and I detailed the process in this answer:
https://stackoverflow.com/a/35737787/4820675
