Can't connect to Dart debug through VirtualBox - dart

I'm on Windows, but in VirtualBox I have a Debian server.
I have created a Dart example and run it with:
dart run --pause-isolates-on-start --observe
With this I can connect to:
http://127.0.0.1:8181/XDFGgXmZ5Xc=/devtools/#/?uri=ws%3A%2F%2F127.0.0.1%3A8181%2FXDFGgXmZ5Xc%3D%2Fws
But this link doesn't work!
When I try to access it, even inside the VirtualBox VM with w3m (a command-line browser), there is no output.
I have checked for open ports, and port 8181 is open.
Can you give me some ideas to fix this?

You need to change the bind address, since the default is localhost, which means only programs running inside your VM can access the observatory.
From dart help run:
--enable-vm-service=<[<port>[/<bind-address>]]>
    Enables the VM service and listens on the specified port for
    connections (default port number is 8181, default bind address is
    localhost).
If you want to make the observatory available on every network interface on your machine, you can change the bind address to 0.0.0.0 instead. Be sure you don't allow it to be accessed externally from the internet. Alternatively, you can specify the IP of the single network interface you want the observatory to listen on.
dart run --pause-isolates-on-start --observe --enable-vm-service=8181/0.0.0.0
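Note that the link the VM prints still says 127.0.0.1, so substitute the VM's address by hand. As a rough reachability check from the Windows host (a sketch; 192.168.56.101 stands in for your VM's actual host-only or bridged address):
curl -i http://192.168.56.101:8181/
# then open the DevTools URL with 127.0.0.1 replaced by the VM's address, e.g.
# http://192.168.56.101:8181/XDFGgXmZ5Xc=/devtools/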

Related

nginx proxy_pass connection refused for flask development server [duplicate]

I'm trying to run a simple web server on a Raspberry Pi with Flask. When I run my Flask app, it says:
running on http://127.0.0.1:5000/
But when I enter this address in Chrome on my laptop, I get
ERR_CONNECTION_REFUSED
I can open 127.0.0.1:5000 on the Raspberry Pi's browser. What do I need to do to connect from another computer?
Run your app like this:
if __name__ == '__main__':
    app.run(host='0.0.0.0')
This makes the server externally visible. If the machine's IP address is 192.168.X.X, you can then access it from the same network on port 5000, like http://192.168.X.X:5000.
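As a quick end-to-end check (a sketch; 192.168.1.50 stands in for the Pi's actual address):
# on the Raspberry Pi: confirm the server is listening on all interfaces (0.0.0.0:5000)
ss -tlnp | grep 5000
# on the laptop, from the same network
curl http://192.168.1.50:5000/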
If you are running the server via flask run, change it to flask run --host=0.0.0.0.
To connect, find the IPv4 address of the server your script is running on. From the same network, go to http://[IPv4 address]:5000.
The reason could also be a firewall refusing incoming connections on port 5000. Try:
sudo ufw allow 5000
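To confirm the rule took effect (assuming ufw is the active firewall):
sudo ufw status
# the output should include a rule along the lines of: 5000  ALLOW  Anywhere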
app.run(host='0.0.0.0', port=5000)
If you run your app this way, your server will be visible externally.
Step by step:
Run your app using the following command:
app.run(host='0.0.0.0', port=5000)
Go to the Windows cmd, type ipconfig, and get the IPv4 address; suppose your IPv4 address is 192.168.X.X.
Go to the mobile browser and type 192.168.X.X:5000.
If you have debug = True inside your app.run(), it will not be visible to other machines either. Specify host and port inside app.run() without debug = True.
You will have to run the development server so that it listens for requests on all interfaces, not just the local one: ask Flask to listen on 0.0.0.0:PORT_NUMBER, or any other port you choose.
On macOS 12.4 (Monterey) I couldn't load localhost or my local IP, but it worked with both of these:
0.0.0.0
127.0.0.1
Just change the URL in the browser if it loads with "localhost".
Both devices must be connected to the same network.
Use app.run(host='0.0.0.0', port=5000) and browse to your own IPv4 address, like this: http://[Your IPv4 address]:5000.
If you are connecting from an Android app, don't forget to add the INTERNET permission to the manifest file.
Contrary to popular belief, 127.0.0.1 is not the same as localhost.
I solved the issue above by setting 127.0.0.1 explicitly on both ends.
I was also here looking for the answer, but I think I found the problem: you may not have actually started Flask, which is why it shows the error. Start it from the terminal with flask run, and then your Flask app will run.
The issue may also occur if a VPN is on, so try switching it off.

Failing to connect to localhost from inside a container: Connection refused

I'm currently testing an Ansible role using Molecule.
Basically, Molecule launches an Ansible-compliant container and runs the role on it.
In order to test the container, Molecule also embeds unit tests using Testinfra. The Python unit tests are run from within the container so you can check the role's compliance.
As I'm working on an Nginx-based role, one of the unit tests simply issues curl http://localhost:80
I do get the below error message in response:
curl: (7) Failed to connect to localhost port 80: Connection refused
When I:
launch a Vagrant machine
apply the role with Ansible
connect via vagrant ssh
issue a curl http://localhost command
nginx answers correctly.
Therefore, I believe that:
the role is working properly and Nginx is installed correctly
Docker has a different way to set up the network. In a way, localhost and 127.0.0.1 are not the same anymore.
My questions are the following:
Am I correct?
Can this difference be overcome so that the curl command works?
Docker containers start in their own network namespace by default. This namespace includes a separate loopback interface (127.0.0.1) that is distinct from the same interface on the host and any other containers. If you want to access an application from another container or via a published port on the host, you need to listen on all interfaces (0.0.0.0) rather than the loopback interface.
One other issue I often see is that at some layer in the connection (the host, or inside the container), the "localhost" name is mapped to the IPv6 value ::1 in the /etc/hosts file, and somewhere in that chain only the IPv4 value is valid (either where the port was published, where the application is listening, or IPv6 isn't enabled on the host or Docker engine). Therefore, try connecting to the IPv4 address directly, 127.0.0.1, to eliminate any potential IPv6 issues.
Regarding the curl command and how to correct it, I cannot answer that without more details on how you are running curl (is it in a separate container?), how you are running your application, and how the two are joined on the network (did you create a new network in Docker for your application and unit tests?). The typical solution is to create a new network in Docker, run both containers on that network, and connect via Docker's included DNS to the container or service name of the destination, e.g. curl http://my_app/.
Edit: based on the comments, if your application and curl command are both running inside the same container, then curl http://127.0.0.1/ should work. There's no change I'm aware of needed to make curl work inside a container vs. on a VM. The error you are seeing is likely from the application not starting and listening on the port as expected, possibly a race condition where the curl command runs too soon, or the tool's base assumptions being incorrect. Start by changing the unit test to verify the application is up, running, and listening on the port with commands like ps -ef and ss -lt.
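For instance, a check along these lines inside the container can rule out a race (a sketch; adjust the port and process name to your service):
# is the server process actually running?
ps -ef | grep '[n]ginx'
# is anything listening on port 80?
ss -lt | grep ':80'
# retry the request a few times before failing
for i in 1 2 3 4 5; do curl -fsS http://127.0.0.1:80/ && break; sleep 1; done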
It actually has nothing to do with the differences between Docker and Vagrant (i.e. containers vs. VMs).
The Testinfra code is actually run from outside the container / VM, hence the fact that the subprocess.call(['curl', 'http://localhost']) is failing.
In order to run a command from inside the container / VM, I should use:
host.check_output('curl http://localhost')
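The same check can be reproduced by hand from outside, assuming the Molecule container is named instance (the default) and has curl installed:
docker exec instance curl -fsS http://localhost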

Make docker machine available under host name in Windows

I'm trying to make a docker machine available to my Windows host under a host name. After creating it like
docker-machine create -d virtualbox mymachine
and setting up a Docker container that exposes port 80, how can I give that docker machine a host name such that I can enter "http://mymachine/" into my browser to load the website? When I change "mymachine" to the actual IP address, it works.
There is an answer to this question but I would like to achieve it without an entry in the hosts file. Is that possible?
You might want to refer to the Docker documentation:
https://docs.docker.com/engine/userguide/networking/#exposing-and-publishing-ports
You expose ports using the EXPOSE keyword in the Dockerfile or the
--expose flag to docker run. Exposing ports is a way of documenting which ports are used, but does not actually map or open any ports.
Exposing ports is optional.
You publish ports using the --publish or --publish-all flag to docker
run. This tells Docker which ports to open on the container’s network
interface. When a port is published, it is mapped to an available
high-order port (higher than 30000) on the host machine, unless you
specify the port to map to on the host machine at runtime. You cannot
specify the port to map to on the host machine when you build the
image (in the Dockerfile), because there is no way to guarantee that
the port will be available on the host machine where you run the
image.
I also suggest reviewing the -P flag, as it differs from the -p one.
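For example, with the stock nginx image (a sketch):
# publish container port 80 on host port 8080 explicitly
docker run -d --name web -p 8080:80 nginx
# or let Docker pick an ephemeral host port for every exposed port
docker run -d --name web-auto -P nginx
# show which host port was mapped to 80/tcp
docker port web-auto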
Also, I suggest you try "Kitematic" for Windows or Mac, https://kitematic.com/. It's much simpler (but don't forget to commit after any changes!).
Now, concerning the network in your company: it has nothing to do with Docker. As long as you're using Docker locally on your computer, it won't matter what configuration your company has set. You don't even have to change any VM network configuration in order to expose things to your local host; it all comes by default if you're using VirtualBox (adapter 1 => NAT & adapter 2 => host-only).
Hope this is what you're looking for.
If the goal is to keep it as simple as possible for multiple developers, localhost will be your best bet. As long as the ports you're exposing and publishing are available on the host, you can just use http://localhost in the browser. If it's a port other than 80/443, just append it, like http://localhost:8080.
If you really don't want to go the /etc/hosts or localhost route, you could also purchase a domain and have it route to 127.0.0.1. This article lays out the details a little bit more.
Example:
dave-mbp:~ dave$ traceroute yoogle.com
traceroute to yoogle.com (127.0.0.1), 64 hops max, 52 byte packets
1 localhost (127.0.0.1) 0.742 ms 0.056 ms 0.046 ms
Alternatively, if you don't want to purchase your own domain, all developers are on the same network, and you are able to control DHCP/DNS, you can set up your own DNS server with a private record that points back to 127.0.0.1. Similar concept to the public DNS option, but a little more brittle, since you might allow your devs to work remotely, outside of a controlled network.
Connecting by hostname requires that you go through hostname to IP resolution. That's handled by the hosts file and falls back to DNS. This all happens before you ever touch the docker container, and docker machine itself does not have any external hooks to go out and configure your hosts file or DNS servers.
With newer versions of Docker on Windows, you run containers with Hyper-V, and networking automatically maps ports to localhost so you can connect to http://localhost. This won't work with docker-machine, since it spins up VirtualBox VMs without the localhost mapping.
If you don't want to configure your hosts file or DNS, and can't use a newer version of Docker, you're left with connecting by IP. What you can do is use a free wildcard DNS service like http://xip.io/ that maps any name you want, combined with your IP address, back to that same IP address. This lets you use a hostname-based reverse proxy to connect to multiple containers inside Docker behind the same port.
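For example, if docker-machine ip prints 192.168.99.100, any name under that address resolves straight back to it (assuming the xip.io service is reachable from your network):
nslookup app.192.168.99.100.xip.io
# resolves to 192.168.99.100, so a reverse proxy can route on the "app" part
curl http://app.192.168.99.100.xip.io:8000/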
One last option is to run your Docker host VM with a static IP. Docker-machine doesn't support this directly yet, so you can either rely on luck to keep the same IP from a given range, or use another tool like Vagrant to spin up the Docker host VM with a static IP on the laptop. Once you have a static IP, you can modify the hosts file once, create a DNS entry for every dev, or use the same xip.io URL to access the containers each time.
If you're on a machine with Multicast DNS (that's Bonjour on a Mac), then the approach that's worked for me is to fire up an Avahi container in the Docker Machine VirtualBox VM. This lets me refer to VM services at <docker-machine-vm-name>.local. No editing /etc/hosts, no crazy networking settings.
I use different Virtualbox VMs for different projects for my work, which keeps a nice separation of concerns (prevents port collisions, lets me blow away all the containers and images without affecting my other projects, etc.)
Using docker-compose, I just put an Avahi instance at the top of each project:
version: '2'
services:
  avahi:
    image: 'enernoclabs/avahi:latest'
    network_mode: 'host'
Then if I run a webserver in the VM with a docker container forwarding to port 80, it's just http://machine-name.local in the browser.
You can add a domain name entry in your hosts file:
X.X.X.X mymachine # Replace X.X.X.X by the IP of your docker machine
You could also set up a DNS server on your local network if your app is meant to be reachable by your coworkers at your workplace and if your Windows machine is meant to remain up as a server.
That would require making your VM accessible from the local network, though; port forwarding could then be a simple solution if your app is the only web service running on your Windows host. (Note that you could also set up a Linux server to avoid using docker-machine on Windows, but you would still have to set up a static IP for that server to ensure your domain name resolution works.)
You could also buy your own domain name (or get a free one) and assign it to your docker-machine's IP if you don't have rights to write to your hosts file.
But these solutions may not work after some time if the app host doesn't have a static IP and your docker-machine IP changes. Not setting up a static IP doesn't mean it will automatically change, though; there should be some persistence if you don't erase the machine to create a new one, but that isn't guaranteed either.
Also note that if you set up a DNS server, you'd have to host it on a device with a static IP as well. Your coworkers would then have to configure their machines to use it.
I suggest nginx-proxy. This is what I use all the time. It comes in especially handy when you are running different containers that are all supposed to answer on the same port (e.g. multiple web services).
nginx-proxy runs separately from your service and listens to Docker events to update its own configuration. After you spin up your service and query the port nginx-proxy is listening on, you will be redirected to your service. Therefore, you either need to start nginx-proxy with the DEFAULT_HOST flag or send the desired host as a header with the request.
As I am running this only with plain Docker, I don't know whether it works with docker-machine, though.
If you go for this option, you can pick a certain domain (e.g. .docker) to be resolved entirely to localhost. This can be done company-wide via DNS, locally with the hosts file, or with an intermediate resolver (the specific solution depends on your OS, of course). If you then try to reach http://service1.docker, nginx-proxy will route to the container that has the ENV VIRTUAL_HOST=service1.docker. This is really convenient because it only needs a one-time setup and is dynamic from then on.
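A minimal sketch of that setup, assuming the jwilder/nginx-proxy image and a hypothetical web image called my-service:
# nginx-proxy watches the Docker socket and regenerates its config as containers start and stop
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# any container started with VIRTUAL_HOST set is picked up automatically
docker run -d -e VIRTUAL_HOST=service1.docker my-service
# once service1.docker resolves to the Docker host, this routes to the container
curl http://service1.docker/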

Allow a container running via docker-machine to connect with Mysql or XDEBUG port on parent OSX system without using an OSX DHCP assigned ip address?

I've got the following setup:
OSX running MySQL, listening on all network adaptors on port 3306
XDEBUG enabled IDE listening on port 9000 on the base OSX system.
docker-machine host running on the OSX system with the host ip 192.168.99.100
A Debian-based Docker container with a MySQL client running on the Docker host, and HHVM running with Xdebug looking to connect to some lucky remote host on port 9000.
The IP addresses on the OSX system change frequently because they're assigned via DHCP, so I want the Docker container to be able to hit the MySQL server regardless of what IP the native OSX network adaptors get assigned (without manually updating it). I also need a stable IP I can give my HHVM server.ini file as the remote host for Xdebug.
With a base system of Linux this isn't an issue, as the Docker host and the native machine running Docker are one and the same. Also, there are several ways for a container to learn the host's IP, so the issue isn't hitting the Docker host.
However, with docker-machine on OSX, the host isn't the native OSX system but a VM running in VirtualBox (assuming you're using the vb driver, and who in the Sam Hill blazes isn't?).
The only thing I could think of was to forward requests on port 3306 from the docker-machine host (192.168.99.100, which never changes) to the OSX system's port 3306, then have the container hit the docker-machine host for MySQL requests. If this works, I could rinse and repeat for any port I need to link, like Xdebug on port 9000.
Does anyone know how to accomplish this or have another suggestion?
I figured out a way, without needing to make any changes, that provides a consistent IP to connect to on the base OSX system. Docker Machine sets things up in a way that makes this possible.
Docker Machine creates a VirtualBox VM with two network adaptors, one set up as host-only, the other set as NAT. I don't know why it creates two, but:
The host-only adaptor gives OSX the IP 192.168.99.1, and the various VMs using it get addresses starting at 192.168.99.100. However, from inside the VM network you can't use 192.168.99.1 to hit ports on the parent OSX system (not sure why, but I'm guessing host-only is intended only for communication between the VMs).
The NAT adaptor is set up so that OSX gets the IP 10.0.2.2 and the VM gets 10.0.2.15. With NAT, you can route to the OSX system at 10.0.2.2 from both the Docker host VM and containers running on it.
Since this 10.0.2.2 address for the OSX machine doesn't change (unless you mess with the VirtualBox networking settings), bingo: got what I need.
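That makes the checks from inside a container straightforward (a sketch; substitute your own MySQL credentials):
# MySQL on the parent OSX system, reached from inside a container
mysql -h 10.0.2.2 -P 3306 -u root -p
# and for Xdebug, point the remote host at the same address in server.ini:
# xdebug.remote_host=10.0.2.2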

Access docker-machine VM ports without port forwarding

I'm trying to learn Docker, and so far I have run into a lot of workarounds that are needed with docker-machine but not with boot2docker.
My current issue is accessing my docker containers from my host.
I have my Windows host, running a VM created with docker-machine, and inside that docker VM I'm running a simple nginx server container.
The nginx container is run to expose its port 80 on the docker-machine VM's port 8000:
docker run -d -p 8000:80 nginx
And what I'm trying to achieve is being able to open this server from my Windows using a browser.
If I use curl in Windows (Git Bash, not ssh-ed into the docker-machine VM) with the IP that docker-machine ip gives me, it works. But using my browser doesn't (I'm currently using Microsoft Edge). I can get the browser to work if I set up NAT port forwarding.
curl $(docker-machine ip dev):8000
From what I've read, it should be possible to access the VM's ports without specifying port-forwarding rules for every port; VirtualBox should expose and forward them automatically.
What am I doing wrong, or do I have to specify port-forwarding rules for every port between my VM and the host OS that I want to use?
After another day of digging I had the wild and crazy idea to try another browser, and it works fine.
So for anyone running into this issue while using Microsoft Edge (trying it out like me): switch browsers. Chrome and even old IE work fine.
In Bash, $(docker-machine ip dev) means "run the command docker-machine ip dev, take the output of that command and insert it into the command-line here".
If you run
docker-machine ip dev
in your shell, it will print an IP address which you can use in your web browser.
If you put this address in /etc/hosts (or your platform's analog) you can use a friendly alias for the IP address.
In my case, it was because my browser was configured with a proxy PAC (proxy auto-config) file which did not include the 192.168.x.y internal boot2docker machine IPs.
That means:
the browser tries to contact the enterprise proxy (and that fails),
while a regular CMD shell session, with a NO_PROXY environment variable set to 192.168.x.y, would allow curl http://192.168.x.y/... to work without a glitch.
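If changing the proxy PAC isn't an option, the same exclusion works per shell session (a sketch, assuming docker-machine ip prints 192.168.99.100):
export NO_PROXY=192.168.99.100
export no_proxy=192.168.99.100   # some tools only honor the lowercase variant
curl http://192.168.99.100:8000/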
