Trouble connecting to a TCP server through Docker on WSL 2 - docker

I'm using WSL 2 on Windows 10 with an Ubuntu image, and Docker Desktop for Windows (2.2.2.0) with the WSL integration.
I have a super basic Rust TCP server. I think the only relevant bit is:
let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
println!("Listening on 8080");
for stream in listener.incoming() {
    println!("Received connection");
    let stream = stream.unwrap();
    handle_connection(stream);
}
I can cargo install and run the binary without issue; the "Listening on 8080" line prints, and I can curl localhost:8080 from WSL and see the response I'd expect from the rest of the code.
I wanted to turn it into a docker image. Here's the Dockerfile.
FROM rust:1.40 as builder
COPY . .
RUN cargo install --path . --root .
FROM debian:buster-slim
COPY --from=builder ./bin/coolserver ./coolserver
EXPOSE 8080
ENTRYPOINT ["./coolserver"]
I then do:
docker build -t coolserver .
docker run -it --rm -p 8080:8080 coolserver
I see Listening on 8080 as expected (i.e. no panic), but attempting to curl localhost:8080 yields curl: (52) Empty reply from server. I don't know what to make of this. Logging suggests my program gets to the point where it reaches listener.incoming(), but never enters the block.
To see if it was something to do with my setup (Docker Desktop, WSL, etc.) or my Dockerfile, I followed the README for the docker-http-https-echo image successfully; I can curl it on the specified ports.
I don't know how to debug further. Thanks in advance.

The EXPOSE keyword only declares ports for inter-container communication. To use these ports from the host, you have to publish them with -p 8080:8080 when running the container via docker run.

@CarlosRafaelRamirez resolved it for me. It was as simple as binding to 0.0.0.0 rather than the loopback address 127.0.0.1. More info here: https://pythonspeed.com/articles/docker-connection-refused/
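For anyone landing here, a minimal sketch of that fix applied to the listener from the question (handle_connection stands in for the question's own handler and is only stubbed out here):
use std::net::{TcpListener, TcpStream};

// Stub standing in for the handler from the question.
fn handle_connection(_stream: TcpStream) {}

fn main() {
    // Bind to all interfaces, not the loopback address, so the port
    // published with `docker run -p 8080:8080` is reachable from the host.
    let listener = TcpListener::bind("0.0.0.0:8080").unwrap();
    println!("Listening on 8080");
    for stream in listener.incoming() {
        println!("Received connection");
        let stream = stream.unwrap();
        handle_connection(stream);
    }
}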

Related

Docker Port Not Exposing Golang

I am building a Golang web service in Docker. The build seems fine, but I am unable to expose the port for external (outside of the container) access. When I curl from the command line (inside the container) the app appears to work fine.
I saw quite a few posts of similar problems but unfortunately many were not resolved or didn't seem applicable.
FROM golang:alpine
RUN mkdir /go/src/webservice_refArch
ADD . /go/src/webservice_refArch
WORKDIR /go/src/webservice_refArch
RUN apk add curl
RUN cd /go/src/webservice_refArch/ && go get ./...
RUN cd /go/src/webservice_refArch/cmd/reference-w-s-server && go build -o ../../server
EXPOSE 7878
ENTRYPOINT ["./server", "--port=7878"]
I have tried both:
:7878
localhost:7878
I was facing the same issue. What I did was change the listen host from localhost to 0.0.0.0 and it worked.
To debug this I tried curl inside the container and it worked fine, but outside the container the curl response was blank. The port was mapped, but the content was not served outside the container. Once you change "localhost" to 0.0.0.0 it will work.
See https://docs.docker.com/engine/reference/run/#expose-incoming-ports; just exposing the port in the Dockerfile is not enough.
You can add -p 7878:7878 when starting the container, or use -P to let Docker set up an automatic host port mapping for you.
If you do not want to do the above, you can also add --net=host when starting the container; the container will then use the host's network, which may also work for you.
If you are trying to access a port inside your Docker container from your local machine, you need to map it to the desired port on your local machine:
docker run -p 7878:7878 IMAGE
Then you should be able to access it on your host

.Net Core WebApi refuses connection in Docker container

I am trying out Docker with a small WebApi which I have written in dotnet core.
The API seems to work fine, because when I run it with dotnet run it starts normally and is reachable on port 5000. But when I run it in a Docker container it starts, yet I cannot reach it on the exposed/mapped port. I'm running Docker on Windows 10 within VirtualBox.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:latest
COPY . /app
WORKDIR /app
RUN dotnet restore
EXPOSE 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "run"]
I am building the container like this:
docker build -t api-test:v0 .
And run it with this command:
docker run -p 5000:5000 api-test:v0
The output of the run command is:
Hosting environment: Production
Content root path: /app
Now listening on: http://localhost:5000
I have also tried different approaches to binding the URL:
as http://+:5000, http://0.0.0.0:5000, http://localhost:5000, ...
via CLI parameters --urls / --server.urls
but without success. Does anyone see what I'm doing wrong or missing?
Now listening on: http://localhost:5000
Binding to localhost will not work for your scenario. You need to get the app to bind to 0.0.0.0 for the Docker port forwarding to work. Once you do that, you should be able to reach the app at the VM IP, port 5000.
Make sure your service is listening on all interfaces using http://*:5000 or similar (if it prints localhost when running, it won't work).
If you set up your Docker environment with VirtualBox and used e.g. docker-machine, you need to use the IP address of the virtual machine that runs the Docker containers. You can get the IP via docker-machine ip default.
I found a way around this. All you need to do is edit launchSettings.json and change your app's setting to "applicationUrl": "http://*:5000/". Build the image, then run it with docker run -d -p 81:5000 aspnetcoreapp. After it is running, get the IP address of the container with docker exec container_id ipconfig. Then browse to http://container_ip:5000/api/values. For some reason http://localhost:81 still does not work; I need to figure out why that is.
I had the same issue recently with Docker version 20.x. The comment above provided good insight. If anyone faces the same issue, here is how I solved it: edit launchSettings.json to
"applicationUrl": "http://localhost:5001;http://host.docker.internal:5001",
This lets you test your web API locally and also have it consumed from the container.

Running rust on Docker: Empty reply from server

I'd like to run a Rust web app in a Docker container. I'm new to both technologies, so I've started out simple.
Here is main.rs:
extern crate iron;
use iron::prelude::*;
use iron::status;
fn main() {
    fn hello_world(_: &mut Request) -> IronResult<Response> {
        Ok(Response::with((status::Ok, "Hello World!")))
    }

    Iron::new(hello_world).http("127.0.0.1:8080").unwrap();
}
Cargo.toml
[package]
name = "docker"
version = "0.1.0"
[dependencies]
iron = "*"
Dockerfile (adapted from this tutorial)
FROM jimmycuadra/rust
EXPOSE 8080
COPY Cargo.toml /source
COPY src/main.rs /source/src/
CMD cargo run
These are the commands I ran:
docker build -t oror/rust-test .
docker run -it -p 8080:8080 --rm -v $(pwd):/source -w /source oror/rust-test cargo run
docker ps
Terminal Output
ifconfig to get my machine's IP address: 192.168.0.6
curl 192.168.0.6:8080 to connect to my rust web app
curl: (52) Empty reply from server
I've tried localhost:8080 and I still get the same output.
What am I missing?
The problem is that your web server is listening for requests on 127.0.0.1 (the local loopback interface), but from inside your container. From the container's point of view, your host is outside, so you need to listen for requests on 0.0.0.0; then it should work.
Iron::new(hello_world).http("0.0.0.0:8080").unwrap();
If you need to filter where your requests come from, I suggest you do it from outside your container with a firewall or something similar.
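For completeness, a sketch of the question's main.rs with only the bind address changed, as suggested above (nothing else modified):
extern crate iron;

use iron::prelude::*;
use iron::status;

fn main() {
    fn hello_world(_: &mut Request) -> IronResult<Response> {
        Ok(Response::with((status::Ok, "Hello World!")))
    }

    // Listen on all interfaces so the port published with -p 8080:8080
    // is reachable from outside the container.
    Iron::new(hello_world).http("0.0.0.0:8080").unwrap();
}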

Connection refused on docker container

I'm new to Docker and trying to make a demo Rails app. I made a Dockerfile that looks like this:
FROM ruby:2.2
MAINTAINER marko@codeship.com
# Install apt based dependencies required to run Rails as
# well as RubyGems. As the Ruby image itself is based on a
# Debian image, we use apt-get to install those.
RUN apt-get update && apt-get install -y \
build-essential \
nodejs
# Configure the main working directory. This is the base
# directory used in any further RUN, COPY, and ENTRYPOINT
# commands.
RUN mkdir -p /app
WORKDIR /app
# Copy the Gemfile as well as the Gemfile.lock and install
# the RubyGems. This is a separate step so the dependencies
# will be cached unless changes to one of those two files
# are made.
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install --jobs 20 --retry 5
# Copy the main application.
COPY . ./
# Expose port 8080 to the Docker host, so we can access it
# from the outside.
EXPOSE 8080
# The main command to run when the container starts. Also
# tell the Rails dev server to bind to all interfaces by
# default.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "8080"]
I then built it like so:
docker build -t demo .
And run a command to start the server, which does start it on port 8080:
Johns-MacBook-Pro:demo johnkealy$ docker run -it demo
=> Booting WEBrick
=> Rails 4.2.5 application starting in development on http://0.0.0.0:8080
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
[2016-04-23 16:50:34] INFO WEBrick 1.3.1
[2016-04-23 16:50:34] INFO ruby 2.2.4 (2015-12-16) [x86_64-linux]
[2016-04-23 16:50:34] INFO WEBrick::HTTPServer#start: pid=1 port=8080
I then try to find the correct IP to navigate to:
Johns-MacBook-Pro:demo johnkealy$ docker-machine ip default
192.168.99.100
I navigate to http://192.168.99.100:8080 and get the error This site can’t be reached 192.168.99.100 refused to connect.
What could I be doing wrong?
You need to publish the exposed ports by using the following options:
-P (upper case) or --publish-all, which tells Docker to pick random ports on your host and map them to the exposed container ports.
-p (lower case) or --publish=[], which tells Docker to use ports you set manually and map them to the exposed container ports.
The second option is preferred because you already know which ports are mapped. If you use the first option, you will need to call docker inspect demo and check which random host ports are being used in the Ports section.
Just run the following command:
docker run -it -p 8080:8080 demo
After that your URL will work.
If you are using Docker Toolbox on Windows 10 Home, you will need to access the webpage through the IP reported by the docker-machine ip command. It is generally 192.168.99.100:
This assumes that you are running with the publish option, like below.
docker run -it -p 8080:8080 demo
With the Windows 10 Pro version you can access it with localhost or the corresponding loopback 127.0.0.1:8080, etc. (Tomcat or whatever you wish). This is because you don't have a VirtualBox VM there; Docker runs directly on Windows Hyper-V, and the loopback address is directly accessible.
Verify the hosts file in Windows for any discrepancies. It should have
127.0.0.1 mapped to localhost
I had the same problem. I was using Docker Toolbox on Windows Home.
Instead of localhost I had to use http://192.168.99.100:8080/.
You can get the correct IP address using the command:
docker-machine ip
The above command returned 192.168.99.100 for me.
The EXPOSE command in your Dockerfile only marks container ports as available to be bound to ports on the host machine; it doesn't do anything else.
When running the container, specify the -p option to bind the ports.
So let's say you expose port 5000. After building the image, run the container with docker run -p 5000:5000 name. This binds the container's port 5000 to your laptop/computer's port 5000, and that port forwarding lets the container receive outside requests.
This should do it.
In the Docker Quickstart Terminal run the following command:
$ docker-machine ip
192.168.99.100
In Windows, you also normally need to run the command line as administrator.
As a standard user:
docker build -t myimage -f Dockerfile .
Sending build context to Docker daemon 106.8MB
Step 1/1 : FROM mcr.microsoft.com/dotnet/core/runtime:3.0
Get https://mcr.microsoft.com/v2/: dial tcp: lookup mcr.microsoft.com on [::1]:53: read udp [::1]:45540->[::1]:53: read: connection refused
But as an administrator:
docker build -t myimage -f Dockerfile .
Sending build context to Docker daemon 106.8MB
Step 1/1 : FROM mcr.microsoft.com/dotnet/core/runtime:3.0
3.0: Pulling from dotnet/core/runtime
68ced04f60ab: Pull complete
e936bd534ffb: Pull complete
caf64655bcbb: Pull complete
d1927dbcbcab: Pull complete
Digest: sha256:e0c67764f530a9cad29a09816614c0129af8fe3bd550eeb4e44cdaddf8f5aa40
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/runtime:3.0
---> f059cd71a22a
Successfully built f059cd71a22a
Successfully tagged myimage:latest
Make sure that you use the -p flag before the image name like this:
docker run -p 8080:8080 demo

Docker container with Blazegraph Triple Store not working possibly due to networking

I'm preparing a Docker image to teach my students the basics of Linked Data. I want them to actually prepare proper RDF and simulate the process of publishing it on the web as Linked Data, so I have prepared a Docker image comprising:
Triple Store: Blazegraph, listening to port 9999.
GRefine. I have copied an instance of Open Refine, with the RDF extension included. Listening to port 3333.
Linked Data Server: I have copied an instance of Jetty, with Pubby inside it. Listening to port 8080.
I have tested the three on my localhost (running Ubuntu 14.04) and they work fine. This is the Dockerfile I'm using to build the image:
FROM ubuntu:14.04
MAINTAINER Mikel Egaña Aranguren <my.email@x.com>
RUN apt-get update && apt-get install -y openjdk-7-jre wget curl
RUN mkdir /LinkedDataServer
COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
COPY blazegraph /LinkedDataServer/blazegraph
COPY jetty /LinkedDataServer/jetty
EXPOSE 9999
EXPOSE 3333
EXPOSE 8080
WORKDIR /LinkedDataServer
CMD java -server -jar blazegraph/bigdata-bundled.jar
CMD google-refine-2.5/refine -i 0.0.0.0
WORKDIR /LinkedDataServer/jetty
CMD java -jar start.jar jetty.port=8080
I run the container and it does map the appropriate ports:
docker run -d -p 9999:9999 -p 3333:3333 -p 8080:8080 mikeleganaaranguren/linked-data-server:0.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a08709d23acb mikeleganaaranguren/linked-data-server:0.0.1 /bin/sh -c 'java -ja 5 seconds ago Up 4 seconds 0.0.0.0:3333->3333/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:9999->9999/tcp dreamy_engelbart
The triple store, for example, seems to be working. If I go to 127.0.0.1:9999, I can access the triple store:
However, if I try to do anything (queries, uploading data, ...), the triple store simply fails with an "ERROR: Could not contact server". Since the same setup works on the host, I assume I'm doing something wrong with Docker. I have tried with -P instead of mapping the ports, and with --net=host, but I get the same error.
PS: Jetty also fails in the same fashion, and GRefine is not even working.
You'll need to make sure to use the IP of the Docker container to access the Blazegraph instance. Outside of the container, it will not be running on 127.0.0.1, but rather on the IP assigned to the Docker container.
You'll need to run something like
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "CONTAINER ID"
Where CONTAINER ID is the ID of your Docker container.
