Unable to access Docker application from browser

I have a .NET Core 2.1 API which, when run via Visual Studio on Windows, receives an HTTP GET from the browser and successfully returns data from a MySQL DB.
I've been trying to add the API to a Docker container (inside Ubuntu). This builds, but I can't access the API from the browser.
What I've tried:
Checking available ports. Here's the relevant output:
Connecting using a different browser: In Chromium, it says 'This site can't be reached: localhost unexpectedly closed the connection.'
Testing unused ports in the browser to see if this returns a different message: in Firefox, unused ports return 'Unable to connect' instead of 'Secure Connection Failed', and in Chromium they return 'localhost refused to connect.' instead of 'localhost unexpectedly closed the connection.'
In Firefox, configuring 'security.tls.insecure_fallback_hosts' to 'localhost'.
Double-checking that ports are mapped correctly in docker-compose.yml - under 'ports' for the relevant container, 44329 is mapped to 44329.
Running a DB query when the API starts and logging the result: this is successful. I run docker-compose up, the API starts, connects to the DB container, and it logs the result of an SQL query to a text file. So the problem is unlikely to be anything to do with the database.
Logging GET requests: on Windows, the GET requests are logged successfully, but in the Ubuntu Docker container they're not.
This might be relevant: on Windows, the API only works properly if you start it from Visual Studio. If you execute dotnet webapi.dll, you get this output:
...\netcoreapp2.1>dotnet webapi.dll
...
Now listening on: http://localhost:5000
Now listening on: https://localhost:5001
Application started. Press Ctrl+C to shut down.
So here it's not running on 44329; instead it's accessible via 5001 in the browser, where you get a certificate warning which is not chill at all:
Clicking 'ADVANCED' allows you to continue to the API.
So I've also tried connecting via 5001 in Ubuntu without luck.
Here's the output from 'docker-compose up':
matt@Matt-Ubuntu:~/docker2$ docker-compose up
Starting docker2_mysql_1 ... done
Starting docker2_dbmodelmapper_1 ... done
Attaching to docker2_mysql_1, docker2_dbmodelmapper_1
... (mysql stuff) ...
dbmodelmapper_1 | Hosting environment: Production
dbmodelmapper_1 | Content root path: /app
dbmodelmapper_1 | Now listening on: http://[::]:80
dbmodelmapper_1 | Application started. Press Ctrl+C to shut down.
Seeing port 80, I've also tried connecting to that, without success.
The relevant container as shown by docker ps is:
What kind of issue am I looking at here?

As pointed out by multiple people in the comments (thanks sp0gg, Martin Ullrich and Daniel Lerps), the API was incorrectly listening on port 80.
The solution was to map container port 5000 to host port 44329 in docker-compose.yml for the API container, and also modify the Dockerfile to pass the listening URLs as arguments to dotnet when starting the ASP.NET API:
ENTRYPOINT ["dotnet", "webapi.dll", "--urls", "http://*:5000;http://*:5001"]
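For reference, the compose side of that fix would look something like the following sketch. The service name dbmodelmapper is taken from the docker-compose output above; everything else (the build context, omitting the second port) is an assumption:

```yaml
# docker-compose.yml (sketch): expose the container's port 5000,
# where Kestrel now listens, on host port 44329
services:
  dbmodelmapper:
    build: .
    ports:
      - "44329:5000"   # host:container
```

With this mapping in place, the API is reachable from the host browser at http://localhost:44329.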

Related

Keep a Go http server app always running inside a Docker container

I have a Docker container that runs just a Go binary I created: an HTTP server using the Gin framework. I don't use any other web server, just Go's built-in HTTP server. At the end of my Dockerfile I have this:
EXPOSE 80
CMD ["/home/project/microservices/backend/engine/cmd/main"]
I use docker-compose to run the container, with restart: always for each container. And it works!
But my question is: if the HTTP server I created fails due to a programming error or something, will it restart? How can I check this? Does Docker have tools for this?
I tried Supervisord, but it had some problems and I wasn't successful running it.
I want a workaround to keep the http server inside the container always running.
What can I do?
You can try killing the process from the host. Find the process id using something like
ps -aux | grep main
Then kill it using
sudo kill <process id>
Docker will then restart it. By using
docker ps
you should see that the 'status' has changed to something like Up 10 seconds.
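To make that restart behaviour explicit, the policy can be declared in the compose file, optionally with a healthcheck so Docker probes the HTTP server itself. A minimal sketch, with some assumptions flagged in the comments: the service name engine, the /ping endpoint (which would have to exist in your Gin app), and wget being available in the image:

```yaml
services:
  engine:
    build: .
    restart: always            # restart the container whenever the process exits
    healthcheck:               # optional: probe the HTTP server directly
      # assumes a /ping endpoint in the Gin app and wget in the image
      test: ["CMD", "wget", "-qO-", "http://localhost:80/ping"]
      interval: 30s
      timeout: 5s
      retries: 3
```

Note that restart: always only reacts to the process exiting; a failing healthcheck marks the container as unhealthy in docker ps but does not restart it by itself.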

ksqlDB CLI not working with a dockerized ksqlDB Server

I'm trying to connect a ksqlDB CLI (running in a container using image 0.20.0), but it says the ksqlDB Server Status is unknown:
CLI v0.20.0, Server v<unknown> located at http://127.0.0.1:8088
WARNING: Could not identify server version.
Non-matching CLI and server versions may lead to unexpected errors.
Server Status: <unknown>
... which is funny, since I'm running the ksqlDB server (version 0.20.0 as well) based on these instructions and I see the startup log:
[2021-08-23 12:28:10,795] INFO ksqlDB API server listening on http://0.0.0.0:8088 (io.confluent.ksql.rest.server.KsqlRestApplication:389)
[2021-08-23 12:28:10,796] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain:93)
[2021-08-23 12:28:11,923] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter:146)
Also, in Docker Desktop (I'm running this on Windows) I see it under the Containers/Apps tab as running on port 8088, and it allows me to "Open in browser", where I see the response:
{
  "KsqlServerInfo": {
    "version": "0.20.0",
    "kafkaClusterId": "lkc-****",
    "ksqlServiceId": "default_",
    "serverStatus": "RUNNING"
  }
}
Any idea of what's going on?
By default, the container doesn't know how to reach an external server; it will try to connect to itself on 127.0.0.1.
You would need to follow some steps like the following
# create a network
docker network create ksql-network
# TODO start Zookeeper and Kafka
# start the server on that network
docker run -d --name=ksql-server --network=ksql-network ... confluentinc/ksql-server:<version>
# Start the CLI to point at the '--name' on the '--network'
docker run --network=ksql-network confluentinc/ksql-cli:<version> http://ksql-server:8088
Or you could just use Compose.
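A compose sketch of the same idea (the image tags follow the confluentinc repositories; the kafka:9092 bootstrap address is a placeholder for whatever your Kafka setup actually uses):

```yaml
services:
  ksqldb-server:
    image: confluentinc/ksqldb-server:0.20.0
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: kafka:9092   # placeholder
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.20.0
    depends_on:
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
```

From the CLI container you would then run ksql http://ksqldb-server:8088; the point being that the CLI addresses the server by its compose service name rather than 127.0.0.1.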

testcafe failing to connect to server when using --proxy option

I'm trying to run testcafe in our pipeline (Semaphore) using a Docker image based on the official one, where the only additions are copying our tests into it and installing some additional npm packages they use. Those tests run against a test environment that, for security reasons, can only be accessed via VPN or a proxy. I'm using the --proxy flag, but the test run fails with the message:
ERROR Unable to establish one or more of the specified browser connections
1 of 1 browser connections have not been established:
- chromium:headless
Hints:
- Use the "browserInitTimeout" option to allow more time for the browser to start. The timeout is set to 2 minutes for local browsers and 6 minutes for remote browsers.
- The error can also be caused by network issues or remote device failure. Make sure that the connection is stable and the remote device can be reached.
I'm trying to find out what the problem is, but testcafe doesn't have a verbose mode, and the --dev flag doesn't seem to log anything anywhere, so I don't have any clue why it's not connecting. My test command is:
docker run --rm -v ~/results:/app/results --env HTTP_PROXY=$HTTP_PROXY $ECR_REPO:$TESTCAFE_IMAGE chromium:headless --proxy $HTTP_PROXY
If I try to run the tests without the proxy flag, they reach the test environment but can't run, as the page shown is not our app but a maintenance page served by default for connections from outside the VPN or that don't come from the proxy.
If I go inside the testcafe container and run:
curl https://mytestserver.dev --proxy $HTTP_PROXY
it connects without any problem.
I've also tried using firefox:headless instead of Chromium, but I've found that it actually ignores the --proxy option altogether (I reckon it's a bug).
We have a cypress container in that same pipeline going through that same proxy and it connects and runs the tests flawlessly.
Any insight about what the problem could be would be much appreciated.

How to configure the port to be exposed for dockerized Rasa-NLU

I'm new to Rasa and Docker.
My attempt to dockerize Rasa-NLU consists of the steps below.
The instructions were taken from here:
Did a Git clone of the latest Rasa-NLU
Copied Dockerfile_full (from /docker) to the root directory
Changed the port number specified in config_default.json and Dockerfile_full from the default (5000) to 5048.
Built using: docker build -t rasa_nlu .
Ran the container on a port (5048) different from the default (5000).
However, the following gets logged in the console:
INFO:rasa_nlu.data_router:Logging requests to '/app/logs/rasa_nlu_log-20170928-091903-1.log'.
INFO:__main__:Started http server on port 5000
2017-09-28 09:19:03+0000 [-] Log opened.
2017-09-28 09:19:03+0000 [-] Site starting on 5000
2017-09-28 09:19:03+0000 [-] Starting factory <twisted.web.server.Site instance at 0x7fbab0bfdd40>
If I try to hit the Rasa endpoint locally using curl, I get a connection reset error. My suspicion that the wrong port was being used was confirmed when I checked inside the Docker container (using docker exec): it was running on port 5000.
Can someone help me out here as to where exactly I'm going wrong and where the port number should be configured?
Thanks in advance!
Dockerfile_full expects the config file to be in the sample_configs folder, and it uses the config_spacy_duckling.json config file. So make sure you update the reference below in the Dockerfile. You can either change which config file it copies in, or change the port configuration in the correct file.
COPY sample_configs/config_spacy_duckling.json ${RASA_NLU_HOME}/config.json
Ignoring that, why change the port in both locations? All you need to do is change it in your docker run or compose command.
docker run -p 5048:5000 rasa/rasa_nlu:latest-full
Add and change the port in sample_configs/config_spacy_duckling.json. If you look at the Dockerfile, this is the config that is copied in, and it has no port defined. Once you put the port in this file and rebuild, it should work.
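To illustrate, the edit would look something like this sketch. Only the port key is the addition; the pipeline value here is a placeholder for whatever the sample file actually contains:

```json
{
  "pipeline": "spacy_sklearn",
  "port": 5048
}
```

After rebuilding the image, run it with a matching mapping, e.g. docker run -p 5048:5048 rasa_nlu.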

Docker run does not deploy

So we followed the Docker Get Started tutorial
(https://docs.docker.com/get-started/part2/). The build works, and the command
docker run -p 4000:80 friendlyhello
works, but when we go to http://localhost:4000, nothing is reached. We just followed the tutorial step by step but don't see anything.
Yes we also went to localhost:4001.
Is this perhaps something to do with the message "system pool is not available on windows"?
Here's a screenshot of our docker output
First, regarding the issue you've pointed out yourself: this is identified as an issue that can't be fixed on Windows.
Try downgrading to version 1.12.x so that these warnings no longer pop up. This solution has worked for most people.
level=info msg="Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows"
Coming to the main issue that you're facing, which is as follows:
Error response from daemon: driver failed programming external connectivity on endpoint objective_joliot
This means that port 4000 is already in use in the Docker VM, or possibly on your system. You can either stop whatever is running on that port, or change the port used in your Docker command.
To change to the external port 8080, use:
docker run -d -p 8080:80 --name objective_joliot nginx
Hope this helps!!!
