I have a Java application packaged as a JAR. It runs fine from my machine, meaning it can send HTTP requests to and receive responses from an API endpoint (let's call this endpoint example.com/api/).
I then built a Docker image of this JAR application and tried to run the image as a container from Docker Desktop, but I got this error:
(screenshot of the error)
It seems like my application can't reach the URL from inside the Docker container. I tried setting the proxy under Settings -> Resources -> Proxies -> Manual proxy configuration and entered my company proxy, since I'm inside my company network, but it still doesn't work.
I tried to google this problem, but almost nothing shows up (and what does show up has little to do with my problem). Does anyone know what the problem might be? What should I do?
First, check whether your container can reach the endpoint at all: ping or curl it from a shell inside the container. If you are behind a proxy, set the proxy environment variables inside the container:
export http_proxy=http://server-ip:port
export https_proxy=https://server-ip:port
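Setting them interactively only lasts for that shell session, so it is usually easier to pass them when you start the container. A minimal sketch, where proxy.mycompany.com:8080 and my-java-app:latest are placeholders for your real proxy and image:

docker run \
  -e http_proxy=http://proxy.mycompany.com:8080 \
  -e https_proxy=http://proxy.mycompany.com:8080 \
  -e no_proxy=localhost,127.0.0.1 \
  my-java-app:latest

Also note that the JVM does not read http_proxy/https_proxy on its own; depending on how your application builds its HTTP client, you may additionally need the standard Java proxy properties, for example:

docker run \
  -e JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=proxy.mycompany.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.mycompany.com -Dhttps.proxyPort=8080" \
  my-java-app:latest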
I am new to Cypress. I am trying to run a simple test against a Docker container, but I get this error:
cy.visit() failed trying to load:
http://bp.localhost:84/
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: getaddrinfo ENOTFOUND bp.localhost
Common situations why this would fail:
- you don't have internet access
- you forgot to run / boot your web server
- your web server isn't accessible
- you have weird network configuration settings on your computer
But my container is running, and I can access the test website from my browser.
I have been looking around for a solution, but most of the ones I've found relate to Cypress running inside the same Docker image.
I have installed Cypress locally with npm install, since I cannot modify the image itself.
How do I access the above URL then?
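Since Cypress is running on the host rather than inside Docker, getaddrinfo ENOTFOUND usually just means the host operating system cannot resolve the name bp.localhost; browsers typically resolve *.localhost names to the loopback address themselves, which is why the site works in the browser but not for Cypress. A rough sketch of how to check and work around this, assuming the container publishes the site on port 84 of the local machine (an assumption based on the URL above):

# does the container answer on the mapped port at all?
curl -I http://localhost:84/
# if so, teach the host to resolve the virtual host name
echo "127.0.0.1 bp.localhost" | sudo tee -a /etc/hosts
# or, if the URL is not hard-coded in the test, point Cypress at a name the host already resolves
npx cypress run --config baseUrl=http://localhost:84

(On Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts instead.)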
As the title states, I'm trying to figure out how to get my .NET Core worker service, deployed inside a Docker container, to communicate over HTTPS with a REST API on my local machine (the Docker host).
NOTE: The Docker container is able to communicate with the REST API via plain old HTTP just fine, so this seems to be purely an issue with HTTPS/SSL.
For background, I have an ASP.NET Core REST API deployed on my local machine via IIS, with one binding for HTTP (port 8001) and one for HTTPS (port 8101). Additionally, said API also interfaces with my company's (model) IdentityServer instance.
I also have a .NET Core worker service which communicates with the aforementioned REST API (using an HttpClient under the hood). I've packaged this into a Docker image/container (using a Dockerfile, etc.).
When I run the container using the HTTP (not HTTPS) endpoints, everything is fine and the container is able to interface with the REST API on the local machine (provided I use host.docker.internal in place of localhost in URLs).
However, when I switch to HTTPS, things go haywire and I receive the following error:
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
I searched around for solutions, but nothing seems to work; then again, maybe I'm doing something wrong.
I've tried exporting (via the Windows Certificate Manager) all the relevant certificates I could come up with as .cer files, changing their extensions to .crt, and adding them to the Docker container via the Dockerfile:
# copy the exported certificate into the image's trusted CA directory
COPY FooBar.crt /usr/local/share/ca-certificates/
# rebuild the CA bundle so the container trusts it
RUN update-ca-certificates
ENTRYPOINT ["dotnet", "My.Project.dll"]
(I've omitted the extraneous build/publish steps, as I know those work fine; this snippet is part of a larger multi-stage Dockerfile.)
NOTE: update-ca-certificates did recognize the certificates, so I'm under the impression they were added. I thought there might be an issue because I'm using a multi-stage Docker build, but even adding the certificates right before the ENTRYPOINT makes no difference.
I'm honestly not sure what else it could be at this point (or if my use case is even supported).
OS: Windows 10 Pro
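One possibility I have not been able to rule out (only a guess on my part): the container reaches the API as host.docker.internal rather than localhost, and a certificate is only considered valid if the name being connected to appears among its subject/alternative names, no matter how trusted it is. Assuming openssl is available in the container image (it may need to be installed first), the certificate the IIS binding actually presents can be inspected from inside the container:

# show the subject and DNS alternative names of the certificate served on the HTTPS binding
openssl s_client -connect host.docker.internal:8101 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -iE "subject:|dns:"

If host.docker.internal is not listed there, adding the certificate to the container's trust store will not help by itself; the binding would need a certificate whose subject alternative names include that name.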
The whole setup for properly starting up the Web UI seems confusing to me.
There's the source code of chirpstack-application-server and its finished Docker image. Running docker-compose up in the source code directory starts all the necessary backend services, but not the UI. In the source code, the UI lives in the /ui directory. Starting it through npm works up until this console log:
Note that the development build is not optimized. To create a
production build, use npm run build.
After this I get the following proxy error:
Proxy error: Could not proxy request /swagger/internal.swagger.json from localhost:3000 to http://localhost:8080/. See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNREFUSED).
Then there's chirpstack-application-server as a precompiled binary. I started that one by first creating the config file with chirpstack-application-server configfile > chirpstack-application-server.toml and then starting the executable ./chirpstack-application-server.exe. Here I just get a connection error to PostgreSQL:
time="2020-09-17T11:09:08+02:00" level=warning msg="storage: ping PostgreSQL database error, will retry in 2s" error="dial tcp [::1]:5432: connectex: No connection could be made because the target machine actively refused it."
So what am I missing to get the UI up and running locally?
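Both errors describe the same symptom from two sides: the npm development server only serves the UI and proxies its API calls to localhost:8080, and the precompiled binary dials PostgreSQL on localhost:5432, yet nothing answers on either host port. A quick check, assuming those services are meant to come from the docker-compose stack (the ports below are taken from the error messages; everything else is a guess):

# are the backend containers running, and which ports are published to the host?
docker-compose ps
# the UI dev-server proxy needs the application server REST API to answer here:
curl -i http://localhost:8080/api
# the precompiled binary needs PostgreSQL on localhost:5432; if docker-compose ps shows no
# 0.0.0.0:5432->5432 mapping, add a ports entry for it or point the postgresql dsn in
# chirpstack-application-server.toml at the address that is actually published

In short, either publish those ports in the docker-compose file or adjust the UI proxy / TOML configuration to the addresses where the services are actually reachable.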
We are using a Python-based solution which loads and stores files in S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3) and one containing the Python-based solution that exposes a REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart this works fine (using the name of the restheart container as server host in http calls). When we are doing the same with scality/s3 server this does not work.
The interesting part is that we have created a test suite that uses the scality/s3 server from Python running on the host (Windows 10), going through the ports forwarded by Vagrant to the scality/s3 server container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the error case (when the frontend web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with HTTP 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the scality/s3 server with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # here the exception comes
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount into your Docker container, there is a restEndpoints section where you must associate a host name with a default region. What that means is that the host name your frontend uses to reach the S3 server (s3server in the boto3 calls above) must be listed there, mapped to a default region.
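For illustration, the relevant excerpt of config.json could look like this; s3server matches the host name used in the boto3 endpoint_url above, and us-east-1 is only an example default region:

"restEndpoints": {
    "localhost": "us-east-1",
    "s3server": "us-east-1"
}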
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
In the future, I'd recommend opening an issue directly on the Zenko Forum, as that is where most of the community and core developers are.
Cheers,
Laure