Port forward to LocalStack SQS doesn't work

I'm trying to work with a LocalStack SQS queue on Docker.
The Docker container name is awslocal.
I've set up a port forward from awslocal:4566 to localhost:9009.
From my local service I try to connect to that queue by configuring the SQS endpoint: https://localhost:9009
When trying to connect I receive an error:
Unable to execute HTTP request: awslocal
Any ideas?
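For what it's worth, here is a minimal sketch of that endpoint configuration, shown with the AWS SDK for JavaScript v3 purely for illustration (the error format above suggests a Java SDK). It assumes LocalStack serves plain HTTP on the forwarded port, which is its default, and the account/queue in the URL are placeholders. The error naming awslocal hints that the client is resolving a queue URL that embeds the container hostname, which does not resolve from the host.

import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({
  region: "us-east-1",
  endpoint: "http://localhost:9009", // http, not https: LocalStack serves plain HTTP by default
  credentials: { accessKeyId: "test", secretAccessKey: "test" }, // dummy creds are fine for LocalStack
});

await sqs.send(
  new SendMessageCommand({
    // Placeholder account/queue; building the URL against the explicit
    // forwarded endpoint avoids resolving the container hostname (awslocal).
    QueueUrl: "http://localhost:9009/000000000000/my-queue",
    MessageBody: "hello",
  })
);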

Related

AWS Toolkit Docker container not resolving internal service URIs

I am running an AWS Lambda locally via AWS Toolkit. The function, through a long dependency chain, calls an internal service endpoint, and that call throws a ConnectionTimeoutException. That endpoint works when called locally.
Toolkit spins up a container to run the Lambda in, using the bridge Docker network on my local machine. My local machine is also running a proxy client in another container, and using docker network inspect bridge from my local terminal, I can see both the proxy and Toolkit containers are registered on the bridge network. When I shell into the running Lambda container, my curl command to the internal service times out. That same command succeeds on my local machine.
Shouldn't the curl command work from within the Lambda container?
[image: local machine bridge network]
[image: connection timed out exception]
failed: connect timed out; nested exception is org.apache.http.conn.ConnectTimeoutException: Connect to internal.service.uri:80
Our Squid proxy does not support service discovery.
This means the container has to have environment variables set to the proxy IP:
export http_proxy=http://172.17.0.2:3128
export HTTP_PROXY=http://172.17.0.2:3128
export https_proxy=http://172.17.0.2:3128
export HTTPS_PROXY=http://172.17.0.2:3128
export NO_PROXY=localhost
then it works.
The next step is to figure out how to set those within the container via AWS Toolkit.
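Until those variables can be injected via Toolkit, one purely illustrative workaround is to wire the proxy explicitly in code. The stack trace above is from a Java HTTP client, so this Node.js sketch (using the https-proxy-agent package; the target URL is a placeholder) only shows the idea, not the fix for that exact Lambda:

import https from "node:https";
import { HttpsProxyAgent } from "https-proxy-agent";

// Proxy address taken from the exports above; the target URL is a
// placeholder for the internal service.
const agent = new HttpsProxyAgent("http://172.17.0.2:3128");

// Route the request through the Squid proxy instead of relying on
// HTTP(S)_PROXY being set in the container's environment.
https.get("https://internal.service.uri/", { agent }, (res) => {
  console.log("status:", res.statusCode);
});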

ECS unable to send API requests to backend using service discovery

I deployed front and back apps with ECS Fargate; both of them are up and running and I can access them from my browser. They are configured in the same VPC and subnet.
The backend has service discovery configured, and my server's DNS address was inserted into my React application.
I read in this thread cannot-connect-two-ecs-services-via-service-discovery that if I use axios from my browser to access my server using service discovery, it will not work.
The error I am getting is: net::ERR_NAME_NOT_RESOLVED
How can I achieve communication between these two services with service discovery? Am I missing something?
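The thread referenced above points at the cause: axios calls run in the browser, outside the VPC, where the Cloud Map service-discovery name does not resolve (hence net::ERR_NAME_NOT_RESOLVED). A minimal sketch of the usual arrangement, with all names placeholders: the browser-facing app calls a publicly resolvable endpoint, and the service-discovery name is reserved for container-to-container calls inside the VPC.

import axios from "axios";

// Sketch only: REACT_APP_API_URL is a placeholder env var that should
// hold a publicly resolvable endpoint (e.g. an ALB or API Gateway DNS
// name). The private Cloud Map name only resolves inside the VPC, so it
// cannot be used for requests issued by the browser.
const api = axios.create({
  baseURL: process.env.REACT_APP_API_URL,
});

export const fetchItems = () => api.get("/items"); // hypothetical route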

How to allow outbound requests from Google Cloud Run to an external MySQL instance

I've created a Google Cloud Run service, acting as an API, which needs to connect to a MySQL instance hosted in AWS RDS. I can see that the Cloud Run container is unable to connect to the MySQL server instance.
My suspicion is that outbound connections are blocked from the Cloud Run container.
Hence my question: does anyone know how to allow/configure outbound requests in gcloud?
I can run the container locally and connect to AWS RDS successfully. On Cloud Run I get this error:
MySql.Data.MySqlClient.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts. ---> MySql.Data.MySqlClient.MySqlException (0x80004005): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
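The error is a timeout rather than a refusal, which points at something dropping packets on the network path between Cloud Run and RDS rather than at credentials. A small probe can make that distinction explicit; the app above is .NET, so this sketch uses the Node mysql2 package purely for illustration, and every connection value is a placeholder.

import mysql from "mysql2/promise";

async function probe(): Promise<void> {
  try {
    const conn = await mysql.createConnection({
      host: "mydb.abc123.us-east-1.rds.amazonaws.com", // placeholder RDS endpoint
      user: "app",
      password: process.env.DB_PASSWORD,
      database: "appdb",
      connectTimeout: 5000, // fail fast instead of hanging for minutes
    });
    await conn.ping(); // TCP path and auth are both OK if this returns
    await conn.end();
    console.log("connected");
  } catch (err) {
    // A timeout here points at the network path (e.g. the RDS security
    // group or public accessibility); an immediate auth error would mean
    // the packets are getting through.
    console.error("connection failed:", err);
  }
}

probe();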

ENOTFOUND error on startup of Secure Gateway Client in a Docker container on Kubernetes

I have a single Docker container running (in a VPN network) on Bluemix to run my Secure Gateway Client in order to establish a VPN connection between cloud and site, and it's working.
I'm trying to migrate this container to Kubernetes because IBM will deprecate single Docker instances on Bluemix. But running the same image in a Kubernetes cluster I get the error below:
[2017-10-02 13:52:11.766] [ERROR] (Client ID 17) The response is code: ENOTFOUND, message: getaddrinfo
Does anyone know what is happening? The Secure Gateway Client image I'm using is "ibmcom/secure-gateway-client".
Thanks,

Connect Docker Containers: Frontend to GraphQL Backend via Docker Compose on the same Host

Suppose I'm on a host machine with docker-compose running 2 containers/services:
backend graphql (ports: 8000:8000)
frontend react (ports: 8081:8081)
In the frontend container, where my React + Apollo code lives, I need to set this const:
// frontend container code
export const APOLLO = {
  uri: 'http://0.0.0.0:8000/graphql' // << not working, what to use here?
};
However, the uri value is not able to connect successfully to the backend GraphQL endpoint. I'm receiving errors such as Error Network error: request to http://0.0.0.0:8000/graphql failed, reason: connect ECONNREFUSED 0.0.0.0:8000.
The containers work fine on their own. I am able to navigate to http://0.0.0.0:8000, http://0.0.0.0:8000/graphql, and http://0.0.0.0:8081 to interact with them individually. I am also able to enter each container and reach the other via its service name with ping backend or ping frontend.
However, when I use uri: 'http://backend:8000/graphql' or uri: 'http://backend/graphql' in my code, I get the error Error Network error: only absolute urls are supported.
On docker inspect backend, I get the backend container's IP address: 172.18.0.5. I tried plugging that into the uri as uri: 'http://172.18.0.5/graphql', but I get Error Network error: Network request failed with status 403 - "Forbidden".
How should I connect the backend Docker container to the frontend in code, given these scenarios?
Thanks!
Fixed it by running the servers locally instead of in Docker and found that the backend was rejecting the frontend's requests because CORS headers were not set. Whitelisted the frontend's IP and it worked. Tested again in Docker containers with the backend IP http://172.18.0.5/graphql and the connection was perfect.
Hope this helps!
Edit: Referring to the container name in the URL hostname, i.e. http://backend/graphql, also works thanks to the Docker network bridge set up by Docker Compose. This is a better solution than hardcoding the container IP above. A sketch of the CORS setup is below.
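For reference, a minimal sketch of that kind of CORS whitelist, assuming an Express-based GraphQL server and the cors middleware package (the actual backend above may differ; the origin value is a placeholder for wherever the frontend is served):

import express from "express";
import cors from "cors";

const app = express();

// Allow the frontend origin; without a whitelist the backend was
// rejecting the frontend's cross-origin requests.
app.use(cors({ origin: "http://localhost:8081" }));

app.listen(8000, "0.0.0.0", () => console.log("backend on :8000"));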
This error occurs when node-fetch cannot find a protocol or hostname in the parsed URL:
https://github.com/bitinn/node-fetch/blob/e2603d31c767cd5111df2ff6e2977577840656a4/src/request.js#L125
if (!parsedURL.protocol || !parsedURL.hostname) {
  throw new TypeError('Only absolute URLs are supported');
}
Depending on how your GraphQL backend processes queries, it might be a good idea to log the URL for each of your service endpoints and ensure each URL contains a host AND a protocol, or the fetch will fail.
For myself, the error occurred when the host variable for my service endpoints was coming back from the ENV as undefined.
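In that spirit, a small guard (a sketch; GRAPHQL_URI is a placeholder variable name) that fails fast when the env-derived endpoint is not an absolute URL:

// GRAPHQL_URI is a placeholder variable name.
const uri = process.env.GRAPHQL_URI ?? ""; // e.g. "http://backend:8000/graphql"

let parsed: URL | null = null;
try {
  parsed = new URL(uri);
} catch {
  // new URL() throws on empty or relative strings
}

if (!parsed || !parsed.protocol || !parsed.hostname) {
  // An unset env var yields something like "undefined/graphql", which
  // has no protocol or hostname -- exactly the case node-fetch rejects.
  throw new Error(`GRAPHQL_URI is not an absolute URL: "${uri}"`);
}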
