I have this envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ['*']
                      routes:
                        - match: { prefix: '/' }
                          route:
                            cluster: echo_service
                            timeout: 0s
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: '*'
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: '1728000'
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: echo_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: node-server
                      port_value: 9090
This file is copied from this official example.
But when I try to follow the docs:
$ docker-compose pull node-server envoy commonjs-client
$ docker-compose up node-server envoy commonjs-client
I get this:
ERROR: No such service: node-server
If I run docker-compose pull envoy, I get ERROR: No such service: envoy.
What did I miss?
It seems I made a wrong assumption in my comments. This repository contains Dockerfiles, together with a build context, that are used to create the images when running the docker-compose command.
In your example, the command:
docker-compose pull node-server envoy commonjs-client
should check whether the images are available locally. If not, Compose should be able to build them.
What confuses me is that you pointed to a docker-compose.yaml file stashed away deep in the examples folder. If you ran the command from there, I can see why you'd get the error: the relative path to the Envoy Dockerfile is ./net/grpc/gateway/docker/envoy/Dockerfile, which is not reachable from the echo example's location.
It should however be accessible from your project root (i.e. the directory of this file https://github.com/grpc/grpc-web/blob/master/docker-compose.yml). Have you tried running it from there?
FYI: what should happen after a pull is that Compose notifies you that the image cannot be found in your local repository and proceeds to build it from the Dockerfile at the path relative to the repository root (./net/grpc/gateway/docker/envoy/Dockerfile).
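To illustrate, a minimal sketch of what such a compose service declaration looks like (hypothetical: the service name and layout are assumed to mirror the grpc-web repository root; only the Dockerfile path is taken from above):

# hypothetical compose sketch: a service built from a Dockerfile
# (relative to the compose file) instead of pulled from a registry
services:
  envoy:
    build:
      context: .
      dockerfile: ./net/grpc/gateway/docker/envoy/Dockerfile

With a declaration like this, docker-compose up envoy builds the image from the given context when it isn't found locally.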
Related
I have a web app using gRPC Web to interact with my gRPC service through a dockerized Envoy Proxy. When I try calling the endpoint exposed in Envoy, I receive the following error:
gRPC Error (code: 14, codeName: UNAVAILABLE, message: upstream connect error or disconnect/reset before headers. reset reason: protocol error, details: [], rawResponse: null, trailers: {content-length: 0})
Here is my client side code:
class FiltersService {
  static ResponseFuture<Filters> getFilters() {
    GrpcWebClientChannel channel =
        GrpcWebClientChannel.xhr(Uri.parse('http://localhost:9000'));
    FiltersServiceClient clientStub = FiltersServiceClient(
      channel,
    );
    return clientStub.getFilters(Void());
  }
}
Here is my Envoy.yaml:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 9000 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_http1_bridge
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      upstream_connection_options:
        tcp_keepalive:
          keepalive_time: 300
      lb_policy: round_robin
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: host.docker.internal
                      port_value: 9001
Port 9001 is where my gRPC service runs. I verified the service works by calling it directly from Kreya and received the correct response.
I am also able to successfully access the admin server from localhost:9901.
However, when I try calling the endpoint exposed by Envoy (9000) through either Kreya or client side, I get
gRPC Error (code: 14, codeName: UNAVAILABLE, message: upstream connect error or disconnect/reset before headers. reset reason: protocol error, details: [], rawResponse: null, trailers: {content-length: 0})
I am running the following commands to run the Dockerized Envoy:
docker build -t my-envoy:1.0 .
docker run -p 9000:9000 -p 9901:9901 my-envoy:1.0
After a lot of frustration and playing around, I finally figured it out. Looking at the documentation, it seems the envoy.filters.http.grpc_web filter can translate a request to both HTTP/2 and HTTP/3. I am assuming (though if someone more knowledgeable in this field can correct me, please do) that without specifying, Envoy does not know which protocol to translate to. As such, simply adding http2_protocol_options: {} to my cluster resolved the issue. I'm including the full cluster block below for anyone who comes across the same issue in the future.
clusters:
  - name: greeter_service
    connect_timeout: 0.25s
    type: logical_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: cluster_0
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: host.docker.internal
                    port_value: 9001
For some reason, whenever I tried using the --network=host flag in my docker run command, I was unable to access the admin portal, so the solutions I came across, and the tutorials for using Envoy as a proxy for gRPC-Web, weren't very helpful in resolving this issue. Hopefully this is useful to someone facing the same problem.
I have a gRPC server in Scala Play Framework which exposes the gRPC hello-world example service on port 9000. I'm trying to connect it to a React web client. It seems I'm having connection issues with the Envoy proxy, which is deployed to a Docker container on a Mac.
I'm always getting the same error, which I believe means that Envoy is not able to connect to the backend:
code: 2
message: "Http response at 400 or 500 level"
metadata: Object { }
My docker file to build Envoy is:
FROM envoyproxy/envoy:v1.12.2
COPY ./envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log
And I'm building it using this script:
echo --- Building my-envoy docker image ---
docker build -t my-envoy:1.0 .
echo --- Running my-envoy docker image ---
docker run -d -p 8080:8080 -p 9901:9901 --network=host my-envoy:1.0
Envoy configuration is defined in this yaml file:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin:
                          - "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: host.docker.internal, port_value: 9000 }}]
I could not find anything relevant in envoy log file except these logs:
[2020-12-15 01:28:18.747][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:72] starting async DNS resolution for host.docker.internal
[2020-12-15 01:28:18.747][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:18.748][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:18.749][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:79] async DNS resolution complete for host.docker.internal
[2020-12-15 01:28:21.847][8][debug][main] [source/server/server.cc:175] flushing stats
[2020-12-15 01:28:23.751][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:72] starting async DNS resolution for host.docker.internal
[2020-12-15 01:28:23.751][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:23.753][8][trace][upstream] [source/common/network/dns_impl.cc:160] Setting DNS resolution timer for 5000 milliseconds
[2020-12-15 01:28:23.753][8][debug][upstream] [source/common/upstream/logical_dns_cluster.cc:79] async DNS resolution complete for host.docker.internal
I have microservice containers running in Docker. I want to set up Envoy to serve as a single gateway for a number of APIs. In my docker-compose.yml I have a service personapi defined, and the following section defining the gateway:
apigateway:
  image: ${DOCKER_REGISTRY-}apigateway
  build:
    context: .
    dockerfile: src/envoy/Dockerfile
  ports:
    - "7999:10000"
  depends_on:
    - personapi
There is also the envoy.yaml file which is copied to the gateway image and contains the following:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 127.0.0.1, port_value: 10000 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: some_service }
                http_filters:
                  - name: envoy.filters.http.router
  clusters:
    - name: some_service
      connect_timeout: 0.25s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: some_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: personapi
                      port_value: 80
When I open the console for the gateway service, these two commands work:
Connect to the person service and download the list of people directly
curl personapi/person
Connect to envoy and use it to route to the person service
curl localhost:10000/person
Now on the host machine, when I try to connect to the gateway service on port 7999 (specified in the docker-compose file to map to the apigateway port 10000), I get an empty response, not even a status code. There seems to be something listening, but it refuses to answer any requests.
How do I expose envoy to the host machine running docker?
The issue you ran into here isn't specific to docker, but rather to how network interfaces work.
In the one that did not work, you were binding the Envoy listener to 127.0.0.1. This is the loopback interface and you will only be able to call this from the same machine it's running on. In this example, you would need to docker exec into the container in order to be able to call this interface.
In the one that did work, you bound to 0.0.0.0, which is the IPv4 way of saying "I'll accept connections from anywhere". That binding lets you address the Envoy listener from outside the Docker container.
It started working after the address was updated in socket_address:
# this works:
socket_address: { address: 0.0.0.0, port_value: 10000 }
# this did not:
socket_address: { address: 127.0.0.1, port_value: 10000 }
So I have a dapr service hosted at: 192.168.1.34:50459
I'm trying to have it communicate with my web application using grpc-web. To do so I have an envoy proxy (according to https://github.com/grpc/grpc-web/issues/347).
My envoy.yaml file is as follows:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 4949 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: 192.168.1.34, port_value: 50459 }}]
It should listen at 0.0.0.0:4949, and forward it to 192.168.1.34:50459
But when I start this proxy with
docker run -d -v envoy.yaml:/etc/envoy/envoy.yaml:ro -p 4949:4949 -p 50459:50459 envoyproxy/envoy:v1.15.0
It routes it to 0.0.0.0:50459
Does anyone know how to resolve this?
I don't know much about Envoy, but can you configure it to log to a mounted path, to see if the route is actually configured? Also, I assume you're setting --dapr-grpc-port explicitly in the dapr run command? Have you tried setting it to a different port in case of collisions?
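For what it's worth, file access logging can be attached to the HTTP connection manager; a minimal sketch, assuming the v3 API and a /tmp path that is mounted from the host:

# hypothetical sketch: file access log on the http_connection_manager (v3 API)
access_log:
  - name: envoy.access_loggers.file
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
      path: /tmp/envoy_access.log

Each request that reaches the listener then leaves a line in that file, which makes it easy to see whether the route is being hit at all.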
I am trying to configure my Envoy proxy to allow secure requests from my Angular application to my application server using gRPC. I have a Let's Encrypt certificate loaded, but the requests fail, and Chrome reports ERR_CERT_COMMON_NAME_INVALID when trying to connect. I have an Apache2 server running that serves my web application. The Envoy proxy in Docker and the web application are running on the same machine.
my envoy.yaml:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 17887 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              config:
                codec_type: auto
                stat_prefix: ingress_http
                stream_idle_timeout: 0s
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["myactualdomain.com"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: grpc-status,grpc-message
                http_filters:
                  - name: envoy.grpc_web
                  - name: envoy.cors
                  - name: envoy.router
          tls_context:
            common_tls_context:
              alpn_protocols: "h2"
              tls_certificates:
                - certificate_chain: { filename: "/etc/fullchain.pem" }
                  private_key: { filename: "/etc/privkey.pem" }
  clusters:
    - name: greeter_service
      connect_timeout: 1.00s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      hosts: [{ socket_address: { address: localhost, port_value: 17888 }}]
I was thinking it might be because I am not using the traditional HTTPS port.
Any help appreciated.
I actually got it working. First of all, I added a new subdomain for the Envoy proxy and created a new pair of certificates. Also, don't use domains: ["myactualdomain.com"]; use ["*"] instead, as the former leads to a CORS violation. If Envoy and the backend only talk to each other over grpc-web on the same machine anyway, don't use SSL between them. If you do want to do that, you might want to take a look at https://medium.com/@farcaller/how-to-configure-https-backends-in-envoy-b446727b2eb3, though I didn't try it.
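In case it helps, an untested sketch of what an HTTPS upstream might look like in the legacy cluster syntax used above (the backend hostname is made up for illustration):

# hypothetical, untested sketch: upstream TLS in the legacy cluster syntax
clusters:
  - name: greeter_service
    connect_timeout: 1.00s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    tls_context:
      sni: backend.example.com  # should match a name on the backend's certificate
    hosts: [{ socket_address: { address: backend.example.com, port_value: 443 }}]

The sni value is what Envoy presents during the TLS handshake to the backend; a mismatch with the backend certificate is one way to end up with name-validation errors like the one above.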