I am trying to implement a Lua script in an Envoy configuration file.
What I want is to write my Lua code in a local Lua file and then specify that script file inside the Envoy configuration file.
This is my yaml file:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: http_proxy
          route_config:
            name: all
            virtual_hosts:
            - name: allbackend_cluster
              domains:
              - '*'
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: cluster_wackopicko
          http_filters:
          - name: envoy.filters.http.router
          - name: envoy.lua
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
              inline_code: |
                function envoy_on_response(response_handle)
                  body_size = response_handle:body():length()
                  response_handle:headers():add("response-body-size", tostring(body_size))
                  response_handle:headers():add("foo", "bar")
                end
  clusters:
  - name: cluster_wackopicko
    connect_timeout: 1s
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: cluster_wackopicko
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 8081
What do I need to change in this config file to move the Lua code to an external file located in the /envoy/lua/scripts/ folder on my Ubuntu server?
Envoy >= 1.23.0
Since Envoy v1.23.0 (and this PR), you can define your script in default_source_code. See documentation.
Example:
-- /envoy/lua/scripts/myscript.lua
function envoy_on_response(response_handle)
  body_size = response_handle:body():length()
  response_handle:headers():add("response-body-size", tostring(body_size))
  response_handle:headers():add("foo", "bar")
end
name: envoy.filters.http.lua
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
  default_source_code:
    filename: /envoy/lua/scripts/myscript.lua
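A wrong filename will only surface when Envoy loads the config, so it can help to check it up front with Envoy's validate mode; a minimal sketch, assuming the config lives at /etc/envoy/envoy.yaml:

# Parses and validates the bootstrap config, then exits without starting the proxy.
envoy --mode validate -c /etc/envoy/envoy.yaml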
Envoy < 1.23.0
With older versions, you can use source_codes with filename. See documentation.
However, you still have to provide inline_code as well; this is a known issue.
Example:
-- /envoy/lua/scripts/myscript.lua
function add_headers(response_handle)
  body_size = response_handle:body():length()
  response_handle:headers():add("response-body-size", tostring(body_size))
  response_handle:headers():add("foo", "bar")
end
name: envoy.filters.http.lua
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
  inline_code: |
    require("envoy.lua.scripts.myscript")
    function envoy_on_response(response_handle)
      add_headers(response_handle)
    end
  source_codes:
    myscript.lua:
      filename: /envoy/lua/scripts/myscript.lua
Related
I have a web app using gRPC Web to interact with my gRPC service through a dockerized Envoy Proxy. When I try calling the endpoint exposed in Envoy, I receive the following error:
gRPC Error (code: 14, codeName: UNAVAILABLE, message: upstream connect error or disconnect/reset before headers. reset reason: protocol error, details: [], rawResponse: null, trailers: {content-length: 0})
Here is my client side code:
class FiltersService {
  static ResponseFuture<Filters> getFilters() {
    GrpcWebClientChannel channel =
        GrpcWebClientChannel.xhr(Uri.parse('http://localhost:9000'));
    FiltersServiceClient clientStub = FiltersServiceClient(
      channel,
    );
    return clientStub.getFilters(Void());
  }
}
Here is my Envoy.yaml:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 9000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: greeter_service
                  max_grpc_timeout: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_http1_bridge
          - name: envoy.filters.http.grpc_web
          - name: envoy.filters.http.cors
          - name: envoy.filters.http.router
  clusters:
  - name: greeter_service
    connect_timeout: 0.25s
    type: logical_dns
    upstream_connection_options:
      tcp_keepalive:
        keepalive_time: 300
    lb_policy: round_robin
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: host.docker.internal
                port_value: 9001
Port 9001 is where my gRPC service is running. I was able to verify that the service works properly by calling it directly from Kreya, and I received the correct response.
I am also able to successfully access the admin server from localhost:9901.
However, when I try calling the endpoint exposed by Envoy (9000) through either Kreya or client side, I get
gRPC Error (code: 14, codeName: UNAVAILABLE, message: upstream connect error or disconnect/reset before headers. reset reason: protocol error, details: [], rawResponse: null, trailers: {content-length: 0})
I am running the following commands to run the Dockerized Envoy:
docker build -t my-envoy:1.0 .
docker run -p 9000:9000 -p 9901:9901 my-envoy:1.0
After a lot of frustration and playing around, I finally figured it out. Looking at the documentation, it seems the envoy.filters.http.grpc_web filter can translate a request to both HTTP/2 and HTTP/3. I am assuming (though if someone more knowledgeable in this field can correct me, please do) that without specifying, Envoy does not know which protocol to translate to. As such, simply adding http2_protocol_options: {} to my cluster resolved the issue. I'm including the full cluster block below for anyone who may come across the same issue in the future.
clusters:
- name: greeter_service
  connect_timeout: 0.25s
  type: logical_dns
  lb_policy: round_robin
  http2_protocol_options: {}
  load_assignment:
    cluster_name: cluster_0
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: host.docker.internal
              port_value: 9001
For some reason, whenever I tried using the --network=host flag in my docker run command, I was unable to access the admin portal, so the solutions I came across and the tutorials on using Envoy as a proxy for gRPC-Web weren't much help in resolving this issue. Hopefully this is useful to someone facing the same problem.
I have this envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ['*']
              routes:
              - match: { prefix: '/' }
                route:
                  cluster: echo_service
                  timeout: 0s
                  max_stream_duration:
                    grpc_timeout_header_max: 0s
              cors:
                allow_origin_string_match:
                - prefix: '*'
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: '1728000'
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
          - name: envoy.filters.http.cors
          - name: envoy.filters.http.router
  clusters:
  - name: echo_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: node-server
                port_value: 9090
This file is copied from this official example.
But when I try to follow the docs:
$ docker-compose pull node-server envoy commonjs-client
$ docker-compose up node-server envoy commonjs-client
I get this:
ERROR: No such service: node-server
If I run docker-compose pull envoy, I get ERROR: No such service: envoy.
What did I miss?
It seems my assumption in the comments was wrong. This repository contains Dockerfiles with a build context, used to create the images when running the docker-compose command.
In your example, the command:
docker-compose pull node-server envoy commonjs-client
checks whether the images are available locally. If not, compose should be able to build them.
What confused me is that you pointed to a docker-compose.yaml file stashed away deep in the examples folder. If you run that command from there, I can see why you'd get the error: the relative path to the envoy Dockerfile is ./net/grpc/gateway/docker/envoy/Dockerfile, which is not accessible from the echo example's location.
It should, however, be accessible from your project root (i.e. the directory of this file: https://github.com/grpc/grpc-web/blob/master/docker-compose.yml). Have you tried running it from there?
FYI: what should happen after a pull is compose notifying you that the image cannot be found in your local repository, then proceeding to create it based on the Dockerfile it finds in the path relative to the root (./net/grpc/gateway/docker/envoy/Dockerfile).
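As a sketch, this is what running it from the repository root looks like (the clone step is an assumption about your setup):

# From the root of the grpc-web repository, not the examples folder:
git clone https://github.com/grpc/grpc-web.git
cd grpc-web
# pull warns that the images are missing locally; up then builds them
# from their Dockerfiles (e.g. ./net/grpc/gateway/docker/envoy/Dockerfile).
docker-compose pull node-server envoy commonjs-client
docker-compose up node-server envoy commonjs-client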
Envoy container failing during startup with the below error:
Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021: Unknown field in: {"static_resources":{"listeners":[{"address":{"socket_address":{"address":"0.0.0.0","port_value":443}},"filter_chains":[{"tls_context":{"common_tls_context":{"tls_certificates":[{"private_key":{"filename":"/etc/ssl/private.key"},"certificate_chain":{"filename":"/etc/ssl/keychain.crt"}}]}},"filters":[{"typed_config":{"route_config":{"name":"local_route","virtual_hosts":[{"domains":["*"],"routes":[{"match":{"prefix":"/"},"route":{"host_rewrite_literal":"127.0.0.1","cluster":"service_envoyproxy_io"}}],"name":"local_service"}]},"@type":"type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager","http_filters":[{"name":"envoy.filters.http.router"}],"access_log":[{"typed_config":{"@type":"type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog","path":"/dev/stdout"},"name":"envoy.access_loggers.file"}],"stat_prefix":"ingress_http"},"name":"envoy.filters.network.http_connection_manager"}]}],"name":"listener_0"}],"clusters":[{"load_assignment":{"cluster_name":"service_envoyproxy_io","endpoints":[{"lb_endpoints":[{"endpoint":{"address":{"socket_address":{"port_value":8080,"address":"127.0.0.1"}}}}]}]},"connect_timeout":"30s","name":"service_envoyproxy_io","dns_lookup_family":"V4_ONLY","transport_socket":{"name":"envoy.transport_sockets.tls","typed_config":{"@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext","sni":"www.envoyproxy.io"}},"type":"LOGICAL_DNS"}]}}
Here's my envoy.yaml file
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 443
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /dev/stdout
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  host_rewrite_literal: 127.0.0.1
                  cluster: service_envoyproxy_io
      tls_context:
        common_tls_context:
          tls_certificates:
          - certificate_chain:
              filename: "/etc/ssl/keychain.crt"
            private_key:
              filename: "/etc/ssl/private.key"
  clusters:
  - name: service_envoyproxy_io
    connect_timeout: 30s
    type: LOGICAL_DNS
    # Comment out the following line to test on v6 networks
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: service_envoyproxy_io
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 8080
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: www.envoyproxy.io
Am I doing something wrong here?
The error message states it plainly: Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021. The v2 xDS APIs are deprecated and will be removed from Envoy in Q1 2021, as per the API versioning policy.
According to the official docs, you have the following options:
In the interim, you can continue to use the v2 API for the transitional period by:
Setting --bootstrap-version 2 on the CLI for a v2 bootstrap file.
Enabling the runtime envoy.reloadable_features.enable_deprecated_v2_api feature. This is implicitly enabled if a v2 --bootstrap-version is set.
Or configure Envoy to use the v3 API.
More details can be found in the linked docs.
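In this particular config, the v2 field is the filter-chain-level tls_context. A minimal sketch of the v3 equivalent, using a transport_socket with DownstreamTlsContext (filenames taken from the question; not a tested drop-in):

filter_chains:
- filters:
  - name: envoy.filters.network.http_connection_manager
    # ... unchanged from the question ...
  # v3 replacement for the deprecated filter-chain `tls_context` field:
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
      common_tls_context:
        tls_certificates:
        - certificate_chain:
            filename: "/etc/ssl/keychain.crt"
          private_key:
            filename: "/etc/ssl/private.key"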
So I have a dapr service hosted at: 192.168.1.34:50459
I'm trying to have it communicate with my web application using grpc-web. To do so, I have an Envoy proxy in between (per https://github.com/grpc/grpc-web/issues/347).
My envoy.yaml file is as follows:
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 4949 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: greeter_service
                  max_grpc_timeout: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.grpc_web
          - name: envoy.cors
          - name: envoy.router
  clusters:
  - name: greeter_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: 192.168.1.34, port_value: 50459 }}]
It should listen on 0.0.0.0:4949 and forward to 192.168.1.34:50459.
But when I start this proxy with
docker run -d -v envoy.yaml:/etc/envoy/envoy.yaml:ro -p 4949:4949 -p 50459:50459 envoyproxy/envoy:v1.15.0
It routes it to 0.0.0.0:50459
Does anyone know how to resolve this?
I don't know much about Envoy, but can you configure it to log to a mounted path, to see if the route is actually configured? Also, I assume you're setting --dapr-grpc-port explicitly in the dapr run command? Have you tried setting it to a different port in case of collisions?
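For reference, a minimal sketch of what that logging could look like, mirroring the file access logger used elsewhere in this thread (the log path is an assumption):

# Goes under the http_connection_manager filter config:
access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /var/log/envoy/access.log  # assumed path; mount this directory with -v to inspect it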
I am trying to configure my Envoy proxy to allow secure requests from my Angular application to my application server using gRPC. I have a Let's Encrypt certificate loaded, but the requests fail and Chrome prints ERR_CERT_COMMON_NAME_INVALID when trying to connect. I have an Apache2 server running that serves my web application. The Envoy proxy (in Docker) and the web application run on the same machine.
my envoy.yaml:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 17887 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          stream_idle_timeout: 0s
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["myactualdomain.com"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: greeter_service
                  max_grpc_timeout: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: grpc-status,grpc-message
          http_filters:
          - name: envoy.grpc_web
          - name: envoy.cors
          - name: envoy.router
      tls_context:
        common_tls_context:
          alpn_protocols: "h2"
          tls_certificates:
          - certificate_chain: { filename: "/etc/fullchain.pem" }
            private_key: { filename: "/etc/privkey.pem" }
  clusters:
  - name: greeter_service
    connect_timeout: 1.00s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: localhost, port_value: 17888 }}]
I was thinking it might be because I am not using the traditional HTTPS port.
Any help appreciated.
I actually got it working. First of all, I added a new subdomain for the Envoy proxy and created a new pair of certificates. Also, don't do domains: ["myactualdomain.com"]; use ["*"] instead, as the former led to a CORS violation. Since Envoy and the backend run on the same machine anyway, don't use SSL between them if you only connect via grpc-web. If you do want to do that, you might want to take a look at https://medium.com/@farcaller/how-to-configure-https-backends-in-envoy-b446727b2eb3; I didn't try it myself, though.
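For reference, a minimal sketch of what an HTTPS upstream could look like in v3 syntax, mirroring the UpstreamTlsContext transport socket shown earlier in this thread (the SNI value is a placeholder, not something I tested):

clusters:
- name: greeter_service
  connect_timeout: 1s
  type: LOGICAL_DNS
  http2_protocol_options: {}
  load_assignment:
    cluster_name: greeter_service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: localhost, port_value: 17888 }
  # TLS to the upstream; without this block Envoy speaks plaintext to the backend.
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      sni: backend.myactualdomain.com  # placeholder SNI for the backend certificate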