Hasura action handler does not exist (Name or service not known) - docker

I have an instance of Hasura running in a Docker container and an action handler API written in NodeJS/Express running directly on the machine.
When I test everything locally (macOS M1) it works like a charm, but when I replicate the same setup on the server (Ubuntu 20.04) it returns an "action handler does not exist" error.
All other queries and mutations work well; only actions return this error.
You can see that both the Hasura and NodeJS apps are running (screenshots: Hasura docker container; localhost API call).
In local development my action handler base URL is
http://host.docker.internal:5000/api
and it works fine.
I have also tried
http://localhost:5000/api
http://127.0.0.1:5000/api
This is the exact error the Hasura action call returns:
{
  "errors": [
    {
      "extensions": {
        "internal": {
          "error": {
            "type": "http_exception",
            "message": "ConnectionFailure Network.Socket.getAddrInfo (called with preferred socket type/protocol: AddrInfo {addrFlags = [], addrFamily = AF_UNSPEC, addrSocketType = Stream, addrProtocol = 0, addrAddress = 0.0.0.0:0, addrCanonName = Nothing}, host name: Just \"host.docker.internal\", service name: Just \"5000\"): does not exist (Name or service not known)"
          },
          "response": null,
          "request": {
            "body": {
              "session_variables": {
                "x-hasura-role": "public"
              },
              "input": {
                "email": "",
                "password": ""
              },
              "action": {
                "name": "login"
              }
            },
            "url": "http://host.docker.internal:5000/api/auth/login",
            "headers": []
          }
        },
        "path": "$",
        "code": "unexpected"
      },
      "message": "http exception when calling webhook"
    }
  ]
}

If someone encounters the same issue, this is how I solved it:

1. Add this to the docker-compose.yml file:

   extra_hosts:
     - "host.docker.internal:host-gateway"

2. Allow ports 8080 and 5000 in the firewall:

   sudo ufw allow 8080
   sudo ufw allow 5000
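
For context, here is how the extra_hosts entry slots into a compose file. The service name, image tag, and ports below are illustrative placeholders, not from the original post; note that the host-gateway value requires Docker 20.10 or later. On Linux, Docker does not create the host.docker.internal alias automatically (macOS and Windows do), which is why the local setup worked and the server did not.

```yaml
version: "3.6"
services:
  graphql-engine:
    image: hasura/graphql-engine:v2.0.0   # version is illustrative
    ports:
      - "8080:8080"
    extra_hosts:
      # Map host.docker.internal to the host's gateway IP so the
      # container can reach services running directly on the host:
      - "host.docker.internal:host-gateway"
```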

Related

Hasura query action exception

Got a small problem (I guess). I created a C# REST web API in a Docker Swarm environment. The REST API is working properly (tested via Postman). Then I composed a Hasura service in the same Docker Swarm environment. The console is working properly as well. The problem is with the query action.
Code:
Action definition:
type Query {
  getWeatherForecast: [WeatherForecastResonse]
}
New types definition:
type WeatherForecastResonse {
  date: String
  temperatureC: Int
  temperature: Int
  summary: String
}
Handler:
http://{api ip}:{api port}/WeatherForecast
While trying to execute the query:
query MyQuery {
  getWeatherForecast {
    temperature
    summary
    date
    temperatureC
  }
}
All I get in the response is an error with this JSON:
{
  "errors": [
    {
      "extensions": {
        "internal": {
          "error": "invalid json: Error in $: not enough input",
          "response": {
            "status": 405,
            "body": "",
            "headers": [
              { "value": "Mon, 14 Jun 2021 13:54:00 GMT", "name": "Date" },
              { "value": "Kestrel", "name": "Server" },
              { "value": "0", "name": "Content-Length" },
              { "value": "GET", "name": "Allow" }
            ]
          },
          "request": {
            "body": {
              "session_variables": {
                "x-hasura-role": "admin"
              },
              "input": {},
              "action": {
                "name": "getWeatherForecast"
              }
            },
            "url": "http://{api ip}:{api port}/WeatherForecast",
            "headers": []
          }
        },
        "path": "$",
        "code": "unexpected"
      },
      "message": "not a valid json response from webhook"
    }
  ]
}
I got the desired response by using Postman while calling http://{api ip}:{api port}/WeatherForecast (GET method).
Where should I improve to finally get the desired result from the REST API?
P.S. Hasura version: v2.0.0-alpha.4 (tried also with v1.3.3)
UPDATE:
I released a new version of the web API. Inside WeatherForecastController I included a new method with the POST attribute. The query remained the same, but now the GraphQL query returns what I want.
So the question is: is it possible to call/access web API methods with the GET attribute from a Hasura action query?
From version v2.1.0 and above you can do this using REST Connectors (Hasura Actions → REST Connectors → Methods):
Go to the Actions tab on the console and create or modify an action. Scroll down to Configure REST Connectors.
In the Configure REST Connectors section, click on Add Request Options Transform.
Along with this, you can do a lot of other configuration.
No, currently it's not possible, Hasura always makes POST requests to the action handler:
When the action is executed i.e. when the query or the mutation is called, Hasura makes a POST request to the handler with the action arguments and the session variables.
Source: https://hasura.io/docs/latest/graphql/core/actions/action-handlers.html#http-handler

Serilog not creating log file on production server

I've created a C# .NET 5.0 console application, and during testing Serilog has been working without incident, logging to Console and File (same folder; path="log.txt"). However, when I run the application on our server, neither the Console nor the File logging sink works! I now assume the issue is not the sinks themselves but Serilog not actually working.
I've tried enabling the self log:
Serilog.Debugging.SelfLog.Enable(msg =>
    Console.WriteLine(msg)
);
but even running in the debugger in my dev environment, the Console.WriteLine(msg) line is never called!
My appsettings.json is as follows:
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Information",
        "System": "Information"
      }
    },
    "WriteTo": [
      {
        "Name": "Console",
        "Args": {
          "theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console",
          "outputTemplate": "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj} {NewLine}{Exception}"
        }
      },
      {
        "Name": "File",
        "Args": {
          "path": "log.txt",
          "rollingInterval": "Infinite",
          "outputTemplate": "{Timestamp:HH:mm:ss.fff} [{Level:u3}] {Message:lj}{NewLine}{Exception}",
          "shared": false
        }
      }
    ],
    "Enrich": [ "FromLogContext" ]
  },
  "Database": {
    "Server": "(local)",
    "Database": "ActivbaseLive"
  },
  "Email": {
    "SmtpHost": "localhost",
    "SmtpPort": 25,
    "SmtpSecurityOption": "None",
    "SmtpUsername": null,
    "SmtpPassword": null,
    "SmtpSender": "\"Activbase Learning Platform\" <noreply#activbase.net>"
  }
}
I've tried absolute paths (using double backslashes in appsettings.json).
I've tried pre-creating the log file (e.g. log.txt and log200428.txt) and setting permissions to Everyone / Full Control, but neither of these changes fixes the problem, and they don't explain why the Console sink doesn't write either.
Here is how Serilog is being configured during start-up, which is where I suspect the problem is (even though it works in the dev environment):
return Host.CreateDefaultBuilder()
    .ConfigureLogging(logging =>
    {
        logging.ClearProviders();
    })
    .UseSerilog((hostContext, loggerConfiguration) =>
    {
        loggerConfiguration.ReadFrom.Configuration(hostContext.Configuration);
    })
    .ConfigureAppConfiguration((hostContext, builder) =>
    {
        builder.AddEnvironmentVariables();
    })
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
        ...
    });
}
Any ideas why Serilog isn't working in production?
The path you provide should be absolute, something like this:
"path": "E:/wwwroot/QA/BackgroundWorkerService/Logs/logFile_.log"
I had the same issue, and the above fix worked fine for me.
For my API application running in IIS, I had to assign permissions on the log folder to the IIS_IUSRS group (screenshot omitted). I didn't need an absolute path!

Vault is not appearing in Consul(ui) services list

Configs:
consul-config.json:
{
  "datacenter": "localhost",
  "data_dir": "/consul/data",
  "log_level": "DEBUG",
  "server": true,
  "ui": true,
  "ports": {
    "dns": 53
  }
}
vault-config.json:
{
  "ui": true,
  "backend": {
    "consul": {
      "address": "consul:8500",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }
}
Problem:
Vault appears in Consul's services list when I use:
consul v1.7.2
vault v1.3.4
But when I use other versions, there is no vault service in the list (in the Consul UI):
consul v1.6.5
vault v1.4.0
Here's some info from the logs (screenshots omitted).
I need a solution for the second variant. I searched Stack Overflow, the documentation, and other resources, but there's no solution; maybe I missed it :).
Have you experienced anything like this?

How can I retrieve the tag from the syslog logs that are sent to Logstash?

I have set up my Docker daemon so that the logs of all my containers are forwarded to a Logstash application listening on port 5000, using the following configuration in daemon.json:
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://localhost:5000",
    "syslog-format": "rfc3164",
    "tag": "{{.Name}}"
  },
  "hosts": [
    "tcp://0.0.0.0:2375",
    "unix:///var/run/docker.sock"
  ]
}
Since many different containers create logs at the same time, I would like to be able to filter by container name when I visualize the logs in my ELK stack. However, I'm not sure how to retrieve, in Logstash, the "tag" that I set as part of "log-opts" in the Docker daemon configuration above.
What I tried is to simply retrieve it as a variable and forward it to a field in the Logstash configuration, but it just stores the literal text "%{tag}" as a string. Is it possible to retrieve the tag of the source container in the Logstash configuration?
logstash.conf:
input {
  udp {
    port => 5000
    type => syslog
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch"]
  }
}
filter {
  if [type] == "syslog" {
    if [message] =~ "^<\d+>\s*\w+\s+\d+\s\d+:\d+:\d+\s\S+\s\w+(\/\S+|)\[\d+\]:.*$" {
      grok {
        match => {
          "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:container_hash}(?:\[%{POSINT}\])?: %{GREEDYDATA:real_message}"
        }
        remove_field => ["message"]
      }
      mutate {
        add_field => {
          "tag" => "%{tag}"
        }
      }
    }
  }
}
Edit: If I don't remove the message field like I do in the Logstash configuration, then the message field looks something like this when I view the logs in Kibana:
<30>May 15 15:13:23 devlocal e9713f013ebb[1284]: 192.168.56.110 - - [15/May/2019:15:13:23 +0200] "GET /server/status HTTP/1.0" 200 54 0.003 "-" "GuzzleHttp/6.3.3 curl/7.64.0 PHP/7.2.17" "172.30.0.2"
So the tag I'm looking for isn't part of the message; hence I don't know where I can retrieve it from.
It looks like the problem is related to the log driver you chose.
Changing the log driver to gelf should give you access to the tag, and a variety of other fields, e.g. below:
{
  "_index": "logstash-2017.04.27",
  "_type": "docker",
  "_id": "AVuuiZbeYg9q2vv-JShe",
  "_score": null,
  "_source": {
    "source_host": "172.18.0.1",
    "level": 6,
    "created": "2017-04-27T08:24:45.69023959Z",
    "message": "My Message Thu Apr 27 08:31:44 UTC 2017",
    "type": "docker",
    "version": "1.1",
    "command": "/bin/sh -c while true; do echo My Message `date`; sleep 1; done;",
    "image_name": "alpine",
    "#timestamp": "2017-04-27T08:31:44.338Z",
    "container_name": "squarescaleweb_plop_1",
    "host": "plop-xps",
    "#version": "1",
    "tag": "staging",
    "image_id": "sha256:4a415e3663882fbc554ee830889c68a33b3585503892cc718a4698e91ef2a526",
    "container_id": "12b7bcd3f2f54e017680090d01330f542e629a4528f558323e33f7894ec6be53"
  },
  "fields": {
    "created": [ 1493281485690 ],
    "#timestamp": [ 1493281904338 ]
  },
  "sort": [ 1493281904338 ]
}
example from:
https://gist.github.com/eunomie/e7a183602b8734c47058d277700fdc2d
You would also need to send your logs via UDP instead of TCP.
You can change your daemon.json to read
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://localhost:<PORT>",
    "tag": "{{.Name}}"
  },
  "hosts": [
    "tcp://0.0.0.0:2375",
    "unix:///var/run/docker.sock"
  ]
}
I'm not sure which port you have Logstash configured to receive UDP packets on, but for GELF, 12201 seems to be the default for Logstash.
After the messages are sent to Logstash, you can create a pipeline to extract the fields of your choice, e.g. [container_name].
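
A minimal pipeline for that setup might look like the following sketch, which assumes the logstash-input-gelf plugin with its default port; the "service" field name is an illustrative choice, not from the original answer:

input {
  gelf {
    port => 12201
  }
}
filter {
  # The gelf input already decodes container_name, image_name, tag, etc.
  # as top-level event fields, so copying one into a custom field is enough:
  mutate {
    add_field => { "service" => "%{container_name}" }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch"]
  }
}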

Can I use "router" in roadhog's "proxy" prop?

When I configured my .roadhogrc, roadhog's docs said I can set the "proxy" prop the same way as webpack-dev-server#proxy.
I added a "router" prop, but then my page could not load, due to "index.css" timing out.
So, how do I use "proxy" in .roadhogrc?
Or is there any way to point the API at the test environment from localhost?
(I can't find a "dva" or "roadhog" tag, so sorry for using the "antd" tag.)
{
  "entry": "src/index.js",
  "env": {
    "development": {
      "extraBabelPlugins": [
        "dva-hmr",
        "transform-runtime",
        ["import", { "libraryName": "antd", "style": "css" }]
      ],
      "proxy": {
        "router": {
          "http://api.eshop.corploft.com": "http://test.corploft.com"
        }
      }
    },
    "production": {
      "extraBabelPlugins": [
        "transform-runtime",
        ["import", { "libraryName": "antd", "style": "css" }]
      ],
      "outputPath": "build/yijiayi"
    }
  }
}
Maybe I told a joke...
All the proxy and mock settings act on the localhost network, so I can only redirect requests to the local server. If I want to redirect a request to another server, none of the above settings help.
SwitchyOmega (in the Chrome Web Store) or ihosts can help.
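
For comparison, the documented webpack-dev-server proxy shape maps a local path prefix to another origin, so it only takes effect when the page requests a relative URL such as /api/...; the path and target below are illustrative:

"proxy": {
  "/api": {
    "target": "http://test.corploft.com",
    "changeOrigin": true
  }
}

This is why proxying absolute URLs like http://api.eshop.corploft.com from the browser never reaches the dev server's proxy.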
