I'm developing a small microservices project and want to configure Kong as the API gateway for it, but I'm unable to forward incoming requests from Kong to my services.
For example, I have a Docker container running locally at 172.18.0.15 on port 8006. I can access the API just fine at 172.18.0.15:8006/compute, and I would like to configure Kong so that the same call can be made at localhost:8001/compute. Am I missing something? Does "api" need to be present in my URL for Kong to discover my API? This is the configuration that I've added to Kong; I basically just added all REST methods and set the container's address:port as the upstream_url field.
{
  "data": [
    {
      "created_at": 1514479841682,
      "http_if_terminated": false,
      "https_only": false,
      "id": "9cf925a5-2c27-4dbc-ae2a-e70fba94ef53",
      "methods": [
        "GET",
        "POST",
        "PUT",
        "DELETE"
      ],
      "name": "compute",
      "preserve_host": false,
      "retries": 5,
      "strip_uri": true,
      "upstream_connect_timeout": 60000,
      "upstream_read_timeout": 60000,
      "upstream_send_timeout": 60000,
      "upstream_url": "http://172.18.0.15:8006"
    }
  ],
  "total": 1
}
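For reference, an API like this can be registered with a single admin-API call; a sketch, assuming a pre-1.0 Kong where APIs are added via /apis and where the uris field (which I initially left unset) tells Kong which request path to match:
curl -i -X POST http://localhost:8001/apis/ \
  --data 'name=compute' \
  --data 'uris=/compute' \
  --data 'methods=GET,POST,PUT,DELETE' \
  --data 'upstream_url=http://172.18.0.15:8006'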
This is not working for me: when I try to access the API through localhost, the request times out with a 499 error. Any help is highly appreciated.
EDIT: I've tried adding my API by setting the URI field instead. I've inspected the logs using docker logs kong and it looks like Kong is definitely hitting the correct address, but it is timing out for some unknown reason:
2017/12/28 17:55:25 [error] 47#0: *103 upstream timed out (110: Connection timed out) while connecting to upstream, client: 172.17.0.1, server: kong, request: "GET /compute?origin=20,20&destination=20,20 HTTP/1.1", upstream: "http://172.18.0.15:8006/compute?origin=20,20&destination=20,20", host: "localhost:8000"
However, if I access the same address using curl it works just fine:
# curl http://172.18.0.15:8006/compute?origin=20,20\&destination=20,20
[]
I highly doubt anybody else will encounter the same issue, but the problem was in my Docker configuration: I forgot to put the API gateway on the same subnet as the services. Fixing this solved my issue.
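For anyone else who ends up here, the fix amounts to attaching Kong and the services to the same user-defined network in docker-compose; a minimal sketch, with illustrative service, image and network names:
version: "3"
services:
  kong:
    image: kong
    networks:
      - gateway-net
  compute-service:          # illustrative name for the service at 172.18.0.15:8006
    image: my-compute-api   # illustrative image name
    networks:
      - gateway-net
networks:
  gateway-net:
    driver: bridge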
When I execute the google-ads Python sample code, I get the following error:
DEBUG:google.auth.transport.requests:Making request: POST https://accounts.google.com/o/oauth2/token
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): accounts.google.com:443
DEBUG:urllib3.connectionpool:https://accounts.google.com:443 "POST /o/oauth2/token HTTP/1.1" 200 None
E0421 09:57:53.365121806 21019 ssl_transport_security.cc:1455] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
INFO:google.ads.googleads.client:Request
-------
Method: /google.ads.googleads.v6.services.GoogleAdsService/SearchStream
Host: googleads.googleapis.com
Headers: {
"developer-token": "REDACTED",
"x-goog-api-client": "gl-python/3.8.6 grpc/1.37.0 gax/1.26.3",
"x-goog-request-params": "customer_id="
}
Request: query: "\n SELECT\n campaign.id,\n campaign.name\n FROM campaign\n ORDER BY campaign.id"
Response
-------
Headers: {}
Fault: {
  "created": "@1619013473.365323139",
  "description": "Failed to pick subchannel",
  "file": "src/core/ext/filters/client_channel/client_channel.cc",
  "file_line": 5419,
  "referenced_errors": [
    {
      "created": "@1619013473.365317068",
      "description": "failed to connect to all addresses",
      "file": "src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc",
      "file_line": 397,
      "grpc_status": 14
    }
  ]
}
I am behind a corporate network that uses its own certificate for every Internet connection.
Our servers trust the internal certificates. I replaced the CA certs in certifi with our CA certs and urllib3 connects fine to both accounts.google.com and googleads.googleapis.com.
curl to both URLs works fine as well.
From the error above it looks like urllib3 connected fine but gRPC had an issue. Given that curl works, the OS CA certs are fine, so where does gRPC pick up its CA certs from, so that I can add our corporate issuer certs there?
google-ads: 10.0.0 / python: 3.8.0 / RHEL7
If the corporate network has a proxy that swaps the certificates, CERTIFICATE_VERIFY_FAILED might occur. You may want to supply your company's root certificates via https://grpc.github.io/grpc/python/grpc.html#create-client-credentials. I'm not sure how to do it with google-auth; you may want to post your question there: https://github.com/googleapis/google-auth-library-python
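For the plain-gRPC side of that link, a minimal sketch, assuming the corporate issuer chain has been exported to corporate_ca.pem (how to thread these credentials through the google-ads client itself is the part to ask about in the google-auth repo):
import grpc

# Load the corporate root/issuer certificates (the path is an assumption).
with open("corporate_ca.pem", "rb") as f:
    corporate_roots = f.read()

# Build channel credentials that trust the corporate CA.
creds = grpc.ssl_channel_credentials(root_certificates=corporate_roots)
channel = grpc.secure_channel("googleads.googleapis.com:443", creds)

# Alternative without touching code: gRPC core also honours the
# GRPC_DEFAULT_SSL_ROOTS_FILE_PATH environment variable, e.g.
#   export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/path/to/corporate_ca.pem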
I am trying to mock an external (REST) server used by my system under test.
I am choosing MockServer (http://www.mock-server.com/) for mocking the external REST server.
I am running MockServer standalone, as in:
$ java -jar ./mockserver-netty-5.3.0-jar-with-dependencies.jar \
    -serverPort 1080 -proxyPort 1090 -proxyRemotePort 80 -proxyRemoteHost www.mock-server.com
2018-05-23 14:05:57,703 INFO o.m.m.MockServer MockServer started on port: 1080
2018-05-23 14:05:57,747 INFO o.m.p.d.DirectProxy MockServer started on port: 1090
I am not sure, having read the documentation, where I should define the expectations (viz., the responses the mock should yield based on incoming requests).
Can anyone explain?
Thanx,
R
It can be done with a PUT request, i.e.:
curl -v -X PUT "http://localhost:1080/expectation" -d '{
  "httpRequest" : {
    "path" : "/some/path"
  },
  "httpResponse" : {
    "body" : "some_response_body"
  }
}'
More info at https://www.mock-server.com/mock_server/creating_expectations.html; look at the REST API type of expectation.
I used Postman to create the expectation. To create expectations, send a PUT request to http://localhost:portnumber/mockserver/expectation.
You can check the expectations and logs using the URL http://localhost:portnumber/mockserver/dashboard in the browser.
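The same PUT can also be scripted instead of going through Postman; a small sketch, assuming MockServer is listening on localhost:1080 and using the /mockserver/expectation path mentioned above:
import requests

# Expectation body copied from the curl example above.
expectation = {
    "httpRequest": {"path": "/some/path"},
    "httpResponse": {"body": "some_response_body"},
}

resp = requests.put("http://localhost:1080/mockserver/expectation", json=expectation)
print(resp.status_code)  # MockServer returns 201 when the expectation is created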
I have a situation where I want to create a large number of entities in Orion. I am using the Docker versions of Orion and MongoDB with this docker-compose file:
version: "3"
services:
  mongo:
    image: mongo:3.4
    volumes:
      - /data/docker-mongo/db:/data/db
      - /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
    command: --nojournal
  orion:
    image: fiware/orion
    volumes:
      - /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
    links:
      - mongo
    ports:
      - "1026:1026"
    command: -dbhost mongo
Now the problem happens when I want to upload 2000 entities (opening a new connection for each; I know it can be done differently, but for now that is the requirement). I successfully create no more than about 600 of them (never the exact same number); the rest fail to create with this error:
"error": {
"errno": "ECONNRESET",
"code": "ECONNRESET",
"syscall": "read"
},
So I assume this issue has something to do with the maxConnections, reqPoolSize, etc. settings in Orion. But in Docker I failed to locate the Orion config file, and I have no way of knowing, when I type commands like contextBroker -maxConnections 123456, whether that setting is actually accepted by Orion inside the container.
Also, the Orion log is empty, and I cannot determine what is causing this issue when Orion is running in Docker.
So the main questions are:
Can Orion running in Docker be used in the same manner as Orion running on a VM (or are there some drawbacks)?
And how do I debug this problem when Orion is running in Docker? I have read a lot of docs/issues but had no luck (or I missed something).
If you have some advice/solution it would really help.
Thanks
{
"orion" : {
"version" : "1.13.0-next",
"uptime" : "2 d, 15 h, 46 m, 34 s",
"git_hash" : "ae72acf9e8eeaacaf4eb138f7de37bfee4514c6b",
"compile_time" : "Fri May 4 10:12:18 UTC 2018",
"compiled_by" : "root",
"compiled_in" : "1901fd6bb51a",
"release_date" : "Fri May 4 10:12:18 UTC 2018",
"doc" : "https://fiware-orion.readthedocs.org/en/master/"
}
}
{ Error: socket hang up
at createHangUpError (_http_client.js:313:15)
at Socket.socketOnEnd (_http_client.js:416:23)
at Socket.emit (events.js:187:15)
at endReadableNT (_stream_readable.js:1090:12)
at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' }
error:
{ Error: connect ECONNREFUSED ipofvirtualm:1026
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'read',
address: 'ipofvm',
port: 1026 },
options:
{ method: 'POST',
uri: 'http://ip:1026/v2/entities?options=keyValues',
headers:
{ 'Fiware-Service': 'some service',
'Fiware-ServicePath': 'some servicepath' },
body:
{ id: 'F0B935',
type: 'Transaction',
refEmitter: 'F0B935',
refReceiver: '7501JXG',
refCapturer: 'testtdata',
date: '12/12/2017 13:25',
refTransferredResources: 'testtdata',
transferredLoad: 92 },
json: true,
callback: [Function: RP$callback],
transform: undefined,
simple: true,
resolveWithFullResponse: false,
transform2xxOnly: false },
I am using the request-promise library for making the calls; I tried others and they had the same issue. Since I cannot send you all 2000 responses, I will try to describe the behaviour: when I start sending, it creates around 30 entities, then the next few (or more) requests return ECONNRESET, then it starts creating again, and so on.
What confuses me is that it is not failing completely; it works, but not as intended. It seems that Orion closes or hangs up the socket for some period, then it is open again and creates entities as normal, and so on. If you need any more info just ask, and thanks for the quick answer.
Instead of opening a new connection per entity, why don't you use
POST /v2/op/update
and create all entities in just one batch, or a couple of batches?
See some code at
https://github.com/Fiware/dataModels/blob/master/Weather/WeatherObserved/harvest/spain_weather_observed_harvest.py#L235
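For illustration, a minimal sketch of that batch call, assuming NGSIv2's POST /v2/op/update with actionType append and the same headers and keyValues format as the failing per-entity requests:
import requests

ORION = "http://localhost:1026"
HEADERS = {
    "Fiware-Service": "some service",
    "Fiware-ServicePath": "some servicepath",
}

def create_in_batches(entities, batch_size=100):
    """Create entities in chunks via /v2/op/update instead of one request each."""
    for i in range(0, len(entities), batch_size):
        payload = {
            "actionType": "append",
            "entities": entities[i:i + batch_size],
        }
        resp = requests.post(
            ORION + "/v2/op/update",
            params={"options": "keyValues"},
            json=payload,
            headers=HEADERS,
        )
        resp.raise_for_status()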
With regards to CLI argument passing to the CB running inside Docker, use the command line in the docker-compose file, e.g.:
command: -dbhost mongo -maxConnections 123456
However, I'm not sure that would help to solve the problem, as Orion should deal with your use case without any special customization. Looking at the error message (which seems to be about some problem at the TCP layer), I wonder if the Docker networking layer is acting as a bottleneck in some way...
In addition, the suggestion made by Jose Manuel Cantera about using POST /v2/op/update would be a good idea. It would reduce connection stress at the network layer and may help to alleviate the problem.
If you cannot change your update strategy, maybe using an inter-request delay (100-200ms) could also help.
I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using JavaScript, to a validator running on localhost. The code for the POST request is below:
request.post({
    url: constants.API_URL + '/batches',
    body: batchListBytes,
    headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
    if (err) {
        console.log(err);
        return cb(err);
    }
    console.log(response.body);
    return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client. These are the options that I have tried:
Inject Access-Control-Allow-* headers into the response using an extension: the response still gives the same error.
Remove the custom header to bypass the preflight request: this makes the validator throw an error, as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So, I guess the OPTIONS method is not handled in the validator. A GET request for the state goes through fine when the CORS headers are added. This issue also did not occur in Sawtooth v0.8.
I am using Docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX LFS171x course. The relevant commands are below:
bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API, but which is actually fraudulent. Any time a web page tries to access an API on a different domain, that API needs to explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and why you can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
You need to set up a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this. You will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js and have the Node server proxy the API calls. If you are already running all of the Sawtooth components with docker-compose though, it might be easier to use Docker and Apache.
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This is going to do a couple of things. First it will pull down the httpd image from DockerHub, which is just a simple static server. Then we use a bit of bash to add five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy http://rest-api:8008 to the /api route, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker compose file.
Modify your docker compose file
Now, to the docker compose YAML file you are running Sawtooth through, you want to add a new property under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build your Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in /usr/local/apache2/htdocs/, so we'll use volumes to link the path to your web files on your host machine (i.e. ./path/to/web/dir/public/), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, ports will take the server, which is at port 80 inside the container, and forward it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API and validator and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on chain. More importantly you should be able to make the request from your web app:
request.post({
    url: 'api/batches',
    body: batchListBytes,
    headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should include details such as the address where the data will be saved. Here is an example which I have used and which is working fine for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - " + payload);
String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // get unique address for input and output
logger.info("Sending address as - " + address);

TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
        .setBatcherPublicKey(publicKeyHex)
        .setFamilyName(IDEM) // Idem family
        .setFamilyVersion(VER)
        .addInputs(address)
        .setNonce("1")
        .addOutputs(address)
        .setPayloadSha512(payloadBytes)
        .setSignerPublicKey(publicKeyHex)
        .build();

ByteString txnHeaderBytes = txnHeader.toByteString();
byte[] txnHeaderSignature = privateKey.signMessage(txnHeaderBytes.toString()).getBytes();
String value = Signing.sign(privateKey, txnHeader.toByteArray());

Transaction txn = Transaction.newBuilder().setHeader(txnHeaderBytes).setPayload(payloadByteString)
        .setHeaderSignature(value).build();

BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey().setSignerPublicKey(publicKeyHex)
        .addTransactionIds(txn.getHeaderSignature()).build();

ByteString batchHeaderBytes = batchHeader.toByteString();
byte[] batchHeaderSignature = privateKey.signMessage(batchHeaderBytes.toString()).getBytes();
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());

Batch batch = Batch.newBuilder()
        .setHeader(batchHeaderBytes)
        .setHeaderSignature(value_batch)
        .setTrace(true)
        .addTransactions(txn)
        .build();

BatchList batchList = BatchList.newBuilder()
        .addBatches(batch)
        .build();

ByteString batchBytes = batchList.toByteString();

String serverResponse = Unirest.post("http://localhost:8008/batches")
        .header("Content-Type", "application/octet-stream")
        .body(batchBytes.toByteArray())
        .asString()
        .getBody();
I have two simple services:
transactions-core-service and transactions-api-service.
transactions-api-service invokes transactions-core-service to return a list of transactions. transactions-api-service is enabled with a Hystrix command.
Both are registered in Eureka server with below services ids:
TRANSACTIONS-API-SERVICE n/a (1) (1) UP (1) - 192.168.2.12:transactions-api-service:8083
TRANSACTIONS-CORE-SERVICE n/a (1) (1) UP (1) - 192.168.2.12:transactions-core-service:8087
Below is Zuul server:
@SpringBootApplication
@Controller
@EnableZuulProxy
public class ZuulApplication {
    public static void main(String[] args) {
        new SpringApplicationBuilder(ZuulApplication.class).web(true).run(args);
    }
}
Zuul Configurations:
===============================================
info:
  component: Zuul Server
server:
  port: 8765
endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false
zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
      path: transactions/accounts/**
      serviceId: transactions-api-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
logging:
  level:
    ROOT: INFO
    org.springframework.web: DEBUG
===============================================
When I try to invoke transactions-api-service with the URL http://localhost:8765/transactions/accounts/123/transactions/786, I get a ZuulException:
2016-02-13 11:29:29.050  WARN 4936 --- [nio-8765-exec-1] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering

com.netflix.zuul.exception.ZuulException: Forwarding error
    at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:131) ~[spring-cloud-netflix-core-1.1.0.M3.jar:1.1.0.M3]
    at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:76) ~[spring-cloud-netflix-core-1.1.0.M3.jar:1.1.0.M3]
    ......
If I invoke transactions-api-service individually (with localhost/accounts/123/transactions/786), it works fine.
Am I missing any configuration on Zuul?
You need to change the Zuul execution timeout by adding this property in the application.yml of the Zuul server:
# Increase the Hystrix timeout to 60s (globally)
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 60000
Please refer to this thread on netflix issues: https://github.com/spring-cloud/spring-cloud-netflix/issues/321
Faced the same issue. In my case, Zuul was using service discovery. As a solution, the below configuration worked like a charm.
ribbon.ReadTimeout=60000
Reference to the property usage is here.
You have an incorrect indentation. Instead of:
zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
    path: transactions/accounts/**
    serviceId: transactions-api-service
It should be:
zuul:
  ignoredServices: "*"
  routes:
    transactions-api-service:
      path: transactions/accounts/**
      serviceId: transactions-api-service
You can use this to avoid the 500 error:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=1000000
zuul.host.connect-timeout-millis=10000
zuul.host.socket-timeout-millis=1000000
In case your Zuul gateway uses a discovery service for service lookup, you can disable the Hystrix timeout or increase it as below:
# Disable Hystrix timeout globally (for all services)
hystrix.command.default.execution.timeout.enabled: false
# To disable the timeout for a particular service:
hystrix.command.<serviceName>.execution.timeout.enabled: false
# Increase the Hystrix timeout to 60s (globally)
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
# Increase the Hystrix timeout to 60s (per service)
hystrix.command.<serviceName>.execution.isolation.thread.timeoutInMilliseconds: 60000
I was having the same issue with the Zuul server; it got resolved with the below properties.
Let's say you have 2 clients, clientA and clientB,
so for clientA, spring.application.name=clientA and server.port=1111,
and for clientB, spring.application.name=clientB and server.port=2222, in their respective application.properties files.
You want to connect these 2 services to the Zuul server, which is running on port 8087.
Add the below properties in your Zuul server's application.properties file:
spring.application.name=gateway-service
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
clientA.ribbon.listOfServers=http://localhost:1111
clientB.ribbon.listOfServers=http://localhost:2222
server.port=8087
Note: I am using a Eureka client with my Zuul server; you can skip that part. Adding this solution in case it's helpful for someone.