Vaadin 23 behind Traefik reverse proxy

I'm looking for an example of a Traefik TOML config file to serve a newly created Vaadin app. My current config is:
[http]
  # Add the router
  [http.routers]
    [http.routers.esus]
      rule = "Host(`app.acme.com`)"
      service = "esus"
      entrypoints = ["web", "websecure"]
      passHostHeader = true
      [http.routers.esus.tls]
        certResolver = "letsencrypt"
  # Add the service
  [http.services]
    [http.services.esus.loadbalancer]
      [[http.services.esus.loadbalancer.servers]]
        url = "http://srvdock01:8086"
With that, I get a blank screen. I suppose that it is due to the header forwarding configuration, or maybe WebSockets.
Some help would be really appreciated.
Regards,
Alex
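(For reference: in Traefik v2, passHostHeader belongs under the service's loadBalancer, not the router, and Traefik proxies WebSocket upgrades automatically once a route matches. A minimal sketch of the adjusted dynamic config, assuming the same host, service name, and letsencrypt resolver as in the question:)
[http]
  [http.routers]
    [http.routers.esus]
      rule = "Host(`app.acme.com`)"
      service = "esus"
      entryPoints = ["web", "websecure"]
      [http.routers.esus.tls]
        certResolver = "letsencrypt"
  [http.services]
    [http.services.esus.loadBalancer]
      passHostHeader = true
      [[http.services.esus.loadBalancer.servers]]
        url = "http://srvdock01:8086"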

Related

Hyperledger Sawtooth - Preflight error while submitting transaction

I am trying to submit a transaction to Hyperledger Sawtooth v1.0.1 using JavaScript to a validator running on localhost. The code for the POST request is as below:
const request = require('request'); // assumes the 'request' npm package

request.post({
    url: constants.API_URL + '/batches',
    body: batchListBytes,
    headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
    if (err) {
        console.log(err);
        return cb(err);
    }
    console.log(response.body);
    return cb(null, response.body);
});
The transaction gets processed when submitted from a backend Node.js application, but it returns an OPTIONS http://localhost:8080/batches 405 (Method Not Allowed) error when submitted from the client. These are the options that I have tried:
Inject Access-Control-Allow-* headers into the response using an extension: the response still gives the same error.
Remove the custom header to bypass the preflight request: this makes the validator throw an error as shown:
...
sawtooth-rest-api-default | KeyError: "Key not found: 'Content-Type'"
sawtooth-rest-api-default | [2018-03-15 08:07:37.670 ERROR web_protocol] Error handling request
sawtooth-rest-api-default | Traceback (most recent call last):
...
The unmodified POST request from the browser gets the following response headers from the validator:
HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain; charset=utf-8
Allow: GET,HEAD,POST
Content-Length: 23
Date: Thu, 15 Mar 2018 08:42:01 GMT
Server: Python/3.5 aiohttp/2.3.2
So, I guess the OPTIONS method is not handled in the validator. A GET request for the state goes through fine when the CORS headers are added. This issue also did not occur in Sawtooth v0.8.
I am using Docker to start the validator, and the commands to start it are a slightly modified version of those given in the LinuxFoundationX LFS171x course. The relevant commands are below:
bash -c \"\
  sawadm keygen && \
  sawtooth keygen my_key && \
  sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
  sawadm genesis config-genesis.batch && \
  sawtooth-validator -vv \
    --endpoint tcp://validator:8800 \
    --bind component:tcp://eth0:4004 \
    --bind network:tcp://eth0:8800 \
\"
Can someone please guide me as to how to solve this problem?
CORS issues are always the best.
What is CORS?
Your browser is trying to protect users from being directed to a page they think is the frontend for an API, but is actually fraudulent. Anytime a web page tries to access an API on a different domain, that API needs to explicitly give the web page permission, or the browser will block the request. This is why you can query the API from Node.js (no browser), and can put the REST API address directly into your address bar (same domain). However, trying to go from localhost:3000 to localhost:8008, or from file://path/to/your/index.html to localhost:8008, is going to get blocked.
Why doesn't the Sawtooth REST API handle OPTIONS requests?
The Sawtooth REST API does not know the domain you are going to run your web page from, so it can't whitelist it explicitly. It is possible to whitelist all domains, but this obviously destroys any protection CORS might give you. Rather than try to weigh the costs and benefits of this approach for all Sawtooth users everywhere, the decision was made to make the REST API as lightweight and security agnostic as possible. Any developer using it would be expected to put it behind a proxy server, and they can make whatever security decisions they need on that proxy layer.
So how do you fix it?
You need to set up a proxy server that will put the REST API and your web page on the same domain. There is no quick configuration option for this. You will have to set up an actual server. Obviously there are lots of ways to do this. If you are already familiar with Node, you could serve the page from Node.js and have the Node server proxy the API calls (see the sketch below). If you are already running all of the Sawtooth components with docker-compose though, it might be easier to use Docker and Apache.
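(A hypothetical sketch of that Node alternative, assuming the express and http-proxy-middleware npm packages and the REST API on its default port 8008; names like server.js and ./public are placeholders:)
// server.js: serve the web page and proxy the REST API from one origin
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Serve the web app's static files (placeholder directory)
app.use(express.static('public'));

// Forward /api/* to the Sawtooth REST API so browser requests stay same-origin
app.use('/api', createProxyMiddleware({
    target: 'http://localhost:8008',
    changeOrigin: true,
    pathRewrite: { '^/api': '' } // strip the /api prefix before forwarding
}));

app.listen(3000, () => console.log('Serving on http://localhost:3000'));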
Setting up an Apache Proxy with Docker
Create your Dockerfile
In the same directory as your web app create a text file called "Dockerfile" (no extension). Then make it look like this:
FROM httpd:2.4
RUN echo "\
LoadModule proxy_module modules/mod_proxy.so\n\
LoadModule proxy_http_module modules/mod_proxy_http.so\n\
ProxyPass /api http://rest-api:8008\n\
ProxyPassReverse /api http://rest-api:8008\n\
RequestHeader set X-Forwarded-Path \"/api\"\n\
" >>/usr/local/apache2/conf/httpd.conf
This is going to do a couple of things. First it will pull down the httpd image from DockerHub, which is just a simple static file server. Then we use a bit of bash to append five lines to Apache's configuration file. These five lines import the proxy modules, tell Apache that we want to proxy the /api route to http://rest-api:8008, and set the X-Forwarded-Path header so the REST API can properly build response URLs. Make sure that rest-api matches the actual name of the Sawtooth REST API service in your docker-compose file.
Modify your docker compose file
Now, in the docker-compose YAML file you are running Sawtooth through, you want to add a new property under the services key:
services:
  my-web-page:
    build: ./path/to/web/dir/
    image: my-web-page
    container_name: my-web-page
    volumes:
      - ./path/to/web/dir/public/:/usr/local/apache2/htdocs/
    expose:
      - 80
    ports:
      - '8000:80'
    depends_on:
      - rest-api
This will build your Dockerfile located at ./path/to/web/dir/Dockerfile (relative to the docker compose file), and run it with its default command, which is to start up Apache. Apache will serve whatever files are located in /usr/local/apache2/htdocs/, so we'll use volumes to link the path to your web files on your host machine (i.e. ./path/to/web/dir/public/), to that directory in the container. This is basically an alias, so if you update your web app later, you don't need to restart this docker container to see the changes. Finally, ports will take the server, which is at port 80 inside the container, and forward it out to localhost:8000.
Running it all
Now you should be able to run:
docker-compose -f path/to/your/compose-file.yaml up
And it will start up your Apache server along with the Sawtooth REST API, the validator, and any other services you defined. If you go to http://localhost:8000, you should see your web page, and if you go to http://localhost:8000/api/blocks, you should see a JSON representation of the blocks on chain. More importantly, you should be able to make the request from your web app:
request.post({
    url: 'api/batches',
    body: batchListBytes,
    headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => console.log(response));
Whew. Sorry for the long response, but I'm not sure if it is possible to solve CORS any faster. Hopefully this helps.
The transaction header should include details such as the state address where the data will be saved. Here is an example which I have used and which works fine for me:
String payload = "create,0001,BLockchain CPU,Black,5000";
logger.info("Sending payload as - " + payload);
String payloadBytes = Utils.hash512(payload.getBytes()); // fix for invalid payload serialization
ByteString payloadByteString = ByteString.copyFrom(payload.getBytes());
String address = getAddress(IDEM, ITEM_ID); // get unique address for input and output
logger.info("Sending address as - " + address);
TransactionHeader txnHeader = TransactionHeader.newBuilder().clearBatcherPublicKey()
        .setBatcherPublicKey(publicKeyHex)
        .setFamilyName(IDEM) // Idem family
        .setFamilyVersion(VER)
        .addInputs(address)
        .setNonce("1")
        .addOutputs(address)
        .setPayloadSha512(payloadBytes)
        .setSignerPublicKey(publicKeyHex)
        .build();
ByteString txnHeaderBytes = txnHeader.toByteString();
// Sign the raw header bytes; this signature becomes the transaction ID
String value = Signing.sign(privateKey, txnHeader.toByteArray());
Transaction txn = Transaction.newBuilder()
        .setHeader(txnHeaderBytes)
        .setPayload(payloadByteString)
        .setHeaderSignature(value)
        .build();
BatchHeader batchHeader = BatchHeader.newBuilder().clearSignerPublicKey()
        .setSignerPublicKey(publicKeyHex)
        .addTransactionIds(txn.getHeaderSignature())
        .build();
ByteString batchHeaderBytes = batchHeader.toByteString();
String value_batch = Signing.sign(privateKey, batchHeader.toByteArray());
Batch batch = Batch.newBuilder()
        .setHeader(batchHeaderBytes)
        .setHeaderSignature(value_batch)
        .setTrace(true)
        .addTransactions(txn)
        .build();
BatchList batchList = BatchList.newBuilder()
        .addBatches(batch)
        .build();
ByteString batchBytes = batchList.toByteString();
String serverResponse = Unirest.post("http://localhost:8008/batches")
        .header("Content-Type", "application/octet-stream")
        .body(batchBytes.toByteArray())
        .asString()
        .getBody();

Yaws code inside <erl></erl> not running

I am trying out Yaws; however, I have run into a bump. The code inside my .yaws file is not being executed when I go to its path; instead, it is printed verbatim in the browser window. Here is my code and configuration:
<erl>
method(Arg) ->
    Rec = Arg#arg.req,
    Rec#http_request.method.

out(Arg) ->
    {ehtml, f("Method: ~s", [method(Arg)])}.
</erl>
Server configuration:
<server localhost>
    port = 8000
    listen = 127.0.0.1
    docroot = /home/something/
    dir_listings = true
    dav = true
    auth_log = true
    statistics = true
</server>
Any info would really be appreciated, thank you.
The problem is that you have dav = true in your server configuration, which turns on WebDAV, a protocol for content management. Under this configuration, a .yaws file is treated as just a regular file, not as one that requires special Yaws processing, which is why you see the verbatim contents of the file when you access it via your browser.
Removing dav = true from your configuration and then restarting Yaws will make it process your example.yaws file as you expect.
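With that change, the server block would look like this (the question's configuration minus the dav line):
<server localhost>
    port = 8000
    listen = 127.0.0.1
    docroot = /home/something/
    dir_listings = true
    auth_log = true
    statistics = true
</server>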

Where does stomp_interface come from?

In order to enable HTTPS communications between OpsCenter and DSE nodes, I have to set stomp_interface to opscenter.mydomain.com in /var/lib/datastax-agent/conf/address.yaml on each node. (After the fix, I no longer have to do this.)
Whenever I do a configure job from OpsCenter, it changes this stomp_interface value back to nn.nn.nn.nn. (After the fix, it still does this, but it doesn't break the agent HTTP communications anymore.)
Where does this parameter come from? Can I set it on the OpsCenter node in the /etc/opscenter/clusters/cluster_name.conf file?
Is it part of the [agents] section?
What is the parameter name and value that I should be adding?
opscenterd.conf is now as follows (the fix was to add the incoming_interface line):
# opscenterd.conf
[webserver]
port = 8888
interface = 0.0.0.0
ssl_keyfile = /var/lib/opscenter/ssl/opscenter.key
ssl_certfile = /var/lib/opscenter/ssl/opscenter.pem
ssl_port = 8443
[authentication]
enabled = True
[stat_reporter]
[agents]
use_ssl = true
incoming_interface = opscenter.mydomain.com
address.yaml before fix:
use_ssl: 1
stomp_interface: 1.2.3.4  # the opscenter external IP; opscenter.mydomain.com also works
stomp_port: 61620
local_interface: 2.3.4.5  # the external IP for this cluster node
agent_rpc_interface: 0.0.0.0
agent_rpc_broadcast_address: 2.3.4.5
poll_period: 60
disk_usage_update_period: 60
rollup_rate: 200
rollup_rate_unit: second
jmx_host: 127.0.0.1
jmx_port: 7199
jmx_user: someuser
jmx_pass: somepassword
status_reporting_interval: 20
ec2_metadata_api_host: 169.254.169.254
metrics_enabled: true
jmx_metrics_threadpool_size: 5
hosts: ["2.3.4.5", "3.4.5.6", "4.5.6.7", "5.6.7.8"]
cassandra_port: 9042
thrift_port: 9160
cassandra_user: someuser
cassandra_pass: somepassword
runs_sudo: true
cassandra_install_location: /usr/share/dse
cassandra-conf: /etc/dse/cassandra/cassandra.yaml
cassandra_binary_location: /usr/bin
cassandra_conf_location: /etc/dse/cassandra
dse_env_location: /etc/dse
dse_binary_location: /usr/bin
dse_conf_location: /etc/dse
spark_conf_location: /etc/dse/spark
monitored_cassandra_user: someuser
monitored_cassandra_pass: somepassword
tcp_response_timeout: 120000
pong_timeout_ms: 120000
cluster_name.conf (I updated the seed_hosts to match those in the address.yaml hosts config in order to satisfy a Best Practices alert that they should all be the same):
[destinations]
active =
[kerberos]
default_service =
opscenterd_client_principal =
opscenterd_keytab_location =
agent_keytab_location =
agent_client_principal =
[agents]
ssl_keystore_password =
ssl_keystore =
[jmx]
password = somepassword
port = 7199
username = someuser
[cassandra]
ssl_truststore_password =
cql_port = 9042
seed_hosts = 2.3.4.5, 3.4.5.6, 4.5.6.7, 5.6.7.8
username = someuser
password = somepassword
ssl_keystore_password =
ssl_keystore =
ssl_truststore =
Based on your comment asking for further information, I figured it out.
I added incoming_interface = opscenter.mydomain.com to the [agents] section of opscenterd.conf. (That wasn't present before markc's comment.)
I restarted the opscenterd service.
Next, I was able to go back to OpsCenter LifeCycle Manager and do a fresh Install and Configure on the cluster, and all of the job steps completed successfully.
(Note: Don't change the rack names on nodes from what they were before, and select autoBootStrap = true on the Configure / Install requests.)
The datastax-agents are fully Up and Active. After the Configure and Install, the address.yaml files contained the public IP address of the OpsCenter node as the stomp_interface. (I changed one stomp_interface manually to be opscenter.mydomain.com, and that also works.)
I will also edit the question and post the requested information.
Thanks markc!

Can't connect Java client to MarkLogic database

I've just installed a MarkLogic NoSQL database out of the box on a Windows machine.
I wrote a simple Java client to put data into the database, but I get this error:
org.apache.http.conn.HttpHostConnectException: Connection to http://my.caci.local:8003 refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:158)
The MarkLogic database is started. This is the code:
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8003, "admin", "admin",
        Authentication.DIGEST);
XMLDocumentManager docMgr = client.newXMLDocumentManager();
BinaryDocumentManager binMgr = client.newBinaryDocumentManager();
DOMHandle handle = new DOMHandle();
for (int i = 0; i < AANT_PERSONEN; i++) {
    Document document = createDocument(i);
    String docId = "/zaak/" + 20;
    handle.set(document);
    docMgr.write(docId, handle);
}
....
The MarkLogic console reports the following ports to be active on my.caci.local:
Default :: Admin : 8001 [HTTP]
Default :: App-Services : 8000 [HTTP]
Default :: HealthCheck : 7997 [HTTP]
Default :: Manage : 8002 [HTTP]
I'm new to MarkLogic, and this is my question: what port should I use to connect from my Java client?
In agreement with MystyxMac, I notice the console does not report a REST server on 8003.
Here's the documentation for setting up a REST server:
http://docs.marklogic.com/guide/rest-dev/intro#id_97899
You should also add users for the rest-reader, rest-writer, and rest-admin roles.
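(As an illustration, a REST API instance on port 8003 could be created through the Management API on port 8002; a hedged sketch, assuming default admin credentials and a hypothetical instance name my-rest-api:)
curl --digest -u admin:admin -X POST \
    -H "Content-Type: application/json" \
    -d '{"rest-api": {"name": "my-rest-api", "port": "8003"}}' \
    http://localhost:8002/v1/rest-apis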
Hoping that helps,
Erik Hennum
For testing purposes you can simply switch the port you are using to 8000.
From the documentation:
When you install MarkLogic Server, a pre-configured REST API instance is available on port 8000. This instance uses the Documents database as the content database and the Modules database as the modules database.
The instance on port 8000 is convenient for getting started, but you will usually create a dedicated instance for production purposes.
http://docs.marklogic.com/guide/rest-dev/service#id_15309
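Applied to the code from the question, only the port argument changes (a minimal sketch; the rest of the snippet stays the same):
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000, "admin", "admin",
        Authentication.DIGEST);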

uWSGI as a standalone HTTP server with Lua

I'm trying to set up uWSGI to run as a standalone server running a simple Lua script (right now, as a POC, using the hello world example from http://uwsgi-docs.readthedocs.org/en/latest/Lua.html).
Here's my uwsgi.ini file:
[uwsgi]
master = true
workers = 1
threads = 8
listen = 4096
max-request = 512
pidfile = /uwsgi/logs/uwsgi.pid
procname-master = uWSGI master
auto-procname = true
lua = /uwsgi/hello.lua
socket-timeout = 30
socket = /uwsgi/uwsgi_1.sock
http = 127.0.0.1:80
http-to = /uwsgi/uwsgi_1.sock
When sending a web request, an empty response is received, and the uWSGI process outputs:
-- unavailable modifier requested: 0 --
I've read that this usually means a plugin is missing; however, the Lua plugin is installed, and when doing the same through NGINX everything works fine, which means there's no problem loading Lua.
Any help please?
Thanks.
Somebody told me I had to add http-modifier1 = 6, and now it works.
I still don't understand what the '6' means, but whatever.
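(For context: modifier1 tells uWSGI which plugin should handle a request; the default, 0, is the Python/WSGI modifier, which explains the "unavailable modifier requested: 0" message, while 6 is the Lua WSAPI modifier. A sketch of the relevant end of the working uwsgi.ini, assuming everything else stays as in the question:)
http = 127.0.0.1:80
http-to = /uwsgi/uwsgi_1.sock
http-modifier1 = 6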
