Spring Cloud Data Flow http post errors

While trying the getting-started demo from the Spring Cloud Data Flow website, I ran the Spring Cloud Data Flow local server 1.4.0.RELEASE in debug mode in IDEA, started the SCDF shell from the command line (Windows), and executed the following steps in the shell:
app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-rabbit:1.2.0.RELEASE
app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-rabbit:1.1.0.RELEASE
stream create --name httptest --definition "http --server.port=9000 | log" --deploy
http post --target http://localhost:9000 --data "hello world"
Steps 1-3 were fine, but when I ran step 4, I kept getting error messages like:
{"exception":"java.nio.charset.UnsupportedCharsetException",
"path":"/",
"error":"Internal Server Error",
"Message":"x-ibm1166",
"timestamp":1524899078020,
"status":500
}
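The exception suggests the Windows console codepage (x-ibm1166) is being picked up as the request charset and the server cannot map it. A hedged workaround, assuming your SCDF shell version supports the --contentType option on http post, is to force an explicit UTF-8 content type:
http post --target http://localhost:9000 --data "hello world" --contentType "text/plain;charset=UTF-8"
Launching the shell JVM with an explicit default encoding may also help:
java -Dfile.encoding=UTF-8 -jar spring-cloud-dataflow-shell-1.4.0.RELEASE.jar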

Related

Fail to publish telemetry data to ThingsBoard Cloud

I am new to ThingsBoard and am using a ThingsBoard free trial account. I am running into a problem when I try to publish telemetry data to ThingsBoard Cloud via MQTT.
I followed the document below:
https://thingsboard.io/docs/paas/reference/mqtt-api/#telemetry-upload-api
I created a device on ThingsBoard and copied the access token from the device, then tried to send telemetry data with mqtt and mosquitto_pub, but I get the following errors.
With MQTT:
mqtt pub -v -q 1 -h "mqtt.thingsboard.cloud" -t "v1/devices/me/telemetry" -u 'aahhahhhahh' -m "{"temperature":42}"
Error: read ECONNRESET
at TCP.onread (net.js:659:25) errno: 'ECONNRESET', code: 'ECONNRESET', syscall: 'read' }
With mosquitto_pub:
mosquitto_pub -d -q 1 -h "mqtt.thingsboard.cloud" -t "v1/devices/me/telemetry" -u " aahhahhhahh " -m "{"temperature":42}"
Client null sending CONNECT
Error: The connection was lost.
I have tried with the MQTTBox Chrome plugin as well (MQTTBox screenshot omitted). It looks like it connects to ThingsBoard Cloud but fails to publish the message to the topic.
Can anyone help me understand why publishing messages to the device fails?
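One thing worth ruling out first: in both commands the JSON payload is wrapped in double quotes while itself containing double quotes, so the shell strips the inner quotes before the client sends anything, and the token passed to mosquitto_pub has stray spaces around it. A minimal corrected sketch (1883 is the standard MQTT port; the token value is the placeholder from the question):
# Device access token copied from ThingsBoard (placeholder value).
ACCESS_TOKEN="aahhahhhahh"
# Single-quote the JSON so the inner double quotes survive the shell,
# and pass the token with no surrounding whitespace.
mosquitto_pub -d -q 1 -h mqtt.thingsboard.cloud -p 1883 \
    -t v1/devices/me/telemetry -u "$ACCESS_TOKEN" -m '{"temperature":42}'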

Cannot connect to NLU in Docker container

I am trying to run Botpress with Docker. My Dockerfile is as follows:
FROM botpress/server:v11_9_5
ADD . /botpress
WORKDIR /botpress
CMD ["./bp"]
After building the image, I run docker run my_image:latest to start Botpress. However, it cannot connect to the Duckling server.
According to the log:
03:20:32.917 Mod[nlu] Couldn't reach the Duckling server, so it will be disabled.
For more informations (or if you want to self-host it), please check the docs at
https://botpress.io/docs/build/nlu/#system-entities
[Error, connect ECONNREFUSED 127.0.0.1:8000]
STACK TRACE
Error: connect ECONNREFUSED 127.0.0.1:8000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1158:14)
My nlu.json settings are as follows:
{
  "$schema": "../../assets/modules/nlu/config.schema.json",
  "confidenceTreshold": 0.7,
  "ducklingURL": "https://duckling.botpress.io",
  "ducklingEnabled": true,
  "autoTrainInterval": "30s",
  "preloadModels": false,
  "languageModel": "en",
  "fastTextOverrides": {}
}
Duckling is bundled with Botpress when using the Docker image (and is expected to be started when you start Botpress). There is an environment variable which tells it to use the local version of duckling.
If you run the image directly, both processes are started at the same time.
There are a couple of examples on how to run both of them here: https://github.com/botpress/botpress/tree/master/examples/docker-compose
Basically:
command: bash -c "./duckling -p 8000 & ./bp"
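If you would rather keep the Dockerfile from the question, the same one-liner can go into its CMD, or be supplied at run time; a minimal sketch, assuming the duckling binary sits next to bp inside the image as the answer implies:
# In the Dockerfile, replace CMD ["./bp"] with:
CMD ["bash", "-c", "./duckling -p 8000 & ./bp"]
# Or override the command without rebuilding the image:
docker run my_image:latest bash -c "./duckling -p 8000 & ./bp"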

Unable to read Pub/Sub messages from local emulator in Apache Beam

I am trying to run a simple Apache Beam pipeline with the DirectRunner that reads from a Pub/Sub subscription and writes the messages to disk.
The pipeline works fine when I run it against GCP, however when I try to run it against my local Pub/Sub emulator, it doesn't seem to be doing anything.
I am using a custom Options class that extends the org.apache.beam.sdk.io.gcp.pubsub.PubsubOptions class.
public interface Options extends PubsubOptions {
    @Description("Pub/Sub subscription to read the input from")
    @Required
    ValueProvider<String> getInputSubscription();

    void setInputSubscription(ValueProvider<String> valueProvider);
}
The pipeline is quite simple:
pipeline
    .apply("Read Pub/Sub Messages", PubsubIO.readMessagesWithAttributes()
        .fromSubscription(options.getInputSubscription()))
    .apply("Add a fixed window", Window.into(FixedWindows.of(Duration.standardSeconds(WINDOW_SIZE))))
    .apply("Convert Pub/Sub To String", new PubSubMessageToString())
    .apply("Write Pub/Sub messages to local disk", new WriteOneFilePerWindow());
The pipeline is executed with the following options:
mvn compile exec:java \
    -Dexec.mainClass=DefaultPipeline \
    -Dexec.cleanupDaemonThreads=false \
    -Dexec.args=" \
    --project=my-project \
    --inputSubscription=projects/my-project/subscriptions/my-subscription \
    --pubsubRootUrl=http://127.0.0.1:8681 \
    --runner=DirectRunner"
I am using this Pub/Sub emulator docker image and executing it with the following command:
docker run --rm -ti -p 8681:8681 -e PUBSUB_PROJECT1=my-project,topic:my-subscription marcelcorso/gcloud-pubsub-emulator:latest
Is there more configuration required to make this work?
It turns out that an Apache Beam pipeline is unable to read from a local Pub/Sub emulator if you have the GOOGLE_APPLICATION_CREDENTIALS environment variable set.
Once I removed this environment variable, which was pointing to a GCP service account, the pipeline worked seamlessly with the local Pub/Sub emulator.
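For example, assuming a bash shell, you can clear the variable for the whole session and then re-run the mvn command above, or strip it for a single invocation only (GNU env):
# Clear it for this shell session:
unset GOOGLE_APPLICATION_CREDENTIALS
# Or strip it for one invocation only:
env -u GOOGLE_APPLICATION_CREDENTIALS mvn compile exec:java -Dexec.mainClass=DefaultPipeline -Dexec.cleanupDaemonThreads=false -Dexec.args="--project=my-project --inputSubscription=projects/my-project/subscriptions/my-subscription --pubsubRootUrl=http://127.0.0.1:8681 --runner=DirectRunner"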
You can troubleshoot the local emulator by issuing manual HTTP requests to it (via curl), like so:
$ curl -d '{"messages": [{"data": "c3Vwc3VwCg=="}]}' -H "Content-Type: application/json" -X POST localhost:8681/v1/projects/my-project/topics/topic:publish
{
  "messageIds": ["5"]
}
$
$ curl -d '{"returnImmediately":true, "maxMessages":1}' -H "Content-Type: application/json" -X POST localhost:8681/v1/projects/my-project/subscriptions/my-subscription:pull
{
  "receivedMessages": [{
    "ackId": "projects/my-project/subscriptions/my-subscription:9",
    "message": {
      "data": "c3Vwc3VwCg==",
      "messageId": "5",
      "publishTime": "2019-04-30T17:26:09Z"
    }
  }]
}
$
Or by pointing the gcloud command-line tool at it:
$ CLOUDSDK_API_ENDPOINT_OVERRIDES_PUBSUB=localhost:8681 gcloud pubsub topics list
Also, note that when the emulator comes up, it creates the topic and subscription from scratch, so there are no messages on them. If your pipeline expects to immediately pull messages on the subscription, that would explain why it seems “stuck”. Note that when you run the pipeline at GCP, the topic and subscription you use there may already have messages on them.
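If you restart the emulator without the PUBSUB_PROJECT1 convenience variable, you can also recreate the topic and subscription by hand through the emulator's REST endpoint; a sketch using the same names as above:
curl -X PUT localhost:8681/v1/projects/my-project/topics/topic
curl -X PUT -H "Content-Type: application/json" \
    -d '{"topic": "projects/my-project/topics/topic"}' \
    localhost:8681/v1/projects/my-project/subscriptions/my-subscription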

How to know if my program has completely started inside Docker with Compose

In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because, even though the containers are properly started, the programs they contain are not ready yet.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await() {
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time "${seconds}" "${url}" \
    || exit 1
}

docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), your services can emit a "started" event. You can then subscribe to those events and run whatever you need once everything has started.
Docker 1.12 added the HEALTHCHECK build command, as sketched below. Maybe this is available via Docker Events?
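For illustration, a minimal sketch of such a HEALTHCHECK, assuming the service answers HTTP on port 3000 and curl exists in the image:
# Mark the container healthy only once the app responds over HTTP.
HEALTHCHECK --interval=5s --timeout=3s --retries=12 \
    CMD curl -f http://localhost:3000/ || exit 1
The resulting state can then be read with docker inspect --format '{{.State.Health.Status}}' container_ms1.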
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
Example logs output:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
To parse a specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
Result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
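To turn the one-shot grep into an actual wait, you can poll the logs until the marker line appears; a minimal sketch reusing the names from the example above:
# Poll once per second until the expected line shows up (add a timeout in
# real CI use so a broken container doesn't hang the job forever).
until docker logs ssh_jenkins_test 2>&1 | grep -q "Enter"; do
    sleep 1
done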

How do I resolve docker issues with ice login?

I am using the ice command-line interface for IBM Container Services, and I am seeing a couple of different problems from a couple of different boxes I am testing with. Here is one example:
[root@cds-legacy-monitor ~]# ice --verbose login --org chrisr@ca.ibm.com --space dev --user chrisr@ca.ibm.com --registry registry-ice.ng.bluemix.net
#2015-11-26 01:38:26.092288 - Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org='chrisr@ca.ibm.com', psswd=None, reg_host='registry-ice.ng.bluemix.net', skip_docker=False, space='dev', subparser_name='login', user='chrisr@ca.ibm.com', verbose=True)
#2015-11-26 01:38:26.092417 - Executing: cf login -u chrisr@ca.ibm.com -o chrisr@ca.ibm.com -s dev -a https://api.ng.bluemix.net
API endpoint: https://api.ng.bluemix.net
Password>
Authenticating...
OK
Targeted org chrisr@ca.ibm.com
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.40.0)
User: chrisr@ca.ibm.com
Org: chrisr@ca.ibm.com
Space: dev
#2015-11-26 01:38:32.186204 - cf exit level: 0
#2015-11-26 01:38:32.186340 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.186640 - Bearer: <long string omitted>
#2015-11-26 01:38:32.186697 - cf login succeeded. Can access: https://api-ice.ng.bluemix.net/v3/containers
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v3/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
#2015-11-26 01:38:32.187317 - using bearer token
#2015-11-26 01:38:32.187350 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.187489 - Bearer: <long pw string omitted>
#2015-11-26 01:38:32.187517 - Org Guid: dae00d7c-1c3d-4bfd-a207-57a35a2fb42b
#2015-11-26 01:38:32.187551 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
FATA[0012] Error response from daemon: </html>
#2015-11-26 01:38:44.689721 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:38:44.689842 - Exit err level = 2
On the other box, it also fails, but the final error is slightly different.
#2015-11-26 01:44:48.916034 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
Error response from daemon: Unexpected status code [502] : <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
#2015-11-26 01:45:02.582753 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:45:02.582868 - Exit err level = 2
Any thoughts on what might be causing these issues?
Both errors point to the same problem: ice isn't finding a working docker environment locally.
This doesn't prevent you from working remotely on Bluemix, but without a local docker environment ice cannot work with local containers; see the quick check below.
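A quick way to confirm that diagnosis on the affected box, assuming docker is installed at all:
docker info   # fails immediately if no docker daemon is reachable or it is misconfigured
docker ps     # should list running containers without an error
If either command errors out, fix the local docker setup (or the DOCKER_HOST environment variable, if you rely on it) before retrying ice login.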
