I'm trying to send the same logs to multiple Kinesis Firehose delivery streams in multiple AWS accounts via Fluent Bit v1.8.12. How can I use the role_arn property of the kinesis_firehose OUTPUT correctly? I'm able to send to Firehose A but not to Firehose B. Also, role A in AWS account A can assume role B in AWS account B.
This is what I'm trying to do.
This is the Fluent Bit OUTPUT configuration:
[OUTPUT]
    Name            kinesis_firehose
    Match           aaa
    region          eu-west-1
    delivery_stream a
    time_key        time
    role_arn        arn:aws:iam::11111111111:role/role-a

# THIS ONE DOES NOT WORK
[OUTPUT]
    Name            kinesis_firehose
    Match           bbb
    region          eu-west-1
    delivery_stream b
    time_key        time
    role_arn        arn:aws:iam::22222222222:role/role-b
The Fluent Bit pod logs say:
[2022/06/21 15:03:12] [error] [aws_credentials] STS assume role request failed
[2022/06/21 15:03:12] [ warn] [aws_credentials] No cached credentials are available and a credential refresh is already in progress. The current co-routine will retry.
[2022/06/21 15:03:12] [error] [signv4] Provider returned no credentials, service=firehose
[2022/06/21 15:03:12] [error] [aws_client] could not sign request
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send log records to b
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send log records
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send records
The problem was that I didn't know which role the fluent-bit pod was assuming. Enabling fluent-bit debug logs helped me.
It turned out that fluent-bit assumes a particular role x that includes many EKS policies. I added a policy to role x that lets it assume both role a (which can write to Kinesis in AWS account A) and role b (which can write to Kinesis in AWS account B). No changes were made to the Fluent Bit configuration.
The resulting setup is sketched below:
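For illustration only, here is a minimal sketch of the kind of policy I attached to role x, reusing the role ARNs from the question (the exact statement in a real setup may differ):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": [
                "arn:aws:iam::11111111111:role/role-a",
                "arn:aws:iam::22222222222:role/role-b"
            ]
        }
    ]
}

Each target role (role a and role b) must also trust role x in its own trust policy, otherwise the sts:AssumeRole call from the pod will still fail.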
I am working on a Spring Cloud Stream Kafka application. So far I have added only consumers, which consume messages from topics and deliver them to a third party over the FIX protocol.
This works fine up to that point, but now the third party sends back a response that I would like to produce to a new topic. When I added a Supplier to the existing code, the application started behaving strangely: the bootstrap.servers config changed from the remote broker to localhost and it started giving the error below:
[AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
This error is expected when connecting to localhost, as there is no Kafka broker running there.
Below is my application.yml file:
spring.cloud.stream.function.definition: amerData;emeaData;ackResponse # added new ackResponse here
spring.cloud.stream.kafka.streams:
  binder:
    brokers: remoteHost:9092
    configuration:
      schema.registry.url: remoteHost:8081
      default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
      default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
  bindings:
    ackResponse-out-0: # new addition
      producer.configuration:
        key.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
        value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
spring.cloud.stream.bindings:
  amerData-in-0:
    destination: topic1
  emeaData-in-0:
    destination: topic2
  ackResponse-out-0: # new addition
    destination: topic3
I tried the possible options for the Supplier: Supplier<String> ackResponse() or Supplier<Message<String>> ackResponse().
The only variant that does not switch remoteHost to localhost is Supplier<KStream<String, String>> ackResponse(); with that, bootstrap.servers shows the configured remote broker, but it isn't correct either, and I can't write the received response (mostly a string or JSON) to a Kafka topic that way.
I configured my consumers as Consumer<KStream<String, AVROPOJO1>> amerData() and Consumer<KStream<String, AVROPOJO2>> emeaData() as needed, and they work fine.
Am I missing or messing up something? Can't we have both a producer and consumers in the same Spring Cloud Stream application? Using StreamBridge also didn't solve this. Could someone help?
If you are adding a Supplier bean as you have done, it becomes a regular producer that uses the MessageChannel-based Kafka binder. You need to add the regular Kafka binder to your project (spring-cloud-stream-binder-kafka). The bindings for that should go under spring.cloud.stream.kafka.bindings. I see that you have them defined above under spring.cloud.stream.kafka.streams.bindings. I wonder if that is the issue?
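If it helps, here is a rough sketch of what that could look like, reusing the serializer settings from the question; the property paths follow the regular Kafka binder's conventions, so please double-check them against your binder version:

# build: add the MessageChannel-based binder alongside the Kafka Streams one,
# e.g. org.springframework.cloud:spring-cloud-stream-binder-kafka

spring.cloud.stream.kafka:
  bindings:
    ackResponse-out-0:
      producer:
        configuration:
          key.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
          value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer

The destination under spring.cloud.stream.bindings.ackResponse-out-0 stays where it already is; only the Kafka-specific producer settings move out of the kafka.streams section.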
I am trying to use Mosquitto MQTT Broker v1.5.8.
I am using the mosquitto-auth-plug plugin for user authentication (https://github.com/jpmens/mosquitto-auth-plug).
I created a MySQL server with users and acls tables.
I want to set up a user that can subscribe to the topic test/#,
so I set rw = 5 in the acls table for that user.
However, the user is not able to subscribe to test/#, but can subscribe to test/123.
I looked at one of the issues posted, https://github.com/jpmens/mosquitto-auth-plug/issues/356,
but since the repo is archived, I can't ask questions there.
mysql> select * from acls;
+----+----------+--------+----+
| id | username | topic  | rw |
+----+----------+--------+----+
|  1 | test     | test/# |  5 |
+----+----------+--------+----+
Does anyone know how the checksum field in active_storage_blobs is calculated when using Active Storage on Rails 5.2+?
For bonus points, does anyone know how I can get it to use an MD5 checksum that matches the one from the md5 CLI command?
Let's Break It Down
I know I'm a bit late to the party, but this is more for those who come across this while searching for answers. So here it is.
Background:
Rails introduced loads of new features in version 5.2, one of which was ActiveStorage. The official final release came out on April 9th, 2018.
Rails 5.2 Official Release Notes
Disclaimer:
To be perfectly clear, the following information pertains to out-of-the-box vanilla Active Storage. It also doesn't take into account some crazy code-fu revolving around a one-off scenario.
With that said, the checksum is calculated differently depending on your Active Storage setup. With vanilla out-of-the-box Rails Active Storage, there are two "types" (for lack of a better term) of configuration:
Proxy Uploads
Direct Uploads
Proxy Uploads
File Upload Flow: [Client] → [RoR App] → [Storage Service]
Comm. Flow: Can vary but in most cases it should be similar to File upload flow.
What SparkBao's answer (below) points out is a "Proxy Upload": you upload the file to your RoR application and perform some sort of processing before sending the file on to your configured storage service (AWS, Azure, Google, Backblaze, etc.). Even if you set your storage service to local disk, the logic still technically applies, even though your RoR application is the storage endpoint.
A "Proxy Upload" approach isn't ideal for RoR applications that are deployed in the cloud on services like Heroku. Heroku has a hardset limit of 30 seconds to complete your transaction and send a response back to your client (end user). So if your file is fairly large, you need to consider the time it takes for your file to upload, and then account for the amount of time to calculate the checksum. If your caught in a scenario where you can't complete the request with a response in the 30 seconds you will need to use the "Direct Upload" approach.
Proxy Uploads Answer:
The Ruby class Digest::MD5 is used in the method compute_checksum_in_chunks(io), as pointed out by SparkBao.
Direct Uploads
File Upload Flow: [Client] → [Storage Service]
Comm. Flow: [Client] → [RoR App] → [Client] → [Storage Service] → [Client] → [RoR App] → [Client]
Our fine friends who maintain and develop Rails have already done all the heavy lifting for us. I won't go into detail on how to set up a direct upload, but here is a link on how: Rails Edge Guides - Direct Uploads.
Direct Uploads Answer:
Now, with all that said, in a vanilla out-of-the-box "Direct Uploads" setup the file checksum is calculated by leveraging SparkMD5 (JavaScript).
Below is a snippet from the Rails Active Storage source code (activestorage.js):
var fileSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice;

var FileChecksum = function() {
  createClass(FileChecksum, null, [ {
    key: "create",
    value: function create(file, callback) {
      var instance = new FileChecksum(file);
      instance.create(callback);
    }
  } ]);
  function FileChecksum(file) {
    classCallCheck(this, FileChecksum);
    this.file = file;
    this.chunkSize = 2097152;
    this.chunkCount = Math.ceil(this.file.size / this.chunkSize);
    this.chunkIndex = 0;
  }
  createClass(FileChecksum, [ {
    key: "create",
    value: function create(callback) {
      var _this = this;
      this.callback = callback;
      this.md5Buffer = new sparkMd5.ArrayBuffer();
      this.fileReader = new FileReader();
      this.fileReader.addEventListener("load", function(event) {
        return _this.fileReaderDidLoad(event);
      });
      this.fileReader.addEventListener("error", function(event) {
        return _this.fileReaderDidError(event);
      });
      this.readNextChunk();
    }
  },
Conclusion
If there is anything I missed, I apologize in advance. I tried to be as thorough as possible.
So, to sum things up, the following should suffice as an acceptable answer:
Proxy Upload configuration: the Ruby class Digest::MD5
Direct Upload configuration: the JavaScript hashing library SparkMD5
The source code is here: https://github.com/rails/rails/blob/6aca4a9ce5f0ae8af826945b272842dbc14645b4/activestorage/app/models/active_storage/blob.rb#L369-L377
def compute_checksum_in_chunks(io)
  Digest::MD5.new.tap do |checksum|
    while chunk = io.read(5.megabytes)
      checksum << chunk
    end

    io.rewind
  end.base64digest
end
In my project I need to use this checksum value to determine whether the user has uploaded a duplicate file. I use the following code to get the same value as the method above:
md5 = Digest::MD5.file(params[:file].tempfile.path).base64digest
puts "========= md5: #{md5}"
The output:
========= md5: F/9Inmc4zdQqpeSS2ZZGug==
The database record:
pry(main)> ActiveStorage::Blob.find_by(checksum: 'F/9Inmc4zdQqpeSS2ZZGug==')
ActiveStorage::Blob Load (2.7ms) SELECT "active_storage_blobs".* FROM "active_storage_blobs" WHERE "active_storage_blobs"."checksum" = $1 LIMIT $2 [["checksum", "F/9Inmc4zdQqpeSS2ZZGug=="], ["LIMIT", 1]]
=> #<ActiveStorage::Blob:0x00007f9a16729a90
id: 1,
key: "gpN2NSgfimVP8VwzHwQXs1cB",
filename: "15 Celebrate.mp3",
content_type: "audio/mpeg",
metadata: {"identified"=>true, "analyzed"=>true},
byte_size: 9204528,
checksum: "F/9Inmc4zdQqpeSS2ZZGug==",
created_at: Thu, 29 Nov 2018 01:38:15 UTC +00:00>
It’s a base64-encoded MD5 digest of the blob’s data. I’m afraid Active Storage doesn’t support hexadecimal checksums like those emitted by md5(1). Sorry!
For your bonus question (and potentially also the main one):
You can convert the checksum from base64 to hex (like the md5(1) command supports) and back.
Converting a hexadecimal digest to base64 in Ruby:
require "base64"

def hex_to_base64(hex_string)
  Base64.strict_encode64([hex_string].pack("H*"))
end
From base64 to hex:
def base64_to_hex(base64_string)
  Base64.decode64(base64_string).each_byte.map { |b| "%02x" % b.to_i }.join
end
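A quick usage sketch (hypothetical, reusing the blob from the earlier answer):

blob = ActiveStorage::Blob.find_by(checksum: "F/9Inmc4zdQqpeSS2ZZGug==")
puts base64_to_hex(blob.checksum)
# should print the same 32-character hex digest that `md5 "15 Celebrate.mp3"` reports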
In my application I have to implement a web service and want to log in with it. It has some parameters and a method, which are described below:
Parameter: mailaddress (String, with #)
           password (String)
Return:    If OK, you receive a loginToken (> 0).
           If not OK, loginToken < 0:
             -1 = user not found
             -2 = wrong password
When you cannot reach the server, you have to inform the user with a dialog saying "Server not available". In the cases -1 or -2 you should also inform the user.
The web service is in WSDL format and I don't know how to use it. Suppose there is a link like http://google.com; how can I do the login? Please help.
One framework we can use is pico, a lightweight iOS web service client framework.
http://maniacdev.com/2012/01/tool-soap-based-web-services-made-easy-on-ios
This post explains how to consume WSDL services. It also has two example projects.
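To round this out, here is a small hypothetical sketch (in Swift, independent of whichever SOAP/WSDL client you end up using) of how the loginToken values from the question could be surfaced to the user; presentAlert and handleLoginResult are made-up helper names, not part of any framework:

import UIKit

extension UIViewController {
    // Hypothetical helper: show a simple dialog with the given message.
    func presentAlert(_ message: String) {
        let alert = UIAlertController(title: nil, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        present(alert, animated: true)
    }

    // Map the documented return values onto user-facing dialogs.
    // Pass nil when the server could not be reached at all.
    func handleLoginResult(_ loginToken: Int?) {
        switch loginToken {
        case nil:
            presentAlert("Server not available")   // server unreachable
        case .some(-1):
            presentAlert("User not found")         // -1 per the service description
        case .some(-2):
            presentAlert("Wrong password")         // -2 per the service description
        case .some(let token) where token > 0:
            print("Logged in, token = \(token)")   // keep the token for later calls
        default:
            presentAlert("Login failed")
        }
    }
}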