I'm trying to integrate Amplify into my project, but I'm having some issues with the configuration.
The backend sends the S3 Storage configuration to my project, so I have to configure Amplify with the data received.
I tried to configure the storage following this test, but it's failing with the following error:
PluginError: Unable to decode configuration
Recovery suggestion: Make sure the plugin configuration is JSONValue
▿ pluginConfigurationError : 3 elements
- .0 : "Unable to decode configuration"
- .1 : "Make sure the plugin configuration is JSONValue"
- .2 : nil
This is my code:
func amplifyConfigure() {
do {
Amplify.Logging.logLevel = .verbose
try Amplify.add(plugin: AWSCognitoAuthPlugin())
try Amplify.add(plugin: AWSS3StoragePlugin())
let storageConfiguration = StorageCategoryConfiguration(
plugins: [
"awsS3StoragePlugin": [
"bucket": "bucket",
"region": "us-west-2",
"defaultAccessLevel": "protected"
]
]
)
let amplifyConfiguration = AmplifyConfiguration(storage: storageConfiguration)
try Amplify.configure(amplifyConfiguration)
// LOG success.
} catch {
// LOG Error.
}
}
Can someone help me with this custom configuration?
Thanks!
It seems that the config cannot be declared directly in one go, possibly for type-inference reasons. For me it works if I declare it in multiple steps. Try replacing this:
let storageConfiguration = StorageCategoryConfiguration(
plugins: [
"awsS3StoragePlugin": [
"bucket": "bucket",
"region": "us-west-2",
"defaultAccessLevel": "protected"
]
]
)
with this:
var storageConfigurationJson: [String: JSONValue] = ["awsS3StoragePlugin": []]
storageConfigurationJson["awsS3StoragePlugin"] = [
    "bucket": "bucket",
    "region": "us-west-2",
    "defaultAccessLevel": "protected"
]
let storageConfiguration = StorageCategoryConfiguration(plugins: storageConfigurationJson)
I've only used Amplify config with AuthCategoryConfiguration, so in case StorageCategoryConfiguration has a different syntax you may need to adjust my suggested code accordingly.
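For reference, here is a minimal sketch of the whole configure function with that multi-step declaration in place. The bucket and region parameters are placeholders for whatever your backend sends, and the import assumes Amplify iOS 1.x (on 2.x the plugins are separate modules), so treat it as a sketch rather than a definitive implementation:

import Amplify
import AmplifyPlugins // Amplify iOS 1.x; on 2.x import AWSCognitoAuthPlugin / AWSS3StoragePlugin instead

func amplifyConfigure(bucket: String, region: String) {
    do {
        Amplify.Logging.logLevel = .verbose
        try Amplify.add(plugin: AWSCognitoAuthPlugin())
        try Amplify.add(plugin: AWSS3StoragePlugin())

        // Build the plugin map step by step so it is typed as [String: JSONValue].
        var storagePlugins: [String: JSONValue] = [:]
        storagePlugins["awsS3StoragePlugin"] = [
            "bucket": .string(bucket),
            "region": .string(region),
            "defaultAccessLevel": "protected"
        ]

        let storageConfiguration = StorageCategoryConfiguration(plugins: storagePlugins)
        try Amplify.configure(AmplifyConfiguration(storage: storageConfiguration))
        print("Amplify configured")
    } catch {
        print("Failed to configure Amplify: \(error)")
    }
}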
I have a generic repository "my_repo". I uploaded files there from Jenkins to paths like my_repo/branch_buildNumber/package.tar.gz, with a custom property "tag" like "1.9.0", "1.10.0", etc. I want to get the item/file with the latest/newest tag.
I tried to modify Example 2 from this link ...
https://www.jfrog.com/confluence/display/JFROG/Using+File+Specs#UsingFileSpecs-Examples
... and add sorting and limit the way it was done here ...
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language#ArtifactoryQueryLanguage-limitDisplayLimitsandPagination
But I'm getting an "unknown property desc" error.
The Jenkins Artifactory Plugin, like most of the JFrog clients, supports File Specs for downloading and uploading generic files.
The File Specs schema is described here. When creating a File Spec for downloading files, you have the option of using the "pattern" property, which can include wildcards. For example, the following spec downloads all the zip files from the my-local-repo repository into the local froggy directory:
{
"files": [
{
"pattern": "my-local-repo/*.zip",
"target": "froggy/"
}
]
}
Alternatively, you can use "aql" instead of "pattern". The following spec provides the same result as the previous one:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "*.zip"
}
}
]
}
]
}
},
"target": "froggy/"
}
]
}
The allowed AQL syntax inside File Specs does not include everything the Artifactory Query Language allows. For example, you can't use the "include" or "sort" clauses. These limitations were put in place to make the response structure known and constant.
Sorting, however, is still available with File Specs, regardless of whether you choose to use "pattern" or "aql". It is supported through the "sortBy", "sortOrder", "limit" and "offset" File Spec properties.
For example, the following File Spec will download only the 3 largest zip files:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "*.zip"
}
}
]
}
]
}
},
"sortBy": ["size"],
"sortOrder": "desc",
"limit": 3,
"target": "froggy/"
}
]
}
And you can do the same with "pattern", instead of "aql":
{
"files": [
{
"pattern": "my-local-repo/*.zip",
"sortBy": ["size"],
"sortOrder": "desc",
"limit": 3,
"target": "local/output/"
}
]
}
You can read more about File Specs here.
(After answering this question here, we also updated the File Specs documentation with these examples).
After a lot of testing and experimenting, I found that there are many ways of solving my main problem (getting the latest version of a package), but each of them requires a feature that is only available in the paid version, like sort() in AQL or [RELEASE] in the REST API. However, I found that I can still get JSON with a full list of files and their properties, and I can also download each individual file. This led me to a solution based on a simple Python script. I can't publish the whole thing, only the core, which should be fairly obvious:
import requests, argparse
from packaging import version
...
# AQL query: find all files under the given repository/path and include their properties.
query = """
items.find({
    "type" : "file",
    "$and":[{
        "repo" : {"$match" : \"""" + args.repository + """\"},
        "path" : {"$match" : \"""" + args.path + """\"}
    }]
}).include("name","repo","path","size","property.*")
"""
auth = (args.username, args.password)

def clearVersion(ver: str):
    # Keep only digits and dots so packaging.version can parse the value.
    new = ''
    for letter in ver:
        if letter.isnumeric() or letter == ".":
            new += letter
    return new

def latestArtifact(response: requests.Response):
    # Walk every result and remember the item whose "tag" property is the highest version.
    response = response.json()
    latestVer = "0.0.0"
    currentItemIndex = 0
    chosenItemIndex = 0
    for results in response["results"]:
        for prop in results['properties']:
            if prop["key"] == "tag":
                if version.parse(clearVersion(prop["value"])) > version.parse(clearVersion(latestVer)):
                    latestVer = prop["value"]
                    chosenItemIndex = currentItemIndex
        currentItemIndex += 1
    return response["results"][chosenItemIndex]

req = requests.post(url, data=query, auth=auth)
if args.verbose:
    print(req.text)
latest = latestArtifact(req)
...
I just want to point out that THIS IS NOT a permanent solution. We just didn't want to buy a license yet because of one single problem. But if more problems like this come up, we will definitely buy a Pro subscription.
I am trying to use Google Cloud Speech-to-Text, using the client libraries, from a Node.js environment, and I see something I don't understand: I get a different result for the same example audio file and the same configuration, depending on whether I use it from the original sample bucket or from my own bucket.
Here are the requests and responses:
The baseline is Google's own test data file, available here: https://storage.googleapis.com/cloud-samples-tests/speech/brooklyn.flac
Request:
{
"config": {
"encoding": "FLAC",
"languageCode": "en-US",
"sampleRateHertz": 16000,
"enableAutomaticPunctuation": true
},
"audio": {
"uri": "gs://cloud-samples-tests/speech/brooklyn.flac"
}
}
Response:
{
"results": [
{
"alternatives": [
{
"transcript": "How old is the Brooklyn Bridge?",
"confidence": 0.9831430315971375
}
]
}
]
}
So far, so good. But if I download this audio file, re-upload it to my own bucket, and do the same, then:
Request:
{
"config": {
"encoding": "FLAC",
"languageCode": "en-US",
"sampleRateHertz": 16000,
"enableAutomaticPunctuation": true
},
"audio": {
"uri": "gs://goe-transcript-creation/brooklyn.flac"
}
}
Response:
{
"results": [
{
"alternatives": [
{
"transcript": "how old is",
"confidence": 0.8902621865272522
}
]
}
]
}
As you can see, this is the same request. The re-uploaded audio data is here: https://storage.googleapis.com/goe-transcript-creation/brooklyn.flac
This is the exact same file as in the first example... not a bit of difference.
Still, the results are different; I only get half of the sentence.
What am I missing here? Thanks.
Update 1:
The same thing happens with the CLI tool, too:
$ gcloud ml speech recognize gs://cloud-samples-tests/speech/brooklyn.flac --language-code=en-US
{
"results": [
{
"alternatives": [
{
"confidence": 0.98314303,
"transcript": "how old is the Brooklyn Bridge"
}
]
}
]
}
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US
ERROR: (gcloud.ml.speech.recognize) INVALID_ARGUMENT: Invalid recognition 'config': bad encoding..
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US --encoding=FLAC
ERROR: (gcloud.ml.speech.recognize) INVALID_ARGUMENT: Invalid recognition 'config': bad sample rate hertz.
$ gcloud ml speech recognize gs://goe-transcript-creation/brooklyn.flac --language-code=en-US --encoding=FLAC --sample-rate=16000
{
"results": [
{
"alternatives": [
{
"confidence": 0.8902483,
"transcript": "how old is"
}
]
}
]
}
It's also interesting that when pulling the audio from the other bucket, I need to specify the encoding and sample rate; otherwise it doesn't work... but that's not necessary when I am using the original test bucket.
Update 2:
If I don't use Google Cloud Storage, but upload the data directly in the speech-to-text request, it works as intended:
$ gcloud ml speech recognize brooklyn.flac --language-code=en-US
{
"results": [
{
"alternatives": [
{
"confidence": 0.98314303,
"transcript": "how old is the Brooklyn Bridge"
}
]
}
]
}
So the problem doesn't seem to be with the recognition itself, but with accessing the audio data. The obvious guess would be that it's the fault of the upload, and the data somehow got corrupted along the way.
We can verify that by pulling the data back from the cloud and comparing it with the original. It doesn't seem to be broken.
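For example, here is a minimal sketch of that check in Swift (the paths are placeholders for the original file and the copy downloaded back from the bucket), comparing SHA-256 hashes of the two files:

import Foundation
import CryptoKit

// Placeholder paths: the original file and the copy downloaded back from the bucket.
let originalURL = URL(fileURLWithPath: "brooklyn.flac")
let downloadedURL = URL(fileURLWithPath: "brooklyn-from-bucket.flac")

func sha256Hex(of url: URL) throws -> String {
    let data = try Data(contentsOf: url)
    return SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

do {
    let originalHash = try sha256Hex(of: originalURL)
    let downloadedHash = try sha256Hex(of: downloadedURL)
    print(originalHash == downloadedHash ? "files are identical" : "files differ")
} catch {
    print("Could not read one of the files: \(error)")
}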
So maybe it's a problem when the S-T-T service is accessing the storage service? But why with one bucket only? Or is it some kind of file metadata problem?
I'm using awsconfiguration.json for AWS Cognito in my iOS application written in Swift. But I'm worried about security, because awsconfiguration.json is stored in my local directory. How can I protect this JSON file against a third-party attack?
Please see this similar GitHub issue: https://github.com/aws-amplify/aws-sdk-ios/issues/1671
The comments talk about
the file is non-sensitive data, so resources that should be accessed by authenticated users should be configured with the appropriate controls. Amplify CLI helps you with this, depending on the resources you are provisioning in AWS.
there is a way to configure it in-memory via AWSInfo.configureDefaultAWSInfo(awsConfiguration)
Configure your AWS dependencies using in-memory configuration instead of the configuration JSON file as suggested by AWS documentation.
Sample code:
import Amplify

func buildAuthConfiguration() -> [String: JSONValue] {
return [
"awsCognitoAuthPlugin": [
"IdentityManager": [
"Default": [:]
],
"Auth": [
"Default": [
"authenticationFlowType": "String"
]
],
"CognitoUserPool": [
"Default": [
"PoolId": "String",
"AppClientId": "String",
"Region": "String"
]
],
"CredentialsProvider": [
"CognitoIdentity": [
"Default": [
"PoolId": "String",
"Region": "String"
]
]
]
]
]
}
func buildAPIConfiguration() -> [String: JSONValue] {
return [
"awsAPIPlugin": [
"apiName" : [
"endpoint": "String",
"endpointType": "String",
"authorizationType": "String",
"region": "String"
]
]
]
}
func configureAmplify() throws {
let authConf = AuthCategoryConfiguration(plugins: buildAuthConfiguration())
let apiConf = APICategoryConfiguration(plugins: buildAPIConfiguration())
let config = AmplifyConfiguration(
analytics: nil,
api: apiConf,
auth: authConf,
dataStore: nil,
hub: nil,
logging: nil,
predictions: nil,
storage: nil
)
try Amplify.configure(config)
// Rest of your code
}
Source: https://github.com/aws-amplify/amplify-ios/issues/1171#issuecomment-832988756
You can add data protection to your app's files when you save them into the app's file directory.
The following documentation can help you achieve it:
https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy/encrypting_your_app_s_files
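As a minimal sketch (the file name, destination, and data are placeholders), writing with the .completeFileProtection option asks iOS to keep the file encrypted on disk whenever the device is locked:

import Foundation

func saveConfigurationProtected(_ json: Data) throws {
    // Hypothetical destination inside the app's Documents directory.
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let fileURL = documents.appendingPathComponent("awsconfiguration.json")

    // .completeFileProtection keeps the file encrypted on disk while the device is locked.
    try json.write(to: fileURL, options: .completeFileProtection)
}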
The fix to add a new constructor has been released in version 2.13.6 of the SDK. It allows passing a JSONObject containing the configuration from the awsconfiguration.json file, so you can store that information with your own security mechanism and provide it at runtime through the constructor.
https://github.com/aws-amplify/aws-sdk-android/pull/1002
I'm trying to add offline capabilities to our iOS app (simply put, an app that manages users and projects), which relies on Firebase. This is using Swift 3.0. I have followed the guides and done the following:
Added this to my application delegate, right after FIRApp.configure():
FIRDatabase.database().persistenceEnabled = true
Called keepSynced(true) on users/userKey and all of said user's projects/projectKey nodes.
The app works fine while online (obviously) and keeps working equally well offline, even if I restart it while disconnected from the internet. The problem arises when I create a new project while offline. I use the following to create a new project:
let projectKey = FIRDatabase.database().reference(withPath: "projects").childByAutoId().key
let logKey = FIRDatabase.database().reference(withPath: "projects").child(projectKey).child("logs").childByAutoId().key
FIRDatabase.database().reference().updateChildValues([
"projects/\(projectKey)/key1" : value1,
"projects/\(projectKey)/key2" : [
"subkey1" : subvalue1,
"subkey2" : subvalue2
],
"projects/\(projectKey)/key3/\(logKey)" : [
"subkey3" : subvalue3,
"subkey4" : subvalue4
]
]) { error, ref in
if error != nil {
print("Error")
return
}
}
After the project's creation, if I attempt to call observeSingleEvent on "projects/projectKey/key1" or "projects/projectKey/key2", all is good. However, calling the same function on "projects/projectKey/key3/logKey" never triggers the block/callback - it only gets called if the connection comes back.
I have enabled the FIRDatabase logging (which confirms the local write takes place) to look for hints but can't seem to figure out the issue.
Is there something I'm missing?
Note: Using the latest Firebase iOS SDK (3.5.2).
EDIT: It seems to work fine if I expand the deep links:
let projectKey = FIRDatabase.database().reference(withPath: "projects").childByAutoId().key
let logKey = FIRDatabase.database().reference(withPath: "projects").child(projectKey).child("logs").childByAutoId().key
FIRDatabase.database().reference().updateChildValues([
"projects/\(projectKey)" : [
"key1" : value1,
"key2" : [
"subkey1" : subvalue1,
"subkey2" : subvalue2
],
"key3" : [
logKey : [
"subkey3" : subvalue3,
"subkey4" : subvalue4
]
]
]
]) { error, ref in
if error != nil {
print("Error")
return
}
}
Could this be a bug in the way Firebase manages its local cache states when offline? It's as if it didn't know about intermediate keys created using deep links in updateChildValues.
Kafka 0.8.1.1 (kafka_2.8.0-0.8.1.1.tgz)
I am using jmxtrans to do JMX monitoring of a Kafka instance (which is running in docker). Unfortunately, kafka metrics are not being returned.
I have tried a few things to debug this: I know that Kafka is running correctly (I can produce/consume messages successfully), and I have concluded that jmxtrans does return JMX metrics (for example, java.lang:type=Memory, attribute=HeapMemoryUsage returns correct data), so the general Kafka and JMX capability seems to be working. Also, I can access the metrics when I use jconsole; they seem to be captured with data in all relevant fields.
When I try jmxtrans using the following configuration, unfortunately, I do not get any information back (no data at all, in fact). I believe the metrics are supposed to be captured, based on the Kafka documentation ("kafka.server:type=BrokerTopicMetrics", attribute="MessagesInPerSec").
The following is the jmxtrans configuration that I used:
{
"servers" : [ {
"port" : "9999",
"host" : "10.0.1.201",
"queries" : [ {
"outputWriters" : [ {
"#class" : "com.googlecode.jmxtrans.model.output.StdOutWriter",
"settings" : {
}
} ],
"obj" : "kafka.server:type=BrokerTopicMetrics",
"attr" : [ "MessagesInPerSec" ]
} ],
"numQueryThreads" : 2
} ]
}
I am not sure why data is not returned. Maybe I set up an invalid jmxtrans configuration, or perhaps I am specifying the metric improperly.
Any help is appreciated.
After a lot of experimentation, I have now resolved the problem. For completeness, below is how I did it.
It appears that I specified the "obj" value incorrectly.
The CORRECT obj value (an example) is as follows:
"obj": "\"kafka.server\":type=\"BrokerTopicMetrics\",name=\"AllTopicsLogBytesAppendedPerSec\"",
"attr": [ "Count" ]
Note that the "obj" value requires additional quotes. This seems unusual and different from the normal pattern I have seen (no quotes) for other JMX obj values; it is likely because this Kafka version (0.8.1.x) registers its MBeans with quoted names.
jmxtrans provided valid output after I put the correct (quoted) values in the obj string.
As can be seen in ./bin/jmxtrans.sh, by default the stdout/log file is /dev/null.
LOG_FILE=${LOG_FILE:-"/dev/null"}
That's why it's important to set the env var to something you can use to see the output:
LOG_FILE=log.txt ./bin/jmxtrans.sh start kafka.json
I'm using the following kafka.json configuration file:
{
"servers" : [ {
"port" : "10101",
"host" : "localhost",
"queries" : [ {
"outputWriters" : [ {
"#class" : "com.googlecode.jmxtrans.model.output.StdOutWriter",
"settings" : {
}
} ],
"obj" : "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=testowo",
"attr" : [ "Count" ]
} ],
"numQueryThreads" : 2
} ]
}
When you start jmxtrans, it will query the broker over JMX on localhost:10101 for the Count attribute of the testowo topic. It will print the result to LOG_FILE every 60 seconds (you can change the interval using the SECONDS_BETWEEN_RUNS env var), e.g.
LOG_FILE=log.txt SECONDS_BETWEEN_RUNS=5 ./bin/jmxtrans.sh start kafka.json
You may want to use other jmxtrans output writers so the output is not intermingled, e.g.
{
"servers" : [ {
"port" : "10101",
"host" : "localhost",
"queries" : [ {
"outputWriters" : [ {
"#class" : "com.googlecode.jmxtrans.model.output.KeyOutWriter",
"settings" : {
"outputFile" : "testowo-counts.txt",
"maxLogFileSize" : "10MB",
"maxLogBackupFiles" : 200,
"delimiter" : "\t",
"debug" : true
}
} ],
"obj" : "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=testowo",
"attr" : [ "Count" ]
} ],
"numQueryThreads" : 2
} ]
}
And last but not least, to set the JMX port to a known value, use the JMX_PORT env var when starting a Kafka broker with ./bin/kafka-server-start.sh, e.g.
JMX_PORT=10101 ./bin/kafka-server-start.sh config/server.properties