Using characters such as ) in Bluemix runtime environment variables

I've got a ruby on rails app running on Bluemix. With this app I use a couple of services, one of which is Object Storage.
Logically, I want to put the credentials that I use for each environment (dev and prod) in the environment variables that you can specify in the runtime tab within Bluemix.
I want to put a password like this in there:
23aSeefae,,)ewFe
The runtime environment does not accept the ) sign and rejects the value with an error.
I have tried double quotes, single quotes, and escaping the ) sign with a backslash.
Any help would be appreciated. Is there any way in which I can store my variables outside of my app and within the Bluemix environment instead?
PS: password is not a real password.

You have to bind (connect) your Object Storage service instance to your application in Bluemix so that the VCAP_SERVICES environment variable is created for you automatically.
Here is an example of a VCAP_SERVICES env variable for an application bound to an Object Storage service instance (I have modified some data for security reasons):
{
  "Object-Storage": [
    {
      "credentials": {
        "auth_url": "https://identity.open.softlayer.com",
        "project": "object_storage_a92583b3_329e_4ed8_8918_xxx",
        "projectId": "7f1f5659d21340dfaa4568dxxxx",
        "region": "dallas",
        "userId": "abcdefghxxxxxxxxxxxxx",
        "username": "admin_3ff9bf1e187e7fa02e28c96232dxxxxxxx",
        "password": "BF_0_)s3#xxxXXbY^",
        "domainId": "79fc08601744486abf930000000000",
        "domainName": "761111",
        "role": "admin"
      },
      "syslog_drain_url": null,
      "label": "Object-Storage",
      "provider": null,
      "plan": "standard",
      "name": "app-object-storage",
      "tags": [
        "storage",
        "ibm_release",
        "ibm_created"
      ]
    }
  ]
}
You can then read this as a JSON object in your Ruby code, for example:
require 'json'

# VCAP_SERVICES is set automatically once the service is bound to the app
vcap_services = JSON.parse(ENV['VCAP_SERVICES'])
credentials = vcap_services["Object-Storage"][0]["credentials"]
password = credentials["password"]
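If you also run the app locally, where VCAP_SERVICES is not set, a guard avoids a parse error on nil (a minimal sketch; the fallback behaviour is just an assumption about how you might handle development):
require 'json'

raw = ENV['VCAP_SERVICES']
if raw
  password = JSON.parse(raw)["Object-Storage"][0]["credentials"]["password"]
else
  # Not running on Bluemix; fall back to local development settings here
  password = nil
end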

I've also gotten help from Bluemix support now. This is by far the easiest way to do what I want:
You can set environment variables through the Cloud Foundry command line interface.
cf set-env <APP_NAME> <ENV_VAR_NAME> <ENV_VAR_VALUE>
You will have to restage your app before you can use them.
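For example, assuming the app is named my-app and picking OBJECT_STORAGE_PASSWORD as the variable name (both names are placeholders), single-quoting the value keeps the local shell from interpreting the ) character:
cf set-env my-app OBJECT_STORAGE_PASSWORD '23aSeefae,,)ewFe'
cf restage my-app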

Related

What does the CDKToolkit's BootstrapVersion SSM parameter represent?

I am using the AWS CDK Toolkit to create our infrastructure. I created a helloworld-stack.ts file, and when I run cdk synth the process creates a HelloWorldStack.template.json file.
In this file we have some auto-generated elements, like this one:
"Parameters": {
  "BootstrapVersion": {
    "Type": "AWS::SSM::Parameter::Value<String>",
    "Default": "/cdk-bootstrap/hnb659fds/version",
    "Description": "Version of the CDK Bootstrap resources in this environment, automatically retrieved from SSM Parameter Store. [cdk:skip]"
  }
},
Now, I am not able to understand how bootstrapping pushes this "/cdk-bootstrap/hnb659fds/version" key to the SSM store and why it always has the value 14.
Can someone help me understand this behaviour?
After reading the official AWS doc on bootstrapping, I got the answer:
https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html
The doc explains that this is the version of their bootstrap template: cdk bootstrap deploys a bootstrap stack into your account and writes that template's version number into the /cdk-bootstrap/<qualifier>/version SSM parameter. Your synthesized template then reads the parameter back to verify that the environment's bootstrap resources are recent enough. The value is 14 simply because that was the bootstrap template version shipped with the CDK release being used; newer CDK releases bump it.
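If you want to see the stored value for yourself, you can read the parameter back with the AWS CLI (hnb659fds is the default bootstrap qualifier; yours may differ if you customized bootstrapping):
aws ssm get-parameter --name /cdk-bootstrap/hnb659fds/version --query Parameter.Value --output text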

Azure container instance is not accessible using URL via browser

I have created a new container instance in Azure. Below are the steps.
Step 1: I created a new Cognitive Services resource (a Language service) and used its "Key" and "Endpoint" values inside the container instance.
Step 2: I created a new container instance and provided all the required information as described in the article below:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal
However, I changed the port from 80 to 5001 and used the image mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest.
Below are the env variables I used:
{
  "name": "Eula",
  "value": "accept"
},
{
  "name": "RAI_TERMS",
  "value": "accept"
},
{
  "name": "Billing",
  "value": "XXXXXXXXXXXXXXXXXXXXXXXXXXX"
},
{
  "name": "ApiKey",
  "value": "4a46537f51f64765864cabc20318bdcc"
},
{
  "name": "enablelro",
  "value": "true"
}
Finally, it was created and deployed successfully. I then tried to access it via the URL below, where FQDN is the container's fully qualified domain name:
http://FQDN:5001/Demo/
It is not accessible, even though the instance is up and running properly.
It doesn't matter which port you try to access it from. Instead of the URL http://FQDN:5001/Demo/, I would suggest you use the FQDN or IP address of the container instance directly; using the complete FQDN when addressing the instance is the way it is supposed to work.
You can refer to this thread, where I reproduced a setup related to your question and used the FQDN to access the container instance.
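As a quick reachability check before debugging any application path: Cognitive Services containers normally expose /ready and /status endpoints (assuming this image follows that convention), so probing them tells you whether the instance answers on that port at all:
curl http://<FQDN>:5001/ready
curl http://<FQDN>:5001/status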

chronograf: Not able to add default influxDB connection when using OAuth 2.0

I configured Chronograf to use generic OAuth 2.0 (using Cloud Foundry UAA). User authentication works fine, but the problem is that the default InfluxDB connection is not taken into consideration. In fact, this configuration works:
chronograf --log-level="debug" --resources-path="/usr/share/chronograf/resources" --influxdb-url="http://influxDB.log.database:8086" --influxdb-username="username" --influxdb-password="pass"
Here is the content of the /usr/share/chronograf/resources folder:
influxdb.src:
{
  "id": "9999",
  "name": "MyInfluxDB",
  "username": "user1",
  "password": "password1",
  "url": "http://influxDB.log.database:8086",
  "type": "influx",
  "insecureSkipVerify": true,
  "default": true,
  "telegraf": "telegraf.autogen",
  "organization": "Default"
}
Both connections are created automatically when Chronograf starts:
MyInfluxDB
http://influxDB.log.database:8086
But when I run Chronograf with the following options (to use OAuth 2.0 and create an InfluxDB connection):
export TOKEN_SECRET="token_secret"; export JWKS_URL="https://uaa/token_keys"; export PUBLIC_URL="http://chronograf:8888"; chronograf --log-level="debug" --resources-path="/usr/share/chronograf/resources" --generic-name="generic" --generic-client-id="id" --generic-client-secret="secret" --generic-scopes="openid" --generic-auth-url="https://uaa/oauth/authorize" --generic-token-url="https://uaa/oauth/token" --generic-api-url="https://uaa/userinfo"
OAuth 2.0 works fine, but once redirected to the Chronograf dashboard I cannot see the connections. Even when I create a connection manually and log in again, I cannot find any connection that was created automatically on startup, as intended.
The organization field needs an id, and the id for the Default organization uses a lowercase d. If you change your .src file to:
{
  "id": "9999",
  "name": "MyInfluxDB",
  "username": "user1",
  "password": "password1",
  "url": "http://influxDB.log.database:8086",
  "type": "influx",
  "insecureSkipVerify": true,
  "default": true,
  "telegraf": "telegraf.autogen",
  "organization": "default"
}
It should now work.
You can see where the id is defined in their source here: https://github.com/influxdata/chronograf/blob/9d8a49ba0ef8131cdce22d73718859f55f434db2/bolt/organizations.go#L20
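To double-check which id Chronograf actually assigned, its REST API lists organizations with their ids (assuming the default bind address; with OAuth enabled you will need a valid session):
curl http://localhost:8888/chronograf/v1/organizations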

Multiple target environments and AWSGoogleSignIn

Hello, I am working with multiple AWS frameworks on an iOS project. The app is set up to target specific backend environments through dev and prod targets in Xcode.
This generally works fine through the use of constants and macros to select the different identity pools etc. at build time.
However, I am now using AWSGoogleSignInProvider to link Google Sign-In and Cognito. This requires an awsconfiguration.json file in the project, which contains the Google id and the Cognito id.
{
  "Version": "1.0",
  "CredentialsProvider": {
    "CognitoIdentity": {
      "Default": {
        "PoolId": "***",
        "Region": "***"
      }
    }
  },
  "IdentityManager": {
    "Default": {}
  },
  "GoogleSignIn": {
    "ClientId-iOS": "***",
    "Permissions": "email,profile,openid"
  }
}
I'm unsure how I can target dev/prod, since I would need to use different pool ids depending on the environment. I can't use two files with different names and targets, since the naming is "immutable", and I can't use any macros in the JSON file itself.
Looking at the AWS framework, it seems there is no way to set any of these manually; the shared instance gets the Google id through the JSON file on instantiation, or throws.
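One common workaround (a build-time trick, not an AWS SDK feature; the Config folder and per-environment file names below are assumptions) is to keep one copy of the file per environment and let a Run Script build phase copy the right one into place before the bundle is built:
# Run Script build phase (sketch): select the per-environment config.
# awsconfiguration-dev.json / awsconfiguration-prod.json are hypothetical names.
if [ "${CONFIGURATION}" = "Release" ]; then
  cp "${SRCROOT}/Config/awsconfiguration-prod.json" "${SRCROOT}/awsconfiguration.json"
else
  cp "${SRCROOT}/Config/awsconfiguration-dev.json" "${SRCROOT}/awsconfiguration.json"
fi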

AWS S3 Access Policy - web browser vs API

UPDATE: I eventually answered my own question. See the Answers section for a tutorial that solves this problem.
The question:
What exactly is the policy that is needed for an external source to access an AWS S3 bucket through the API controls?
Details:
I'm following the Rails Tutorial by Michael Hartl, and I reached the end of lesson 11 where we use CarrierWave to store image files in an AWS S3 bucket. I was able to get it to work (had to add a region ENV variable) but only with a user who has full admin privileges. Obviously that's not ideal. I created a User account specifically for the purpose, but all the walkthroughs only seem to be concerned with web browser access. In fact, I was able to create policies that would allow the user to only be able to read, write, and delete in the specific bucket, but that only worked through a web browser and not through the API. The API access only worked when I attached the AdministratorAccess policy.
Here's what I have so far:
Policy: AllowRootLevelListingOfMyBucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "AllowRootLevelListingOfMyBucket",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [
            ""
          ],
          "s3:delimiter": [
            "/"
          ]
        }
      }
    }
  ]
}
Policy: AllowUserToReadWriteObjectDataInMyBucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}
As I said, this allows web browser access, but API access attempts return an "AccessDenied" error: Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))
What do I need to add for API access?
Update: I have narrowed down the problem a bit. There is some "Action" that I need to give permission for, but I haven't been able to identify the action exactly. But using a wildcard works, and I've been able to lock down the user account to only be able to access one bucket. Here's the change I made:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}
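For what it's worth, a tighter variant that often works with CarrierWave/fog grants the individual object actions plus s3:PutObjectAcl, since fog sets an ACL on each upload. This is an educated guess at the missing "Action", not something I confirmed:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}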
I eventually answered my own question, and created a tutorial that others might want to follow:
The first thing you need to do is go back over the code that Hartl provided. Make sure you typed it (or copy/pasted it) in exactly as shown. Out of all the code in this section, there is only one small addition you might need to make: the "region" environment variable. This is needed if you create a bucket that is not in the default US area. More on this later. Here is the code for /config/initializers/carrier_wave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
That line :region => ENV['S3_REGION'] is a problem for a lot of people. As you continue this tutorial you will learn what it's for.
You should be using that block of code exactly as shown. Do NOT put your actual keys in there. We'll send them to Heroku separately.
Now let's move on to your AWS account and security.
First of all, create your AWS account. For the most part, it is like signing up for any web site. Make a nice long password and store it someplace secure, like an encrypted password manager. When you make your account, you will be given your first set of AWS keys. You will not be using those in this tutorial, but you might need them at some point in the future so save those somewhere safe as well.
Go to the S3 section and make a bucket. It has to have a unique name, so I usually just put the date on the end and that does it. For example, you might name it "my-sample-app-bucket-20160126". Once you have created your bucket, click on the name, then click on Properties. It's important for you to know what "Region" your bucket is in. Find it, and make a note of it. You'll use it later.
Your main account probably has full permissions to everything, so let's not use that for transmitting random data between two web services. This could cost you a lot of money if it got out. We'll make a limited user instead. Make a new User in the IAM section. I named it "fog", because that's the cloud service software that handles the sending and receiving. When you create it, you will have the option of displaying and/or downloading the keys associated with the new user. It's important you keep these in a safe and secure place. They do NOT go into your code, because that will probably end up in a repository where other people can see it. Also, don't give this new user a password, since it will not be logging into the AWS dashboard.
Make a new Group. I called mine "s3railsbucket". This is where the permissions will be assigned. Add "fog" to this group.
Go to the Policies section. Click "Create Policy", then select "Create Your Own Policy". Give it a name that starts with "Allow" so it will show up near the top of the list of policies. It's a huge list. Here's what I did:
Policy Name: AllowFullAccessToMySampleAppBucket20160126
Description: Allows remote write/delete access to the S3 bucket named my-sample-app-bucket-20160126.
Policy Document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-sample-app-bucket-20160126",
        "arn:aws:s3:::my-sample-app-bucket-20160126/*"
      ]
    }
  ]
}
(Note the two Resource lines in that policy: the bare bucket ARN is what bucket-level actions such as s3:ListBucket are checked against, while the /* form covers object-level actions; granting s3:* on both covers everything.)
Go back to the Group section, select the group you made, then add your new policy to the group.
That's it for AWS configuration. I didn't need to make a policy to allow "fog" to list the contents of the bucket, even though most tutorials I tried said that was necessary. I think it's only necessary when you want a user that can log in through the dashboard.
Now for the Heroku configuration. This stuff gets entered at your command prompt, just like 'heroku run rake db:migrate' and such. This is where you enter the actual Access Key and Secret Key you got from the "fog" user you created earlier.
$ heroku config:set S3_ACCESS_KEY=THERANDOMKEYYOUGOT
$ heroku config:set S3_SECRET_KEY=an0tHeRstRing0frAnDomjUnK
$ heroku config:set S3_REGION=us-west-2
$ heroku config:set S3_BUCKET=my-sample-app-bucket-20160126
Look again at that last one. Remember when you looked at the Properties of your S3 bucket? This is where you enter the code associated with your region. If your bucket is not in Oregon, you will have to change us-west-2 to your actual region code. This link worked when this tutorial was written:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
If that doesn't work, Google "AWS S3 region codes".
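You can confirm that all four values landed by printing the app's current config vars:
$ heroku config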
After doing all this and double-checking for mistakes in the code, I got Heroku to work with AWS for storage of pictures!
