We have Bower hosted in a Nexus 3 repository. Our .bowerrc file looks like:
{
    "directory": "bower_components",
    "registry": {
        "search": [
            "https://<host>/nexus/repository/bower/"
        ]
    },
    "resolvers": [ "bower-nexus3-resolver" ]
}
So far it has had anonymous access, so this worked fine. However, authentication has now been enabled on Nexus, so we need some way of authenticating.
I've read some documentation, and it suggests we need to add:
{
    "nexus": {
        "username": "myusername",
        "password": "mypassword"
    }
}
But this uses plain text credentials. Is there a way to use authentication without plain text credentials?
Professional (licensed) customers can use user tokens as documented here.
To OSS users, I know that might sound a bit like NXRM doesn't care, but the plain-text credentials shown there come from Bower, not from NXRM, so to have them guarded is really (in this example) a Bower change.
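If you do have user tokens, one way to keep even the token out of the file itself is to reference environment variables from .bowerrc. This is only a sketch: it assumes your Bower version substitutes ${VAR} placeholders in .bowerrc values and that bower-nexus3-resolver picks up the interpolated "nexus" section, and the variable names below are made up for illustration:
{
    "nexus": {
        "username": "${NEXUS_TOKEN_NAME_CODE}",
        "password": "${NEXUS_TOKEN_PASS_CODE}"
    }
}
You would then export the token name code and pass code in the environment that runs bower install (a CI job, for example), so nothing sensitive is committed with the file.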
Related
I want to mirror a Bitbucket repo in Google Cloud Source Repositories, so what do I need to pass in the JSON request body below?
{
    "mirrorConfig": {
        "url": "",
        "deployKeyId": "",
        "webhookId": ""
    },
    "name": "",
    "pubsubConfigs": {}
}
As mentioned in the projects.repos resource reference, the mirrorConfig field is currently read-only, so it is not possible to set any values for it manually.
Currently it is not possible to mirror the repository via the API; you will have to connect to the external sources through the Cloud Console, as explained in the Mirroring a Bitbucket repository documentation.
I'm trying to resolve a bigger issue by splitting it into smaller bits. The first problem is that I don't know how to hide the Identity microservice behind the gateway properly. For the purpose of this post, I've created a simple demo app that gets deployed to Docker (available on GitHub). It has two microservices inside: OcelotGateway (OcelotIdentity project) deployed to localhost:7060 and an IdentityServer microservice (Identity project) deployed to localhost:7050. Here's my Ocelot configuration file:
{
    "ReRoutes": [
        {
            "DownstreamPathTemplate": "/{route}",
            "UpstreamPathTemplate": "/identity/{route}",
            "UpstreamHttpMethod": [ "Get", "Options", "Post" ],
            "DownstreamScheme": "http",
            "ServiceName": "identity"
        }
    ],
    "GlobalConfiguration": {
        "RequestIdKey": "OcRequestId",
        "AdministrationPath": "/administration"
    }
}
So I expect to see IdentityServer's quickstart page at localhost:7060/identity, but I get a 404 instead. The page works fine when I reach it directly at the Identity server's URL (localhost:7050).
You probably already figured out the answer, but just for future generations: I suppose the problem is your catch-all route, which expects something like /identity/something to be passed on as /something.
To display the quickstart page, you should define another re-route that only catches /identity and forwards it to /. Then no something is required and the re-route should work just fine.
Also, the scheme should preferably be https.
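As a rough sketch (not from the original answer, and reusing the ServiceName from the question's configuration), the extra re-route could look something like this next to the existing one:
{
    "DownstreamPathTemplate": "/",
    "UpstreamPathTemplate": "/identity",
    "UpstreamHttpMethod": [ "Get" ],
    "DownstreamScheme": "http",
    "ServiceName": "identity"
}
Check how your Ocelot version orders or prioritises overlapping routes, so that /identity is matched by this entry rather than by the catch-all.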
What is the REST API call to anonymously upload to a publicly shared folder on OneDrive?
I have tried sharing a folder both through the OneDrive Web UI, creating a link attributed with "Anyone with this link can edit this item", and using the REST API:
POST https://graph.microsoft.com/v1.0/drives/{driveId}/items/{sharedFolderId}/createLink
Content-type: application/json
{
    "type": "edit",
    "scope": "anonymous"
}
In both cases, I can read from the shared folder without logging on by
GET https://api.onedrive.com/v1.0/shares/{shareId}/items/{sharedFolderId}
I can also read the permission itself using
GET https://api.onedrive.com/v1.0/shares/{shareId}/items/{sharedFolderId}/permissions
=>
{
    "@odata.context": "https://api.onedrive.com/v1.0/$metadata#shares('{shareId}')/items('{sharedFolderId}')/permissions",
    "value": [
        {
            "id": "{permissionId}",
            "link": {
                "application": {
                    "displayName": "{my own app}",
                    "id": "{short app id}"
                },
                "type": "edit",
                "webUrl": "https://1drv.ms/u/{shareId}"
            },
            "roles": ["write"],
            "shareId": "{shareId}",
            "expirationDateTime": "0001-01-01T00:00:00Z",
            "hasPassword": false
        }
    ]
}
However, trying to upload a file or create a subfolder, i.e.
PUT https://api.onedrive.com/v1.0/shares/{shareId}/driveItem:/{filename}:/content
Content-type: text/plain
some text goes here
or
POST https://api.onedrive.com/v1.0/shares/{shareId}/items/{sharedFolderId}/children
Content-type: application/json
{
    "name": "TestFolder",
    "folder": { }
}
both fail as unauthorized calls - but isn't the whole point of an "edit" link with "anonymous" scope that "anyone with this link can edit this item"?
I have tried various combinations of https://graph.microsoft.com/v1.0 instead of https://api.onedrive.com/v1.0 and /drives/{driveId} instead of /shares/{shareId} as well as /shares/{shareToken}, where shareToken is the "u!"-encoding of the webUrl from the link in the permission.
So far without being able to figure out the right REST API call. I hope someone is able to help :-)
You can download my TestOneDrive Visual Studio test project to reproduce the issues. It also contains initialization code to create and share the folder.
Since no one from the Product Group is following this and no official docs have announced it, I suggest you submit a feature request on UserVoice first, or vote up an existing one that is close to your issue.
Hello, I am working with multiple AWS frameworks on an iOS project. The app is set up to target the specific backend environments through a dev and a prod target in Xcode.
This generally works fine through the use of constants and macros to pick the different identity pools etc. at build time.
However, I am now using AWSGoogleSignInProvider to link Google Sign-In and Cognito. This requires an awsconfiguration.json file in the project, which contains the Google ID and the Cognito ID.
{
    "Version": "1.0",
    "CredentialsProvider": {
        "CognitoIdentity": {
            "Default": {
                "PoolId": "***",
                "Region": "***"
            }
        }
    },
    "IdentityManager": {
        "Default": {}
    },
    "GoogleSignIn": {
        "ClientId-iOS": "***",
        "Permissions": "email,profile,openid"
    }
}
I'm unsure how I can target dev/prod, since I would need to use different pool IDs depending on the environment. I can't use two files with different names and targets, since the file name is "immutable", and I can't use any macros in the JSON file itself.
Looking at the AWS framework, it seems there is no way to set any of these manually, and the shared instance gets the Google ID from the JSON file on instantiation or throws.
UPDATE: I eventually answered my own question. See the Answers section for a tutorial that solves this problem.
The question:
What exactly is the policy that is needed for an external source to access an AWS S3 bucket through the API controls?
Details:
I'm following the Rails Tutorial by Michael Hartl, and I reached the end of lesson 11 where we use CarrierWave to store image files in an AWS S3 bucket. I was able to get it to work (had to add a region ENV variable) but only with a user who has full admin privileges. Obviously that's not ideal. I created a User account specifically for the purpose, but all the walkthroughs only seem to be concerned with web browser access. In fact, I was able to create policies that would allow the user to only be able to read, write, and delete in the specific bucket, but that only worked through a web browser and not through the API. The API access only worked when I attached the AdministratorAccess policy.
Here's what I have so far:
Policy: AllowRootLevelListingOfMyBucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupToSeeBucketListAndAlsoAllowGetBucketLocationRequiredForListBucket",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Sid": "AllowRootLevelListingOfMyBucket",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::MyBucket"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        ""
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        }
    ]
}
Policy: AllowUserToReadWriteObjectDataInMyBucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::MyBucket/*"
            ]
        }
    ]
}
As I said, this allows web browser access, but API access attempts return an "AccessDenied" error: Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))
What do I need to add for API access?
Update: I have narrowed down the problem a bit. There is some "Action" that I need to give permission for, but I haven't been able to identify the action exactly. But using a wildcard works, and I've been able to lock down the user account to only be able to access one bucket. Here's the change I made:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::MyBucket/*"
            ]
        }
    ]
}
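For reference, and purely as a guess rather than something I verified, a narrower policy that is often reported to be enough for CarrierWave/fog uploads lists the object actions explicitly (including the ACL action, since fog may set a public-read ACL on upload):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserToReadWriteObjectDataInMyBucket",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::MyBucket/*"
            ]
        }
    ]
}
If that still returns 403, the wildcard version above remains the safe fallback.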
I eventually answered my own question, and created a tutorial that others might want to follow:
The first thing you need to do is go back over the code that Hartl provided. Make sure you typed it (or copy/pasted it) in exactly as shown. Out of all the code in this section, there is only one small addition you might need to make: the "region" environment variable. This is needed if you create a bucket that is not in the default US region. More on this later. Here is the code for /config/initializers/carrier_wave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
That line :region => ENV['S3_REGION'] is a problem for a lot of people. As you continue this tutorial you will learn what it's for.
You should be using that block of code exactly as shown. Do NOT put your actual keys in there. We'll send them to Heroku separately.
Now let's move on to your AWS account and security.
First of all, create your AWS account. For the most part, it is like signing up for any web site. Make a nice long password and store it someplace secure, like an encrypted password manager. When you make your account, you will be given your first set of AWS keys. You will not be using those in this tutorial, but you might need them at some point in the future so save those somewhere safe as well.
Go to the S3 section and make a bucket. It has to have a unique name, so I usually just put the date on the end and that does it. For example, you might name it "my-sample-app-bucket-20160126". Once you have created your bucket, click on the name, then click on Properties. It's important for you to know what "Region" your bucket is in. Find it, and make a note of it. You'll use it later.
Your main account probably has full permissions to everything, so let's not use that for transmitting random data between two web services. This could cost you a lot of money if it got out. We'll make a limited user instead. Make a new User in the IAM section. I named it "fog", because that's the cloud service software that handles the sending and receiving. When you create it, you will have the option of displaying and/or downloading the keys associated with the new user. It's important that you keep them in a safe and secure place. They do NOT go into your code, because that will probably end up in a repository where other people can see it. Also, don't give this new user a password, since it will not be logging into the AWS dashboard.
Make a new Group. I called mine "s3railsbucket". This is where the permissions will be assigned. Add "fog" to this group.
Go to the Policies section. Click "Create Policy", then select "Create Your Own Policy". Give it a name that starts with "Allow" so it will show up near the top of the list of policies. It's a huge list. Here's what I did:
Policy Name: AllowFullAccessToMySampleAppBucket20160126
Description: Allows remote write/delete access to the S3 bucket named my-sample-app-bucket-20160126.
Policy Document:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::my-sample-app-bucket-20160126",
                "arn:aws:s3:::my-sample-app-bucket-20160126/*"
            ]
        }
    ]
}
Go back to the Group section, select the group you made, then add your new policy to the group.
That's it for AWS configuration. I didn't need to make a policy to allow "fog" to list the contents of the bucket, even though most tutorials I tried said that was necessary. I think it's only necessary when you want a user that can log in through the dashboard.
Now for the Heroku configuration. This stuff gets entered at your command prompt, just like 'heroku run rake db:migrate' and such. This is where you enter the actual Access Key and Secret Key you got from the "fog" user you created earlier.
$ heroku config:set S3_ACCESS_KEY=THERANDOMKEYYOUGOT
$ heroku config:set S3_SECRET_KEY=an0tHeRstRing0frAnDomjUnK
$ heroku config:set S3_REGION=us-west-2
$ heroku config:set S3_BUCKET=my-sample-app-bucket-20160126
Look again at that last one. Remember when you looked at the Properties of your S3 bucket? This is where you enter the code associated with your region. If your bucket is not in Oregon, you will have to change us-west-2 to your actual region code. This link worked when this tutorial was written:
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
If that doesn't work, Google "AWS S3 region codes".
After doing all this and double-checking for mistakes in the code, I got Heroku to work with AWS for storage of pictures!