StoredProfileAWSCredentials complains it cannot find the profile in a Web application - asp.net-mvc

I am seeing the following error when I call the StoredProfileAWSCredentials default constructor from within a web application:
System.AggregateException: One or more errors occurred. ---> System.ArgumentException: Path cannot be the empty string or all whitespace.
Parameter name: path
at System.IO.Directory.GetParent(String path)
at Amazon.Runtime.StoredProfileAWSCredentials.DetermineCredentialsFilePath(String profilesLocation) in d:\Jenkins\jobs\build-sdkandtools-release\workspace\sdk\src\AWSSDK_DotNet35\Amazon.Runtime\AWSCredentials.cs:line 354
at Amazon.Runtime.StoredProfileAWSCredentials..ctor(String profileName, String profilesLocation) in d:\Jenkins\jobs\build-sdkandtools-release\workspace\sdk\src\AWSSDK_DotNet35\Amazon.Runtime\AWSCredentials.cs:line 300
at Amazon.Runtime.StoredProfileAWSCredentials..ctor(String profileName) in d:\Jenkins\jobs\build-sdkandtools-release\workspace\sdk\src\AWSSDK_DotNet35\Amazon.Runtime\AWSCredentials.cs:line 270
at Amazon.Runtime.StoredProfileAWSCredentials..ctor() in d:\Jenkins\jobs\build-sdkandtools-release\workspace\sdk\src\AWSSDK_DotNet35\Amazon.Runtime\AWSCredentials.cs:line 260
I am calling this constructor with no parameters. I have a "default" profile defined, which has worked when used elsewhere. The web application is running under IIS 6.1 in an application pool that runs under my credentials (since the AWS credentials were stored under my login).
I have also tried the two-parameter constructor with the profile name "default" and the local disk path to the RegisteredAccounts.json file (which the AWS Toolkit extension for Visual Studio 2013 generated for me).
What is happening here and how do I fix it?
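For reference, a minimal sketch of the two calls described above (the profilesLocation path is just a placeholder, not a working value):

using Amazon.Runtime;

// Parameterless constructor: resolves the "default" profile from the SDK store.
var credentials = new StoredProfileAWSCredentials();

// Two-parameter form tried as a workaround: profile name plus an explicit file path.
var explicitCredentials = new StoredProfileAWSCredentials(
    "default", @"C:\path\to\RegisteredAccounts.json");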

You need to set the profile location path in the Web.config file:
<appSettings>
<add key="AWSProfilesLocation" value="path\to\RegisteredAccounts.json" />
<add key="AWSProfileName" value="default"/>
<add key="AWSRegion" value="us-west-2" />
</appSettings>
The credentials file that AWSProfilesLocation points to should be in the shared-credentials (ini) format, not the RegisteredAccounts.json format:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Another option is to include your keys directly in the Web.config/App.config:
<appSettings>
<add key="AWSAccessKey" value="your key" />
<add key="AWSSecretKey" value="your key"/>
</appSettings>
There are more details available in the documentation.
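With those appSettings in place, the SDK should pick up the profile automatically, so service clients can be constructed without passing credentials; a minimal sketch (the S3 client is just an example service, assuming the AWSSDK package is referenced):

using Amazon.S3;

// Credentials and region are resolved from the appSettings shown above.
var s3Client = new AmazonS3Client();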

I struggled with the same issue; fortunately, the source code is open on GitHub:
https://github.com/aws/aws-sdk-net/tree/master/AWSSDK_DotNet35
When you use the SDK store, it tries to get the path of the RegisteredAccounts.json file with this code:
System.Environment.GetFolderPath(System.Environment.SpecialFolder.LocalApplicationData) + "/AWSToolkit"
In my local environment it pointed to my user profile, something like "C:\Users\MyUser\AppData\Local\AWSToolkit"
But on the server it returned a blank string because the app pool was running as the built-in network account. I then changed the pool to run as a local user on the server, and even so the path was different from the one I got before:
"C:\Windows\system32\config\systemprofile\AppData\Local\AWSToolkit"
Then I copied the .json file there, but it didn't work until I logged in to the server as the new user and created a new file with the SDK's PowerShell tool, using the following command:
Set-AWSCredentials -AccessKey -SecretKey -StoreAs
It created the file under AppData in the user's profile folder, and I then copied it to the path under system32.
And then it finally worked. It seems too cumbersome, but I really needed the credentials to be encrypted; when you use AWSProfilesLocation you have to store the access key and secret key without any protection in a plain-text file.
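To see which folder the SDK store resolves to under the app pool identity, the same lookup the SDK performs can be reproduced from the web application; a minimal diagnostic sketch:

using System;
using System.IO;

// Mirrors the lookup quoted above; an empty result means the app pool identity
// has no loaded user profile, so the SDK store cannot be located.
string localAppData = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
string toolkitPath = string.IsNullOrEmpty(localAppData)
    ? "(empty - no user profile loaded)"
    : Path.Combine(localAppData, "AWSToolkit");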

Related

Unable to upload media files on production Umbraco server

Locally I'm able to upload images fine, but on my deployed website I'm unable to upload images to the media section. I receive an internal 500 error with the following request shown in Chrome:
POST http://[url]/umbraco/backoffice/UmbracoApi/Media/PostAddFile?origin=blueimp 500 (Internal)
I looked at the Umbraco logs and saw this:
2015-12-19 00:15:15,234 [P16524/D8/T73] ERROR Umbraco.Web.WebApi.Filters.FileUploadCleanupFilterAttribute - Could not acquire actionExecutedContext.Response.Content System.NullReferenceException: Object reference not set to an instance of an object. at Umbraco.Web.WebApi.Filters.FileUploadCleanupFilterAttribute.OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
And in the database table dbo.UmbracoLog, each upload attempt creates a new record:
id    userId  NodeId  Datestamp                logHeader  logComment
1885  0       0       2015-12-19 00:15:15.233  New Media  'IMG_3242.jpg' was created
These failed uploads are the only ones with a NodeId of 0. Not sure if that's strange. Any help would be much appreciated.
I'm using Umbraco v7.3.1.
I did some digging on this error and found that it is a very unhelpful error message. It surfaces as a null-reference exception, but the real problem is ASP.NET's default maximum request size blocking the upload of a large image file. Adding the following to the web.config resolved the issue:
<system.web>
<httpRuntime maxRequestLength="204800" executionTimeout="99999"/>
</system.web>
On Azure, try removing fcnMode="Single" from the httpRuntime node and adding targetFramework="4.5".
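On IIS 7 and later, the request-filtering limit can also cap uploads; if raising maxRequestLength alone does not help, the matching system.webServer limit may need raising too. A sketch (maxRequestLength is in KB, maxAllowedContentLength is in bytes; 209715200 bytes matches the 204800 KB above and is only an illustrative value):
<system.webServer>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="209715200" />
</requestFiltering>
</security>
</system.webServer>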

Mule HTTP Request Config with OAuth2

I am experimenting with OAuth2 on the HTTP request connector. It always throws the exception below:
SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'oauth2:authorization-code-grant-type'. One of '{"http://www.mulesoft.org/schema/mule/core":annotations, "http://www.mulesoft.org/schema/mule/http":abstract-http-request-authentication-provider, "http://www.mulesoft.org/schema/mule/tcp":client-socket-properties, "http://www.mulesoft.org/schema/mule/tls":context, "http://www.mulesoft.org/schema/mule/http":raml-api-configuration, "http://www.mulesoft.org/schema/mule/http":proxy, "http://www.mulesoft.org/schema/mule/http":ntlm-proxy}' is expected
Here is my configuration:
<http:request-config name="SF_Authorize_Configuration" protocol="HTTPS" host="${login.host}" basePath="${oauth2.url}" port="80" doc:name="Authorize Configuration" >
<oauth2:authorization-code-grant-type clientId="my_client_id" clientSecret="my_client_secret" redirectionUrl="http://localhost:8081/oauth2callback">
<oauth2:authorization-request authorizationUrl="https://my.api.com/services/oauth2/authorize" localAuthorizationUrl="http://localhost:8082/authorization" scopes="access_user_details, read_user_files">
</oauth2:authorization-request>
<oauth2:token-request tokenUrl="https://my.api.com/services/oauth2/token"/>
</oauth2:authorization-code-grant-type>
</http:request-config>
This means that you have not declared the XML namespace for that element.
If you did not create this configuration through the UI, build it in the visual designer first; you can then copy in and replace your specific element afterwards.
Edited answer: it was similar for me with APIkit. I reinstalled Studio (unzipped it again); this might work for you as well.
I encountered the same issue. It was resolved by adding the oauth2 namespace declarations to the opening mule tag, e.g.:
<mule xmlns:http="http://www.mulesoft.org/schema/mule/http"
...
xmlns:oauth2="http://www.mulesoft.org/schema/mule/oauth2"
http://www.mulesoft.org/schema/mule/oauth2
...
>

RightFax 10.5 java integration issue

I am trying to do a POC of RightFax integration with the Java API. I installed all required components on the RightFax server (Java/XML API) and configured IIS (this was handled while installing the RightFax server). When I run the sample Java program I get the following message.
Here are the details of the output in debug mode:
<XML_FAX_SUBMIT java="1" stylesheet="XML_FAX_SUBMIT.xslt" xmlns="x-schema:XML_FAX_SUBMIT.xdr">
<INCLUDE_BEG>xml.beg</INCLUDE_BEG>
<SENDER>
<RF_USER>ADMINISTRATOR</RF_USER>
</SENDER>
<DESTINATIONS>
<FAX>
<TO_FAXNUM>555-7777</TO_FAXNUM>
</FAX>
</DESTINATIONS>
<BODY>
How about some body text.
Line 2
Line 3
</BODY>
<INCLUDE_END>xml.end</INCLUDE_END>
</XML_FAX_SUBMIT>
Initiating Connection to: http://<name>/rfxml/rfwebcon.dll
RETURN XML:
<?xml version="1.0"?>
<XML_FAX_SUBMIT_REPLY>
<FAX unique_id="unknown">
<STATUS_CODE>-1</STATUS_CODE>
<STATUS_MSG>Failed to load XML into DOM tree.</STATUS_MSG>
</FAX>
</XML_FAX_SUBMIT_REPLY>
Message Successfully Transported
ID: unknown
Code: -1
Msg: Failed to load XML into DOM tree.
Ended
Could anyone help if you have come across this type of issue, or point out any configuration that might be missing on the fax server or IIS side?
// Create an outbound fax object
RFaxSubmit faxSubmit = new RFaxSubmit();
// Set the XMLNS and make sure XML_FAX_SUBMIT_schema.xml is on your classpath.
faxSubmit.m_FaxDocument.setXMLNS("classpath:XML_FAX_SUBMIT_schema.xml");
This file should be available on your RightFax server at \RightFax\Production\xml\schemas\XML_FAX_SUBMIT_schema.xml; download it (or ask server support for it) and add it to your classpath.

Website address ending in colon giving server error

When I have a website address like http://www.websiteaddress2323.com/info/Value23: (with a trailing colon), the website gives an HTTP 500 Internal Server Error because the address ends with a colon.
I added the following to the web.config so that any link under that path is treated as valid:
<add name="UrlRoutingHandler1" type="System.Web.Routing.UrlRoutingHandler,
System.Web, Version=4.0.0.0, Culture=neutral,
PublicKeyToken=b03f5f7f11d50a3a" path="/info/*" verb="GET" />
This works on my localhost server, but when I push it out to Azure it gives the 500 Internal Server Error.
Is there any way to fix this? What I want to do is permanently rewrite the address to /info/Value23 by removing the colon, but the error is thrown before Page_Load is ever called, so my code never gets a chance to catch this address and redirect.
If there is a colon at the end, the web server expects a port number to follow it; that's why it gives you a server error.
Here are details on URL syntax:
http://www.utoronto.com/webdocs/HTMLdocs/NewHTML/url.html
For a correct URL, you either need to provide a port number after the colon or remove the colon.
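One possible way to do the permanent rewrite before any page code runs is Application_BeginRequest in Global.asax, which fires before page handlers and can work from the raw URL; a minimal sketch, assuming the /info/ prefix from the question (adjust the rule to your routes, and note the request must reach ASP.NET for this to run):

using System;
using System.Web;

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // RawUrl is the URL exactly as the client sent it, before ASP.NET path validation.
    string rawUrl = HttpContext.Current.Request.RawUrl;

    // If an /info/ URL ends with a colon, strip it and issue a 301 redirect.
    if (rawUrl.StartsWith("/info/", StringComparison.OrdinalIgnoreCase) && rawUrl.EndsWith(":"))
    {
        HttpContext.Current.Response.RedirectPermanent(rawUrl.TrimEnd(':'));
    }
}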

Amazon S3 : Access Denied for URL using symbols

I would like to download some files uploaded to my S3 account.
For the moment, all my buckets and the files inside them are public, so I can download whatever I want.
Unfortunately, I can't access files whose names contain special characters such as a space or "&"...
I tried replacing the special characters in my URL with percent-encoding, changing:
http://s3-eu-west-1.amazonaws.com/custom.bucket/mods/b&b.jar
to
http://s3-eu-west-1.amazonaws.com/custom.bucket/mods/b%26b.jar
But I always get the same error:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>3E987FCE07075166</RequestId>
<HostId>
O2EIujdbiAeYg44rsezQlargfT7qVSL8SpqbTxkd/1UwxQrwZ3SJ+R3NlHyGF7rI
</HostId>
</Error>
Can anybody help me resolve this problem?
I can't rename the files because they are used by other applications.
I am able to download public files with '&' in the name with no problems using curl:
curl https://s3.amazonaws.com/mybucket/test/b%26b.jar
Recheck the permissions on your file using the AWS console. Make sure the file has "Grantee: Everyone" with the Open/Download permission ticked.
Make sure to click the "Save" button after you change these permissions. Alternatively, try accessing the file using your security credentials.
I am able to download a file with a special character in its name:
# wget --no-check-certificate https://s3-us-west-2.amazonaws.com/bucket1234/b%26b.jar
--2013-12-01 14:15:20-- https://s3-us-west-2.amazonaws.com/bucket1234/b%26b.jar
Resolving s3-us-west-2.amazonaws.com... 54.240.252.26
Connecting to s3-us-west-2.amazonaws.com|54.240.252.26|:443... connected.
WARNING: certificate common name `*.s3-us-west-2.amazonaws.com' doesn't match requested host name `s3-us-west-2.amazonaws.com'.
HTTP request sent, awaiting response... 200 OK
Length: 0 [application/x-java-archive]
Saving to: `b&b.jar'
[ <=> ] 0 --.-K/s in 0s
2013-12-01 14:15:22 (0.00 B/s) - `b&b.jar' saved [0/0]
Are you sure this file is publicly visible? Could you double-check the permissions on it? This is definitely not an issue with the special character.
Can you log in to the AWS S3 console and check what download link it shows there?
Is there any mismatch in the link because of double encoding? Please make sure you are not URL-encoding the key in your own code when uploading the file.
In your case the double-encoded URL would be:
http://s3-eu-west-1.amazonaws.com/custom.bucket/mods/b%2526b.jar
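As a cross-check, downloading through the AWS SDK for .NET sidesteps manual URL encoding entirely, because the SDK takes the raw key and handles encoding and signing itself; a minimal sketch using the bucket and key from the question (the region, credential setup, and output path are assumptions):

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

// Credentials come from the usual SDK sources (config file, SDK store, etc.).
var client = new AmazonS3Client(RegionEndpoint.EUWest1);

var request = new GetObjectRequest
{
    BucketName = "custom.bucket",
    Key = "mods/b&b.jar"   // raw key, no manual URL encoding
};

using (GetObjectResponse response = client.GetObject(request))
{
    response.WriteResponseStreamToFile(@"C:\temp\b&b.jar");
}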
