I want to get information about the local git user who pushed to my git server, so I can use that username as the user in my authorization and the user then only needs to enter their password. How can I solve this? I can't get that information. I used a library to build this; maybe someone could help me out. I would appreciate any help.
This is how I receive the git request on the server:
public ActionResult Smart(string username, string project, string service, string verb)
{
    switch (verb)
    {
        case "info/refs":
            return InfoRefs(username, project, service);
        case "git-upload-pack":
            return ExecutePack(username, project, "git-upload-pack");
        case "git-receive-pack":
            return ExecutePack(username, project, "git-receive-pack");
        default:
            return RedirectToAction("Tree", "Repository", new { Name = project });
    }
}
There is no such thing as a git username. There are the signatures for the author and committer for each commit and then there's whatever you made the user authenticate with.
The only way you're going to be able to know who initiated the push is to ask the authentication layer that you put in front of the git protocol. If you use HTTP to serve the repository, that information would be in the HTTP library, if you use SSH, in the SSH library.
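For example, if the repositories are served over HTTP with Basic authentication in front of them, the pushing user can be read either from the authenticated principal or straight from the Authorization header inside the ASP.NET MVC action. A minimal sketch, assuming HTTP Basic authentication; the parsing and the 401 challenge below are illustrative, not part of any git-serving library:
using System;
using System.Text;
using System.Web.Mvc;

public class GitController : Controller
{
    public ActionResult Smart(string username, string project, string service, string verb)
    {
        // If an authentication module has already run, the pushing user is on the principal.
        string pusher = (User != null && User.Identity.IsAuthenticated) ? User.Identity.Name : null;

        if (pusher == null)
        {
            // Otherwise parse the Basic credentials ourselves and challenge when they are missing.
            string header = Request.Headers["Authorization"];
            if (string.IsNullOrEmpty(header) || !header.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
            {
                Response.AddHeader("WWW-Authenticate", "Basic realm=\"git\"");
                return new HttpStatusCodeResult(401);
            }

            string decoded = Encoding.UTF8.GetString(Convert.FromBase64String(header.Substring("Basic ".Length)));
            string[] parts = decoded.Split(new[] { ':' }, 2);   // "user:password"
            pusher = parts[0];
            // ...verify parts[1] (the password) against your user store here...
        }

        // 'pusher' is the identity that initiated the push; use it for authorization
        // before dispatching on 'verb' as in the question's action.
        return Content(pusher);
    }
}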
I'm creating an MVC Core app and deploying it to an Azure App Service. I'm trying to send emails from the application using SendGrid, which seems to work fine in my local environment but does not work in production. I'm using free subscriptions for everything Azure.
I've followed this pretty much to a T.
This type of question has popped up on Stack Overflow and GitHub (here and here, etc.), but after going through about 50 such posts, nothing seems to work for me. Reading through the SendGrid documentation doesn't help a lot either, because all the examples provided look like my own code. I don't get any exceptions, and like I mentioned, it works just fine locally.
Please help
Code
string sendGridApiKey = _configuration["SENDGRID_API_KEY"];
var client = new SendGridClient(sendGridApiKey);
var msg = new SendGridMessage();
msg.SetFrom(new EmailAddress(email: "management@enr.com",
    name: "ENR Management"));
msg.AddTo(new EmailAddress(email: user.Email, name: user.FriendlyName));
msg.SetSubject("Reset Password");
msg.AddContent(MimeType.Html, $"Please reset your password by <a href='{HtmlEncoder.Default.Encode(callbackUrl)}'> clicking here </a>.");
msg.AddContent(MimeType.Text, "Please reset your password by clicking the link");
var response = await client.SendEmailAsync(msg).ConfigureAwait(false);
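(For debugging, the SendGrid client does not throw on non-success status codes by default, so one could inspect the response right after the send; _logger here is an assumed ILogger field, not something from my actual code:)
if ((int)response.StatusCode >= 400)
{
    // Log the error payload SendGrid returned (e.g. a rejected or missing API key).
    var body = await response.Body.ReadAsStringAsync().ConfigureAwait(false);
    _logger.LogError("SendGrid returned {StatusCode}: {Body}", response.StatusCode, body);
}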
Being called by
_emailService.SendResetPasswordEmail(
user: user,
callbackUrl: callbackUrl).Wait();
appsettings.json
{
  "ConnectionStrings": {
    "DefaultConnection": "XXX",
    "ENRModelsDB": "XXX"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "SENDGRID_API_KEY": "SG.XXX",
  "AllowedHosts": "*"
}
I also have the same key/value in my App Service in Azure under Configuration -> Application settings, for what it's worth.
Could it be that your App Service has the configuration set up with a different value?
Another suggestion is to debug your app while it is running in the App Service to see what exactly is happening:
Introduction to Remote Debugging on Azure Web Sites
(It is old, but it will give you the idea.)
I finally found the issue and I feel so stupid.
I only send one email from my app, the password reset email. In my live environment, it would fail at this step in ForgotPassword.cshtml.cs (the scaffolded page):
if (user == null || !(await _userManager.IsEmailConfirmedAsync(user)))
{
    // Don't reveal that the user does not exist or is not confirmed
    return RedirectToPage("./ForgotPasswordConfirmation");
}
because when I seeded the user, I did not set EmailConfirmed to true.
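For reference, a minimal seeding sketch with the flag set so ForgotPassword does not short-circuit; the values are placeholders, and it assumes an async seed method with a UserManager<IdentityUser> available:
// Inside an async seed method that receives a UserManager<IdentityUser> userManager.
// EmailConfirmed = true is the key part: without it, IsEmailConfirmedAsync fails
// and the page silently redirects before ever sending the email.
var user = new IdentityUser
{
    UserName = "admin@enr.com",
    Email = "admin@enr.com",
    EmailConfirmed = true
};
await userManager.CreateAsync(user, "Str0ng!Passw0rd");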
I could not have done it without the remote debug suggestion. Execution never even got to the part where it is supposed to send the email, and no errors were reported because there were none.
I found some newer articles (here and here) to help with the remote debugging, which came with their own rabbit holes.
Thanks for the suggestion, @KodiaMx.
We use Bitbucket server and want to trigger a Jenkins build whenever something is pushed to Bitbucket.
I tried to set up everything according to this page:
https://wiki.jenkins.io/display/JENKINS/BitBucket+Plugin
So I created a Post Webhook in Bitbucket, pointing at the Jenkins Bitbucket plugin's endpoint.
Bitbucket successfully notifies the plugin when a push occurs. According to the Jenkins logs, the plugin then iterates over all jobs where "Build when a change is pushed to BitBucket" is checked, and tries to match that job's repo URL to the URL of the push that occurred.
So, if the repo URL is
https://jira.mycompany.com/stash/scm/PROJ/project.git, the plugin tries to match it against
https://jira.mycompany.com/stash/PROJ/project, which obviously fails.
As per official info from Atlassian, Bitbucket cannot be prevented from inserting the "/scm/" part in the path.
The corresponding code in the Bitbucket Jenkins plugin is in class com.cloudbees.jenkins.plugins.BitbucketPayloadProcessor:
private void processWebhookPayloadBitBucketServer(JSONObject payload) {
    JSONObject repo = payload.getJSONObject("repository");
    String user = payload.getJSONObject("actor").getString("username");
    String url = "";
    if (repo.getJSONObject("links").getJSONArray("self").size() != 0) {
        try {
            URL pushHref = new URL(repo.getJSONObject("links").getJSONArray("self").getJSONObject(0).getString("href"));
            url = pushHref.toString().replaceFirst(new String("projects.*"), new String(repo.getString("fullName").toLowerCase()));
            String scm = repo.has("scmId") ? repo.getString("scmId") : "git";
            probe.triggerMatchingJobs(user, url, scm, payload.toString());
        } catch (MalformedURLException e) {
            LOGGER.log(Level.WARNING, String.format("URL %s is malformed", url), e);
        }
    }
}
In the JSON payload that Bitbucket sends to the plugin, the actual checkout URL doesn't appear; only the link to the repository's Bitbucket page does. The method above appears to construct the checkout URL from that link by replacing everything from projects/ onward with the repo's lowercased "full name", resulting in the wrong URL shown above. (As noted, Atlassian's official position is that Bitbucket cannot be prevented from adding the "scm" part to the checkout URL.)
Is this a bug in the Jenkins plugin? If so, how can the plugin work for anyone?
I found the reason for the failure.
The issue is that the Bitbucket plugin for Jenkins does account for the /scm part in the path, but only if it's the first part after the host name.
If your Bitbucket server instance is configured not under its own domain but under a path of another service, matching the checkout URLs will fail.
Example:
https://bitbucket.foobar.com/scm/PROJ/myproject.git will work,
https://jira.foobar.com/stash/scm/PROJ/myproject.git will not work.
Someone who also had this problem has already created a fix for the plugin, the pull request for which is pending: JENKINS-49177: Now removing first occurrence of /scm
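For illustration only (this is not the plugin's actual code), the rewrite that the pending fix describes amounts to dropping the first /scm path segment wherever it appears, so both URL shapes end up matching. A small C# sketch of that idea:
using System;
using System.Text.RegularExpressions;

class ScmRewriteDemo
{
    // Remove only the first "/scm" path segment from a checkout URL.
    static string StripScm(string checkoutUrl) =>
        new Regex("/scm(?=/)").Replace(checkoutUrl, "", 1);

    static void Main()
    {
        // Works whether Bitbucket runs on its own domain or under a context path such as /stash.
        Console.WriteLine(StripScm("https://bitbucket.foobar.com/scm/PROJ/myproject.git"));
        // https://bitbucket.foobar.com/PROJ/myproject.git
        Console.WriteLine(StripScm("https://jira.foobar.com/stash/scm/PROJ/myproject.git"));
        // https://jira.foobar.com/stash/PROJ/myproject.git
    }
}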
I have found a Bitbucket API endpoint like this:
https://bitbucket.org/api/2.0/repositories/{teamname}
But this link returns a 301 status (moved permanently to !api/2.0/repositories/{teamname}).
OK, but that one returns status 200 with zero repositories.
I provide my username and password as parameters, but nothing seems to change.
So, can anybody tell me how to get the full list of private repositories that a specific user is allowed to access?
Atlassian Documentation - Repositories Endpoint provides detailed documentation on how to access the repositories.
The URL mentioned in Bitbucket to GET a list of repositories for an account is:
GET https://api.bitbucket.org/2.0/repositories/{owner}
If you use the above URL, it always retrieves the repositories where you are the owner. In order to retrieve the full list of repositories that the user is a member of, you should call:
GET https://api.bitbucket.org/2.0/repositories?role=member
You can apply the following role filters based on your needs (see the sketch after this list):
To limit the set of returned repositories, apply the role=[owner|admin|contributor|member] parameter, where the roles are:
owner: returns all repositories owned by the current user.
admin: returns repositories to which the user has explicit administrator access.
contributor: returns repositories to which the user has explicit write access.
member: returns repositories to which the user has explicit read access.
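If you are calling the API from C#, a minimal sketch with HttpClient might look like the following; the username and app password are placeholders, and the Authorization header is set explicitly so the credentials are sent even without a 401 challenge:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class BitbucketRepos
{
    static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://api.bitbucket.org/2.0/") };

        // Send Basic credentials eagerly rather than waiting for a challenge.
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("your_username:your_app_password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // role=member returns every repository the authenticated user can at least read.
        var json = await client.GetStringAsync("repositories?role=member");
        Console.WriteLine(json);
    }
}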
Edit-1:
You can make use of the Bitbucket REST browser for testing the request/response (now discontinued).
You should not use the API from the https://bitbucket.org/api domain.
Instead, you should always use https://api.bitbucket.org.
One reason you might be getting an empty result after following the redirect is that some HTTP clients will only send Basic Auth credentials if the server explicitly asks for them by returning a 401 response with the WWW-Authenticate response header.
The repositories endpoint does not require authentication. It will simply return the repos that are visible to anonymous users (which might well be an empty set in your case), so clients that insist on a WWW-Authenticate challenge (there are many, including Microsoft PowerShell) will not work as expected. (Note that curl always sends Basic Auth credentials eagerly, which makes it a good tool for testing.)
Unfortunately, from what I see in the documentation, there is no way to list all private repositories which the user has access to.
GET https://api.bitbucket.org/2.0/repositories
"Returns a paginated list of all public repositories." according to the doco.
GET https://api.bitbucket.org/2.0/repositories/{owner}
"Returns a paginated list of all repositories owned by the specified account or UUID." according to the doco.
So, getting all private repositories not necessarily owned by the user is either not possible, or I haven't found the right endpoint, or the documentation is inaccurate.
None of the answers above worked for me, so this is what I did. We'll use the Bitbucket REST API.
Authentication
You can't use your normal credentials. I created an App Password. I'm not sure how to get to this page via your browser, but go here: https://bitbucket.org/account/settings/app-passwords/
Create an App Password, then cut and save the password that Atlassian generates for you.
Curl
curl --user your_username:your_app_password https://api.bitbucket.org/2.0/repositories/your_workspace?pagelen=100
I piped that to jq and saved it to a file.
You get your_workspace by looking at the URL of any of your repositories.
Paging
The maximum pagelen appears to be 100. If you have more than 100 repos, you might have to do this (quote the URL so the shell does not treat & as a background operator):
curl --user your_username:your_app_password "https://api.bitbucket.org/2.0/repositories/your_workspace?pagelen=100&page=2"
The JSON
The JSON isn't too bad. You want the "values" array. From there, look at links.clone, which might have two entries like this:
"clone": [
{
"href": "https://user#bitbucket.org/WORKSPACE/REPO.git",
"name": "https"
},
{
"href": "git#bitbucket.org:WORKSPACE/REPO.git",
"name": "ssh"
}
],
That's a cut & paste from my results with personal info changed. Also useful are two other fields:
"full_name": "WORKSPACE/repo",
"name": "Repo",
Expanding on blizzard's answer, here's a little Node.js (TypeScript) script I just wrote:
import axios from 'axios';
import fs from 'fs';

async function main() {
    // Basic-auth axios client for the Bitbucket 2.0 API; credentials come from env vars.
    const bitbucket = axios.create({
        baseURL: 'https://api.bitbucket.org/2.0',
        auth: {
            username: process.env.BITBUCKET_USERNAME!,
            password: process.env.BITBUCKET_PASSWORD!,
        }
    });

    const repos = [];
    // Follow the paginated "next" links until they run out.
    let next = 'repositories?role=member';
    for (;;) {
        console.log(`Fetching ${next}`);
        const res = await bitbucket.get(next);
        if (res.status < 200 || res.status >= 300) {
            console.error(res);
            return 1;
        }
        repos.push(...res.data.values);
        if (!res.data.next) break;
        next = res.data.next;
    }

    console.log(`Done; writing file`);
    await fs.promises.writeFile(`${__dirname}/../data/repos.json`, JSON.stringify(repos, null, 2), { encoding: 'utf8' });
}

main().catch(err => {
    console.error(err);
});
I am working on Braintree and I want to send custom email notifications to my customers. Since I am working with recurring billing, these custom notifications should be sent to all users every month. For this I have to use webhooks to retrieve the event that just occurred and then send an email notification based on the webhook's response. (I think this is the only solution in this case; if anyone knows another possible solution, please suggest it.) I want to test the webhooks on my localhost first, so I tried to create a new webhook and specified the localhost path as the destination. But this shows the error "Destination is not verified".
My path is : "http://127.0.0.1:81/webhook/Accept"
These are some of the tools that can be used during development of webhooks :
1) PostCatcher,
2) RequestBin,
3) ngrok,
4) PageKite and
5) LocalTunnel
http://telerivet.com/help/api/webhook/testing
https://www.twilio.com/blog/2013/10/test-your-webhooks-locally-with-ngrok.html
Well, another way to test it is by creating a Web API and POSTing data to your POST method via Postman. To do this, just create a Web API in Visual Studio and, in the API controller, create a POST method.
/// <summary>
/// Web API POST method for Braintree Webhook request
/// The data is passed through HTTP POST request.
/// A sample data set is present in POSTMAN HTTP Body
/// /api/webhook
/// </summary>
/// <param name="BTRequest">Data from HTTP request body</param>
/// <returns>Webhook notification object</returns>
public WebhookNotification Post([FromBody] Dictionary<String, String> BTRequest)
{
    WebhookNotification webhook = gateway.WebhookNotification.Parse(BTRequest["bt_signature"], BTRequest["bt_payload"]);
    return webhook;
}
In Postman, POST the following data in the body as raw JSON.
{
  "bt_signature": "Generated Data",
  "bt_payload": "Very long generated data"
}
The data for the above JSON dictionary was generated with the code below:
Dictionary<String, String> sampleNotification = gateway.WebhookTesting.SampleNotification(WebhookKind.DISPUTE_OPENED, "my_Test_id");
// Your Webhook kind and your test ID
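To get those two values into Postman, you could simply print them (a trivial sketch continuing the snippet above):
// Write the generated signature and payload so they can be pasted into the Postman body.
Console.WriteLine(sampleNotification["bt_signature"]);
Console.WriteLine(sampleNotification["bt_payload"]);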
Just take the data from the sample notification and place it in the JSON above. Run your Web API and set breakpoints. Add the localhost URL in Postman, select POST, and click Send.
Your POST method should be hit.
Also, don't forget to add your gateway details:
private BraintreeGateway gateway = new BraintreeGateway
{
    Environment = Braintree.Environment.SANDBOX,
    MerchantId = "Your Merchant Id",
    PublicKey = "Your Public Key",
    PrivateKey = "Your Private Key"
};
I hope this helps!
I work at Braintree. If you need more help, please get in touch with our support team.
In order to test webhooks, your app needs to be reachable by the Braintree gateway, and a localhost address isn't. Try using your external IP address, and make sure the port on the correct computer can be reached from the internet.
Take a look at the Braintree webhook guide for more info on setting up webhooks.
You can use PutsReq to simulate the response you want and do your end-to-end test in development.
For quick 'n dirty testing:
http://requestb.in/
For more formal testing (e.g. continuous integration):
https://www.runscope.com/
If you have an online server, you can forward a port from it to your computer:
ssh -nNT -R 9090:localhost:3000 root@yourvds.com
Then specify the webhook as http://yourvds.com:9090/webhook.
All requests will be forwarded to your machine, and you will be able to see the logs.
I know this is an old question, but according to the docs, you can use this code to test your webhook code:
Dictionary<String, String> sampleNotification = gateway.WebhookTesting.SampleNotification(
WebhookKind.SUBSCRIPTION_WENT_PAST_DUE, "my_id"
);
WebhookNotification webhookNotification = gateway.WebhookNotification.Parse(
sampleNotification["bt_signature"],
sampleNotification["bt_payload"]
);
webhookNotification.Subscription.Id;
// "my_id"
You can use the Svix CLI Listener: https://github.com/svix/svix-cli#using-the-listen-command
This will allow you to easily channel requests to your public endpoint to a local port where you can run your logic against and debug it on your localhost.
Is anyone else having a difficult time getting Twitter's OAuth callback URL to hit their localhost development environment?
Apparently it has been disabled recently: http://code.google.com/p/twitter-api/issues/detail?id=534#c1
Does anyone have a workaround? I don't really want to stop my development.
Alternative 1.
Set up your hosts file (Windows) or /etc/hosts to point a live domain to your localhost IP, such as:
127.0.0.1 xyz.example
where xyz.example is your real domain.
Alternative 2.
Also, the article gives the tip to alternatively use a URL shortener service. Shorten your local URL and provide the result as callback.
Alternative 3.
Furthermore, it seems that it works to provide for example http://127.0.0.1:8080 as callback to Twitter, instead of http://localhost:8080.
I just had to do this last week. Apparently localhost doesn't work, but 127.0.0.1 does. Go figure.
This of course assumes that you are registering two apps with Twitter, one for your live www.mysite.example and another for 127.0.0.1.
Just put http://127.0.0.1:xxxx/ as the callback URL, where xxxx is the port for your framework
Yes, it was disabled because of the recent security issue that was found in OAuth. The only solution for now is to create two OAuth applications - one for production and one for development. In the development application you set your localhost callback URL instead of the live one.
Edit the callback URL: convert
http://localhost:8585/logintwitter.aspx
to
http://127.0.0.1:8585/logintwitter.aspx
This is how I did it:
Registered Callback URL:
http://127.0.0.1/Callback.aspx
OAuthTokenResponse authorizationTokens =
    OAuthUtility.GetRequestToken(ConfigSettings.getConsumerKey(),
        ConfigSettings.getConsumerSecret(),
        "http://127.0.0.1:1066/Twitter/Callback.aspx");
ConfigSettings:
public static class ConfigSettings
{
    public static String getConsumerKey()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerKey"].ToString();
    }

    public static String getConsumerSecret()
    {
        return System.Configuration.ConfigurationManager.AppSettings["ConsumerSecret"].ToString();
    }
}
Web.config:
<appSettings>
<add key="ConsumerKey" value="xxxxxxxxxxxxxxxxxxxx"/>
<add key="ConsumerSecret" value="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"/>
</appSettings>
Make sure you set the 'use dynamic ports' property of your project to 'false' and enter a static port number instead. (I used 1066.)
I hope this helps!
Use http://smackaho.st
What it does is a simple DNS association to 127.0.0.1, which allows you to bypass the filters on localhost or 127.0.0.1:
smackaho.st. 28800 IN A 127.0.0.1
So if you click on the link, it will display what you have on your local webserver (and if you don't have one, you'll get a 404). You can of course set it to any page/port you want:
http://smackaho.st:54878/twitter/callback
I was working with a Twitter callback URL on my localhost. If you are not sure how to create a virtual host (this is important), use Ampps. It is really cool and easy. In a few steps you have your own virtual host, and then every URL will work on it. For example:
Download and install Ampps.
Add a new domain (here you can set, for example, twitter.local). That means your virtual host will be http://twitter.local and it will work after step 3.
I am working on Windows, so go to your hosts file -> C:\Windows\System32\Drivers\etc\hosts and add the line: 127.0.0.1 twitter.local
Restart Ampps and you can use your callback. You can specify any URL, even if you are using some MVC framework or an htaccess URL rewrite.
Hope this helps!
Cheers.
It seems that nowadays http://127.0.0.1 has also stopped working.
A simple solution is to use http://localtest.me instead of http://localhost; it always points to 127.0.0.1, and you can even add any arbitrary subdomain to it and it will still point to 127.0.0.1.
See the website for details.
When I develop locally, I always set up a locally hosted dev name that reflects the project I'm working on. I set this up in xampp through xampp\apache\conf\extra\httpd-vhosts.conf and then also in \Windows\System32\drivers\etc\hosts.
So if I am setting up a local dev site for example.com, I would set it up as example.dev in those two files.
Short Answer: Once this is set up properly, you can simply treat this url (http://example.dev) as if it were live (rather than local) as you set up your Twitter Application.
A similar answer was given here: https://dev.twitter.com/discussions/5749
Direct quote (emphasis added):
"You can provide any valid URL with a domain name we recognize on the application details page. OAuth 1.0a requires you to send an oauth_callback value on the request token step of the flow and we'll accept a dynamic localhost-based callback on that step."
This worked like a charm for me. Hope this helps.
It can be done very conveniently with Fiddler:
Open menu Tools > HOSTS...
Insert a line like 127.0.0.1 your-production-domain.com, make sure that "Enable remapping of requests..." is checked. Don't forget to press Save.
If access to your real production server is needed, simply exit Fiddler or disable remapping.
Starting Fiddler again will turn on remapping (if it is checked).
A pleasant bonus is that you can specify a custom port, like this:
127.0.0.1:3000 your-production-domain.com (it would be impossible to achieve this via the hosts file). Also, instead of IP you can use any domain name (e.g., localhost).
This way, it is possible (but not necessary) to register your Twitter app only once (provided that you don't mind using the same keys for local development and production).
Edit this function in TwitterAPIExchange.php at line 180:
public function performRequest($return = true)
{
    if (!is_bool($return))
    {
        throw new Exception('performRequest parameter must be true or false');
    }

    $header = array($this->buildAuthorizationHeader($this->oauth), 'Expect:');

    $getfield = $this->getGetfield();
    $postfields = $this->getPostfields();

    $options = array(
        CURLOPT_HTTPHEADER => $header,
        CURLOPT_HEADER => false,
        CURLOPT_URL => $this->url,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => false,
        CURLOPT_SSL_VERIFYHOST => false
    );

    if (!is_null($postfields))
    {
        $options[CURLOPT_POSTFIELDS] = $postfields;
    }
    else
    {
        if ($getfield !== '')
        {
            $options[CURLOPT_URL] .= $getfield;
        }
    }

    $feed = curl_init();
    curl_setopt_array($feed, $options);
    $json = curl_exec($feed);
    curl_close($feed);

    if ($return) { return $json; }
}
I had the same challenge and I was not able to give localhost as a valid callback URL. So I created a simple domain to help us developers out:
https://tolocalhost.com
It will redirect any path to your localhost domain and port you need. Hope it can be of use to other developers.
Set the callback URL in the Twitter app to 127.0.0.1:3000,
and set WEBrick to bind to 127.0.0.1 instead of 0.0.0.0.
Command: rails s -b 127.0.0.1
Looks like Twitter now allows localhost alongside whatever you have in the Callback URL settings, so long as there is a value there.
I struggled with this and followed a dozen solutions; in the end, all I had to do to work with any SSL APIs on localhost was:
Download the cacert.pem file.
In php.ini, un-comment and change:
curl.cainfo = "c:/wamp/bin/php/php5.5.12/cacert.pem"
You can find where your php.ini file is on your machine by running php --ini in your CLI
I placed my cacert.pem in the same directory as php.ini for ease.
These are the steps that worked for me to get Twitter working with a local application on my laptop:
Go to apps.twitter.com.
Enter the name, app description and your site URL.
Note: for localhost:8000, use 127.0.0.1:8000, since the former will not work.
Enter the callback URL matching the callback URL defined in TWITTER_REDIRECT_URI in your application.
Note: e.g. http://127.0.0.1/login/twitter/callback (localhost will not work).
Important: enter both the "privacy policy" and "terms of use" URLs if you wish to request the user's email address.
Check the agree-to-terms checkbox.
Click [Create Your Twitter Application].
Switch to the [Keys and Access Tokens] tab at the top.
Copy the "Consumer Key (API Key)" and "Consumer Secret (API Secret)" to TWITTER_KEY and TWITTER_SECRET in your application.
Click the "Permissions" tab and set it appropriately to "read only", "read and write", or "read, write and direct message" (use the least intrusive option needed for your application; for just an OAuth login, "read only" is sufficient).
Under "Additional Permissions", check the "request email addresses from users" checkbox if you wish for the user's email address to be returned with the OAuth login data (in most cases, check yes).