Can libgit2sharp rely on the installed git global configuration provider?

I'm wiring up some LibGit2Sharp code to VSO, so I need to use alternate credentials to access it. (NTLM won't work) I don't want to have to manage these cleartext credentials - I'm already using git-credential-winstore to manage them, and I'm happy logging onto the box if I ever need to update those creds.
I see that I can pass in DefaultCredentials and UsernamePassword credentials - is there any way I can get it to fetch the creds from the global git cred store that's already configured on the machine?

Talking to external programs is outside of the scope of libgit2, so it won't talk to git's credential helper. It's considered to be the tool writer's responsibility to retrieve the credentials from the user, wherever they may be.
The credential store is a helper for the git command-line tool to integrate with whatever credential storage you have in your environment, while keeping the logic outside of the main tool, which needs to run in many different places. It is not something that's core to a repository, but a helper for the user interface.
When using libgit2, you are the one writing the tool that users interact with, and thus you are the one who knows how best to get to the environment-specific storage. What libgit2 wants to know is what exactly it should answer to the authentication challenge, as any kind of guessing on its part is going to make everyone's lives harder.
Since the Windows credential storage is accessed through an API, it's not out of the question to support some convenience functions to transform from that credential storage into what libgit2's callback wants, but it's not something where libgit2 can easily take the initiative.
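For illustration, here is a minimal C# sketch of such a convenience bridge: it reads a generic entry from the Windows credential store via CredRead and converts it into what libgit2sharp's callback wants. The "git:https://host" target-name format and the UTF-16 blob encoding are assumptions about how git-credential-winstore writes its entries, not anything libgit2sharp guarantees:

    using System;
    using System.Runtime.InteropServices;
    using LibGit2Sharp;

    static class WinCredStore
    {
        private const int CRED_TYPE_GENERIC = 1;

        [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        private static extern bool CredRead(string target, int type, int flags, out IntPtr credPtr);

        [DllImport("advapi32.dll")]
        private static extern void CredFree(IntPtr credPtr);

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        private struct CREDENTIAL
        {
            public int Flags;
            public int Type;
            public string TargetName;
            public string Comment;
            public System.Runtime.InteropServices.ComTypes.FILETIME LastWritten;
            public int CredentialBlobSize;
            public IntPtr CredentialBlob;
            public int Persist;
            public int AttributeCount;
            public IntPtr Attributes;
            public string TargetAlias;
            public string UserName;
        }

        // Look up a generic credential and convert it into the shape
        // libgit2sharp's CredentialsProvider callback wants.
        public static UsernamePasswordCredentials Read(string target)
        {
            if (!CredRead(target, CRED_TYPE_GENERIC, 0, out IntPtr ptr))
                return null; // nothing stored; you would prompt the user instead
            try
            {
                var c = (CREDENTIAL)Marshal.PtrToStructure(ptr, typeof(CREDENTIAL));
                // Assumption: the blob is a UTF-16 string, as winstore writes it.
                string password = c.CredentialBlobSize > 0
                    ? Marshal.PtrToStringUni(c.CredentialBlob, c.CredentialBlobSize / 2)
                    : string.Empty;
                return new UsernamePasswordCredentials { Username = c.UserName, Password = password };
            }
            finally
            {
                CredFree(ptr);
            }
        }
    }

You would then answer the challenge yourself:

    var fetchOptions = new FetchOptions
    {
        // libgit2 asks *you* what to answer; here we answer with whatever
        // the Windows credential store already has for this remote.
        CredentialsProvider = (url, user, types) =>
            WinCredStore.Read("git:" + new Uri(url).GetLeftPart(UriPartial.Authority))
    };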

Related

Simplest way to share credentials across multiple servers

I know this is a common problem, but after searching quite a bit I don't have a solution that I like. I am creating a simple Service Oriented Architecture example for my students. We are using Digital Ocean. There will be three separate servers that need to find out each other's IP addresses and various secret tokens (e.g. Twitter access tokens). I want to illustrate the idea that one does not put such info into the code repo.
My plan is to use environment variables in all the strategic spots. But how to get those environment variables set up "automatically" and "maintainably"?
I could have a private GitHub repo with just one file and remember to clone it and update it on all three servers
I don't want to get into Capistrano and similar complicated things
I thought of having a gist with the information in it and using HTTP to grab it, but gists are public. OK, I make the gist private, but now I have to authenticate to that gist and I am back where I started.
Suggestions?
Whatever path you choose, you will have to provide a means of authenticating access to the secrets.
Either "I've written the Twitter access token on the white board"
Or "the secrets are stored in a service and here's how you authenticate access to them".
For the latter, a private Git repo is not the worst solution. You could provide access to the repo to your students(' accounts). They would have a two-step process: acquire the token from the GitHub repo, then apply the token to the Digital Ocean resource.
Alternatives exist, including HashiCorp's Vault (see tutorial). I recently used Google's Secret Manager too. These are third-party services rather than features provided by Digital Ocean directly.
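As an illustration of the Vault route, here is a minimal C# sketch reading one secret over Vault's KV v2 HTTP API; the address, token, and the "secret/data/twitter" path are placeholders you would provision per server:

    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    class VaultDemo
    {
        static async Task Main()
        {
            // Both values are injected via the environment, never committed.
            string addr  = Environment.GetEnvironmentVariable("VAULT_ADDR");  // e.g. https://vault.example.com:8200
            string token = Environment.GetEnvironmentVariable("VAULT_TOKEN");

            using var http = new HttpClient();
            http.DefaultRequestHeaders.Add("X-Vault-Token", token);

            // KV v2 reads live under /v1/<mount>/data/<path>.
            string json = await http.GetStringAsync($"{addr}/v1/secret/data/twitter");
            using var doc = JsonDocument.Parse(json);
            string twitterToken = doc.RootElement
                .GetProperty("data").GetProperty("data")   // KV v2 nests the payload
                .GetProperty("access_token").GetString();

            Console.WriteLine($"Fetched a token of length {twitterToken?.Length ?? 0}");
        }
    }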
I suspect there's an implicit question about simplifying the process too, which would entail delegated auth. There may be a way (though I'm unaware of it) to delegate auth to Digital Ocean such that you can provide a mechanism (similar to the above examples) that's constrained to specific Digital Ocean accounts.

Jenkins and GitLab -- Gitlab Hook plugin is the right choice?

There are so many posts about this, and being inexperienced with Git doesn't help me get a good grip on it.
I just joined a new company that doesn't have CI at all, so I jumped on the opportunity to create a proof of concept (using Jenkins locally on my Windows box for now, until I get a dedicated server for it). I've used and semi-configured Jenkins in the past, using SVN, and it was so simple and fast to get it working. In this company, they don't use SVN, only GitLab (I believe it's private - we have our own site, not gitlab.com), and nothing works for me.
I followed a few tutorials, but mainly this seemed like the one that meets my needs. It didn't work (the reasons and symptoms are probably worth a post of their own).
When I look at Gitlab Hook plugin in Jenkins, I see a big red warning saying it is not safe ("Gitlab API token stored and displayed in plain text").
So my question, for this POC that I am working on: how serious is this warning? Should I avoid this plugin, and therefore this method altogether, because of it?
And while I'm at it, I might also throw in an additional general question to open up my options here... If I want Jenkins to work with GitLab (meaning, I check in something and it triggers a build), do I absolutely need to use the SSH method, or could it work with HTTPS as well?
Thank you.
This is indeed SECURITY-263 / CVE-2018-1000196
Gitlab Hook Plugin does not encrypt the Gitlab API token used to access Gitlab. This can be used by users with master file system access to obtain GitLab credentials.
Additionally, the Gitlab API token round-trips in its plaintext form, and is displayed in a regular text field to users with Overall/Administer permission. This exposes the API token to people viewing a Jenkins administrator’s screen, browser extensions, cross-site scripting vulnerabilities, etc.
As of publication of this advisory, there is no fix.
So:
how serious is this warning?
Serious, but it does require access to the Jenkins server filesystem, or it requires Jenkins administration level. So that risk can be documented, acknowledged and, for now, set aside, provided mitigation steps are in place, i.e.:
access to the Jenkins server is properly monitored
the list of Jenkins admin accounts is properly and regularly reviewed.
do I absolutely need to use the SSH method, or it could work with HTTPS as well?
You can use HTTPS for accessing GitLab repositories in a Jenkins job.
But for the GitLab Hook plugin, SSH remains the recommended way, considering you would use a token (instead of a user account name/password) that you can revoke at any time.
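For illustration, the two flavors side by side; the host, group/project path, and token are placeholders, and the oauth2: username is one form GitLab accepts for token authentication over HTTPS:

    # HTTPS, authenticating with a revocable personal access token
    git clone https://oauth2:YOUR_TOKEN@gitlab.example.com/group/project.git

    # SSH, authenticating with a key pair registered in GitLab
    git clone git@gitlab.example.com:group/project.git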

ec2 roles vs ec2 roles with temporary keys for s3 access

So I have a standard Rails app running on EC2 that needs access to S3. I am currently doing it with long-term access keys, but rotating keys is a pain, and I would like to move away from this. It seems I have two alternative options:
One, attaching a role with the proper permissions to access the S3 bucket to the EC2 instance. This seems easy to set up, yet not having any access keys seems like a bit of a security threat. If someone is able to access the server, it would be very difficult to stop their access to S3. Example
Two, I can 'assume the role' using the Ruby SDK and the STS classes to get temporary access keys from the role, and use them in the Rails application. I am pretty confused about how to set this up, but I could probably figure it out. It seems like a very secure method, however, as even if someone gets access to your server, the temporary access keys make it considerably harder to access your S3 data over the long term. General methodology of this setup.
I guess my main question is which should I go with? Which is the industry standard nowadays? Does anyone have experience setting up STS?
Sincere thanks for the help and any further understanding on this issue!
All of the methods in your question require AWS access keys. These keys may not be obvious, but they are there. There is not much that you can do to stop someone once they have access inside the EC2 instance, other than terminating the instance. (There are other options, but those are for forensics.)
You are currently storing long-term keys on your instance. This is strongly NOT recommended. The recommended best practice is to use IAM roles and assign a role with only the required permissions. The AWS SDKs will get the credentials from the instance's metadata.
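For instance, with a role attached to the instance, a minimal sketch needs no keys at all; the bucket and key are placeholders, and the same default-chain behavior applies to the Ruby SDK mentioned in the question (shown here with the .NET SDK):

    using System;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;

    class S3ViaInstanceRole
    {
        static async Task Main()
        {
            // No credentials supplied anywhere: the SDK's default chain falls
            // through to the EC2 instance metadata service and picks up (and
            // auto-refreshes) the role's temporary credentials.
            using var s3 = new AmazonS3Client();

            using GetObjectResponse response =
                await s3.GetObjectAsync("my-bucket", "path/to/object"); // placeholders
            Console.WriteLine($"Fetched {response.ContentLength} bytes");
        }
    }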
You are giving some thought to using STS. However, you need credentials to call STS to obtain temporary credentials. STS is an excellent service, but it is designed for handing out short-term temporary credentials to others - such as the case where your web server creates credentials via STS to hand to your users, for limited use cases such as accessing files on S3 or sending an email. The fault in your thinking about STS is that once the bad guy has access to your server, he will just steal the keys that you call STS with, thereby defeating the point of calling STS.
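To make that intended STS pattern concrete, here is a hedged sketch of a server minting short-lived, down-scoped credentials for someone else; the role ARN is a placeholder, and the server's own credentials come from its instance role as above:

    using System;
    using System.Threading.Tasks;
    using Amazon.SecurityToken;
    using Amazon.SecurityToken.Model;

    class StsHandout
    {
        static async Task Main()
        {
            using var sts = new AmazonSecurityTokenServiceClient();

            // Mint temporary credentials scoped to a narrow role, suitable
            // for handing to an end user rather than keeping on the server.
            AssumeRoleResponse resp = await sts.AssumeRoleAsync(new AssumeRoleRequest
            {
                RoleArn = "arn:aws:iam::123456789012:role/S3ReadOnlyForUsers", // placeholder
                RoleSessionName = "user-42",
                DurationSeconds = 900 // 15 minutes, the minimum allowed
            });

            Credentials tmp = resp.Credentials;
            Console.WriteLine($"Temporary key {tmp.AccessKeyId} expires at {tmp.Expiration}");
        }
    }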
In summary, follow best practices for securing your server such as NACLs, security groups, least privilege, minimum installed software, etc. Then use IAM Roles and assign the minimum privileges to your EC2 instance. Don't forget the value of always backing up your data to a location that your access keys CANNOT access.

API keys and secrets used in iOS app - where to store them?

I'm developing for iOS and I need to make requests to certain APIs using an API key and a secret. However, I wouldn't like for it to be exposed in my source code and have the secret compromised when I push to my repository.
What is the best practice for this case? Write it in a separate file which I'll include in .gitignore?
Thanks
Write it in a separate file which I'll include in .gitignore?
No, don't write it ever.
That means:
you don't write that secret within your repo (no need to gitignore it, or to worry about adding/committing/pushing it by mistake)
you don't write it anywhere on your local drive (no need to worry about your computer being stolen with that "secret" on it)
Store in your repo a script able to fetch that secret from an external source (from outside the git repo) and load it in memory.
This is similar to a git credential-helper process: that script would launch a process listening on localhost:port in order to serve that "secret" to you whenever you need it, in the current session only.
Once the session is done, there is no trace left.
And that is the best practice for managing secret data.
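As a rough C# sketch of such a session-scoped helper (the port and the environment-variable name are placeholders, and a real helper should also authenticate its local callers):

    using System;
    using System.Net;
    using System.Text;

    class SecretHelper
    {
        static void Main()
        {
            // The secret is fetched once from an external source (an
            // environment variable here, as a stand-in) and lives only
            // in this process's memory, never on disk.
            string secret = Environment.GetEnvironmentVariable("MYAPP_API_SECRET")
                ?? throw new InvalidOperationException("secret not provisioned");

            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8123/"); // placeholder port
            listener.Start();

            // Serve the secret for the current session only; once this
            // process exits, there is no trace left.
            while (true)
            {
                HttpListenerContext ctx = listener.GetContext();
                byte[] body = Encoding.UTF8.GetBytes(secret);
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }
    }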
You can automatically trigger the fetch script itself on git checkout, if you declare it in a .gitattributes file as a content filter:
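For example (the filter name "injectSecret" and the file paths are placeholders):

    # .gitattributes
    Config/ApiKeys.plist filter=injectSecret

    # declared once per clone:
    git config filter.injectSecret.smudge /path/to/fetch_secret_script
    git config filter.injectSecret.clean  /path/to/strip_secret_script

The smudge script runs on checkout and can inject the secret into the working-tree copy of the file; the clean script strips it again on commit, so the secret never enters the repository history.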
This is a very old question, but if anyone is finding this through Google, I would suggest you try CloudKit for storing any app secrets (API keys, OAuth secrets). Only your app can access its app container, and communication between Apple and your app is secure.
You can check it out here.

Using OAuth in free/open source software

I'm now reading some introductory materials about OAuth, with the idea of using it in a piece of free software.
And I read this:
The consumer secret must never be revealed to anyone. DO NOT include it in any requests, show it in any code samples (including open source) or in any way reveal it.
If I am writing a free client for a specific website using OAuth, then I have to include the consumer secret in the source code, otherwise building from source would make the software unusable. However, as it is said, the secret should not be released along with the source.
I completely understand the security considerations, but, how can I solve this dilemma, and use OAuth in free software?
I thought of using an external website as a proxy for authentication, but this is much too complicated. Do you have better ideas?
Edit:
Some clients, like Gwibber, also use OAuth, but I haven't checked their code.
I'm not sure I get the problem: can't you develop the code as open source and retrieve the consumer secret from a configuration file, or maybe leave it in a special table in the database? That way the code will not contain the consumer secret (and as such will be "shareable" as open source), but the consumer secret will still be accessible to the application.
Maybe having some more details on the intended platform would help, as on some platforms (I'm thinking of Tomcat right now) parameters such as this one can be included in server configuration files.
If it's PHP, I know of an open source project (Moodle) that keeps a PHP file (config.php) containing definitions of all the important configuration values, and references this file from all pages to get at those definitions. It is the responsibility of the administrator to fill in the contents of this file with the values particular to that installation. In fact, if the application sees that the file is missing (usually on the first access to the site), it will redirect to a wizard where the administrator can fill in the contents in a more user-friendly way. In this case the consumer secret would be one of these configuration values, and as such would be present in the "production" code, but not in the "distributable" form of the code.
I personally like the idea of storing that value in the database, in a table designed for it and possibly for other parameters, as the code need not be changed. An installation wizard can be presented here as well, in case the values do not exist.
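For instance, a minimal C# sketch of the configuration-file variant described above; the MYAPP_CONFIG variable, the /etc/myapp/oauth.json path, and the key names are placeholders for whatever the installation step provisions:

    using System;
    using System.IO;
    using System.Text.Json;

    static class OAuthConfig
    {
        // Loads the consumer key/secret from a file the administrator fills
        // in during installation; the distributable source never contains them.
        public static (string Key, string Secret) Load()
        {
            string path = Environment.GetEnvironmentVariable("MYAPP_CONFIG")
                ?? "/etc/myapp/oauth.json"; // placeholder location, outside the repo
            using JsonDocument doc = JsonDocument.Parse(File.ReadAllText(path));
            return (doc.RootElement.GetProperty("consumer_key").GetString(),
                    doc.RootElement.GetProperty("consumer_secret").GetString());
        }
    }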
Does this solve your problem?
If your service provider is a webapp, your server needs consumer signup pages that provide the consumer secret as the user signs up their consumer. This is the same process Twitter applications go through. Try signing up there and look at their workflow; you'll see all the steps.
If your software is peer-to-peer, each application needs to be both a service provider and a consumer. The Jira and Confluence use cases below outline that instance.
In one of my comments, I mention https://twitter.com/apps/new as the location of where Twitter app developers generate a consumer secret. How you would make such a page depends on the system architecture. If all the consumers will be talking to one server, that one server will have to have a page like https://twitter.com/apps/new. If there are multiple servers (i.e. federations of clients), each federation will need one server with this page.
Another example to consider is how Atlassian apps use OAuth. They are peer-to-peer. Setting up Jira and Confluence to talk to one another still has a setup page in each app, but it is nowhere near as complex as https://twitter.com/apps/new. Both apps are consumers and service providers at the same time. Visiting the setup in each app allows that app to be set up as a service provider with a one-way trust on the other app, as consumer. To make a two-way trust, the user must visit both app's service provider setup page and tell it the URL of the other app.
