From https://groups.google.com/forum/#!topic/cocoapods/i7dwMV4EqZ8
I'm a bit of a CocoaPods newbie and have never created a Pod before, but I'm looking into how one could automate publishing of public podspecs from within my organization's continuous deployment infrastructure, which has some properties that make using Trunk difficult. They are:
The account/owner that does the publishing is a non-human system account.
The aforementioned system account does not have a home directory on the machine that it runs on, which will hamper usage of a .netrc file (what Trunk uses for storing session tokens). This can probably be worked around by creating a fake home directory and setting the HOME environment variable to it, since that's what the netrc gem looks for (a rough sketch follows this list).
The machine that does the publishing is assumed to have "no state," meaning the publish could always occur on a different machine that has never been registered.
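For what it's worth, a minimal sketch of that fake-home workaround might look like the following. The directory path, email address, and the "retrieve-secret" command are hypothetical placeholders for whatever our secure-storage tooling would actually be:

    # The netrc gem resolves ~/.netrc from $HOME, so point HOME at a throwaway directory.
    export HOME=/tmp/fake-home
    mkdir -p "$HOME"
    # "retrieve-secret" is a stand-in for our secure-storage CLI; the login email is an example.
    TOKEN="$(retrieve-secret cocoapods-trunk-token)"
    printf 'machine trunk.cocoapods.org\n  login ci-bot@example.org\n  password %s\n' "$TOKEN" > "$HOME/.netrc"
    chmod 0600 "$HOME/.netrc"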
I've thought about creating the initial .netrc for this account, storing the token from it in our secure storage, and retrieving it to build a .netrc file when publishing. There are a few things about this that I don't think work well:
The tokens returned by Trunk appear to expire, which would mean having a human to periodically create a new token and update our secure storage.
Trunk sessions appear to track the IP address of the machine they were created from, which I assume publishing requests are validated against, failing if the IP addresses do not match. Because publishing takes place on different machines, this would require creating a new session every time this publishing automation is run. Practically speaking, I would hope that these machines share the same external IP address, but that's not an assumption I can live with.
The next alternative I can think of is much more complex: have the publish automation register a new session, wait for the registration email, then verify the session with the link in the email before proceeding with the publish. I don't know off the top of my head how to actually accomplish this, but I think it's a viable, if time-consuming to implement, solution. Any suggestions on how to accomplish that are welcome.
Are there any alternatives for publishing public podspecs? It seems to me that Trunk doesn't really support this use case; it'd be great if there were a way to accommodate it in Trunk.
Not until now.
But I opened an issue asking whether it would be possible to automate pod trunk push from a CI service like Travis CI.
A developer then merged a commit ("Allow specifying a Trunk token via the COCOAPODS_TRUNK_TOKEN environment variable") that makes automated publishing possible.
I haven't tried it yet, but it should allow pushing automatically.
[Added] After writing the above, I tried it and it worked well.
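A minimal sketch of what such a CI step might look like follows; the secret variable and the podspec name are placeholders, while COCOAPODS_TRUNK_TOKEN is the environment variable named in the merged commit:

    # Export the Trunk token from the CI service's secret store, then push.
    # CI_SECRET_COCOAPODS_TRUNK_TOKEN and MyPod.podspec are placeholders.
    export COCOAPODS_TRUNK_TOKEN="${CI_SECRET_COCOAPODS_TRUNK_TOKEN}"
    pod trunk push MyPod.podspec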
Related
I have two variations of a site based on a primary enrollment site. A demo of the primary enrollment site is currently set up and running on a remote server using Docker. I'm trying to figure out what steps are needed to move both enrollment site variants, A and B, up to the remote server for testing and review purposes.
The first variation (branch A) was built from the primary app as master, and the second (branch B) was built as a very small variation on the initial variant, A (think a single file updated from branch A).
So far I understand that I'll have to set up a unique database for both A and B for Docker to store app data depending on which enrollment site is running (e.g., enroll-db-A and enroll-db-B). Running both sites from this host will also require specifying a unique port in the Dockerfile and docker-compose file, since the plan is to keep the primary demo site available through the server's default port.
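One way this might be laid out (all directory names, ports, and database names below are purely illustrative) is to keep each variant's compose file in its own directory and start each stack under its own compose project name, so the containers, volumes, and host ports don't collide with the primary demo:

    # Illustrative only: enroll/ is the primary app on the default port, while
    # enroll-variant-A/ and enroll-variant-B/ each hold a compose file that
    # points at its own database (enroll-db-A / enroll-db-B) and host port.
    docker-compose -p enroll-a -f enroll-variant-A/docker-compose.yml up -d
    docker-compose -p enroll-b -f enroll-variant-B/docker-compose.yml up -d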
What I'm confused about is how to actually move the files needed for both variants up to the remote server. Because I obviously want to minimize the number of files transferred to the remote to serve all our sites, and because variants A and B largely depend on files from the primary enrollment app root, is it sufficient to simply move all the updated and necessary config files for A and B into new directories on the remote server, with the primary enrollment site's directory one level up as the parent of each variant directory?
To paraphrase my manager: there's probably some way to make this work, but it's not worthwhile.
My concern in posting this mostly had to do with the apparent number of redundant files that would be pushed up to the remote web server after creating two simple variants of an original. For our demonstration purposes, having two highly similar repos in addition to the original base loaded onto the web server is not a significant concern.
So, for now, my need to have this question answered can be deferred indefinitely.
There are so many posts about this, and being inexperienced in Git doesn't help to get a good grip on this.
I just joined a new company that doesn't have CI at all, so I jumped on the opportunity to create a proof of concept (using Jenkins locally on my Windows box for now, until I get a dedicated server for it). I've used and semi-configured Jenkins in the past, using SVN, and it was simple and fast to get it working. In this company, they don't use SVN, only GitLab (I believe it's private - we have our own site, not gitlab.com), and nothing works for me.
I followed a few tutorials, but mainly this seemed like the one that would meet my needs. It didn't work (the reasons and symptoms are probably worth a post of their own).
When I look at Gitlab Hook plugin in Jenkins, I see a big red warning saying it is not safe ("Gitlab API token stored and displayed in plain text").
So my question, for this POC that I am working on: how serious is this warning? Should I avoid this plugin, and therefore this method altogether, because of it?
And while I'm at it, I might also throw in an additional general question to open up my options here ... If I want Jenkins to work with GitLab (meaning I check in something and it triggers a build), do I absolutely need to use the SSH method, or could it work with HTTPS as well?
Thank you.
This is indeed SECURITY-263 / CVE-2018-1000196
Gitlab Hook Plugin does not encrypt the Gitlab API token used to access Gitlab. This can be used by users with master file system access to obtain the GitLab credentials.
Additionally, the Gitlab API token round-trips in its plaintext form, and is displayed in a regular text field to users with Overall/Administer permission. This exposes the API token to people viewing a Jenkins administrator’s screen, browser extensions, cross-site scripting vulnerabilities, etc.
As of publication of this advisory, there is no fix.
So:
how serious is this warning?
Serious, but exploiting it requires access to the Jenkins server filesystem or Jenkins administrator-level permissions. So that risk can be documented, acknowledged and, for now, set aside, provided mitigation steps are in place, i.e.:
access to the Jenkins server is properly monitored
the list of Jenkins admin accounts is properly and regularly reviewed.
do I absolutely need to use the SSH method, or could it work with HTTPS as well?
You can use HTTPS for accessing GitLab repositories in a Jenkins job.
But for the GitLab hook plugin, SSH remains the recommended way, considering you would use a token (instead of a user account name/password) that you can revoke at any time.
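As a rough illustration of the HTTPS option for the job itself (the hostname, username, and token below are placeholders, and in a real Jenkins job the token would live in the Jenkins credentials store rather than in the URL):

    # Clone a GitLab repository over HTTPS using a revocable personal access
    # token instead of an account password. All names here are placeholders.
    git clone https://ci-user:PERSONAL_ACCESS_TOKEN@gitlab.example.com/my-group/my-repo.git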
I'm proposing a SaaS solution to a prospective client to avoid the need for local installation and upgrades. The client uploads their input data as needed and downloads the outputs, so data backup and maintenance is not an issue, but continuity of the online software service is a concern for them.
Code escrow would appear to be overkill here and probably of little value. I was wondering whether there is an option along the lines of providing a snapshot image of a cloud server that includes a working version of the app, and for that to be in the client's possession for use in an emergency where they can no longer access the software.
This would need to be as close to a point-and-click solution as possible - say, a one-page document with a few steps that a non-web-savvy IT person can follow - for starting up the backup server image and being able to use the app. If I were to create a private AWS EBS snapshot / AMI that includes a working version of the application, and they created an AWS account for themselves, might they be able to kick that off easily enough?
Update: the app is on Heroku at the moment, so hopefully it'd be pretty straightforward to get it running on Amazon EC2.
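If the AMI route pans out, the emergency instructions could plausibly reduce to a single launch command along these lines (every identifier below is a placeholder, and the AMI would first have to be shared with the client's AWS account):

    # Hypothetical one-step recovery: launch an instance from the shared
    # application AMI. Image ID, key pair and security group are placeholders.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.small \
        --key-name client-recovery-key \
        --security-group-ids sg-0123456789abcdef0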
Host the app at any major PaaS provider, such as Engine Yard or Heroku. Check the code into a private GitHub repository that you can assign to the client as the owner. That way they have access to the source code and can create a new instance quickly using the repository as the source.
I don't see the need to create an entire service mirror for a Rails app, unless there are specific configuration needs that can't be contained in the project or handled through Capistrano.
I don't know squat about TFS, other than as a user who has performed simple check in/outs.
I just installed it locally and would like to do joint development with a friend.
I was having trouble making my TFS web site on port 8080 visible (the whole scoop is here, if you're interested) and I wonder if it could be related to the fact that TFS is probably using Windows Authentication to identify the user.
Can TFS be set up to use forms authentication?
We probably need to set up a VPN, though that's a learning curve too.
To use TFS, do our machines have to belong to a domain?
We're not admin types (though he is better at it than I am), and I would be interested in any feedback or advice on which path is likely to pan out best. I already got Axosoft OnTime working in this type of environment and it suits us well, but I am tempted by all the bells & whistles of TFS and the ability to tie tracked bug items to code changes.
As far as finding a good way to share code goes, do sites like SourceForge allow one to keep code secure among members only?
It does not need to be installed in a domain. I'm running TFS at home within a workgroup on a virtual machine.
Create a user on the machine that hosts TFS. Let's assume this machine is named TFS-MACHINE. Grant that user appropriate Team and Project rights.
When connecting to TFS from the remote machine, the user should be prompted for a user ID and password. They should use a user ID of TFS-MACHINE\username and the appropriate password.
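For example, creating that local account on TFS-MACHINE can be done from an elevated command prompt; the username below is just an example:

    REM Run this on TFS-MACHINE from an elevated prompt; "remote-dev" is an
    REM example username, and the * prompts for the password interactively.
    net user remote-dev * /add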
Regarding external places to host code: if you're looking for cheap/free, you can look at something like Unfuddle, which supports SVN and Git.
If you're looking for hosted TFS, the only place I've been able to find thus far is SaaS Made Easy, but they can start getting a bit expensive, depending on the number of users you have.
Keep in mind if you're going to host locally that you'll still need to do things like periodic backups, etc.
I'm currently tasked with setting up a TFS server for a client. The TFS will mainly be accessed by local (on-site) users through the internal network... Easy!
But what about the few remote users we have? Should they connect via VPN or is it better to make the TFS server public and have the users connect over SSL and provide username and password to the TFS?
Do you have any suggestions on how these solutions will perform compared to each other?
VPN is the way to go if you want the optimal TFS experience with TFS 2005 or TFS 2008. While TFS mainly uses web service based protocols that can all go over SSL, there are a few small things that will not work unless you have proper network access. For example:
Viewing the Build Log (unless worked around)
Access Team Build drops
Publishing Test Results
As well as a few other little niggles. Going the VPN route will also mean that your TFS installation varies less from a standard base TFS installation, which gives you some peace of mind that you won't run into any problems when it comes to upgrading to a new version, applying service packs, etc. (or at least any problems you run into will have been run into by many before :-) ). Going the SSL route, you are treading a less worn path - though obviously plenty of people do run it that way, including CodePlex and all the commercial companies that provide a hosted TFS installation.
The downside of VPN is that you are usually granting users access to an entire section of your network (unless you are running TFS in its own mini private network or something). If you go down the SSL route, be sure to properly test creating new team projects, as this is easy to break and you might not realise it until you try to create one either inside or outside the network.
For additional information, see Chapter 17 of the TFS Guide.
I'd start with a few questions: does the client have a VPN? And are the remote consumers on this VPN already? How secure does this need to be?
(In our case, we have lots of outside vendors we don't want on our VPN, so our source control is publicly accessible with SSL)
When I did it, I used a VPN. It was easier to set up, and it made sure that no one could even see the machine without being authenticated via the VPN - this was obviously way better from a security standpoint, which trumped any performance benefit we would have got from using SSL, if there even was one...
My previous experience with TFS was in an environment where we had a team of developers staffed out at client sites all over the city. In many situations we still accessed our TFS instance instead of something at the client site. We used SSL with public access to TFS. It worked very well for us.