Is there a way to clear a Relay Modern Environment?

Based on this: https://github.com/facebook/relay/blob/e5e07c3f12e5de1be584325498b94ed91a2580ec/docs/Modern-RelayEnvironment.md, is there a way to clear Relay's Environment?

Related

What is an environment in programming?

This may seem like a simple question, but I keep getting caught up in it. The word environment seems to be thrown around a lot, but I can't find a definitive answer to my question. Environment variables and software development environments both use the word environment. The Java Runtime Environment also uses it, but does it mean the same thing in every case? Is this word even that significant?
Hope this makes sense.

What is a good solution for keeping variables in sync - Terraform, Chef, Jenkins, Octopus Deploy

Let's say I've got a flashy CI and CD pipeline built using the tools I've mentioned in the question title.
Let's also say that there are config files and other things which are parameterized with variables in all of those systems. Variables that, in fact, sometimes need to match across them all. Hostnames, ports, connection strings etc...
How can I, or what tools are available, to keep those variables in a single place or at the very least, make sure they stay in sync with each other when they need to?
My best thinking so far is to write some automated tests, but since the variables are generally stored in the systems themselves, that feels a bit clunky. I could store the variables in additional config files in source control, but then some of them are secure variables (production passwords, etc.) and that would be a bit of a compromise too.
There's a ton of these variables and I can see it becoming a real issue the more I head down this road.
Has anyone solved this yet?
Your configuration management system (Chef) should be in charge of plumbing all of the variables into your CI system. Terraform state and other databases will have to be persisted separately, but your CM system should be informing the other tools "where the database for the current job lives".
There are a lot of different and reasonable ways to validate variables at different layers (Chef's Policyfile feature tries to make it a build-time concern, for instance), but the most important thing you can do is try to minimize differences between environments. Lots of people start out with a clean series of deployment stages, and three years later each one is so wild, woolly, and weird that the stages have lost most of their utility. Being a consistent advocate for "everything is the same everywhere, even though it's harder and more expensive" is a vital political job.
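One concrete shape for a single source of truth, sketched in Ruby: keep the shared values in one YAML document and generate per-tool fragments (a Terraform `.tfvars` and a Chef attributes file) from it. All file names, keys, and values below are hypothetical, and secrets would still live in a vault rather than in this file.

```ruby
require "yaml"

# Hypothetical single source of truth for shared variables.
# In practice this would be a checked-in file, e.g. shared_vars.yml.
shared = YAML.safe_load(<<~YML)
  db_host: db01.internal.example.com
  db_port: 5432
  app_url: https://app.example.com
YML

# Render a Terraform .tfvars fragment from the shared values.
tfvars = shared.map { |k, v| %(#{k} = #{v.to_s.inspect}) }.join("\n")

# Render a Chef attributes fragment from the same values.
chef_attrs = shared.map { |k, v| %(default['app']['#{k}'] = #{v.to_s.inspect}) }.join("\n")

puts tfvars
puts chef_attrs
```

A generator like this runs in CI, so drift between the tools becomes a build failure rather than a production surprise.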

Multi-server Ruby on Rails Production Logging

I currently have an RoR app in production across four application servers with independent logs per instance. What's the best way to aggregate the logging to a common location without resorting to syslog?
I wonder if there is a flavor of log4xxx for Ruby; that could be really cool. If there is, then perhaps you could centralize the streams from different instances into one place. It shouldn't be too complicated for a simple implementation, or perhaps use an existing log-aggregation tool.
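A minimal sketch of the "roll your own" half of that, using only Ruby's stdlib Logger: tag every line with the host and process ID so that streams from the four servers can be merged into one location and still be traced back to their source. The log line contents here are hypothetical; the destination is a StringIO stand-in where a shared file or socket would go.

```ruby
require "logger"
require "socket"
require "stringio"
require "time"

# Destination stand-in; in production this could be a shared file,
# a pipe to a collector, or a TCP socket to an aggregation service.
buffer = StringIO.new

logger = Logger.new(buffer)

# Tag each line with timestamp, hostname, and pid so merged logs
# from several app servers remain attributable.
logger.formatter = proc do |severity, time, _progname, msg|
  "#{time.utc.iso8601} #{Socket.gethostname} [#{Process.pid}] #{severity}: #{msg}\n"
end

logger.info("Request completed in 42ms")
puts buffer.string
```

With every line self-describing like this, simple `scp`-and-merge or a tailing shipper is enough to aggregate without syslog.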

CLI-Based "V" in Rails MVC?

Having a hard time getting any useful results from various searches on this concept, probably because it's (a) wrong and/or (b) obscure. Essentially, though, I'd like to write an application that works as either a normal web app or with a command-line interface. I've done this in the ancient past for sysadmin-y stuff using Perl, but that had none of the joy that using Ruby/Rails brings.
I am comfortable enough with Rails itself, and also use standalone Ruby for all manner of CLI stuff. What I'm looking for is best practices, if they exist, for extending a Rails application to have CLI functionality.
Perhaps the answer is as simple as using script/runner and doing my own "VC" while using my Rails models... This is what I was planning on doing, but I thought I'd step back and sanity-check that approach first. I'm having a hard time imagining how I'd utilize any of the Rails controller stuff, given that it's so tightly married to HTTP requests, but I'm often surprised by what clever(er) folks come up with.
Thanks for any useful responses.
I think it all depends on whether you want to reuse your controller logic. If you do then you can go down the route of writing a gem/Rake task/standalone Ruby script that makes HTTP requests to the application and receives the responses as JSON/XML/plain text or whatever. Something like HTTParty is ideal for this.
The second alternative is as you describe: drive your Rails models directly from your own script and present the results.
Another approach is that the web interface shells out to the CLI to do anything. Everything worth doing is in the CLI and the web just calls the CLI for all of its needs.
Shelling out is a little bit expensive. If it turns out to hurt performance, use popen to load the CLI just once per web session. Then you can feed it commands (write to its stdin via the popen pipe) and read the results (from its stdout via the popen pipe) without the CLI having to start up for each command. If the CLI is of the "take some arguments, do something, and exit" sort, then add a new mode to it, "--stay-resident" or some such, that switches it to the behavior that the web interface needs.
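The resident-CLI idea above can be sketched with `IO.popen` in read/write mode. The subprocess here is a trivial inline stand-in for your real CLI (which, as assumed above, would grow a hypothetical `--stay-resident` flag): it reads one command per line and prints one reply per line, so the parent pays the startup cost only once.

```ruby
# Stand-in "resident" CLI: one command in per line, one reply out per line.
resident_cli = <<~'RB'
  $stdout.sync = true
  while (line = $stdin.gets)
    puts "did: #{line.strip}"
  end
RB

# Open the CLI once, bidirectionally ("r+" gives us its stdin and stdout).
pipe = IO.popen(["ruby", "-e", resident_cli], "r+")
pipe.sync = true

# Feed it a command and read the result, with no per-command startup cost.
pipe.puts "create user alice"
reply = pipe.gets
puts reply

pipe.close
```

The same pipe can be kept in the web process for the lifetime of a session, dispatching each web request as one line of input.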

What is the best server stack/configuration for Rails SaaS app

What would you suggest as the best server stack for a dedicated server which needs to host Rails SaaS application (not a lot of traffic but need to keep options open for future).
Regardless of your application, you're probably going to want certain standard components:
nginx/passenger will work for small apps or large apps. You should use it.
Unless you have a specific reason to use something else, you should use MySQL since the vast majority of the Rails community uses it and you will be able to get better support.
You should have memcached running right away, even if you don't use it for much yet. You're going to want to be able to seamlessly add caching as it's needed.
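The reason adding caching can be seamless is the fetch call pattern, in the spirit of Rails' `Rails.cache.fetch`. A minimal sketch, using an in-memory Hash as a stand-in for a real memcached client (the class and key names are made up for illustration):

```ruby
# Fetch-style cache: code written against #fetch doesn't change when the
# Hash backing store is later swapped for a real memcached client.
class SimpleCache
  def initialize
    @store = {}
  end

  # Return the cached value for key, or run the block, cache, and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = SimpleCache.new
calls = 0
2.times { cache.fetch("expensive") { calls += 1; 40 + 2 } }
puts calls  # the expensive block only ran once
```

Wrapping slow queries in `fetch` from day one means turning on memcached later is a configuration change, not a rewrite.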
You're going to want a process for setting up a new server that is fully automated. That way, if you need to spin up a second server, it's trivial. If you ssh into a box and configure it by hand, then when you need another server in a pinch (or the first server gets corrupted), you're going to need to remember all the things you did. Not a good place to be in an emergency.
You should be on the very latest version of Ruby on Rails, and upgrade frequently. Keep an eye on deprecations and changes and make the suggested changes as early as possible. When Rails 3 is released, use it.
Engine Yard, where I work, uses an open-source tool called Chef to manage our automated deployment solution. That's probably a good option.
As ever with a question that broad, it depends. Some things to think about:
What does the application do?
Does the application use any database vendor-specific SQL?
What are the availability requirements?
What are the performance requirements?
How much data will there be?
Which server stacks do you or the person who will be administering it have experience of?
What is your budget?
One thing I can say with complete certainty is that you don't want to be using Windows, because Rails works best on a Linux/UNIX stack.
A lot of it depends on your needs. If the model isn't very complex and/or your traffic is fairly low, you can probably get away with apache, mongrel, and sqlite on some *nix.
If you start seeing performance issues, you can add some memcached into the mix, upgrade (relatively painlessly) to mysql, and use a different server (passenger/nginx).
There are also alternative Ruby implementations that have some performance-boosting changes. Rubinius and JRuby come to mind.
