I want to remove a page I created in my Eleventy-generated website; let's call it a-post.md. I don't know the right way to do it.
I just removed the "source" page and, sure enough, it disappeared from the site. However, the files it generated in the _site directory are still there. Am I supposed to "clean" these up by hand, or am I doing something wrong?
I know this kind of stuff shouldn't be a question on Stack Overflow, but as a newbie in the world of Eleventy (and static site generators), I was baffled by the lack of newbie-friendly documentation on the matter. There are a lot of articles and posts on how to create and customize pages, but not on how to dispose of them correctly.
The problem you're describing usually only happens during local development, or if you deploy new versions of your site manually, e.g. by uploading them over SFTP, because the site output sticks around between builds. SSGs are heavily geared towards atomic deploys: every deployment is executed in its own temporary container environment, and the generated site completely replaces the previous build. The newly deployed build will therefore never contain files from a previous build. This is true for all SSG hosting providers like Netlify or Vercel.
Local development
It's best practice to remove your output folder before the build step so your local build matches the atomic deploy you will get in the live environment. You can put the cleanup command in your build script, so you never have to think about it (assuming your output directory is dist):
"scripts": {
"build": "rm -rf dist/ && eleventy"
}
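Note that rm -rf is not available on Windows shells. If that matters for your team, a cross-platform variant using the rimraf package (an extra dev dependency, not something Eleventy ships with) could look like this:

"scripts": {
  "clean": "rimraf dist",
  "build": "npm run clean && eleventy"
}

Either way, every local build starts from an empty output directory, just like a fresh deploy container would.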
Manual deploys
This is really not the best way to deploy your site, but if you're doing this I would try to mimic an atomic deploy:
1. Upload the new build next to the webroot of your live site. For example, if your live site lives in htdocs, upload the new site as htdocs-new.
2. Delete the htdocs folder.
3. Rename htdocs-new to htdocs.
This way, you will never get any remnants of previous builds sticking around. The downside is that between steps 2 and 3 your site will be briefly unavailable, whereas atomic deployments on Netlify or Vercel have zero downtime.
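For concreteness, here is a minimal sketch of those three steps over SSH; the host and paths are placeholders and will differ on your setup:

# 1: upload the new build next to the webroot (host/paths are hypothetical)
scp -r _site user@example.com:/var/www/htdocs-new
# 2 + 3: delete the old webroot, then rename the new one into place
ssh user@example.com 'rm -rf /var/www/htdocs && mv /var/www/htdocs-new /var/www/htdocs'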
Related
What are the persistence options for FitNesse files? So far it seems like the file system is the only thing supported. There does appear to be an out-of-date database plugin. Is there anything else that is supported (S3, database, etc.)? Is there a way to control where files are persisted when using the file system?
I believe there is very little in that area. The location of the files can be controlled using a command line option. See http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.QuickReferenceGuide#FitNesseCommandLINE
-d /path/to/fitnesse/root
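For example, a typical local launch with the standalone jar (the port and path are whatever you choose) would be:

java -jar fitnesse-standalone.jar -p 8080 -d /path/to/fitnesse/root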
I've used the FitNesse wiki as a local development tool, with the pages on the file system. Once I'm satisfied with the tests, I commit them to version control (e.g. git) so that they become part of the (integration) test pipeline setup (e.g. they are run as part of the project's CI/CD pipeline).
I believe there is a plugin that will automatically commit any save actions to Git, but I've never used it. Saving each edit action just pollutes version control, in my opinion. I only want to see tests after they have been checked/completed, and that tends not to be every save.
Working in a shared wiki environment (which is where I would expect a non-file-system approach to fit in), you run into the same problem, I expect. Developing automated tests is a development task that requires some iterations before it is 'done', and not all attempts reach that 'done' state. So using shared storage for wiki persistence creates 'noise' in the test set: which tests form the current reference set that should pass, and which are work in progress?
If you are working on a larger project where new features are developed together with their automated tests, it becomes even more important to know which test changes belong to which features/changes. Having the tests on the file system, in version control, allows you to develop tests in sync with code changes in the same branch. This is what I would recommend.
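In practice that workflow is plain git; the page path below is a placeholder for wherever your FitNesse root lives:

# commit finished test pages together with the code they exercise
git checkout -b feature-x
git add src/ FitNesseRoot/FeatureXTests/
git commit -m "Implement feature X plus its acceptance tests"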
I am looking into how to set up a Liferay project with version control and automated deployment. I have a working local development environment in Eclipse, but as far as I understand it, a Liferay portal setup consists partly of the Liferay portal instance running on Tomcat and partly of my custom module projects for customization. I basically want all of that in one git repository, which can then be:
1: cloned by any developer to set up their local dev environment
2: built and deployed by e.g. Jenkins into e.g. AWS
I have looked at the Liferay documentation regarding creating a Docker container for the portal, but I don't fully understand how things like portal content would be handled.
I would be very grateful if someone could point me in the right direction on how an environment like this would be set up.
Code and content are different beasts. Set up a local Liferay instance for every single developer. Share/version the code through whatever version control you use (you mention git).
This way, every developer can work on their own project, set breakpoints, and create content that doesn't interfere with other developers.
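If you go the Docker route mentioned in the question, a throwaway per-developer instance can be started from the official liferay/portal image (pick a current tag yourself):

# starts a disposable local portal on port 8080; content created in it stays local
docker run -it -p 8080:8080 liferay/portal:<tag>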
Set up a separate integration test environment that gets its code exclusively through your CI server and is never touched manually.
Your production (or preproduction) database will likely have completely different content: Where a developer is quick to create a few "Lorem Ipsum" posts and pages, you don't want them to escape into production. Thus there's no movement of content from development to production. Only code moves that way.
In case you want your developers to work on a production-like environment, you can restore the production content (database) to development machines. Note that this is risky though: The database also contains user accounts, and you might trigger update notification mails from your development machines - something that you want to avoid at all costs. Plus, this way you give developers access to login data (even though it's hashed) which can be abused. And it might even be explicitly forbidden by industry regulations to use production data in development environments.
In general: every system has its own database (or at least its own schema), document store, and indexing server. Every developer has their own portal JVM running. The other environments (integration test, load test, authoring, production) are also separate environments. And no, you don't need all of them all the time.
I can't attribute this quote (Milen can - see his comment), but it holds here:
Everybody has a testing environment. Some are lucky to run a completely different production environment.
Be the lucky one. If everyone has their own fully separated environment, nobody is stepping on anyone else's toes. And you'll need the integration tests (with the CI output) anyway.
I am wondering about the best way to include an old website in a newer Rails app.
The legacy web site:
Has 21,000 small text files with minimal markup that are linked together.
Totals ~ 220MB
Has a root page located within a directory and is linked to many sub-directories
I'd like to include the old site in my Rails app folder, but I am concerned that it will mean a much longer cycle each time I deploy. I am using Capistrano, and my first thought is to place the Old Site folder in the shared directory on the production server and symbolically link to it accordingly. This approach strikes me as undesirable, as my resources for New App would be split across more than one location. The benefit might be a much quicker debug/deploy cycle.
Right now, I have no plans to modify the Old Site files. At some point, that could change.
I have been impressed with how quickly my otherwise lightweight project will deploy. Right now I am making frequent changes and repeating the code/deploy cycle often. I'd like to avoid slowing that down unnecessarily.
Is there a best practice for this sort of thing?
I don't think there is a "best practice" per se, but one option would be to use Git submodules. Add your Old Site as a submodule in the right folder of your New Site: you have a single repository to develop in, and by default Capistrano will not fetch the submodule files while deploying.
With submodules, you will have two git repositories, but one is linked from within the other ("old site" inside "new site"). I think that is logical: you have an old-site repo and a new-site repo. They are two separate sites, after all, just linked.
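A minimal sketch of that setup; the repository URLs and the public/old-site path are placeholders:

# link the legacy site into the new app (run inside the new app's repo)
git submodule add https://example.com/old-site.git public/old-site
git commit -m "Add legacy site as a submodule"
# a teammate cloning the app pulls the submodule in afterwards
git clone https://example.com/new-app.git
cd new-app && git submodule update --init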
I am trying to migrate our office setup from SVN to Git and am setting up Redmine as the host for our projects and issue management (currently we use a version of GForge + SVN). I should preface this by saying that I'm an embedded C software developer by day and have basically zero experience with Rails or web apps, but I like trying new things, so I volunteered to set up the project management tools which will take us into the future.
I have Redmine set up and am using Gitolite as the Git repo manager. Additionally, I am using the ericpaulbishop/redmine_git_hosting plugin to facilitate automatic public SSH key pushing to Gitolite and automatic repo creation when we register a new project. Everything seems to work, except the repository view within a project does not keep track of the changesets. (The "History" is just empty, although when you view the files, it does show the latest version correctly.)
I copied the post-receive hook from the plugin's contrib directory to the gitolite common hooks directory (.gitolite/hooks/common), but again, I know little about Ruby and how these gitolite hooks work, so I don't know how to debug this. I notice there are log messages and things in the hook, but I have no idea where those are printed, etc.
I even tried the Howto on the Redmine wiki, HowTo setup automatic refresh of repositories in Redmine on commit:
#!/bin/sh
curl "http://<redmine url>/sys/fetch_changesets?key=<your service key>"
Any ideas on where to start debugging? I've been able to resolve every problem up to this point, but I'm a little stuck now. The plugin doesn't make it obvious how this is supposed to work, and to be honest, I'm not even sure whether the problem is Redmine not reading the repo correctly (or at all), gitolite not communicating as Redmine expects, etc.
I guess I could answer this...
I checked the issues on the plugin's GitHub page and found this one:
https://github.com/ericpaulbishop/redmine_git_hosting/issues/89
That was pretty much exactly my problem. It does appear to be a small bug in the plugin, but you can work around it by changing Max Cache Time to "1 minute or until next commit". This immediately fixed my problem. I simply left it like that, but one of the posters claimed that you can change it back to "until next commit" and it works from then on.
I have a JEE6 project based on GlassFish 3.1.1 that is moving beyond the "one developer prototype" stage to being developed by a team.
Each member of the team will have their own local GlassFish server. I don't want each of them to have to go through all the manual steps of setting up the JDBC connection pool, JMS services, JDBC security realm, etc. via the admin console, as I did when first developing the prototype. It is error-prone, and if I want to change something, I have to tell everyone what to do. I want it to be done as part of the Ant build, so that it is a one-clicker; then if I have to change something, I can just tell them to do a clean to blow away the domain and run it again. So there would be an Ant task, "config-glassfish", that would somehow configure the domain for them.
Despite extensive searching, I can't seem to find any step-by-step guide on how best to accomplish this. Anyone have a link?
Would it be best to attempt to capture the fully configured domain and store that in our src repository?
Or should I instead have ant issue "asadmin" commands to create and configure the domain?
You can do all of this with the sun-appserv-admin Ant task. You can find more information here: http://docs.oracle.com/cd/E19316-01/820-4336/beaev/index.html
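Under the hood these are the same operations you could script with asadmin directly; the domain, pool, and resource names below (and the Derby data source) are placeholders for your own config:

asadmin create-domain --portbase 8000 devdomain
asadmin create-jdbc-connection-pool --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.DataSource devPool
asadmin create-jdbc-resource --connectionpoolid devPool jdbc/devDS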
We struggle with this kind of thing at my work too, but only with a few developers. One thing I really like is that GlassFish has the concept of a resources.xml, which covers a lot of the config. I use it to pass around connection pool configs and JMS queues, and it works really well, but it might not cover all your config needs. The contents of the file are pretty much snippets from the domain.xml, and I haven't figured out everything it can do yet. http://docs.oracle.com/cd/E19798-01/821-1751/ggoeh/index.html http://javahowto.blogspot.com/2011/02/sample-glassfish-resourcesxml.html
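If you go this route, the file can be applied to a freshly created domain in one step with the add-resources subcommand (the file name is whatever you keep in your repo):

asadmin add-resources glassfish-resources.xml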
I haven't tried other ideas, since the resources.xml solves my major pain points, but you could take your domain.xml, work through any issues brought up by copying it to another developer's domain, then do variable replacement on the parts of the file that need it. That way you could have Ant create the domain, then overwrite the domain.xml with the newly filled-out one.
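A rough sketch of that last step in shell; the @DB_HOST@ token, template name, and domain path are made up for illustration:

# fill machine-specific values into a template, then overwrite the domain's config
sed -e "s/@DB_HOST@/localhost/g" domain.xml.template > "$GLASSFISH_HOME/glassfish/domains/domain1/config/domain.xml"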
Maybe there is a way you could use asadmin backup-domain.
One other idea would be Chef. http://wiki.opscode.com/display/chef/Home
I ended up just putting the domain.xml into the src repository, making an Ant task to copy it over to the GlassFish directory, and instructing the other developers to make sure GlassFish is not running when they run that Ant task.
This worked for my case...