GitLab Pages delivers random content

I am experiencing weird behavior with the Pages feature of the GitLab Omnibus package running on an Ubuntu 16.04 virtual machine. Some projects use Pages with Jekyll built by GitLab CI, which has been working as expected since it was first published with GitLab CE.
For a couple of days now, visiting any of the homepages of those sites shows the content of just one of the projects. Each of them should of course show different content, but they all show the same page. Even stranger: the content shown on each of the sites changes over time to that of one of the other projects, and I cannot tell whether this is deterministic.
Re-running the builds for each of the projects did not fix this, and neither did gitlab-ctl reconfigure, gitlab-ctl stop and start, nor rebooting the entire VM.
To investigate the issue, I edited what I assume is the output file of the build process at /var/opt/gitlab/gitlab-rails/shared/pages/www/www.domain.org/public/index.html. The edits did not show up on the webpage right away, but only later on, as the content kept "rotating" between projects as described above.
So what is going on there? Is this some caching issue? Is it a misconfiguration? Is it a bug? Please help me find and fix the problem, as these are production websites.

Looks like this is actually an issue

Related

Enable k8s experimental features in Docker Desktop

Does anyone know if this is possible?
All I can find in the docs is a reference to enabling Docker experimental features, but not the Kubernetes experimental features.
I tried this, but still get an error.
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found a way to do it.
Steps:
1. Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop and restart WSL (right-click the tray icon and choose "Quit Docker Desktop", then run wsl --shutdown, then run wsl).
2. Open the [...]/kubeadm/manifests folder in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
3. Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
  containers:
  - command:
    [...]
    - --feature-gates=EphemeralContainers=true   # <-- add this line
4. Start Docker Desktop again.
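To confirm the flag actually took effect after the restart, a check along these lines should work (a sketch; the pod name assumes Docker Desktop's default node name, docker-desktop):
# Should list --feature-gates=EphemeralContainers=true among the container's command args
kubectl -n kube-system get pod kube-apiserver-docker-desktop -o yaml | grep feature-gates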
It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to find out.
Some of the slowdowns I hit:
It took me quite a while to even find those manifest files. (I eventually found them using grepWin, searching through the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens.)
Once I found them, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (In retrospect, the issue may have just been the minor tab-indentation one described below...)
When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in the following regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed Tab to create an indent so I could add the new flag to the argument list, it inserted a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not be so bad if Kubernetes gave a sane error message for it, but instead it merely gave the message unexpected EOF. And I didn't even see that error message at first, because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant one that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
I decided to try bypassing the manifest-files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex, the tooling is substandard, and the documentation is substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managed to do so by calling etcdctl within the etcd-docker-desktop pod). I spent further time still writing up a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and being able to write changes to entries back despite there being 3+ levels of quoting involved (I eventually was able to use stdin to pass the value rather than as part of the command string, to avoid quotation-mark-inception). After all the work on etcd reading/writing above, I found it didn't work anyway because Kubernetes invariably "breaks" if anyone else writes to its etcd data-store. (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after)
After all of the above, I decided to have one last go with just adding the flags to mentioned manifest files. Was still getting the startup failure/error, but at the very end, I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and is finicky about what comments it is able to recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was being accepted just by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to try making the line "half-indented" since it's "in-between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me; are the other lines also using tabs? I checked, and nope, they were using spaces. And that's when I realized I had wasted the last few hours on something I coulda just fixed with a simple indent change.
The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as unexpected EOF. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
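If you hit the same unexpected EOF error, one cheap sanity check before digging deeper is to scan the manifests for stray tab characters (a sketch, assuming GNU grep):
# Prints any line that contains a tab character, with its line number
grep -Pn '\t' kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml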
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, for this to work, it requires the EphemeralContainers feature gate to be enabled, and Kubernetes client and server version v1.16 or later.
As to the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling feature gates.
However, you may still try to SSH into your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding the following inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so they are recreated with new settings applied. You'll find them by running:
kubectl get pods -n kube-system
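For example, assuming the node is called master (the suffix of each pod name is the node name, so yours may differ):
kubectl delete pod kube-apiserver-master kube-scheduler-master -n kube-system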

CSS not rendering with Puppeteer inside Docker

I have automated browser tests using Puppeteer. I ran them on CircleCI using the default CircleCI Windows machine. Now I'm trying to change to a Docker image based on a Microsoft Debian image (the website is .NET). I installed Chromium on this image. The problem is that the CSS is not rendered. I used page.on for request/response and the CSS is requested, with a 200 response. I looked for a configuration that could be disabled, but I didn't find one, not even here on Stack Overflow.
Repository: https://github.com/darakeon/dfm/
Branch right now: 4.1.5.0 (it will be promoted to master when I finish the version)
The Dockerfile is inside the docker folder. It is on Docker Hub too; my user is darakeon, and right now the image name is darakeon/net-circleci. When I solve the problem, I will rename it and split it into two different images: one based on the Microsoft image, which has only LibMan, and another based on the first, which can run Puppeteer too.
Tests folder: site/Tests/Browser
Script I'm using to run tests: .circleci/browser/run-tests.sh
The more time you spend trying to solve something, the more ridiculous the solution turns out to be. Please, call me an idiot, but help me solve this...
Discovered the problem. Here is how:
I used page.screenshot on another site on the web to check whether the CSS was rendering there. It was. Weird. While looking for solutions, I kept finding people teaching how to NOT load the CSS by intercepting the request and aborting it. So I intercepted the requests to see if the CSS was being requested:
await page.setRequestInterception(true);
page.on('request', (req) => {
    console.log(req.url(), req.resourceType());
    req.continue();
});
Given that the requests were OK, I went on to check the responses:
page.on('response', (r) => {
    if (r.status() >= 400)
        console.error(r.url(), r.status());
});
Surprise! My main CSS was returning 404. But why, if it worked on Windows? Simple. Windows doesn't care whether you call it Bootstrap, bootstrap, bOOTSTRAP or BoOtStRap; they all resolve to the same file. Linux only accepts the exact same casing.
So, when you take your .NET site from Windows and put it on Linux, check the casing of everything.
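A rough way to catch these mismatches before deploying is to compare the stylesheet paths referenced in the views against what actually exists on disk (a sketch; the Views/ and wwwroot/ paths are hypothetical, adjust them to your project layout):
# List the stylesheet references found in the views
grep -rhoE 'href="[^"]*\.css"' Views/ | sort -u
# List the CSS files as they actually exist on the case-sensitive filesystem
find wwwroot -iname '*.css'
Any reference that only matches a file with different casing is a 404 waiting to happen on Linux.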

Scheduled publication not working in Umbraco

I am using Umbraco 7.5.2, installed on a VM in Azure.
When a Publish At date is set, the page is never published. I have tried the following steps:
Create and publish a page
Unpublish the page
Set the Publish At date to a few minutes in the future and Save&Publish
Verified page is definitely not visible
Wait for the time to roll around, and even a few minutes more
Page is never published.
The following message appears in the logs on save when setting the Publish At date, which is normal. But no other messages appear afterwards, and the page is never published.
2016-10-25 17:46:20,784 [P4808/D10/T21] INFO Umbraco.Core.Services.ContentService - Content 'Video1' with Id '1312' is awaiting release and could not be published.
I've made a copy of my production Umbraco folder and of the database ON THE SAME VM. That instance works for scheduled publishing.
I'm wondering if anyone can provide some clues as to what the issue could be or where I could look. I'd like to avoid having to migrate my production data to this test site.
Thanks
The problem may be related to a different time zone on the Azure VM, as Umbraco uses the server time during code execution. This should be the first thing to check.
In version 7.6.0 (https://our.umbraco.org/contribute/releases/760) it will be possible to set the server time zone and then set a time with a precise offset, so this type of problem shouldn't occur any more.
For me it was a wrong umbracoApplicationUrl path (it has to point to YOURDOMAIN/umbraco). After fixing this URL, everything started working fine.
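For reference, in Umbraco 7.x that value lives on the web.routing element in config/umbracoSettings.config; a sketch with the other attributes omitted and a placeholder domain:
<settings>
  <web.routing umbracoApplicationUrl="http://YOURDOMAIN/umbraco" />
</settings>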
In addition to Marcin's answer, also check that the server can access itself, as it pings itself to fire the scheduled publishing. If it can't resolve its own address, the scheduled publish will fail. It doesn't happen often, but I've seen a few VERY locked down installs where the server can't resolve its own address.

WSS caches old Workflow version

I'm currently developing three workflows that are supposed to handle the status of items in different lists.
Each Workflow is attached to a separate list.
When I'm deploying and debugging in my development environment, everything works fine, except for the case when an item is created via an incoming mail.
I already figured out that I have to restart some services and then it'll work, but I'm still not sure which of the services is caching the workflow.
Afterwards I build a .wsp file, which I deploy on a server.
Each time I deploy the solution, I retract and delete the solution first.
After deployment I recreate the workflows on the lists.
It seems to me that this has no effect: an older version of the workflow is still triggered if I create a new instance in the list.
I already restarted the whole server and still no result.
Does anyone have an idea what else I could try in order to get this working?
Thanks in advance.
If the Timer Service is the one that calls your code, then restart the Windows SharePoint Services Timer (OWSTIMER.EXE).
When a workflow waits on something, it gets serialized (dehydrated). When the event happens, OWSTIMER.EXE deserializes (rehydrates) it and continues workflow execution.
So the timer is the one that wakes the workflow up.
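On a WSS 3.0 box, restarting it from an elevated command prompt looks roughly like this (SPTimerV3 is the usual short service name for the Windows SharePoint Services Timer; double-check yours with sc query if it differs):
net stop SPTimerV3
net start SPTimerV3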
So this problem kind of resolved itself.
I was reading an article on Kirk Evans' blog about an issue with the development of workflows in VS2008 for WSS.
I had not realized that I still had an illegal reference in my project properties.
I removed the reference. The second thing I tried was deploying with -upgradesolution rather than doing a retract-delete-add-deploy...
I don't know which of the two did the trick, but I can finally see the new workflows kicking in.
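For reference, the upgrade deployment was roughly the following, with a hypothetical solution name:
stsadm -o upgradesolution -name MyWorkflows.wsp -filename MyWorkflows.wsp -immediate -allowgacdeployment
stsadm -o execadmsvcjobs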
Thanks for your help.

Why does Rails cache view files when hosted on a VM with the codebase on a Samba share

I have the following setup:
Code on my local machine (OS X) shared as a Samba share
An Ubuntu VM running within Parallels, which mounts the share
Running Rails 2.1 (via Mongrel, WEBrick or Passenger) in development mode, if I make changes to my views they don't update without me having to kick the server. I've tried switching to an NFS share instead, but I get the same problem. I would assume it was some sort of Samba cache issue, but autotest picks up the changes to files instantly.
Note:
This is not render caching or template caching and config.action_view.cache_template_loading is not defined in the development config.
Checking out the codebase directly onto the VM doesn't show the same issue (but I'd prefer not to do this).
Editing the view file directly on the VM does not resolve the issue.
Touching the view file after alterations does cause the changes to appear in the browser.
I also noticed that the clock in the VM was an hour fast; changing that to the correct time made no difference.
I had the exact same problem while developing on andLinux.
My andLinux clock was about three hours ahead of the Windows host, and setting the correct time (actually, a minute or so behind) solved the problem.
Actually, setting the correct date & time in the VM does seem to have solved the problem (after I restarted mongrel) -- going to do a little more digging.
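In case it helps anyone else hitting this, the fix boils down to making the guest clock match the host; a sketch for the Ubuntu guest (ntpdate may need to be installed first):
# Check the guest clock against the host, then do a one-off NTP sync and restart the app server
date
sudo ntpdate -u pool.ntp.org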
