In my environment, there is a RHEL Puppet master that successfully manages over 500 nodes.
When I run Puppet on the master server (puppet agent -t), I get the error below. It seems the Puppet agent is disabled on the master. Is there any impact if I enable the Puppet agent on the master?
[root@puppet-master]# puppet agent -t
Notice: Skipping run of Puppet configuration client; administratively disabled (Reason: 'reason not specified');
Use 'puppet agent --enable' to re-enable.
Puppet should be enforcing its own configuration and the default behavior for PE is to run the agent on the master every 30 minutes.
You can test what would happen if you re-enabled Puppet using the following steps:
Run systemctl stop puppet. This just stops the agent service; it won't stop the Puppet server.
Run puppet agent --enable to re-enable Puppet runs.
Run puppet agent -t --noop. In noop mode it will not apply any changes, just report back what it would change.
At this point, if it's not going to make any further changes, you'll be safe to run systemctl start puppet (and systemctl enable puppet if it isn't already enabled at boot) and let it start enforcing itself again. If it is going to make changes you don't want, run puppet agent --disable so the agent doesn't accidentally start enforcing again after a reboot, and investigate further.
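Put together, the test sequence looks something like this (a minimal sketch, assuming the agent runs as the systemd service puppet as in the steps above):
systemctl stop puppet        # stops only the agent service; the Puppet server keeps running
puppet agent --enable        # clears the administrative disable
puppet agent -t --noop       # dry run: reports what would change without applying it
systemctl start puppet       # if the noop run looks safe, resume normal enforcement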
I'm running Jenkins 2.319.2 installed from the Debian Bullseye repository and have set up some nodes to run my tasks.
In my Jenkins pipeline job, which runs on a node instead of on the master, I set up several stages, including checkout, build, deploy, and finally I have to restart a system service using systemctl. The last step needs to run with root privileges, so I configured the node's user in the sudoers config to let it run systemctl without a password (NOPASSWD). However, my job always asks for a password when running the final step, and hence fails.
If I log in as that user directly over SSH, I can run sudo systemctl without having to enter a password. In another freestyle job I also used the same approach to run sudo systemctl restart myservice in the "execute shell" step, without any problem. But in the pipeline stage it always asks for a password. No idea why.
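For reference, a sudoers rule along the lines described would typically look like the following (jenkins and myservice are placeholders for the node's user and the service); it usually lives in its own file under /etc/sudoers.d/, edited with visudo:
jenkins ALL=(root) NOPASSWD: /usr/bin/systemctl restart myservice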
I have an ECS Fargate service running the jetbrains/teamcity-agent image. This is connected to my TeamCity host, which is running on an EC2 instance (Windows).
When I check whether the agent is capable of running docker commands, it shows the following errors:
Unmet requirements:
docker.server.osType contains linux
docker.server.version exists
Under Agent Parameters -> Configuration Parameters, I can see the docker version and the dockerCompose.version properly. Is there a setting that I am missing?
If you are trying to access a Docker socket in Fargate: Fargate does not support running Docker commands; there is a proposed ticket for this feature.
The issue with "docker.server.osType" not showing up usually means that the docker command run from the agent cannot connect to the running Docker daemon. This is usually due to a lack of permissions, as Docker by default only allows connections from root and members of the docker group.
Teamcity-Unmet-requirements-docker-server-osType-contains-linux
I was facing similar issues and got them fixed by adding the "build agent" user to the "docker" group and restarting/rebooting the server.
Here "build agent" user means the user under which your TeamCity services are running.
Command to add a user to the group:
# usermod -a -G docker <userasperyourrequirement>
Command to reboot the server:
#init 6
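After the reboot, a quick way to check that the change took effect (teamcity here is just a placeholder for your agent user):
id teamcity                      # the docker group should appear in the list
sudo -u teamcity docker info     # should reach the daemon without a permission error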
I have a Jenkins CI and use it to build (mvn) and containerize (docker) my app using Jenkins scripted pipeline. Lastly, I want to deploy my container to Heroku dyno (I have already created an app).
I have followed this documentation https://devcenter.heroku.com/articles/container-registry-and-runtime and have successfully pushed my Docker image to registry.heroku.com/sunset-sailing-4049/web.
The issue is that since this announcement https://devcenter.heroku.com/changelog-items/1426 I now need to explicitly execute "heroku container:release web" in order to get my Docker container running from the registry on the app dyno. This is where I am royally stuck. See my issues below:
Heroku is not recognized by Jenkins. (My Jenkins is running on EC2; I have installed the Heroku toolbelt as the ec2-user user, but Jenkins throws the error: heroku: command not found.) How do I resolve this issue?
How do I do "heroku login" from Jenkins, since the login command prompts for a browser login? I have added an SSH key, but I do not know how to use it from the command line, hence the Jenkins "shell script".
The only other way I can think of is deploying via a Heroku pipeline using a dummy git repo onto which Jenkins uploads the source code on a successful build.
Would really appreciate your help solving the above 2 issues.
Thanks in Advance.
You need to install the Heroku CLI as the user under which Jenkins is running. Or, if you installed it globally, it may not be on the PATH of the user under which Jenkins is running.
There are multiple options for setting PATH:
Set it for a specific command.
If your job is a pipeline, just wrap the heroku command in a withEnv closure:
withEnv(['PATH+HEROKU=/usr/local/bin/']) {
    // your heroku command here
}
Set the PATH for the Jenkins slave: go to [Manage Jenkins] -> [Manage Nodes], configure your node, and set the environment variable PATH to $PATH:/usr/local/bin/. This way all jobs running on the slave will get the environment variable injected.
For automated CLI interactions, Heroku supports API tokens. You can either put the token in ~/.netrc on the build machine or supply it as an environment variable (see here).
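For example, a ~/.netrc entry for the Heroku API generally looks like this (the e-mail address and token are placeholders); alternatively, many setups simply export the token as HEROKU_API_KEY in the job's environment:
machine api.heroku.com
  login me@example.com
  password <heroku-api-token>
machine git.heroku.com
  login me@example.com
  password <heroku-api-token>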
(Writing here in case someone is facing the same scenario.)
OK, I took @vladimir's suggestion and did the below.
Heroku command (for Jenkins running on EC2):
The command below is needed to push a built Docker image to Heroku via Jenkins or another CI/CD tool. Because of a recent change (https://devcenter.heroku.com/changelog-items/1426), pushing to the Heroku registry isn't sufficient any longer. In order to execute the command below, you need to install the Heroku toolbelt.
heroku container:release web
Install snap on Amazon Linux as below:
Follow the instructions to enable EPEL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/add-repositories.html
Then modify /etc/yum.repos.d/epel.repo. Under the section marked [epel], change enabled=0 to enabled=1.
Then do
sudo yum install epel-release
sudo yum install yum-plugin-copr
sudo yum copr enable ngompa/snapcore-el7
sudo yum -y install snapd
sudo systemctl enable --now snapd.socket
Then install the Heroku toolbelt:
sudo snap install --classic heroku
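Snap typically places the binary under /snap/bin, so that directory may need to be on the PATH as described above. A quick sanity check from the Jenkins user's shell:
which heroku
heroku --version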
Deploying the Docker image to Heroku:
In the Jenkins scripted pipeline:
withCredentials([string(credentialsId: 'heroku-api-cred', variable: 'herokuRegistryApiCred')]) {
sh "docker login -u email#example.com -p ${herokuRegistryApiCred} registry.heroku.com"
}
// Tag docker img (in my case it was an image in dockerhub)
sh "docker tag dockerhubusername/pvtreponame:${imageTag} registry.heroku.com/your_app_name/release_type[ie>web]"
sh "docker push registry.heroku.com/your_app_name/web"
sh "/usr/local/bin/heroku container:release web --app=your_app_name"
sh "docker logout registry.heroku.com"
In order to run the app inside Docker (in my case it was Java) I had to add the line below, because otherwise it was crashing: the CMD has to 1. tell the app about Heroku's port binding and 2. tell the web process which command to run. The ENTRYPOINT ["java","-jar","my_spring_boot_app-0.0.1-SNAPSHOT.jar"] does not work on Heroku:
CMD ["web", "java $JAVA_OPTS -Dserver.port=$PORT -jar /usr/app/my_spring_boot_app-0.0.1-SNAPSHOT.jar"]
We are running DC/OS 1.11 on the Azure cloud and have Docker engine version 17.09 on our agent nodes. We would like to upgrade the Docker engine to 17.12.1 on each agent node.
Has anyone had experience with such procedure and would it cause any instability / side effects with the rest of the DC/OS components?
I have not done the upgrade myself in the exact environment you are running, but I would not be terribly concerned. It goes without saying that you should test this out in a non-production environment before you do it in production.
I would suggest draining the agent node before doing the Docker upgrade. By draining I mean stopping all the containers (tasks) running on the node; this ensures that the Mesos agent stops the tasks and then informs the frameworks that the tasks are no longer running, so the frameworks can take appropriate action.
To drain the nodes, run
sudo sh -c 'systemctl kill -s SIGUSR1 dcos-mesos-slave && systemctl stop dcos-mesos-slave'
for a private agent, or
sudo sh -c 'systemctl kill -s SIGUSR1 dcos-mesos-slave-public && systemctl stop dcos-mesos-slave-public'
for a public agent.
You will observe the agent disappear from the Nodes section of the UI and all tasks running on the agent marked as TASK_LOST. Ideally it would be TASK_KILLED, but that is a topic for another time.
Now perform your Docker upgrade.
After you have upgraded Docker, start the agent service back up:
sudo systemctl start dcos-mesos-slave
for a private agent, or
sudo systemctl start dcos-mesos-slave-public
for a public agent.
The nodes should now start showing up in the UI and start accepting tasks.
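If you have the dcos CLI configured against the cluster, listing the nodes is a quick way to confirm the agent re-registered:
dcos node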
To be safe:
Verify this in a non-prod environment before you do it in prod, to iron out any operational issues you might encounter.
Do one agent, or a subset of agents, at a time so that you are not left with a cluster without any nodes while you are performing the upgrade.
I am running Jenkins on Ubuntu 14.04 (Trusty Tahr) with slave nodes connected via SSH. We're able to communicate with the nodes to run most commands, but when a command requires a TTY, we get the classic
the input device is not a TTY
error. In our case, it's a docker exec -it command.
So I'm searching through loads of information about Jenkins, trying to figure out how to configure the connection to the slave node to enable the -t option to force TTY allocation, and I'm coming up empty. How do we make this happen?
As far as I know, you cannot give -t to the ssh that Jenkins fires up (which makes sense, as Jenkins is inherently detached). From the documentation:
When the SSH slaves plugin connects to a slave, it does not run an interactive shell. Instead it does the equivalent of your running "ssh slavehost command..." a few times...
However, you can defeat this in your build scripts by...
looping back to yourself: ssh -t localhost command
using a local PTY generator: script --return -c "command" /dev/null
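For example, the docker exec -it case from the question could be wrapped in the job's shell step something like this (my-container and the command are placeholders):
script --return -c "docker exec -it my-container ls /" /dev/null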