I have installed Java 1.8.*, set the JAVA_HOME path, and installed Jenkins from the official website as shown below on an AWS EC2 Linux instance.
Java path: JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-1.amzn2.0.1.x86_64 and
PATH=$PATH:$HOME/bin:$JAVA_HOME:$MAVEN_HOME:$M2
Jenkins commands used for installation:
sudo wget -O /etc/yum.repos.d/jenkins.repo \
https://pkg.jenkins.io/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
sudo yum upgrade
# Add required dependencies for the jenkins package
sudo yum install jenkins
sudo systemctl start jenkins
But Jenkins failed to start. I don't have Tomcat installed, so port 8080 should be available. I have also set the Java path in the /etc/init.d/jenkins file under the section
candidates="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-1.amzn2.0.1.x86_64/bin/java"
I started Jenkins with the command above, but the service kept failing.
Error description:
systemctl start jenkins
Job for jenkins.service failed because the control process exited with error code. See "systemctl status jenkins.service" and "journalctl -xe" for details.
systemctl status jenkins.service
jenkins.service - Jenkins Continuous Integration Server
Loaded: loaded (/usr/lib/systemd/system/jenkins.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2022-12-21 10:15:02 UTC; 10s ago
Process: 20467 ExecStart=/usr/bin/jenkins (code=exited, status=1/FAILURE)
Main PID: 20467 (code=exited, status=1/FAILURE)
Unit jenkins.service entered failed state.
jenkins.service failed.
jenkins.service holdoff time over, scheduling restart.
Stopped Jenkins Continuous Integration Server.
start request repeated too quickly for jenkins.service
Failed to start Jenkins Continuous Integration Server.
Unit jenkins.service entered failed state.
jenkins.service failed.
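To dig further, the Java binary from the candidates line can be run directly, and the unit's own log can be pulled, since systemctl status only shows the exit code. This is just a diagnostic sketch, using the unit name jenkins and the paths quoted above:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-1.amzn2.0.1.x86_64/bin/java -version
# show the last startup attempt with the actual error lines
sudo journalctl -u jenkins --no-pager -n 50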
If anybody has a solution for this, please help with the resolution. I expect the service to start and keep running, and to see the "Unlock Jenkins" screen in the browser.
Thanks
Related
I'm trying to create a docker container with systemd enabled and install auditd on it.
I'm using the standard centos/systemd image provided on Docker Hub.
But when I try to start auditd, it fails.
Here is the list of commands I ran to create and get into the Docker container:
docker run -d --rm --privileged --name systemd -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos/systemd
docker exec -it systemd bash
Now, inside the docker container:
yum install audit
systemctl start auditd
I'm receiving the following error:
Job for auditd.service failed because the control process exited with error code. See "systemctl status auditd.service" and "journalctl -xe" for details.
Then I run:
systemctl status auditd.service
And I'm getting this info:
auditd[182]: Error sending status request (Operation not permitted)
auditd[182]: Error sending enable request (Operation not permitted)
auditd[182]: Unable to set initial audit startup state to 'enable', exiting
auditd[182]: The audit daemon is exiting.
auditd[181]: Cannot daemonize (Success)
auditd[181]: The audit daemon is exiting.
systemd[1]: auditd.service: control process exited, code=exited status=1
systemd[1]: Failed to start Security Auditing Service.
systemd[1]: Unit auditd.service entered failed state.
systemd[1]: auditd.service failed.
Do you guys have any ideas on why this is happening?
Thank you.
See this discussion:
At the moment, auditd can be used inside a container only for aggregating
logs from other systems. It cannot be used to get events relevant to the
container or the host OS. If you want to aggregate only, then set
local_events=no in auditd.conf.
Container support is still under development.
Also see this:
local_events
This yes/no keyword specifies whether or not to include local events. Normally you want local events so the default value is yes. Cases where you would set this to no is when you want to aggregate events only from the network. At the moment, this is useful if the audit daemon is running in a container. This option can only be set once at daemon start up. Reloading the config file has no effect.
So at least as of Thu, 19 Jul 2018 14:53:32 -0400 (the date of that message), this feature was not supported; one has to wait.
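For the aggregation-only case described above, the change would look roughly like this (a sketch; it assumes the stock auditd.conf location on CentOS):
# set local_events = no in /etc/audit/auditd.conf (add the line if it is not already there)
sudo sed -i 's/^local_events.*/local_events = no/' /etc/audit/auditd.conf
sudo systemctl restart auditd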
We installed Docker 17.x on RHEL and are getting the exception below.
-bash-4.2$ docker version
Client:
Version: 17.09.1-ce
API version: 1.32
Go version: go1.8.3
Git commit: 19e2cf6
Built: Thu Dec 7 22:23:40 2017
OS/Arch: linux/amd64
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/version: dial unix /var/run/docker.sock: connect: permission denied
-bash-4.2$
To solve this, we introduced another user group (docker-user) and added all the users to this group. After that we ran these commands and were able to run Docker.
sudo systemctl stop docker
sudo systemctl start docker
cd /var/run
sudo chown :docker-user docker.sock
But we are facing another issue: whenever the VM is restarted, Docker is not running. So we decided to set up Docker to run as a daemon process, and we followed the steps below.
1. Created a docker.conf file under the /etc/systemd/system/docker.service.d folder.
2. Added this entry to the docker.conf file:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
ExecStartPost=/bin/chown :docker-user /var/run/docker.sock
After adding this entry we ran:
1. sudo systemctl daemon-reload
2. sudo systemctl stop docker
3. sudo systemctl start docker
We are getting the exception below:
-bash-4.2$ sudo systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.conf
Active: failed (Result: start-limit) since Wed 2018-03-28 09:10:50 PDT; 12s ago
Docs: https://docs.docker.com
Process: 23395 ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 23395 (code=exited, status=1/FAILURE)
Mar 28 09:10:50 hostname systemd[1]: Failed to start Docker Application Container Engine.
Mar 28 09:10:50 hostname systemd[1]: Unit docker.service entered failed state.
Mar 28 09:10:50 hostname systemd[1]: docker.service failed.
Mar 28 09:10:50 hostname systemd[1]: docker.service holdoff time over, scheduling restart.
Mar 28 09:10:50 hostname systemd[1]: start request repeated too quickly for docker.service
Mar 28 09:10:50 hostname systemd[1]: Failed to start Docker Application Container Engine.
Mar 28 09:10:50 hostname systemd[1]: Unit docker.service entered failed state.
Mar 28 09:10:50 hostname systemd[1]: docker.service failed.
Please guide me on how to set up Docker as a daemon process.
So, you have already dug in pretty deep.
However, this behavior is built into Docker. For user groups, the Docker daemon will allow users in the docker group to access the server (important to remember: this is effectively the same as giving root access to any user in that group!). If you want to specify a different group, you can start the daemon with the -G (--group) option.
Installing docker also installs a systemd service unit to run the daemon. The correct way to enable that (so that it restarts automatically) is
sudo systemctl enable docker
At this point, you didn't include enough of the journalctl logs to say why the daemon is not starting for you. If it is an option, I would try starting over, since I don't know everything you have changed; if that is not an option, the journalctl logs will likely explain the problem (probably that the docker group no longer has access to the socket after you chowned it, but that is just a guess).
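A sketch of that advice, with a placeholder user name: add your users to the default docker group instead of chowning the socket, and enable the unit so it comes back after a reboot:
sudo groupadd docker                  # no-op if the package already created the group
sudo usermod -aG docker some-user     # placeholder user; log out and back in afterwards
sudo systemctl enable docker
sudo systemctl restart docker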
I have 3 CentOS VMs. I have installed ZooKeeper, Marathon, and Mesos on the master node, and only Mesos on the other 2 VMs. The master node has no mesos-slave running on it. I am trying to run Docker containers, so I specified "docker,mesos" in the containerizers file. One of the mesos-agents starts fine with this configuration, and I have been able to deploy a container to that slave. However, the second mesos-agent simply fails with this configuration (it works if I take out the containerizers file, but then it doesn't run containers). Below are the logs and other information that came up.
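For reference, that per-agent flag file would look roughly like this (a sketch; it assumes the Mesosphere packaging, where each file under /etc/mesos-slave/ is turned into an agent flag):
# on each agent that should run Docker containers
echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers
sudo systemctl restart mesos-slave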
Here are some "messages" in the log directory:
Apr 26 16:09:12 centos-minion-3 systemd: Started Mesos Slave.
Apr 26 16:09:12 centos-minion-3 systemd: Starting Mesos Slave...
WARNING: Logging before InitGoogleLogging() is written to STDERR
[main.cpp:243] Build: 2017-04-12 16:39:09 by centos
[main.cpp:244] Version: 1.2.0
[main.cpp:247] Git tag: 1.2.0
[main.cpp:251] Git SHA: de306b5786de3c221bae1457c6f2ccaeb38eef9f
[logging.cpp:194] INFO level logging started!
[systemd.cpp:238] systemd version `219` detected
[main.cpp:342] Inializing systemd state
[systemd.cpp:326] Started systemd slice `mesos_executors.slice`
[containerizer.cpp:220] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
[linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
[provisioner.cpp:249] Using default backend 'copy'
[slave.cpp:211] Mesos agent started on (1)#172.22.150.87:5051
[slave.cpp:212] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="linux" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --runtime_dir="/var/run/mesos" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/var/lib/mesos"
[slave.cpp:541] Agent resources: cpus(*):1; mem(*):919; disk(*):2043; ports(*):[31000-32000]
[slave.cpp:549] Agent attributes: [ ]
[slave.cpp:554] Agent hostname: node3
[status_update_manager.cpp:177] Pausing sending status updates
[state.cpp:62] Recovering state from '/var/lib/mesos/meta'
[state.cpp:706] No committed checkpointed resources found at '/var/lib/mesos/meta/resources/resources.info'
[status_update_manager.cpp:203] Recovering status update manager
[docker.cpp:868] Recovering Docker containers
[containerizer.cpp:599] Recovering containerizer
[provisioner.cpp:410] Provisioner recovery complete
[group.cpp:340] Group process (zookeeper-group(1)#172.22.150.87:5051) connected to ZooKeeper
[group.cpp:830] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
[group.cpp:418] Trying to create path '/mesos' in ZooKeeper
[detector.cpp:152] Detected a new leader: (id='15')
[group.cpp:699] Trying to get '/mesos/json.info_0000000015' in ZooKeeper
[zookeeper.cpp:259] A new leading master (UPID=master#172.22.150.88:5050) is detected
Failed to perform recovery: Collect failed: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='Cannot connect to the Docker daemon. Is the docker daemon running on this host?'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/meta/slaves/latest
This ensures agent doesn't recover old live executors.
Step 2: Restart the agent.
Apr 26 16:09:13 centos-minion-3 systemd: mesos-slave.service: main process exited, code=exited, status=1/FAILURE
Apr 26 16:09:13 centos-minion-3 systemd: Unit mesos-slave.service entered failed state.
Apr 26 16:09:13 centos-minion-3 systemd: mesos-slave.service failed.
Logs from docker:
$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: inactive (dead) since Tue 2017-04-25 18:00:03 CDT; 24h ago
     Docs: docs.docker.com
 Main PID: 872 (code=exited, status=0/SUCCESS)
Apr 26 18:25:25 centos-minion-3 systemd[1]: Dependency failed for Docker Application Container Engine.
Apr 26 18:25:25 centos-minion-3 systemd[1]: Job docker.service/start failed with result 'dependency'
Logs from flannel:
[flanneld-start: network.go:102] failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
You have the answer in your logs:
Failed to perform recovery: Collect failed:
Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1;
stderr='Cannot connect to the Docker daemon. Is the docker daemon running on this host?'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/meta/slaves/latest
This ensures agent doesn't recover old live executors.
Step 2: Restart the agent.
Mesos keeps its state/metadata on local disk. When it is restarted, it tries to load this state. If the configuration has changed and is not compatible with the previous state, it won't start.
Just bring Docker back to life by fixing the problems with flannel and etcd, and everything will be fine.
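Put together, the recovery the logs suggest would look roughly like this (a sketch; the unit names etcd, flanneld, and mesos-slave are assumed from the setup described above):
# get the Docker dependency chain back up first
sudo systemctl restart etcd flanneld docker
# then clear the stale agent metadata, as the log message suggests
sudo rm -f /var/lib/mesos/meta/slaves/latest
sudo systemctl restart mesos-slave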
Add the following flag when starting the agent:
--reconfiguration_policy=additive
More details here: http://mesos.apache.org/documentation/latest/agent-recovery/
I am running Docker on CentOS 7 (the docker package from the CentOS repo, not docker-engine). Docker was running perfectly, but for some reason I tried to reinstall it. Unfortunately docker.service refuses to start and shows me the following errors:
Jan 24 15:19:28 fms-provisioner-4.novalocal systemd[1]: Job docker.service/start failed with result 'dependency'.
Jan 24 15:21:30 fms-provisioner-4.novalocal systemd[1]: Dependency failed for Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit docker.service has failed.
-- The result is dependency.
Jan 24 15:21:30 fms-provisioner-4.novalocal systemd[1]: Job docker.service/start failed with result 'dependency'.
Jan 24 15:28:49 fms_k8s_minion2 systemd[1]: [/usr/lib/systemd/system/docker.service:17] Unknown lvalue '--add-runtime docker-runc' in section 'Service'
Jan 24 15:43:09 fms_k8s_minion2 systemd[1]: Dependency failed for Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit docker.service has failed.
-- The result is dependency.
Can someone please tell me what's going on?
Try restarting the Docker daemon and service using
sudo systemctl daemon-reload
and
sudo systemctl restart docker
If this does not help, then remove Docker and try:
curl -sSL http://get.docker.com | sh
sudo systemctl restart docker
Looks like your Docker build is configured to use runc:
[/usr/lib/systemd/system/docker.service:17] Unknown lvalue '--add-runtime docker-runc' in section 'Service'
You could install runc, but that probably won't fix the issue:
sudo yum install runc
runC is a lightweight, portable implementation of the Open Container Format (OCF); you can find more about it in the documentation.
Anyway, the --add-runtime flag was added in Docker 1.12, so unless there is at least docker-engine 1.12.0 in your repository, remove the flag from /usr/lib/systemd/system/docker.service and reload the service:
sudo systemctl daemon-reload
sudo systemctl restart docker
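To locate the offending entry first (the journal points at line 17 of the unit file), something like this should work before running the reload and restart above:
# show the line the journal complained about
sudo sed -n '17p' /usr/lib/systemd/system/docker.service
# edit the unit and delete the --add-runtime docker-runc entry
sudo vi /usr/lib/systemd/system/docker.service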
Thanks all for your answers, but I forgot to mention that I am using flannel with Docker. In this case flannel was down, so Docker wouldn't start.
That's mainly what was causing my issue.
Sorry for the disturbance.
I'm following an installation tutorial for Unicorn and nginx (this tutorial) on Ubuntu 15.04. Everything works until the section "Create Unicorn Init Script", where I create the init script, change the required variables, change the permissions, and then run
sudo service unicorn_my_app start
I then get an error that reads:
Job for unicorn_my_app.service failed. See "systemctl status unicorn_my_app.service" and "journalctl -xe" for details
When I run
systemctl status unicorn_my_app.service
The output is:
● unicorn_my_app.service - LSB: starts the unicorn app server
Loaded: loaded (/etc/init.d/unicorn_my_app)
Active: failed (Result: exit-code) since Sun 2015-09-27 23:22:49 PDT; 1min 19s ago
Docs: man:systemd-sysv-generator(8)
Process: 26576 ExecStart=/etc/init.d/unicorn_my_app start (code=exited, status=127)
What is happening? Why won't Unicorn start?
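For what it's worth, exit status 127 from a shell generally means "command not found", so a first diagnostic step could be to run the init script directly with tracing and pull the unit's journal (a sketch; the script path is taken from the status output above):
sudo bash -x /etc/init.d/unicorn_my_app start    # trace the script to see which command is missing
sudo journalctl -u unicorn_my_app.service -n 50 --no-pager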