I am trying to install a Sensu client without the server.
According to the documentation:
"The Sensu Core package installs several processes including sensu-server, sensu-api, and sensu-client."
However, after adding the repository, I was only able to locate the aggregate sensu package; I could not find or install a standalone sensu-client package.
I noticed a ticket on GitHub stating it was not possible, but that was two years ago, so maybe things have changed?
Is it possible to install the Sensu client without having to install Redis, RabbitMQ, and the Sensu server?
The sensu package installs the sensu-server, sensu-api, and sensu-client services. You can, however, set up a functional sensu-client without installing Redis or RabbitMQ and without configuring sensu-server.
sudo yum install sensu
vi /etc/sensu/config.json
vi /etc/sensu/conf.d/client.json
sudo /etc/init.d/sensu-client start
sudo /sbin/chkconfig sensu-client on
/etc/sensu/config.json should at least contain the RabbitMQ location of the server, while /etc/sensu/conf.d/client.json of course needs the IP address and name of the client.
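For illustration, here is a minimal sketch of those two files (the RabbitMQ host, credentials, client name, and IP below are placeholders you must replace with your own values):

/etc/sensu/config.json:

```json
{
  "rabbitmq": {
    "host": "10.0.0.5",
    "port": 5672,
    "user": "sensu",
    "password": "secret",
    "vhost": "/sensu"
  }
}
```

/etc/sensu/conf.d/client.json:

```json
{
  "client": {
    "name": "my-client",
    "address": "10.0.0.23",
    "subscriptions": ["default"]
  }
}
```

With just these two files in place, sensu-client can connect to the server's RabbitMQ and start receiving check requests.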
If someone faces an error with the installation provided by @Enrique (sudo yum install sensu), such as:
https://sensu.global.ssl.fastly.net/yum/latest/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
(the above was not working on AWS EC2 Linux), they can try the following to add the Sensu repo.
vim /etc/yum.repos.d/sensu.repo
Add this
[sensu]
name=sensu
baseurl=http://sensu.global.ssl.fastly.net/yum/$basearch/
gpgcheck=0
enabled=1
then install sensu
sudo yum install sensu -y
The remaining steps are the same as answered by @Enrique Arriaga.
It's not possible with Sensu Core. But you can do it with Sensu's newer product, Sensu Go, which has a sensu-backend (replacing sensu-server) and a sensu-agent (replacing sensu-client), and which lets you install only the agent on your infrastructure.
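If you go the Sensu Go route, the agent only needs to know where the backend lives. A minimal /etc/sensu/agent.yml sketch (the backend hostname, agent name, and subscription below are placeholders, not values from the original answer):

```yaml
---
# Hypothetical backend address; point this at your sensu-backend host
backend-url:
  - "ws://sensu-backend.example.com:8081"
name: "webserver-01"
subscriptions:
  - "system"
```

The agent connects out to the backend over a websocket, so nothing besides the agent package and this file is required on the monitored host.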
Related
I have created a Red Hat EC2 instance in AWS.
I am trying to install Jenkins as a Docker image inside that Red Hat EC2 instance.
I am following the URL below to install Docker on AWS:
https://docs.docker.com/engine/install/centos/
But I am facing an issue after adding that repository; I guess yum is not able to reach the repository:
Failed to set locale, defaulting to C
Loaded plugins: amazon-id, search-disabled-repos
https://download.docker.com/linux/centos/7Server/x86_64/stable/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article
https://access.redhat.com/articles/1320623
If above article doesn't help to resolve this issue please open a ticket with Red Hat Support.
One of the configured repositories failed (Docker CE Stable - x86_64),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=docker-ce-stable ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable docker-ce-stable
or
subscription-manager repos --disable=docker-ce-stable
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true
failure: repodata/repomd.xml from docker-ce-stable: [Errno 256] No more mirrors to try.
https://download.docker.com/linux/centos/7Server/x86_64/stable/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
I tried running the following command after that error (just trial and error):
yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true
But then yum is not able to find the packages:
No package docker-ce available.
No package docker-ce-cli available.
No package containerd.io available.
Error: Nothing to do
Can someone point me to documentation or a blog post on installing Docker on the Red Hat platform?
I am using RHEL 7.9.
Thanks in advance.
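The 404 gives a hint: on RHEL, yum expands $releasever to 7Server, but Docker's CentOS repository only publishes a plain 7 directory, so the generated URL does not exist. A common workaround (not an officially documented Docker fix, so treat it as an assumption) is to pin the release version in the repo file after adding it. The sketch below demonstrates the substitution against a scratch copy of the repo file; on a real host you would run the sed line against /etc/yum.repos.d/docker-ce.repo and then retry yum install docker-ce docker-ce-cli containerd.io:

```shell
# Scratch copy of the repo file so the substitution itself can be verified;
# on a real host this file is /etc/yum.repos.d/docker-ce.repo, created by
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
repo=/tmp/docker-ce.repo
cat > "$repo" <<'EOF'
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
EOF

# Pin $releasever to the literal major version 7, so yum stops
# requesting the non-existent ".../centos/7Server/..." path
sed -i 's/\$releasever/7/g' "$repo"
grep baseurl "$repo"
```

After the edit, the baseurl resolves to .../linux/centos/7/x86_64/stable, which is a real directory on download.docker.com.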
I'm new to Docker and I'm not very experienced in SSL/certificates etc.
I'm working on a web application that lets the user log in to JIRA via JIRA API to do things. This works on my computer and I don't get any SSL errors. However, when I run it in a Docker container I get
ssl.SSLError: [SSL: WRONG_SIGNATURE_TYPE] wrong signature type error.
My friend ran the exact same Dockerfile on his computer, created a container, and it works, which is confusing.
I checked the requests library version on my computer and compared it to the one in Docker container but they are the same. What could be the problem? Thank you
Note: I use Windows
I faced a similar problem.
Find the solution here.
Just to sum up:
There are dependencies missing inside the container that your own system already has. You should install them inside Docker.
The pyOpenSSL library should also be installed.
So, you need to add:
RUN apt-get update \
    && apt-get install -y openssl \
    && apt-get install -y ca-certificates
In your Dockerfile
And add:
pyopenssl==19.1.0
To your requirements.txt
If you don't use requirements.txt just add:
RUN pip install pyopenssl
To your Dockerfile
Hope it'll help
A similar issue happened to me - Python TLS requests were working fine on my host, but failed with WRONG_SIGNATURE_TYPE once I dockerized my script.
The issue seems to stem from the requests lib using older TLS stacks with OpenSSL 1.1.1pre9.
I noticed that rolling back to python:3.5.3 stopped the error; however, for compatibility with newer versions, the solution posted in the GitHub issue thread worked fine:
i.e. import the TLSAdapter, use it to set up a requests session, then start making requests.
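For reference, a sketch of that adapter approach (adapted from the requests GitHub issue thread; the cipher-string tweak lowers OpenSSL's security level, so treat it as a workaround rather than a recommended default):

```python
import ssl

import requests
import urllib3


class TLSAdapter(requests.adapters.HTTPAdapter):
    """Transport adapter that relaxes OpenSSL's security level so servers
    that trip WRONG_SIGNATURE_TYPE under OpenSSL 1.1.1 remain reachable."""

    def init_poolmanager(self, connections, maxsize, block=False, **pool_kwargs):
        ctx = ssl.create_default_context()
        # SECLEVEL=1 re-enables the older signature algorithms that
        # trigger the error at OpenSSL's default security level 2
        ctx.set_ciphers("DEFAULT@SECLEVEL=1")
        self.poolmanager = urllib3.poolmanager.PoolManager(
            num_pools=connections, maxsize=maxsize, block=block,
            ssl_context=ctx, **pool_kwargs)


session = requests.Session()
session.mount("https://", TLSAdapter())
# Then use the session as usual, e.g. (hypothetical JIRA endpoint):
# session.get("https://jira.example.com/rest/api/2/myself", auth=("user", "token"))
```

Mounting the adapter on the "https://" prefix makes every HTTPS request made through that session use the relaxed SSL context.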
Can I install Docker over a server with pre-installed cPanel and CentOS 7? Since I am not aware of Docker, I am not completely sure whether it will mess with cPanel or not. I already have a server with CentOS 7 and cPanel configured. I want to know if I can install Docker over this configuration I mentioned without messing up?
Yes you can install docker over cPanel/WHM just like installing it on any other CentOS server/virtual machine.
Just follow these simple steps (as root):
1) yum install -y yum-utils device-mapper-persistent-data lvm2 (these should be already installed...)
2) yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3) yum install docker-ce
4) enable docker at boot (systemctl enable docker)
5) start docker service (systemctl start docker)
The guide above is for CentOS 7.x. Don't expect to find any references or options related to Docker in the WHM interface. You will be able to control docker via command line from a SSH shell.
I have some docker containers already running on my cPanel/WHM server and I have no issues with them. I basically use them for caching, proxying and other similar stuff.
And as long as you follow these instructions, you won't mess up any of your cPanel/WHM services/settings or current cPanel accounts/settings/sites/emails etc.
Not sure why you haven't tried this already!
I've been doing research and working on getting Docker working on cPanel. It's not just getting it to work on a CentOS 7 box but rather making it palatable for the cPanel crowd in the form of a plugin. So far I can confirm that it's absolutely doable. Here's what I've accomplished and how:
- Integrate Docker Compose with cPanel (which is somewhat a step further from WHM)
- Leverage the user-namespace kernel feature in Linux so Docker services can't escalate their privileges (see userns remap)
- Leverage Docker Compose so users can build complex services and start ready apps from the store with a click
- Make sure services started via Docker run on a non-public IP on the server; everything gets routed via ProxyPass
cPanel has been gracious to provide a Slack channel for people to discuss this upcoming plugin. I'd be more than happy to invite you if you'd like to be kept updated or to contribute. Let me know!
FYI, there's more info at https://www.unixy.net/docker if you're interested. Please note that this plugin is in private beta, but I'm more than happy to let people use it!
Yes you could, in fact someone else has done it already: https://github.com/mirhosting/cPanel-docker
I have a Ruby on Rails app running on a Google Cloud Platform VM in the App Engine flexible environment. It looks like it installs most of the software on the VM when I deploy the app with gcloud --project project-name preview app deploy. I think it installs Rails and other software from the temporary Dockerfile it creates; it grabs the info for the Dockerfile from the app.yaml file (I got this setup from following their tutorials).
This was working fine for me, but now I need to install ImageMagick onto the server to manipulate images on the site. Normally you would do this by running sudo apt-get install imagemagick from the project directory. When I SSH into the VM I can't find the project directory, so that doesn't work.
I have no idea how to get it to run sudo apt-get install imagemagick on each new deploy so that the software is present on the new VM.
As you might be able to tell, I'm not very good with the server side of things. I want to know how to get new software onto the VM the right way, so it's always there after each deploy, just like Ruby and Rails are.
You should use a custom Dockerfile when you need additional configuration.
To install ImageMagick you have to set runtime: custom in your app.yaml, create a Dockerfile based on the default one, and add the following line:
RUN apt-get update && \
apt-get install imagemagick -y
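Put together, the custom Dockerfile ends up looking something like the sketch below. The FROM line here is only a placeholder; keep whatever base image the generated Dockerfile in your project already uses:

```dockerfile
# Placeholder base image: reuse the FROM line from the Dockerfile that
# gcloud generated for your app
FROM gcr.io/google-appengine/ruby

# Bake ImageMagick into the image so every newly deployed VM has it
RUN apt-get update && \
    apt-get install -y imagemagick && \
    rm -rf /var/lib/apt/lists/*
```

Because the package is installed at image build time, you never need to SSH into a VM and run apt-get by hand; every deploy produces an image that already contains ImageMagick.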
I have downloaded the Docker binary, version 1.8.2, and copied it to my backup server (a CentOS server) which doesn't have internet connectivity. I marked it as executable and started the Docker daemon as described in https://docs.docker.com/engine/installation/binaries/. But it doesn't get installed as a Docker service; for all commands, I have to run sudo ./docker-1.8.2 {command}. Is there a way to install docker-engine as a service? Currently sudo docker version shows "command not found". I'm a newbie to Docker setup. Please advise.
Why not download the rpm package (there are also CentOS 6 packages), copy it to a USB stick and then to your server, and simply install it with the rpm command? That way you'd get the same installation as if you had run yum.
Of course, you may have some dependencies missing, but you could download all of those as well.
Firstly, if you're downloading bare binaries on an enterprise Linux, you're probably doing things in a very bad way: you immediately break updates and consistency, and leave your system in a risky, messy state.
Try using yumdownloader --resolve to get the Docker installable and anything it needs.
A better option may be to mirror the installation artifacts and grab them from the local mirror, but that's beyond the scope of this answer if you don't do this already.