I am using an openSUSE Tumbleweed container in a GitLab CI pipeline which runs a script. In that script, I need to send an email at some point with certain content.
In the container, I install postfix and configure the relay server in /etc/postfix/main.cf.
The following command works on my laptop using that same relay server:
echo "This is the body of the email" | mail -s "This is the subject" -r sender#email.com receiver#email.com
but it doesn't work from the container, even with the same postfix configuration.
I've seen some tutorials that show how to use the postfix/SMTP configuration from the host, but since this is a container running in GitLab CI, that's not applicable.
So I finally opted for a Python solution called from Bash; this way I really don't need to configure postfix, SMTP, or anything else. You just export your variables in Bash (or use argparse) and run this script. Of course, you need a relay server without auth (normally on port 25).
import os
import smtplib
from email.mime.text import MIMEText

# relay settings and addresses come from the environment (export them in bash)
smtp_server = os.environ.get('RELAY_SERVER')
port = int(os.environ.get('RELAY_PORT', '25'))
sender_email = os.environ.get('SENDER_EMAIL')
receiver_email = os.environ.get('RECEIVER_EMAIL')

mimetext = MIMEText("this is the body of the email")
mimetext['Subject'] = "this is the subject of the email"
mimetext['From'] = sender_email
mimetext['To'] = receiver_email

# plain, unauthenticated SMTP against the relay
server = smtplib.SMTP(smtp_server, port)
server.ehlo()
# sendmail() wants a list of recipients, so a comma-separated RECEIVER_EMAIL also works
server.sendmail(sender_email, receiver_email.split(','), mimetext.as_string())
server.quit()
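In case it helps, this is roughly how I drive it from the CI job (a minimal sketch; send_mail.py is a hypothetical name for the script above, and the values are placeholders):
export RELAY_SERVER="relay.example.com"
export RELAY_PORT="25"
export SENDER_EMAIL="sender@example.com"
export RECEIVER_EMAIL="receiver@example.com"
python3 send_mail.py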
Migrating from one service to IBM Cloud for Redis.
I cannot find the correct configuration to connect using TLS. Everything I find on this is related to Heroku, and it ignores verifying the TLS/SSL connection.
I cannot find how to configure our Sidekiq/Redis to connect.
I do have a certificate from the IBM Cloud dashboard and I suspect I have to pass that along somehow.
I configured the sidekiq.yml like this:
:redis:
  :url: "rediss://:< PWD >@< DB Name >:< PORT >/0"
  :namespace: "app"
  :ssl_params:
    ca_file: 'path/to/cert'
I keep getting back the error Redis::CommandError - WRONGPASS invalid username-password pair or user is disabled. However, using these same credentials in the migration script I am able to connect to the DB, so the credentials are OK. I think it is not including the certificate correctly, and I cannot find the correct way to do this.
The sidekiq.yml configuration looks good to me; just make sure this has the correct, complete path:
ca_file: 'path/to/cert'
and change the Redis URL to
:url: "rediss://< PWD >@< DB Name >:< PORT >/0"
For further info on TLS-secured connections, you can read here.
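Putting the two together, a minimal sidekiq.yml sketch (placeholders as in the question; the ca_file path here is only illustrative):
:redis:
  :url: "rediss://< PWD >@< DB Name >:< PORT >/0"
  :namespace: "app"
  :ssl_params:
    ca_file: '/absolute/path/to/cert'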
I'm not familiar with sidekiq.yml, but I've configured redli with Redis using a Python script you can find here: https://github.com/IBM-Cloud/vpc-transit/blob/master/py/test_transit.py. Maybe the configuration is similar.
The relevant code is:
def vpe_redis_test(fip, resource):
    """execute a command on fip to verify redis is accessible"""
    redis = resource["key"]
    credentials = redis["credentials"]
    # base64-encoded CA certificate and redli CLI arguments from the service credentials
    cert_data = credentials["connection.rediss.certificate.certificate_base64"]
    cli_arguments = credentials["connection.cli.arguments.0.1"]
    command = f"""
#!/bin/bash
set -ex
if [ -x ./redli ]; then
  echo redli already installed
else
  curl -LO https://github.com/IBM-Cloud/redli/releases/download/v0.5.2/redli_0.5.2_linux_amd64.tar.gz
  tar zxvf redli_*_linux_amd64.tar.gz
fi
./redli \
  --long \
  -u {cli_arguments} \
  --certb64={cert_data} << TEST > redis.out
set foo working
TEST
"""
    # (the rest of the function, which runs this command on the fip host, is elided here)
I am trying to write a .NET Core 3.1 API in an Ubuntu Linux container that runs the equivalent of this curl command.
WORKING LINUX CONTAINER CURL COMMAND:
curl --cacert /etc/root/trustedroot.crt --cert /etc/mutualauth/tls.crt --key /etc/mutualauth/tls.key \
  --header "SOAPAction:actionName" --data @test.xml https://this.is.the/instance --verbose
Enter PEM pass phrase: *****
<Success...>
We use Windows development laptops so everything starts with Windows.
So far, I have the following HttpClientHandler that my HttpClient uses on a Windows development machine. This code works on Windows, with the cert in my local machine and current user personal stores, but does not work on Linux:
WORKING WINDOWS HTTPCLIENTHANDLER CODE:
// Look the client cert up by thumbprint in the current user's personal store
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
try
{
    var cert = store.Certificates.Find(X509FindType.FindByThumbprint, "<<cert thumbprint here>>", true);
    var handler = new HttpClientHandler
    {
        ClientCertificateOptions = ClientCertificateOption.Manual,
        SslProtocols = SslProtocols.Tls12,
        AllowAutoRedirect = false,
        AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip
    };
    // Present the first match as the TLS client certificate
    // (Find returns an empty collection when nothing matches, so cert[0] assumes a hit)
    handler.ClientCertificates.Add(cert[0]);
}
catch (Exception e)
{
    //Handle errors
}
finally
{
    store.Close();
}
The cert I imported was .PFX format, so as I understand it, the password went in at the time of import and the Windows code doesn't need to be concerned with it.
The curl command mentioned above works from the container. So by that logic, if coded or configured properly, the code should be able to do the same thing. As I see it, the curl command shown above contains four elements that I need to account for in my HttpClientHandler somehow:
The Trusted Root (CA) Certificate: /etc/root/trustedroot.crt
The TLS Certificate: /etc/mutualauth/tls.crt
The Private Key: /etc/mutualauth/tls.key
The PEM Passphrase
I have been reading into this for a couple of months now and have seen various articles and Stack Overflow posts, but there is a staggering number of variables and details involved with SSL, and I can't find anything that directly addresses this in a way that makes sense to me with my limited understanding.
I also have the option of running a Linux script at the time of deployment to add different/other formats of certs/keys to the stores/filesystem in the container. This is how I get the certs and keys into the container in the first place, so I have some control over what I can make happen here as well:
LINUX CONFIG SCRIPT:
cp /etc/root/trustedroot.crt /usr/share/ca-certificates
cp /etc/mutualauth/tls.crt /usr/share/ca-certificates
cp /etc/mutualauth/tls.key /etc/ssl/private
echo "trustedroot.crt" >> /etc/ca-certificates.conf
echo "tls.crt" >> /etc/ca-certificates.conf
update-ca-certificates
dotnet wsdltest.dll --environment=Production --server.urls http://*:80
I do not believe I can get the binary .PFX file into the container due to security policies and limitations, but I definitely can get its string-encoded cert and key formats into the container.
...so if there is a way of using different styles of certs that I can extract from the .PFX, or of specifying the password and cert when the server 'spins up' so that my code does not require a password, that would work too; I might just be missing something basic in the Linux config.
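For example, I believe something like this at deploy time would give me a passphrase-free copy of the key (an untested sketch; it assumes openssl is available in the container):
# untested: write an unencrypted copy of the private key
# (openssl prompts once for the PEM pass phrase; -passin can supply it non-interactively)
openssl rsa -in /etc/mutualauth/tls.key -out /etc/mutualauth/tls-nopass.key
chmod 600 /etc/mutualauth/tls-nopass.key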
Would anyone be so kind as to point me in the proper direction to find out how I can uplift my HttpClientHandler code OR Linux config to be able to make this API call? Any ideas are welcome at this point; this has been a thorn in my side for a long time now. Thank you so much!
This was not the right approach.
The correct approach was an NGINX reverse proxy terminating mutual-auth TLS so that .NET Core doesn't have to.
Save yourself some time and go NGINX! :D
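For anyone landing here, the shape of the idea is roughly this (an untested sketch using nginx's proxy_ssl_* directives; the paths and upstream are the ones from the question, and the key must be passphrase-free):
# the app calls http://localhost:8080 over plain HTTP; nginx presents the client cert upstream
server {
    listen 8080;
    location / {
        proxy_pass                    https://this.is.the/instance;
        proxy_ssl_certificate         /etc/mutualauth/tls.crt;
        proxy_ssl_certificate_key     /etc/mutualauth/tls.key;
        proxy_ssl_trusted_certificate /etc/root/trustedroot.crt;
        proxy_ssl_verify              on;
        proxy_ssl_server_name         on;
    }
}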
I am trying to write Jenkins post-initialisation scripts in Groovy that use the AWS CLI. My Jenkins lives behind a corporate proxy, and I configured it as myproxy port 3128 with a username and password, and a no_proxy of "10.*.*.*,ap-southeast-2.compute.internal,localhost,127.0.0.1,myothernoproxydomains.com".
The Groovy code I am trying is as follows:
def sg = "curl http://169.254.169.254/latest/meta-data/security-groups".execute().text
// triple-quoted GString so the command can span lines and ${sg} is interpolated
"""aws ec2 describe-security-groups \
    --region ap-southeast-2 \
    --filters Name=group-name,Values=${sg} \
    --query SecurityGroups[0].GroupId \
    --output text""".execute().text
If I comment out the second command and run it in the Jenkins Script Console, it runs fine and I can print the security group name. But if I allow the second command to run, I eventually get a message from my Chrome browser:
"This page isn't working: myjenkins.mydomain.com took too long to respond. HTTP ERROR 504."
The Jenkins has no trouble using the HTTP proxy in other contexts, e.g. downloading packages, plugins etc.
I note that environment variables relating to the HTTP proxy do not appear in System.getenv():
System.getenv()
Result: {PATH=/sbin:/usr/sbin:/bin:/usr/bin, SHELL=/bin/bash, LOGNAME=jenkins, PWD=/, USER=jenkins, LANG=en_US.UTF-8, SHLVL=2, HOME=/var/lib/jenkins, _=/etc/alternatives/java}
I have seen Groovy code that calls the AWS CLI work on other Jenkinses at other sites. I think it might be somehow proxy-related?
Am I doing anything wrong? Any ideas on what the issue could be?
I think the issue is that the call to 169.254.169.254 must not pass through the proxy; it isn't your describe-security-groups call that is timing out, it is the AWS CLI's own call to the metadata store. Add that address to your NO_PROXY value and hopefully that should resolve the issue.
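For example (assuming the proxy reaches the Jenkins process through the conventional environment variables):
export NO_PROXY="$NO_PROXY,169.254.169.254"
export no_proxy="$no_proxy,169.254.169.254"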
The other option is to turn off the proxy; they are evil :)
If you call it with error handling, it won't cause a 504 Gateway Timeout:
def sg = "curl http://169.254.169.254/latest/meta-data/security-groups".execute().text
def sout = new StringBuilder(), serr = new StringBuilder()
// note: this must be a double-quoted GString, otherwise ${sg} is not interpolated
def proc = "aws ec2 describe-security-groups --region ap-southeast-2 --filters Name=group-name,Values=${sg} --query SecurityGroups[0].GroupId --output text".execute()
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(30000)   // 1000 ms is rarely enough for the AWS CLI to respond
println "out> $sout\nerr> $serr"
This still doesn't work and returns empty. I can't get a simple aws ssm get-parameter to work either, so a proxy issue seems to be the culprit. I'll update if I get it working.
From a base image of nginx, I installed certbot and successfully got the certs, and the website works fine over SSL. So now I want to run the renew script as a cron job, but it doesn't seem to be working. I just wanted to check it was working with an echo Hello world:
* * * * * echo "Hello world!" >> /root/cron2.log 2>&1
But nothing shows up in /root. Also, there are no logs present in the usual dirs like /var/log/; there is no syslog file except under /usr/include, and no rsyslog.
What am I doing wrong with cron? I want to assess my dry-run renew script in a log so I know it's working.
I needed to make sure that the service is actually running, with:
service cron start
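Note that in a container nothing starts services automatically, so this has to happen on every container start, e.g. from the entrypoint. A minimal sketch for an nginx-based image (it assumes your crontab entry is already installed):
#!/bin/sh
# hypothetical entrypoint: start cron, then keep nginx in the foreground
service cron start
exec nginx -g 'daemon off;'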
I want to send an e-mail when Jenkins builds my project.
My work process is:
create an EC2 instance (Amazon Linux) and access it via ssh (ssh ec2-user@(EC2 ip))
install Docker and the Jenkins image > run the Jenkins container
install sendmail (I will use the EC2 localhost sendmail server, not smtp.gmail)
Jenkins Location > jenkins url: http://13.1xx.xx.xxx:8080/ > system admin email address: blablabal@mycompany.com
jenkins > Configure System > E-mail Notification > Test configuration by sending test e-mail > insert my e-mail myemail@gmail.com
Email was successfully sent
but when I check my email, it was regarded as spam.
I tried the sendmail command on the EC2 instance's command line
# In AWS EC2 instance
$ vi test.txt ( save some text )
$ sendmail myemail@gmail.com < test.txt
and it was regarded as spam.
I think every email is sent by sendmail, so I tried sendmail on my local MacBook
# MY LOCAL MAC BOOK
$ vi test.txt (save some text)
$ sendmail myemail@gmail.com < test.txt
but it was regarded as a normal e-mail, not spam.
Can somebody please help me? Thank you!