Docker jrcs/letsencrypt-nginx-proxy-companion doesn't generate a proper certificate

I'm following a tutorial to deploy WordPress using Docker on an Ubuntu server. The tutorial is on this website.
It's important to mention that I already have two subdomains at this point, one for the WordPress site and another for the phpMyAdmin site.
However, the Let's Encrypt certificates don't seem to be generated properly. I can access the website via HTTP, but not HTTPS, and when I look at the certificate it doesn't look correct. In fact, there doesn't seem to be one for my website at all.
To make everything easier I created a script to run all the steps fast:
#!/bin/bash
web_dir=/srv/www
myusername=root
domain_name=subdomain.domain.com
website_folder=/srv/www/$domain_name
nginx_proxy_repo=https://github.com/kassambara/nginx-multiple-https-websites-on-one-server
nginx_folder=/srv/www/nginx-multiple-https-websites-on-one-server/nginx-proxy
final_nginx_folder=/srv/www/nginx-proxy
echo ---INSTALL REQUIRED COMPONENTS----
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce docker-compose git
sudo systemctl status docker
echo ---CREATE AND GIVE PERMISSIONS TO WEBSITES DIR----
sudo mkdir -p $web_dir
# 2. set your user as the owner
sudo chown -R $myusername $web_dir
# 3. set the web server as the group owner
sudo chgrp -R www-data $web_dir
# 4. 755 permissions for everything
sudo chmod -R 755 $web_dir
# 5. New files and folders inherit
# group ownership from the parent folder
chmod g+s $web_dir
echo ---INSTALL NGINX PROXY----
git clone $nginx_proxy_repo $web_dir
rm -rf $web_dir/nginx-proxy/nginx.tmpl
curl -s https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl > $web_dir/nginx-proxy/nginx.tmpl
cd $web_dir
rm -rf your-website-one.com your-website-two.com README.Rmd .gitignore .Rbuildignore .git README.md
echo ---INSTALL WORDPRESS----
cd $web_dir
git clone https://github.com/kassambara/wordpress-docker-compose $domain_name
echo ---CONFIGURE DOCKER COMPOSE FOR ONLINEHOST----
cd $website_folder
mv docker-compose-onlinehost.yml docker-compose.yml
echo ---FINAL TOUCHES----
cd $website_folder
vi ./setup-onlinehost.sh
chmod +x setup-onlinehost.sh && ./setup-onlinehost.sh
vi .env
vi docker-compose.yml
cd $final_nginx_folder
docker network create nginx-proxy
docker-compose up -d
cd $final_nginx_folder
cd vhost.d
echo "client_max_body_size 64M;" > $domain_name
cd $website_folder
docker-compose up -d --build
docker-compose -f docker-compose.yml -f wp-auto-config.yml run --rm wp-auto-config
When the time comes, I set up setup-onlinehost.sh like this:
project_name="wordpress"
user_name="wordpress"
pass_word="wordpress"
email="mail@gmail.com"
website_title="My Blog"
website_url="https://subdomain.domain.com"
phmyadmin_url="sqlsubdomain.domain.com"
env_file=".env"
compose_file="docker-compose.yml"
Then I remove the redirectnonwww container from the docker-compose.yml file, since I don't want the non-www to www redirect behavior.
After everything is completed, I can access the websites over HTTP but not over HTTPS. When I try to access over HTTPS, I get a message that this connection is not private, and the certificate looks wrong at this point.
Also, if I tell my browser to continue to the website anyway, I get an Nginx 500 Internal Server Error.
If I look into the contents of the nginx-proxy folder, I see the following items:
certs (folder)
    default.crt
    default.key
    dhparam.pem
    subdomain.domain.com (empty folder)
    sqlsubdomain.domain.com (empty folder)
conf.d (folder)
docker-compose.yml
html
nginx.tmpl
vhost.d (folder)
    subdomain.domain.com (file)
The contents of vhost.d/subdomain.domain.com are:
## Start of configuration add by letsencrypt container
location ^~ /.well-known/acme-challenge/ {
    auth_basic off;
    auth_request off;
    allow all;
    root /usr/share/nginx/html;
    try_files $uri =404;
    break;
}
## End of configuration add by letsencrypt container
client_max_body_size 64M;
I'm not sure if I'm doing something wrong or if I should be doing something else that is not listed in the tutorial.

The issue turned out to be the number of times I had requested a certificate for those specific domains. I had run the deployment many times, both to figure out how to do it properly on the deployment server and to get the script right, so I had requested certificates for those two domains over and over and hit Let's Encrypt's rate limit.
The issue was resolved after I tried a different domain and subdomain.
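If you suspect rate limiting, the companion container's log usually says so explicitly. A minimal way to check (the container name nginx-letsencrypt is an assumption; substitute whatever your companion container is actually called):

# look for rate-limit or validation errors from the ACME client
docker logs nginx-letsencrypt 2>&1 | grep -iE 'rate|error|fail'

While iterating on a deployment script, the companion image also supports a LETSENCRYPT_TEST=true environment variable (check the image's README for your version), which requests certificates from Let's Encrypt's staging CA with much higher rate limits; remove it once the setup works so you get trusted production certificates.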

Related

How to create CakePHP project (previous version) using Composer (using an old version of PHP, in 2023)

TL;DR Technology moves on, and what was deprecated then is obsolete now. But fear not, Docker is here to solve the problem. See the answer below for how I created a CakePHP 3.6 project on macOS Ventura.
Here in 2023, I have my Mac updated to Ventura (13.0.1) with brew installed (Homebrew 3.6.18), and I have a requirement to create a CakePHP project, specifically version 3.6 (I am very well aware of CakePHP 3.x End of Support, but life happens). Now the scene is set; let's go and rock-and-roll!
Heading to installation documentation yields these;
Install PHP - per documentation PHP 7.4 is supported, yay (sort of)
Install Composer - there is at least one other alternative, the Oven way, but I haven't tried it
Create a CakePHP Project
OK, let me install PHP. Oh, I already have PHP installed via brew, PHP 8.2.1. Good, then let me check if I already have Composer installed as well, composer --version; yes, "Composer version 2.4.4 2022-10-27 14:39:29". So the last thing I need to do is to Create a CakePHP Project. Let me run composer create-project --prefer-dist cakephp/app:"^3.6" my_app_name;
Creating a "cakephp/app:^3.6" project at "./my_app_name"
Info from https://repo.packagist.org: #StandWithUkraine
Installing cakephp/app (3.10.1)
- Installing cakephp/app (3.10.1): Extracting archive
Created project in /Users/johndoe/Code/my_app_name
Loading composer repositories with package information
Updating dependencies
Your requirements could not be resolved to an installable set of packages.
Problem 1
- cakephp/cakephp[3.10.0, ..., 3.10.5] require php >=5.6.0,<8.0.0 -> your php version (8.2.1) does not satisfy that requirement.
- Root composer.json requires cakephp/cakephp 3.10.* -> satisfiable by cakephp/cakephp[3.10.0, ..., 3.10.5].
Oh, noes 😵. Well, let me downgrade my PHP version, or better yet let brew install PHP 7.4 side-by-side (actually it was not a "better yet"). Quick Googling yielded Update PHP to 7.4 macOS Catalina with brew on SO. Hmm, I'm on Ventura and this is for Catalina. But there is one comment;
This solution works perfectly in MacOS BigSur.
-juanitourquiza
I took juanitourquiza's word for it; besides, there was nothing to lose... except for those irritating "libicuio.71.dylib no such file" errors. It turned out that "Xcode 7.1 changed the name of some libraries now it uses .tdb files.". Bummer!
There I was scratching my head, and I thought to myself: "well, I'm already going to use Docker to serve the app anyway (locally), why not use Docker to create the project too?"
First and foremost you have to have Docker installed; how is another story.
Create a CakePHP Project - Take 2
Next this little command would suffice to create a CakePHP 3.6 project;
docker run -it --rm \
  --name php74_composer \
  -v "$PWD":/usr/src/myapp \
  -w /usr/src/myapp \
  php:7.4-cli sh -c "pwd; apt-get update && apt-get install -y unzip; \
    curl https://raw.githubusercontent.com/composer/getcomposer.org/76a7060ccb93902cd7576b67264ad91c8a2700e2/web/installer | php; \
    php composer.phar create-project --no-install --no-interaction --no-scripts --prefer-dist cakephp/app:\"3.6.5\" my_app_name; \
    cd my_app_name; \
    php ../composer.phar config --no-plugins allow-plugins.cakephp/plugin-installer true; \
    php ../composer.phar install --ignore-platform-reqs; \
    php ../composer.phar run --no-interaction post-install-cmd; \
    cd ../ && rm composer.phar; \
    exit"
Although it's highly opinionated about the versions (PHP 7.4 and CakePHP 3.6.5), it does the trick! When you run it, it'll create a directory called "my_app_name" in the current working directory. After the container exits, you may move this directory anywhere your heart desires.
Serve the App
As I mentioned, I am going to use Docker to serve the app as well (locally). There are a trillion tutorials out there showing how to do it; nonetheless, here's my solution;
# In the "my_app_name" directory
mkdir docker && cd docker
echo "FROM php:7.4-fpm
RUN apt-get update && apt-get install -y \\
libicu-dev \\
git \\
curl \\
zip \\
unzip
RUN docker-php-ext-configure intl
RUN docker-php-ext-install intl
RUN docker-php-ext-enable intl
RUN docker-php-ext-configure pdo_mysql
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-enable pdo_mysql
RUN curl https://raw.githubusercontent.com/composer/getcomposer.org/76a7060ccb93902cd7576b67264ad91c8a2700e2/web/installer | php && \\
mv composer.phar /usr/local/bin/composer
WORKDIR /var/www" > Dockerfile
echo "version: '3.8'
services:
app:
build:
context: ./
dockerfile: Dockerfile
container_name: johndoes-app
restart: always
working_dir: /var/www
volumes:
- ../:/var/www
nginx:
image: nginx:1.23-alpine
container_name: johndoes-nginx
restart: always
ports:
- 8000:80
volumes:
- ../:/var/www
- ./nginx:/etc/nginx/conf.d
" > docker-compose.yml
mkdir nginx && cd nginx
echo "server {
listen 80;
index index.php;
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
error_page 404 /index.php;
root /var/www/webroot;
location ~ \.php {
try_files \$uri =404;
fastcgi_pass app:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
}
location / {
try_files \$uri \$uri/ /index.php?\$query_string;
}
}
" > nginx.conf
Then, at "my_app_name/docker", run docker-compose up. After the services start, go to "http://localhost:8000".
Now you may proceed with configuring your app. Cheers.
Update for Xdebug
If you wish to debug your app via VSCode / VSCodium running on your host machine (Mac in my case);
Append the following to Dockerfile (in between existing lines denoted);
# existing RUN docker-php-ext-enable pdo_mysql
RUN pecl install xdebug-3.1.6 && docker-php-ext-enable xdebug
# existing RUN curl https://raw.githubusercontent.com/composer/getcomposer.org/...
Append the following to docker-compose.yml (in between existing lines denoted);
# for "app" service
# existing - ../:/var/www
- ./99-xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
# mind indentation since this is a YAML file, i.e.;
# dashes must be on the same level vertically
Create the INI file for Xdebug;
# at the docker directory
echo "zend_extension = xdebug
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_host = host.docker.internal" > 99-xdebug.ini
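After rebuilding the image and restarting the services, a quick sanity check that the extension actually loaded (assuming the compose service is named app, as above):

# -T avoids TTY allocation so the output can be piped; should print "xdebug"
docker-compose exec -T app php -m | grep -i xdebug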
Install the PHP Debug extension for VSCode or for VSCodium.
As the final step have a .vscode/launch.json file including these;
{
    "name": "Listen for Xdebug",
    "type": "php",
    "request": "launch",
    "port": 9003,
    "pathMappings": {
        "/var/www": "${workspaceFolder}"
    }
}
"pathMappings" is the crucial setting here, others were automatically generated.

Safely set up an Ubuntu VM with Terraform and cloud-init

For personal use (and fun) I'm trying to set up a VM on which I want to host my website (Nginx, Django, and Postgres running in Docker containers). I'm trying to learn how to set up the server using Terraform and cloud-init in a safe manner.
My current cloud-init code:
#cloud-config
groups:
  - docker
users:
  - default
  # the docker service account
  - name: test
    shell: /bin/bash
    home: /home/test
    groups: docker
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id: None
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-rsa my_public_ssh_key
package_update: true
package_upgrade: true
packages:
  - git
  - sudo
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
runcmd:
  # install docker following the guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - sudo apt-get -y update
  - sudo apt-get -y install docker-ce docker-ce-cli containerd.io
  - sudo systemctl enable docker
  # install docker-compose following the guide: https://docs.docker.com/compose/install/
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose
power_state:
  mode: reboot
  message: Restarting after installing docker & docker-compose
The VM is Ubuntu 20.04
Technically, I want the "test" user to be able to pull the latest code from my git repo and (re-)deploy the website (in /home/test/website) using docker-compose. Is it possible for the user to not have sudo permissions (I don't want it to have elevated privileges)? And secondly: how do I create a root account with a separate SSH key (and would this be a safe setup)?
The Terraform code that produces the VM.
resource "scaleway_instance_server" "app_server" {
  type        = var.instance_type
  image       = "ubuntu-focal"
  name        = var.instance_name
  enable_ipv6 = true
  tags        = [ "FocalFossa", "MyUbuntuInstance" ]
  root_volume {
    size_in_gb            = 20
    delete_on_termination = true
  }
  lifecycle {
    create_before_destroy = true
  }
  ip_id             = scaleway_instance_ip.public_ip.id
  security_group_id = scaleway_instance_security_group.www.id
  # cloud init: setup
  cloud_init = file("${path.module}/cloud-init.yml")
}
Help is much appreciated.
Is it possible for the user to not have sudo permissions (I don't want it to have elevated privileges)?
Anything run by cloud-init is run as root, including the bootcmd/runcmd commands. To run things as a different user, you can use sudo in your runcmd.
sudo -u test whoami >> /var/tmp/run_cmd
would write test to /var/tmp/run_cmd.
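Applied to your deployment goal, the same pattern lets the unprivileged test user pull the repo and deploy without any sudo rights of its own, since membership in the docker group is enough to talk to the Docker daemon. A sketch (the repository URL is a placeholder):

runcmd:
  # clone and deploy as the unprivileged "test" user
  - sudo -u test git clone https://github.com/youruser/website.git /home/test/website
  - sudo -u test docker-compose -f /home/test/website/docker-compose.yml up -d

Note that docker group membership is itself effectively root-equivalent on the host, so this limits accidents more than it limits a determined attacker who compromises the account.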
And secondly: how do I create a root account with a separate SSH key (and would this be a safe setup)?
Your users section would look something like this.
users:
  - default
  # the docker service account
  - name: test
    shell: /bin/bash
    home: /home/test
    groups: docker
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-rsa my-public-key
  - name: root
    ssh-authorized-keys:
      - ssh-rsa root-public-key
disable_root: false
Is it safe? I think that's debatable, but there's a reason root login is disabled by default. It should be possible to SSH in as the default user and then sudo su for your root access needs.
Also, just FYI, the ssh_import_id: None in your config was raising an exception in the cloud-init log because it was trying to import an ssh id for user None.

Docker doesn't seem to be mapping ports

I'm working with Hugo.
I'm trying to run it inside a Docker container to allow people to easily manage content.
My first task is to get Hugo running and people able to view the site locally.
Here's my Dockerfile:
FROM alpine:3.3
RUN apk update && apk upgrade && \
    apk add --no-cache go bash git openssh && \
    mkdir -p /aws && \
    apk -Uuv add groff less python py-pip && \
    pip install awscli && \
    apk --purge -v del py-pip && \
    rm /var/cache/apk/* && \
    mkdir -p /go/src /go/bin && chmod -R 777 /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
RUN go get -v github.com/spf13/hugo
RUN git clone http://mygitrepo.com /app
WORKDIR /app
EXPOSE 1313
ENTRYPOINT ["hugo","server"]
I'm checking out the site repo and then running Hugo with hugo server.
I'm then running this container via:
docker run -d -p 1313:1313 --name app app
It reports everything is starting OK; however, when I try to browse locally at localhost:1313 I see nothing.
Any ideas where I'm going wrong?
UPDATE
docker ps gives me:
CONTAINER ID   IMAGE   COMMAND         CREATED          STATUS          PORTS                    NAMES
9e1f12849044   app     "hugo server"   16 minutes ago   Up 16 minutes   0.0.0.0:1313->1313/tcp   app
And docker logs 9e1 gives me:
Started building sites ...
Built site for language en:
0 draft content
0 future content
0 expired content
25 pages created
0 non-page files copied
0 paginator pages created
0 tags created
0 categories created
total in 64 ms
Watching for changes in /ltec/{data,content,layouts,static,themes}
Serving pages from memory
Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
Press Ctrl+C to stop
I had the same problem, but this tutorial http://ahmedalani.com/post/so-recursive-it-hurts/ says to use the --bind parameter of the hugo server command.
Adding the mentioned parameter with the IP 0.0.0.0, we get --bind=0.0.0.0.
It works for me. I think this is natural behavior for every container: localhost is scoped to the container itself, but if you bind to 0.0.0.0 the server becomes visible to the main host.
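In the Dockerfile from the question, that is a one-line change to the ENTRYPOINT (a sketch; the rest of the file stays as-is):

ENTRYPOINT ["hugo", "server", "--bind=0.0.0.0"]

The docker run -d -p 1313:1313 --name app app command can stay the same; the port mapping was already correct, and it was Hugo's listen address (bind address 127.0.0.1 in the log above) that made the server unreachable from outside the container.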
This is because Docker is actually running in a VM (this applies when you're using docker-machine, e.g. with Docker Toolbox). You need to navigate to the docker-machine IP instead of localhost.
curl $(docker-machine ip):1313
Delete EXPOSE 1313 in your Dockerfile. Dockerfile reference.

How do you run an OpenShift Docker container as something besides root?

I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get the error message below. I believe the problem is that I am trying to run commands inside of the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
The Dockerfile that I am deploying looks like this -
FROM centos:7
MAINTAINER me<me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into a similar problem multiple times, where it will say things like Permission Denied on file /supervisord.log or something similar.
How can I set it up so that my container doesn't run all of the commands as root? It seems to be causing all of the problems that I am having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at this: OpenShift Application Platform.
In particular at point 4 of the FAQ section, quoted here.
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
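For the httpd image in the question, the same pattern applied to the directories httpd needs to write to might look like this in the Dockerfile (a sketch; it assumes httpd is already installed at that point in the build, and the exact set of directories depends on your configuration):

# let the root group (gid 0) write where httpd keeps its pid file and logs
RUN chgrp -R 0 /run/httpd /var/log/httpd && \
    chmod -R g=u /run/httpd /var/log/httpd

That addresses the "Error retrieving pid file /run/httpd/httpd.pid" error when the container runs as a random non-root UID that is a member of the root group.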
You can run the container as any user, including root (rather than OpenShift's default built-in account UID, e.g. 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project your deployment runs in.

implement yum functions from kickstart (ks.cfg) file for rh/centos install

I've got the following kickstart file (ks.cfg) for a raw CentOS installation. I'm trying to implement a "%post" process that will allow the installation to be modified, using yum functions (install, groupremove, etc). The whole ks file is at the end of the issue.
I'm not sure why, but the following kickstart is not running yum install mysql and yum install mysql-server in the post process.
After the install, entering "service mysql start" results in an error message saying mysql is not found. I am, however, able to run the yum install commands after installation, and mysql gets installed.
I know I'm missing something subtle, but not sure what it is.
%post
yum install mysql -y <<<<<<<<<<<<<<NOT WORKING!!!!!
yum install mysql-server -y <<<<<<<<<<<<<<NOT WORKING!!!!!
%end
Thanks
ks.cfg
[root@localhost ~]# cat /root/anaconda-ks.cfg
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp
rootpw --iscrypted $1$JCZKA/by$sVSHffsPr3ZDUp6m7c5gt1
# Reboot after installation
reboot
firewall --service=ssh
authconfig --useshadow --enablemd5
selinux --enforcing
timezone --utc America/Los_Angeles
bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --all --initlabel
#part /boot --fstype=ext4 --size=200
#part / --fstype=ext4 --grow --size=3000
#part swap --grow --maxsize=4064 --size=2032
repo --name="CentOS" --baseurl=cdrom:sr1 --cost=100
%packages
@Base
@Core
@Desktop
@Fonts
@General Purpose Desktop
@Internet Browser
@X Window System
binutils
gcc
kernel-devel
make
patch
python
%end
%post
cp /boot/grub/menu.lst /boot/grub/grub.conf.bak
sed -i 's/ rhgb//' /boot/grub/grub.conf
cp /etc/rc.d/rc.local /etc/rc.local.backup
cat >>/etc/rc.d/rc.local <<EOF
echo
echo "Installing VMware Tools, please wait..."
if [ -x /usr/sbin/getenforce ]; then oldenforce=\$(/usr/sbin/getenforce); /usr/sbin/setenforce permissive || true; fi
mkdir -p /tmp/vmware-toolsmnt0
for i in hda sr0 scd0; do mount -t iso9660 /dev/\$i /tmp/vmware-toolsmnt0 && break; done
cp -a /tmp/vmware-toolsmnt0 /opt/vmware-tools-installer
chmod 755 /opt/vmware-tools-installer
cd /opt/vmware-tools-installer
mv upgra32 vmware-tools-upgrader-32
mv upgra64 vmware-tools-upgrader-64
mv upgrade.sh run_upgrader.sh
chmod +x /opt/vmware-tools-installer/*upgr*
umount /tmp/vmware-toolsmnt0
rmdir /tmp/vmware-toolsmnt0
if [ -x /usr/bin/rhgb-client ]; then /usr/bin/rhgb-client --quit; fi
cd /opt/vmware-tools-installer
./run_upgrader.sh
mv /etc/rc.local.backup /etc/rc.d/rc.local
rm -rf /opt/vmware-tools-installer
sed -i 's/3:initdefault/5:initdefault/' /etc/inittab
mv /boot/grub/grub.conf.bak /boot/grub/grub.conf
if [ -x /usr/sbin/getenforce ]; then /usr/sbin/setenforce \$oldenforce || true; fi
if [ -x /bin/systemd ]; then systemctl restart prefdm.service; else telinit 5; fi
EOF
/usr/sbin/adduser test
/usr/sbin/usermod -p '$1$QcRcMih7$VG3upQam.lF4BFzVtaYU5.' test
/usr/sbin/adduser test1
/usr/sbin/usermod -p '$1$LMyHixbC$4.aATdKUb2eH8cCXtgFNM0' test1
/usr/bin/chfn -f 'ruser' root
%end
%post
yum install mysql -y <<<<<<<<<<<<<<NOT WORKING!!!!!
yum install mysql-server -y <<<<<<<<<<<<<<NOT WORKING!!!!!
%end
This was caused by line endings when I faced the same problem as you. Check the line endings of ks.cfg: they should be LF, not CR+LF or CR.
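A quick way to check and fix the endings with standard tools (a sketch):

file ks.cfg                # reports "... with CRLF line terminators" if the endings are wrong
sed -i 's/\r$//' ks.cfg    # strip the carriage returns in place

dos2unix ks.cfg does the same, if it's installed.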
It will also help if you;
Try the system-config-kickstart tool.
Look at the generated /root/anaconda-ks.cfg, though there may be no %post section.
Cheers.
You should just put mysql and mysql-server into the %packages section, no need to do this in %post.
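Applied to the kickstart in the question, that means dropping the second %post block and extending the %packages section (a sketch based on the file shown above):

%packages
binutils
gcc
kernel-devel
make
patch
python
mysql
mysql-server
%end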
