I'm using the Backup gem to do a DB backup. I deployed the app, with the gem installed, to DigitalOcean, and the next step is to run the generator using:
dokku run oktob bundle exec backup generate:model --trigger oktob_db_backup --databases="postgresql" --storages="dropbox" --encryptors="openssl" --compressors="gzip" --notifiers="mail"
This should create the configuration files to set up the backup, but it returns nothing.
When I run the generator on my local machine (without the dokku run oktob prefix, since it's local), the two files are generated as normal:
Generated model file: '/Users/ahmadajmi/Backup/models/oktob_db_backup.rb'.
Generated configuration file: '/Users/ahmadajmi/Backup/config.rb'.
To solve the problem, the Backup gem should be installed on the Linux host machine, outside the app container.
The Backup generator command is used to generate the backup config file:
backup generate:model --trigger oktob_database_backup --databases="postgresql" --storages="s3" --compressor="gzip" --notifiers="mail"
This creates the backup configuration file /root/Backup/models/oktob_database_backup.rb on the host machine, unrelated to Dokku or the app container. The file contains the required configuration for the database connection, the S3 storage, and the email account used to send notifications.
The /root/Backup/models/oktob_database_backup.rb file contains:
# encoding: utf-8
##
# Backup Generated: oktob_database_backup
# Once configured, you can run the backup with the following command:
#
# $ backup perform -t oktob_database_backup [-c <path_to_configuration_file>]
#
# For more information about Backup's components, see the documentation at:
# http://backup.github.io/backup
#
Model.new(:oktob_database_backup, 'Oktob Production Database Backup') do
  ##
  # PostgreSQL [Database]
  #
  database PostgreSQL do |db|
    db.name     = ENV['DATABASE_NAME']
    db.username = ENV['DATABASE_USERNAME']
    db.password = ENV['DATABASE_PASSWORD']
    db.host     = ENV['DATABASE_HOST']
    db.port     = ENV['DATABASE_PORT']
  end

  ##
  # Amazon Simple Storage Service [Storage]
  #
  store_with S3 do |s3|
    # AWS Credentials
    s3.access_key_id     = ENV['AWS_ACCESS_KEY_ID']
    s3.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']

    s3.region = "us-west-2"
    s3.bucket = ENV['AWS_DATABASE_BACKUP_BUCKET_NAME']
    s3.path   = "/"
  end

  ##
  # Gzip [Compressor]
  #
  compress_with Gzip

  ##
  # Mail [Notifier]
  #
  # The default delivery method for Mail Notifiers is 'SMTP'.
  # See the documentation for other delivery options.
  #
  notify_by Mail do |mail|
    mail.on_success = true
    mail.on_warning = true
    mail.on_failure = true

    mail.from           = ENV['EMAIL_ADDRESS']
    mail.to             = ENV['EMAIL_ADDRESS']
    mail.address        = "smtp.gmail.com"
    mail.port           = 587
    mail.domain         = "oktob.io"
    mail.user_name      = ENV['SMTP_USERNAME']
    mail.password       = ENV['SMTP_PASSWORD']
    mail.authentication = "plain"
    mail.encryption     = :starttls
  end
end
The configuration values are stored in the /etc/environment file as:
## Database
export DATABASE_NAME=''
export DATABASE_USERNAME=''
export DATABASE_PASSWORD=''
export DATABASE_HOST=''
export DATABASE_PORT=''
## Amazon S3
export AWS_ACCESS_KEY_ID=''
export AWS_SECRET_ACCESS_KEY=''
export AWS_DATABASE_BACKUP_BUCKET_NAME=''
## Email
export EMAIL_ADDRESS=''
export SMTP_USERNAME=''
export SMTP_PASSWORD=''
To perform a backup manually, we can type backup perform -t oktob_database_backup, and this command will do three things:
Connect remotely to the Postgres Dokku container and make a database dump.
Store the backup file in the oktob-database-backup S3 bucket.
Send an email notification to EMAIL_ADDRESS.
The backup runs every hour using a cron job, added via crontab -e as:
0 * * * * /bin/bash -l -c '/usr/local/rvm/gems/ruby-2.0.0-p647/bin/backup perform -t oktob_database_backup'
We can get the /usr/local/rvm/gems/ruby-2.0.0-p647/bin/backup path by typing which backup on the command line.
Good articles to check:
Hourly Production Server Database And File Backups
Backup PostgreSQL from a Rails project to Amazon S3
This isn't a full answer, but it's too long for a comment.
You will generally run into the problem that newly created files are not persisted, because dokku run fires up a new container (which then immediately reaches its end of life).
You can use the dokku-volume plugin to designate a directory that "lives" outside of your app containers and keeps the files as-is.
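The exact commands depend on the plugin and Dokku version; as a sketch, newer Dokku versions bundle a storage plugin that mounts a host directory into the app container (the paths below are assumptions, adapt them to your app):
# Sketch only: persist generated files across containers by mounting a host directory
# (verify the command against your Dokku/plugin version)
dokku storage:mount oktob /var/lib/dokku/data/storage/oktob:/app/Backup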
I'm trying to get my Brother DCP-145C to work with a Raspberry Pi and CUPS/Samba. After setting up CUPS/Samba, I exposed the printer as raw. When I try to add the printer from a Windows client, I receive an access-denied message. Here is the log from Samba:
[2021/11/18 10:41:18.869082, 0] ../auth/gensec/gensec.c:257(gensec_verify_dcerpc_auth_level)
Did not manage to negotiate mandatory feature SIGN for dcerpc auth_level 6
Do I have to change any Windows security policies?
Here is my smb.conf:
#
# Sample configuration file for the Samba suite for Debian GNU/Linux.
#
#
# This is the main Samba configuration file. You should read the
# smb.conf(5) manual page in order to understand the options listed
# here. Samba has a huge number of configurable options most of which
# are not shown in this example
#
# Some options that are often worth tuning have been included as
# commented-out examples in this file.
# - When such options are commented with ";", the proposed setting
# differs from the default Samba behaviour
# - When commented with "#", the proposed setting is the default
# behaviour of Samba but the option is considered important
# enough to be mentioned here
#
# NOTE: Whenever you modify this file you should run the command
# "testparm" to check that you have not made any basic syntactic
# errors.
#======================= Global Settings =======================
[global]
log file = /var/log/samba/%m.log
printing = CUPS
printcap = CUPS
hosts allow = 192.168.178.
# lanman auth = no
#ntlm auth = yes
# client lanman auth = no
allow dcerpc auth level connect = yes
# load printers = no
## Browsing/Identification ###
# Change this to the workgroup/NT-domain name your Samba server will part of
workgroup = WORKGROUP
# Windows Internet Name Serving Support Section:
# WINS Support - Tells the NMBD component of Samba to enable its WINS Server
# wins support = no
# WINS Server - Tells the NMBD components of Samba to be a WINS Client
# Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
; wins server = w.x.y.z
# This will prevent nmbd to search for NetBIOS names through DNS.
dns proxy = no
#### Networking ####
# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
; interfaces = 127.0.0.0/8 eth0
# Only bind to the named interfaces and/or networks; you must use the
# 'interfaces' option above to use this.
# It is recommended that you enable this feature if your Samba machine is
# not protected by a firewall or is a firewall itself. However, this
# option cannot handle dynamic or non-broadcast interfaces correctly.
; bind interfaces only = yes
#### Debugging/Accounting ####
# This tells Samba to use a separate log file for each machine
# that connects
log file = /var/log/samba/log.%m
# Cap the size of the individual log files (in KiB).
max log size = 1000
# If you want Samba to only log through syslog then set the following
# parameter to 'yes'.
# syslog only = no
# We want Samba to log a minimum amount of information to syslog. Everything
# should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log
# through syslog you should set the following parameter to something higher.
syslog = 0
# Do something sensible when Samba crashes: mail the admin a backtrace
panic action = /usr/share/samba/panic-action %d
####### Authentication #######
# Server role. Defines in which mode Samba will operate. Possible
# values are "standalone server", "member server", "classic primary
# domain controller", "classic backup domain controller", "active
# directory domain controller".
#
# Most people will want "standalone server" or "member server".
# Running as "active directory domain controller" will require first
# running "samba-tool domain provision" to wipe databases and create a
# new domain.
server role = standalone server
# If you are using encrypted passwords, Samba will need to know what
# password database type you are using.
passdb backend = tdbsam
obey pam restrictions = yes
# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
unix password sync = yes
# For Unix password sync to work on a Debian GNU/Linux system, the following
# parameters must be set (thanks to Ian Kahan <<kahan@informatik.tu-muenchen.de> for
# sending the correct chat script for the passwd program in Debian Sarge).
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
# This boolean controls whether PAM will be used for password changes
# when requested by an SMB client instead of the program listed in
# 'passwd program'. The default is 'no'.
pam password change = yes
# This option controls how unsuccessful authentication attempts are mapped
# to anonymous connections
map to guest = bad user
########## Domains ###########
#
# The following settings only takes effect if 'server role = primary
# classic domain controller', 'server role = backup domain controller'
# or 'domain logons' is set
#
# It specifies the location of the user's
# profile directory from the client point of view) The following
# required a [profiles] share to be setup on the samba server (see
# below)
; logon path = \\%N\profiles\%U
# Another common choice is storing the profile in the user's home directory
# (this is Samba's default)
# logon path = \\%N\%U\profile
# The following setting only takes effect if 'domain logons' is set
# It specifies the location of a user's home directory (from the client
# point of view)
; logon drive = H:
# logon home = \\%N\%U
# The following setting only takes effect if 'domain logons' is set
# It specifies the script to run during logon. The script must be stored
# in the [netlogon] share
# NOTE: Must be store in 'DOS' file format convention
; logon script = logon.cmd
# This allows Unix users to be created on the domain controller via the SAMR
# RPC pipe. The example command creates a user account with a disabled Unix
# password; please adapt to your needs
; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u
# This allows machine accounts to be created on the domain controller via the
# SAMR RPC pipe.
# The following assumes a "machines" group exists on the system
; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u
# This allows Unix groups to be created on the domain controller via the SAMR
# RPC pipe.
; add group script = /usr/sbin/addgroup --force-badname %g
############ Misc ############
# Using the following line enables you to customise your configuration
# on a per machine basis. The %m gets replaced with the netbios name
# of the machine that is connecting
; include = /home/samba/etc/smb.conf.%m
# Some defaults for winbind (make sure you're not using the ranges
# for something else.)
; idmap uid = 10000-20000
; idmap gid = 10000-20000
; template shell = /bin/bash
# Setup usershare options to enable non-root users to share folders
# with the net usershare command.
# Maximum number of usershare. 0 (default) means that usershare is disabled.
; usershare max shares = 100
# Allow users who've been granted usershare privileges to create
# public shares, not just authenticated ones
usershare allow guests = yes
#======================= Share Definitions =======================
[homes]
comment = Home Directories
browseable = no
# By default, the home directories are exported read-only. Change the
# next parameter to 'no' if you want to be able to write to them.
read only = yes
# File creation mask is set to 0700 for security reasons. If you want to
# create files with group=rw permissions, set next parameter to 0775.
create mask = 0700
# Directory creation mask is set to 0700 for security reasons. If you want to
# create dirs. with group=rw permissions, set next parameter to 0775.
directory mask = 0700
# By default, \\server\username shares can be connected to by anyone
# with access to the samba server.
# The following parameter makes sure that only "username" can connect
# to \\server\username
# This might need tweaking when using external authentication schemes
valid users = %S
# Un-comment the following and create the netlogon directory for Domain Logons
# (you need to configure Samba to act as a domain controller too.)
;[netlogon]
; comment = Network Logon Service
; path = /home/samba/netlogon
; guest ok = yes
; read only = yes
# Un-comment the following and create the profiles directory to store
# users profiles (see the "logon path" option above)
# (you need to configure Samba to act as a domain controller too.)
# The path below should be writable by all users so that their
# profile directory may be created the first time they log on
;[profiles]
; comment = Users profiles
; path = /home/samba/profiles
; guest ok = no
; browseable = no
; create mask = 0600
; directory mask = 0700
[printers]
comment = All Printers
browseable = no
# path = /var/spool/samba
path = /var/tmp/
printable = yes
guest ok = yes
read only = yes
create mask = 0700
use client driver = Yes
#[Samba_printer_name]
# path = /var/tmp/
# printable = yes
#printer name = Brother_DCP-145C
#guest ok = yes
# Windows clients look for this share name as a source of downloadable
# printer drivers
[print$]
comment = Printer Drivers
path = /var/lib/samba/printers
browseable = yes
read only = yes
guest ok = yes
# Uncomment to allow remote administration of Windows print drivers.
# You may need to replace 'lpadmin' with the name of the group your
# admin users are members of.
# Please note that you also need to set appropriate Unix permissions
# to the drivers directory for these users to have write rights in it
; write list = root, @lpadmin
You need to authenticate to print; guest access will not work.
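For example, a minimal sketch (the Unix user pi is a placeholder; use an account that exists on the Raspberry Pi): disable guest access on the share, give a real user a Samba password, and connect from Windows with those credentials.
# In smb.conf, under [printers]:
guest ok = no
# Then, as root, set a Samba password for an existing Unix user:
smbpasswd -a pi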
I'm trying to migrate my local Active Storage files to Google Cloud Storage. I tried to just copy the files from /storage/* to my GCS bucket, but it seems that this does not work.
I get 404 Not Found errors because it is searching for files like:
[bucket]/variants/ptGtmNWuTE...
My local storage directory has a totally different folder structure, with folders like:
/storage/1R/3o/NWuT....
My method to retrieve the image is as follows:
variant = attachment.variant(resize: '100x100').processed
url_for(variant)
What am I missing here?
As it turns out, DiskService (i.e. local storage) uses a different folder structure than the cloud services. That's really weird.
DiskService uses the first characters of the key as nested folder names.
Cloud services just use the key and put all variants in a separate folder.
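A small sketch of the difference (the key value is made up for illustration):
key = "1R3oNWuTEjqkZi6fXbMqXA2h"

# DiskService nests the blob under folders derived from the key:
disk_path = File.join("storage", key[0..1], key[2..3], key)
# => "storage/1R/3o/1R3oNWuTEjqkZi6fXbMqXA2h"

# Cloud services address the object by the key alone:
cloud_path = key
# => "1R3oNWuTEjqkZi6fXbMqXA2h"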
I created a rake task to copy the files over to the cloud service. Run it with, for example, rails active_storage:migrate_local_to_cloud storage_config=google.
namespace :active_storage do
  desc "Migrates active storage local files to cloud"
  task migrate_local_to_cloud: :environment do
    raise 'Missing storage_config param' if !ENV.has_key?('storage_config')

    require 'yaml'
    require 'erb'
    require 'digest'
    require 'google/cloud/storage'

    # Read the target service's settings from config/storage.yml
    config_file = Pathname.new(Rails.root.join('config/storage.yml'))
    configs = YAML.load(ERB.new(config_file.read).result) || {}
    config = configs[ENV['storage_config']]

    client = Google::Cloud.storage(config['project'], config['credentials'])
    bucket = client.bucket(config.fetch('bucket'))

    ActiveStorage::Blob.find_each do |blob|
      key = blob.key
      # DiskService nests files under folders derived from the key
      folder = [key[0..1], key[2..3]].join('/')
      file_path = Rails.root.join('storage', folder.to_s, key)
      file = File.open(file_path, 'rb')
      md5 = Digest::MD5.base64digest(file.read)
      file.rewind # rewind after reading for the checksum, or the upload is empty
      bucket.create_file(file, key, content_type: blob.content_type, md5: md5)
      file.close
      puts key
    end
  end
end
I would like to perform a regular backup of a PostgreSQL database; my current intention is to use the Backup and Whenever gems. I am relatively new to Rails and Postgres, so there is every chance I am making a very simple mistake...
I am currently trying to set up the process on my development machine (a Mac), but I keep getting an error when trying to connect to the database.
In a terminal window, I did the following to check the details of my database and connection:
psql -d my_db_name
my_db_name=# \conninfo
You are connected to database "my_db_name" as user "my_MAC_username" via socket in "/tmp" at port "5432".
\q
I have also manually created a backup of the database:
pg_dump -U my_MAC_username -p 5432 my_db_name > name_of_backup_file
However, when I try to repeat this within db_backup.rb (created by the Backup gem), I get the following error:
[2018/10/03 19:59:00][error] Model::Error: Backup for Description for db_backup (db_backup) Failed!
--- Wrapped Exception ---
Database::PostgreSQL::Error: Dump Failed!
Pipeline STDERR Messages:
(Note: may be interleaved if multiple commands returned error messages)
pg_dump: [archiver (db)] connection to database "my_db_name" failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/pg.sock/.s.PGSQL.5432"?
The following system errors were returned:
Errno::EPERM: Operation not permitted - 'pg_dump' returned exit code: 1
The contents of my db_backup.rb:
Model.new(:db_backup, 'Description for db_backup') do
  ##
  # PostgreSQL [Database]
  #
  database PostgreSQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name     = "my_db_name"
    db.username = "my_MAC_username"
    #db.password = ""
    db.host     = "localhost"
    db.port     = 5432
    db.socket   = "/tmp/pg.sock"
    # When dumping all databases, `skip_tables` and `only_tables` are ignored.
    # db.skip_tables = ["skip", "these", "tables"]
    # db.only_tables = ["only", "these", "tables"]
    # db.additional_options = ["-xc", "-E=utf8"]
  end
end
Could you please suggest what I need to do to resolve this issue and perform the same backup through the db_backup.rb code?
In case someone else gets stuck in a similar situation, the key to unlocking this problem was these lines:
psql -d my_db_name
my_db_name=# \conninfo
I realised that I needed to change db.socket = "/tmp/pg.sock" to db.socket = "/tmp", which seems to have resolved the issue.
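For reference, here is the corrected database block from db_backup.rb; the \conninfo output above shows the socket directory is /tmp, so that is what db.socket should point to:
database PostgreSQL do |db|
  db.name     = "my_db_name"
  db.username = "my_MAC_username"
  db.host     = "localhost"
  db.port     = 5432
  db.socket   = "/tmp" # the directory containing .s.PGSQL.5432, not a file inside it
end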
However, I don't understand why the path on my computer differs from the default, as I didn't do anything to customise the installation of any gems or of Postgres.app.
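Since the question also mentions the Whenever gem for scheduling, a minimal config/schedule.rb sketch might look like this (assuming the backup executable is on the cron user's PATH; run whenever --update-crontab to install the entry):
# config/schedule.rb
every 1.day, at: '4:30 am' do
  command "backup perform -t db_backup"
end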
After a mina deploy, it hangs on "Updating the /home/x/app/current symlink". No errors. It just sits there.
I have tried removing the app directory from the server and running mina setup, but I still encounter the same problem. I had no issues deploying initially, but it seems any attempt to deploy subsequent releases results in this problem.
I initially followed this guide to deploy: https://www.ralfebert.de/tutorials/rails-deployment/
require 'mina/rails'
require 'mina/git'
require 'mina/rvm'

# Basic settings:
#   domain     - The hostname to SSH to.
#   deploy_to  - Path to deploy into.
#   repository - Git repo to clone from. (needed by mina/git)
#   branch     - Branch name to deploy. (needed by mina/git)

set :application_name, 'x'
set :domain, 'x'
set :user, fetch(:application_name)
set :deploy_to, "/home/#{fetch(:user)}/app"
set :repository, 'x'
set :branch, 'x'
set :rvm_use_path, '/etc/profile.d/rvm.sh'

# Optional settings:
#   set :user, 'foobar'      # Username in the server to SSH to.
#   set :port, '30000'       # SSH port number.
#   set :forward_agent, true # SSH forward_agent.

# Shared dirs and files will be symlinked into the app folder by the 'deploy:link_shared_paths' step.
# set :shared_dirs, fetch(:shared_dirs, []).push('somedir')
set :shared_files, fetch(:shared_files, []).push('config/database.yml', 'config/secrets.yml')

# This task is the environment that is loaded for all remote run commands, such as
# `mina deploy` or `mina rake`.
task :environment do
  ruby_version = File.read('.ruby-version').strip
  raise "Couldn't determine Ruby version: Do you have a file .ruby-version in your project root?" if ruby_version.empty?
  invoke :'rvm:use', ruby_version
end

task :setup do
  in_path(fetch(:shared_path)) do
    command %[mkdir -p config]

    # Create database.yml for Postgres if it doesn't exist
    path_database_yml = "config/database.yml"
    database_yml = %[production:
  database: #{fetch(:user)}
  adapter: postgresql
  pool: 5
  timeout: 5000]
    command %[test -e #{path_database_yml} || echo "#{database_yml}" > #{path_database_yml}]

    # Create secrets.yml if it doesn't exist
    path_secrets_yml = "config/secrets.yml"
    secrets_yml = %[production:\n  secret_key_base:\n    #{`rake secret`.strip}]
    command %[test -e #{path_secrets_yml} || echo "#{secrets_yml}" > #{path_secrets_yml}]

    # Remove others-permission for config directory
    command %[chmod -R o-rwx config]
  end
end

desc "Deploys the current version to the server."
task :deploy do
  # uncomment this line to make sure you pushed your local branch to the remote origin
  # invoke :'git:ensure_pushed'
  deploy do
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    # invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'
    invoke :'deploy:cleanup'

    on :launch do
      command "sudo service #{fetch(:user)} restart"
    end
  end

  # you can use `run :local` to run tasks on local machine before or after the deploy scripts
  # run(:local){ say 'done' }
end

# For help in making your deploy script, see the Mina documentation:
#
# - https://github.com/mina-deploy/mina/tree/master/docs
I used the same great tutorial and got the same problem.
You can run mina deploy --verbose to see where it gets stuck.
For me it was not the symlink updating, but the sudo service rails-demo restart command.
I used sudo visudo on the server and put the following line there:
rails-demo ALL=(ALL) NOPASSWD: /usr/sbin/service rails-demo restart
Now it works like a charm.
Good luck!
I am using GitLab Omnibus 7.10.0 on RHEL 6.6. I have enabled LDAP using the following configuration:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'someuser'
  method: 'plain' # "tls" or "ssl" or "plain"
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'
  active_directory: true
  allow_username_or_email_login: false
  block_auto_created_users: false
  base: 'DC=ad,DC=server,DC=foo,DC=com'
  user_filter: ''
  # ## EE only
  # group_base: ''
  # admin_group: ''
  # sync_ssh_keys: false
  #
  # secondary: # NOT FILLED OUT
EOS
My problem is that I can't get users to authenticate via LDAP. I'm not sure whether the configuration is wrong or I need to do something on the server side (to which I have no direct access). When I run
gitlab-rake gitlab:ldap:check RAILS_ENV=production
I get this:
Checking LDAP ...
LDAP users with access to your GitLab server (only showing the first 100 results)
Server: ldapmain
Checking LDAP ... Finished
I can search for individual users using Java with this account (my personal account) or another account for a different application, but I can't get AD working with GitLab. I got the bind_dn "My Whole. Name" by running this command on a Windows box:
gpresult -r
I have also tried a bind_dn of:
uid=myADaccountname,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com
and
myADaccountname@ad.server.foo.com
but I still have the same problem.
For Active Directory, the uid should be:
uid: 'sAMAccountName'
GitLab should connect using the user specified in the bind_dn, with the given password.
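Applied to the configuration above, the relevant lines would look like this (other settings unchanged):
main:
  label: 'FOO COM Active Directory (LDAP)'
  host: 'ad.server.foo.com'
  port: 3268
  uid: 'sAMAccountName' # the attribute to match on, not a specific account name
  bind_dn: 'CN=My Whole. Name,OU=Some Users,DC=ad,DC=server,DC=foo,DC=com'
  password: 'thepassword'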
Since GitLab 9.5.1, the uid setting requires an array value ([ ]).
See this issue: https://gitlab.com/gitlab-org/gitlab-ce/issues/37120
This might just be a bug which will be fixed.
I had to update the value for Active Directory from the answer above to:
uid: ['sAMAccountName']