Here is the normal way to initialize the DRBD partition:
ON BOTH SERVERS
drbdadm create-md r0
drbdadm up r0
Both servers should now be connected; check with
cat /proc/drbd
ONLY ON PRIMARY
drbdadm -- --overwrite-data-of-peer primary r0
AFTER BOTH SERVERS UP-TO-DATE - ON PRIMARY
mkfs -t ext4 -b 4096 /dev/drbd0
I then tried to prepare a primary without a secondary available (e.g. the customer
wants a single-server system and may later want to add a hot-standby server):
drbdadm create-md r0
drbdadm up r0
drbdadm primary r0
I got the error:
0: State change failed: (-2) Need access to UpToDate data
Is there a solution?
Force the node you want to be Primary into the Primary role:
# drbdadm create-md r0
# drbdadm up r0
# drbdadm primary r0 --force
Related
With Docker Content Trust enabled, I can see the Root Key information when I inspect a repository, as below.
[root@lab admin]# docker trust inspect registry.XXXXXX.com/project/nginx --pretty
Signatures for registry.XXXXXX.com/project/nginx
SIGNED TAG DIGEST SIGNERS
test 61191087790c31e43eb37caa10de1135b002f10c09fdda7fa8a5989db74033aa john
test1 61191087790c31e43eb37caa10de1135b002f10c09fdda7fa8a5989db74033aa john
test2 61191087790c31e43eb37caa10de1135b002f10c09fdda7fa8a5989db74033aa john
List of signers and their keys for registry.XXXXXX.com/project/nginx
SIGNER KEYS
john f20b2f70c3fa
Administrative keys for registry.XXXXXX.com/project/nginx
Repository Key: XXXXXXX
Root Key: XXXXXXX <-------------------------------------- this is a hashed value
However, that Root Key value is a hashed value, so I cannot confirm whether the root key used for this repo corresponds to the root key file in my ~/.docker/trust/private.
I am wondering whether there is a way to reveal the relation between this hashed root key ID and the actual root key file.
Thanks for your help.
You can use notary -d ~/.docker/trust key list, but if you have more than one root key the output can be confusing. Every time I generate a root key, I rename it to myRepo.key and move it to a safe location, preferably offline.
You will only need it if you want to create or revoke other delegated keys.
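As a sketch of how to tie the hashed ID back to a file: notary-managed private keys live under ~/.docker/trust/private, named by key ID, and each PEM file records its role in a header (paths assume the default Docker trust directory; adjust if yours differs):

```shell
# List the keys notary knows about; the root key is marked with ROLE "root"
notary -d ~/.docker/trust key list

# Each private key file is named <key-id>.key and carries a "role:" PEM header,
# so the file whose name matches the hashed root key ID can be located with:
grep -l "role: root" ~/.docker/trust/private/*.key
```

The filename of the match (minus the .key extension) should line up with the root key ID shown by notary.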
With Rails 6 (or 5.2) encrypted credentials, I am running into difficulty managing and resolving merge conflicts in the credentials.yml.enc file. As outlined in the documentation, the intention is that encrypted credentials can be added to source control (https://guides.rubyonrails.org/security.html#custom-credentials).
E.g.
branch_a adds credentials for service a and gets merged to master
branch_b adds credentials for service b and when rebasing, the conflict in the credentials.yml.enc file looks something like this:
<<<<<<< HEAD
sahdkajshdkajhsdkjahsdkjahsdkajhsdkjahsdkjahdskjahsdjkahsdencryptedstring-a09dpjmcas==
=======
laskdjalksjdlakjsdlaksjdlakjsdlaksjdlakjsdlajsdlkajsdlkjasdljalsdajsdencryptedstringrere=
>>>>>>> branch_b
I can view the unencrypted credentials.yml.enc on each branch and resolve the conflicts quite manually, but is there a better way to manage credentials generally so as to avoid these conflicts?
I don't believe there is a better way, no.
Because of the nature of the encryption, there is no way to resolve the conflict in its encrypted state. If that were possible, it would imply that you could somehow know the values and keys of the file while it is encrypted.
When you do your merge, you should resolve any conflicts in the source file, and then rerun the command that generates the encrypted file, then complete your merge.
It is possible. From the rails credentials usage:
=== Set up Git to Diff Credentials
Rails provides `rails credentials:diff --enroll` to instruct Git to call `rails credentials:diff`
when `git diff` is run on a credentials file.
Running the command enrolls the project such that all credentials files use the
"rails_credentials" diff driver in .gitattributes.
Additionally since Git requires the driver itself to be set up in a config file
that isn't tracked Rails automatically ensures it's configured when running
`credentials:edit`.
Otherwise each co-worker would have to run the enrollment manually, including on
each new repo clone.
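If you want to see the mechanism the enrollment relies on, here is a hedged, generic sketch of a Git diff driver wired up through .gitattributes; `cat` stands in for the real textconv command, which for Rails is effectively running the credentials decryption:

```shell
# Toy repo demonstrating a custom diff driver via .gitattributes + textconv
git init -q demo && cd demo
printf 'secrets.txt diff=rails_credentials\n' > .gitattributes
git config diff.rails_credentials.textconv cat   # Rails points this at credentials:diff
printf 'aws_key: one\n' > secrets.txt
git add . && git -c user.email=a@b.c -c user.name=a commit -qm init
printf 'aws_key: two\n' > secrets.txt
git diff secrets.txt    # the diff shown is produced from the textconv output
```

With the real driver, the textconv step decrypts the file, so `git diff` on credentials.yml.enc shows plaintext changes instead of opaque ciphertext.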
If you don't have rails credentials:diff...
It is possible to merge them, but you will have to decrypt them.
When dealing with merge conflicts, you can run git mergetool and it should generate 4 files:
config/credentials.yml_BACKUP_84723.enc
config/credentials.yml_LOCAL_84723.enc
config/credentials.yml_BASE_84723.enc
config/credentials.yml_REMOTE_84723.enc
You may need to run git mergetool in one terminal window, and in another, run this script:
Note that this will expose your credentials on the local machine.
# Temporarily move credentials file to another location
mv config/credentials.yml.enc ~/Desktop/credentials_temp.yml.enc
# Copy local file to original location
cp config/credentials.yml_LOCAL_* config/credentials.yml.enc
# Decrypt and send decrypted credentials to desktop
rails credentials:show > ~/Desktop/credentials_local.yaml
# Delete the copied local file
rm config/credentials.yml.enc
# Copy remote file to original location
cp config/credentials.yml_REMOTE_* config/credentials.yml.enc
# Decrypt and send decrypted credentials to desktop
rails credentials:show > ~/Desktop/credentials_remote.yaml
# Delete the copied remote file
rm config/credentials.yml.enc
# Move credentials file back
mv ~/Desktop/credentials_temp.yml.enc config/credentials.yml.enc
# See diffs or open both
diff ~/Desktop/credentials_local.yaml ~/Desktop/credentials_remote.yaml
# Delete the decrypted files
rm ~/Desktop/credentials_local.yaml ~/Desktop/credentials_remote.yaml
Local is on the left. Remote is on the right.
Enjoy.
Generally it is recommended to keep credentials out of version control (e.g. via .gitignore) and configure them through environment variables.
I am getting the following error while pushing my code to Bitbucket. It uses SSH and worked perfectly earlier, but suddenly the following error occurred.
$ git push origin branch-name
Connection reset by 18.205.93.1 port 22
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Assuming your access to that repository hasn't changed, and assuming nothing has changed in your network (such as a proxy server change), this looks most like an ssh-agent issue. You can see more detail of the attempted connection by running GIT_SSH_COMMAND="ssh -v" git push origin branch-name, or you can test your connection and key with ssh -Tv git@bitbucket.org. Both commands should include lines like these:
debug1: Server host key: ssh-rsa SHA256:zzXQOXSRBEiUtuE8AikJYKwbHaxvSc0ojez9YXaGp1A
debug1: Offering public key: (key type, fingerprint, and path go here)
debug1: Server accepts key: (details about the accepted key go here)
debug1: Authentication succeeded (publickey).
Authenticated to bitbucket.org ([18.205.93.1]:22).
(There will be other lines. Just look for these specific ones. You may also see a SHA1 fingerprint for the host key, if that's what your system prefers; that will be listed as 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40. If you don't see one of those host keys, then you're probably not actually getting to Bitbucket.)
You should also be able to run ssh-add -L and see the key you want to use for Bitbucket.
If your key is not listed in any output, then you can add it with ssh-add /path/to/key. You can also add that key's path as the IdentityFile in your ~/.ssh/config so that SSH always uses that key for that host:
Host bitbucket.org
IdentityFile /path/to/key
I have an Open edX system running entirely on a single server, but its performance is bad. Its RAM consumption increases day by day, so now I want to back it up and restore it onto a bigger server.
The Open edX documentation makes this information hard to find, and I've searched for a while without finding what I need. If you know how to do this, please guide me.
Many thanks,
You need to back up the edxapp and cs_comments_service_development databases in MongoDB, plus all data from MySQL.
Backing up:
mysqldump edxapp -u root --single-transaction > backup/backup.sql
mongodump --db edxapp
mongodump --db cs_comments_service_development
Restoring:
mysql -u root edxapp < backup.sql
mongo edxapp --eval "db.dropDatabase()"
mongorestore dump/
It worked for me. Copies all courses, accounts, progress and discussions.
Idea taken from BluePlanetLife/openedx-server-prep, for more details, look here
This might not be an exact answer, nor a standard solution for a production environment, but it might help you.
The manual way is as follows:
1. Set up a new edX instance on a new server.
2. Update all your repos (edx-platform, custom XBlocks) to the appropriate branch/tag.
3. Replace the MySQL databases 'edxapp', 'ora', 'xqueue' on the new server with the old ones.
4. Replace the MongoDB databases 'cs_comments_service_development', 'edxapp' on the new server with the old ones.
(I haven't tested steps 3 and 4 in a production environment, but I was able to replace the MySQL 'edxapp' database on the devstack.)
I have a backup procedure that uses kpartx to read from a partitioned LVM volume.
Occasionally the device cannot be unmapped.
Right now, when I try to remove the mapping, I get the following:
# kpartx -d /dev/loop7
read error, sector 0
read error, sector 1
read error, sector 29
I tried dmsetup clean loop7p1 but nothing changed.
How can I free the partition without rebooting the server?
Thanks.
You can use 'dmsetup remove_all' to remove this mapping. You shouldn't need -f (force), but be aware that forcing may remove mappings that are still in use.
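Before reaching for remove_all, it can help to see what still holds the mapping open; a cautious sequence might look like this (device names follow the question, and these commands need root):

```shell
# Check whether anything still holds the mapping before removing it
dmsetup ls
dmsetup info loop7p1            # an "Open count" above 0 means it is still in use

# A stale mount is a common holder; unmount it if present
umount /dev/mapper/loop7p1 2>/dev/null || true

# Remove just this mapping, or fall back to clearing all unused ones
dmsetup remove loop7p1 || dmsetup remove_all
```

remove_all only tears down mappings with an open count of zero unless forced, which is why plain removal is the safer first attempt.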