According to the Docker documentation, "Manage keys for content trust", the root key is:
Root of content trust for an image tag. When content trust is enabled, you create the root key once. Also known as the offline key, because it should be kept offline.
I don't know exactly what "once" means here. Do I only get one chance to set the root key? Setting aside the consequences for repositories I have already pushed, what should I do to reset it?
The keys are trusted on first use, so if you change the root key for a repo, anyone who has previously trusted that key needs to have that information removed, which means making a change everywhere this image has previously been pulled. The notary server itself also needs its data for this repository purged. It may be easier to create a new repository.
Realize that Content Trust currently points to Notary v1, which is soon to be phased out. Project sigstore already has cosign available, Notary v2 is being designed, and I've yet to come across a significant production infrastructure using Content Trust. Even the images in the Docker Library haven't been signed in over a year, so if you enable Content Trust, you'll find that image pulls revert to very old images that are missing recent security patches.
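If you do decide to reset trust for a repository anyway, the trust data has to be removed on both the server side and on every client. A rough sketch with the notary CLI — the repository name example.com/myrepo is a placeholder, and the notary CLI must already be configured against your trust server:

```shell
# Remove the repository's trust data from the notary server (and locally):
notary delete example.com/myrepo --remote

# Each client that previously pulled with content trust enabled also has
# cached TUF metadata under ~/.docker/trust that must be cleared:
rm -rf ~/.docker/trust/tuf/example.com/myrepo
```

After that, the next signed push re-initializes the repository's trust data with a fresh root.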
I used Docker to set up a Nextcloud server for myself and my family.
Can I transfer files from a local server to my Nextcloud server without using the internet and allow users to access them?
I have discovered two strange things:
1. Placing files directly under a specific user's file path on the server does not allow the user to successfully access the file.
2. As long as I don't delete the files added by the user, even if I directly change the content of the files on the server, the user can still accurately and correctly read the original content.
Or is the user file path that I have in mind incorrect?
I think it's /var/www/html/data/"USERID"/files
I would like to know how to solve this, but I would also like to understand the reasons behind the two problems above.
Thank you so much.
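What you are describing is most likely Nextcloud's file cache at work: files are tracked in the oc_filecache database table, so anything written directly into the data directory is invisible until a scan picks it up, and content changed directly on disk can be masked by stale cached metadata. A sketch of the usual fix, assuming the official image with a container named nextcloud (a placeholder):

```shell
# Re-index all users' files so manually added files show up.
# occ must run as the web server user (www-data in the official image):
docker exec -u www-data nextcloud php occ files:scan --all

# Or rescan a single user (USERID is a placeholder):
docker exec -u www-data nextcloud php occ files:scan USERID
```

The path /var/www/html/data/USERID/files is the default for the official image, but writing there directly still requires a rescan before users can see the files.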
I'm looking for advice on best practices regarding provisioning an SSL certificate (from letsencrypt.org) in a Docker image versus creating the certificate on each start of the container.
There are a lot of howtos and questions on this site about creating the certificate. However, I can't warm to either idea: storing the certificate in the Docker image and uploading it to a registry protected with credentials, or creating the certificate on every start of the container (which might be often, because I'm designing microservices).
If you try to generate a cert on every container start, you will soon hit the LE rate limits (https://letsencrypt.org/docs/rate-limits/), so save the certs to persistent storage and reuse them.
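One common pattern, sketched below as a docker-compose fragment (service and volume names are placeholders): keep the certificates in a named volume shared between the web server and a certbot sidecar. certbot renew only contacts Let's Encrypt when a renewal is actually due, so container restarts don't burn rate limits.

```yaml
services:
  web:
    image: nginx:alpine
    volumes:
      - letsencrypt:/etc/letsencrypt:ro
  certbot:
    image: certbot/certbot
    volumes:
      - letsencrypt:/etc/letsencrypt
    # Check periodically; certbot only renews when the cert is near expiry.
    entrypoint: sh -c 'while true; do certbot renew; sleep 12h; done'
volumes:
  letsencrypt:
```

The certs survive container recreation because they live in the volume, not the image, and nothing secret ends up in a registry.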
I have had an issue with setting up my gerrit server. The machine has Ubuntu 12.04 LTS Server 64-bit installed on it. I am setting up git and gerrit as a way to manage source code and code review.
I require internal and external access to it. I set up a DNS name that works externally. However, during the initial setup, I left canonicalWebUrl at its default value. It usually takes the machine's hostname (in this case, vmserver).
The issue I was running into is exactly as explained here https://stackoverflow.com/questions/14702198/the-requested-url-openid-was-not-found-on-this-server, where after trying to sign in/register account with OPEN ID, it was saying url not found.
For some reason, it was changing the URL in the address bar from the DNS name I set up to the canonicalWebUrl.
I tried to change the canonical web URL in the gerrit.config file found in the etc directory of the Gerrit site. After restarting the server, the Git project files were present as they should be, but the administrator account no longer seemed to be registered, and none of the projects were visible through Gerrit.
I was wondering if there is a special procedure for changing the canonical web URL in Gerrit without disrupting access to the server?
Any help or information on canonical URLs would be much appreciated, as I cannot find much information on them.
Edit:
Looking deeper, I found some information about "submodules" that is way over my head.
I do not understand whether this is what I am looking for or not.
https://gerrit-review.googlesource.com/#/c/36190/
The canonical web URL must be set, and it sounds like you have done that correctly.
I suspect the issue you are seeing is caused by changing the canonical web URL: some OpenID providers (Google being the big one) will return a different user ID based on the URL of the request. This is a privacy measure and cannot be changed. So previous users will now show up as new users and won't be in their old groups (the Administrators group in this case).
If you don't have many users, it might be easiest to migrate them by hand. You can modify the database to map the new user ID to the old user account.
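For Gerrit 2.x with a ReviewDb, the mapping in question lives in the account_external_ids table. A hand-migration sketch — the account IDs, email address, and OpenID URL pattern below are placeholders, and you should back up the database and stop Gerrit before touching it:

```sql
-- Find both the old and the newly auto-created account for the affected user:
SELECT account_id, external_id
  FROM account_external_ids
 WHERE email_address = 'user@example.com';

-- Re-point the newly returned OpenID identity at the old account
-- (1000001 = old account_id, 1000099 = new account_id, both placeholders):
UPDATE account_external_ids
   SET account_id = 1000001
 WHERE account_id = 1000099
   AND external_id LIKE 'https://www.google.com/accounts/o8/id%';
```

After the update, restart Gerrit (or flush its caches) so the change is picked up, and the user logging in with the new OpenID identity lands in their old account with its group memberships intact.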
What is the relationship between bitbucket.org and bytebucket.org? Is the latter owned by the owners of the former, or is it some sort of scam?
bytebucket.org is owned by Bitbucket. It is (or was) used for serving files uploaded to wiki repositories, to prevent cookie theft and the like, if memory serves.
The rest of the domain should probably be configured to redirect.
WHOIS records show that both bitbucket.org and bytebucket.org are registered to the same registrant.
They are also both running the same web server software. They're hosted in different netblocks, but both netblocks are owned by Amazon.com Inc.
I have an account at bitbucket.org. I tried signing in at bytebucket.org, but I'm having trouble reaching any HTTPS page at that site right now, so I can't confirm that the two sites share a common authentication system.
Okay, I have done a test: I changed my account profile on bitbucket.org, and the change was reflected at bytebucket.org immediately. It's still possible that bytebucket is a scam: it might be a proxy to bitbucket.org, set up to capture passwords.
Playing with the ssh and public_key application in Erlang, I've discovered a nice feature.
I was trying to connect to my running Erlang SSH daemon using an RSA key, but the authentication was failing and I was prompted for a password.
After some debugging and tracing (and a couple of coffees), I realized that, for some weird reason, an invalid key for my user was present. The authorized_keys file contained two keys: the wrong one appeared earlier in the file, while the correct one had been appended at the end.
Now, when the Erlang SSH application compared the provided key with the ones contained in authorized_keys, it found only the first entry (completely ignoring the second one, the correct one). It then switched to different authentication mechanisms (first trying DSA instead of RSA, then prompting for a password).
The question is: is this behavior intended, or should the SSH server check multiple entries for the same user in the authorized_keys file? Is this generic SSH behavior, or is it specific to the Erlang implementation?
Yes, it's "first failure" authentication, and I have come across your issue several times. As far as the implementation goes, it was explained to me that the daemon iterates over the authorized_keys file looking for a matching login, and THEN checks the key.
This seems to be the standard implementation.
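The difference between the two lookup strategies can be shown with a toy shell sketch (the key strings are fake placeholders, not real keys): a first-entry-only comparison rejects a valid key that sits below a stale one, while a full scan over all entries — which is what OpenSSH effectively does — accepts it.

```shell
# Fake authorized_keys with a stale entry above the valid one:
cat > /tmp/authorized_keys <<'EOF'
ssh-rsa AAAA_STALE_KEY user@host
ssh-rsa AAAA_CURRENT_KEY user@host
EOF

# The key the client actually offers:
offered="ssh-rsa AAAA_CURRENT_KEY user@host"

# First-entry-only lookup (the behaviour described in the question):
first=$(head -n1 /tmp/authorized_keys)
if [ "$first" = "$offered" ]; then echo first-entry-match; else echo first-entry-miss; fi

# Full scan over all entries (what OpenSSH effectively does):
if grep -qxF "$offered" /tmp/authorized_keys; then echo full-scan-match; else echo full-scan-miss; fi
```

The first check prints first-entry-miss and the second prints full-scan-match, which is exactly the failure mode described: a stale key earlier in the file shadows the valid one under a first-match strategy.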