Docker - PHP Composer - private repository

I've cloned a new project on OSX; its composer.json references a private repository.
I want to use the official Composer Docker image to install all the dependencies. Everything works except the private repository, because, of course, the Composer container doesn't have an SSH key installed in it. Reasonable.
Could somebody explain to me what the 'correct' way is to install the PHP dependency from my private repo?
I've read on the official docs (https://docs.docker.com/samples/library/composer/), where they say:
When you need to access private repositories, you will either need to share your configured credentials, or mount your ssh-agent socket inside the running container:
I'm on OSX, so the ssh-agent socket mounting won't work, as I found out during my research.
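For context, my reading of the "share your configured credentials" option is something like the following; bind-mounting the key file should work on OSX because it's an ordinary file mount rather than a socket mount. The key and known_hosts paths are just my local ones, and mounting into /root/.ssh assumes the container runs as root, which the official image does by default:

$ docker run --rm -it \
    -v "$PWD":/app \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
    -v ~/.ssh/known_hosts:/root/.ssh/known_hosts:ro \
    composer install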
I also read that the 'Docker' way is to not have the SSH part on the composer image. In other words, only one process per container.
So another way I found is to run a separate SSH server, but I'm not sure how this actually works. Supposedly I would connect through it from the Composer container?
If anyone had some experience with this kind of problem, please share your thoughts.
I'm sorry if I left something out, if I did, please let me know.
Thank you!

Related

Prebuilding a docker image *within* a Github Codespace when the image relies on the organization's other private repositories?

I'm exploring how best to use GitHub Codespaces for my organization. Our dev setup is a Docker dev environment that we run on local machines, and it relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an SSH key and storing it as a user codespace secret, then setting up the ssh-agent with that key as part of the postCreateCommand (a sketch follows below). My understanding is that this would not work with the onCreateCommand, because "it will not typically have access to user-scoped assets or secrets". To reiterate: this works for automated building, but not for prebuilding.
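For concreteness, the hacky route is roughly this (SSH_PRIVATE_KEY is the name I gave my user codespace secret, and the script path is arbitrary):

# .devcontainer/setup-ssh.sh — wired up via "postCreateCommand" in devcontainer.json.
# SSH_PRIVATE_KEY is a user codespace secret (my name for it); it is exposed as an
# environment variable at postCreate time, but not during onCreate/prebuild.
eval "$(ssh-agent -s)"
printf '%s\n' "$SSH_PRIVATE_KEY" | ssh-add -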
From this Github issue it looks like cloning via SSH is a complete no-go with prebuilds, because SSH needs a user-defined SSH key, which isn't available from the onCreateCommand. The only potential workaround I can see is an organization-wide read-only SSH key... which seems potentially even sketchier than storing user-created SSH keys as user secrets.
The other possibility I can think of is switching to HTTPS for the git clones. This would require granting access to the other repos, which is no big deal. But I can't quite see how to get access from within the Docker image: when I tried this, I was prompted for a username and password when running git clone inside Docker, even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens GitHub uses for repo access into the docker build process? Or to pass user-generated tokens into the build and use those for access instead?
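One direction that might answer my own question: BuildKit build secrets can forward a token into docker build without baking it into an image layer. A rough sketch, assuming the codespace exposes a GITHUB_TOKEN environment variable with enough scope (the secret id and repo URL are made up):

# Build, forwarding the token as a BuildKit secret (needs a reasonably recent Docker):
$ DOCKER_BUILDKIT=1 docker build --secret id=gh_token,env=GITHUB_TOKEN .

# And inside the Dockerfile (the syntax directive must be the file's first line):
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=gh_token \
    git clone https://x-access-token:$(cat /run/secrets/gh_token)@github.com/my-org/private-repo.git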
Thoughts and roasts welcome.

Is there a generic container signature validation method?

Does anyone have a good solution for generic container signature verification?
From what I've seen (please correct any mistakes):
- Docker Hub uses signatures based on Notary, which requires docker
- Red Hat uses its own signing mechanism, which requires podman
As I can't install both podman and docker (containerd.io and runc conflict on RHEL; maybe a different host would allow it?), there seems to be no way to validate signatures that works for both sources.
Even if I could install both, I'd need to parse the Dockerfile, work out where each source image came from, do a docker/podman pull on the images, and then do the build only if no pull fails. (Which feels likely to fail!)
For example: a build stage uses a container from Docker Hub (e.g. maven) and the run stage uses one from Red Hat (e.g. registry.access.redhat.com/ubi8).
I really want a generic "validate the container signature at this URL" function that I can drop into a CI/CD tool. Some teams like using the RH registry, some Docker Hub, some mix and match.
Any good ideas? Obvious solutions I missed?
Look at cosign:
https://github.com/sigstore/cosign
$ cosign verify --key cosign.pub dlorenc/demo
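cosign is registry-agnostic (it works against any OCI registry), so the "parse the Dockerfile and verify each source image" step from the question could be sketched like this. Caveats: the grep is deliberately naive (it ignores build ARGs, --platform flags, and references to earlier build stages), cosign.pub stands in for whatever key each publisher actually uses, and this only helps if the publishers sign with cosign/sigstore in the first place:

# naive sketch: verify the signature of every base image named in a Dockerfile
for img in $(grep -iE '^FROM[[:space:]]' Dockerfile | awk '{print $2}'); do
    cosign verify --key cosign.pub "$img" || exit 1
done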

I can't find jetty-https.xml in Nexus 3

I installed an instance of Nexus Repository Manager 3 in Rancher and I'm trying to use an HTTPS port for a Docker hosted repository. This means I need to create a self-signed certificate to make it work. After a lot of research I came down to one problem: I can't find jetty-https.xml in /etc. The question is: does this file exist, or do I need to create it?
Sources:
https://support.sonatype.com/hc/en-us/articles/217542177?_ga=2.62350444.1144825414.1623920039-1845083682.1622816513
https://help.sonatype.com/repomanager3/system-configuration/configuring-ssl#ConfiguringSSL-HowtoEnabletheHTTPSConnector
After modifying the nexus.properties file in /nexus-data/etc/, uncommenting the nexus-args line, and restarting the container, jetty-https.xml appeared in $install-dir/etc/jetty/. If you check the logs you can see the exact location of the jetty config folder.
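For anyone else stuck here, the relevant part of nexus.properties looks roughly like this (the property names follow the Sonatype docs linked above; 8443 is just the port I picked):

# /nexus-data/etc/nexus.properties
application-port-ssl=8443
nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-https.xml,${jetty.etc}/jetty-requestlog.xml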

Using another person's docker containers

I am new to Docker, and while I was searching for something related to my project, I found a popular container on Docker Hub -> https://hub.docker.com/r/augury/haproxy-consul/dockerfile.
This may solve the problem I was facing before. My question is: how do I use it? Do I simply run this container and register my applications on Consul, and it will handle the rest, or is it something else?
Is it like npmjs.org, where we simply import libraries and use them?
My understanding of Docker is that an image is a template you replicate and then modify, so go ahead and build a container from the said project. Changes or any other modifications remain yours (in your container) until you push them to a repo (upstream). For how to use it, the Docker docs have more info. Hope this helps.
You can simply pull the image with docker pull augury/haproxy-consul and run it with docker run -p 80:80 augury/haproxy-consul (note that -p must come before the image name, otherwise it is passed to the container as an argument). The container will then be running and reachable on host port 80 (the first number in the -p mapping).
Also, you can use the image as a base image in your Dockerfile if you want to add something on top of it, as in the sketch below.
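A minimal sketch of that (the config filename and destination path are hypothetical; check the image's docs for the real ones):

# Dockerfile — layer your own configuration on top of the published image
FROM augury/haproxy-consul
# hypothetical path: wherever this image expects its HAProxy config
COPY my-haproxy.cfg /etc/haproxy/haproxy.cfg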
You already have a good idea of how Docker runs. Use the published port to reach it and make your modifications, and yes, all the changes stay in your local copy until you push them to a repo.

How do I create docker image from an existing CentOS?

I am new to docker.io and not sure if this is beyond the scope of Docker. I have an existing CentOS 6.5 system and am trying to figure out how to create a Docker image from it. I would like to basically clone this existing system so I can port it to another cloud provider. I was able to create a Docker image from a base CentOS image, but what I want is to clone my existing system and use docker.io going forward.
Am I stuck with creating a base CentOS from scratch and configuring it for Docker from there? This might be more of a VirtualBox/Vagrant thing, but I am interested in docker.io.
Looks like I need to start with a base CentOS image and create a Dockerfile with all the add-ons I need... I think I'm getting there now...
Cloning a system that is up and running is certainly not what Docker is intended for. Instead, Docker is meant to develop your OS and server installation together with the app or service, making DevOps even more DevOpsy. By starting with a clean CentOS image, you can be sure to install only what the service needs and keep everything under control. You actually don't want all the other stuff that might cause incompatibilities. So the answer here is that you should definitely approach the problem the other way around, with a minimal Dockerfile along the lines of the sketch below.
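A minimal sketch of that approach (the packages and paths are placeholders for whatever your existing system actually runs):

# Dockerfile — declare the system instead of cloning it
FROM centos:6
# placeholder packages: install only what the service really needs
RUN yum install -y httpd php && yum clean all
# placeholder path: copy the application code in
COPY ./app /var/www/html
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]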
