I recently started working with the Docker image registry Harbor. I could not find any tags related to it.
My scenario: on one of the servers at work, Harbor is installed along with Docker, Docker Compose, etc. The Harbor site works fine. Harbor itself basically runs as a set of containers, which means multiple containers can run on the server. I need to provide the dev team with Harbor v1.7, which is better from a security standpoint and has good LDAP integration. The Harbor version we currently host is very old: 0.4.5.
My questions:
I searched and could only find how to upgrade, not how to run a second Harbor instance alongside the existing one. If the latter is possible, can you please advise on how I should go about it? Which configuration files need to be changed, etc.? Any links would also be greatly appreciated. At the moment the only option seems to be upgrading the existing containers, which is more work, and right now I am flying blind.
The requirement is v1.7, which uses PostgreSQL, whereas MySQL was only used up to v1.5. If I do upgrade, would it cause a lot of issues with respect to tables, schema, and database structure?
Does v0.4.5 have replication capabilities? If so, it might be easier to just stand up a fresh install of v1.7 and replicate the images from the old registry to the new one.
I just did an upgrade from v1.6 to v1.7, and that was one of the paths I considered.
Currently, Harbor can't change its authentication mode once users have already been added.
There are replication capabilities, but they can break between Harbor versions: https://github.com/goharbor/harbor/blob/master/docs/user_guide.md#replicating-resources
I have been through a migration to OIDC authentication myself, and the easiest way was to create a brand-new instance, copy the images and charts from one to the other, stop the old one, and point the endpoint at the new one.
I have a little program available to copy images, but sadly nothing more: no charts or labels. https://github.com/chusAlvarez/harbor-replication/
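If you only need the images, a plain pull/tag/push with the Docker CLI also works, one image at a time; in the sketch below the hostnames and the library/myimage repository path are placeholders.

docker login old-harbor.example.com
docker login new-harbor.example.com
docker pull old-harbor.example.com/library/myimage:1.0
docker tag old-harbor.example.com/library/myimage:1.0 new-harbor.example.com/library/myimage:1.0
docker push new-harbor.example.com/library/myimage:1.0

(You may need to add the old registry to Docker's insecure-registries setting if it is not served over TLS.)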
This is more a request for advice than a specific technical question.
I did some searching, but it's hard to find this exact issue. If you think it's a duplicate of another question, please give me some links! :-)
The context
Like many developers (I guess), I have one "Ali Baba's cave" server hosting my blog and multiple services: GitLab, Minio, a billing system for my freelance work, etc.
All services are set up on an Ubuntu server in different ways, depending on what each one allows: apt-get install, tar extraction, or Capistrano deployments for personal projects.
This works, but it's maintenance hell for me. Some projects can't be upgraded because a system dependency conflicts with another one or simply isn't available on my OS, or an update may have side effects on another project. For example, a PHP upgrade needed for my personal project completely broke a manually installed PHP service because the new version was not supported.
The needs
I'm currently learning Kubernetes and Helm charts. The goal is to set up a new CoreOS server and a Kubernetes ecosystem with all my apps and projects on it.
With that, I'll be able:
To get rid of maintenance hell with completely independent applications.
To maintain configuration with ease thanks to a simple git project with CI deployment
How to use Helm for that?
I did a test by creating a basic chart with helm create my-network, which gives a basic nginx app, perfect for my network homepage!
But now I would like to add and connect some applications; let's start with GitLab.
I found two ways to add it:
Just running the helm upgrade --install gitlab gitlab/gitlab command with a YAML values file for configuration, outside my own chart.
Adding gitlab as a dependency via requirements.yaml (see the sketch just after this list).
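For reference, the second option looks roughly like this with a Helm 2-style requirements.yaml (the chart version and my values file name are placeholders):

cat > my-network/requirements.yaml <<'EOF'
dependencies:
  - name: gitlab
    version: "2.3.7"                         # pin whichever chart version you have tested
    repository: "https://charts.gitlab.io/"
EOF
helm dependency update ./my-network
helm upgrade --install my-network ./my-network -f my-values.yaml

From what I understand, the GitLab subchart's settings then live under a gitlab: key in my values.yaml.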
Both work, giving me nearly the same result.
The first solution seems more "independent" but I don't really know how to build/test it under CI (I would like upgrade automation).
The second allows me to configure everything with a single values.yaml file, but I don't know what is done during an upgrade (are GitLab's upgrade processes run during my chart upgrade?), and everything is combined into one "project".
GitLab is just an example, but I want to add more "ready-to-use" apps this way.
What would you advise? Solution 1 or 2? And what should I really take care of with either solution, especially for upgrades/backups?
If you have a completely different third solution to propose using Helm, feel free! :-)
Thanks
My experience has generally been that using a separate helm install for each piece/service is better. If those services have dependencies (“microservice X needs a Redis cache”) then those are good things to put in the requirements.yaml files.
A big “chart of charts” runs into a couple of issues:
Helm will flatten dependencies, so if service X needs Redis and service Y also needs Redis, a chart-of-charts setup will install one Redis and let it be shared; but in practice that’s often not what you want.
Separating out "shared" vs. "per-service" configuration gets a little weird. With separate charts you can use helm install -f twice to provide two separate values files (sketched after this list), but in a chart-of-charts it's harder to have a set of really global settings and also a set of per-component settings without duplicating everything.
There's a standard naming convention that incorporates the Helm release name (helm install --name) and the specific component name. This looks normal if it's service-x-redis, a little weird if it's service-x-service-x, and kind of strange if you have one global release name like the-world-service-x.
There can be good reasons to want to launch multiple independent copies of something, or to test out just the deployment scripting for one specific service, and that’s harder if your only deployment is “absolutely everything”.
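For example, the two-values-file pattern with separate charts looks roughly like this; the chart path and file names are placeholders, and later -f files override earlier ones:

helm upgrade --install service-x ./charts/service-x -f global.yaml -f service-x.yaml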
For your use case you also might consider whether non-Docker systems management tools (Ansible, Chef, Salt Stack) could reproduce your existing hand deployment without totally rebuilding your system architecture; Kubernetes is pretty exciting but the old ways work very well too.
So, the idea is to dockerize an existing Meteor app from 2015. The app is divided in two (backend and frontend). I already made a huge bash script to handle all the older dependencies, software dependencies, etc. I just need to run the script and the app is up and running. But the idea now is to create a Docker image for that project. How should I achieve this? Should I create an empty Docker image and run my script there? Thanks. I'm new to Docker.
A bit more info about the stack, the script, the dependencies could be helpful.
Assuming that this app is not in active development, you can simply use, e.g., an nginx image and give it the frontend files to serve.
For the backend there is a huge variety of options like php, node, etc.
The Dockerfile of your backend image should contain the installation and setup of its dependencies (except for other services like the database; there are separate images for those).
To keep things simple, you should try out docker-compose to configure your containers to act as one service as a whole (and save yourself some configuration).
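As a minimal sketch, assuming a Node backend, MongoDB (typical for a Meteor app), and that your built frontend files end up in ./frontend/dist, the compose file could look something like this:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  frontend:
    image: nginx:alpine
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro   # assumes the built frontend lives here
    ports:
      - "80:80"
  backend:
    build: ./backend       # assumes a Dockerfile in ./backend that runs your install script
    environment:
      MONGO_URL: mongodb://mongo:27017/app
  mongo:
    image: mongo:3.2       # pick the version your 2015-era app supports
EOF
docker-compose up -d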
Later, to scale things up, you could look into orchestration tools such as Kubernetes. But I assume this service is not there yet (based on your question). :)
Docker seems to be the incredible new tool to solve all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want any kubernetes cluster or docker-swarm to deploy hundreds of microservices. Just simply replace an existing deployment process with a container for better encapsulation and upgradability.
Then maybe upgrade this in the future if the need for more containers grows to the point where manual handling no longer makes sense.
Essentially, the direct app dependencies (language, runtime, libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should still live on the host system, as well as an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use it this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after the image has been pushed to a registry by CI) could already do the job, though probably less reliably than the current way.
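Roughly something like this (untested) sketch is what I have in mind; the server, image name and registry are placeholders, and CI_COMMIT_SHA comes from GitLab CI:

#!/usr/bin/env bash
# Runs on the CI machine after the image has been pushed to the registry.
set -euo pipefail

IMAGE="registry.example.com/myapp:${CI_COMMIT_SHA:-latest}"

ssh deploy@myserver.example.com bash -s <<EOF
docker pull "$IMAGE"
docker stop myapp 2>/dev/null || true
docker rm myapp 2>/dev/null || true
docker run -d --name myapp --restart unless-stopped -p 127.0.0.1:3000:3000 "$IMAGE"
EOF

The host nginx proxy would then keep pointing at 127.0.0.1:3000.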
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc.)?
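For question 3, hosting the basic registry image is close to a one-liner; the port, image and tag names below are only examples:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest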
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
Note: Also, Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying Container mindset, does best is shift many of those issues to the left. What this means is that you see many of the problems in your processes early, instead of those problems appearing when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a solve-all. If you go into Docker thinking it will be a solve-all, then you're setting yourself up for failure.
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit, a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online following several tutorials with no success, I am about to abandon the project. I have a basic understanding of docker, docker-machine, docker-compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
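Concretely, that flow is roughly the following; the image name and tag are placeholders:

docker build -t youruser/your-rails-app:ci .
docker login                                  # prompts for your Docker Hub credentials
docker push youruser/your-rails-app:ci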
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
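Putting those pieces together, a trimmed-down compose file plus a migrate/seed step might look like the sketch below. The service names, image name and environment variables are assumptions: your database.yml has to read DATABASE_HOST, Sidekiq picks up REDIS_URL by default, and older Rails versions use rake rather than rails for the db tasks.

cat > docker-compose.ci.yml <<'EOF'
version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: postgres        # throwaway CI-only credentials
  redis:
    image: redis:5
  app:
    image: youruser/your-rails-app:ci    # the image pushed earlier (placeholder name)
    environment:
      RAILS_ENV: test
      DATABASE_HOST: db                  # the service name doubles as the hostname
      REDIS_URL: redis://redis:6379/0
    depends_on: [db, redis]
EOF

docker-compose -f docker-compose.ci.yml run --rm app \
  bundle exec rails db:create db:migrate db:seed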
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
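If you go the xvfb route, the usual pattern is to wrap the test command with xvfb-run inside the container (this assumes xvfb has been installed in the image, e.g. via apt-get in the Dockerfile):

# Starts a virtual X display and runs the command against it.
xvfb-run -a bundle exec rspec spec/features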
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Setup a local Docker registry
Set up the Docker environment; the way to bring up multiple containers is to use docker-compose, which will allow you to bring up the DB, Redis, Rails, etc. using public Docker Hub images (a rough sketch of the build/deploy/test steps as shell commands follows this list).
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test: run the tests against the application that is now running.
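As a rough sketch rather than a ready-made Jenkinsfile, the build/deploy/test stages boil down to shell commands along these lines; the image name and registry address are placeholders, and it assumes your compose file defines an app service using the image you just built:

docker build -t localhost:5000/rails-app:"$BUILD_NUMBER" .     # build the app image
docker push localhost:5000/rails-app:"$BUILD_NUMBER"           # publish it to the local registry
docker-compose up -d                                           # bring up the DB, Redis and the app
docker-compose run --rm app bundle exec rspec                  # run the tests against those services

A swarm-based deployment would swap the compose steps for docker stack deploy.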
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.
I am trying to help a sysadmin group reduce server & service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install upgrade/configure, and then restart it and hope it works.
I have heard that docker is a solution to this problem, but usually from developer circles in the context of deploying their node/python/ruby/c#/java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a Windows environment, if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data, but that requires the use of volumes.
Can docker still be used in this case?
Yes, but it depends on the application. It should be able to be installed headless, and there are a couple of other fairly specific requirements (e.g. talking to third-party servers to get a license can create issues).
Can we install any random software on a container?
Yes... but remember that when the container is removed or recreated, software installed inside it will be gone. It's better to bake it into an image and then deploy that. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile for the OS and the steps it takes to install the application (it should be headless; a sketch of these first steps appears after this list).
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies the local developers' needs, then you can either:
Let your build servers create the image and publish it to an internal Docker registry (best practice), or
Let your local developer publish it to an internal Docker registry.
At that point, your next-level environments can pull the image down from the Docker registry, configure it, and create the container.
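A sketch of steps 1-3, where the vendor installer, its flags, the paths and the registry address are all hypothetical (Windows workloads would start from a Windows base image such as mcr.microsoft.com/windows/servercore instead of a Linux one):

cat > Dockerfile <<'EOF'
FROM ubuntu:18.04
COPY vendor-installer.run /tmp/
RUN chmod +x /tmp/vendor-installer.run \
    && /tmp/vendor-installer.run --silent --install-dir /opt/vendor-app \
    && rm /tmp/vendor-installer.run
CMD ["/opt/vendor-app/bin/start", "--foreground"]
EOF

docker build -t registry.internal.example.com/vendor-app:7.2 .    # step 2: build the image
docker run -d --name vendor-app-test \
  -e LICENSE_SERVER=license.internal.example.com \
  -v vendor-app-data:/opt/vendor-app/data \
  registry.internal.example.com/vendor-app:7.2                    # step 3: local test container
docker push registry.internal.example.com/vendor-app:7.2          # publish to the internal registry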
In short, it will require a lot of elbow grease but is possible.
Can we install any random software on a container?
Generally yes, but you can have many problems with legacy software which was developed to work on bare metal.
First, there can be a persistence problem, but it can be solved using volumes.
Second, a program that works well on a full OS may not work as well in a container. Containers differ in some ways from VMs or bare metal; for example, due to the missing init process, some containers have a zombie-process issue. You can read about other differences here.
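As a small illustration of both points, using a placeholder image name and path: a named volume keeps the application's data across container re-creation, and Docker's --init flag adds a minimal init process (PID 1) that reaps zombie processes:

docker volume create legacy-data
docker run -d --init --name legacy-app \
  -v legacy-data:/var/lib/legacy-app \
  myorg/legacy-app:1.0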
Docker brings big benefits for stateless apps, but some heavy legacy apps may not work as well inside containers and should be tested thoroughly before using them in production.