I have an HLF (2.2.3) network set up on a private cloud platform (Ubuntu 20.04 LTS).
Docker images are pulled from Docker Hub onto the cloud VMs using the command below,
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.2.3 1.5.0
(shortened url bit.ly/2ysbOFE)
When I scan the pulled images using the Trivy tool (installed locally in the cloud), I find a large number of vulnerabilities.
How am I supposed to fix these vulnerabilities? I'm aware of the option of shelling out money, buying a license, and getting frequent patches; I don't want to choose that solution.
I'm looking for methods that can be executed within the cloud VMs to resolve these vulnerabilities.
I tried pulling the latest images and ran a scan on those as well; even those images have vulnerabilities.
Please suggest.
How do I resolve these vulnerabilities?
How are these problems handled in real-world projects? (I'm new to production deployment and any insights on this are welcome.)
As stated on Docker Hub, the published Hyperledger Fabric images are for development and testing only, not for production. If you want that level of support, then you either have to buy images from a provider (which, as you stated, you don't want to do) or you build and maintain your own images and do the work yourself. This is what any production deployment should be doing.
One important point to note: just because a scanner reports a vulnerability doesn't mean you are actually vulnerable. Most scanners only look at what you have, not what you are using, so you should make a risk assessment of each of the reported vulnerabilities to determine whether you are indeed susceptible.
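For example, a first triage pass is often to limit the report to issues that actually have a published fix and to the higher severities. A minimal sketch with Trivy, run against one of the images pulled above:

# Re-scan a pulled Fabric image, dropping vulnerabilities with no available fix
# and keeping only HIGH and CRITICAL findings.
trivy image --ignore-unfixed --severity HIGH,CRITICAL hyperledger/fabric-peer:2.2.3

# Optionally make a CI job fail only when such findings are present.
trivy image --ignore-unfixed --severity CRITICAL --exit-code 1 hyperledger/fabric-peer:2.2.3

Findings you assess and accept can be recorded in a .trivyignore file so that later scans only surface new issues.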
Related
I recently started working with the Docker image registry Harbor. I could not find any tags related to it.
My scenario: on one of the servers at work, Harbor is installed along with Docker, Docker Compose, etc., and the Harbor site works fine. Harbor is basically a set of containers in itself, which means multiple containers run on the server. I need to provide the dev team with Harbor v1.7, which is better from a security standpoint and has good LDAP integration. The Harbor we host is a very old version: 0.4.5.
My questions:
I searched and could not find a way to run two Harbor instances side by side, only how to upgrade. If the former is possible, can you please advise on how I should go about it? Which configuration files need to be changed, etc.? Any links would also be greatly appreciated. At the moment the only option seems to be upgrading the existing containers, which is more work, and I am flying blind.
The requirement is v1.7, which uses PostgreSQL, whereas MySQL was only used up to v1.5. If I do upgrade, would it cause a lot of issues with respect to tables, schema, and DB structure?
Does v0.4.5 have replication capabilities? If so it might be easier to just stand up a fresh install of v1.7 and replicate the images from the old registry to the new one.
I just did an upgrade from v1.6 to v1.7, and that was one of the paths I considered.
Currently Harbor can't change its authorization mode once users have already been added.
There are replication capabilities, but they can break between Harbor versions: https://github.com/goharbor/harbor/blob/master/docs/user_guide.md#replicating-resources
I have been through a migration to OIDC authentication myself, and the easiest way was to create a brand-new instance, copy the images and charts from one to the other, stop the old one, and change the endpoint to the new one.
I have a little program available to copy images, but sadly nothing more: no charts or labels. https://github.com/chusAlvarez/harbor-replication/
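For a plain image copy (no charts or labels), the Docker CLI alone is often enough. A minimal sketch, with hypothetical hostnames, project, and image list:

#!/usr/bin/env bash
# Copy a list of images from the old Harbor to the new one.
set -euo pipefail

OLD=old-harbor.example.com
NEW=new-harbor.example.com
IMAGES="library/app-a:1.0 library/app-b:2.3"

docker login "$OLD"
docker login "$NEW"

for img in $IMAGES; do
  docker pull "$OLD/$img"               # fetch from the old registry
  docker tag  "$OLD/$img" "$NEW/$img"   # retag for the new registry
  docker push "$NEW/$img"               # push to the new registry
done

Tools such as skopeo can do the same copy without pulling the images through a local Docker daemon.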
Docker seems to be the incredible new tool that solves all developer headaches when it comes to packaging and releasing an application, yet I'm unable to find simple solutions for just upgrading an existing application without having to build or buy into whole "cloud" systems.
I don't want a Kubernetes cluster or Docker Swarm to deploy hundreds of microservices. I just want to replace an existing deployment process with a container, for better encapsulation and upgradability.
I might then upgrade this in the future, if the need for more containers grows to the point where manual handling no longer makes sense.
Essentially, the direct app dependencies (language runtime and libraries) should be bundled up without the need to "litter" the host server with them.
Lower-level static services, like the database, should stay on the host system, as should an entry router/load balancer (a simple nginx proxy).
Does it even make sense to use Docker this way? And if so, is there any "best practice" for doing something like this?
Update:
For the application I want to use it on, I'm already using GitLab CI.
Tests are already run inside a Docker environment via GitLab CI, but deployment still happens the "old way" (syncing the git repo to the server and automatically restarting the app, etc.).
Containerizing the application itself is not an issue, and I've also used full Docker deployments via cloud services (mostly Heroku), but for this project something like that is overkill. There's no point in paying hundreds of dollars for a cloud server environment if I need pretty much none of its advantages.
I've found several "install your own Heroku" kinds of systems, but I don't need or want to manage the complexity of a dynamic system.
I suppose a couple of remote bash commands for updating and restarting a Docker container on the server (after it's been pushed to a registry by the CI) could already do the job, though probably less reliably than the current approach.
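For illustration, those remote commands could be as small as the following, run on the server (registry address, image name, ports, and paths are all placeholders):

# Executed on the server, e.g. via ssh from the CI job, after the CI
# has pushed registry.example.com/myapp:latest.
set -e
docker pull registry.example.com/myapp:latest    # fetch the new image
docker stop myapp || true                        # stop/remove the old container if present
docker rm myapp || true
docker run -d --name myapp --restart unless-stopped \
  -p 127.0.0.1:3000:3000 \
  --env-file /etc/myapp/env \
  registry.example.com/myapp:latest              # the host nginx keeps proxying to port 3000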
Unfortunately, the "best practice" is highly subjective, as it depends entirely on your setup and your organization.
It seems like you're looking for an extremely minimalist approach to Docker containers. You want to simply put source code and dependencies into a container and push that out to a system. This is definitely possible with Docker, but the manner of doing this is going to require research from you to see what fits best.
Here are the questions I think you should be asking to get started:
1) Is there a CI tool that will help me package together these containers, possibly something I'm already using? (Jenkins, GitLab CI, CircleCI, TravisCI, etc...)
2) Can I use the official Docker images available at Dockerhub (https://hub.docker.com/), or do I need to make my own?
3) How am I going to store Docker images? Will I host a basic Docker registry (https://hub.docker.com/_/registry/), or do I want something with a bit more access control (GitLab Container Registry, Harbor, etc.)? See the sketch after this list.
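As a sketch for option 3), a basic registry really is a single container (the version tag and port are just the usual defaults):

# Start a private registry on the host, listening on port 5000.
docker run -d --name registry -p 5000:5000 --restart always registry:2

# Tag and push an image to it.
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest

Using it from other machines requires TLS (or an insecure-registry exception) and offers no access control, which is where the GitLab Container Registry or Harbor start to pay off.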
That really only focuses on the Continuous Integration part of your question. Once you figure this out, then you can start to think about how you want to deploy those images (Possibly even using one of the tools above).
Note: Docker doesn't eliminate all developer headaches. Does it solve some of the problems? Absolutely. But what Docker, and the accompanying container mindset, does best is shift many of those issues to the left: you see many of the problems in your processes early, instead of having them surface when you're pushing to prod and you suddenly have a fire drill. Again, Docker should not be seen as a cure-all; if you go into Docker thinking it will solve everything, you're setting yourself up for failure.
I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit; a common Rails stack), using Docker so it can be put on other machines as well. I have a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose, and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
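A minimal sketch of that build-and-push step, assuming a hypothetical Docker Hub account myuser and repository myapp:

# Build the application image from the Dockerfile in the current directory,
# tagging it with the current git revision.
docker build -t myuser/myapp:"$(git rev-parse --short HEAD)" .

# Authenticate against Docker Hub and make the image available to other machines.
docker login
docker push myuser/myapp:"$(git rev-parse --short HEAD)"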
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname inside the image. As @tomwj suggested, you can search Docker Hub for common images like postgres and redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
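As a sketch of what such a compose file might look like (written here as a shell heredoc; the image versions, variable names, and the DATABASE_HOST convention are assumptions to adapt to your app):

# The service names db and redis become the hostnames the Rails app uses.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: password
  redis:
    image: redis:6
  web:
    image: myuser/myapp:latest
    environment:
      RAILS_ENV: test
      DATABASE_HOST: db              # matches the service name above
      REDIS_URL: redis://redis:6379/0
    depends_on: [db, redis]
EOF

# Bring up the backing services, then run the suite inside the web container.
docker-compose up -d db redis
docker-compose run --rm web bundle exec rspec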
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
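If the image has the xvfb package installed, the usual trick is to let xvfb-run supply the virtual display (a sketch; the spec path is an assumption):

# Run the feature specs under a virtual X display so capybara-webkit
# has something to render against.
xvfb-run -a bundle exec rspec spec/features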
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up, hence no off-the-shelf solution.
I'm assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment.
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Set up a local Docker registry
Set up the Docker environment. The way to bring up multiple containers is to use Docker Compose; this will allow you to bring up the DB, Redis, Rails, etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker Swarm from the local Docker registry (a shell-level sketch of these steps follows below).
Test: run the tests against the application that is now running.
I've left out the Jenkins master/slave config because if you're only running on one machine you can simply increase the number of executors, i.e. the master can execute more jobs at the expense of speed.
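A rough sketch of the build/deploy/test steps as plain shell, assuming a local registry on localhost:5000, a single-node Docker Swarm, and placeholder names throughout:

# Build: produce the Rails app image and push it to the local registry.
docker build -t localhost:5000/railsapp:"$BUILD_NUMBER" .
docker push localhost:5000/railsapp:"$BUILD_NUMBER"

# Deploy: update the running Swarm service to the new image.
docker service update --image localhost:5000/railsapp:"$BUILD_NUMBER" railsapp

# Test: smoke-test the application now running (assumes the service publishes port 3000).
curl -fsS http://localhost:3000/ > /dev/null

In a real pipeline each of these would be a Jenkins stage, but the commands themselves stay this small.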
I am trying to help a sysadmin group reduce server and service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install, upgrade, or reconfigure it, and then restart it and hope it works.
I have heard that docker is a solution to this problem, but usually from developer circles in the context of deploying their node/python/ruby/c#/java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a windows environment if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data too, but that requires the use of volumes.
Can docker still be used in this case?
Yes, but it depends on the application. It needs to be installable headlessly, plus a couple of other fairly specific requirements (e.g. talking to third-party servers to obtain a license can create issues).
Can we install any random software on a container?
Yes... but: remember that if the container is removed and recreated, software installed inside it by hand will be gone. It's better to bake it into an image and then deploy that. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile describing the base OS and the steps it takes to install the application (the install should be headless).
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies what the local developers want, then you can either:
Let your build servers create the image and publish it to an internal Docker registry (best practice), or
Let your local developer publish it to an internal Docker registry.
At that point, your next-level environments can pull the image down from the Docker registry, configure it, and create the container.
In short, it will require a lot of elbow grease, but it is possible. A rough sketch follows.
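As a sketch of the first three pipeline steps above, using a Linux base image for brevity (on Windows the same flow applies with a Windows base image); the installer name, registry host, paths, and variables are all hypothetical:

# 1) Dockerfile describing the base OS and a headless install of the vendor software.
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
COPY vendor-installer.sh /tmp/vendor-installer.sh
RUN /tmp/vendor-installer.sh --silent --install-dir /opt/vendorapp \
    && rm /tmp/vendor-installer.sh
CMD ["/opt/vendorapp/bin/run"]
EOF

# 2) Build the image.
docker build -t registry.internal.example/vendorapp:1.0 .

# 3) Test it locally: configuration comes in as environment variables,
#    persistent data lives on a named volume.
docker run -d --name vendorapp \
  -e VENDOR_LICENSE_SERVER=licenses.internal.example \
  -v vendorapp-data:/opt/vendorapp/data \
  registry.internal.example/vendorapp:1.0

# Publish to the internal registry once it looks good.
docker push registry.internal.example/vendorapp:1.0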
Can we install any random software on a container?
Generally yes, but you can run into many problems with legacy software that was developed to run on bare metal.
First, there can be persistence problems, but these can be solved using volumes.
Second, a program that works well on a full OS may not work as well in a container. Containers differ in some ways from VMs or bare metal; for example, due to the missing init process, some containers have a zombie-process issue. You can read about the other differences here
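The zombie-process issue in particular has a cheap mitigation: ask Docker to run a minimal init process as PID 1 (the image name is a placeholder):

# --init starts a tiny init (tini) as PID 1 inside the container, which reaps
# zombie children that the legacy application never waits on.
docker run -d --init legacy-app:latest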
Docker brings big benefits for stateless apps, but some heavy legacy apps do not work so well inside containers and should be tested thoroughly before being used in production.
I'm currently investigating using Mesosphere in production to run a couple of micro-services as Docker containers.
I got the DCOS deployment done and was able to successfully run one of the services. Before continuing with this approach, however, I also need to cover the development side (not of Mesos or Mesosphere itself, but the development of the micro-services).
Are there any best practices for running a local deployment of Mesosphere in a Vagrant box or something similar that would enable our developers to run all the services in our eco-system from existing Docker images, while running the one service they are currently working on from a local code folder?
I already know how to link the dev's code folder into a Vagrant machine and should also get the Docker part running, but I'm still kind of lost on the whole Mesosphere integration part.
Is there anyone who could point me to some resource on the Internet describing a possible solution for this? Has anyone done something similar and would care to share some insights?
Sneak Peek
Mesosphere is actively working on improving the developer experience surrounding DCOS. Part of that effort includes work on a local development cluster to aid application, service, and DCOS package developers. However, the solution is not quite ready for prime time yet. We have begun giving early access to select DCOS Enterprise Edition customers, though. If you'd like to hear more about that, please talk to your sales representative or contact sales through our web site: https://mesosphere.com/contact/
Public Tools
That said, there are many different tools already available that can help when developing Mesos frameworks or Marathon applications.
mesos-compose-dind
playa-mesos
mini mesos
coreos-mesos-cluster
vagrant-mesos
vagrant-puppet-mesosphere
Disambiguation
Mesosphere, Inc. is the company developing the Datacenter Operating System (DCOS).
The "mesosphere stack" historically refers to Mesos + Marathon (sometimes Chronos too, depending who you ask).
DCOS builds upon those open source tools and adds more (web gui, package manager, cli, centralized control plane, dns, etc.).
Update 2017-08-03
The two currently recommended local development options for DC/OS are:
dcos-vagrant
dcos-docker
I don't think there's "the" solution... I guess every company will try to work out the approach that best fits their development processes.
My company for example is not using DCOS, but a normal Mesos cluster with clustered Marathon and Chronos schedulers. We have three environments, each running CoreOS and Mesos/Marathon (in different versions, to be able to test against version upgrades etc.):
Local Vagrant clusters for our developers for local development/testing (can be configured to use different CoreOS/Mesos/Marathon versions based on the user_data files)
A test cluster (virtualized, latest CoreOS beta, latest Mesos/Marathon/Chronos)
A production cluster (bare metal, latest CoreOS stable, currently Mesos 0.25.0 and Marathon 0.14.1)
Our build cycle uses a build server (TeamCity in our case; Jenkins etc. should also work fine) which builds the Docker images and pushes them to our private Docker repository. The images are tagged automatically in this process.
We also have the possibility of launching them automatically via Marathon API calls to the cluster defined in the build itself, or they can be deployed manually by the developers. The updated Docker images are then pulled from our private Docker repository (make sure to use "forcePullImage": true to get the latest version if you don't use specific image tags).
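Such a Marathon API call is essentially a PUT of the app definition with the new image reference; a sketch with placeholder host, app id, and image, trimmed to the relevant fields:

# Update an existing Marathon app to a freshly pushed image; forcePullImage
# makes the agents re-pull the tag instead of reusing a cached copy.
curl -X PUT -H "Content-Type: application/json" \
  http://marathon.example.com:8080/v2/apps/my-service \
  -d '{
        "id": "/my-service",
        "instances": 2,
        "container": {
          "type": "DOCKER",
          "docker": {
            "image": "registry.example.com/my-service:42",
            "forcePullImage": true
          }
        }
      }'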
See
https://mesosphere.github.io/marathon/docs/native-docker.html
https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
https://mesosphere.github.io/marathon/docs/rest-api.html#post-v2-apps
https://github.com/tobilg/coreos-mesos-cluster