Upload a bundle to remote Karaf - JMX

I have Karaf running on host A. How can I upload a bundle to host A from another host (B)?
Can I do that with the JMX service?
Regards,
HNT

Your question suggests to me that you may need Karaf Cave.
In short, Cave provides Karaf cluster support. In your case host A could be the master and host B the slave, so when both Karaf instances belong to the same Cave cluster, installing a bundle or feature on one node propagates the change to the other nodes in the cluster.
I highly recommend you have a look at the Karaf Cave documentation; all these ideas will become clear after that.

I think you simply want to deploy a bundle to a single Karaf instance? Then use a file transfer protocol like SCP to upload the bundle to the file system of the target host. Afterwards either move it into Karaf's deploy folder, or install it into the local Maven repository and install the bundle from the Karaf shell. Alternatively, you could use a remote Maven repository like Nexus.
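A rough sketch of the SCP route from host B (paths, ports and the bundle name are placeholders; the second variant assumes Karaf's SSH console is enabled on its default port 8101):
scp my-bundle-1.0.0.jar user@hostA:/tmp/
# either drop it into the hot-deploy folder ...
ssh user@hostA 'cp /tmp/my-bundle-1.0.0.jar /opt/karaf/deploy/'
# ... or install and start it through the Karaf shell over SSH
ssh -p 8101 karaf@hostA 'bundle:install -s file:/tmp/my-bundle-1.0.0.jar'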

Related

I can't find jetty-https.xml in Nexus 3

I installed an instance of Nexus Repository Manager 3 in Rancher and I'm trying to use an HTTPS port for a Docker hosted repository. This means I need to create a self-signed certificate to make it work. After a lot of research I ran into a problem: I can't find jetty-https.xml in /etc. The question is, does this file exist or do I need to create it?
Source:
https://support.sonatype.com/hc/en-us/articles/217542177?_ga=2.62350444.1144825414.1623920039-1845083682.1622816513
https://help.sonatype.com/repomanager3/system-configuration/configuring-ssl#ConfiguringSSL-HowtoEnabletheHTTPSConnector
After modifying the nexus.properties file in /nexus-data/etc/, uncommenting the nexus-args line and restarting the container, jetty-https.xml appeared in $install-dir/etc/jetty/. If you check the logs you can see the exact location of the Jetty config folder.
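For reference, per the Sonatype docs linked above, the uncommented entries in /nexus-data/etc/nexus.properties usually end up looking something like this (the SSL port is an assumption for your setup):
application-port-ssl=8443
nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml,${jetty.etc}/jetty-https.xml
Then restart the container (docker restart <nexus-container>).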

Docker without internet

I am currently working on a project which needs to be deployed on customer infrastructure (which is not cloud-based) and will not have internet access.
We currently deploy our application manually and install dependencies from tarballs; can Docker help us here?
Note:
Application stack:
Node.js
MySQL
Elasticsearch
Redis
MongoDB
We will not have internet.
You can use docker save to export Docker images as TAR archives and docker load to import them again on the target. If you package your application files within these images, this can be used to deliver your project to your customers.
Also note that the destination hosts must all have Docker Engine installed and running.
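A minimal sketch of that hand-off (image names, tags and the archive path are placeholders):
# on a machine that still has internet access / your build server
docker save -o app-images.tar myapp:1.0 mysql:8.0 redis:7 mongo:6 elasticsearch:8.11.1
# move app-images.tar to the customer host (USB drive, internal share, ...), then:
docker load -i app-images.tar
docker images   # verify the images arrived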
If you have control over your dev environment, you can also use Nexus or GitLab as your private Docker registry. You can then pull your images from there into production, if that makes sense for your product.
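With a private registry the delivery is the usual tag/push/pull cycle (registry host and image name are placeholders):
docker tag myapp:1.0 registry.example.internal/myapp:1.0
docker push registry.example.internal/myapp:1.0
# on the production host, which can reach the internal registry but not the internet
docker pull registry.example.internal/myapp:1.0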
I think the biggest advantage is in your local dev setup. Instead of installing, say, MySQL locally, you can run it as a Docker container. I use docker-compose for all client services in my current project. This keeps your computer clean, makes it easy to avoid versioning hell (if you use different versions for each release or stage), and you don't have to mess around with configuration on each dev machine.
In my previous job every developer had a local Oracle SQL install, and that was not a happy state of affairs.
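For example, a throwaway MySQL for local development could look like this instead of a host install (tag, credentials and port are placeholders):
docker run -d --name dev-mysql \
  -e MYSQL_ROOT_PASSWORD=devpassword \
  -e MYSQL_DATABASE=myapp \
  -p 3306:3306 \
  mysql:8.0
# remove it when you are done; -v also drops its anonymous data volume
docker rm -f -v dev-mysql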

Where are you supposed to store your docker config files?

I'm new to Docker, so I have a very simple question: where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think such files belong on GitHub since they're used for deployment, though it's not a bad place to store them.
I was just wondering whether Docker has any support for storing such config files so you can add them as part of running an image.
Do you have to use swarms?
Typically you'll store the configuration files on the Docker host and then use volumes to bind mount them into the container. This lets you manage the configuration file separately from the running containers. When you make a change to the configuration, you can just restart the container.
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
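For the MongoDB example from the question, a bind-mounted config could look like this (host paths and the image tag are placeholders):
# the config file lives on the host, managed by Git/Salt/Puppet/Chef as described above
docker run -d --name mongo \
  -v /srv/mongo/mongod.conf:/etc/mongod.conf:ro \
  -v /srv/mongo/data:/data/db \
  mongo:6.0 --config /etc/mongod.conf
# after changing /srv/mongo/mongod.conf on the host:
docker restart mongo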
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) uses GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a Chef cookbook, or a Puppet manifest, etc.) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, and so on.

Summary of ways of using Docker (web development)

Screenshot: my docker-compose for wordpress
Last week I learned how to deploy 3 containers: WordPress, phpMyAdmin and MySQL. They work fine. The containers were connected to each other, using a volume and the same network. Docker was configured from a docker-compose .yml file, and I used the Git of my native operating system to version the changes.
But then I found another way to do the same thing:
I installed a Debian image, then added git, apache2, mariadb and phpmyadmin, connected everything, and used "docker commit" to save the changes of my development each time.
Then a coworker told me to use a Dockerfile, add volumes and use Git for versioning.
Which is the best way?
What problems do the first and second ways have?
Is there another way?
From my point of view you are looking for an optimal deployment structure; it's a long way to get there and to find information about it. Here are my opinions:
1. I wouldn't recommend this setup as it stands, because the mix of operating systems (Windows/Linux) can cause big problems, for example with line breaks and folder/file names. But the docker-compose idea is the right way to set up a test/dev environment locally.
2. keeps your changes outside of Git; that's not optimal, but it is a workable way to save everything.
3. is alright, but you have already achieved that with docker-compose. Here the usage of volumes can cause the same problems as 1. You can use Git versioning on the command line to develop, but I don't recommend it.
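In practice the difference between 2. and 3. is that with a bind-mounted directory your site files live on the host and go through Git, so you never need docker commit. A rough sketch (container name, port, paths and image tag are placeholders, and it assumes a reachable MySQL as in your compose setup):
mkdir site && cd site && git init
docker run -d --name wp \
  -p 8080:80 \
  -v "$(pwd)/html:/var/www/html" \
  wordpress:6.4
# the image populates ./html on first start; edit themes/plugins there
# and version the changes with Git instead of docker commit
git add html && git commit -m "site changes"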
Alternative Ways
Use software that can deploy remotely to the PHP server, like PhpStorm, Eclipse or WinSCP: develop the application locally and link it to the Apache/PHP machine or container over FTP/SFTP. You work locally and transfer the changed files into the running machine or container. The Git versioning is done on the local machine. You can also use MySQL tools to back up the database locally, so if the Docker container breaks you can easily set it up again.
Make sure you also save the config files of Apache, PHP and MySQL into Git; that makes re-creating the Docker container painless.
Use (GitLab & GitLab CI), (Bitbucket & Bamboo) or (Git & Jenkins) to deploy your PHP changes to the servers or Docker containers.
Ideally, read some articles about continuous delivery and continuous integration.
This option is suitable for rollouts to customer, dev or beta systems.

How to automate application deployment when using LXD containers?

How should applications be scripted/automatically deployed when running in LXD containers?
For example, is the best way to deploy an application in an LXD container to use a bash script (which deploys the application)? How do you execute this bash script inside the container by running a command on the host?
Are there any tools/methods for doing this in a way similar to Docker recipes?
In my case, I use Ansible to:
build the LXD containers (web, database, redis, for example);
connect to the containers and deploy the services and code needed.
You can also build your own images, for example with the services and/or code already deployed, and build specific containers from those images.
I was doing this from before LXD had Ansible support (Ansible 2.2). I prefer to use SSH instead of the lxd connection when I connect to the containers to deploy services/code; the containers come with a profile where I have set up my SSH public key (so I get direct SSH connections by key, no passwords).
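If all you need is the "run a bash script inside the container from the host" part of the question, plain lxc commands cover it (container and script names are placeholders):
lxc file push deploy.sh web/root/deploy.sh
lxc exec web -- bash /root/deploy.sh
# or pipe the script in without copying it first
lxc exec web -- bash < deploy.sh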
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build LXD image templates, including Apache, Tomcat and HAProxy.
Scripts to demonstrate custom application image builds, such as Apache hosting key/value content and HAProxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure HAProxy as a layer 7 proxy to route HTTP requests between boxes and containers based on URI prefix routing, using the ports it previously deployed and mapped.
At the higher level it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, and hook them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades, using the layer 7 proxy's ability to gracefully bleed off old connections while accepting new connections on the new version.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From that we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we avoid having duplicate code in any location. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer 7 routing image until we knew everything about how the other images would be mapped.
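A hedged sketch of that layered build idea using plain lxc commands (aliases, base image and package names here are illustrative, not the project's actual scripts):
# layer 1: updated Ubuntu
lxc launch ubuntu:22.04 build-base
lxc exec build-base -- bash -c "apt-get update && apt-get -y dist-upgrade"
lxc stop build-base
lxc publish build-base --alias ubuntu-updated
# layer 2: basic Apache, built on top of layer 1
lxc launch ubuntu-updated build-apache
lxc exec build-apache -- apt-get install -y apache2
lxc stop build-apache
lxc publish build-apache --alias apache-base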
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes here for constructing a template:
When iterating through local files on your host system you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection still runs the action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful for knowing which environment Ansible is actually operating in.
I'm surprised no one here has mentioned Canonical's own tool for managing LXD:
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
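Turning IPv6 off on the default bridge is a single lxc command (the bridge name assumes the default lxdbr0):
lxc network set lxdbr0 ipv6.address none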
snap install juju --classic
juju bootstrap localhost
From there you can learn about Juju models and deploy machines or prebaked images like Ubuntu:
juju deploy ubuntu
