Is it possible to deploy an existing app to Docker?

I have an existing MEAN stack application. I have found many tutorials, but I cannot find anything about deploying an existing app to Docker. Is this possible?

As long as you can get the sources of your project onto your deployment platform (the Ubuntu server), you can then follow the guide "Dockerizing a Node.js web app".
It shows how to create a simple web application in Node.js, then build a Docker image for that application, and lastly run the image as a container.
You can see a more complete example at Semaphore.
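To give a concrete idea of what that guide walks through, here is a minimal sketch for containerizing an existing Node/Express (MEAN) app; the entry point (server.js), port and image name are assumptions about your project:

    # Write a minimal Dockerfile next to your package.json
    # (entry point and port are assumptions, adjust to your app).
    cat > Dockerfile <<'EOF'
    FROM node:18
    WORKDIR /usr/src/app
    COPY package*.json ./
    RUN npm install
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]
    EOF

    # Build the image and run it as a container.
    docker build -t my-mean-app .
    docker run -d -p 3000:3000 my-mean-app

For MongoDB you would typically run a separate mongo container (or use docker-compose) rather than baking the database into the app image.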

Related

How to use a Docker server to deploy/update a WinForms application?

I'm looking for some advice on where I can learn to set up a Docker server to deploy/update a WinForms app. I've been on YouTube, the Docker forums and various other sites, but they all want to run the app they build from Docker, or "dockerize" the app. I don't want to do that; I just need Docker to host some files that the app's auto-update feature can fetch. Any resources or help would be super helpful, thank you.
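One minimal way to do what this asks (hosting static update files rather than dockerizing the app itself) is to run a stock web-server container over a directory of files; a sketch, where the directory, port and container name are assumptions:

    # Serve /srv/updates read-only via nginx on port 8080
    # (paths, port and name are assumptions about your setup).
    docker run -d --name winforms-updates \
      -p 8080:80 \
      -v /srv/updates:/usr/share/nginx/html:ro \
      nginx

The app's auto-updater would then download its files from http://yourserver:8080/.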

How do I set up an Oracle Dynamo Admin Server?

I am confused by the Oracle documentation on how to set up (ATG) Web Commerce, which is available on the edelivery website.
I would like to get to the step where I have properly set up the admin console.
Running the bin files on a server does not seem to work, for various reasons: either the installation finishes but nothing works, or the installer endlessly asks for arbitrary input.
Also, I want to know whether it is possible to set up the server in Docker and/or on an Amazon Linux EC2 instance.
There are quite a number of steps involved in getting the ATG Admin Server up and running. These start with installing a JDK and an application server, and provisioning a database. Once you have gone through the installer (which you downloaded from the edelivery site) you need to go through a basic setup process using the CIM tool. The installation process (for ATG 11.3.1) is documented here, while the steps to set up a basic application are documented here.
Working through the steps in the CIM tool, you will end up with a deployable .ear file that you can copy to your application server. Once your application server is started, you will be able to access the Dynamo Admin server.
As of version 11.3.1, ATG is officially supported on Docker. Since you compile your own .ear file and deploy it to an application server (such as WebLogic), Docker support won't necessarily provide you with a ready-made ATG image. It simply means you can run your compiled artefact in a Docker container. More likely you will want to get hold of a WebLogic Docker image and deploy your ATG artefact there.
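In practice that last step could look something like the sketch below; the image tag, ports and paths are assumptions (Oracle's WebLogic images sit behind a licence acceptance on container-registry.oracle.com):

    # Pull a WebLogic image and mount the compiled ATG .ear into it
    # (image tag, ports and paths are assumptions about your setup).
    docker login container-registry.oracle.com
    docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.4
    docker run -d --name atg-admin \
      -p 7001:7001 \
      -v $(pwd)/ATGProduction.ear:/u01/deploy/ATGProduction.ear \
      container-registry.oracle.com/middleware/weblogic:12.2.1.4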

Apache Service Mix Deployment Approaches

Folks,
We have an enterprise application that uses Apache ServiceMix for deployment. The application consists of various services, each created as a separate Maven project (bundle). During development we build each service separately and, in order to deploy it, place it in the deploy folder. We also have to uninstall the bundle from the container (say, Karaf) and then install it again from the console to bring the new changes into effect. This is fine during the development phase.
Now we want to deploy the code to a UAT environment (Amazon EC2) for the client to do the testing, and we are unsure how to deploy the bundles to the remote environment. Is there a standard approach for CI using Jenkins (or some other tool) to automate the build and deploy process, so that someone who has no knowledge of the (SMX) bundles can deploy the code? We are using GitHub for source code management.
We have searched a lot in this regard and couldn't find any resources that provide leads/hints on this.
Any help/tips are highly appreciated. If you need more info, I can give more details.
~Ragesh
We have an almost identical setup: we use Jenkins to build, let the sysadmin copy the bundles to one server, and then rsync them to the rest of the servers.
Remember to always deploy the dependency bundles first and then the remaining ones.
Because of that ordering dependency, we haven't automated this last step.
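If you did want to automate it, the ordering has to be encoded somewhere. For illustration only, a Jenkins shell step could drive Karaf's SSH console (default port 8101) in dependency order; the host, credentials and bundle coordinates below are assumptions, and command names vary with the Karaf version:

    # Build and publish, then update bundles on the UAT box in
    # dependency order via Karaf's SSH console (all names assumed).
    mvn clean deploy
    ssh -p 8101 karaf@uat-host 'bundle:update mvn:com.example/common-api/1.2.0'
    ssh -p 8101 karaf@uat-host 'bundle:update mvn:com.example/service-a/1.2.0'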

Best workflow for developing and debugging - initially deploy in a Docker environment

Programming Go in a Docker container or not?
For some time now, I have looked for a good way to program, debug and finally deploy to a Docker environment.
I have looked at VS Code, debugging into a container via Delve, but it is difficult to attach the debugger.
With Eclipse Che, this is not supported in the IDE.
Since Docker itself is written in Go, the good people at Docker must have a good workflow?
Maybe the conclusion is that I should not develop and debug inside a container but on the host machine, and only deploy the compiled result into a container.
What is your experience?
We are using Docker to deliver our products nowadays, and just as you said, we develop and debug them on the host machine. If we hit an issue that is hard to reproduce outside the runtime environment, we attach a debug binary to the Docker image to replace the built-in one.
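"Attach a debug binary" can be as simple as rebuilding the Go binary without optimisations and layering it over the production image; a sketch, where the image and package names are assumptions:

    # Build an unoptimised binary so Delve can step through it,
    # then layer it over the shipped image (names are assumptions).
    GOOS=linux go build -gcflags 'all=-N -l' -o myapp-debug ./cmd/myapp
    cat > Dockerfile.debug <<'EOF'
    FROM mycompany/myapp:latest
    COPY myapp-debug /usr/local/bin/myapp
    EOF
    docker build -f Dockerfile.debug -t mycompany/myapp:debug .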

Deploy features.xml in servicemix during jenkins Build

I have my features.xml file in the src/main/resources/features folder. When I build my project through Jenkins, my bundle is published to the Nexus repository. My requirement is that after the bundle reaches Nexus, features.xml should automatically be deployed to ServiceMix as part of the build alone; I should not have to open the ServiceMix console to install the feature. Please help.
You may think about using a KAR (KAraf aRchive).
More information can be found here: http://karaf.apache.org/manual/latest-3.0.x/users-guide/kar.html
You can build a KAR (through Jenkins) containing your feature, and then use hot deployment.
Apache Karaf also provides a KAR deployer, meaning you can drop a KAR file directly into the deploy folder. Apache Karaf will automatically install KAR files from the deploy folder. You can change the behaviour of the KAR deployer in etc/org.apache.karaf.kar.cfg.
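Wired into Jenkins, the KAR approach reduces to one extra build step; a sketch, where the host, user and paths are assumptions (the karaf-maven-plugin's kar packaging produces the .kar under target/):

    # Build the KAR, then hot-deploy it by copying it into the
    # container's deploy folder (host and paths are assumptions).
    mvn clean package
    scp target/my-feature-*.kar smx@uat-host:/opt/servicemix/deploy/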
I have also been working on this, and my solution was to turn to automated scripting. I wrote an SSH- and FTP-based program which would stop the SMX instance, delete the ${karaf.home}/data/cache directory, replace the old feature file with the new one retrieved via FTP, then restart the Karaf container.
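The core of such a script might look like the following sketch (host, paths and the stop/start scripts are assumptions about the installation):

    # Stop the container, clear the bundle cache, swap in the new
    # features file, restart (all paths are assumptions).
    ssh smx@uat-host <<'EOF'
    /opt/servicemix/bin/stop
    rm -rf /opt/servicemix/data/cache
    cp /tmp/features.xml /opt/servicemix/etc/features.xml
    /opt/servicemix/bin/start
    EOF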
If you are open to looking into other possibilities:
You can look into Fuse Fabric, which can link many SMX containers together and implement version increases and rollbacks. Currently, I believe this would also need scripting to happen automatically.
The third option comes in the form of building Docker images and deploying them via OpenShift V3, which was just unveiled at the Red Hat Summit 2015. It's worth noting it's fairly new, but it does pack a very impressive feature set.
