I want to create a social network with the Django framework on OpenShift, so I need at least a graph database (like Neo4j) and a relational database (like MySQL). I had trouble adding Neo4j to my project because OpenShift has no cartridge for it, so I decided to install it with the DIY cartridge, but I don't understand what the start and stop files in .openshift/action_hooks are for. I then followed these steps to install Neo4j on the server:
1. SSH to my account:
ssh 1238716...#something-prolife.rhcloud.com
2. Go into a folder where I have write permission (I went to app-root/repo/ and made a test directory in it with mkdir), download the Neo4j package from here, and extract it into the test folder I created:
tar -xvzf neo4j-community-1.9.4-unix.tar.gz
3. Finally, run the Neo4j script to start the server:
neo4j-community-1.9.4/bin/neo4j start
But I see these logs and Neo4j does not start:
process [3898]... waiting for server to be ready............ Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
How can I run this database on OpenShift? Where am I going wrong? And where are the logs mentioned in "please check the logs"?
I've developed an OpenShift cartridge that fixes the permission issue on OpenShift. I had to change the HostBoundSocketFactory and SimpleAppServer classes in Neo4j so that they bind to a port available on OpenShift instead of port 0.
You can check at: https://github.com/danielnatali/openshift-neo4j-cartridge
It's working for me.
I would also not place it in app-root/repo; I would put it in app-root/data instead.
You also need to use the IP of the gear; I think the environment variable is something like OPENSHIFT_INTERNAL_IP. 127.0.0.1 is not available for binding, but I think the ports should be open.
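As a rough sketch, assuming Neo4j is unpacked under app-root/data (OPENSHIFT_DATA_DIR) and that your gear exposes its bind address in a variable such as OPENSHIFT_INTERNAL_IP (run env on the gear to confirm the exact names and allowed ports), you could point the server at the gear IP before starting it:

cd $OPENSHIFT_DATA_DIR/neo4j-community-1.9.4
# bind the REST server to the gear IP instead of localhost
# (property names match conf/neo4j-server.properties in Neo4j 1.9.x)
echo "org.neo4j.server.webserver.address=$OPENSHIFT_INTERNAL_IP" >> conf/neo4j-server.properties
# optionally move the HTTP port to one the gear allows; 15000 is just an example
echo "org.neo4j.server.webserver.port=15000" >> conf/neo4j-server.properties
bin/neo4j start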
There are two ways Neo4j can run: embedded, or standalone (exposed via a REST service).
Standalone is what you are trying to do. I think the right way to set up Neo4j would be to write a cartridge for OpenShift and then add that cartridge to your gear. There has been some discussion about this, but it seems that nobody has taken the time to do it. Check https://www.openshift.com/forums/openshift/neo4j-cartridge. If you decide to write your own cartridge, I might assist. Here are the docs: https://www.openshift.com/developers/download-cartridges.
The other option is running in embedded mode, which I have used. You need to set up a Java EE application (because the Neo4j embedded-mode libraries are only available for Java) and put the Neo4j libraries in your project. Then you expose some routes, check the parameters, and run your Neo4j queries inside the servlets.
I have WebSphere Application Server 8.5.5.14 hosting my ERP. I want to dockerize the application and deploy it into a Kubernetes cluster. Can anyone provide me with information on how to create an image out of my existing WAS 8.5.5.14?
In theory you could do this by creating a tarball of the filesystem and importing it into Docker to make an image via something like:
cat WAS.tar | docker import - appImage
but there are a number of issues you'll need to avoid. For example, if you have resources (JDBC drivers, resource adapters, etc.), the tarball will need to include all of them. You'll also need to expose all of the required ports for your app and its administration. A better way, and the best practice, is to start with an IBM-supported image of traditional WAS and build your system on top of it.
There are detailed instructions to do this at https://github.com/WASdev/ci.docker.websphere-traditional#docker-hub-image
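As a rough sketch, assuming the image name and exposed ports documented on that Docker Hub page (verify them against the README for your WAS level), pulling and running the base image locally looks something like:

docker pull ibmcom/websphere-traditional
# 9043 is the admin console (HTTPS) and 9443 the default application HTTPS port in that image
docker run -d --name was -p 9043:9043 -p 9443:9443 ibmcom/websphere-traditional

You would then layer your EAR files, JDBC drivers, and wsadmin configuration scripts on top of it in your own Dockerfile.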
F Rowe's answer is good; if you follow their advice of using the official images, you will be using WebSphere v9.0 in the container. You can use this tool, which can help you figure out whether there are any changes you need to make to your application to get it working in the container. It also generates some of the wsadmin scripts to configure the server in the image.
I have a test framework running on my local machine (and in Git) that is based on the TestCafe-Cucumber (Node.js) example at https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a Docker container.
The framework executes through a CLI command (npm test) with a few parameters.
I know the easiest way is to call the Git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on my localhost port 8085 as http://0.0.0.0:8085 (although I get a DNS error, as it's not a web app - please correct me if I am wrong here).
The concern here is: how can I make it work like a hosted app, so that once the deployment completes, Jenkins/Octopus could call it as a service through the URL (http://0.0.0.0:8085), along with the few parameters the framework uses to execute the test cases?
I would appreciate any solutions the experts here can suggest.
I guess there is no production-ready application or service to solve this task.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since this way you will have more options to manage your test sessions.
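For illustration, once such a wrapper service is in place, a Jenkins or Octopus step could trigger a run over HTTP. The endpoint name and parameters below are purely hypothetical; they would be whatever you define in your Express routes:

# hypothetical endpoint exposed by your Express wrapper around npm test
curl -X POST "http://your-test-runner-host:8085/run-tests" \
  -H "Content-Type: application/json" \
  -d '{"browser": "chrome:headless", "tags": "@smoke"}'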
Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry, and I am therefore trying to understand the difference between the two.
I'll describe the application as a novice because I am.
I start the application as a service (from /etc/init.d), and it reads a config file during startup that specifies which modules to load and the IPs of the other services and of itself (0.0.0.0 does not work, so I have to give the actual IP).
I had to manually update the IP and some details in the config file whenever the container started, so I wrote a startup script that makes all the changes when the container starts and then runs the service start command.
Now, moving on to Cloud Foundry: the first thing I was not able to find is how to deploy a C application. I then found a C buildpack and a binary buildpack option. I still have to try those, but what I am not able to understand is how I can provide a startup script to a Cloud Foundry container, or, in brief, how to achieve what I was doing with Docker.
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
I hope I was clear enough to explain my doubt.
Help appreciated.
An old question, but a lot has changed since this was posted:
Recently I started to practice Dockers. Basically, I am running a C application on Docker container. Now, I want to try cloud foundry, therefore, trying to understand the difference between the two.
...
The last option I have is to use docker containers in Cloud foundry, but I want to understand if I can achieve what I described above.
There's nothing wrong with using Docker containers on CF. If you've already got everything set up to run inside a Docker container, being able to run that on CF gives you yet another place you can easily deploy your workload.
While these are pretty minor, there are a couple of requirements for your Docker container, so it's worth checking them to make sure it's possible to run on CF.
https://docs.cloudfoundry.org/devguide/deploy-apps/push-docker.html#requirements
Anyways, I am not working on this now as CF is not suitable for the project. It's an SIP application and CF only accepts HTTP/S requests.
OK, the elephant in the room. This is no longer true. CF has support for TCP routes, which allow you to receive TCP traffic directly to your application. This means it's no longer just HTTP/S apps that are suitable for running on CF.
Instructions to set up your CF environment with TCP routing: https://docs.cloudfoundry.org/adminguide/enabling-tcp-routing.html
Instructions to use TCP routes as a developer: https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-route-with-port
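As a quick sketch of the developer side, assuming your operator has set up a shared TCP domain (tcp.example.com here is a placeholder; ask for the real domain name), mapping a TCP route looks roughly like this:

# reserve a port on the shared TCP domain for your space
cf create-route my-space tcp.example.com --random-port
# or map a specific port directly to your running app
cf map-route my-sip-app tcp.example.com --port 61001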
Now, moving on to Cloud Foundry, the first thing I was not able to find is 'How to deploy C application' then I found a C build pack and a binary build pack option.
Picking a buildpack is an important step. The buildpack takes your app and prepares it to run on CF. A C buildpack might sound nice, as it would take your source code and build and run it, but it's going to get tricky, because your C app likely depends on libraries, libraries that may or may not be installed.
If you're going to go this route, you'll probably need to use CF's multi-buildpack support. This lets you run multiple buildpacks. If you pair this with the Apt buildpack, you can install the packages that you need so that any required libraries are available for your app as it's compiled.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
https://github.com/cloudfoundry/apt-buildpack
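Pairing the two linked buildpacks might look something like the following sketch. The package names in apt.yml are placeholders for whatever libraries your app needs, and the final buildpack would be whichever C buildpack you choose (check cf buildpacks, or use a buildpack URL):

# apt.yml at the root of the files you push, listing the extra packages to install
cat > apt.yml <<'EOF'
---
packages:
  - libssl-dev
  - libcurl4-openssl-dev
EOF
# the apt buildpack runs first; the last buildpack is the one that builds and starts the app
cf push my-c-app -b https://github.com/cloudfoundry/apt-buildpack -b <your-c-buildpack>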
Using the binary buildpack is another option. In this case, you'd build your app locally. Perhaps in a docker container or on an Ubuntu VM (it needs to match the stack being used by your CF provider, i.e. cf stacks, currently Ubuntu Trusty or Ubuntu Bionic). Once you have a binary or binary + set of libraries, you can simply cf push the compiled artifacts. The binary buildpack will "run" (it actually does nothing) and then your app will be started with the command you specify.
My $0.02 only, but the binary buildpack is probably the easier of the two options.
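As an illustration of the binary buildpack route (the app name, binary, and flags are placeholders; the key points are the buildpack choice and the explicit start command):

# push only the locally compiled artifacts and tell CF how to start them
cf push my-c-app -b binary_buildpack -c './my-app --config config.ini'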
what I am not able to understand how I can provide a startup script to a cloud foundry container or in brief how to achieve what I was doing with Dockers.
There are a few ways you can do this. The first is to specify a custom start command. You do this with cf push -c 'command'. This would normally be used just to start your app, like './my-app', but you could also use it to do other things.
Ex: cf push -c './prep-my-app.sh && ./my-app'
Or even just call your start script:
Ex: cf push -c './start-my-app.sh'.
CF also has support for a .profile script. This can be pushed with your app (at the root of the files you push), and it will be executed by the platform prior to your application starting up.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
Normally, you'd want to use a .profile script, as you'd want to let the buildpack decide how to start your app (setting -c will override the buildpack), but in your case with the C or binary buildpacks it's unlikely the buildpack will be able to do that, so you'll end up having to set a custom start command anyway.
For this specific case, I'd suggest using cf push -c as it's slightly easier, but for all other cases and apps deployed with other buildpacks, I'd suggest a .profile script.
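As an example of the .profile approach (a sketch only: config.ini and the listen_ip key are placeholders for whatever your app actually reads, and you should confirm the instance-IP variable name for your CF version), it could do the IP substitution you previously handled in your Docker startup script:

# .profile - sourced by the platform before the start command runs
# rewrite the listen address using the container IP provided by CF
sed -i "s/^listen_ip=.*/listen_ip=${CF_INSTANCE_INTERNAL_IP}/" config.ini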
Hope that helps!
I have a hunch that Docker could greatly improve my web-dev workflow, but I haven't quite managed to wrap my head around how to approach a project when adding Docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
Git
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run sql-dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map the MySQL data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
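A sketch of what that could look like (the host path is illustrative, "image" stands for your own MySQL image as in the later example, and /var/lib/mysql assumes the default MySQL data directory):

docker run -d --name db -v /home/user/dev/mysql-data:/var/lib/mysql image /usr/sbin/mysqld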
Yes, I think you should have a separate container for db.
I am using just a basic script:
#!/bin/bash
# start the MySQL and Apache containers detached and capture their container IDs
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySql=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can use read-only mounting so that no changes are made to this directory if you want (your app should store its data somewhere else anyway).
docker run -v=/home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
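For the stack described in the question, a minimal sketch of a compose file might look like the following. Image names, paths, and the password are placeholders to adapt to your own CMS image; the db data directory is mounted from the host so it survives container restarts:

# write a docker-compose.yml (v1 format) and bring both containers up together
cat > docker-compose.yml <<'EOF'
web:
  build: .
  ports:
    - "8080:80"
  links:
    - db
  volumes:
    - /home/user/dev/cmsdir:/var/www/cmsdir
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: secret
  volumes:
    - ./mysql-data:/var/lib/mysql
EOF
docker-compose up -d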
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running.
Yes I would recommend having a separate instance for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this will link a specific folder in the container to a folder on the host running the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turn key solution achieving all of the needs you have listed.
I've put together an easy-to-use Docker Compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with Xdebug
The README.md should be clear enough to get you started.
I'm new to RabbitMQ and, by association, new to Erlang. I'm running into a problem where I cannot start RabbitMQ because the 'home' location for the .erlang.cookie has been changed. I've run the command
init:get_argument(home).
which returns
{ok,[["H:\\"]]}
This is an issue, as this is a network drive I do not always have access to. I need to be able to change the 'home' directory to something local.
When I run
rabbitmqctl status
it gives me the following error:
{error_logger,{{2013,7,5},{14,47,10}},"Failed to create cookie file 'h:/.erlang.cookie': enoent",[]}
which again leads me to believe that there is an issue with the home argument. I need to be able to change this location to something local.
Versions:
Erlang R16B01 32 bit
RabbitMQ 3.1.3
Running on Win7
I have uninstalled and reinstalled multiple times hoping to resolve this. I am looking for a way to change the 'home' location in erlang so rabbitmq can properly start.
The solution I came up with was to not bother with the installed service. I used rabbitmq-server.bat to start the service, adding SET HOMEDRIVE=C: at the start of the file. I'm planning to run this from a parent service so that I can install it on servers.
A final note to the Erlang and RabbitMQ developers: using pre-existing environment variables for your own purposes is just wrong. You should create your own, or better yet, put this stuff in a configuration file. Telling people to talk to their system administrators to change the HOMEDRIVE and APPDATA variables is arrogant, to say the least.
You need to set the correct values for the HOMEDRIVE and HOMEPATH environment variables. These links should help:
Permanently Change Environment Variables in Windows
Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user