Status quo:
We are developing a project on the client side. There is an existing Teradata appliance on the DEV side and one on the production side.
On the DEV side there is more than one supplier, and every supplier has its own sub-database. The DBAs are not granted direct permissions; instead they call macros to create users and databases, grant rights, etc. They have no SYSDBA permissions on Teradata.
On the PRD side these macros don't exist. Every statement has to be run as-is, and everything has to run automatically (packaged via RPM).
Therefore it is currently impossible to do complete packaging and integration testing.
We have a Jenkins instance running that already handles several other tasks. The system is virtual, we have root access, and we already have an established packaging process.
What we need / our idea: an image of a plain Teradata database we can connect to (remote is fine) and run our DDL scripts against.
The idea is to start some kind of image (Docker, VMware, VirtualBox) which provides a small Teradata installation, run our DDLs against it, and throw the result away at the end.
Docker would be the best fit here, but I'm open to ideas. Is there some kind of trial Teradata (v15) that can be used for this?
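To make the throwaway idea concrete, the workflow we have in mind would look roughly like this (the image name, credentials and wait step are purely hypothetical; as far as I know there is no official public Teradata image, and the client tools would have to be on the Jenkins host):

    #!/bin/sh
    # Hypothetical throwaway workflow - "teradata-express" is not a real public
    # image name, it just stands in for whatever Teradata image we end up with.
    set -e

    docker run -d --name td-test -p 1025:1025 teradata-express   # start a disposable instance
    sleep 120   # naive fixed wait until the database accepts connections

    # run our DDL scripts via bteq (dbc/dbc are the default Teradata Express credentials)
    for ddl in ddl/*.sql; do
        printf '.LOGON localhost/dbc,dbc\n.RUN FILE=%s\n.LOGOFF\n.QUIT\n' "$ddl" | bteq
    done

    docker rm -f td-test   # throw the result away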
I have looked into this (as I need to do the same) and here is what I have found:
You can actually run the VMware image in VirtualBox (which is what I will be doing).
Once I had the image running, I tarred up and dumped out the file system at root (/), and I was able to start it up in Docker.
However, Teradata Express also has a RAID 1 setup (I think), consisting of the two VMDKs PDISK0 and PDISK1 (SCSI sdb and sdc). I couldn't find a way to replicate this in Docker (without spending more time, and my time on this is up), so for now I think running in Docker is not an option. But if someone more familiar with Docker can find a way to virtualize the RAID 1, I am happy to be corrected.
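For reference, the "tar the file system and start it in Docker" step is the standard docker import workflow; roughly like this (paths, exclusions and the image name are just examples):

    # inside the running Teradata Express VM: tar up the root file system,
    # excluding pseudo file systems
    tar --numeric-owner -czf /tmp/teradata-rootfs.tar.gz \
        --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp /

    # on the Docker host: turn the tarball into an image and start a container
    docker import /tmp/teradata-rootfs.tar.gz teradata-rootfs:latest
    docker run -it teradata-rootfs:latest /bin/bash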
Related
I am trying to find the best way to achieve the following scenario.
I am currently working with a complex enterprise web application that consists of:
DB
BPM Engine
SOA Engine
Reporting Engine
Web Application Server
IDE
The application is currently running in non-prod and prod environments, but each environment is independent (no infrastructure as code, and deployments go from dev -> ... -> prod).
When a new developer comes in, they can't run the system on their local machine as it involves too many components (I will come back to this later). So they do development on their local machine and, to test, they need to publish and deploy to dev. Test, rinse and repeat.
I am currently working on reverse engineering the whole thing so I can get it working on my local machine, provided that I can install and run all the components. I am nearly there after fiddling with a lot of configuration, settings, etc.
I would like others to be able to use this work, so they can also run the project on their local machines. In fact, since we will be migrating soon, I would like to package the whole thing in a way that I can deploy it anywhere (with the app already working and configured), parametrised somehow by environment (DEV, SYS, UAT, PROD). According to my understanding, this is what a Docker image would do for you, correct? You do all the work and then you create an image out of it? Then you can have this image running in a container, and that way other people can 'reuse' your work?
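To make that concrete, here is roughly the kind of thing I have in mind (the image name, environment variable and ports are made up):

    # build the image once, with the app already installed and configured
    docker build -t myapp:latest .

    # run the very same image in each environment; only the parameters differ
    docker run -d --name myapp-dev  -e APP_ENV=DEV  -p 8080:8080 myapp:latest
    docker run -d --name myapp-uat  -e APP_ENV=UAT  -p 8081:8080 myapp:latest
    docker run -d --name myapp-prod -e APP_ENV=PROD -p 8082:8080 myapp:latest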
Is this the correct way of doing it? Any hints/comments would be appreciated.
Apologies for my writing.
I am confused by the Oracle documentation on how to set up (ATG) Web Commerce, which is available on the edelivery website.
I would like to get to the step where I have properly set up the admin console.
Running the bin files on a server does not seem to work, for various reasons:
either the installation finishes but nothing is working
OR
the installation endlessly asks for arbitrary input.
Also, I want to know if it is possible to set up the server in Docker and/or on an Amazon Linux EC2 instance.
There are quite a number of steps involved in getting the ATG Admin Server up and running. These start with installing a JDK and an application server and provisioning a database. Once you have gone through the installer (which you downloaded from the edelivery site), you need to go through a basic setup process using the CIM tool. The installation process (for ATG 11.3.1) is documented here, while the steps to set up a basic application are documented here.
Working through the steps in the CIM tool, you will end up with a deployable .ear file that you can copy to your application server. Once your application server is started, you will be able to access the Dynamo Admin server.
As of version 11.3.1, ATG is officially supported on Docker. Considering that you compile your own .ear file and it can be deployed to an application server (such as WebLogic), Docker support won't necessarily provide you with an ATG image. It will simply allow you to run your compiled artefact in a Docker container. More likely, you will want to get hold of a WebLogic Docker image and deploy your ATG artefact there.
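A rough sketch of that approach is below. The registry path is where Oracle publishes WebLogic images, but treat the exact tag, mount path and deployment step as assumptions and check the image's documentation; ATGProduction.ear is just a placeholder name for your compiled artefact.

    # log in to the Oracle Container Registry (after accepting the license terms)
    docker login container-registry.oracle.com
    docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.3

    # start WebLogic and mount the compiled ATG .ear into the container;
    # the .ear is then deployed via the admin console at http://localhost:7001/console
    docker run -d -p 7001:7001 \
        -v "$(pwd)/ATGProduction.ear:/u01/oracle/ATGProduction.ear" \
        container-registry.oracle.com/middleware/weblogic:12.2.1.3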
I have read four tutorials about getting started with Jenkins, and whilst they say it is possible to run Jenkins on the same computer one develops on, they also all recommend installing it on a separate one, most commonly a Mac Mini. However, I only own a MacBook Pro, I am short on cash, and I am currently the only person contributing to my iOS projects (I want to learn Jenkins for future client work). So it would be better for me, for now, to use my MacBook for both purposes.
Whilst I appreciate this is somewhat a matter of opinion, I am wondering what the reason is for the recommended separation, and whether I might be able to run Jenkins on the MacBook for now?
Thank you for reading.
The reason it is advised to have a master server and a number of slave servers is only valid in a company (or big team) environment. Build jobs can be CPU and memory intensive, and often many developers start jobs on the server. In cases like that, one machine (acting as both master and slave at once) will be slow. Not only will the jobs take longer to finish, but even the web interface may become unresponsive.
For learning the basic configuration steps, one machine is totally enough, and you can even run your builds on that Jenkins instance.
I'm not entirely sure what the reason for that is in those tutorials; however, I can suggest an easy way to get started with Jenkins for free (that's how I usually run Jenkins for personal use). You can create a free account with one of the cloud providers like AWS, GCP or Azure and have your Jenkins running there. For example, AWS offers a 1-year free tier account where you can spin up some free servers. There are many tutorials online, like this one, which will show you step by step how to get started with Jenkins on AWS. Here are some high-level steps:
Create a free account in AWS (or any other cloud provider)
Spin up an EC2 instance - it can be any Linux version or Windows, whatever you are more comfortable with
SSH or RDP to the instance and install Jenkins (a sketch of the typical commands follows this list) - there are exact installation steps for any flavour of your OS out there
Once the installation is complete, you will be able to access Jenkins in your browser - in the case of AWS, it would be the public IP of the server and the default port 8080
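To make the install step concrete, on an Amazon Linux / RHEL-style instance it usually boils down to something like the following (taken from the standard Jenkins repository instructions; double-check the URLs and package names against the current Jenkins docs):

    # add the Jenkins yum repository and its signing key
    sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

    # Jenkins needs Java; install it together with Jenkins and start the service
    sudo yum install -y java-1.8.0-openjdk jenkins
    sudo service jenkins start

    # initial admin password, needed the first time you open http://<public-ip>:8080
    sudo cat /var/lib/jenkins/secrets/initialAdminPassword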
My work laptop runs Linux Mint as the base OS, plus VirtualBox to run Windows 7, which is the actual work environment, usually plus an additional VirtualBox VM to run a different Windows installation in which I do my client project work (I have one VM per client, to avoid messing up my main OS).
But I'm wondering if it's feasible and beneficial to switch to using Docker for the client project stuff? That is, I'd like to keep Linux Mint (to preserve my sanity) and keep Windows ('cause I have to use some MS products), but then, instead of that series of "client VMs", use Docker containers?
I'm not entirely clear on how containers are useful. Can I, for instance, have a container in which I've installed .NET and MS SQL, another container where I've installed Azure PowerShell, and a third container where I've installed Java and Eclipse -- and then decide which of these "sets" of software is available on the same common base OS (Windows, with VPN and Outlook and Notepad++)?
This post makes me think I'm asking the wrong tool for a solution?
Or should I perhaps attack the root problem from a different angle and ask the following over at Workplace.SE: how can one work as a consultant without "cluttering up" one's (Windows) OS with more or less temporary installations of all sorts of software needed for client projects?
AFAIK there is no Windows OS ready to be run INSIDE a Docker container locally, but such containers have been announced. See www.docker.com/microsoft and the MSDN Windows containers documentation.
What you can do is run Linux OSs in Docker containers within Windows. But in your case you should run the Docker engine in your Mint Linux.
Not really an answer, more like several comments -- though it's too long to fit within a comment
First of all, I would not run Mint, but that's beside the question.
Then, it is probably worth taking a look at How is Docker different from a normal virtual machine?.
Also, as you linked, Docker does not aim (at all) to run several programs. Indeed, their policy is CaaS: Container as a Service. So, basically, one program per container. That said, you can probably run Wine within a container and run one application in each container (on top of Wine).
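If you want to experiment with the Wine idea, a minimal sketch could look like this (the Ubuntu base image, package name and X11 socket sharing are assumptions, and GUI applications under Wine in a container tend to be fiddly):

    # allow local containers to talk to your X server (loosens security; dev machine only)
    xhost +local:docker

    # throwaway Ubuntu container that installs Wine and starts a Windows binary
    docker run -it --rm \
        -e DISPLAY=$DISPLAY \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        ubuntu:18.04 \
        bash -c "apt-get update && apt-get install -y wine && wine notepad"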
Have fun!
Let's say that the RoR development environment is set up and working.
Does the developer need shell access to develop the RoR application?
Would FTP be good enough?
Why? I don't want to give my future developers SSH access to my Linux box. Or can I set up their file permissions so they can only read their own project directory?
UPDATE
The whole idea is to have the following running on my VPS Linux hosting:
code repository
production environment
test environment
maybe development environment
for a few projects that are looked after by different people.
So I want the developers to be able to do their job while only being able to access their own project files, and maybe only I would be able to deploy from the test environment into production.
As Tom mentioned, it makes life a lot easier for Rails developers if they have SSH access to the machine, so they can migrate the database, run bundle install, check the logs, or just jump into the console.
There are ways to segregate users, though: through file/directory permissions, chroot, or by turning your Linux machine into a bunch of virtual machines and giving each developer their own (see the permissions sketch below).
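For the file/directory permissions route, a minimal sketch with one Unix group per project might look like this (all user, group and path names are made up):

    # one group per project; a developer only joins the groups of their projects
    sudo groupadd project_a
    sudo usermod -aG project_a alice

    # project tree owned by a deploy user and the project group;
    # group members can read and traverse it, everyone else gets nothing
    sudo chown -R deploy:project_a /srv/project_a
    sudo chmod -R 750 /srv/project_a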
You can take a look at how Heroku's client works for possible ideas, since Rails developers are able to deploy, migrate, check logs, and even get into the console without direct shell access. Deployment is all done via Git hooks, and their client then gives access to particular commands. This is not trivial to set up and get working, though.
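To illustrate the Git-hook style of deployment, a post-receive hook on the server could look roughly like this (paths, branch and app name are made up, and a real setup needs more care around users, permissions and environments):

    #!/bin/sh
    # hooks/post-receive inside a bare repo on the server, e.g. /srv/git/myapp.git;
    # developers "git push" to it and the hook checks out and deploys the code
    APP_DIR=/srv/apps/myapp

    GIT_WORK_TREE="$APP_DIR" git checkout -f master
    cd "$APP_DIR" || exit 1

    bundle install --deployment                      # install gems pinned by Gemfile.lock
    RAILS_ENV=production bundle exec rake db:migrate # run pending migrations
    touch tmp/restart.txt                            # ask a Passenger-style server to restart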
Well, it does not REQUIRE shell access, but it sure makes things easier.
Without it, how would you migrate a database? You would have to create controllers, models, etc. manually.
Short answer: you CAN develop without shell access, it is just awkward and more tedious.
This is a common situation - for instance, Network Solutions allows you to do the basic RoR install but only gives SSH access if you step up and pay extra for a VM hosting package. My suggestion is to create the app on a local machine (of course using shell commands), then FTP-mirror the files up, then use mysqldump to export the local database. NSI gives you a database console through which you can then import your database dump file. You will probably have to edit config/database.yml, since the host's database server is unlikely to be localhost. If the necessary gems aren't present, you will have to plead with your hosting provider's customer service.
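For the database part, the export/import dance looks roughly like this (database names, user and host are examples):

    # on the local machine: dump the development database to a plain SQL file
    mysqldump -u root -p myapp_development > myapp_dump.sql

    # upload myapp_dump.sql (e.g. via the same FTP mirror) and import it through
    # the host's database console; if remote MySQL access is allowed, this also works:
    mysql -h db.example-host.com -u myapp_user -p myapp_production < myapp_dump.sql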