I am planning to use WSO2 API Manager for a client, and I intend to use the API Manager Docker image for hosting it.
But it looks like, to use the API Manager Docker image, I need a paid subscription once the trial period ends.
The link https://wso2.com/api-management/install/docker/get-started/ says:
" In order to use WSO2 product Docker images, you need an active WSO2 subscription."
Is that the case?
Can't I have the image running on the client's premises without any subscription?
You can build it yourself using their official Dockerfiles, which are hosted on GitHub, and then push it to your own registry.
The Dockerfiles for the other WSO2 products can be found under the same GitHub account.
The following steps describe how to build an image and run WSO2 API Manager, taken from this README.md file.
Check out this repository to your local machine using the following Git command.
git clone https://github.com/wso2/docker-apim.git
The local copy of the dockerfiles/ubuntu/apim directory will be referred to as AM_DOCKERFILE_HOME from this point onwards.
Add WSO2 API Manager distribution and MySQL connector to <AM_DOCKERFILE_HOME>/files.
Download the WSO2 API Manager v2.6.0 distribution and extract it to <AM_DOCKERFILE_HOME>/files.
Download the MySQL Connector/J and copy it to <AM_DOCKERFILE_HOME>/files.
Once all of these are in place, it should look as follows:
<AM_DOCKERFILE_HOME>/files/wso2am-2.6.0/
<AM_DOCKERFILE_HOME>/files/mysql-connector-java-<version>-bin.jar
Please refer to the WSO2 Update Manager documentation to obtain the latest bug fixes and updates for the product.
Build the Docker image.
Navigate to <AM_DOCKERFILE_HOME> directory.
Execute docker build command as shown below.
docker build -t wso2am:2.6.0 .
Running the Docker image.
docker run -it -p 9443:9443 wso2am:2.6.0
Here, only port 9443 (HTTPS servlet transport) has been mapped to a Docker host port.
You may map other container service ports, which have been exposed to Docker host ports, as desired.
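For example, if you also want to expose the HTTP/HTTPS pass-through transports (assuming API Manager 2.6.0's default ports 8280 and 8243), a run command could look like this:
docker run -it -p 9443:9443 -p 8280:8280 -p 8243:8243 wso2am:2.6.0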
Accessing management console.
To access the management console, use the docker host IP and port 9443.
https://<DOCKER_HOST>:9443/carbon
Here, <DOCKER_HOST> refers to the hostname or IP of the host machine on which the containers are spawned.
How to update configurations
Configuration files reside on the Docker host machine and can be volume-mounted into the container.
As an example, the steps required to change the port offset via carbon.xml are as follows.
Stop the API Manager container if it's already running. In the WSO2 API Manager 2.6.0 distribution, the carbon.xml configuration file
can be found at <DISTRIBUTION_HOME>/repository/conf. Copy the file to a suitable location on the host machine, referred to as <SOURCE_CONFIGS>/carbon.xml, and change the offset value under ports to 1.
Grant read permission to other users for <SOURCE_CONFIGS>/carbon.xml
chmod o+r <SOURCE_CONFIGS>/carbon.xml
Run the image by mounting the file into the container as follows.
docker run \
-p 9444:9444 \
--volume <SOURCE_CONFIGS>/carbon.xml:<TARGET_CONFIGS>/carbon.xml \
wso2am:2.6.0
Here, <TARGET_CONFIGS> refers to the /home/wso2carbon/wso2am-2.6.0/repository/conf folder of the container.
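Putting it together, a minimal sketch (the host directory /opt/wso2-configs is a hypothetical location; the container path is the one mentioned above):
docker run -p 9444:9444 \
  --volume /opt/wso2-configs/carbon.xml:/home/wso2carbon/wso2am-2.6.0/repository/conf/carbon.xml \
  wso2am:2.6.0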
The steps above are for Ubuntu; for other distributions, check the corresponding directory in the same repository and read the README.md file inside.
You can build the docker images yourself. Follow the instructions given at https://github.com/wso2/docker-apim/tree/master/dockerfiles/ubuntu/apim#how-to-build-an-image-and-run.
The caveat is that you will not get any bug fixes if you do not have a subscription.
I am kind of new here. I would like to know how to deploy my Docker image from Docker Hub to the IBM Cloud free tier using the standalone IBM Cloud CLI. I was using the OpenShift Online starter plan, where the image can be deployed with just about 4 commands. Can someone list how it's done or point me to a resource that shows how it's done?
Thanks
If you can explain more about your free tier, I can edit my response later. For now, I will explain both options. You can find the free services here. You will need the IBM Cloud CLI tool, which can be downloaded in two forms:
With ibmcloud cli, helm, kubectl, docker and more
Only standalone ibmcloud cli
The first account type is the Lite account. With this option, you cannot create an IBM Kubernetes Service or OpenShift cluster. You can only access some Lite services (approx. 40 services), and one of them is Cloud Foundry Public. IBM Cloud lets you use 256 MB of RAM on Cloud Foundry Public free of charge. You can use the following command:
ibmcloud cf push \
--docker-image <your-image> \
--docker-username <your-username> \
--random-route \
-i 1 \
-m <memory_limit_max_256_mb>
You can find more details by writing ibmcloud cf push --help
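As a concrete sketch (the app name my-app is a placeholder, and --docker-username is only needed for private images), pushing a public Docker Hub image could look like:
ibmcloud cf push my-app --docker-image docker.io/ibmcom/helloworld --random-route -i 1 -m 256M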
The second account type is the trial/free-tier account, which can be obtained in two ways: with a feature code or by switching to pay-as-you-go.
This option also includes Cloud Foundry Public, so I won't repeat it. But with this account type, you also get the free IBM Kubernetes Service cluster (it deletes itself after 30 days; you can create another one after that).
When you create a Kubernetes cluster on IBM Cloud, you will see the service page. There is an Access menu in the left tab menu. You have to follow those steps to be able to access your Kubernetes cluster from your workstation.
Then it is easy to deploy your image by entering:
kubectl create deployment app --image=<your_image_url>
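To actually reach the app from outside the cluster, you would typically also expose the deployment; on the free single-worker cluster a NodePort service is the usual option. A sketch, assuming the container listens on port 8080:
kubectl expose deployment app --type=NodePort --port=8080
kubectl get service app   # note the assigned NodePort, then browse to <worker-public-IP>:<NodePort>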
If your image is not publicly accessible, then you have to create an imagePullSecret and bind it to your service account, but this is out of scope here. You can find instructions here.
For free tier clusters (kubernetes only, no OCP) you'll need to do this in the web console.
OpenShift worker nodes are available for paid accounts and standard clusters only. You can create OpenShift clusters that run version 4.3 or 3.11 (deprecated). The operating system is Red Hat Enterprise Linux 7.
Follow the instructions in this tutorial to create a standard Red Hat OpenShift on IBM Cloud cluster, open the OpenShift console, access built-in OpenShift components, deploy an app in an OpenShift project, and expose the app on an OpenShift route so that external users can access the service.
To create an application from a container image on Docker Hub, you just have to run the command below with the image name:
oc new-app mysql
For other strategies to build the container image, check the documentation here
There is a new option to deploy Docker images to IBM Cloud.
The new option is called IBM Cloud Code Engine and is currently in Beta and available in the us-south region.
Login to IBM Cloud:
ibmcloud login (or, if you are already authenticated in your browser, try ibmcloud login --sso)
Target the us-south region and an existing resource group:
ibmcloud target -r us-south -g default
Install the Code Engine plugin into your ibmcloud cli:
ibmcloud plugin install code-engine
Create a Code Engine project:
ibmcloud code-engine project create --name myProject
Create a Code Engine application from a docker image:
ibmcloud code-engine application create --name myapp --image docker.io/ibmcom/helloworld
Wait until it's deployed.
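Once it is deployed, you can check the application's status and its public URL (a hedged example using the same application name as above):
ibmcloud code-engine application get --name myapp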
I launched an apache/nifi container, then built and configured a flow.
I'd like to somehow save that flow off somewhere, so that it can be loaded into a new Docker image running NiFi.
Such that a 'user' only has to do 'docker run ...' and an instance of NiFi will be launched with the flow loaded and started.
It's not clear to me what files (nar, xml, etc...) need to be made available to the image a user is to run.
If you have nothing custom, saving flow.xml.gz from the conf directory is enough to preserve the flow.
If you also want to keep the current FlowFiles and their content, you should also save the FlowFile repository and the content repository.
If you have custom processors, you should save their NAR files in the lib directory.
Everything should be present in the NiFi directory before starting it.
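As a minimal sketch of what that can look like with the official apache/nifi image (the host paths are hypothetical, /opt/nifi/nifi-current is the image's default NiFi home, and an HTTP-only setup on port 8080 is assumed):
docker run -d -p 8080:8080 \
  -v /path/on/host/flow.xml.gz:/opt/nifi/nifi-current/conf/flow.xml.gz \
  -v /path/on/host/my-custom.nar:/opt/nifi/nifi-current/lib/my-custom.nar \
  apache/nifi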
You could use the nifi-toolkit to deploy flows and process groups to your Apache NiFi instance without having to rely on the GUI.
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html
This setup requires you to have:
Apache NiFi
Apache NiFi-Registry
Here is a working example (provided that the hostname of your Apache NiFi Registry container is nifi-registry and its port is the default 18080) based on an empty Apache NiFi instance and NiFi Registry. Tested on Apache NiFi 1.12.1.
First, you need to generate a JSON file for your flow through the NiFi Registry.
Add a Registry to your Apache NiFi:
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi create-reg-client -rcn registry -rcu http://nifi-registry:18080
Create a Process Group that will contain your flow. Right click on it and click on "Version" and "Start Version Control". This will save your flow inside the NiFi Registry. Work on your flow through the GUI and when you are ready, right click on your process group and commit your last changes. Now you will need to export the JSON of your flow from the registry.
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry export-flow-version -u http://nifi-registry:18080 -f <flowid> -fv <flowversion> > <json_file>
Now that your JSON flow is ready, you are ready to deploy it on a fresh environment.
Create a bucket inside the registry. This will return the newly generated bucket id.
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry create-bucket -u http://nifi-registry:18080 -bn <bucketname>
Use the previously generated bucket id to create a flow. This will return the newly generated flow id:
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry create-flow -u http://nifi-registry:18080 -b <bucketid> -fn <flowname>
Import your flow (it must have been previously exported from the GUI via right-click > Download flow, and be available in the Apache NiFi filesystem):
/opt/nifi/nifi-toolkit-current/bin/cli.sh registry import-flow-version -u http://nifi-registry:18080 -f <flowid> -i <json_file>
Deploy the flow as a process group. This will return the newly generated process group id.
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi pg-import -b <bucketid> -f <flowid> -fv <flowversion>
Start the process group services (if any)
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi pg-enable-services -pgid <processgroupid>
Start the processors of your process group (if any):
/opt/nifi/nifi-toolkit-current/bin/cli.sh nifi pg-start -pgid <processgroupid>
Please keep in mind that Apache NiFi should be up and running before executing these commands. If you are planning on embedding these instructions in the Dockerfile, some logic that waits for the service to be up should be implemented.
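If you go that route, a simple wait loop could look like the sketch below (the URL and polling interval are assumptions about your setup):
until curl -sf http://localhost:8080/nifi-api/system-diagnostics > /dev/null; do
  echo "waiting for NiFi to come up..."
  sleep 5
done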
You might also take a look at this Python wrapper for the NiFi Toolkit:
https://github.com/Chaffelson/nipyapi
Lastly, Apache NiFi also provides some REST APIs that might help you:
https://nifi.apache.org/docs/nifi-docs/rest-api/index.html
It appears LinkedIn doesn't have an official Burrow docker image on Docker Hub, but there are others who have forked it.
However, I can't find any examples of how to add any of them to a Docker Compose file that spins up ZooKeeper and Kafka, something like this.
What am I missing?
It appears LinkedIn doesn't have an official Burrow docker image on Docker Hub
No, and while toddpalino is one of the maintainers of Burrow and has an image on his Docker Hub account, he states that providing a Docker image is not a core tenet of the project.
In any case, there is a Docker Compose file in the GitHub repo, so you're welcome to clone the project and build an image yourself. I have opened a PR to make the README clearer that it does exist.
Regarding the configuration, the TOML file is mounted into /etc/burrow in the container; you would need to edit that file locally and use a Compose volume mount so Burrow can connect to an externally available Kafka broker or container.
I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
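Put together, the whole transfer could look like this (the image name, file name, and host are placeholders):
docker image save -o myapp.tar myapp:latest
scp myapp.tar user@server-b:/tmp/myapp.tar
ssh user@server-b docker image load -i /tmp/myapp.tar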
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to or pull images from that private registry. This doc describes how to deploy a container registry. Alternatively, you can choose a registry service provided by a third party, such as GitLab's container registry.
When using a Docker registry, you only push/pull the layers that have changed.
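A rough sketch of the registry route (the registry hostname is a placeholder, and a real setup should add TLS and authentication):
# on the registry host
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# on Server A
docker tag myapp:latest registry.example.com:5000/myapp:latest
docker push registry.example.com:5000/myapp:latest
# on Server B
docker pull registry.example.com:5000/myapp:latest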
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable in the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
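With that variable set, any docker command in the same shell talks to the remote daemon, for example (the image name is a placeholder):
docker info                       # now reports the remote engine
docker run -d --name myapp myapp:latest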
Please be aware of the security concerns when allowing TCP traffic. More info.
Another method is to use SSH Agent Plugin in Jenkins.
What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction no longer applies and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image that comes with Docker pre-installed.
$ gcloud compute instances create instance-name \
    --image projects/google-containers/global/images/container-vm-v20140522 \
    --zone us-central1-a \
    --machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
Create a Docker image
a. Locally
b. Using Google Container Builder
Push the local Docker image to Google Container Registry
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to Docker client versions above 18.03, gcloud docker commands are no longer supported. Instead of the above push, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
Create a compute instance.
This process actually abstracts away a number of steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. The image includes Docker and additional software responsible for starting our Docker container. Our container image is then pulled from the Container Registry and run using docker run when the VM starts. Note: you still need to use docker attach even though the container is running. It's worth pointing out that only one container can be run per VM instance. Use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: You can view available gcloud projects with gcloud projects list
SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
Stop or delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, it's still 27 days to go...)
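For example, creating an instance from the latest stable Container-Optimized OS image looks roughly like this (the instance name and zone are placeholders):
gcloud compute instances create my-cos-vm \
  --image-family cos-stable \
  --image-project cos-cloud \
  --zone us-central1-a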
As an additional piece of information I wanted to share, I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is the thing you could hardly do without on GCE¹.
My best find so far was the Ubuntu 18.04 image that comes with its own (non-Debian) package manager, snap. The image comes with the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an F1 instance in an initial test, about 6 seconds on an n1-standard-1. The only snag I hit was the error message that the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following is what got me both tools available in a single transient builder VM in the least amount of setup-script runtime, starting off the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with both artifacts from GS buckets and a couple of huge, 2GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers to the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image that includes Docker. https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
curl https://get.docker.io | bash
Using docker-machine is another way to provision a Google Compute Engine instance with Docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine on Google Compute Engine, just use docker-machine ssh $YOUR_INSTANCE.
Refer to the docker-machine GCE driver documentation.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a GCE instance is not supported; the instance goes down and you are not able to log in again.
We can use the Docker image provided by GCE to create an instance.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
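Afterwards, you may want to make sure the daemon is running and do a quick sanity check (optional, but harmless):
sudo systemctl enable --now docker
sudo docker run hello-world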
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The linked page explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You will first build the image using a Dockerfile and then build the same image using Cloud Build's build configuration file.
It's better to set this up while creating the compute instance (an equivalent gcloud command is sketched after the steps below):
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
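These console steps can also be done from the command line. A sketch using the example image above (the instance name and zone are placeholders; older gcloud versions used the beta prefix shown earlier):
gcloud compute instances create-with-container my-nginx-vm \
  --container-image gcr.io/cloud-marketplace/google/nginx1:1.12 \
  --zone us-central1-a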
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
It links to the Docker install guide; follow the instructions for the type of Linux you have running in the VM.
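For example, on a Debian or Ubuntu based VM, Docker's convenience script is one quick way to follow that guide (a sketch; not the only method the guide supports):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh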