Using Transfer for on-premises option to transfer files - docker

[google-cloud-storage] I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker agent script on Linux and the GCP bucket is created. I now need to run the docker run command to copy files. My question is: how do I specify the source and target locations in the docker command? For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789

The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and a target bucket and prefix. When you do this, the service contacts the agents and uses them to push the data to Google Cloud in parallel, for faster transfers. See the following links for more details; a sketch of the agent-start command follows them:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job
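For illustration, a completed agent-start command is broadly of this shape. Note that it still contains no source or target, only mounts that give the agent access to your files and credentials. The image name, mount paths, and the extra flags beyond the two from the question are assumptions here; use the exact command the Cloud Console generates for your agent pool:

# Start a transfer agent in the background. The -v mounts expose the directory that
# holds the files to transfer and the gcloud credentials; the actual source directory
# and destination bucket are chosen later, when you create the transfer job in the Console.
sudo docker run -d --rm \
  -v /data/to-transfer:/data/to-transfer \
  -v /root/.config/gcloud:/root/.config/gcloud \
  gcr.io/cloud-ingest/tsop-agent:latest \
  --hostname=$(hostname) \
  --agent-id-prefix=ID123456789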

Related

Docker for custom build process

I have an executable that performs a number of tasks such as:
Copy .NET source code to a directory
Run another executable that modifies the source code
Run MSBuild to build the code
Publish the code
Run add-migration to create database
Run another executable that populates the database from files
etc.
I've set this up on my laptop, and everything works correctly, and now I want to publish this to the cloud.
Is it possible to create a docker image that does all these kinds of things, and run it on Azure Container Instances? Or do I need to run this kind of system on a VM?
I'm new to Docker, so I don't know what it's capable of, but if I can run it on ACI as needed that would be great, so I'm not paying for a VM 24/7 when this process only happens a few times a day.
Docker is an open-source, centralized platform designed to create, deploy, and run applications. It uses OS-level virtualization: Docker runs the application in containers on the host. A container is similar to a virtual machine, but has the advantage that there is no pre-allocation of RAM as there is with a VM.
Is it possible to create a docker image that does all these kinds of things, and run it on Azure Container Instances? Or do I need to run this kind of system on a VM?
Yes, it is possible. You describe all the tasks you want to perform in a Dockerfile (and, if you need several services, a docker-compose YAML file), build an image from it, push the image to a container registry, and create a container instance from it.
You can refer to these threads for how to copy source code into a Docker image using a Dockerfile, and how to create a database in a Docker container using a YAML file. A rough sketch of the build-and-run flow on Azure is shown below.
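The sketch below assumes the Azure CLI; the registry, resource group, and image names are placeholders, and a private registry will additionally need registry credentials passed to az container create:

# Build the image from your Dockerfile, push it to Azure Container Registry,
# and run it on demand as an Azure Container Instance.
az acr login --name myregistry
docker build -t myregistry.azurecr.io/build-job:latest .
docker push myregistry.azurecr.io/build-job:latest
# --restart-policy Never runs the job once and stops, so you only pay while it runs.
az container create \
  --resource-group my-rg \
  --name build-job \
  --image myregistry.azurecr.io/build-job:latest \
  --restart-policy Never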

Airflow on Google Cloud Composer vs Docker

I can't find much information on what the differences are in running Airflow on Google Cloud Composer vs Docker. I am trying to switch our data pipelines that are currently on Google Cloud Composer onto Docker to just run locally but am trying to conceptualize what the difference is.
Cloud Composer is a GCP managed service for Airflow. Composer runs in something known as a Composer environment, which runs on a Google Kubernetes Engine cluster. It also makes use of various other GCP services, such as:
Cloud SQL - stores the metadata associated with Airflow,
App Engine Flex - the Airflow web server runs as an App Engine Flex application, which is protected using an Identity-Aware Proxy,
GCS bucket - to submit a pipeline to be scheduled and run on Composer, all we need to do is copy our Python code into the environment's GCS bucket. Within it there is a folder called dags; any Python code uploaded into that folder is automatically picked up and processed by Composer.
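For example, submitting a DAG is just a copy into that bucket (the bucket name below is a placeholder; yours is shown on the environment's details page):

# Upload a DAG file into the Composer environment's bucket; Composer picks it up automatically.
gsutil cp dags/my_pipeline.py gs://us-central1-example-environment-bucket/dags/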
What are the benefits of Cloud Composer?
Focus on your workflows, and let Composer manage the infrastructure (creating the workers, setting up the web server, the message brokers),
One-click to create a new Airflow environment,
Easy and controlled access to the Airflow Web UI,
Provides logging and monitoring metrics, and alerts when your workflow is not running,
Integrates with Google Cloud services: Big Data, Machine Learning and so on. You can also run jobs elsewhere, e.g. on another cloud provider (Amazon).
Of course you have to pay for the hosting service, but the cost is low compared to hosting a production Airflow server on your own.
Airflow on-premise
DevOps work that needs to be done: create a new server, manage the Airflow installation, take care of dependency and package management, check server health, and handle scaling and security.
Pull an Airflow image from a registry and create the container,
Create a volume that maps the directory on the local machine where DAGs are held to the location where Airflow reads them in the container,
Whenever you want to submit a DAG that needs to access a GCP service, you need to take care of setting up credentials. The application's service account should be created and its key downloaded as a JSON file that contains the credentials. This JSON file must be mounted into your Docker container, and the GOOGLE_APPLICATION_CREDENTIALS environment variable must contain the path to the JSON file inside the container.
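A minimal sketch of that setup, assuming the official apache/airflow image and its default paths (the image tag, host paths, and key filename are placeholders; a realistic deployment would normally use the official docker-compose file with a separate scheduler, webserver, and metadata database):

# Run a standalone Airflow instance locally, mounting local DAGs and a GCP service account key.
docker run -d --name airflow-local \
  -p 8080:8080 \
  -v "$PWD/dags":/opt/airflow/dags \
  -v "$PWD/keys/service-account.json":/opt/airflow/keys/service-account.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/opt/airflow/keys/service-account.json \
  apache/airflow:2.7.3 standalone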
To sum up, if you don't want to deal with all of those DevOps problems, and instead just want to focus on your workflows, then Google Cloud Composer is a great solution for you.
Additionally, I would like to share with you tutorials that set up Airflow with Docker and on GCP Cloud Composer.

Docker without internet

I am currently working on a project which needs to be deployed on customer infrastructure (not in the cloud), and it will not have internet access.
We currently deploy our application manually and install dependencies using tarballs; can Docker help us here?
Note:
Application stack:
Node.js
MySQL
Elasticsearch
Redis
MongoDB
We will not have internet.
You can use docker save to export Docker images as TAR archives and docker load to import them again on the target machine. If you package your application files within these images, this can be used to deliver your project to your customers.
Also note that the destination machines must all have Docker Engine installed and running.
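A minimal example of that round trip (the image names are placeholders):

# On a machine with internet access: build or pull the images, then export them to a tarball.
docker save -o offline-images.tar my-app:1.0 redis:7
# Move offline-images.tar to the customer machine (USB drive, internal network, ...),
# then import the images there; no registry or internet access is required.
docker load -i offline-images.tar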
If you have control over your dev environment, you can also use Nexus or GitLab as your private Docker registry. You can then pull your images from there into production, if it makes sense for your product.
I think the biggest advantage is in your local dev setup. Instead of installing, say, MySQL locally, you can run it as a Docker container. I use docker-compose for all client services in my current project. This helps keep your computer clean, makes it easy to avoid versioning hell (if you use different versions for each release or stage), and you don't have to mess around with configuration on each dev machine.
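For instance, a throwaway MySQL for local development can be started like this (the password and version tag are placeholders):

# Run MySQL as a disposable dev container instead of installing it on the machine.
docker run -d --name dev-mysql \
  -e MYSQL_ROOT_PASSWORD=devpassword \
  -p 3306:3306 \
  mysql:8.0
# Remove it when done; its data disappears with it unless you mount a volume.
docker rm -f dev-mysql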
In my previous job every developer had a local Oracle SQL install, and that was not a happy state of affairs.

Docker multiple machine configurations

I want to know if there is a suggested approach on how to configure Docker machines using configuration files. I have a service that I configure for several users; it is basically a Django app.
Until now I have had a shared base image and a bunch of scripts. When I need to create a new machine for a new user, I create it in Google Compute Engine using the base image. Then I:
SSH into it
Launch a script that downloads everything via git and launches all services
Copy required credential files using scp
Is there a way to optimize some steps with Docker (using secrets or some external config management tool)?
Thanks!

Nexus repository configuration with dockerization

Is it possible to configure Nexus repository manager (3.9.0) in a way which is suitable for a Docker based containerized environment?
We need a customized Docker image which contains basic configuration for the Nexus repository manager, such as project-specific repositories and LDAP-based authentication for users. We found that most of the Nexus configuration lives in the database (OrientDB) used by Nexus. We also found that there is a REST interface offered by Nexus to handle configuration by third parties, but we found no configuration exporter/importer capabilities besides backup (directory servers have LDIF, application servers have command-line scripts, etc.).
Right now we export the configuration as backup files, and during the customized Docker image build we copy those backup files back to the file system in the container:
FROM sonatype/nexus3:latest
[...]
# Copy backup files
COPY backup/* ${NEXUS_DATA}/backup/
When the container starts up, it picks up the backup files and Nexus is configured the way we need. However, it would be much better if there were a way that allowed us to handle these configurations via a set of config files.
All that data is stored under /nexus-data, so you can create an initial Docker container with a Docker volume or a host directory that keeps all that data. After you have preconfigured that instance, you can distribute your customized Docker image together with the Docker volume containing the Nexus data. Or, if you used a host directory, you can simply copy over all that data in a similar fashion as you do now, but use the /nexus-data directory instead.
You can find more information at DockerHub under Persistent Data.
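As an illustration of the volume approach (the volume and container names are placeholders):

# Create a named volume and run a Nexus instance against it for the initial configuration.
docker volume create nexus-data
docker run -d --name nexus-config -p 8081:8081 -v nexus-data:/nexus-data sonatype/nexus3:latest
# Configure repositories, LDAP, users, etc. through the UI or REST API, then stop the container.
# Everything you configured lives in the nexus-data volume, which you can reuse or distribute
# alongside your customized image.
docker stop nexus-config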
