Google Endpoints YAML file update: Is there a simpler method - google-cloud-run

When using Google Endpoints with Cloud Run to provide the container service, one creates a YAML file (Swagger 2.0 format) to specify the paths with all configurations. For EVERY CHANGE, the following is what I do (based on the documentation: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions):
Step 1: Deploying the Endpoints configuration
gcloud endpoints services deploy openapi-functions.yaml \
--project ESP_PROJECT_ID
This gives me the following output:
Service Configuration [CONFIG_ID] uploaded for service [CLOUD_RUN_HOSTNAME]
Then,
Step 2: Download the script to local machine
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
Then,
Step 3: Redeploy the Cloud Run service
gcloud run deploy CLOUD_RUN_SERVICE_NAME \
--image="gcr.io/ESP_PROJECT_ID/endpoints-runtime-serverless:CLOUD_RUN_HOSTNAME-CONFIG_ID" \
--allow-unauthenticated \
--platform managed \
--project=ESP_PROJECT_ID
Is this the process for every API path change? Or is there a simpler direct method of updating the YAML file and uploading it somewhere?
Thanks.

Based on the documentation, yes, this would be the process for every API path change. However, this may change in the future, as this feature is currently in beta, as stated in the documentation you shared.
You may want to look over here in order to create a feature request to GCP so they can improve this feature in the future.
In the meantime, I would advise creating a script for this process, since it is always the same steps; something in bash that runs these commands would help you automate the task.
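For example, a possible sketch: the project, hostname and service name values are placeholders, and extracting the CONFIG_ID from the human-readable deploy output is a convenience of mine that may break if the message format changes.
#!/usr/bin/env bash
# redeploy-endpoints.sh (hypothetical): chains the three documented steps
set -euo pipefail

ESP_PROJECT_ID="my-project"                          # placeholder
CLOUD_RUN_HOSTNAME="my-api-abcdefghij-uc.a.run.app"  # placeholder
CLOUD_RUN_SERVICE_NAME="my-api"                      # placeholder

# Step 1: deploy the Endpoints configuration and grab the CONFIG_ID from the
# "Service Configuration [CONFIG_ID] uploaded ..." line of the output
DEPLOY_OUTPUT=$(gcloud endpoints services deploy openapi-functions.yaml \
  --project "$ESP_PROJECT_ID" 2>&1)
echo "$DEPLOY_OUTPUT"
CONFIG_ID=$(echo "$DEPLOY_OUTPUT" | sed -n 's/.*Service Configuration \[\([^]]*\)\] uploaded.*/\1/p')

# Step 2: rebuild the serverless ESP image for that configuration
./gcloud_build_image -s "$CLOUD_RUN_HOSTNAME" -c "$CONFIG_ID" -p "$ESP_PROJECT_ID"

# Step 3: redeploy the Cloud Run service with the freshly built image
gcloud run deploy "$CLOUD_RUN_SERVICE_NAME" \
  --image="gcr.io/$ESP_PROJECT_ID/endpoints-runtime-serverless:$CLOUD_RUN_HOSTNAME-$CONFIG_ID" \
  --allow-unauthenticated \
  --platform managed \
  --project="$ESP_PROJECT_ID"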
Hope you find this useful.

When you use the default Cloud Endpoints image as described in the documentation, the parameter --rollout_strategy=managed is automatically set.
You have to wait up to 1 minute for the new configuration to take effect; at least, that's what I observe in my deployments. Have a try at it!
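If that holds for your deployment as well (an assumption based on the answer above, worth verifying rather than taking as documented behaviour), a path-only change may need nothing more than Step 1, with the proxy picking up the new rollout on its own:
# Assuming --rollout_strategy=managed is in effect for the running proxy,
# re-deploying the configuration may be all a path change needs; the proxy
# should pick up the new rollout within about a minute.
gcloud endpoints services deploy openapi-functions.yaml --project ESP_PROJECT_ID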


Right way to use secret flag in docker buildkit

I am struggling with the same problem mentioned by Gavin on this question, specifically with the new docker build secret information.
What is the right way to use that feature?
Looking around on the internet I only found some variations of the same example from the Docker documentation mentioned above, which prints the secret at build time. Maybe I didn't fully understand the example, so please help me.
If there is no way to get the secret at build time and use it in another part of a Dockerfile (e.g. an ARG variable or RUN command), when and how can that new feature be used to truly keep my secret safe and also do the work?
My goal is to use this new feature at build time and also keep my secret information safe in case someone gets my image file and runs a history on it.
For example, if I have a Dockerfile like this:
FROM influxdb:2.0
ENV DOCKER_INFLUXDB_INIT_MODE=setup
ENV DOCKER_INFLUXDB_INIT_USERNAME=admin
ENV DOCKER_INFLUXDB_INIT_PASSWORD="mypassword"
How can I use that new feature mentioned in the docker documentation to set my variable (DOCKER_INFLUXDB_INIT_PASSWORD), for example, in a way that it will not be logged in the image history?
Thanks in advance
How can I use that new feature mentioned in docker documentation to
set my variable (DOCKER_INFLUXDB_INIT_PASSWORD), for example, in a way
that it will not be logged in the image history?
It depends on whether you need the secret only at build time, or
whether you actually want to use it at runtime. The latter situation
is probably more common, and there's already a canonical solution:
If you want to keep DOCKER_INFLUXDB_INIT_PASSWORD out of your image
history, just don't set it during the build process. Require it to be
set at runtime, e.g., via the -e VAR=VALUE argument to docker run
(or the --env-file option):
docker run -e DOCKER_INFLUXDB_INIT_PASSWORD=mypassword myimage
You could have an ENTRYPOINT script that checks for the variable at
runtime and exits with a helpful error message if it's not set.
The official Docker images for things like MySQL and PostgreSQL work
this way.
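A minimal sketch of such an entrypoint, assuming a POSIX shell in the image; the file name entrypoint.sh and the check itself are illustrative, not taken from the official images:
#!/bin/sh
# entrypoint.sh (hypothetical): refuse to start unless the password was
# supplied at runtime, e.g. via docker run -e DOCKER_INFLUXDB_INIT_PASSWORD=...
if [ -z "${DOCKER_INFLUXDB_INIT_PASSWORD:-}" ]; then
  echo "ERROR: DOCKER_INFLUXDB_INIT_PASSWORD must be set" >&2
  exit 1
fi
# Hand off to the image's original command
exec "$@"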
In contrast, a build secret is meant for secrets that are only
required at build time. For example, let's say your build process
needs to do something like this:
RUN curl -o /data/mydataset -u username:password \
https://example.com/dataset
You'd like to share your Dockerfile and associated sources with
other people, but you don't want to share your username and password.
This is where build secrets come in. You would instead stick your
authentication information in a file, and modify your Dockerfile to
read that information from a secret:
RUN --mount=type=secret,id=mysecret \
curl -o /data/mydataset -u $(cat /run/secrets/mysecret) \
https://example.com/dataset
In this example, once we've copied the remote file into the image,
we're done with the secret: we don't need it in order to run a
container from the image; it was only required during the build
process.
NB: The above assumes that you're providing the secret at build time as described in the documentation, so your build command might look something like:
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt -t myimage .

Separating Docker files and application source files to optimize production environment

I have a bunch of (Ruby) scripts stored on a server. Up until now, my team has used them by opening an accessor app that launches a list of the script names, and they select the script they want to run in that instance on the files in their working folder. The scripts are run directly from the server, so updates made to the script files are automatically reflected when a user runs the script.
The scripts require a fair amount of specific dependencies, so I'm trying to move to a Docker-based workflow to eliminate the problems we encounter with incongruent computer environments. I've been able to successfully build an image with our script library and run an instance of it on my computer.
However, all of the documentation and tutorials include the application source files when building an image, so that all the files are copied over by the Dockerfile. From my understanding, this means that any time the code in the application files needs to be updated, all the users will need to rebuild the image before trying to run anything. I would very rarely ever need to make changes to the environment settings/dependencies, but the app code is changed relatively frequently, so it seems like having every user rebuild an image every single time a line of app code is changed would actually slow down everyone's workflow considerably.
My question is this: Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored? And does a new container need to be created every single time a user wants to run any one of the scripts? (The users are not tech-savvy.)
Generally you'd do this by using a Docker image instead of the checked-out tree of scripts. You can use a Docker registry to store a built copy of the image somewhere on the network; Docker Hub works for this, most large public-cloud providers have some version of this (AWS ECR, Google GCR, Azure ACR, ...), or you can run your own. The workflow for using this would generally look like
# Get any updates to the "latest" version of the image
# (can be run infrequently)
docker pull ourorg/scripts
# Actually run the script, injecting config files and credentials
docker run --rm \
-v $PWD/config:/config \
-v $HOME/.ssh:/config/.ssh \
ourorg/scripts \
some_script.rb
# Nothing in this example actually requires a local copy of the scripts
I'm envisioning a directory that has kind of a mix of scripts and support files and not a lot of organization to it. Still, you could write a simple Dockerfile that looks like
FROM ruby:2.7
WORKDIR /opt/scripts
# As of Bundler 2.1, there is no compatibility between Bundler
# versions; this must match exactly what is in Gemfile.lock
RUN gem install bundler -v 2.1.4
# Copy the scripts in and do basic installation
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
ENV PATH /opt/scripts:$PATH
# Prefix all commands with...
ENTRYPOINT ["bundle", "exec"]
# The default command to run is...
CMD ["ls"]
On the back end you'd need a continuous integration service (Jenkins is popular if a little unwieldy; there is a large selection of cloud-hosted ones) that can rebuild the Docker image whenever there's a commit to the source repository. You can generally rig this up so that it happens automatically whenever anybody pushes anything.
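The build step in such a pipeline usually boils down to a couple of commands, sketched here with placeholder image name and tag (your CI system supplies registry credentials in its own way):
# Hypothetical CI step: rebuild and publish the image on every commit
docker build -t ourorg/scripts:latest .
docker push ourorg/scripts:latest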
This process makes more sense if most people are just using the set of scripts and few of them are developing them. It's also a little bit difficult to discover what the scripts are (though you might be able to docker run --rm ourorg/scripts ls).
Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored?
This always strikes me as an ineffective use of Docker. You have all of the fiddly steps of your current workflow that require everyone to run a git pull or equivalent routinely, but you also have to inject the host source tree into the container. If there are OS incompatibilities in, for example, native gems in the vendor tree, you have to work around that.
# You still need to do this periodically
git pull
# And you also need to
sudo docker run \
--rm \
-v $PWD:/app \
-v $HOME/config:/config \
-v $HOME/.ssh:/config/.ssh \
-w /app \
ruby:2.7 \
bundle exec ./some_script.rb
Some of these details (especially the config file and credentials) you'd have to deal with even if you did build an image; some other details you could improve by building an image. Inside the image you need to correct the ownership and permissions on the ssh keys and replace the $PWD/vendor tree with something the container can run, without modifying the mounted host directories.
Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored?
You can build an image with all the environment already installed then mount the directory with the scripts so the container can read the scripts from the host. Something like
docker run -it --rm -v /opt/myscripts:/myscripts myimage somescript.rb
Then your image Dockerfile would end with:
WORKDIR /myscripts
ENTRYPOINT ["/usr/bin/ruby"]
And does a new container need to be created every single time a user wants to run any one of the scripts?
Of course. A container is just an isolated process managed by Docker; you could make a wrapper script so the users wouldn't need to type the full docker run command.
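A minimal wrapper sketch, reusing the image name and mount path from the example above; the wrapper's file name (runscript) is made up:
#!/bin/sh
# runscript (hypothetical wrapper): users type  ./runscript somescript.rb
# instead of the full docker run invocation
exec docker run -it --rm -v /opt/myscripts:/myscripts myimage "$@"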

How to Activate Dataflow Shuffle Service through gcloud CLI

I am trying to activate Dataflow Shuffle [DS] through the gcloud command-line interface.
I am using this command:
gcloud dataflow jobs run ${JOB_NAME_STANDARD} \
--project=${PROJECT_ID} \
--region=us-east1 \
--service-account-email=${SERVICE_ACCOUNT} \
--gcs-location=${TEMPLATE_PATH}/template \
--staging-location=${PIPELINE_FOLDER}/staging \
--parameters "experiments=[shuffle_mode=\"service\"]"
The job starts and the Dataflow UI reflects it.
However, the logs show an error when parsing the value:
Failed to parse SDK pipeline options: json: cannot unmarshal string into Go struct
field sdkPipelineOptions.experiments of type []string
What am I doing wrong?
This question is indeed related to an existing question:
How to activate Dataflow Shuffle service?
However, the original question covers the Python API, while my problem is with the gcloud CLI.
[DS] https://cloud.google.com/dataflow/docs/guides/deploying-a-pipeline#cloud-dataflow-shuffle
P.S. Update
I have also tried:
No luck.
There's currently no way (that I know of) to enable shuffle_service for a template.
You have two options:
a) Run a job not from a template
b) Create a template that already has shuffle_service enabled.
The unmarshalling issue is most likely because templates only support a fixed set of parameters, and templates do not support an "experiments" parameter.
----UPD----
I was asked how to create a template with shuffle_service enabled.
Here are the sample steps I took.
Follow the WordCountTutorial to create a project with a pipeline definition.
Create the template with the following command:
mvn -Pdataflow-runner compile exec:java -Dexec.mainClass=org.apache.beam.examples.WindowedWordCount -Dexec.args="--project={project-name} --stagingLocation=gs://{staging-location} --inputFile=gs://apache-beam-samples/shakespeare/* --output=gs://{output-location} --runner=DataflowRunner --experiments=shuffle_mode=service --region=us-central1 --templateLocation=gs://{resulting-template-location}"
Note the --experiments=shuffle_mode=service argument.
Invoke the template from the UI or via this command:
gcloud dataflow jobs run {job-name} --project={project-name} --region=us-central1 --gcs-location=gs://{resulting-template-location}

Scanning REST APIs through OWASP ZAP inside a Docker environment

I set up an Azure DevOps CI/CD build that starts a VM where OWASP ZAP runs as a proxy, and where the OWASP ZAP Azure DevOps task runs against a target URL and copies my report to Azure Storage.
I followed this guy's beautiful tutorial: https://kasunkodagoda.com/2017/09/03/introducing-owasp-zed-attack-proxy-task-for-visual-studio-team-services/
(he is also the guy who created the Azure DevOps task)
All well and good, but recently I wanted to use a REST API as the target URL. The OWASP ZAP task in Azure DevOps doesn't have that ability. I even asked the creator (https://github.com/kasunkv/owasp-zap-vsts-task/issues/30#issuecomment-452258621) and he also didn't think this is possible through the Azure DevOps task, only through Docker.
On my next quest I am now trying to get it running inside a Docker image. (First inside Azure DevOps, but that wasn't smooth: https://github.com/zaproxy/zaproxy/issues/5176 )
I finally landed on this tutorial (https://zaproxy.blogspot.com/2017/06/scanning-apis-with-zap.html), where I am trying to run a Docker image with the following steps:
Pull the image:
docker pull owasp/zap2docker-weekly
Run the container:
docker run -v ${pwd}:/zap/wrk/:rw -t owasp/zap2docker-weekly zap-api-scan.py -t https://apiurl/api.json -f openapi -z "-configfile /zap/wrk/options.prop"
The options.prop file:
-config replacer.full_list\(0\).description=auth1 \
-config replacer.full_list\(0\).enabled=true \
-config replacer.full_list\(0\).matchtype=REQ_HEADER \
-config replacer.full_list\(0\).matchstr=Authorization \
-config replacer.full_list\(0\).regex=false \
-config replacer.full_list\(0\).replacement=Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
But this scans only the root URL, not every URL. As I was typing this question I tried downloading the JSON file from the root and running the docker run command passing that JSON file to -t; it reports a number of imported URLs that seems to cover everything, but then it appears to freeze inside PowerShell.
Which step am I missing to get a full recursive scan of my REST API?
Anyone have some ideas or some help, please?
Firstly, your property file format is wrong. You only need the '-config' and '\'s if you set the options directly on the command line. In the property file you should have:
replacer.full_list(0).description=auth1
replacer.full_list(0).enabled=true
replacer.full_list(0).matchtype=REQ_HEADER
replacer.full_list(0).matchstr=Authorization
replacer.full_list(0).regex=false
replacer.full_list(0).replacement=Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Secondly, what does https://apiurl/api.json return and have you checked you can access it from within your docker container?
Try running
curl https://apiurl/api.json
and see what you get.
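If the host-side curl looks fine, you could also check reachability from inside the container; this sketch assumes curl is available in the zap2docker-weekly image (an assumption, verify it first):
# Hypothetical check: run curl from within the same image to confirm the
# container itself can reach the API definition
docker run --rm -t owasp/zap2docker-weekly curl -sS https://apiurl/api.json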

Move file downloaded in Dockerfile to harddrive

First off, I really lack a lot of knowledge regarding Docker itself and its structure. I know that it'd be way more beneficial to learn the basics first, but I do require this to work in order to move on to other things for now.
So within a Dockerfile I installed wget & used it to download a file from a website; authentication & download are successful. However, when I later try to move said file it can't be found, and it doesn't show up using e.g. Explorer either (the path was specified).
I thought it might have something to do with RUN & how it executes the wget command; I read that the container ID can be used to copy files to the hard drive, but how would I do that within a Dockerfile?
RUN wget -P ./path/to/somewhere http://something.com/file.txt --username xyz --password bluh
ADD ./path/to/somewhere/file.txt /mainDirectory
The download is shown & the log-in is successful, but as I mentioned I am having trouble using that file later on, as it's not located on the hard drive. Probably a basic error, but I'd really appreciate some input that might lead to a solution.
Obviously the error is produced when trying to execute ADD, as there is no file to move. I am trying to find a way to mount a volume in order to store it, but so far in vain.
Edit:
Though the question is similar to the "move to harddrive" one, I am searching for ways to get the ID of the container created within the Dockerfile in order to move the file; while that thread provides such answers, I haven't had any luck using them within the Dockerfile itself.
Short answer is that it's not possible.
The Dockerfile builds an image, which you can run as a short-lived container. During the build, you don't have (write) access to the host and its file system. Which kinda makes sense, since you want to build an immutable image from which to run ephemeral containers.
What you can do is run a container, and mount a path from your host as a volume into the container. This is the only way you can share files between the host and a container.
Here is an example of how you could do this with the sherylynn/wget image:
docker run -v /path/on/host:/path/in/container sherylynn/wget wget -O /path/in/container/file http://my.url.com
The -v HOST:CONTAINER parameter allows you to specify a path on the host that is mounted inside the container at a specified location.
For wget, I would prefer -O over -P when downloading a single file, since it makes it really explicit where your download ends up. When you point -O to the location of the volume, the downloaded file ends up on the host system (in the folder you mounted).
Since I have no idea what your image or your environment looks like, you might need to tweak one or two things to work well with your own image. As a general recommendation: For basic commands like wget or curl, you can find pre-made images on Docker Hub. This can be quite useful when you need to set up a Continuous Integration pipeline or so, where you want to use wget or curl but can't execute it directly.
Use wget -O instead of -P for a specific file download,
e.g.:
RUN wget -O /tmp/new_file.txt http://something.com/new_file.txt --username xyz --password bluh
Thanks
