Using FastAPI locally, I can generate an OpenAPI YAML file via /docs.
Now that I am running the API behind API Gateway on Cloud Run, is there any way to generate the YAML file from the code running in Cloud Run?
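For reference, a minimal sketch of how the schema could be dumped to YAML directly from the app object (the route path and response setup are just an example, not something my API currently exposes):

import yaml
from fastapi import FastAPI
from fastapi.responses import PlainTextResponse

app = FastAPI()

# app.openapi() returns the generated schema as a Python dict;
# yaml.dump turns it into the YAML document served below.
@app.get('/openapi.yaml', response_class=PlainTextResponse, include_in_schema=False)
def openapi_yaml():
    return yaml.dump(app.openapi(), sort_keys=False)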
I have set up Pentaho 9.1 and the AWS CLI in a Docker container and defined the relevant AWS environment variables in the docker-compose file. Running aws s3 ls from the container shell confirms that the environment variables are set up properly, since it does list my S3 buckets. However, when using the S3 CSV Input step in Pentaho, the job fails until I manually enter the AWS CLI configuration inside the container shell by running aws configure and providing the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION.
It seems like a Pentaho bug although I am not sure. Has anybody faced a similar issue?
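For reference, the relevant part of the docker-compose file looks roughly like this (a sketch; the service name, image and values are placeholders):

# docker-compose.yml (sketch) - service name, image and values are placeholders
services:
  pentaho:
    image: my-pentaho-image
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}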
Thanks,
Seb
I am working on a deployment and have drafted a YAML manifest file from a reference docker-compose file. Now that it is ready in VS Code, I need to apply it to my cluster on Google Cloud Platform. How do I do that? What is the process for moving those files to Google Cloud Kubernetes using Pulumi?
Many thanks.
If I'm understanding your question correctly, you have some Kubernetes resources that you want to deploy into your Google Kubernetes Engine cluster?
If so, you can use Pulumi's Kubernetes provider to deploy Kubernetes YAML files to your cluster.
Here's an example in Python:
import pulumi
import pulumi_kubernetes as k8s
# Create resources from standard Kubernetes guestbook YAML example.
guestbook = k8s.yaml.ConfigFile('guestbook', 'guestbook-all-in-one.yaml')
# Export the private cluster IP address of the frontend.
frontend = guestbook.get_resource('v1/Service', 'frontend')
pulumi.export('private_ip', frontend.spec['cluster_ip'])
The example above assumes you have a KUBECONFIG environment variable set in your terminal with appropriate credentials and access to your GKE cluster.
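If you would rather not depend on the ambient KUBECONFIG, here is a sketch of passing a kubeconfig explicitly through a provider instance (the kubeconfig path is a placeholder):

import pulumi
import pulumi_kubernetes as k8s

# Build an explicit provider from a kubeconfig file instead of relying
# on the KUBECONFIG environment variable (the path is a placeholder).
gke_provider = k8s.Provider('gke', kubeconfig=open('kubeconfig.yaml').read())

guestbook = k8s.yaml.ConfigFile(
    'guestbook',
    'guestbook-all-in-one.yaml',
    opts=pulumi.ResourceOptions(provider=gke_provider),
)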
You can see more details and examples (in other languages too) at https://www.pulumi.com/docs/guides/adopting/from_kubernetes/.
Hi, I am using GitHub Actions for my CI/CD pipeline, and I am trying to deploy multiple Docker containers to AWS Elastic Beanstalk with a multi-container environment.
In my GitHub Actions workflow, I have already successfully pushed my Docker images to Docker Hub. What should I do next in the workflow? Should I still deploy a zip file to Elastic Beanstalk, or something else? Could someone give me some guidance please? Thank you!
After pushing to Docker Hub, you need to create an authentication file that contains the information required to authenticate with the registry, following these instructions.
Add the authentication parameter to the Dockerrun.aws.json configuration file.
The Elastic Beanstalk multi-container environment only supports hosted images. As a result, you can deploy the Dockerrun.aws.json configuration file on its own, without having to create a zip archive of the source code. If you do zip the source code together with the configuration file, it becomes available on the EC2 container instances under /var/app/current/.
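Roughly, a v2 Dockerrun.aws.json referencing hosted images and the authentication file looks like this (the bucket, key, image and port values are placeholders):

{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "my-deploy-bucket",
    "key": "dockercfg"
  },
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myorg/my-web-image:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}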
Read more here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
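As for the deploy step itself, a hedged sketch using the AWS CLI from the workflow (the bucket, application, environment and version names are placeholders, not a tested pipeline):

# Bucket, application, environment and version names are placeholders.
aws s3 cp Dockerrun.aws.json s3://my-deploy-bucket/Dockerrun.aws.json
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v1 \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=Dockerrun.aws.json
aws elasticbeanstalk update-environment \
    --environment-name my-app-env \
    --version-label v1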
My end goal is to provide a Jupyter Notebook that can run any code I would normally run from a Rails console, so that our data scientists can create notebooks that directly use the models from our existing Rails app and easily draw/refresh graphs based on those models.
I'm using the iRuby gem and its help sections to successfully build a Docker image that can load our Rails environment.
Assuming we have a Rails project/console as a GitHub project with a git repository, how can I set up Jupyter with iRuby AND load the code from our main Rails application?
My end goal is to build a Docker image that I can easily deploy on AWS ECS to provide a microservice with a "Jupyter Rails console". Once I have the Docker image running the "Jupyter Rails console", deploying on ECS should be a piece of cake.
I am using the sciruby Dockerfile from https://hub.docker.com/r/minad/sciruby-notebooks/dockerfile. What should I do to load the code from our Rails project?
Note: I am also using Capistrano, and it turns out I had previously found a way to "deploy" our Rails code inside a container using Capistrano, in case this helps to draft a solution.
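For context, a minimal sketch of what I mean by loading the Rails code into that image (the paths and commands are assumptions on my part, not a working setup):

# Sketch only - extends the sciruby image above; paths and commands are assumptions
FROM minad/sciruby-notebooks

# Copy the Rails application into the image and install its gems
WORKDIR /app
COPY . /app
RUN bundle install

# A notebook could then load the Rails environment with:
#   require './config/environment'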
I want to automate the deployment of DAGs written in a certain repository.
To achieve that I use the gcloud tool, which just imports the DAGs and plugins, according to the documentation.
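For reference, the import command I am referring to looks roughly like this (the environment name, location and path are placeholders):

# Environment name, location and DAG path are placeholders.
gcloud composer environments storage dags import \
    --environment my-environment \
    --location us-central1 \
    --source dags/my_dag.py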
Now the problem is that when I change the structure of a DAG, it is just not possible to get it to load/run correctly in the web interface. When I use Airflow locally, I just restart the webserver and everything is fine; however, using Cloud Composer I cannot figure out how to restart the webserver.
We only support uploading DAGs through GCS currently: https://cloud.google.com/composer/docs/how-to/using/managing-dags
The webserver, which is hosted through GAE, can't be restarted.