I am using Nova and Laravel-Nova-Excel.
I need to export the contents of a Nova resource (with any filters applied) to disk as an Excel file from a cron job. What is the right way to do this?
Right now I see this option:
Get the query builder from the resource (how to do this is the main problem)
Create a query class as described here - https://docs.laravel-excel.com/3.1/exports/from-query.html
Export the file.
Does anyone know other options?
You would like to export all the models connected to a certain Nova resource to Excel from a cron job. Please correct me if I have misinterpreted your question.
The Laravel-Nova-Excel package is a Nova wrapper around Laravel Excel. It will be easier to use Laravel Excel directly instead of going through the Nova wrapper.
Use this guide, Laravel Excel - quick start, to define an ExportModel class in which you put your filter logic.
Create a Laravel command in which you trigger the ExportModel class. The following artisan command can be used to generate it: php artisan make:command ExportModel
Use this command in your existing cronjob: php artisan command:export
Or use the built-in Laravel scheduler.
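For the cron route, an entry along these lines would do (the project path is a placeholder; the command name matches the one registered above):
0 2 * * * cd /path/to/your/app && php artisan command:export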
I'm in the process of migrating entire CloudFormation stacks to Troposphere, including Lambda and Lambda-reliant CFN Custom Resources.
One of my goals is to circumvent the creation of template files altogether, making the Python code the sole "source of truth" (i.e. without template files that are created and can therefore be edited, causing config drift).
This requires the ability to:
Pass a file-like object to the SAM builder (instead of a file-name)
Call the AWS SAM builder from Python rather than the CLI
My first naive idea was that I would be able to import a few modules from aws-sam-cli, put an io.StringIO wrapper around the template (to hold it as a file-like object), and presto! Then I looked at the source code for sam build and all hope left me:
I may not be able to use Docker/containers for building, as it will map the build environment, including template files.
AWS SAM CLI is not designed to have a purely callable set of library functions, similar to boto3. Close, but not quite.
Here is the core of the Python source
with BuildContext(template,
                  base_dir,
                  build_dir,
                  clean=clean,
                  manifest_path=manifest_path,
                  use_container=use_container,
                  parameter_overrides=parameter_overrides,
                  docker_network=docker_network,
                  skip_pull_image=skip_pull_image,
                  mode=mode) as ctx:

    builder = ApplicationBuilder(ctx.function_provider,
                                 ctx.build_dir,
                                 ctx.base_dir,
                                 manifest_path_override=ctx.manifest_path_override,
                                 container_manager=ctx.container_manager,
                                 mode=ctx.mode
                                 )
    try:
        artifacts = builder.build()
        modified_template = builder.update_template(ctx.template_dict,
                                                    ctx.original_template_path,
                                                    artifacts)

        move_template(ctx.original_template_path,
                      ctx.output_template_path,
                      modified_template)

        click.secho("\nBuild Succeeded", fg="green")

        msg = gen_success_msg(os.path.relpath(ctx.build_dir),
                              os.path.relpath(ctx.output_template_path),
                              os.path.abspath(ctx.build_dir) == os.path.abspath(DEFAULT_BUILD_DIR))

        click.secho(msg, fg="yellow")
This relies on a number of imports from aws-sam-cli internal libraries, with the build-focused ones being:
from samcli.commands.build.build_context import BuildContext
from samcli.lib.build.app_builder import ApplicationBuilder, BuildError, UnsupportedBuilderLibraryVersionError, ContainerBuildNotSupported
from samcli.lib.build.workflow_config import UnsupportedRuntimeException
It's clear that this means it's not as simple as creating something like a boto3 client and away I go! It looks more like I'd have to fork the whole thing and throw out nearly everything to be left with the build command, context and environment.
Interestingly enough, sam package and sam deploy, according to the docs, are merely aliases for aws cloudformation package and aws cloudformation deploy, meaning those can be used in boto3!
Has somebody possibly already solved this issue? I've googled and searched here, but haven't found anything.
I use PyCharm and the AWS Toolkit, which is great for development and debugging, and from there I can run SAM builds, but it's "hidden" in the PyCharm plugins - which are written in Kotlin!
My current work-around is to create the CFN templates as temp files and pass them to the CLI commands which are called from Python - an approach I've always disliked.
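Concretely, that workaround amounts to something like this (the troposphere template is just a stub here, and the CLI call is simplified):
import subprocess
import tempfile

from troposphere import Template

t = Template()
# ... in reality the full SAM/CloudFormation template is assembled here ...

# Render the template to a temporary file so the CLI has something on disk to read.
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as tmp:
    tmp.write(t.to_json())
    template_path = tmp.name

# Hand the temporary template to the SAM CLI as an external process.
subprocess.run(["sam", "build", "--template", template_path], check=True)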
I may put in a feature request with the aws-sam-cli team and see what they say, unless one of them reads this.
I've managed to launch sam local start-api from a python3 script.
Firstly, pip3 install aws-sam-cli
Then the individual command can be imported and run.
import sys
from samcli.commands.local.start_api.cli import cli
sys.exit(cli())
... provided there's a template.yaml in the current directory.
What I haven't (yet) managed to do is influence the command-line arguments that cli() would receive, so that I could tell it which -t template to use.
Edit
Looking at the way aws-sam-cli integration tests work it seems that they actually kick off a process to run the CLI. So they don't actually pass a parameter to the cli() call at all :-(
For example:
class TestSamPython36HelloWorldIntegration(InvokeIntegBase):
    template = Path("template.yml")

    def test_invoke_returncode_is_zero(self):
        command_list = self.get_command_list(
            "HelloWorldServerlessFunction", template_path=self.template_path, event_path=self.event_path
        )

        process = Popen(command_list, stdout=PIPE)
        return_code = process.wait()

        self.assertEquals(return_code, 0)
.... etc
from https://github.com/awslabs/aws-sam-cli/blob/a83aa9e620ff679ca740496a3f1ff4872b88894a/tests/integration/local/invoke/test_integrations_cli.py
See also start_api_integ_base.py in the same repo.
I think on the whole this is to be expected because the whole thing is implemented in terms of the click command-line application framework. Unfortunately.
See for example http://click.palletsprojects.com/en/7.x/testing/ which says "The CliRunner.invoke() method runs the command line script in isolation ..." -- my emphasis.
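For what it's worth, driving the command through that testing interface would look roughly like this; it does let you pass the template option, but as the docs say it runs the command in isolation, and I haven't verified it against start-api, which blocks to serve requests:
from click.testing import CliRunner
from samcli.commands.local.start_api.cli import cli

runner = CliRunner()
# The option list here plays the role of the normal command-line arguments.
result = runner.invoke(cli, ["--template", "template.yaml"])
print(result.exit_code)
print(result.output)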
I am using the following Python script to run SAM CLI commands. This should work for you too.
import json
import sys
import os

try:
    LAMBDA_S3_BUCKET = "s3-bucket-name-in-same-region"
    AWS_REGION = "us-east-1"
    API_NAME = "YourAPIName"
    BASE_PATH = "/path/to/your/project/code/dir"
    STACK_NAME = "YourCloudFormationStackName"
    BUILD_DIR = "%s/%s" % (BASE_PATH, "build_artifact")

    if not os.path.exists(BUILD_DIR):
        os.mkdir(BUILD_DIR)

    os.system("cd %s && sam build --template template.yaml --build-dir %s" % (BASE_PATH, BUILD_DIR))
    os.system("cd %s && sam package --template-file %s/template.yaml --output-template-file packaged.yaml --s3-bucket %s" % (BASE_PATH, BUILD_DIR, LAMBDA_S3_BUCKET))
    os.system("cd %s && sam deploy --template-file packaged.yaml --stack-name %s --capabilities CAPABILITY_IAM --region %s" % (BASE_PATH, STACK_NAME, AWS_REGION))
except Exception as e:
    print(e)
    sys.exit(1)
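If you want the script to stop as soon as one of these steps fails, the same calls can be made with subprocess.run instead of os.system. A rough sketch of the build step, using the same placeholder paths as above:
import subprocess

BASE_PATH = "/path/to/your/project/code/dir"
BUILD_DIR = "%s/build_artifact" % BASE_PATH

# check=True raises CalledProcessError if the SAM CLI exits with a non-zero code,
# so a failed build stops the script instead of silently moving on to package/deploy.
subprocess.run(
    ["sam", "build", "--template", "template.yaml", "--build-dir", BUILD_DIR],
    cwd=BASE_PATH,
    check=True,
)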
I'm trying to seed my database with a collection exported via the mongoexport tool, but I can't seem to find any way to use the mongoimport tool through Ruby.
I looked at the Mongo Driver for how to execute mongo queries via Ruby, and thought about iterating through each line of JSON from the export, but there are keys like "$oid" that give errors when attempting a collection.insert()
Is it possible to use the mongoimport tool in Ruby, or what's the best way to add code to seeds.rb so that it imports a mongo collection?
The mongoimport tool is actually a command-line tool. So you don't use the Mongo Driver for this.
Instead you should "shell out" and call the process. Here's a link on calling a command from the shell.
Calling shell commands from Ruby
mongoexport exports documents in an extended json format specified in the MongoDB docs.
http://www.mongodb.org/display/DOCS/Mongo+Extended+JSON
The driver doesn't read this format automatically. For seeding a database, you may want to use mongodump and mongorestore, which use the database's native BSON format. As another poster mentioned, you could easily shell out to do this.
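For example, something along these lines (database and collection names are placeholders):
mongodump --db mydb --collection mycollection --out dump/
mongorestore --db mydb dump/mydb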
I have a Pylons-based webapp and I'd love to use Celery + RabbitMQ for some time-consuming tasks. I've taken a look at the celery-pylons project but I haven't succeeded in using it.
My main problem with Celery is: where do I put the celeryconfig.py file, or is there another way to specify the Celery options, e.g. BROKER_HOST and the like, from within a Pylons app (in the same way one can put the options in the Django settings.py file when using django-celery)?
Basically, I investigated two options: using Celery as a standalone project and using celery-pylons, both without much success. :(
Thanks in advance for your help.
I am doing this currently, although I've not updated celery in some time. I'm still on 2.0.0 I think.
What I did was to create a celery_app directory within my Pylons application (so in the same directory as data, controllers, etc.).
In that directory are my celeryconfig.py, tasks.py, and pylons_tasks.py.
pylons_tasks.py is just a file that initializes the pylons application so I can load Pylons models and such into the celery tasks.py file. So it does the pylons init and then imports tasks.py.
The celeryconfig is then set to use myapp.celery_app.pylons_tasks as the CELERY_IMPORTS value.
CELERY_IMPORTS = ("myapp.celery_app.pylons_tasks", )
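To tie this back to the broker options from the question, a minimal celeryconfig.py along these lines should cover it (broker values are placeholders, using the old 2.x-style setting names):
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"

CELERY_RESULT_BACKEND = "amqp"

# Point Celery at the module that bootstraps Pylons and then imports tasks.py
CELERY_IMPORTS = ("myapp.celery_app.pylons_tasks", )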
Hope that helps some.
The tightest integration with pylons is to build a custom loader into paste commands. This is what celery-pylons does. Check out my fork of celery-pylons https://bitbucket.org/dougtabuchi/celery-pylons/src which should work with the latest celery and pylons 1.0.
To get the celeryd side working you need to add the correct options in your ini file and then call paster celeryd development.ini
For the webapp side you just need to import celerypylons in environment.py. Then you will be able to import and use your tasks from anywhere in your project.
A good pylons project to look at that uses celery is https://rhodecode.org/rhodecode/files/tip/
Let's say I want to download a complete Symfony app, for instance, Jobeet.
I'll have everything necessary to run the app on my desktop, but it wouldn't really work with an empty database. Is there a terminal command to create and fill the database with everything that the app requires?
First, configure your database, either by command line, or editing the "/config/databases.yml" file.
> php symfony configure:database "mysql:host=YOURHOST;dbname=YOURDBNAME" YOURDBUSER YOURDBPASS
Next, if you want to generate everything, forms, filters, models and data, run the following command:
For Doctrine ORM:
php symfony doctrine:build --all --and-load
For Propel ORM:
php symfony propel:build --all --and-load
This should get you up and running. You should definitely look at the tutorial for Jobeet posted on the Symfony Project website for more information on how this project works:
Doctrine: http://www.symfony-project.org/jobeet/1_4/Doctrine/en/
Propel: http://www.symfony-project.org/jobeet/1_4/Propel/en/
You can either edit the config/databases.yml file or use the configure:database task. For more info run:
./symfony help configure:database
Hey, I am a newbie in Symfony.
I am following the Jobeet tutorial on symfony-project.org; I am on day 3: http://www.symfony-project.org/jobeet/1_4/Doctrine/en/03
Whenever I type php symfony doctrine:insert-sql, I get the
following error:
doctrine creating tables
Couldn't locate driver named mysql
I am using it on WAMP. I have symfony present at C:\wamp\bin\php\php5.3.0\PEAR\symfony
and my project is present in C:\wamp\www\jobeet
Kindly help me resolve this, as I am stuck here and can't move further.
You need to enable the MySQL driver for PDO. You will need to edit one or two php.ini files, depending on your system.
First you need to edit the php.ini that Apache uses; you can find it by creating a file that calls phpinfo(), opening it in a browser, and searching for "php.ini".
The second one is the one the command line uses: open a terminal (Start - Run - cmd.exe), run php -i > phpinfo.txt, then open the text file and search for "php.ini".
You are looking for the extensions section: uncomment the line with pdo_mysql (remove the ; at the beginning). After all that, restart Apache and you're good to go.
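In practice that means changing a line like this in the extensions section (the exact filename can differ between PHP builds):
; before (disabled):
;extension=php_pdo_mysql.dll
; after (enabled):
extension=php_pdo_mysql.dll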