There is an option to warm up the cache under System -> Maintenance -> Warmup cache. I have to warm up the cache weekly on my Contao site, so I am thinking of writing a scheduler task. I know there is an option for implementing scheduled tasks via
$GLOBALS['TL_CRON']
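Something like the following is what I have in mind; this is only a rough sketch, and WarmupCache/run are placeholder names for my own class and method:

// in system/config/localconfig.php (or a module's config.php)
$GLOBALS['TL_CRON']['weekly'][] = array('WarmupCache', 'run');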
What are the risks of implementing this? Is there any security risk in running the cache warmup from a scheduler?
In brief: I need a scheduler for page cache warmup.
Disabled pages should not be included
Hidden pages should also be warmed up
There is no way to do what you want to do via a command or cron entry. Things like search reindexing or the frontend page cache warmup of the extension you are using only work via JavaScript AJAX requests - thus they need a client to work. Contao does not yet have the ability to use something like a server side request queue for page cache warming and search indexing.
I am assuming you are referring to a Contao 3 installation and by cache you mean the internal cache, which you can purge in the maintenance section of the back end and can then rebuild.
For this you could use the \Contao\Automator class for which there also exists a command line interface. To purge and rebuild the internal cache you could use the following command:
$ php system/bin/automator generateInternalCache
Replace php with the path to an appropriate PHP CLI binary if necessary (preferably the PHP version that your Contao installation uses).
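To run it weekly, a plain server-side cron entry could look like this (the installation path and PHP binary are just examples):

0 3 * * 0 /usr/bin/php /var/www/contao/system/bin/automator generateInternalCache

This would rebuild the internal cache every Sunday at 03:00.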
Update:
For Contao 4 (Managed Edition), which is a Symfony-based application, you can use the following commands:
$ php vendor/bin/contao-console cache:clear --no-warmup
$ php vendor/bin/contao-console cache:warmup
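These could likewise be scheduled with a normal cron job, for example (the paths are again placeholders):

0 3 * * 0 /usr/bin/php /var/www/contao/vendor/bin/contao-console cache:clear --no-warmup && /usr/bin/php /var/www/contao/vendor/bin/contao-console cache:warmup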
My end goal: I want to fetch data from a retail site on an hourly schedule to see if a specific product is back in stock or not.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has built a scraper already? In this case, Diggernaut has a GitHub repo:
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try and run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start to try and run it... I am not familiar with using .yml files at all, and I barely know my way around a terminal (I can do basic "ls" and "cd" and "brew install", otherwise, no idea).
Help! I have Docker and git installed (not that I know how to use Docker). I have a Mac running version 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this: the config.yml is designed for use on Diggernaut.com, not as part of a Docker container deployment. In fact, as far as I can see, no Docker container for Diggernaut exists.
On the main GitHub config page for Diggernaut they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
You need to create a free account at Diggernaut.
Log in to your account.
Create a project with any name and description you want.
Get into your new project by clicking it and create a new digger with any name.
You will then see 3 options suggested to you; use the one where you work with the meta-language.
The config editor will open; simply copy and paste the config code and click the save button.
Switch the digger's mode from Debug to Active, then run your digger.
Wait for completion.
Download the data.
Schedule your runs if required.
With the expiration of Dartium just a few days ago, I felt compelled to migrate from Dart 1.24.3 to Dart 2, even though it is still in dev.
I have hit a few walls doing so, though, one of them related to the architecture of my apps.
I run a Node.js server, which also acts as a web server for client-side Dart.
The problem I experience with the new Dart SDK is that in order for the .dart files to be usable in Chrome, they must be served using webdev serve or build_runner serve.
Obviously, these two commands act as the file server, which is not what I want, since I'm using a Node.js server.
By using build_runner watch, I think I am enabling the build and watch of the .dart files into .dart.js inside the following directory:
.dart_tool/build/generated/<package>/lib
I am also able to serve them from my Node.js server. What remains is the package directory: I can't seem to find where pub serve gets the following package files:
/packages/$sdk/dev_compiler/amd/require.js
/packages/$sdk/dev_compiler/amd/dart_sdk.js
Does anyone know what build_runner serve does to include them?
Thank you,
There are 2 options for using a different server during development.
Run build_runner serve on a different port and proxy the requests to it from your other server. This has the benefit of delaying requests while a build is ongoing so you don't get an inconsistent set of assets.
Run build_runner watch --output web:build and use the created build/ directory to serve files from. This will include a build/packages directory that has these files in it.
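The invocations for both options could look something like this (the port and output directory are just illustrative choices):

$ pub run build_runner serve web:8081
$ pub run build_runner watch --output web:build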
These files are served from the lib directory of the Dart SDK itself.
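If you want to locate them yourself, in a standard SDK layout they should live somewhere like the paths below, though the exact location may vary between SDK versions:

<dart-sdk>/lib/dev_compiler/amd/require.js
<dart-sdk>/lib/dev_compiler/amd/dart_sdk.js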
Note that there is another option, which is to use the -o option of build_runner. This will create a merged directory of source and generated files, which you can serve directly without relying on any internal file layout.
This one is quite strange.
I am running a very typical Docker container that holds a Rails API. Inside this API, I have an endpoint which takes an upload of a CSV and does some things and stuff.
Here is the exact flow:
vim spec/fixtures/bid_update.csv
# fill it with some data
# now we call the spec that uses this fixture
docker-compose run --rm web bundle exec rspec spec/requests/bids_spec.rb
# and now the csv is loaded and I can see it as plaintext
However, after creating this, I decided to change the content of the CSV. So I do just that, adding a column and a corresponding value for each row.
Now, however, when we run the spec again after saving, it still sees the old version of the CSV: the one originally used at the breakpoint in the spec.
cat-ing the CSV on the host clearly shows the new content it should have.
Restarting the VM does nothing. The only solution I've found is to docker-machine rm dev and build a new machine (my main one for this is called dev).
I am entirely perplexed as to what could cause this, or what a simple means of fixing it would be (rebuilding with all those images takes a while).
Ideas? Inform me I'm an idiot and I just had to press 0 for an operator and they would have fixed it?
Any help appreciated :)
I think it could be an issue with how VirtualBox shares folders with your environment. More information here: https://github.com/mitchellh/vagrant/issues/351#issuecomment-1339640
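One way to confirm this is to compare the file on the host with what the VM actually sees; for example (the project path inside the VM is an assumption, adjust it to your setup):

$ docker-machine ssh dev cat /path/to/project/spec/fixtures/bid_update.csv

If that output shows the old CSV content while the host shows the new content, the shared folder is serving stale data.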
Does Laravel 5.1 work without an internet connection?
I'd like to create a new Laravel application.
When I execute laravel new test (with an internet connection), it works well;
but when I execute a similar command in the same directory (laravel new anotherName) without an internet connection, it doesn't work and the following error message is shown:
[GuzzleHttp\Exception\RequestException]
Error creating resource. [url] http://cabinet.laravel.com/latest.zip [type] 2
[message] fopen(http://cabinet.laravel.com/latest.zip): failed to open stream: php_network_getaddresses: getaddrinfo failed: Name or service not known
[file] /home/<Myname>/.composer/vendor/guzzlehttp/guzzle/src/Adapter/StreamAdapter.php [line] 367
Is there a solution? I can't always work online.
When you use the laravel installer, it fetches the latest version from the server. One solution would be to initialise a Laravel project, add it to git version control, and then, when offline, check out the project to a new folder. You'd have to manually choose a new app key (I think). You will also not be able to composer require or npm install any new packages while offline.
Once you have created it though it should run offline (unless your views are sourcing assets from, say, bootstrap or jQuery CDNs).
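A rough sketch of that workflow could look like this (the directory names are just examples):

$ laravel new test                # run once while online
$ cd test && git init && git add . && git commit -m "initial app"
# later, offline:
$ git clone /path/to/test anotherName
$ cd anotherName && php artisan key:generate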
Composer 2+:
COMPOSER_DISABLE_NETWORK=1 laravel new myapp
Troubleshooting:
Check your composer version: composer --version - you may have to update to the latest version with composer self-update;
Check you have a global cache: echo $COMPOSER_HOME - you may have to create ~/.composer and add export COMPOSER_HOME="${HOME}/.composer" to your ~/.bashrc or ~/.zshrc - don't forget to close and reopen your terminal to apply the changes;
If you get the error https://repo.packagist.org could not be fully loaded (Network disabled, request canceled: https://repo.packagist.org/packages.json), package information was loaded from the local cache and may be out of date, the Laravel packages are not in the global cache. Run the command with internet access enabled to download the files.
Let's say I want to download a complete Symfony app, for instance, Jobeet.
I'll have everything necessary to run the app on my desktop, but it wouldn't really work with an empty database. Is there a terminal command to create and fill the database with everything that the app requires?
First, configure your database, either by command line or by editing the config/databases.yml file.
> php symfony configure:database "mysql:host=YOURHOST;dbname=YOURDBNAME" YOURDBUSER YOURDBPASS
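Alternatively, editing the file directly would look something like this for Doctrine (the credentials are placeholders; for Propel the class would be sfPropelDatabase):

# config/databases.yml
all:
  doctrine:
    class: sfDoctrineDatabase
    param:
      dsn: 'mysql:host=YOURHOST;dbname=YOURDBNAME'
      username: YOURDBUSER
      password: YOURDBPASS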
Next, if you want to generate everything, forms, filters, models and data, run the following command:
For Doctrine ORM:
php symfony doctrine:build --all --and-load
For Propel ORM:
php symfony propel:build --all --and-load
This should get you up and running. You should definitely look at the tutorial for Jobeet posted on the Symfony Project website for more information on how this project works:
Doctrine: http://www.symfony-project.org/jobeet/1_4/Doctrine/en/
Propel: http://www.symfony-project.org/jobeet/1_4/Propel/en/
You can either edit the config/databases.yml file or use the configure:database task. For more info run:
./symfony help configure:database