symfony 1.4 propel:build-model not working as expected - symfony1

Just wondering if anyone might know what's happening here. I have several schema.yml files, and when I try to build the model classes using symfony propel:build-model I don't get any error message; however, instead of any classes being generated, I get XML files generated in the same config folder as the schema YAML files. For example, if I have a file named logger_schema.yml in the config directory, after I run build-model I will also have a generated-logger_schema.xml file in the config directory, and no generated classes.
Any idea what could be causing this?

The XML file in question is a worker file symfony/Propel creates as part of the class generation process - it's not an "error" as such.
symfony CLI tasks require quite a lot of PHP memory, especially on Windows. If the Propel task is failing, I would recommend permanently raising the memory_limit setting in php.ini to at least 256M. I know this seems high, but you should only ever need these tasks on a development machine. As you note, you saw evidence of memory exhaustion on another related task.
If that doesn't fix it, could you add to your question all of the CLI output when you run the task? It might shed some light on the step which is failing.
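For example, a one-line php.ini change (the exact value is a judgment call):

; php.ini on the development machine
memory_limit = 256M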

After looking at this ticket, it appears the XML files are likely the result of a symfony error, even though propel:build-model repeatedly gave me no error message. After trying propel:build --model --forms, I did in fact get a "memory exhausted" error, which was solved by temporarily increasing the PHP memory limit.
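For anyone else hitting this, the temporary bump can be done per invocation with PHP's -d switch, without touching php.ini (the value here is illustrative):

php -d memory_limit=512M symfony propel:build --model --forms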

Related

Updating Grails 4+ configuration values during runtime

In Grails 2 we used the "External configuration plugin", which included the method checkNow() for checking and refreshing values from an external config file.
Is there a simple approach for doing something similar in Grails 4+? I have seen references to Spring Cloud Config Server, but it seems a bit overkill for me. All I really want is to be able to (now and then) update a config value at runtime. It could even be done purely in a few lines of code, and does not have to originate from changes in the config file. This would avoid having to restart our server for minor config changes. Thanks!
I'm replying to myself with a ridiculously simple answer: "just change it". Using the console plugin (or any other form of code execution), I can just assign grailsApplication.config.any.property a new value. It won't persist, and it won't update any listeners or anything. But it is a glaringly obvious solution that I assumed wouldn't work because of the getProperty() calls (I interpreted the name as reading from a file), so I went off and googled discussions about Spring Cloud Config instead.
So, move on... nothing to see here. Just mild embarrassment :-P
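For illustration, a minimal sketch of the above, run from the console plugin (the property path myapp.someFlag is made up):

// assigns a new value in the live, in-memory config; not persisted
grailsApplication.config.myapp.someFlag = true
assert grailsApplication.config.myapp.someFlag == true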

Map a file in Docker using Docker Volume [duplicate]

Change a config.properties file in a jar / war file at runtime and hot deploy the changes?
My requirement is as follows: we have a "config.properties" file in a jar/war file. I have to open the file through a webpage, and after the user has made the necessary changes to it, I have to update the "config.properties" in the jar/war file and hot deploy it. Can we achieve this feat? If so, can you please point me to relevant sites/documents so that I can jumpstart on this?
I would strongly recommend your architect rethink this solution. What you describe should be done through JNDI or a similar technique, not through reloading properties.
Deployments should be considered static - that any given web container allows for magic trickery should not be depended on, and WILL break some day (most likely at the most inconvenient time).
You've got a couple of problems off the top of my head:
ensuring that nothing is holding static references to a java.util.Properties that has previously loaded your config.properties file.
most servlet engines will unpack your war to a working directory, so the properties file you load won't be the one in the war, it will be the unpacked one. This means your changes will be overwritten when you restart the servlet engine, because that is typically one of the points at which the war is unpacked.
While these problems aren't insurmountable, I've always found it much easier to implement this sort of behavior by storing the properties in JNDI (as Thorbjørn suggests) or in a database (while being careful about the static references I mentioned in point 1).
The JNDI/database solution has the nice side effect of easing deployment into multiple environments, because each environment typically has its own registry/database.
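To make the JNDI suggestion concrete, a minimal sketch (the entry name java:comp/env/greeting is illustrative; the value itself would be bound in the container, e.g. via an env-entry in web.xml or container-specific configuration, so it can change without repackaging the war):

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ConfigLookup {
    // reads a value from the container-managed JNDI environment
    public static String greeting() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (String) ctx.lookup("java:comp/env/greeting");
    }
}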
Even though I agree with the comments above, I can suggest one solution: the Apache Commons Configuration extension gives you the possibility to do something like:
config.setReloadingStrategy(new FileChangedReloadingStrategy());
That could do the trick, reloading the configuration file at runtime with essentially no extra code.
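In full, the setup looks roughly like this (Commons Configuration 1.x; the file name is illustrative, and the file should live outside the war for the reasons given above):

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.PropertiesConfiguration;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class ReloadingConfig {
    public static PropertiesConfiguration load() throws ConfigurationException {
        PropertiesConfiguration config = new PropertiesConfiguration("config.properties");
        // re-read the file whenever its timestamp changes
        config.setReloadingStrategy(new FileChangedReloadingStrategy());
        return config;
    }
}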
However, as with JNDI and other methods of web application configuration, security is a concern. Be careful about which parameters can, and must, be configurable.

.kmod and .ko - difference?

I have been using ndisgen to try to generate a .ko kernel module for an rtl8192se driver for my FreeBSD 9 netbook, following instructions found on several different dev blogs.
Somehow, I've just not been able to generate a file with the .ko extension. Instead, I keep getting a .kmod file.
The question is: what is the difference between the two?
I have also attempted kldload with this .kmod file. When I check via kldstat, I do see it there, but when I then check with dmesg and pciconf -lv, my Realtek card is still not hooked up.
So I reckon I really need to generate the .ko file in the first place. But what am I doing wrong or missing, such that only a .kmod is generated?
Any pointers would be appreciated! Thanks! :)
Update:
There was a message I had ignored. My bad!
The message after the conversion was:
"...Cleaning up... rm: machine: is a directory cleanup failed.Exiting"
That was because I had pasted a copy of the /usr/include/machine folder, with all the headers I thought were required, into the path where I was converting the driver.
But I ignored it, thinking that since ndisgen had already created a .kmod file (which I had assumed was also a kernel module, just not in .ko form), everything was alright.
So finally, since it was complaining that machine is a directory and can't be cleaned up, I created a symbolic link to that folder instead.
Et voilà! The cleanup was successful and now I have the .ko file! :D
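In concrete terms, the fix amounted to something like this, run in the directory where the driver was being converted:

# replace the copied headers directory with a symlink so ndisgen's cleanup can remove it
rm -rf machine
ln -s /usr/include/machine machine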
The ndisgen script renames the .ko file to .kmod temporarily to do some cleanup.
If that cleanup works, it should rename it back to a .ko file. See the drvgen function in /usr/src/usr.sbin/ndiscvt/ndisgen.sh.
I'm assuming that something goes wrong between the two renames. Do you get any error messages?
Keep in mind that if you load the driver, it should show up as the ndis0 device!
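For reference, a typical load-and-verify sequence (the module name here is illustrative):

kldload ./rtl8192se_sys.ko   # the module produced by ndisgen
kldstat                      # confirm it is loaded
ifconfig ndis0               # the NDIS wrapper exposes the card as ndis0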
Looks like you are getting a NetBSD kernel module, not a FreeBSD one. See these posts:
hubertf's NetBSD Blog
Modern NetBSD kernel module
Is the source code that you are using available publicly for us to try follow your steps?

Web deployment failing due to file in use

I'm using Microsoft's Web Deploy Remote Agent service to allow me to easily publish code to the server from within Visual Studio.
The web site I am deploying is using log4net to log messages to log files, and every time I try to deploy a new version of the code, I get this error in Visual Studio stating that the current log4net log file is in use:
An error occurred when the request was processed on the remote
computer. The file 'Web.log' is in use.
The process cannot access 'C:\inetpub\wwwroot\Logs\Web.log' because it
is being used by another process.
I can solve this by going onto the server and doing an iisreset before publishing... but that is kind of defeating the point of 'easy' publishing from Visual Studio :)
Is there some way I can get the publish task to issue an iisreset automatically, or some other way I can work round this?
I kept poking around and found some tidbits about the file being locked in a few other forums. Have you tried adding
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
to your <appender> element in the web.config file? From the Apache docs:
Opens the file once for each AcquireLock/ReleaseLock cycle, thus holding the lock for the minimal amount of time. This method of locking is considerably slower than FileAppender.ExclusiveLock but allows other processes to move/delete the log file whilst logging continues.
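For context, the element sits inside the appender definition in web.config, something like this (the appender name and layout are illustrative; the file path is taken from your error message):

<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="C:\inetpub\wwwroot\Logs\Web.log" />
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
</appender>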
As far as performance considerations go, you would need to test whether this affects you, as it really depends on how often you write to the log file. I can't believe that acquiring/releasing a lock could take all that much time, though.
There is an MSDeploy provider called recycleApp which is used exactly for this. You can include it in your deployment manifest.
Another option is to use the ignoreOnErrors flag, which will skip the file in use and continue with the deployment.
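I haven't verified the exact syntax, but from what I remember of the provider documentation the recycle step would look something along these lines (site name and paths are placeholders, so treat this as a sketch):

msdeploy -verb:sync -source:recycleApp -dest:recycleApp="Default Web Site",recycleMode="StopAppPool"
msdeploy -verb:sync -source:contentPath="C:\build\MySite" -dest:contentPath="Default Web Site"
msdeploy -verb:sync -source:recycleApp -dest:recycleApp="Default Web Site",recycleMode="StartAppPool"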

PHP Command-line scripts are ignoring php.ini and ini_set('memory_limit',...) directives

I am facing the common "Fatal error: Out of memory (allocated 30408704) (tried to allocate 24 bytes)..." PHP Fatal error. Pages served via Apache are not exhibiting this behavior.
I've tried the following:
Increasing the memory_limit in php.ini to a much larger value.
Increasing memory_limit within the script itself via calls to ini_set('memory_limit', -1), ini_set('memory_limit', '-1'), ini_set('memory_limit', 100000000), ini_set('memory_limit', '128M'), etc.
unset()ing unneeded arrays and objects to encourage the garbage collector to free up memory.
Contacting the web host. They are normally very capable and knowledgeable, but have not been able to help me with this issue either.
I've tried explicitly including a php.ini file using the -c command-line flag to hand-pick specific php.ini files with various values (see the example after this list).
I've tried setting memory_limit in php.ini using both raw numbers of bytes and values such as 64M, 128M, etc.
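For reference, the -c attempts looked like this (paths are illustrative):

php -c /path/to/custom/php.ini my_script.php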
The hosting provider was able to run the script as root with no issues, but experiences the same issue I do when running it using my non-root user. Perhaps there is some kind of permissions issue involved?
Regardless of what I try, the error message is the same. It appears that my command line scripts are ignoring changes to memory_limit.
I tend to try to make sure my scripts are memory efficient, but I'm currently needing to parse large amounts of HTML via Simple HTML DOM and it is in the parser that I'm experiencing out of memory issues. In an attempt to reduce the memory footprint of the script, I've tried using DOMDocument instead. This does not help either. In fact, the out of memory error is now triggered elsewhere in the script.
My question: has anyone experienced this or a similar issue? Do you have any recommendations?
Thank you.
It turns out that the problem was caused by shell fork bomb protection being enabled on the server that was placing a hard memory limit on all command-line scripts. This had been enabled by my web host without my knowledge.
Your PHP on the CLI may be using a different php.ini than your Apache PHP. Try phpinfo() and check that it's using the ini file you think it's using.
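Two quick checks that make this obvious, using standard PHP CLI switches:

php --ini                                        # shows which ini files the CLI actually loads
php -r 'echo ini_get("memory_limit"), PHP_EOL;'  # shows the effective limit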
