Using Grails to store images but unable to store them outside CATALINA_HOME in production - grails

I'm using Grails 2.5.6 to store uploaded images in a folder on a server.
The following is my code to store the image:
mpr.multiFileMap.file.each { fileData ->
    CommonsMultipartFile file = fileData
    File convFile = new File(file.getOriginalFilename());
    file.transferTo(convFile);
    /** Processing File **/
    File uploadedFile = new File("${directory}${generatedFileName}.${extension}")
    convFile.renameTo(uploadedFile)
}
I have no problem running in development (macOS High Sierra), but when I deployed to production (an Ubuntu 14.04 server), I could not save the file outside the CATALINA_HOME directory.
I have checked the permissions and ownership of the destination directory; the directory gets created, but the file is never stored.
For example, I tried to store the file under the /home/tomcat/ directory (/home is on a separate partition from Tomcat, which lives under /var): the directory was created, but the file was never stored.
When I put the destination directory inside the CATALINA_HOME folder, everything works fine, but that is not the scenario I want.

You say your destination directory is on another partition, so another filesystem may be in use on that partition.
And if you look at the javadoc of the renameTo method, it says:
Many aspects of the behavior of this method are inherently
platform-dependent: The rename operation might not be able to move a
file from one filesystem to another, it might not be atomic, and it
might not succeed if a file with the destination abstract pathname
already exists. The return value should always be checked to make
sure that the rename operation was successful.
...
@return true if and only if the renaming succeeded; false otherwise
So I think the renameTo call fails to move the file across filesystems (and its return value, which would tell you so, is being ignored). You can rewrite your code like this instead:
mpr.multiFileMap.file.each { fileData ->
    CommonsMultipartFile file = fileData
    File uploadedFile = new File("${directory}${generatedFileName}.${extension}")
    // String originalFilename = file.getOriginalFilename()
    // you can store originalFilename in the database, for example
    if (!uploadedFile.getParentFile().exists()) {
        uploadedFile.getParentFile().mkdirs()
        // you can set permissions on the target directory if desired, using PosixFilePermission
    }
    file.transferTo(uploadedFile)
}
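If you do still need to move an already-written file to another partition, java.nio.file.Files.move (available since Java 7) falls back to a copy-and-delete when a plain rename is impossible. A minimal sketch, with hypothetical paths:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

Path source = Paths.get("/tmp/upload-123.tmp");            // hypothetical temp file
Path target = Paths.get("/home/tomcat/uploads/image.jpg"); // hypothetical destination
Files.createDirectories(target.getParent());               // make sure the directory exists
// Unlike File.renameTo(), Files.move copies then deletes when source and
// target are on different filesystems, and it throws an IOException
// instead of silently returning false on failure.
Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);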

Related

Can I preserve a folder's contents in my project directory when I queue a new Build in TFS?

I have a problem. I am using Team Foundation Server 2017 RTM. I have a build definition that will deploy my app to a development server running Windows Server 2012 R2. My app allows users to upload images and PDFs. When this is done, a folder named Media is created in my project's root directory and the files are uploaded here. The problem is, whenever I queue a new build, this folder gets destroyed and all the links to the media don't point to anything. I am rather new at managing and setting up TFS so I was wondering if there is any way I can preserve the contents of my media folder whenever I queue a new build. Any ideas?
Ok, so I spent my whole day looking at this.
In my C# code I create a directory like so:
// -- Create a new file name that is unique
string fileExtension = Path.GetExtension(upload.FileName);
Guid fileGuid = Guid.NewGuid();
string fileName = fileGuid + fileExtension;
// -- Create the directory and upload the image to that directory
string mediaDirectory = Server.MapPath("~/Media/");
Directory.CreateDirectory(mediaDirectory);
string filePath = Path.Combine(mediaDirectory, fileName);
upload.SaveAs(filePath);
I would then set the image url on the Media object like:
string imageUrl = "/Media/" + fileName;
So now, instead of storing the image in the database, I am just storing the URL to the image.
This created the Media directory inside the app directory, where I could store the files.
That worked, but as I mentioned, this directory gets destroyed every time I queue a new build. I fixed it by changing where I store the images:
// -- Create a new file name that is unique
string fileExtension = Path.GetExtension(upload.FileName);
Guid fileGuid = Guid.NewGuid();
string fileName = fileGuid + fileExtension;
// -- Create the directory and upload the image to that directory
// The Media directory will be created on the C drive root
string mediaDirectory = @"c:\Media";
Directory.CreateDirectory(mediaDirectory);
string filePath = Path.Combine(mediaDirectory, fileName);
upload.SaveAs(filePath);
Now my Media folder is created on the server's C drive and won't be destroyed whenever I queue a new build. Since the app can't access files outside the app directory, I needed a way to reach the files in the Media directory. I created a new virtual directory in IIS that points to the Media folder and gave it the alias Media.
This now gives me access to all the files I put in the Media directory and properly displays the images when needed. I really hope this helps someone, because I spent way too long looking at this.
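If you would rather script the virtual-directory step than click through IIS Manager, the same thing can be done from an elevated command prompt with appcmd (the site name and paths below are examples; adjust them to your setup):

%windir%\system32\inetsrv\appcmd.exe add vdir /app.name:"Default Web Site/" /path:/Media /physicalPath:"C:\Media"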
According to your description: there is a working directory on the build agent, and if you set clean=true in the build definition, the previous build output is deleted when you queue a new build. I'm not sure where your Media folder is located, but avoid creating it in a directory on the build agent such as Build.ArtifactStagingDirectory:
The local path on the agent where any artifacts are copied to before
being pushed to their destination. For example: c:\agent_work\1\a.
A typical way to use this folder is to publish your build artifacts
with the Copy files and Publish build artifacts steps.
Note: This directory is purged before each new build, so you don't have to clean it up yourself.
For more details about the folder paths used in build/release, you can refer to this tutorial: Predefined variables

How to replace the hosts file in C#

I just want to know how I can completely replace the user's hosts file with another one.
Note: I want to give the user just my compiled .exe file (with my own hosts file embedded in it); after running the exe, the user's hosts file should be replaced with the hosts file I embedded.
A simple way is to use the System.IO classes. Note that this example appends an entry only if it is missing; to replace the file wholesale, you would instead write your embedded content over it:
string path = "system32\\drivers\\etc\\hosts";
string hostfile = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Windows), path);
const string checkIP = "127.0.0.1 localhost";
if (!File.ReadAllLines(hostfile).Contains(checkIP))
File.AppendAllLines(hostfile, new string[] {checkIP});
Do not forget that your program must run with administrator privileges; otherwise you will get an UnauthorizedAccessException.

Jenkins Continuous Integration with Amazon S3 - Everything is uploading to the root?

I'm running Jenkins and I have it successfully working with my GitHub account, but I can't get it working correctly with Amazon S3.
I installed the S3 plugin, and when I run a build it successfully uploads to the S3 bucket I specify, but all of the uploaded files end up in the root of the bucket. I have a bunch of folders (such as /css, /js, and so on), but all of the files in those folders from GitHub end up in the root of my S3 account.
Is it possible to get the S3 plugin to upload and retain the folder structure?
It doesn't look like this is possible. Instead, I'm using s3cmd to do this. You must first install it on your server; then, in one of the Bash scripts within a Jenkins job, you can use:
s3cmd sync -r -P $WORKSPACE/ s3://YOUR_BUCKET_NAME
That will copy all of the files to your S3 account maintaining the folder structure. The -P keeps read permissions for everyone (needed if you're using your bucket as a web server). This is a great solution using the sync feature, because it compares all your local files against the S3 bucket and only copies files that have changed (by comparing file sizes and checksums).
I have never worked with the S3 plugin for Jenkins (though now that I know it exists, I might give it a try), but looking at the code, it seems you can only do what you want using a workaround.
Here's what the actual plugin code does (taken from GitHub); I removed the parts of the code that are not relevant, for the sake of readability:
class hudson.plugins.s3.S3Profile, method upload:
final Destination dest = new Destination(bucketName,filePath.getName());
getClient().putObject(dest.bucketName, dest.objectName, filePath.read(), metadata);
Now if you take a look into hudson.FilePath.getName()'s JavaDoc:
Gets just the file name portion without directories.
Now, take a look into the hudson.plugins.s3.Destination's constructor:
public Destination(final String userBucketName, final String fileName) {
    if (userBucketName == null || fileName == null)
        throw new IllegalArgumentException("Not defined for null parameters: " + userBucketName + "," + fileName);

    final String[] bucketNameArray = userBucketName.split("/", 2);
    bucketName = bucketNameArray[0];
    if (bucketNameArray.length > 1) {
        objectName = bucketNameArray[1] + "/" + fileName;
    } else {
        objectName = fileName;
    }
}
The Destination class JavaDoc says:
The convention implemented here is that a / in a bucket name is used to construct a structure in the object name. That is, a put of file.txt to bucket name of "mybucket/v1" will cause the object "v1/file.txt" to be created in the mybucket.
Conclusion: the filePath.getName() call strips off any directory prefix from the file name (S3 does not have directories, only key prefixes; see this and this threads for more info). If you really need to put your files into a "folder" (i.e. give them a prefix containing a slash), I suggest you append that prefix to the end of your bucket name, as explained in the Destination class JavaDoc.
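For example, under that convention (a small sketch using the Destination constructor quoted above):

// Uploading "style.css" with a bucket name of "mybucket/assets"...
Destination dest = new Destination("mybucket/assets", "style.css");
// ...splits on the first slash, so:
//   dest.bucketName == "mybucket"
//   dest.objectName == "assets/style.css"
// i.e. the object lands under the "assets/" prefix in mybucket.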
Yes this is possible.
It looks like for each folder destination, you'll need a separate instance of the S3 plugin however.
"Source" is the file you're uploading.
"Destination bucket" is where you place your path.
Using Jenkins 1.532.2 and S3 Publisher Plug-In 0.5, the UI configure Job screen rejects additional S3 publish entries. There would also be a significant maintenance benefit to us if the plugin recreated the workspace directory structure as we'll have many directories to create.
Set up your Git plugin.
Set up your Bash script.
Everything in the folder marked with "*" will go to the bucket.

How do I derive the physical path of a relative directory inside Config.groovy?

I am trying to set up Weceem using the source from GitHub. It requires a physical path definition for the uploads directory, and for a directory that appears to be used for writing search indexes. The default setting for uploads is:
weceem.upload.dir = 'file:/var/www/weceem.org/uploads/'
I would like to define those using relative paths like WEB-INF/resources/uploads. I tried a methodology I have used previously for accessing directories with relative paths, like this:
File uploadDirectory = ApplicationHolder.application.parentContext.getResource("WEB-INF/resources/uploads").file
def absoluteUploadDirectory = uploadDirectory.absolutePath
weceem.upload.dir = 'file:'+absoluteUploadDirectory
However, 'parentContext' under ApplicationHolder.application is NULL. Can anyone offer a solution to this that would allow me to use relative paths?
Look at your Config.groovy; you should have this (it may be commented out):
// locations to search for config files that get merged into the main config
// config files can either be Java properties files or ConfigSlurper scripts
// "classpath:${appName}-config.properties", "classpath:${appName}-config.groovy",
grails.config.locations = [
    "file:${userHome}/.grails/${appName}-config.properties",
    "file:${userHome}/.grails/${appName}-config.groovy"
]
Create a config file on the deployment server:
"${userHome}/.grails/${appName}-config.properties"
and define your property (even a non-relative path) in that config file.
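For example, the external ${appName}-config.properties on the server might contain an absolute, server-specific path (the value below is only an illustration):

weceem.upload.dir=file:/var/uploads/weceem/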
To add to Aram Arabyan's response, which is correct, but lacks an explanation:
Grails apps don't have a "local" directory the way a PHP app does. For production they should be deployed in a servlet container, and the deployed content should not be considered writable, as it can get wiped out on the next deployment.
In short: think of your deployed application as a compiled binary.
Instead, choose a specific location somewhere on your server for the uploads to live, preferably outside the web server's path, so they can't be accessed directly. That's why Weceem defaults to a custom folder under /var/www/weceem.org/.
If you configure a path using the externalized configuration technique, you can then have a path specific to the server, and include a different path on your development machine.
In both cases, however, you should use absolute paths, or at least paths relative to known directories.
For example:
String base = System.properties['base.dir']
println "config: ${base}/web-app/config/HookConfig.groovy"
String str = new File("${base}/web-app/config/HookConfig.groovy").text
return new ConfigSlurper().parse(str)
or
def grailsApplication

private getConfig() {
    String str = grailsApplication.parentContext.getResource("config/HookConfig.groovy").file.text
    return new ConfigSlurper().parse(str)
}

How to call an input file which is already in the package

In my Hadoop MapReduce application I have one input file. I want the input file to be picked up automatically when I execute my application's jar. To do this I wrote one class to specify the input, the output, and the file itself, and at the point where I read the file I want to specify its path. To do that I used this code:
QueriesTest.class.getResourceAsStream("/src/main/resources/test")
but it is not working (it cannot read the input file from the generated jar), so I used this one:
URL url = this.getClass().getResource("/src/main/resources/test")
Here I am getting a problem with the URL. Please help me out. I am using Hadoop 0.21.
I'm not sure what you want to tell us with your resource loading, but the usual way to add an input file is this:
Configuration conf = new Configuration();
Job job = new Job(conf);
Path in = new Path("YOUR_PATH_IN_HDFS");
FileInputFormat.addInputPath(job, in);
job.setInputFormatClass(TextInputFormat.class); // could be a sequencefile also
// set the other stuff
job.waitForCompletion(true);
Make sure your file resides in HDFS then.
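If the file really has to ship inside the jar, one option is to copy it from the classpath into HDFS before submitting the job. A sketch, assuming Maven-style packaging (a file at src/main/resources/test ends up at /test inside the jar, not at /src/main/resources/test) and a hypothetical HDFS path:

import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path in = new Path("input/test");                                      // hypothetical HDFS path
InputStream bundled = QueriesTest.class.getResourceAsStream("/test");  // jar root, not /src/main/resources/test
OutputStream out = fs.create(in);
IOUtils.copyBytes(bundled, out, conf, true); // copy the resource into HDFS, closing both streams
// then add it as the job input exactly as shown above:
FileInputFormat.addInputPath(job, in);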
