I need to write a Java method that sends PostScript files to a printer as a single job. In other words, I need to reproduce the effect of the following Unix command:
lp -d printer file1.ps file2.ps file3.ps
First I thought I could just concatenate the PS files (using classes like ConcatInputStream and PrintJobWatcher). But the resulting merged PS file is not always valid.
If it helps, here is my current code (I have been asked to do it in Groovy):
/**
 * Prints the {@code files} {@code copyCount} times using
 * {@code printService}.
 * <p>
 * Exceptions may be thrown.
 *
 * @param printService Print service
 * @param files Groovy array of {@code File} objects
 * @param copyCount Number of copies to print
 */
private static void printJob(
        PrintService printService,
        def files,
        int copyCount) {
    // No multiple copy support for PS file, must do it manually
    copyCount.times { i ->
        InputStream inputStream = null
        try {
            log.debug("Create stream for copy #${i}")
            inputStream = new ConcatInputStream()
            for (def file in files) {
                if (file != null) {
                    log.debug("Add '${file.absolutePath}' to the stream")
                    ((ConcatInputStream) inputStream).addInputStream(
                            new FileInputStream(file))
                }
            }
            log.debug("Create document")
            Doc doc = new SimpleDoc(
                    inputStream, DocFlavor.INPUT_STREAM.AUTOSENSE, null)
            log.debug("Create print job")
            DocPrintJob docPrintJob = printService.createPrintJob()
            log.debug("Create watcher")
            PrintJobWatcher watcher = new PrintJobWatcher(docPrintJob)
            log.debug("Print copy #${i}")
            docPrintJob.print(doc, null)
            log.debug("Wait for completion")
            watcher.waitForDone()
        } finally {
            if (inputStream) log.debug("Close the stream")
            inputStream?.close()
        }
    }
}
I’m not allowed to convert the PS into PDF.
I read here that I could insert false 0 startjob pop between the PS files. But then would there be only one job?
I may be confusing the concept of "jobs"...
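To make the idea concrete, here is the kind of helper I have in mind, using plain SequenceInputStream (the names are mine; whether this really yields a single job is exactly what I'm unsure about):
// Sketch only: concatenate the PS files, inserting "false 0 startjob pop"
// between them to reset interpreter state; the combined stream would then
// be sent as one SimpleDoc, as in the code above.
private static InputStream concatWithSeparators(def files) {
    def streams = []
    files.eachWithIndex { file, idx ->
        if (idx > 0) {
            streams << new ByteArrayInputStream('\nfalse 0 startjob pop\n'.bytes)
        }
        streams << new FileInputStream(file)
    }
    return new SequenceInputStream(Collections.enumeration(streams))
}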
I didn’t find a post on this topic (sending multiple PS files to the printer in one job). The solution may be so obvious that it blinded me, which is why I posted this question.
My next attempt will be to execute lp from the class. Even if it looks dirty, I know I can make it work that way... If you know a simpler way, please tell me.
Edit:
Executing lp (as below) works well:
/**
 * Prints the {@code files} {@code copyCount} times using an executable.
 * <p>
 * Exceptions may be thrown.
 *
 * @param config ConfigObject containing closures for building the
 *        command line for the printing executable and for analyzing the
 *        return code. Example of config file:
 *
 *        print {
 *            commandClosure = { printerName, files ->
 *                [
 *                    'lp',
 *                    '-d', printerName,
 *                    files.collect{ it.absolutePath }
 *                ].flatten()
 *            }
 *            errorClosure = { returnCode, stdout, stderr -> returnCode != 0 }
 *            warnClosure = { returnCode, stdout, stderr ->
 *                !stderr?.isAllWhitespace() }
 *        }
 *
 * @param printerName Printer name
 * @param files Groovy array of {@code File} objects
 * @param copyCount Number of copies to print
 */
private static void printJob(
        ConfigObject config,
        String printerName,
        def files,
        int copyCount) {
    files.removeAll([null])
    copyCount.times { i ->
        def command = config.print.commandClosure(printerName, files)
        log.debug("Command: `" + command.join(' ') + "`")
        def proc = command.execute()
        proc.waitFor()
        def returnCode = proc.exitValue()
        def stdout = proc.in.text
        def stderr = proc.err.text
        def debugString = "`" + command.join(' ') +
                "`\nReturn code: " + returnCode +
                "\nSTDOUT:\n" + stdout + "\nSTDERR:\n" + stderr
        if (config.print.errorClosure(returnCode, stdout, stderr)) {
            log.error("Error while calling ${debugString}")
            throw new PrintException("Error while calling ${debugString}")
        } else if (config.print.warnClosure(returnCode, stdout, stderr)) {
            log.warn("Warnings while calling ${debugString}")
        } else {
            log.debug("Command successful ${debugString}")
        }
    }
}
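For completeness, this is how I call it (the config file name print.groovy is made up for the example; ConfigSlurper is standard Groovy):
// Hypothetical usage: parse a config file containing the closures shown in
// the javadoc above, then print two files twice.
def config = new ConfigSlurper().parse(new File('print.groovy').toURI().toURL())
printJob(config, 'myPrinter', [new File('file1.ps'), new File('file2.ps')], 2)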
Even so, I would prefer not to use an external executable... This issue is no longer critical for me. I will accept an answer if it does not require the call to an external executable.
Actually, can't you just loop through the files inside your loop for the number of copies?
i.e.:
private static void printJob(PrintService printService, def files, int copyCount) {
    // No multiple copy support for PS file, must do it manually
    copyCount.times { i ->
        log.debug("Printing Copy $i")
        files.each { file ->
            log.debug("Printing $file")
            file.withInputStream { fis ->
                Doc doc = new SimpleDoc(fis, DocFlavor.INPUT_STREAM.AUTOSENSE, null)
                DocPrintJob docPrintJob = printService.createPrintJob()
                PrintJobWatcher watcher = new PrintJobWatcher(docPrintJob)
                docPrintJob.print(doc, null)
                watcher.waitForDone()
            }
        }
    }
}
(untested)
Edit:
As an update to your workaround above, rather than:
def proc = command.execute()
proc.waitFor()
def returnCode = proc.exitValue()
def stdout = proc.in.text
def stderr = proc.err.text
You're probably better off with:
def proc = command.execute()
def out = new StringWriter()
def err = new StringWriter()
proc.consumeProcessOutput(out, err)
proc.waitFor()
def returnCode = proc.exitValue()
def stdout = out.toString()
def stderr = err.toString()
As this won't block if the process writes a lot of information :-)
One of the issues could be related to Document Structuring Conventions (DSC) comments. These comments provide metadata about the document contained in the file. A tool like Ghostscript should be able to process the resulting concatenated file, because it ignores DSC comments entirely and just processes the PostScript. But tools that expect to work on DSC-conforming files will be confused when the first file ends (its end is marked by a %%EOF comment) and there's more data in the file.
One thing that might work is to strip all DSC comments from the files, so there's no misleading DSC information. (DSC comments are always a full line starting with %%, so an RE substitution should do it: s/^%%.*$//g. The %! header line doesn't start with %%, so it survives.)
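A minimal Groovy sketch of that idea (assuming the files fit comfortably in memory):
// Sketch: drop full-line DSC comments (lines starting with %%) from a PS file
// before concatenation, so per-file %%Trailer / %%EOF markers don't mislead
// DSC-aware tools. Note this also discards metadata such as page counts.
static String stripDscComments(File psFile) {
    psFile.readLines()
          .findAll { !it.startsWith('%%') }
          .join('\n')
}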
I'm trying to pipe output from echo into a command using GLib's spawn_command_line_sync method. The problem I've run into is that echo is interpreting the entire command as the argument.
To better explain, I run this in my code:
string command = "echo \"" + some_var + "\" | command";
Process.spawn_command_line_sync (command.escape (),
out r, out e, out s);
I would expect the variable to be echoed to the pipe and the command to run with the piped data; however, when I check the result, it's just echoing everything after echo, like this:
"some_var's value" | command
I think I could just use the Posix class to run the command, but I like having the result, error, and status values that the spawn_command_line_sync method provides.
The problem is that you are providing shell syntax to what is essentially the kernel’s exec() syscall. The shell pipe operator redirects the stdout of one process to the stdin of the next. To implement that using Vala, you need to get the file descriptor for the stdin of the command process which you’re running, and write some_var to it manually.
You are combining two subprocesses into one. Instead, echo and command should be treated separately, with a pipe set up between them. For some reason many examples on Stack Overflow and other sites use the Process.spawn_* functions, but GSubprocess has an easier syntax.
This example pipes the output of find . to sort and then prints the output to the console. The example is a bit longer because it is a fully working example and makes use of a GMainContext for asynchronous calls. GMainContext is used by GMainLoop, GApplication and GtkApplication:
void main () {
    var mainloop = new MainLoop ();
    SourceFunc quit = () => {
        mainloop.quit ();
        return Source.REMOVE;
    };
    read_piped_commands.begin ("find .", "sort", quit);
    mainloop.run ();
}

async void read_piped_commands (string first_command, string second_command, SourceFunc quit) {
    var output = splice_subprocesses (first_command, second_command);
    try {
        string? line = null;
        do {
            line = yield output.read_line_async ();
            print (@"$(line ?? "")\n");
        } while (line != null);
    } catch (Error error) {
        print (@"Error: $(error.message)\n");
    }
    quit ();
}

DataInputStream splice_subprocesses (string first_command, string second_command) {
    InputStream end_pipe = null;
    try {
        var first = new Subprocess.newv (first_command.split (" "), STDOUT_PIPE);
        var second = new Subprocess.newv (second_command.split (" "), STDIN_PIPE | STDOUT_PIPE);
        second.get_stdin_pipe ().splice (first.get_stdout_pipe (), CLOSE_TARGET);
        end_pipe = second.get_stdout_pipe ();
    } catch (Error error) {
        print (@"Error: $(error.message)\n");
    }
    return new DataInputStream (end_pipe);
}
It is the splice_subprocesses function that answers your question. It takes the STDOUT from the first command as an InputStream and splices it with the OutputStream (STDIN) for the second command.
The read_piped_commands function takes the output from the end of the pipe. This is an InputStream that has been wrapped in a DataInputStream to give access to the read_line_async convenience method.
Here's the full, working implementation:
try {
    string[] command = {"command", "-options", "-etc"};
    string[] env = Environ.get ();
    Pid child_pid;
    string some_string = "This is what gets piped to stdin";
    int stdin;
    int stdout;
    int stderr;
    Process.spawn_async_with_pipes ("/",
                                    command,
                                    env,
                                    SpawnFlags.SEARCH_PATH | SpawnFlags.DO_NOT_REAP_CHILD,
                                    null,
                                    out child_pid,
                                    out stdin,
                                    out stdout,
                                    out stderr);
    FileStream input = FileStream.fdopen (stdin, "w");
    input.write (some_string.data);
    /* Make sure we close the process using its pid */
    ChildWatch.add (child_pid, (pid, status) => {
        Process.close_pid (pid);
    });
} catch (SpawnError e) {
    /* Do something with the Error */
}
I guess playing with the FileStream is what really made it hard to figure this out. Turned out to be pretty straightforward.
Building on the previous answers, an interesting case is to use program arguments to make a general app that can pipe any input to any command:
pipe.vala:
void main (string[] args) {
    try {
        string command = args[1];
        var subproc = new Subprocess (STDIN_PIPE | STDOUT_PIPE, command);
        var data = args[2].data;
        var input = new MemoryInputStream.from_data (data, GLib.free);
        subproc.get_stdin_pipe ().splice (input, CLOSE_TARGET);
        var end_pipe = subproc.get_stdout_pipe ();
        var output = new DataInputStream (end_pipe);
        string? line = null;
        do {
            line = output.read_line ();
            print (@"$(line ?? "")\n");
        } while (line != null);
    } catch (Error error) {
        print (@"Error: $(error.message)\n");
    }
}
build:
$ valac --pkg gio-2.0 pipe.vala
and run:
$ ./pipe sort "cc
ab
aa
b
"
Output:
aa
ab
b
cc
I want to view the most recent X builds for several Jenkins jobs.
So if I wanted to show the last 5 builds for jobs 1-5, it would look something like this:
status build time
-------------------------
pass job1#3 13:54
fail job1#2 13:05
fail job1#1 13:01
pass job5#1 12:17
pass job3#1 11:03
How can I accomplish this?
Notice the builds of the jobs are woven together, so if one job has run many builds recently it will show up more often than jobs that haven't run as much.
Execute the following script in the Script Console (Manage Jenkins -> Script Console):
import java.text.SimpleDateFormat

def numHoursBack = 24
def dateFormat = new SimpleDateFormat("HH:mm")
def buildNameWidth = 30
def cutOffTime = System.currentTimeMillis() - numHoursBack * 3600 * 1000
SortedMap res = new TreeMap();
for (job in Jenkins.instance.getAllItems(BuildableItem.class)) {
    for (build in job.getBuilds()) {
        if (build.getTimeInMillis() < cutOffTime) {
            break;
        }
        res.put(build.getTimeInMillis(), build)
    }
}
def format = "%-10s%-${buildNameWidth}s%-10s"
println(String.format(format, "status", "build", "Time"))
for (entry in res.descendingMap().entrySet()) {
    def build = entry.getValue()
    println(String.format(format, build.getResult(), build.getFullDisplayName(), dateFormat.format(build.getTime())))
}
For me this gives:
status build Time
SUCCESS xxx #107393 17:53
SUCCESS xxx #107392 17:48
SUCCESS xxx #107391 17:43
null yyy #3030 17:38
SUCCESS xxx #107390 17:38
FAILURE zzz #3248 17:37
...
You might need to change the numHoursBack constant; it controls how many hours back to look for builds. The same goes for buildNameWidth, which determines the column width of the build column (if you have really long job and build names you might need to increase it).
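If you only want a handful of specific jobs rather than everything on the server, a small (hypothetical) tweak to the collection loop above filters by job name; the names below are made up:
// Only collect builds from an explicit set of jobs.
def jobNames = ['job1', 'job2', 'job3', 'job4', 'job5'] as Set
for (job in Jenkins.instance.getAllItems(BuildableItem.class)) {
    if (!(job.getName() in jobNames)) {
        continue
    }
    for (build in job.getBuilds()) {
        if (build.getTimeInMillis() < cutOffTime) {
            break
        }
        res.put(build.getTimeInMillis(), build)
    }
}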
Here is a slightly cleaned-up version of Jon S's Groovy script, which also displays information about the worst build.
import hudson.model.*;
import java.text.SimpleDateFormat;

//
// Settings
//
def numHoursBack = 24;
def dateFormat = new SimpleDateFormat("HH:mm");
def cutOffTime = System.currentTimeMillis() - numHoursBack * 3600 * 1000;

/**
 * Basic build information.
 */
def printBuildInfo(finalizedBuild) {
    String level = "INFO";
    String result = finalizedBuild.getResult().toString();
    switch (result) {
        case "UNSTABLE":
            level = "WARNING";
            break;
        case "FAILURE":
            level = "ERROR";
            break;
    }
    // basic info and URL
    println(String.format(
        "[%s] Build %s result is: %s.",
        level,
        finalizedBuild.getFullDisplayName(),
        result
    ));
    // pipe description from downstream
    def description = finalizedBuild.getDescription();
    if (description != null && !description.isEmpty()) {
        println(description.replaceAll("<br>", ""));
    }
    return finalizedBuild;
}

/**
 * Get recent build items.
 */
def getRecentBuilds(cutOffTime) {
    SortedMap res = new TreeMap();
    for (job in Jenkins.instance.getAllItems(BuildableItem.class)) {
        for (build in job.getBuilds()) {
            if (build.getTimeInMillis() < cutOffTime) {
                break;
            }
            res.put(build.getTimeInMillis(), build);
        }
    }
    return res;
}

/**
 * Print build items.
 *
 * minResult - minimum to print
 */
def printBuilds(builds, minResult, dateFormat) {
    def format = "%-10s %-8s %s";
    Result worstResult = Result.SUCCESS;
    def worstBuild = null;
    // header
    println(String.format(format, "status", "Time", "build"));
    // list
    for (entry in builds.descendingMap().entrySet()) {
        def build = entry.getValue();
        Result result = build.getResult();
        if (result.isWorseThan(worstResult)) {
            worstResult = result;
            worstBuild = build;
        }
        if (result.isWorseOrEqualTo(minResult)) {
            println(String.format(
                format, build.getResult(), dateFormat.format(build.getTime()), build.getFullDisplayName()
            ));
        }
    }
    return worstBuild;
}

def builds = getRecentBuilds(cutOffTime);

println ("\n\n----------------------\n Failed builds:\n");
def worstBuild = printBuilds(builds, Result.FAILURE, dateFormat);

println ("\n\n----------------------\n Worst build:\n");
if (worstBuild != null) {
    printBuildInfo(worstBuild);
}

println ("\n\n----------------------\n All builds:\n");
printBuilds(builds, Result.SUCCESS, dateFormat);
I figured that if the devtools can list all created IndexedDB databases, then there should be an API to retrieve them...?
Does anyone know how I can get a list of the names with the help of the Firefox SDK?
I did dig into the code and looked at the source. Unfortunately there wasn't any convenient API that would pull out all the databases from one host.
The way they did it was to look around in the user profile folder for .sqlite files and run a SQL query against each one (multiple times, in case there is an ongoing transaction) asking for the database name.
It came down to this piece of code:
// stripped-down version of: https://dxr.mozilla.org/mozilla-central/source/devtools/server/actors/storage.js
/* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
"use strict";
const {async} = require("resource://gre/modules/devtools/async-utils");
const { setTimeout } = require("sdk/timers");
const promise = require("sdk/core/promise");
// A RegExp for characters that cannot appear in a file/directory name. This is
// used to sanitize the host name for indexed db to lookup whether the file is
// present in <profileDir>/storage/default/ location
const illegalFileNameCharacters = [
  "[",
  // Control characters \001 to \036
  "\\x00-\\x24",
  // Special characters
  "/:*?\\\"<>|\\\\",
  "]"
].join("");
const ILLEGAL_CHAR_REGEX = new RegExp(illegalFileNameCharacters, "g");
var OS = require("resource://gre/modules/osfile.jsm").OS;
var Sqlite = require("resource://gre/modules/Sqlite.jsm");
/**
 * An async method equivalent to setTimeout but using Promises
 *
 * @param {number} time
 *        The wait time in milliseconds.
 */
function sleep(time) {
  let deferred = promise.defer();
  setTimeout(() => {
    deferred.resolve(null);
  }, time);
  return deferred.promise;
}
var indexedDBHelpers = {
  /**
   * Fetches all the databases and their metadata for the given `host`.
   */
  getDBNamesForHost: async(function*(host) {
    let sanitizedHost = indexedDBHelpers.getSanitizedHost(host);
    let directory = OS.Path.join(OS.Constants.Path.profileDir, "storage",
                                 "default", sanitizedHost, "idb");
    let exists = yield OS.File.exists(directory);
    if (!exists && host.startsWith("about:")) {
      // try for moz-safe-about directory
      sanitizedHost = indexedDBHelpers.getSanitizedHost("moz-safe-" + host);
      directory = OS.Path.join(OS.Constants.Path.profileDir, "storage",
                               "permanent", sanitizedHost, "idb");
      exists = yield OS.File.exists(directory);
    }
    if (!exists) {
      return [];
    }

    let names = [];
    let dirIterator = new OS.File.DirectoryIterator(directory);
    try {
      yield dirIterator.forEach(file => {
        // Skip directories.
        if (file.isDir) {
          return null;
        }
        // Skip any non-sqlite files.
        if (!file.name.endsWith(".sqlite")) {
          return null;
        }
        return indexedDBHelpers.getNameFromDatabaseFile(file.path).then(name => {
          if (name) {
            names.push(name);
          }
          return null;
        });
      });
    } finally {
      dirIterator.close();
    }
    return names;
  }),

  /**
   * Removes any illegal characters from the host name to make it a valid file
   * name.
   */
  getSanitizedHost: function(host) {
    return host.replace(ILLEGAL_CHAR_REGEX, "+");
  },

  /**
   * Retrieves the proper indexed db database name from the provided .sqlite
   * file location.
   */
  getNameFromDatabaseFile: async(function*(path) {
    let connection = null;
    let retryCount = 0;

    // Content pages might have an open transaction for the same indexed db
    // which this sqlite file belongs to. In that case, sqlite.openConnection
    // will throw. Thus we retry for some time to see if the lock is removed.
    while (!connection && retryCount++ < 25) {
      try {
        connection = yield Sqlite.openConnection({ path: path });
      } catch (ex) {
        // Continuously retrying is overkill. Wait for 100ms before the next try.
        yield sleep(100);
      }
    }
    if (!connection) {
      return null;
    }

    let rows = yield connection.execute("SELECT name FROM database");
    if (rows.length != 1) {
      return null;
    }
    let name = rows[0].getResultByName("name");
    yield connection.close();
    return name;
  })
};
module.exports = indexedDBHelpers.getDBNamesForHost;
If anyone wants to use this, here is how:
var getDBNamesForHost = require("./getDBNamesForHost");

getDBNamesForHost("http://example.com").then(names => {
  console.log(names);
});
I think it would be cool if someone were to build an add-on that adds indexedDB.mozGetDatabaseNames to work the same way as indexedDB.webkitGetDatabaseNames. I'm not doing that... will leave it up to you if you want. Would be a great dev tool to have ;)
I'm new to Grails and I'm trying to configure Log4j so it logs the exact file and line where the log call occurred. No pattern works as the conversionPattern! It seems Grails wraps the logger in a way that prevents Log4j from seeing the real source of the call.
I'm aware of this thread, but I'm not sure how to create a custom appender. I just can't believe nobody has already developed something to fix this issue!
I'm open to any suggestions:
Does something other than Log4j work in Grails to get the actual file+line (Logback?)?
Anyone with an existing "custom appender" he's willing to share?
Thanks in advance!
I actually did it myself. I guess I should make a proper Grails plugin out of it, but I'm still not comfortable enough with Grails to be sure the code will always work. I tested it by logging from a Controller and from a Service, using Grails 2.2.4, and it seems to work well.
It works by checking the stacktrace to find the actual file and line where the call occurred, and then adding this information to the MDC thread context. Values added to MDC can be used by the (other) appenders using the %X{fileAndLine} token.
Here's the code and the javadoc (read it!):
package logFileLineInjectorGrailsPlugin
import org.apache.log4j.Appender;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.Logger;
import java.lang.StackTraceElement;
import org.apache.log4j.MDC;
/**
 * Allows the log appenders to have access to the FILE and LINE where the log call actually occurred.
 *
 * (1) Add this pseudo appender to your other appenders, in Config.groovy. Then you can use
 * "%X{fileAndLine}" in the other appenders to output the file and line where the log call actually occurred.
 *
 * ------------
 * log4j = {
 *     appenders {
 *         appender name:'fileAndLineInjector', new logFileLineInjectorGrailsPlugin.FileAndLineInjector()
 *         // example of a console appender using the "%X{fileAndLine}" token :
 *         console name:'stdout', layout:pattern(conversionPattern: '[%d{yyyy-MM-dd HH:mm:ss}] %-5p ~ %m ~ %c ~ %X{fileAndLine}%n')
 *     }
 *     (...)
 * ------------
 *
 * (2) Then add it as the *first* appender reference in the declarations of the loggers in which you want to use the "%X{fileAndLine}" token.
 *
 * For example :
 *
 * ------------
 * root {
 *     error 'fileAndLineInjector', 'stdout'
 * }
 * ------------
 *
 * With this setup in place, a call to log.error("test!") will result in something like :
 *
 * [2013-08-12 19:16:15] ERROR ~ test! ~ grails.app.services.testProject.TestService ~ (TestService.groovy:8)
 *
 * In Eclipse/STS/GGTS (I didn't try other IDEs), when "%X{fileAndLine}" is output to the internal console, the text is clickable
 * and leads to the actual file/line.
 */
class FileAndLineInjector extends AppenderSkeleton {

    @Override
    public void close() {
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }

    @Override
    protected void append(LoggingEvent event) {
        StackTraceElement[] stackTraceElements = Thread.currentThread().getStackTrace();
        StackTraceElement targetStackTraceElement = null;
        for(int i = 0; i < stackTraceElements.length; i++) {
            StackTraceElement stackTraceElement = stackTraceElements[i];
            if(stackTraceElement != null &&
               stackTraceElement.declaringClass != null &&
               stackTraceElement.declaringClass.startsWith("org.apache.commons.logging.Log\$") &&
               i < (stackTraceElements.length - 1)) {
                targetStackTraceElement = stackTraceElements[++i];
                while(targetStackTraceElement.declaringClass != null &&
                      targetStackTraceElement.declaringClass.startsWith("org.codehaus.groovy.runtime.callsite.") &&
                      i < (stackTraceElements.length - 1)) {
                    targetStackTraceElement = stackTraceElements[++i];
                }
                break;
            }
        }
        if(targetStackTraceElement != null) {
            MDC.put("fileAndLine", "(" + targetStackTraceElement.getFileName() + ":" + targetStackTraceElement.getLineNumber() + ")");
        } else {
            MDC.remove("fileAndLine");
        }
    }
}
Let me know if something is not clear or if you find a way to improve it!
I'm trying to implement nicEdit with the nicUpload plugin, but when I select a file to upload it says "Failed to upload image", and the server response says "Invalid Upload ID".
This is the code that calls the script and initializes it:
<script src="http://js.nicedit.com/nicEdit-latest.js" type="text/javascript"></script>
<script type="text/javascript">//<![CDATA[
bkLib.onDomLoaded(function() {
    new nicEditor({uploadURI : '../../nicedit/nicUpload.php'}).panelInstance('area1');
});
//]]>
</script>
The path to nicUpload.php is correct, and the code is the one that can be found in the documentation: http://nicedit.com/src/nicUpload/nicUpload.js
I made the upload folder changes and set write permissions. According to the documentation (http://wiki.nicedit.com/w/page/515/Configuration%20Options), that's all, but I keep getting errors. Any ideas?
After looking for a solution for a long time (lots of posts without a real solution), I fixed the code myself. I'm now able to upload an image to my own server. Thanks to Firebug and Eclipse ;-)
The main problem is that nicUpload.php is old and does not work with the current nicEdit upload function.
Error handling is missing; feel free to add it...
Add the nicEditor to your PHP file and configure it to use nicUpload.php:
new nicEditor({iconsPath : 'pics/nicEditorIcons.gif', uploadURI : 'script/nicUpload.php'}
Download the uncompressed nicEdit.js and change the following lines in it:
uploadFile : function() {
    var file = this.fileInput.files[0];
    if (!file || !file.type.match(/image.*/)) {
        this.onError("Only image files can be uploaded");
        return;
    }
    this.fileInput.setStyle({ display: 'none' });
    this.setProgress(0);

    var fd = new FormData();
    fd.append("image", file);
    fd.append("key", "b7ea18a4ecbda8e92203fa4968d10660");

    var xhr = new XMLHttpRequest();
    xhr.open("POST", this.ne.options.uploadURI || this.nicURI);
    xhr.onload = function() {
        try {
            var res = JSON.parse(xhr.responseText);
        } catch(e) {
            return this.onError();
        }
        //this.onUploaded(res.upload); // CHANGE HERE
        this.onUploaded(res);
    }.closure(this);
    xhr.onerror = this.onError.closure(this);
    xhr.upload.onprogress = function(e) {
        this.setProgress(e.loaded / e.total);
    }.closure(this);
    xhr.send(fd);
},

onUploaded : function(options) {
    this.removePane();
    //var src = options.links.original; // CHANGE HERE
    var src = options['url'];
    if(!this.im) {
        this.ne.selectedInstance.restoreRng();
        //var tmp = 'javascript:nicImTemp();';
        this.ne.nicCommand("insertImage", src);
        this.im = this.findElm('IMG','src', src);
    }
    var w = parseInt(this.ne.selectedInstance.elm.getStyle('width'));
    if(this.im) {
        this.im.setAttributes({
            src : src,
            // CHANGE HERE: the PHP below returns a flat "width", not "image.width"
            width : (w && options.width) ? Math.min(w, options.width) : ''
        });
    }
}
Change nicUpload.php like this:
<?php
/* NicEdit - Micro Inline WYSIWYG
 * Copyright 2007-2009 Brian Kirchoff
 *
 * NicEdit is distributed under the terms of the MIT license
 * For more information visit http://nicedit.com/
 * Do not remove this copyright message
 *
 * nicUpload Receiver Script PHP Edition
 * @description: Save images uploaded from a user's computer to a directory, and
 * return the URL of the image to the client for use in nicEdit
 * @author: Brian Kirchoff <briankircho@gmail.com>
 * @sponsored by: DotConcepts (http://www.dotconcepts.net)
 * @version: 0.9.0
 */
/*
 * @author: Christoph Pahre
 * @version: 0.1
 * @description: different modifications, so that this php file works with the newest nicEdit.js (which also needs modification - @see)
 * @see http://stackoverflow.com/questions/11677128/nicupload-says-invalid-upload-id-cant-make-it-works
 */

define('NICUPLOAD_PATH', '../images/uploadedImages'); // Set the path (relative or absolute) to
                                                      // the directory to save image files

define('NICUPLOAD_URI', '../images/uploadedImages');  // Set the URL (relative or absolute) to
                                                      // the directory defined above

$nicupload_allowed_extensions = array('jpg','jpeg','png','gif','bmp');

if(!function_exists('json_encode')) {
    die('{"error" : "Image upload host does not have the required dependencies (json_encode/decode)"}');
}

if($_SERVER['REQUEST_METHOD']=='POST') { // Upload is complete
    $file = $_FILES['image'];
    $image = $file['tmp_name'];
    $id = $file['name'];

    $max_upload_size = ini_max_upload_size();
    if(!$file) {
        nicupload_error('Must be less than '.bytes_to_readable($max_upload_size));
    }

    $ext = strtolower(substr(strrchr($file['name'], '.'), 1));
    @$size = getimagesize($image);
    if(!$size || !in_array($ext, $nicupload_allowed_extensions)) {
        nicupload_error('Invalid image file, must be a valid image less than '.bytes_to_readable($max_upload_size));
    }

    $filename = $id;
    $path = NICUPLOAD_PATH.'/'.$filename;

    if(!move_uploaded_file($image, $path)) {
        nicupload_error('Server error, failed to move file');
    }

    $status = array();
    $status['done'] = 1;
    $status['width'] = $size[0];
    $rp = realpath($path);
    $status['url'] = NICUPLOAD_URI ."/".$id;
    nicupload_output($status, false);
    exit;
}

// UTILITY FUNCTIONS

function nicupload_error($msg) {
    echo nicupload_output(array('error' => $msg));
}

function nicupload_output($status, $showLoadingMsg = false) {
    $script = json_encode($status);
    $script = str_replace("\\/", '/', $script);
    echo $script;
    exit;
}

function ini_max_upload_size() {
    $post_size = ini_get('post_max_size');
    $upload_size = ini_get('upload_max_filesize');
    if(!$post_size) $post_size = '8M';
    if(!$upload_size) $upload_size = '2M';

    return min( ini_bytes_from_string($post_size), ini_bytes_from_string($upload_size) );
}

function ini_bytes_from_string($val) {
    $val = trim($val);
    $last = strtolower($val[strlen($val)-1]);
    switch($last) {
        // The 'G' modifier is available since PHP 5.1.0
        case 'g':
            $val *= 1024;
        case 'm':
            $val *= 1024;
        case 'k':
            $val *= 1024;
    }
    return $val;
}

function bytes_to_readable( $bytes ) {
    if ($bytes<=0)
        return '0 Byte';

    $convention=1000; //[1000->10^x|1024->2^x]
    $s=array('B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB');
    $e=floor(log($bytes,$convention));

    return round($bytes/pow($convention,$e),2).' '.$s[$e];
}
?>
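With these changes in place, a successful upload returns a flat JSON object that the patched nicEdit.js reads directly; for illustration (values made up), something like:
{"done": 1, "width": 800, "url": "../images/uploadedImages/example.jpg"}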
You can manually pass an id to your script, e.g. nicUpload.php?id=introPicHeader, and it will become introPicHeader.jpg (or the appropriate extension) in the images folder you defined.
However, I have noticed that this script is broken and cannot access the configuration option uploadURI if it is specified directly in nicEdit.js inside nicEditorAdvancedButton.extend({. This causes access to a relatively pathed "Unknown" resource, causing an error.
The documentation implies otherwise, and the fact that the nicURI was specified there for imgur.com (maybe as a default) gave me the impression I could also add a uploadURI reference to the nicUpload.php script in a single place rather than on every editor instantiation.
Update
This works if you pass it during instantiation, which I guess does allow for easy dynamic id population.
Unfortunately, nicUpload.php is riddled with errors and its output is not JSON. The editor expects to parse JSON but finds a script tag, and errors with unexpected token "<".
There are a raft of other errors which I will attempt to identify:
In nicEdit.js
A.append("image") should be infact A.append("nicImage")
this.onUploaded(D.upload) should become this.onUploaded(D)
this.onUploaded(D) should be moved to within the try block after var D=JSON.parse(C.responseText) to fix variable scope issues
B.image.width needs to become B.width
In nicUpload.php
JSON output is not formed correctly, comment out html output and output just json_encode($status).
JSON output needs to return a key/value pair named links rather than url although renaming the var D=B.links to var D=B.url in nicEdit.js would also suffice as a fix.
Both the PHP and JavaScript code leave a lot to be desired; I get errors regularly and have been fixing them myself.