I (have to) use PowerPoint 2007 under Windows. I want to create a hyperlink with several GET parameters, so it is quite long. It seems that PowerPoint truncates the hyperlink in the "add hyperlink" form to 255 characters. This surprises me, since Microsoft says the limit is 900 or 1,032 characters at
https://social.msdn.microsoft.com/Forums/office/en-US/4e668216-3341-4784-a8d4-2c7023334ac2/hyperlink-limit-in-powerpoint-2007?forum=officegeneral.
Do you have an idea how to use a hyperlink with at least 300-400 characters?
I cannot shorten the hyperlink because of the information that has to be passed to the linked website.
This sounds like a limitation of the text field where you actually input the URL.
You could install OpenOffice, which may allow you to edit the URL at full length. From there, you can save as PPT. That would confirm whether it is a PPT limitation or an input-box limitation (which I suspect).
Microsoft may have read the feedback: at least in recent PowerPoint ("1803" from Office 365), hyperlinks can have a length of 500 characters (and maybe more).
I could not find any documentation describing the conventions of the text/html clipboard data that results from copying part of a Word document.
Specifically, I want to know which classes like MsoNormal, TableGrid313, MsoTableGrid, MsoHeading9, and MsoListParagraph exist. Or does the styling information of a text always lie in the style attribute of a span element containing the text?
The Word round-trip HTML is undocumented, as it's not an official Word file format.
It was created many years ago to enable round-tripping Word documents for viewing (and some editing) in a browser. Even then, it was not documented, as it was intended for internal Microsoft software. Being HTML, anyone could read and produce it, but MS made a conscious decision not to document it (and so avoid putting resources into maintaining that documentation).
We have a requirement to add the ability to edit PDF documents within a Delphi application.
I.e. given a PDF document, open it and generate a form with edit boxes on it which the user can use to update the PDF document.
Can anyone suggest a third-party component that would provide this functionality, or some other way of achieving this?
Thanks
I use QuickPDF. Well documented, lots of examples, good support. However, updating text in a PDF is an art, not a science, and unless you have full control over the producer of the PDF you may find it hard to do in the general case. For example, I have seen PDFs where the text is formed from individual characters, each inserted at a specific location, and so hard to edit as words; and of course in some PDFs the 'text' is actually an image of text, requiring OCR before you can edit it.
You can try Gnostice PDFtoolkit.
DISCLAIMER: I work for Gnostice.
Take a look at Amyuni PDF Creator ActiveX. It is supported in 32-bit and 64-bit applications, so you may find it useful now that Delphi has a 64-bit compiler.
Usual disclaimer applies
I am using MS Text Services to implement windowless rich-text editing and setting CFE_LINK to create hyperlinks. This all works, but when I save the text to my internal buffer for writing to a file, the CFE_LINK effect isn't saved.
I have (tried to) ensure that AutoDetectURL is OFF.
I am using EM_STREAMOUT to save from the editor to the buffer, as UTF-8, since RichEdit doesn't seem to stream out Unicode.
I've looked at the saved RTF and at the MS RTF specs, and I can't see what control word would be used, so now I am worried that the effect isn't actually saved.
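For what it's worth, when a hyperlink does get persisted in RTF (e.g. by Word), it is stored not as a character-effect control word but as a HYPERLINK field, along these lines (URL and display text are placeholder values):

```
{\rtf1{\field{\*\fldinst HYPERLINK "http://example.com/"}{\fldrslt Example link}}}
```

As far as I can tell, the RTF spec has no control word for the bare CFE_LINK effect itself; persisted links are represented as fields like the above.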
From my experience, the answer seems to be No. The richedit control creates the hyperlink formatting on the fly, but does not store it.
Six years ago or so I spent a lot of time trying to achieve what you are probably trying to do, using riched20.dll (at the time, version 3 of the richedit control). I wanted to have "proper" hyperlinks in a notebook application: the url would be marked as hidden text, while the description text would be underlined and clickable. The best I could do was to achieve this at runtime, marking arbitrary stretches of text with CFE_LINK. After saving and reloading the rtf stream, the changes would be gone. No amount of asking around did any good, either, though that was well before StackOverflow :)
My solution would be to replace richedit with a third-party control, such as TRichView, which can do what you want.
I have a document A in encoding A displayed in tool A and a document B in encoding B displayed in tool B. If I cut and paste (part of) B into A what might be the resultant character encoding? I realise this depends on tool A and tool B and the information held in the paste buffer (which presumably can contain an encoding?) and the operating system.
What should high-quality tools do? And in practice, how many of the common tools (e.g. Word, TextPad, various IDEs, etc.) do a good job?
First of all, a text editor's internal representation of text has no bearing on how the text is encoded (serialized) when you save the file. So a document is not "in" an encoding; it's a sequence of abstract characters. When the document is saved to a file (or transmitted over the network) then it gets encoded.
It's up to each application to decide what it puts on the clipboard. Typically, a Windows app that knows what it's doing will put a number of different representations on the clipboard. When you paste in the other app, that app will look for the representation that best suits its needs.
In your case, a text editor (that knows what it's doing) will put a Unicode representation of a selected string onto the clipboard (where Unicode, in Windows, is typically moved around as UTF-16, but that's not important). When you paste in the other app, it will insert that sequence of Unicode characters into the document at the selection point.
There's an app floating around called "ClipSpy" that will help you see what I'm talking about, interactively.
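The distinction above, abstract characters versus the bytes they serialize to, can be sketched in Python (a minimal illustration; the clipboard itself is not involved):

```python
# The same sequence of abstract characters can be serialized
# to different bytes depending on the encoding chosen.
text = "ma\u00f1ana"  # 'mañana', with a precomposed n-tilde

utf8_bytes = text.encode("utf-8")
utf16_bytes = text.encode("utf-16-le")  # the layout Windows uses internally

print(utf8_bytes)        # b'ma\xc3\xb1ana' (7 bytes)
print(len(utf16_bytes))  # 12 (2 bytes per character)

# Decoding with the matching encoding recovers the identical characters:
assert utf8_bytes.decode("utf-8") == utf16_bytes.decode("utf-16-le") == text
```

The document is the character sequence; the byte sequence only comes into existence at serialization time, and either byte form decodes back to the same characters.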
I observed the following behavior when I looked into Unicode normalization: when copying a canonically decomposed string (NFD) in Firefox on macOS 10.15.7, the string is normalized to NFC when pasting it in Chrome. What's weird is that the pasting affects the content of the clipboard: when pasting the string in Firefox again, it's then also canonically composed there. If I don't paste it anywhere else before pasting it in Firefox again, the NFD form survives.
Interestingly, the problem doesn't occur in the other direction: when copying a canonically decomposed string in Chrome, it's pasted in NFD form everywhere, as far as I can tell. My conclusion is that Firefox stores text to the clipboard differently from other applications.
One way to play around with this yourself is to copy 'mañana' === 'mañana' to your JavaScript console. The statement returns false if the NFD form of the string on the right survived the copy & paste.
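The NFC/NFD difference behind that console trick can be reproduced without any clipboard involved, e.g. in Python with the standard unicodedata module (a small sketch):

```python
import unicodedata

nfc = "ma\u00f1ana"   # 'mañana' with a precomposed n-tilde (NFC)
nfd = "man\u0303ana"  # 'mañana' as 'n' + combining tilde (NFD)

print(nfc == nfd)                                # False: different code points
print(unicodedata.normalize("NFC", nfd) == nfc)  # True
print(unicodedata.normalize("NFD", nfc) == nfd)  # True

# Python 3.8+ can check a string's normalization form directly:
print(unicodedata.is_normalized("NFC", nfc))     # True
print(unicodedata.is_normalized("NFC", nfd))     # False
```

The two strings render identically but compare unequal, which is exactly why a normalization step hidden in a copy/paste path is so hard to spot.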
This is a very good question. When you copy/paste, exactly what is copied/pasted: CHARACTERS or BYTES? And if BYTES, what encoding are they in?
From the answers, it sounds like the answer is "it depends". Different programs will put different things in the clipboard, sometimes placing multiple representations.
Then the pasting program needs to pick the best one and "do the right thing" with it.
Following my conversation with @Kaspar Etter, I did some testing. Here is what I found:
Copy from and Paste to:
Firefox:
Firefox to Firefox: NO normalization
Other apps to Firefox: NO normalization
Firefox to other apps: normalization
Even if we use AppleScript, JXA, or Python to directly read the system clipboard that contains the text copied from Firefox, the text is still normalized. Since copying and pasting from Firefox to Firefox does not involve normalization, Firefox probably does not normalize the text during the copy process. I have no idea when the normalization happens.
Safari (macOS, not iOS):
Safari to Safari: normalization
Other apps to Safari: normalization
Safari to other apps: NO normalization
For Safari (macOS), the normalization also happens at least on Canvas by instructure.com. In the fill-in-the-blank questions of Classic Quizzes, when students typed Hebrew words and hit "submit", the input was normalized but the answer key was not. In New Quizzes, however, both the input and the answer key are normalized. It's a mystery to me.
Chrome:
Chrome to Chrome: NO normalization
Other apps to Chrome: NO normalization (Firefox overrides)
Chrome to other apps: NO normalization (Safari overrides)
Conclusion: Firefox and Safari behave in the opposite way. Chrome behaves normally and consistently (except when it is overridden by Firefox and Safari).
I'm looking for an internal representation format for text which would support basic formatting (font face, size, weight, indentation, basic tables), also supporting the following features:
Bidirectional input (Hebrew, Arabic, etc.)
Multi-language input (i.e. UTF-8) in same text field
Anchored footnotes (i.e. a superscript number that's a link to that numbered footnote)
I guess TEI or DocBook are rich enough, but here's the snag -- I want these text buffers to be Web-editable, so I need either an edit control that eats TEI or DocBook, or reliable and two-way conversion between one of them and whatever the edit control can eat.
UPDATE: The edit control I'm thinking of is something like TinyMCE, but AFAICT, TinyMCE lacks footnotes, and I'm not sure about its scalability (how about editing 1 or 2 megabytes of text?)
Any pointers much appreciated!
FCKeditor has a great API and supports several programming languages (considering it is JavaScript, this isn't hard to achieve); it can be loaded through HTML or instantiated in code. Most of all, it allows easy access to the underlying form field, so having a jQuery or Prototype Ajax buffer shouldn't be terribly difficult to achieve.
The load time is very quick compared to previous versions. I'd give it a whirl.
In my experience a two-way conversion between HTML and XML formats like TEI or DocBook is very hard to make 100% reliable.
You could use Xopus (demo) to have your users directly edit TEI or DocBook XML. Xopus is a commercial browser based XML editor designed specifically for non-technical users. It supports bidi and UTF-8. The WYSIWYG view is rendered using XSLT, so that gives you sufficient control to render footnotes the way you describe.
As TEI and DocBook don't have means to store styling information, those formats will not allow your users to change font face, size and weight. But I think that is a good thing: users should insert headers and emphasis, designers should pick font face and size.
Xopus has a powerful table editor and indentation is handled by nesting sections or lists and XSLT reacting to that.
Unfortunately Xopus 3 will only scale to about 200KB of XML, but we're working on that.
I can't really decide on one of them. IMHO, none of them is very good or complete. They all have their advantages and clear disadvantages. If TinyMCE is your favorite then, AFAIK, it also does tables.
This list will probably come in handy: WysiwygEditorComparision.
I've also used FCKEditor and it performed well and was easy to integrate into my project. It's worth checking out.
Small correction to laurens' answer above: as of now (May 2012), Xopus supports UTF-8, but not BiDi editing. Right-to-left text is displayed fine if it came from another source, but cannot be edited correctly.
Source: I was recently asked to evaluate this, so have been testing it.