
What Colour are Rose Hips?

I have been using a Nikon D300 for six months now, and one of the characteristics I have had to get to grips with is its default colour handling: out of the camera the colours can be a touch “zingy” to my eyes. In an effort to replicate the classic Fuji Velvia look, the colour processing gives very rich yellows, but this has the side effect of making reds somewhat orangey.

Here is a picture of some rose hips:

Rose Hips #1

In real life, to my eye, it seemed that the central rose hip had no orangeyness to its red; the upper rose hip had just a touch of orange. But, out of the camera, the colour rendition here differs from what I saw: the tints are much more orange.

One solution to this lies in the dark art of RAW conversion. Many photographers roll their eyes at this business – it can require a lot of time farting around with post processing software rather than taking more photos. However, I quite like farting around with software, so am quite happy to experiment.

For RAW conversion I use the fabulous DxO Optics Pro package. This offers a host of options for converting the RAW image into a JPEG, and it has to be said that some of these (such as chromatic aberration fixing) are now finding their way into camera bodies. However, this software still offers quite a bit more flexibility and, in particular, will fix lens distortion for certain camera/lens combinations which have been analysed.

Another useful feature is the ability to control colour rendering. Want to give your Nikon D300 pictures the look of a Canon 40D? No problem – just specify it.

For Nikon users, a DxO user by the name of Andy_F has developed some RAW conversion presets specifically targeted at correcting the colour conversion of recent Nikon bodies. Using one of these to process the original RAW file gives this result:

Rose Hips #2

Which is much closer to what I think I saw.

Even better though is Andy_F’s “landscape” preset, which attempts some detail recovery from the image:

Rose Hips #3

SC 34 meetings, Copenhagen

This week I attended 4 days of meetings of SC 34 working groups: WG 4 (OOXML maintenance) and WG 5 (OOXML/ODF interoperability). Last year I predicted that OOXML would get boring and, on the whole, the content of these meetings fulfilled that prophecy (while noting, of course, that for us markup geeks and standards wonks “boring” is actually rather exciting). There was, however, one hot issue, which I’ll come to later …

Defects

Since the publication of OOXML in November last year, the work of WG 4 has been almost exclusively focussed on defect correction. To date over 200 defects have been submitted (the UK leading the way, having submitted 38% of them). Anybody interested in what kinds of things are being fixed can consult the material on the WG 4 web site for a quick overview. During the Copenhagen meeting WG 4 closed 53 issues, meaning that 71% of all submitted defects have now been resolved. By JTC 1 standards that is impressively rapid. The defects will now go forward to be approved by JTC 1 National Bodies before they can become official Amendments and Corrigenda to the base Standard. Among the many more minor fixes, a couple of agreed changes are noteworthy:

  • In response to a defect report from Switzerland, the XML Namespace has been changed for the Strict version (only) of IS 29500, so that consumers can know unambiguously whether they are consuming a Strict or a Transitional document, without any risk of silent data loss. This is (editorially) a lot of work, but the results will, I think, be worthwhile.
  • As I wrote following the Prague meeting, there was a move afoot to re-instate the values “on” and “off” as permissible Boolean values (alongside “true”, “1”, “false” and “0”) so that Transitional documents would accurately reflect the existing corpus of Office documents, in accord with the stated scope of the standard. This change has now been agreed by the WG (a brief illustration follows below).
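To make that concrete, here is a minimal illustration (my own, not taken from the Standard) of the legacy Boolean form this change re-admits for Transitional documents. In WordprocessingML, a run made bold by an older Ecma-376 producer may carry val="on" rather than val="true" or val="1":

<w:r xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:rPr>
    <!-- legacy Boolean form: "on"/"off" alongside "true"/"false"/"1"/"0" -->
    <w:b w:val="on"/>
  </w:rPr>
  <w:t>Some bold text</w:t>
</w:r>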

The ISO Date Issue

The “hot issue” I referred to earlier is ISO dates. What better way to illustrate the problem we face than by using one of Denmark’s most famous inventions, the LEGO® brick …


f*cked-up lego brick
OOXML Transitional imagined as a LEGO® brick

More precisely, the problem is about date representation in spreadsheet cells. One of the innovations of the BRM was to introduce ISO 8601 date representation into OOXML. However, the details of how this has been done are problematic:

  1. Despite the text of the original resolution declaring that ISO 8601 dates should live alongside serial dates (for compatibility with older documents), one possible reading of the text today is that all spreadsheet cell dates have to be in ISO 8601 format.
  2. Having any such dates in ISO 8601 format is particularly problematic for Transitional OOXML, which is meant to represent the existing corpus of office documents. Since none of these existing documents contain ISO 8601 dates, having them here makes no sense.
  3. Even more seriously, if people start creating “Transitional” OOXML documents which contain 8601 dates, then a huge installed base of software expecting Ecma-376 documents will silently corrupt that data and/or produce surprising results (see the sketch after this list). My concern here is more for things like big ERP and financial systems than for desktop apps like MS Office. Hence the odd LEGO brick above: like those bricks, such files would embody an interoperability disaster.
  4. Even the idea of using ISO 8601 is pretty daft unless it is profiled (currently it is not). ISO 8601 is a big, complex standard with many options irrelevant to office documents: it would be far more sensible for OOXML to target a tiny subset of ISO 8601, such as that specified by W3C XML Schema Definition Language (XSD) 1.1 Part 2: Datatypes.
  5. Many date/time functions declared in SpreadsheetML appear to be predicated on date/time values being represented as serial values and not ISO 8601 dates. It is not clear that the Standard as written makes any sense when ISO 8601 dates are used.
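To see why this matters for interoperability, here is a rough sketch (mine, simplified, and not quoted from the Standard) of the same spreadsheet cell in the two schemes. The first uses the traditional serial representation, in which the cell value is just a number (here assumed to be 1 January 2009 in the 1900 date system) that a number format renders as a date; the second uses the “d” cell type introduced for ISO 8601 dates. Software expecting the first form will misread, or choke on, the second:

<!-- serial date: the value is a number, interpreted through the cell's number format -->
<c r="A1" s="1">
  <v>39814</v>
</c>

<!-- ISO 8601 date: cell type "d", the value is a date/time string -->
<c r="A1" s="1" t="d">
  <v>2009-01-01T00:00:00</v>
</c>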

The solution?

Opinions vary about how the ISO date problem might best be solved. My own strong preference would be to see the Standard clarified so that users were sure that Transitional documents were guaranteed to correspond to the existing document reality – in other words that Transitional documents only contain serial date representations, and not “ISO 8601” date representations. In my view the ISO dates should be for use in the Strict variant of OOXML only.

If a major implementation (Excel 2010, say) appears which starts pumping incompatible, ISO 8601-flavoured Transitional documents into circulation, then that would be an interop disaster. The standards process would be rightly criticised for producing a dangerous format; users would be at risk of corrupted data; and guilty vendors too would be blamed accordingly.

It is imperative that this problem is fixed, and fixed soon.

Impressions of Copenhagen

Our meetings took place at the height of midsummer, and every day was glorious (all meetings started with a sad closing of the curtains). Something of a pall was cast over proceedings by the thefts, in two separate incidents, of delegates’ laptop computers; but there is no doubt Copenhagen is a wonderful city blessed with excellent food, tasty beer, and an abundance of good-looking women. Dansk Standard provided most civilised meeting facilities, and entertainment chief Jesper Lund Stocholm worked hard to ensure everyone enjoyed Copenhagen to the full, especially the midsummer night witch-burning festivities! Next up is the SC 34 Plenary in Seattle in September; I’m sure there will be many more defect reports on OOXML to consider by then, and that WG 4's tireless convenor Murata-san will be keeping the pressure on to maintain the pace of fixes …


Jesper Among the Beers
Jesper Lund Stocholm

ODF Forensics

The other day I had a phone call from Michiel Leenaars of the NLnet Foundation, who is busy gearing up for this week's ODF plugfest in the Hague. Michiel had seen my blog post on ODF validation using pipelines, and was interested in whether I could come up with something quick and dirty for providing forensic information about pairs of ODF documents, so they could be assessed before and after they are used by a tool. This could help users check if anything has been incorrectly added or taken away during a round-trip. Here's what I came up with …

Reaching for XProc

Yes, once again I am going to use an XProc pipeline to do the processing. The basic plan of attack is:

  1. take two documents
  2. generate a “fingerprint” for each of them
  3. compare the fingerprints
  4. display the result in a meaningful, human-readable form

Fingerprinting XML

For a basic comparison between documents I chose simply to compare the elements used, and the number of them. This obviously leaves out quite a bit which might also be compared (attributes, text, etc.) – but it is a useful smoke test of whether major structures have been added or lost during a round trip.

So the overall pipeline will look like this:

<?xml version="1.0"?>
<pipeline name="get-opc-rels" xmlns="http://www.w3.org/ns/xproc"
          xmlns:xo="http://xmlopen.org/odf-fingerprint"
          xmlns:mf="urn:oasis:names:tc:opendocument:xmlns:manifest:1.0"
          xmlns:cx="http://xmlcalabash.com/ns/extensions">

  <import href="extensions.xpl"/>

  <!-- the URLs of the ODF documents to be processed -->
  <option name="package-a" required="true"/>
  <option name="package-b" required="true"/>

  <!-- get the first fingerprint ... -->
  <xo:make-fingerprint name="finger-a">
    <with-option name="package-url" select="$package-a"/>
  </xo:make-fingerprint>

  <!-- ... and the second ... -->
  <xo:make-fingerprint name="finger-b">
    <with-option name="package-url" select="$package-b"/>
  </xo:make-fingerprint>

  <!-- combine them into a single document for input into an XSLT -->
  <wrap-sequence wrapper="fingerprint-pair">
    <input port="source">
      <pipe step="finger-a" port="result"/>
      <pipe step="finger-b" port="result"/>
    </input>
  </wrap-sequence>

  <!-- style into an HTML report of differences -->
  <xslt name="transform-to-html">
    <input port="stylesheet">
      <document href="style-diffs.xsl"/>
    </input>
  </xslt>

</pipeline>

A number of things are of note:

  • The ODF packages are interrogated using the JAR URI mechanism I described here (an example of such a URL is shown after this list).
  • We’re using a custom step, <xo:make-fingerprint>, which takes as its input the URL of an ODF document (“package-url”), and which emits a “fingerprint” as an XML document. Obviously this step is not built into any XProc processor, so we’ll need to write it ourselves.
  • We’re using XProc’s wrap-sequence step to combine the two “fingerprint” documents into a single document.
  • We’ll be relying on an XSLT transform to turn this combined document into a nice report, which will be the end result of the pipeline.
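For illustration, given a (hypothetical) package URL of file:///home/alex/docs/example.ods, the pipeline would address the package manifest as:

jar:file:///home/alex/docs/example.ods!/META-INF/manifest.xml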

Writing the fingerprinting pipeline

To define our custom pipeline <xo:make-fingerprint> we simply author a new pipeline, and give it the type “xo:make-fingerprint”. This can then be invoked as a step. Here’s what this sub-pipeline looks like:

<pipeline type="xo:make-fingerprint">

  <!-- the URL of the ODF file to fingerprint -->
  <option name="package-url" required="true"/>

  <!-- load its manifest -->
  <load>
    <with-option name="href"
                 select="concat('jar:',$package-url,'!/META-INF/manifest.xml')"/>
  </load>

  <!-- visit each entry in the manifest which refs an XML resource -->
  <viewport name="handle"
            match="mf:file-entry[@mf:media-type='text/xml'
                   and not(starts-with(@mf:full-path,'META-INF'))]">

    <cx:message message="Loading item ..."/>

    <!-- load the entry -->
    <load name="load-item">
      <with-option name="href"
                   select="concat('jar:',$package-url,'!/',/*/@mf:full-path)"/>
    </load>

    <!-- accumulate everything in a <wrapper> document -->
    <wrap-sequence wrapper="wrapper">
      <input port="source">
        <pipe step="load-item" port="result"/>
      </input>
    </wrap-sequence>

  </viewport>

  <!-- transform the accumulated mass into a fingerprint -->
  <xslt name="transform-to-fingerprint">
    <input port="stylesheet">
      <document href="make-fingerprint.xsl"/>
    </input>
  </xslt>

  <!-- label it with the package URL, as an attribute on the root element -->
  <add-attribute match="/*" attribute-name="package-url">
    <with-option name="attribute-value" select="$package-url"/>
  </add-attribute>

</pipeline>

Things to notice here:

  • We iterate through the ODF manifest looking for XML documents
  • All of the XML in the entire package is retrieved and combined into a single mega-document wrapped in an element named <wrapper>
  • We’re relying on an XSLT transform, “make-fingerprint.xsl”, to do the heavy lifting and turn our mega-document into a meaningful (and smaller) “fingerprint” document
  • We add the URL of the ODF document to the fingerprint using XProc’s nifty add-attribute step

The Heavy Lifting: XSLT

The XSLT to boil a document down into a fingerprint can be seen here. What it produces is a summary of the elements used in each of the namespaces the document mentions. This extract gives a flavour of the kind of result it produces:

<namespace name="urn:oasis:names:tc:opendocument:xmlns:manifest:1.0">
  <element name="file-entry" count="1"/>
</namespace>
<namespace name="urn:oasis:names:tc:opendocument:xmlns:meta:1.0">
  <element name="generator" count="1"/>
</namespace>
<namespace name="urn:oasis:names:tc:opendocument:xmlns:office:1.0">
  <element name="automatic-styles" count="1"/>
  <element name="body" count="1"/>
  <element name="document-content" count="1"/>
  <element name="document-meta" count="1"/>
  <element name="document-styles" count="1"/>
  <element name="font-face-decls" count="2"/>
  <element name="meta" count="1"/>
  <element name="spreadsheet" count="1"/>
  <element name="styles" count="1"/>
</namespace>
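The real make-fingerprint.xsl is the one linked above; purely as an indication of the approach, a minimal XSLT 2.0 sketch along the following lines (my own, simplified) would produce output of that general shape, grouping every element in the combined document by namespace and local name:

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output indent="yes"/>
  <!-- sketch: count elements per namespace in the combined package document -->
  <xsl:template match="/">
    <fingerprint>
      <!-- ignore no-namespace elements (e.g. the <wrapper> elements added by the pipeline) -->
      <xsl:for-each-group select="//*[namespace-uri() ne '']" group-by="namespace-uri()">
        <xsl:sort select="current-grouping-key()"/>
        <namespace name="{current-grouping-key()}">
          <xsl:for-each-group select="current-group()" group-by="local-name()">
            <xsl:sort select="current-grouping-key()"/>
            <element name="{current-grouping-key()}" count="{count(current-group())}"/>
          </xsl:for-each-group>
        </namespace>
      </xsl:for-each-group>
    </fingerprint>
  </xsl:template>
</xsl:stylesheet>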

Returning now to our main pipeline, we can see it makes two calls to (or should that be “sucks on”) the sub-pipeline to generate two fingerprints. These are then wrapped with a wrap-sequence step, and we have all we need to generate the final report. Again, an XSLT transform is used to do a comparison operation and the result is emitted as an HTML document intended for human consumption. An example of what this looks like (comparing the OpenOffice and Google Docs versions of Maya’s wedding planner) can be found here.
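Again, the real comparison logic lives in the linked style-diffs.xsl; as a rough, simplified sketch (assuming the <fingerprint> root element and package-url attribute from the sketches above), a comparison stylesheet could tabulate the two sets of counts side by side like this:

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <!-- sketch: tabulate element counts from the two fingerprints side by side -->
  <xsl:template match="/fingerprint-pair">
    <xsl:variable name="a" select="*[1]"/>
    <xsl:variable name="b" select="*[2]"/>
    <html>
      <head><title>ODF fingerprint comparison</title></head>
      <body>
        <table border="1">
          <tr>
            <th>Namespace</th>
            <th>Element</th>
            <th><xsl:value-of select="$a/@package-url"/></th>
            <th><xsl:value-of select="$b/@package-url"/></th>
          </tr>
          <!-- group every element record from both fingerprints by namespace and local name -->
          <xsl:for-each-group select="($a | $b)/namespace/element"
                              group-by="concat(../@name, '|', @name)">
            <xsl:variable name="count-a"
                          select="$a/namespace[@name = current()/../@name]/element[@name = current()/@name]/@count"/>
            <xsl:variable name="count-b"
                          select="$b/namespace[@name = current()/../@name]/element[@name = current()/@name]/@count"/>
            <tr>
              <td><xsl:value-of select="../@name"/></td>
              <td><xsl:value-of select="@name"/></td>
              <td><xsl:value-of select="($count-a, 0)[1]"/></td>
              <td><xsl:value-of select="($count-b, 0)[1]"/></td>
            </tr>
          </xsl:for-each-group>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>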

Putting it to use

The results of this process need to be interpreted on a case-by-case basis. That two applications represent notionally the same document with different XML is not necessarily a fault (though I’d like to know why Maya’s Wedding Planner has 2,000 spreadsheet cells according to Google Docs, and only 51 according to OO.o).

The most useful application of this pipeline is to check for untoward data loss when a document is processed by an application – and I understand this is a particular concern of the Dutch government. With this in mind it is possible to take this pipeline further still, checking attribute differences and even textual differences. There comes a point when diffing XML, though, at which it is best to use a specialist tool such as the excellent DeltaXML (I have no association with this product, except knowing that it is well respected among the clients who use it). Many an unsuspecting programmer has come to grief under-estimating the complexities of comparing XML documents.

Online ODF Validation

Michiel also asked whether it would be possible to make the ODF validation pipeline I blogged about previously available as an online service. Coincidentally this was something I was working on anyway, though using Java rather than XML pipelines. The result is now available here. Enjoy …

No one supports ISO ODF today?

IBM employee and ODF TC co-chair Rob Weir’s latest blog entry seeks to rebut what he terms a “disinformation campaign being waged against ODF”. The writing is curiously disjointed, and while at first I thought Rob’s famously fluent pen had been constipated by his distaste at having to descend further into the ad hominem gutter, on closer inspection I think it is perhaps a tell of Rob’s discomfort about his own past statements.

In particular, Rob takes issue with a statement that he condemns as “Microsoft FUD […] laundered via intermediaries”:

There is no software that currently implements ODF as approved by the ISO

Now Rob Weir is a great blogger, a much-praised committee chair, and somebody who can, on occasion, fearlessly produce the blunt truth like a rabbit from a hat. For this reason, I know his blog entry, “Toy Soldiers” of July 2008 has enjoyed quite some exposure in standards meetings around the world, most particularly for its assertions about ODF. He wrote:

  1. No one supports ODF 1.0 today. All of the major vendors have moved on to ODF 1.1, and will be moving on to ODF 1.2 soon.
  2. No one supports OOXML 1.0 today, not even Microsoft.
  3. No one supports interoperability via translation, not Sun in their Plugin, not Novell in their OOXML support, and not Microsoft in their announced ODF support in Office 2007 SP2.

While the anti-MS line here represents the kind of robust corporate knockabout stuff we might expect, it is Rob’s statement that “no one supports ODF 1.0 today. All of the major vendors have moved on […]” which has particularly resonated with users. A pronouncement on adoption from a committee chair about his own committee’s standard is significant. And naturally, it has deeply concerned some of the National Bodies who have adopted ODF 1.0 (which is ISO/IEC 26300) as a National standard, and who now find they have adopted something which, apparently, “no one supports”.

So, far from being “Microsoft FUD”, the idea that “No one supports ODF 1.0” is in fact Rob Weir’s own statement. And it was taken up and repeated by Andy Updegrove, Groklaw and Boycott Novell, those well-known vehicles of Microsoft’s corporate will.

Today however, this appears to have become an inconvenient truth. The rabbit that was pulled out of the hat in the interest of last summer’s spin, now needs to be put into the boiler. Consequently we find Rob’s blog entry of July 2008 has been silently amended so that it now states:

  1. Few applications today support exclusively ODF 1.0 and only ODF 1.0. Most of the major vendors also support ODF 1.1, one (OpenOffice 3.x), now supports draft ODF 1.2 as well.
  2. No one supports OOXML 1.0 today, not even Microsoft.
  3. No one supports interoperability via translation, not Sun in their Plugin, not Novell in their OOXML support, and not Microsoft in their announced ODF support in Office 2007 SP2.

The pertinent change is to item 1 on this list, which now has a weasel-worded (and tellingly tautological) assertion that might make the unsuspecting reader think that ODF 1.0 was somehow supported by the major vendors. Well, is it? Who is right, the Rob Weir of 2008 or the Rob Weir of 2009? Maybe I’ve missed something, but personally I’m unaware of an upsurge in ODF 1.0 support during the last 11 months. My money is on the former Rob being right here.

Okay, I use OpenOffice.org 1.1.5 (despite its Secunia level 4 advisory) out of a kind of loyalty to ISO/IEC 26300 (ODF 1.0), but I’m often teased about being the only person on the planet who must be doing this, and onlookers wonder what the .swx (etc) files I produce really are.

Blog Etiquette

As a general rule, when making substantive retrospective changes to blog entries, especially controversial blog entries, it is honest dealing to draw attention to this by striking through removed text and prominently labelling the new text as “updated”. Failing to do this can lead to the suspicion that an attempt to re-write history is underway …