Going under the knife

by Alex Brown 9. December 2011 06:00
Alex3 by FreddieBrown

And so, unexpectedly swiftly, I find I am to present myself at Addenbrooke's at 07:00 this Saturday to be admitted for an open partial nephrectomy (following the recent diagnosis of suspected kidney cancer).

Laparoscopic vs open surgery

I have avoided hospital all my life so far, so the sudden prospect of major surgery is a little daunting. I discussed various surgical options with my consultant – I was attracted by the idea of laparoscopic surgery, but perhaps only because my inner geek was interested in having a robot involved in the procedure (the hospital has a da Vinci Surgical System). The chief advantage of the laparoscopic approach is that it is less invasive and therefore tends to have a shorter recovery time – and exhibiting perhaps a dry sense of humour the consultant observed I was probably “keen to get back to the gym”.

However, in my sort of case the Cambridge team tends to favour open surgery. This is in part because they can dump ice into my body cavity during the operation, so that the (cooled) kidney remainder dies a little less as a result of the necessary ischemia, but also because of the “endophytic cyst” that has been found in the centre of the kidney. Ah yes, that cyst. The doctors seem sure this is nothing to worry about, since many people develop simple (fluid-filled) renal cysts at some time. However, just to be sure, the surgical team will perform an ultrasound scan on my exposed kidney to confirm whether this cyst really is as simple as it appears, and if not – cut it out. Given that I am learning that doctors are practised in the art of gradual disclosure, I feel a little nervous about this.

Radical vs partial

There was also the question of whether to have the whole kidney removed (radical nephrectomy), or just the diseased part. The thinking here is that for smaller tumours (such as mine) it is better to preserve some kidney, and so some kidney function, where possible. This is not so much based on direct clinical evidence – since one kidney always takes over so effectively when the other is removed, this would be hard to measure – but on logic: if something else goes wrong with the remaining kidney later, it is surely better to have preserved whatever one can.

Retail therapy

Faced with various discomforts ahead, I decided I needed to treat myself to some compensatory camera equipment, and plumped for a second-hand Nikon D700. This is a camera that Nikon is about to discontinue, but has many points in its favour:

  • It’s now been around long enough (since 2008) that second-hand ones are available at reasonable prices.
  • It’s a “full-frame” camera, with all the attendant benefits that brings – particularly in ultra-wide lens choice, which interests me.
  • Unlike some Nikon models this has happy colours.
  • It’s so well-established that supporting software (such as my favourite RAW converter, DxO Optics Pro) is thoroughly de-snagged.

I’ll post some more thoughts on this camera when I’ve had a chance to use it more, but in the meantime … Merry Xmas!

Xmas Cheer

Cancer

by Alex Brown 16. November 2011 09:33
Die
What are the odds?

I suppose it was during the ultrasound scan that I first realised something was really up. Here I was, having my bladder and kidneys scanned, as a result of two years, on and off, of what we Brits might term “waterworks problems”. We were chatting away (“Good drainage! — yes that looks fine” … small-talk like that) when the scanner reached my right kidney.

“Everything okay?” I enquire.

“Let me finish and then we can discuss it,” the sonographer replies. The tone had changed.

Now, I say this was my first realisation that something was wrong, but it’s not quite as simple as that. As a mild hypochondriac I often live with the strange internal doublethink of believing that every ache or pain betokens some dreadful illness, while simultaneously knowing that that’s silly and that I'm fine really. Now the balance had changed: the dark fears had become the reality; the self-reassurance the self-deception.

“Maybe some kind of cyst – it’s worth having it checked out. I’ll put something in a note to your GP.”

And so she did, as I find out on getting the “I’m so sorry” call from my GP with the (not very meaningful) news that the scan had found a 27mm heterogeneous vascular lesion on my right kidney, and the (rather more meaningful) news that as a consequence I was being urgently referred to Addenbrooke’s with suspected cancer of the kidney.

More scans

“This is the worst bit,” said the consultant, “the waiting around not knowing.”

I doubted this: an early and painful death was potentially the worst bit, I thought. In any case, it seems that in the World of Cancer “not knowing” is a constant. Or at least not knowing everything. You don’t know how well you’ll respond to various treatments, you don’t know what’s happening internally between scans, you don’t know what the limited resolving power of the scanners can’t reveal. An ultrasound scan, it turns out, doesn’t give a very precise image of the organs, so I was now to have a CT scan during which a contrast agent would be injected into my body so that my organs “lit up” in the images produced.

Meanwhile, I’d been able to gather information about kidney cancer from the Web. In particular it seemed that:

  • The kind of sizes being talked about for my “mass” meant it was small in kidney cancer terms. In some cases the tumours grow football-sized before detection.
  • Kidney cancer doesn’t respond to the usual radiation-based treatments used for many other cancers. It is treated by surgery and (recently) by new immunotherapy drugs which can sometimes be successful in stimulating the immune system into attacking the cancer.
  • If a kidney has to be removed, people can usually get by fine with just one.

Quoth the server, 404

In addition to factual information available on the Web there is a range of forums and mailing lists dealing with kidney cancer, from furrow-browed ones detailing experiences and reviewing the latest research, to softer ones offering more purely emotional support (“I’ll pray for you on your cancer journey”). Needless to say I prefer the former. There are also lists of kidney cancer blogs (of which I suppose this is now one) which range from the reassuring (“I had kidney cancer n years ago and following surgery have had no recurrence”) to the embattled (“we were very disappointed to learn the scan showed there were now nodules on the lungs”) to the despairing, where a distraught spouse takes over to leave grief-stricken postings following the first blogger’s death. And there are those blogs which just get you a 404 – which could be good or bad …

So at yesterday’s meeting to review the CT scan result I already felt reasonably well-prepared for what might transpire and what the options might be. The key points were that:

  • The CT scan confirms a 3.3cm × 2.5cm mass on the lower pole of my right kidney. Its removal is recommended as it is highly likely to be cancerous.
  • The chances of any cancer having spread, given the size of the tumour, are very low. Removal of the tumour should effect a complete cure.
  • Other organs (in the thorax, pelvis & abdomen) were surveyed in the CT scan imagery: nothing was found. My left kidney is “pristine”.
  • The recommended procedure is an open partial nephrectomy, to happen just before or just after Xmas. This will probably entail 3-6 days in hospital and some weeks of recovery at home; no driving for 6 weeks.

So this is where I am. A fuller picture will emerge when the pathology is known for whatever is removed – but for now, the plan is that after some fairly hefty surgery I can expect the disease to be gone. Or, even better, that the slim chance comes good that the tumour is not cancerous – for as Woody Allen has observed, the most beautiful words in the English language are not “I love you” but “It's benign.”

Which is witty but not, I think, true.

Two Cats

by Alex Brown 14. June 2011 13:01
Sasha

Sasha, a Russian Blue and Leo, an Abyssinian. Both photos taken in natural light with a Nikkor 50mm f/1.4G lens at f/1.4, on a Fuji S5 Pro.

Leo

Breaking the Rules

by Alex Brown 30. April 2011 16:15
Cowslip Meadow

 

You are supposed not to transition from an out-of-focus foreground into an in-focus background, or blow your highlights. Although (using my new old camera) I could have retrieved all the highlights in the sky, I decided a notch of pure white actually looked more effective. The Topaz Adjust “dramatic” filter has been applied to the sky too, which risks making parts of it darker than the lit grass (another no-no).

Oh well …


UK Open Standards *Sigh*

by Alex Brown 15. April 2011 11:44

It can be tough, putting effort into standardization activities – particularly if you're not paid to do it by your employer. The tedious meetings, the jet lag, the bureaucratic friction and the engineering compromises can all eat away at the soul. But most people participating (particularly, perhaps, those not paid to do it by their employer) are kept going by the thought that, in the end, their contribution might make a difference. That in some small way the world will become a better place because of their efforts.

So it comes as something of a kick in the teeth to see something like this Survey on Open Standards in the Public Sector from the UK government's Cabinet Office. It is hard to know where to start with this: whether it’s the ignorance of what a “standard” (never mind an “open standard”) is; or the thought that having a check-box survey is an intelligent way to form an assessment of technologies leading into a standards policy.

Faced with such clueless fuckwittery it’s tempting simply to ask: what’s the point?

Australia and OOXML

by Alex Brown 20. January 2011 09:57
Somewhere too early

 

There have been some poor decisions of late in Australia. Not playing Hauritz and persisting too long with the out-of-form Clarke and Ponting probably cost Australia the Ashes and has led to terrible self-flagellation. While it’s generally not done to take pleasure in the discomfort of others, I do think an exception can be made in the case of the Australian cricket team.

From various recent blogs and tweets I’ve noticed a fuss surrounding the decision by the Australian Government Information Management Office (AGIMO) to recommend the use of OOXML as a document format, and from the tenor of the comments it would seem this is being treated as a similar calamity for Australia. However, there appears to be some misunderstanding and misinformation flying around which is worth a comment …

Leaving aside the merits of the decision itself, one particular theme in the commentary is that AGIMO have somehow picked a “non-ISO” version of OOXML. I can’t find any evidence of this. When Ecma 376 is specified without an edition number, the convention is that the latest version of that standard is intended; and though I do think there is a danger of over-reading this particular citation, the current version of Ecma 376 is the second edition, which is the version of OOXML that was approved by ISO and IEC members in April 2008. The Ecma and ISO/IEC versions are in lock-step, with the Ecma text only ever mirroring the ISO/IEC text. And although (as now) there are inevitably some bureaucratic and administrative delays in the Ecma version rolling in all changes made in JTC 1 prior to publication, to cite one is, effectively, equivalent to citing the other.

[UPDATE: John Sheridan from AGIMO comments below that Ecma 376 1st Edition was intended, and I respond]

Rethinking OOXML Validation, Part 1

by Alex Brown 4. November 2010 15:09
ODF Plugfest Venue
Brussels ODF Plugfest venue

At the recent ODF Plugfest in Brussels, I was very interested to hear Jos van den Oever of KOffice present on how ODF’s alternative “flat” document format could be used to drive browser-based rendering of ODF documents. ODF defines two methods of serializing documents: one uses multiple files in a “Zip” archive; the other, the aforementioned “flat” format, combines everything into a single XML file. Seeing this approach in action gelled with some thoughts I’d been having on how better to validate OOXML documents using standards-based XML tools …

Unlike ODF, OOXML has no “flat” file format – its files are OPC packages built on top of Zip archives. However, some interesting work has already been done in this area by Microsoft’s Eric White in such blog posts as The Flat OPC Format, which points out that Microsoft Word™ (alone among the Office™ suite members [UPDATE: Word and PowerPoint can do this]) can already save in an unofficial flat format which can be processed with standards-based XML tools like XSLT processors.

Rather than having to rely on Word, or stick only to word processing documents, I thought it would be interesting to explore ways in which any OOXML document could be flattened and processed using standards-based processors. Ideally one would then also write a tool that did the opposite so that to work with OOXML content the steps would be first to flatten it, then to do the processing, and then to re-structify it into an OPC package.
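To make the round trip concrete, here is a minimal Python sketch of the flatten/re-structify idea – not the XProc implementation, and using invented element and attribute names rather than any official flat-OPC vocabulary. For brevity it only embeds XML parts; binary parts are skipped:

```python
import zipfile
import xml.etree.ElementTree as ET

def flatten_opc(path):
    """Flatten an OPC package (a Zip archive) into a single XML tree.

    Only XML parts are embedded here, as parsed subtrees; a fuller
    version would Base64-encode binary parts too. The <package> and
    <part> names are invented for illustration.
    """
    root = ET.Element("package", {"href": path})
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if name.endswith((".xml", ".rels")):
                part = ET.SubElement(root, "part", {"name": name})
                part.append(ET.fromstring(zf.read(name)))
    return root

def restructify_opc(root, path):
    """The inverse step: rebuild a Zip archive from the flat tree."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        for part in root.findall("part"):
            zf.writestr(part.get("name"), ET.tostring(part[0]))
    return path
```

Once flattened, the whole package is one XML document, so any standards-based XML tool can process it before the archive is rebuilt.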

Back to XProc

I have already written a number of blog posts on office document validation, and have used a variety of technical approaches to get the validation done. Most of my recent effort has been on developing the Office-o-tron, a hand-crafted Java application which functions primarily by unpacking archives to the file system before operating on their individual components. Earlier efforts using XProc had foundered on the difficulty of working with files inside a Zip archive — in particular because I was using the non-standard JAR URI scheme which, it turns out, is not capable of addressing items with certain names (e.g. “Object 1”) that one typically finds inside ODF documents.

However, armed with knowledge gained from developing Office-o-tron, and looking again at the Zip-handling extension functions of the Calabash XProc processor, I came to think there was a way XProc could be used to get the job done. Here’s how …

Inspecting an OPC package

OOXML documents are built using the Open Packaging Convention (OPC, or ISO/IEC 29500-2), a generic means of building file formats within Zip archives which also happens to underpin the XPS format. OPC’s chief virtue – that it is very generic – is offset by much (probably too much) complexity in pursuit of this goal. Before we can know what we’ve got in an OPC package, and how to process it, some work needs to be done.

Fortunately, the essence of what we need consists of two pieces of information: a file inside the Zip guaranteed to be called “[Content_Types].xml”, and a manifest of the content of the package. XProc can get both of these pieces of information for us:

<?xml version="1.0"?>
<p:pipeline name="consolidate-officedoc"
  xmlns:p="http://www.w3.org/ns/xproc"
  xmlns:c="http://www.w3.org/ns/xproc-step"
  xmlns:cx="http://xmlcalabash.com/ns/extensions"
  xmlns:xo="http://xmlopen.org/officecert"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  version="1.0">

  <p:import href="extensions.xpl"/>

  <!-- specifies the document to be processed -->
  <p:option name="package-sysid" required="true"/>


  <!--
  
  Given the system identifier $package-sysid of an OOXML document,
  this pipeline returns a document whose root element is archive-info
  which contains two children: the [Content_Types].xml resource
  contained in the root of the archive, and a zipfile element
  created per the unzip step at:
  
  http://xmlcalabash.com/extension/steps/library-1.0.xpl
  
  -->
  <p:pipeline type="xo:archive-info">

    <p:option name="package-sysid" required="true"/>

    <cx:unzip name="content-types" file="[Content_Types].xml">
      <p:with-option name="href" select="$package-sysid"/>
    </cx:unzip>

    <cx:unzip name="archive-content">
      <p:with-option name="href" select="$package-sysid"/>
    </cx:unzip>

    <p:sink/>

    <p:wrap-sequence wrapper="archive-info">
      <p:input port="source">
        <p:pipe step="content-types" port="result"/>
        <p:pipe step="archive-content" port="result"/>
      </p:input>
    </p:wrap-sequence>

  </p:pipeline>

  <!-- get the type information and content of the package -->
  <xo:archive-info>
    <p:with-option name="package-sysid" select="$package-sysid"/>
  </xo:archive-info>

  <!-- etc -->

Executing this pipeline on a typical “HelloWorld.docx” file gives us an XML document which consists of a composite of our two vital pieces of information, as follows:

<archive-info>
  <Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
    <Override PartName="/word/comments.xml"
      ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.comments+xml"/>
    <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
    <Default Extension="xml" ContentType="application/xml"/>
    <Override PartName="/word/document.xml"
      ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"/>
    <Override PartName="/word/styles.xml"
      ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml"/>
    <Override PartName="/docProps/app.xml"
      ContentType="application/vnd.openxmlformats-officedocument.extended-properties+xml"/>
    <Override PartName="/word/settings.xml"
      ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.settings+xml"/>
    <Override PartName="/word/theme/theme1.xml"
      ContentType="application/vnd.openxmlformats-officedocument.theme+xml"/>
    <Override PartName="/word/fontTable.xml"
      ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.fontTable+xml"/>
    <Override PartName="/word/webSettings.xml"
      ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.webSettings+xml"/>
    <Override PartName="/docProps/core.xml"
      ContentType="application/vnd.openxmlformats-package.core-properties+xml"/>
  </Types>
  <c:zipfile href="file:/C:/work/officecert/hello.docx">
    <c:file compressed-size="368" size="712" name="docProps/app.xml" date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="375" size="747" name="docProps/core.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="459" size="1004" name="word/comments.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="539" size="1218" name="word/document.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="407" size="1296" name="word/fontTable.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="651" size="1443" name="word/settings.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="1783" size="16891" name="word/styles.xml"
      date="2009-05-25T14:15:08.000+01:00"/>
    <c:file compressed-size="1686" size="6992" name="word/theme/theme1.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="187" size="260" name="word/webSettings.xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="265" size="948" name="word/_rels/document.xml.rels"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="372" size="1443" name="[Content_Types].xml"
      date="1980-01-01T00:00:00.000Z"/>
    <c:file compressed-size="243" size="590" name="_rels/.rels" date="1980-01-01T00:00:00.000Z"/>
  </c:zipfile>
</archive-info>

The purpose of the information in the Types element is to tell us the MIME types of the contents of the package, either specifically (in Override elements), or indirectly by associating a MIME type with file extensions (in Default elements). What we are now going to do is add another step to our pipeline that resolves all this information so that we label each of the items in the Zip file with the MIME type that applies to it.

 <p:xslt>
    <p:input port="stylesheet">
      <p:inline>
        <xsl:stylesheet version="2.0"
          xmlns:opc="http://schemas.openxmlformats.org/package/2006/content-types">

          <xsl:variable name="ooxml-mappings" select="document('ooxml-map.xml')"/>

          <xsl:template match="/">
            <c:zipfile>
              <xsl:copy-of select="/archive-info/c:zipfile/@*"/>
              <xsl:apply-templates/>
            </c:zipfile>
          </xsl:template>

          <xsl:template match="c:file">
            <xsl:variable name="entry-name" select="@name"/>
            <xsl:variable name="toks" select="tokenize($entry-name,'\.')"/>
            <xsl:variable name="ext" select="$toks[count($toks)]"/>
            <c:file>
              <xsl:copy-of select="@name"/>
              <xsl:variable name="overriden-type"
                select="//opc:Override[ends-with(@PartName,$entry-name)]/@ContentType"/>
              <xsl:variable name="default-type"
                select="//opc:Default[ends-with(@Extension,$ext)]/@ContentType"/>
              <xsl:variable name="resolved-type"
                select="if(string-length($overriden-type)) then $overriden-type else $default-type"/>
              <xsl:attribute name="resolved-type" select="$resolved-type"/>
              <xsl:attribute name="schema"
                select="$ooxml-mappings//mapping[mime-type=$resolved-type]/schema-name"/>
              <expand name="{@name}"/>
            </c:file>
          </xsl:template>

        </xsl:stylesheet>
      </p:inline>
    </p:input>
  </p:xslt>

You’ll notice I am also using an XML document called “ooxml-map.xml” as part of this enrichment process. This is a file which contains the (hard won) information about which MIME types are governed by which schemas published as part of the OOXML standard. That document is available online here.
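For readers who don’t speak XSLT, the content-type resolution the stylesheet performs boils down to a simple rule: an explicit Override for a part name wins; otherwise the Default registered for the part’s file extension applies. A Python sketch (matching part names exactly, where the stylesheet above uses ends-with; the function name is my own):

```python
import posixpath

def resolve_content_type(part_name, overrides, defaults):
    """Resolve a part's MIME type per the OPC content-type rules.

    `overrides` maps part names ("/word/document.xml") to MIME types
    (from Override elements); `defaults` maps extensions ("xml") to
    MIME types (from Default elements). Both would be read out of
    [Content_Types].xml. Returns None if nothing matches.
    """
    # Part names in [Content_Types].xml carry a leading slash
    key = "/" + part_name.lstrip("/")
    if key in overrides:
        return overrides[key]
    ext = posixpath.splitext(part_name)[1].lstrip(".")
    return defaults.get(ext)
```

This is exactly the information the enriched manifest carries in its resolved-type attributes.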

The result of running this additional step is to give us an enriched manifest of the OPC package content:

<c:zipfile xmlns:c="http://www.w3.org/ns/xproc-step"
  xmlns:cx="http://xmlcalabash.com/ns/extensions"
  xmlns:xo="http://xmlopen.org/officecert"
  xmlns:opc="http://schemas.openxmlformats.org/package/2006/content-types"
  href="file:/C:/work/officecert/hello.docx">
  <c:file name="docProps/app.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.extended-properties+xml"
    schema="shared-documentPropertiesExtended.xsd">
    <expand name="docProps/app.xml"/>
  </c:file>
  <c:file name="docProps/core.xml"
    resolved-type="application/vnd.openxmlformats-package.core-properties+xml"
    schema="opc-coreProperties.xsd">
    <expand name="docProps/core.xml"/>
  </c:file>
  <c:file name="word/comments.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.wordprocessingml.comments+xml"
    schema="wml.xsd">
    <expand name="word/comments.xml"/>
  </c:file>
  <c:file name="word/document.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"
    schema="wml.xsd">
    <expand name="word/document.xml"/>
  </c:file>
  <c:file name="word/fontTable.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.wordprocessingml.fontTable+xml"
    schema="wml.xsd">
    <expand name="word/fontTable.xml"/>
  </c:file>
  <c:file name="word/settings.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.wordprocessingml.settings+xml"
    schema="wml.xsd">
    <expand name="word/settings.xml"/>
  </c:file>
  <c:file name="word/styles.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml"
    schema="wml.xsd">
    <expand name="word/styles.xml"/>
  </c:file>
  <c:file name="word/theme/theme1.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.theme+xml"
    schema="dml-main.xsd">
    <expand name="word/theme/theme1.xml"/>
  </c:file>
  <c:file name="word/webSettings.xml"
    resolved-type="application/vnd.openxmlformats-officedocument.wordprocessingml.webSettings+xml"
    schema="wml.xsd">
    <expand name="word/webSettings.xml"/>
  </c:file>
  <c:file name="word/_rels/document.xml.rels"
    resolved-type="application/vnd.openxmlformats-package.relationships+xml"
    schema="">
    <expand name="word/_rels/document.xml.rels"/>
  </c:file>
  <c:file name="[Content_Types].xml" resolved-type="application/xml" schema="">
    <expand name="[Content_Types].xml"/>
  </c:file>
  <c:file name="_rels/.rels"
    resolved-type="application/vnd.openxmlformats-package.relationships+xml"
    schema="">
    <expand name="_rels/.rels"/>
  </c:file>
</c:zipfile>

Also notice that each of the items has been given a child element called expand – this is a placeholder for the documents which we are going to expand in situ to create our flat representation of the OPC package content. The pipeline step to achieve that expansion is quite straightforward:

  <p:viewport name="archive-content" match="c:file[contains(@resolved-type,'xml')]/expand">
    <p:variable name="filename" select="/*/@name"/>
    <cx:unzip>
      <p:with-option name="href" select="$package-sysid"/>
      <p:with-option name="file" select="$filename"/>
    </cx:unzip>
  </p:viewport>

At this point, we're only expanding the content that looks like it is XML – a fuller implementation would expand non-XML content and Base64-encode it (perfectly doable with XProc).
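The Base64 treatment is simple enough; a sketch in Python of what embedding a binary part (and recovering it when re-structifying) might look like – the element and attribute names here are invented for illustration:

```python
import base64
import xml.etree.ElementTree as ET

def embed_binary_part(name, data):
    """Embed a non-XML part as Base64 text so it can live inside
    the flat XML representation (names are illustrative only)."""
    el = ET.Element("expand", {"name": name, "encoding": "base64"})
    el.text = base64.b64encode(data).decode("ascii")
    return el

def extract_binary_part(el):
    """Inverse: recover the original bytes for re-structifying."""
    return base64.b64decode(el.text)
```

With that in place, the flat document can carry every part of the package, not just the XML ones.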

The result of applying this process is a rather large document, with all the expand elements referring to XML documents replaced by that XML document content … in other words, a flat OPC file. With the additional metadata we have placed on the containing c:file elements, we have enough information to start performing schema validation. I will look at validation in more depth in the next part of this post …

OpenOffice.org becomes LibreOffice

by Alex Brown 28. September 2010 08:49

Or, as The Register characteristically puts it, “OpenOffice files Oracle divorce papers”.

This is a very interesting development, and the new LibreOffice project looks much more like a normal community-based open-source project than OpenOffice.org ever did, with its weird requirement that contributors surrendered their copyright to Sun (then Oracle). The purpose of that always seemed to me to be that it enabled Sun/Oracle, as the copyright holder, to skip around the viral nature of the GPL and strike deals with other corporations over the code base (so you won’t see all the source code for IBM Lotus Symphony freely available, for example). Another consequence was that some useful work done by the Go-OOo project never found its way back into OpenOffice.org — now though we learn that “the enhancements produced by the Go-OOo team will be merged into LibreOffice, effective immediately”. In particular I hope this will see better support for OOXML in the future – surely a necessity if LibreOffice is ever to succeed in the substitution game.

One wrinkle is the “cease fire” agreed between Microsoft and Sun (and inherited by Oracle) in which OpenOffice appeared to be granted safety from patent action by Microsoft. Presumably this will not apply to the new LibreOffice project …

While this development seems like it might be very good news for open source office suites, it is very unfortunate that the brand has been fragmented with yet another new name for would-be users to get their heads round.
