From: Paul Rohr (paul@abisource.com)
Date: Mon Apr 29 2002 - 19:16:11 EDT
One last idea on the gettext front. I just took a look at the current .mo
file format:
http://www.gnu.org/manual/gettext-0.10.35/html_node/gettext_34.html
It's obviously optimized for string lookups (which we don't need). However,
it *would* be nice to have our translators only need to deal with .po files.
So then I asked myself ... why use the MO file format at all?
AFAICT, there are four different steps needed here:
1. isolate the strings
-----------------------
Prepare the sources so that gettext can extract strings to a .PO file.
To make this work, I think we'd just need to redefine the existing
*String_id.h file macros.
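To make step 1 concrete, here's a rough sketch of what the extraction half could look like. It assumes a hypothetical macro layout -- dcl(ID, "string") entries in the *String_id.h files -- and carries the string ID along in a "#." extracted-comment so the reverse tool can map things back later. The real macro names and PO conventions would need checking against the actual headers.

```python
import re

# Hypothetical macro layout -- assumes each entry in a *String_id.h file
# looks like:   dcl(AP_STRING_ID_FileNew, "Create a new document")
DCL_RE = re.compile(r'dcl\(\s*(\w+)\s*,\s*"((?:[^"\\]|\\.)*)"\s*\)')

def header_to_po(header_text):
    """Turn macroized string_id.h text into PO entries, one per string ID.

    The string ID rides along in a '#.' extracted comment (a convention
    invented for this sketch) so a later tool can rebuild the ID -> string
    table from the translated PO file.
    """
    entries = []
    for string_id, msg in DCL_RE.findall(header_text):
        entries.append('#. id: %s\nmsgid "%s"\nmsgstr ""\n' % (string_id, msg))
    return "\n".join(entries)
```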
2. translate them
------------------
Have the translators work with and check in .PO files. (Those are a
plain-text format, right?)
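(For anyone who hasn't seen one: yes, a PO file is just plain text, roughly like the following. The "#. id:" comment line is my own invented convention from step 1, not standard gettext output; the msgid/msgstr pairs are the standard part.)

```
#. id: AP_STRING_ID_FileNew
msgid "Create a new document"
msgstr "Créer un nouveau document"
```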
3. transform the result
------------------------
At *build time*, instead of running a gettext tool to create a .MO file,
run one that creates a strings file with the appropriate IDs.
NOTE: This means that people wouldn't have to create strings files by
hand any more. So long as we have a way to keep track of the encoding (so
iconv can find it), that shouldn't be a problem.
4. at runtime, look up the strings
-----------------------------------
Instead of doing all that ID --> en-US mapping, just use the existing
logic as is.
To make this work, we'd need two little tools -- one that creates or
updates a PO file given a pair of appropriately macroized string_id.h files
(for step 1) and one that transforms a PO file back into a strings file (for
step 3). Everything else stays the same.
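The second tool (step 3) really is little. Here's a minimal sketch, again assuming the invented "#. id:" comment convention from the first tool, and assuming a simple one-line-per-string ID="value" output -- the real strings-file format (and its encoding header for iconv) would differ, so treat this as a shape, not an implementation:

```python
import re

# Matches one PO entry of the form produced by the step-1 sketch:
#   #. id: SOME_STRING_ID
#   msgid "original"
#   msgstr "translation"
ENTRY_RE = re.compile(
    r'#\. id: (\w+)\s*\n'
    r'msgid "((?:[^"\\]|\\.)*)"\s*\n'
    r'msgstr "((?:[^"\\]|\\.)*)"')

def po_to_strings(po_text):
    """Transform a translated PO file back into strings-file lines."""
    lines = []
    for string_id, msgid, msgstr in ENTRY_RE.findall(po_text):
        # Fall back to the original string when no translation was supplied,
        # so an incomplete PO file still yields a usable strings file.
        lines.append('%s="%s"' % (string_id, msgstr or msgid))
    return "\n".join(lines)
```

Run at build time, this replaces the msgfmt step that would otherwise produce a .MO file, and the runtime lookup code never has to know PO files were involved.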
pros
- we need zero new code at runtime
- translators just work with PO files as usual
cons
- this sounds so easy, it's almost gotta be a dumb idea
What am I missing here?
Paul
PS: When I studied literary theory, this was *not* the kind of po-mo critic
I was training to become.
This archive was generated by hypermail 2.1.4 : Mon Apr 29 2002 - 19:17:10 EDT