[Logo]

Link Grammar Parser


News

November 2022: link-grammar 5.12.0 released! This release is important because it fixes a rare thread-corruption bug in the multi-threaded regex functions. It is also notable in that it allows LG to pull its dictionary from a live AtomSpace, even as that AtomSpace is growing and changing. See below for a description of the other changes in this release.

What is Link Grammar?

The Link Grammar Parser exhibits the linguistic (natural language) structure of English, Thai, Russian, Arabic and Persian, as well as limited subsets of a half-dozen other languages. This structure is a graph of typed links (edges) between the words in a sentence. One may obtain the more conventional HPSG (constituent) and dependency-style parses from Link Grammar by applying a collection of rules that convert to these other formats. This is possible because Link Grammar goes a bit "deeper" into the "syntactico-semantic" structure of a sentence: it provides considerably more fine-grained and detailed information than is commonly available from conventional parsers.

The theory of Link Grammar parsing was originally developed in 1991 by Davy Temperley, John Lafferty and Daniel Sleator, at the time professors of linguistics and computer science at Carnegie Mellon University. The three initial publications on this theory provide the best introduction and overview; since then, there have been hundreds of publications further exploring, examining and extending the ideas.

Although based on the original Carnegie-Mellon code base, the current Link Grammar package has dramatically evolved and is profoundly different from earlier versions. There have been innumerable bug fixes; performance has improved by more than an order of magnitude. The package is fully multi-threaded, fully UTF-8 enabled, and has been scrubbed for security, enabling cloud deployment. Parse coverage of English has been dramatically improved; other languages have been added (most notably, Thai and Russian). There is a raft of new features, including support for morphology, dialects, and a fine-grained weight (cost) system, allowing vector-embedding-like behaviour. There is a new, sophisticated tokenizer tailored for morphology: it can offer alternative splittings for morphologically ambiguous words. Dictionaries can be updated at run-time, enabling systems that perform continuous learning of grammar to also parse at the same time. That is, dictionary updates and parsing are mutually thread-safe. Classes of words can be recognized with regexes. Random planar graph parsing is fully supported; this allows uniform sampling of the space of planar graphs.

The latest addition is an experimental sentence generator; it is being used in the OpenCog Language Learning project, which aims to automatically learn Link Grammars from corpora, using brand-new and innovative information theoretic techniques, somewhat similar to those found in artificial neural nets (deep learning), but using explicitly symbolic representations.

Quick Overview

The parser includes APIs in a variety of programming languages, as well as a handy command-line tool for playing with it. Here is some typical output:

              linkparser> This is a test!
                 Linkage 1, cost vector = (UNUSED=0 DIS= 0.00 LEN=6)
              
                  +-------------Xp------------+
                  +----->WV----->+---Ost--+   |
                  +---Wd---+-Ss*b+  +Ds**c+   |
                  |        |     |  |     |   |
              LEFT-WALL this.p is.v a  test.n !
              
              (S (NP this.p) (VP is.v (NP a test.n)) !)
              
                          LEFT-WALL    0.000  Wd+ hWV+ Xp+
                             this.p    0.000  Wd- Ss*b+
                               is.v    0.000  Ss- dWV- O*t+
                                  a    0.000  Ds**c+
                             test.n    0.000  Ds**c- Os-
                                  !    0.000  Xp- RW+
                         RIGHT-WALL    0.000  RW-

This rather busy display illustrates many interesting things. For example, the Ss*b link connects the verb and the subject, and indicates that the subject is singular. Likewise, the Ost link connects the verb and the object, and also indicates that the object is singular. The WV (verb-wall) link points at the head-verb of the sentence, while the Wd link points at the head-noun. The Xp link connects to the trailing punctuation. The Ds**c link connects the noun to the determiner: it again confirms that the noun is singular, and also that the noun starts with a consonant. (The PH link, not required here, is used to force phonetic agreement, distinguishing 'a' from 'an'). These link types are documented in the English Link Documentation.

The bottom of the display lists the "disjuncts" used for each word. A disjunct is simply the list of connectors that were employed to form the links on that word. Disjuncts are particularly interesting because they serve as an extremely fine-grained form of "part of speech" or "grammatical category", although they can also be interpreted as "semantic selections". Thus, for example, the disjunct S- O+ indicates a transitive verb: a verb that takes both a subject and an object. The additional markup above conveys finer details: 'is' is being used as a transitive verb that took a singular subject, and that served as (is usable as) the head verb of the sentence. The floating-point value is the "cost" of the disjunct; it very roughly captures the log-likelihood of this particular grammatical (and semantic!) usage. Much as parts of speech correlate with word meanings, so also fine-grained parts of speech correlate with much finer distinctions and gradations of meaning.
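
The same parse can be obtained programmatically. Below is a minimal sketch using the Python bindings shipped with the package; it assumes the linkgrammar module is installed and that the Dictionary, ParseOptions and Sentence classes behave as in recent releases.

            # Minimal sketch (assumption: the `linkgrammar` Python bindings
            # from this package are installed).
            from linkgrammar import Dictionary, ParseOptions, Sentence

            po = ParseOptions(verbosity=0)      # keep the library quiet
            sent = Sentence("This is a test!", Dictionary('en'), po)

            for linkage in sent.parse():
                print(linkage.diagram())        # the ASCII linkage diagram shown above
                break                           # only the first (lowest-cost) linkage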

The link-grammar parser also supports morphological analysis. Here is an example in Russian:

              linkparser> это теста
                 Linkage 1, cost vector = (UNUSED=0 DIS= 0.00 LEN=4)
              
                           +-----MVAip-----+
                  +---Wd---+       +-LLCAG-+
                  |        |       |       |
              LEFT-WALL это.msi тест.= =а.ndnpi

The LL link connects the stem 'тест' to the suffix 'а'. The MVA link connects only to the suffix, because, in Russian, it is the suffixes that carry all of the syntactic structure, and not the stems. The Russian lexis is documented here.
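
Other dictionaries are selected the same way. The sketch below, again assuming the Python bindings, loads the Russian dictionary; the display_morphology option used here is an assumption, mirroring the command-line morphology display setting, and makes the stem/suffix split visible in the diagram.

            from linkgrammar import Dictionary, ParseOptions, Sentence

            po = ParseOptions(verbosity=0)
            po.display_morphology = True        # assumption: exposes the morphology display setting
            sent = Sentence("это теста", Dictionary('ru'), po)

            for linkage in sent.parse():
                print(linkage.diagram())        # shows тест.= =а.ndnpi joined by the LL link
                break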

The Thai dictionary is now fully developed, effectively covering the entire language. An example in Thai:

            linkparser> นายกรัฐมนตรี ขึ้น กล่าว สุนทรพจน์
               Linkage 1, cost vector = (UNUSED=0 DIS= 2.00 LEN=2)
            
                +---------LWs--------+
                |           +<---S<--+--VS-+-->O-->+
                |           |        |     |       |
            LEFT-WALL นายกรัฐมนตรี.n ขึ้น.v กล่าว.v สุนทรพจน์.n

The VS link connects two verbs 'ขึ้น' and 'กล่าว' in a serial verb construction. A summary of link types is documented here. Full documentation of Thai Link Grammar can be found here.

Thai Link Grammar also accepts POS-tagged and named-entity-tagged inputs. Each word can be annotated with the Link POS tag. For example:

            linkparser> เมื่อวานนี้.n มี.ve คน.n มา.x ติดต่อ.v คุณ.pr ครับ.pt
            Found 1 linkage (1 had no P.P. violations)
               Unique linkage, cost vector = (UNUSED=0 DIS= 0.00 LEN=12)
            
                                      +---------------------PT--------------------+
                +---------LWs---------+---------->VE---------->+                  |
                |           +<---S<---+-->O-->+       +<--AXw<-+--->O--->+        |
                |           |         |       |       |        |         |        |
            LEFT-WALL เมื่อวานนี้.n[!] มี.ve[!] คน.n[!] มา.x[!] ติดต่อ.v[!] คุณ.pr[!] ครับ.pt[!]

Full documentation for the Thai dictionary can be found here.

The Thai dictionary accepts LST20 tagsets for POS and named entities, to bridge the gap between fundamental NLP tools and the Link Parser. For example:

            linkparser> วันที่_25_ธันวาคม@DTM ของ@PS ทุก@AJ ปี@NN เป็น@VV วัน@NN คริสต์มาส@NN
            Found 348 linkages (348 had no P.P. violations)
               Linkage 1, cost vector = (UNUSED=0 DIS= 1.00 LEN=10)
            
                +--------------------------------LWs--------------------------------+
                |               +<------------------------S<------------------------+
                |               |                +---------->PO--------->+          |
                |               +----->AJpr----->+            +<---AJj<--+          +---->O---->+------NZ-----+
                |               |                |            |          |          |           |             |
            LEFT-WALL วันที่_25_ธันวาคม@DTM[!] ของ@PS[!].pnn ทุก@AJ[!].jl ปี@NN[!].n เป็น@VV[!].v วัน@NN[!].na คริสต์มาส@NN[!].n

Note that each word above is annotated with LST20 POS tags and NE tags. Full documentation for both the Link POS tags and the LST20 tagsets can be found here. More information about LST20, e.g. annotation guideline and data statistics, can be found here.

The `any` language supports uniformly-sampled random planar graphs:

            linkparser> asdf qwer tyuiop fghj bbb
            Found 1162 linkages (1162 had no P.P. violations)
            
                         +-------ANY------+-------ANY------+
                +---ANY--+--ANY--+        +---ANY--+--ANY--+
                |        |       |        |        |       |
            LEFT-WALL asdf[!] qwer[!] tyuiop[!] fghj[!] bbb[!]

The `ady` language does likewise, performing random morphological splittings:

            linkparser> asdf qwerty fghjbbb
            Found 1512 linkages (1512 had no P.P. violations)
            
                                              +------------------ANY-----------------+
                +-----ANY----+-------ANY------+                  +---------LL--------+
                |            |                |                  |                   |
            LEFT-WALL asdf[!ANY-WORD] qwerty[!ANY-WORD] fgh[!SIMPLE-STEM].= =jbbb[!SIMPLE-SUFF]
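
These random-parsing dictionaries are selected just like any natural language. A minimal sketch, again assuming the Python bindings behave as in recent releases:

            from linkgrammar import Dictionary, ParseOptions, Sentence

            # 'any' links arbitrary whitespace-separated tokens, sampling
            # uniformly from the space of planar linkages.
            po = ParseOptions(verbosity=0)
            sent = Sentence("asdf qwer tyuiop fghj bbb", Dictionary('any'), po)

            for linkage in sent.parse():
                print(linkage.diagram())
                break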

Theory

An extended overview and summary of Link Grammar can be found on the Link Grammar Wikipedia page, which touches on most of the important aspects of the theory. However, it is no substitute for the original papers published on the topic.

A fairly comprehensive bibliography of papers written before 2004 is here and is mirrored here. A sampling of publications that reference Link Grammar in some way can be found here; some of these may be downloaded here.

Documentation

There is an extensive set of pages documenting the English dictionary; specifically, the names of links and their meanings, as well as how to write new rules. There is also a short primer for creating dictionaries for new languages.

The documentation for the C/C++ programming API is here. Bindings for other programming languages can be found in the bindings directory in the GitHub Link Grammar Repo.

System Summary

  • Actively maintained! New releases typically happen quarterly.
  • Besides English, there are comprehensive Thai and Russian dictionaries. The Thai dictionary was provided by Prachya Boonkwan. The Russian dictionary was provided by Sergey Protasov. The Persian and Arabic subsystems were provided by Jon Dehdari. A modest (thousand-word) German dictionary is included. There are proof-of-concept dictionaries for Lithuanian, Indonesian, Kazakh, Vietnamese, Hebrew and Turkish.
  • Several machine-learning projects are attempting to automatically learn LG grammars using unsupervised training methods on bulk text.
  • LG is a full morpho-syntactic parser; morphological disambiguation is handled with a sophisticated tokenization system which tracks alternative candidate word-splits (of words into morphemes) during parsing.
  • Multiple programming language bindings are available, including Ruby, Python, Perl, Lisp, Java, JavaScript, OCaml and AutoIt. Look here.
  • A network (TCP/IP) parse server provides JSON-formatted parse results.
  • Integrated with the OpenCog Atomspace. This allows graph queries and graph tools to be applied to LG output.
  • Fully multi-threaded; a standard build system; pkg-config integration; a CMake config file; dynamic/shared library support; pre-defined Docker containers; support for Linux as well as Windows, macOS and FreeBSD.
  • Several security audits have been performed, including fuzzing for malformed input. Secure and robust for cloud deployment.
  • Source code hosted at GitHub.
  • LGPL v2.1 license; see endnote for details.


Downloading Link Grammar

The source code to the system can be downloaded as a tarball. The current stable version is Link Grammar 5.12.0 (Nov. 2022). Older versions are available here.

GitHub hosts the primary link-grammar repository. Issues (bugs) should be reported there. Developers who are not a part of the core development team should not use or deploy the source from GitHub: it is unstable, and is frequently buggy and broken! All users should use the tarballs only!

Mailing Lists

The mailing list for Link Grammar discussion is at the link-grammar google group.



Ongoing development by OpenCog

Ongoing development of Link Grammar is guided and supported by the Open Cognition project, where the parser plays an important role in the OpenCog natural language processing subsystem. Research and implementation are ongoing; current work includes investigations into unsupervised learning of language.

Stanford Parser Compatibility

A sibling project, RelEx, uses constraint-grammar-like techniques to extract dependency relations that are compatible with the Stanford parser. Its performance is comparable to the Stanford PCFG parsing model, and it is more than three times faster than the Stanford "lexicalized" (factored) model.

The RelEx project is no longer in active development. We learned (the hard way) that the native Link Grammar parses contain much more information than the Stanford dependency markup is capable of supporting. The Stanford-style dependencies are simply not rich or sophisticated enough to produce the kind of data needed for semantic analysis and comprehension, viz. tasks such as predicate-argument extraction, framing, semantic selection, and the like.

Language generation

For sentence generation, i.e. the creation of grammatically correct sentences from a bag of semantic relations, the microplanner and surface realization (sureal) portion of OpenCog is strongly recommended. A short example is here. These "sort-of work", but not very well. The primary issue is that they do not make use of the statistical information available in language to choose likely or reasonable sentence constructions.

We previously recommended two projects that should now be considered obsolete: NLGen and NLGen2. For your entertainment, they are still described here: the NLGen and NLGen2 projects provide natural language generation modules, based on, and compatible with, link-grammar and RelEx. They implement the SegSim ideas for NL generation. See the following YouTube videos of a virtual dog, showing some of NLGen's capabilities (circa 2009): Demo of Virtual Dog Learning to Play Fetch via Imitation and Reinforcement, AI Virtual Dog's Emotions Fluctuate Based on Its Experiences, Demo of Embodied Anaphora Resolution and AI Virtual Dog Answers Simple Questions about Itself and Its Environment.


Linguistic Disclaimer

Link Grammar is a natural language parser, not a human-level artificial general intelligence. This means that there are many sentences that it cannot parse correctly, or at all. There are entire classes of speech and writing that it cannot handle, including twitter posts, IRC chat logs, Valley-girl basilect, Old and Middle English, stock-market listings and raw HTML dumps.

Link Grammar works best with "newspaper English", as taught to and written by those educated in American colleges: standard-sized sentences, with proper grammar, proper punctuation, and correct capitalization. Link Grammar has difficulties with the following types of textual input:

  • Phrases (that are not a part of a complete sentence). There is some support for incomplete sentences with ellipsis. Many kinds of short phrases that can be interpreted as commands or instructions or exclamations are supported.
  • Twitter posts. These tend to be sentence fragments, often lacking proper grammatical structure. You should strip off hash-tags before sending text into the parser (a minimal pre-processing sketch is shown after this list).
  • Any text containing a large number of spelling errors. The parser does have a built-in "spelling-guesser", which explores alternative spellings for words.
  • "Registers", such as newspaper headlines, where determiners are omitted; for example, "Thieves rob bank." Note, however, a "dialect" support system is in development, which can be used to alter ranking favorability for different forms of expression within a single dictionary.
  • Dialog, stage plays and movie scripts. Such dialog tends to consist of interleaved sentences. External software would be needed to disentangle distinct sentence streams.
  • Speech-to-text output. Such systems generate large numbers of mis-heard words that, taken at face value, cannot be part of valid sentences. Even if such recognition were perfect, spoken English tends not to be as well-constructed or grammatical as written English.
  • Support for British English and Commonwealth English is poor. This includes any English dialects spoken in India, Pakistan, Nigeria, Bangladesh, South Africa, as well as former American protectorates, such as the Philippines. British and regional spelling of words is missing from the dictionaries. The "dialect" support subsystem should be able to alleviate this, provided that the lexis is appropriately curated.
  • Slang and various regional non-middle-class-American dialects. This includes most dialects spoken by anyone living in economically poor or under-educated geographical regions, whether in urban housing projects or the red-state small-town and rural poor. Self-identifying subgroup dialects are also not handled, such as drug-culture, gang-culture and hacker-culture. The "dialect" support subsystem should be able to alleviate this, provided that the lexis is appropriately curated.
  • Long run-on sentences. These can generate thousands of alternative parses in a combinatorial explosion.
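
As a concrete example of the pre-processing mentioned in the Twitter item above, hash-tags can be stripped with a one-line regex before the text is handed to the parser. The helper below is hypothetical; any equivalent clean-up will do.

            import re

            def strip_hashtags(text: str) -> str:
                """Remove '#tag' tokens before sending the text to the parser."""
                return re.sub(r'#\w+', '', text).strip()

            # strip_hashtags("Parsers are great #nlp #linkgrammar") -> "Parsers are great"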

It is hoped that the unsupervised learning of language proposal will be of sufficient power and ability to handle most of these exceptional cases. Work is currently ongoing.


Natural Language Support

Ranked in order of maturity.

English
The main English documentation is here.
Thai
A comprehensive set of dictionaries, covering more than 100K words, first appeared in version 5.10.4 (March 2022); a "final" version was delivered in version 5.10.5 (June 2022). Documentation can be found in the LINKDOC.md file on GitHub. Developed by Prachya Boonkwan with the support of the National Electronics and Computer Technology Center in Thailand.
Russian
A set of Russian dictionaries providing full coverage for the language has been incorporated into the main distribution as of version 4.7.10 (March 2013). An older version, from which these are derived, can be found at http://slashzone.ru/parser/. By Sergey Protasov. Includes link documentation (mirror) and subscript (morphology) documentation (mirror). Russian morpheme dictionaries can be had at http://aot.ru.

Documentation of the links and of the word classes is available in the form of a list of examples.

Persian
The Persian dictionaries from Jon Dehdari have been incorporated into the main distribution, as of version 5.0.0 (April 2014). This includes a copy of the Persian stemming engine, as significant morphology analysis needs to be performed to parse Persian.
Arabic
The Arabic dictionaries from Jon Dehdari have been incorporated into the main distribution, as of version 5.0.0 (April 2014). These are derived from the older, original version. [Mirror] These require the Aramorph stemming package, which is included.
German
A small German dictionary, consisting of 850 words, is included. A brief description is provided here.
Lithuanian
A small Lithuanian prototype dictionary has been created. It contains a few hundred words. A few basic sentences parse just fine; the current version focuses on morphological analysis coupled with grammatical analysis. Documentation is here.

A very rough Lithuanian-language dictionary has been created; almost nothing works so far. Documentation is here.

Vietnamese
A small Vietnamese prototype dictionary has been created. It contains several hundred words.
Indonesian
A small Indonesian prototype dictionary has been created. It contains about one hundred words.
Hebrew
A very small Hebrew prototype dictionary has been created. It contains a few dozen words. Almost nothing works correctly (yet).
Kazakh
A very small Kazakh prototype dictionary has been created. It contains a few dozen words. Almost nothing works correctly (yet).
Turkish
A very small Turkish prototype dictionary has been created. It contains a few dozen words. Almost nothing works correctly (yet).
French, Luthor project
The Luthor project aims to develop a set of scripts to automatically construct Link Grammar linkage dictionaries by mining Wiktionary data. Current efforts are focusing on French. (This project appears to be defunct).

Adjunct Projects

The default distribution for Link Grammar includes bindings for Java, Python, Vala, OCaml, Common Lisp and AutoIt, as well as a SWIG FFI interface file. Additional language bindings, and some related projects, are listed below:

RelEx Semantic Relation Extractor
RelEx is an English-language semantic relationship extractor, built on the Link Parser. It can identify subject, object, indirect object and many other relationships between words in a sentence. It will also provide part-of-speech tagging, noun-number tagging, verb tense tagging, gender tagging, and so on. RelEx includes a basic implementation of the Hobbs anaphora (pronoun) resolution algorithm.
Ruby bindings
Ruby bindings are coordinated at the Ruby-LinkParser website. The code can be found at the ged/link-parser github page.
Perl bindings
Perl bindings, created by Danny Brian, can be found on the Lingua-LinkParser page on CPAN. Caution: those bindings appear to be unmaintained; currently, they include features that were removed more than five years ago. (We encourage a new maintainer to step up!) There is also a tutorial written against a very old version of the bindings; some details may be different.
Psi Toolkit (Perl)
The Psi Toolkit, an NLP toolkit aimed at linguists and NLP engineers, includes bindings for link-grammar, via perl.

Recent Changes

Version 5.12.0 (26 November 2022)

This release contains an important bug-fix for a multi-threaded race and crash in the regex code. This is quite rare: I was seeing a crash after 24 hours when running 6 threads. If you're not running at that level, chances are slim you'll see it. But still.

Also notable: this version can attach to a live dictionary running in the AtomSpace. This offers some major improvements over the previous version; a bit more is planned, as integration becomes tighter.

  • Fix crash when using the Atomese dictionary backend.
  • Fix generation tokenization bug when dict has no unknown word token.
  • Major Atomese dictionary extensions, including generation support.
  • Minor tweaks to `any` uniform random parse tree language.
  • Include U+202F NARROW NO-BREAK SPACE as a space character.
  • Fix the various regexes so that they're thread safe! #1354
  • Maybe(?) fix FreeBSD missing -lstdthreads #1355

Version 5.11.0 (27 September 2022)

Most notable in this release is a preliminary prototype interface to the OpenCog AtomSpace. This allows working directly with language data in the AtomSpace, avoiding the need to export the language model.

  • Prototype support for dictionary in the AtomSpace.
  • English dict: assorted missing nouns, verbs. #1289
  • Performance improvements. #1309
  • Fix Windows build break present in 5.10.4 and 5.10.5. #1313
  • Fix "amy" language #1312
  • Fix multilib systems, e.g. elf32-i386 #1314
  • Corrected grapheme support for random morpheme sampling. #1315
  • Thai updates #1322
  • Punctuation fixes #1331
  • Affixes can now be specified with regexes #1334
  • The regex library PCRE2 is required by default.

Version 5.10.5 (17 June 2022)

This version is the first to contain the final version of the Thai dictionary.

  • Updated Docker files. #1288
  • English dict: broader handling of ellipsis.
  • Updated Thai dicts: #1292, #1294
  • Fix Thai regexes to work even with the basic C++ regex lib. #1297
  • Performance improvements. #1298

Version 5.10.4 (4 March 2022)

This is a notable and important release, as it is the first to include a complete Thai dictionary!

  • English dict: fix relative clause, per mailing list.
  • Remove assorted length restrictions on word-size. #1283
  • Add missing files for building link-generator on Windows. #1285
  • Strip the internally added "._I" from subscripted idioms. #1287
  • New: Provisional Thai dictionary. #1279

Version 5.10.3 (14 February 2022)

  • Remove `node.js/package-lock.json` from tarball distribution. #1251
  • Fix Windows MSVC build break. #1253
  • Fix memory leak in the "!" link-parser command. #1256
  • Add C++ regex support. It is now the default for MSVC builds. #1258
  • Fix spell-guess for run-on words. #1249
  • Port link-generator to MS-Windows. #1269
  • Fix apostrophe handling for link-generator w/sqlite3 dicts. #1276

Version 5.10.2 (16 September 2021)

  • Fix python install path.
  • Fix size in brand-new `link-generator` (hits 32-bit & ARM) #1247

Version 5.10.1 (7 September 2021)

  • Fix perl bindings build fail. #1248

Version 5.10.0 (4 September 2021)

The minor version number has been bumped because of a change to the link types used for idioms. Subscripts with an underbar are now reserved.

Users of the unsupervised language learning project will need this version. It contains fixes for random corpora generation.

  • Expanded English vocabulary
  • Support dictionary "#define allow-duplicate-words true". #1204
  • Fix crash for sentences containing wildcard words. #1206
  • Connector names starting with "ID" are no longer reserved. #1208
  • Connector names starting with underbar are reserved for internal use.
  • ".I" subscripts are no longer reserved; "._" subscripts are reserved. These last three changes introduce linkage incompatibilities.
  • Fix parsing with nulls when using an sqlite3 dictionary.
  • Fix regexes for NetBSD when using libc regexes. #1223
  • English dict: fix many "how?" questions.
  • English dict: fix conditional sentences #1240

Version 5.9.1 (28 April 2021)

Emergency bug fix.

  • Fix build break when SQLite3 is not installed. #1195

Version 5.9.0 (25 April 2021)

An experimental sentence generator has been added. This generator will create new grammatical sentences, based on a "fill in the blanks" approach to specifying a template sentence. The dictionary is scanned for any suitable words that might fit into wild-card locations; the resulting sentences are then printed. This is particularly useful for generating random corpora of grammatically valid sentences.

  • Use #define for custom configuration in dictionaries. #1128
  • Panic-mode fixes and extensions. In link-parser see !help panic_variables.
  • English dict: fix silly mistake with "I love cats and dogs".
  • Disable maintainer-mode in `configure.ac`.
  • Fix very rare crash/corruption introduced in v.5.8.1 #1142
  • English dict: fix problems with "just/only".
  • English dict: work on hesitation markers.
  • Fix multi-threading mem-leak. #1149
  • Provide emscripten javascript wrapper for the command-line parser.
  • Public API shared library entry points exported automatically. #1182
  • Provide bindings for the Vala programming language.
  • Increase number of allowed idiom expressions. #1187
  • Replace O(n^2) idiom loading algo by an O(n log n) algo. #1194
  • Disable SAT solver by default.
  • New tool: Sentence generator! This is an experimental prototype.

A list of older changes can be found here.

Website

Issues concerning this website should be addressed to Linas Vepstas - <linasvepstas@gmail.com> or Dom Lachowicz - <domlachowicz@gmail.com>.

License

Current versions of the Link Grammar parser software, language dictionaries and documentation are available under the LGPL v2.1 license. Versions prior to 5.0.0 are available under a variant of the BSD license.

Copyright (c) 2003-2004 Daniel Sleator, David Temperley, and John Lafferty. All rights reserved.
Copyright (c) 2003 Peter Szolovits
Copyright (c) 2004,2012,2013 Sergey Protasov
Copyright (c) 2006 Sampo Pyysalo
Copyright (c) 2007 Mike Ross
Copyright (c) 2008,2009,2010 Borislav Iordanov
Copyright (c) 2008-2022 Linas Vepstas
Copyright (c) 2014-2022 Amir Plivatsky
Copyright (c) 2021-2022 Prachya Boonkwan