Diffstat (limited to 'doc/context/sources/general/manuals/onandon')
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-110.tex          137
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-53.tex           288
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-emoji.tex         16
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-execute.tex      396
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-expansion.tex    307
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-fences.tex       499
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-media.tex        220
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-modern.tex      1284
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-performance.tex  523
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-runtoks.tex      531
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon-variable.tex     328
-rw-r--r--  doc/context/sources/general/manuals/onandon/onandon.tex               22
12 files changed, 4106 insertions, 445 deletions
diff --git a/doc/context/sources/general/manuals/onandon/onandon-110.tex b/doc/context/sources/general/manuals/onandon/onandon-110.tex
new file mode 100644
index 000000000..e8b005f24
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-110.tex
@@ -0,0 +1,137 @@
+% language=uk
+
+% After watching \quotation {What Makes This Song Great 30: Alanis Morissette} by
+% Rick Beato, it's tempting to ponder \quotation {What makes \TEX\ great}.
+
+\startcomponent onandon-110
+
+\environment onandon-environment
+
+\startchapter[title={Getting there, version 1.10}]
+
+When we decided to turn experiments with \LUA\ extensions to \PDFTEX\ into
+developing \LUATEX\ as alternative engine we had, in addition to opening up some
+of \TEX's internals, some extensions in mind. Around version 1.00 most was
+already achieved and with version 1.10 we're pretty close to where we want to be.
+The question is, when are we ready? In order to answer that I will look at four
+aspects:
+
+\startitemize[packed]
+\startitem objectives \stopitem
+\startitem functionality \stopitem
+\startitem performance \stopitem
+\startitem stability \stopitem
+\stopitemize
+
+The main {\em objective} was to open up \TEX\ in a way that permits extensions
+without the need to patch the engine. Although it might suit us, we don't want to
+change the internals too much, first of all because \TEX\ is \TEX, the documented
+program with a large legacy. \footnote {This is reflected in the keywords that
+exposed mechanisms use: they reflect internal variable names and constants and as
+a consequence there is inconsistency there.} Discussions about how to extend
+\TEX\ are not easy and seldom lead to agreement, so it is better to provide a way
+to do what you like without bothering other users and|/|or interfering with macro
+packages. I think that this objective is met quite well now. Other objectives,
+like embedding basic graphic capabilities using \METAPOST\ have already been met
+long ago. There is more control over the backend and modern fonts can be dealt
+with.
+
+The {\em functionality} in terms of primitives has been extended but within
+reasonable bounds: we only added things that make coding a bit more natural but
+we realize that this is very subjective. So, here again we can say that we met
+our goals. A lot can be achieved via \LUA\ code and users and developers need to
+get accustomed to that if they want to move on with \LUATEX. We will not
+introduce features that get added to or are part of other engines.
+
+We wanted to keep {\em performance} acceptable. The core \TEX\ engine is
+already pretty fast and it's often the implementation of macros (in macro
+packages) that creates a performance hit. Going \UTF\ has a price as do modern
+fonts. At the time of this writing processing the 270 page \LUATEX\ manual takes
+about 12 seconds (one run), which boils down to over 22 pages per second.
+
+\starttabulate[||c|c|]
+\NC \BC runtime \BC overhead \NC \NR
+\BC \LUATEX \NC $12.0$ \NC $+0.6$ \NC \NR
+\BC \LUAJITTEX \NC $ 9.7$ \NC $+0.5$ \NC \NR
+\stoptabulate
+
+Is this fast or slow? One can do tests with specific improvements (using new
+primitives) but in practice it's very hard to improve performance significantly.
+This is because a test with millions of calls that shows a .05 second improvement
+disappears when one only has a few thousand calls. Many small improvements can
+add up, but less than one thinks, especially when macros are already quite
+optimal. Also this runtime includes time normally used for running additional
+programs (e.g.\ for getting bibliographies right).
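The scale effect is easy to replay in plain \LUA. The harness below is only an illustration: the workload and the call counts are made up, not taken from any \CONTEXT\ code.

```lua
-- a throwaway timing harness: run f() n times and report the elapsed time
local function bench(n, f)
    local t0 = os.clock()
    for i = 1, n do
        f(i)
    end
    return os.clock() - t0
end

-- the same (made up) workload, measured at two very different call counts
local big   = bench(1000000, function(i) return tostring(i) end)
local small = bench(5000,    function(i) return tostring(i) end)
print(big, small) -- the second number is down in measurement-noise territory
```

A per-call saving that looks impressive over a million iterations simply vanishes when a real job only makes a few thousand calls.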
+
+It must be said that performance is not completely under our control. For
+instance, we have patched the \LUAJIT\ hash function because it favours \URL's
+and therefore hashes the middle of the string, which is bad for our use
+as we are more interested in the (often unique) start of strings. We also
+compress the format which speeds up loading, but not on the native \WINDOWS\ 64~bit
+binary. At the time of this writing the extra overhead is 2~seconds due to some
+suboptimal gzip handling; the cross compiled 64~bit mingw binaries that I use
+don't suffer from this. When I was testing the 32~bit binaries on the machine of
+a colleague, I was surprised to measure the following differences on a complex
+document with hundreds of \XML\ files, many images and a lot of manipulations.
+
+\starttabulate[||c|c|]
+\NC \BC 1.08 with \LUA\ 5.2 \BC 1.09 with \LUA\ 5.3 \NC \NR
+\BC \LUATEX \NC $21.5$ \NC $15.2$ \NC \NR
+\BC \LUAJITTEX \NC $10.7$ \NC $10.3$ \NC \NR
+\stoptabulate
+
+Now, these are just rough numbers but they demonstrate that the gap between
+\LUATEX\ and \LUAJITTEX\ is becoming less which is good because at this moment it
+looks like \LUAJIT\ will not catch up with \LUA\ 5.3 so at some point we might
+drop it. It will be interesting to see what \LUA\ 5.4 will bring as it offers an
+alternative garbage collector. And imagine that the regular \LUA\ virtual
+machine gets more optimized.
+
+You also have to take into account that having a browser open in the background
+of a \TEX\ run has way more impact than a few tenths of a second in \LUATEX\
+performance. The same is true for memory usage: why bother about \LUATEX\ taking
+tens of megabytes for fonts while a few tabs in a browser can bump memory
+consumption to gigabytes. Also, using a large \TEX\ tree (say the
+whole of \TEXLIVE) can have a bit of a performance hit! Or what about inefficient
+callbacks, using inefficient \LUA\ code or badly designed solutions? What we
+could gain here we lose there, so I think we can safely say that the current
+implementation of \LUATEX\ is as good as you can (and will) get. Why should we
+introduce obscure optimizations where on workstations \TEX\ is just one of the
+many processes? Why should we bother too much to speed up on servers that have
+start|-|up or job management overhead or are connected to relatively slow remote
+file systems? Why squeeze out a few more milliseconds when badly written macros or
+styles can have way more impact on performance? So, for now we're satisfied
+with performance. Just for the record, the runtime of \CONTEXT\ \MKII\
+with other engines compared to \LUATEX\ with \MKIV\ for the next snippet of code:
+
+\starttyping
+\dorecurse{250}{\input tufte\par}
+\stoptyping
+
+is 2.8 seconds for \XETEX, 1.5 seconds for \LUATEX, 1.2 seconds for \LUAJITTEX,
+and 0.9 seconds for \PDFTEX. Of course this is not really a practical test but it
+demonstrates the baseline performance on just text. The 64 bit version of \PDFTEX\
+is actually quite a bit slower on my machine. Anyway, \LUATEX\ (1.09) with \MKIV\
+is doing okay here.
+
+That brings us to {\em stability}. In order to achieve that we will not introduce
+many more extensions. That way users get accustomed to what is there (read: there
+is no need to search for what else is possible). Also, it means that existing
+functionality can become bug free because no new features can interfere. So, at
+some point we have to decide that this is it. If we can do what we want now,
+there are no strong arguments for more. In that perspective version 1.10 can be
+considered very close to what we want to achieve.
+
+Of course development will continue. For instance, the \PDF\ inclusion code will
+be replaced by more lightweight and independent code. Names of functions and
+symbolic constants might be normalized (as mentioned, currently they are often
+related to or derived from internals). More documentation will be added. We will
+quite probably keep up with \LUA\ versions. Also the \FFI\ interface will become
+more stable. And for sure bugs will be fixed. We might add a few more options to
+control the behaviour of, for instance, math rendering. Some tricky internals (like
+alignments) might get better attribute support if possible. But currently we
+think that most fundamental issues have been dealt with.
+
+\stopchapter
+
+\stopcomponent
diff --git a/doc/context/sources/general/manuals/onandon/onandon-53.tex b/doc/context/sources/general/manuals/onandon/onandon-53.tex
new file mode 100644
index 000000000..0d5dc1b9c
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-53.tex
@@ -0,0 +1,288 @@
+% language=uk
+
+\startcomponent onandon-53
+
+\environment onandon-environment
+
+\startchapter[title={From \LUA\ 5.2 to 5.3}]
+
+When we started with \LUATEX\ we used \LUA\ 5.1 and moved to 5.2 when that became
+available. We didn't run into issues then because there were no fundamental
+changes that could not be dealt with. However, when \LUA\ 5.3 was announced in
+2015 we were not sure if we should make the move. The main reason was that we'd
+chosen \LUA\ because of its clean design which meant that we had only one number
+type: double. In 5.3 on the other hand, deep down a number can be either an
+integer or a floating point quantity.
+
+Internally \TEX\ is mostly (up to) 32-bit integers and when we go from \LUA\ to
+\TEX\ we round numbers. Nonetheless one can expect some benefits in using
+integers. Performance|-|wise we didn't expect much, and memory consumption would
+be the same too. So, the main question then was: can we get the same output and
+not run into trouble due to possible differences in serializing numbers; after
+all \TEX\ is about stability. The serialization aspect is for instance important
+when we compare quantities and|/|or use numbers in hashes.
+
+Apart from this change in number model, which comes with a few extra helpers,
+another extension in 5.3 was that bit|-|wise operations are now part of the
+language. The lpeg library is still not part of stock \LUA. There is some minimal
+\UTF8 support, but less than we provide in \LUATEX\ already. So, looking at these
+changes, we were not in a hurry to update. Also, it made sense to wait till this
+important number|-|related change was stable.
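Nothing engine-specific is needed to see the new model at work; a few lines of plain \LUA\ 5.3 show the number subtypes as well as the new bitwise operators:

```lua
-- integer and float are subtypes of the single "number" type in Lua 5.3
print(math.type(1))    -- integer
print(math.type(1.0))  -- float
print(1 == 1.0)        -- true: equal as values ...
print(tostring(1.0))   -- 1.0 : ... but serialized differently
-- bitwise operators are now part of the language itself
print(3 & 5, 1 << 6)   -- 1    64
```

That last line (equal values, different serializations) is exactly the property that matters for \TEX, where numbers end up in a token stream as strings.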
+
+But, a few years later, we still had it on our agenda to test, and after the
+\CONTEXT\ 2017 meeting we decided to give it a try; here are some observations. A
+quick test was just dropping in the new \LUA\ code and seeing if we could make a
+\CONTEXT\ format. Indeed that was no big deal but a test run failed because at
+some point a (for instance) \type {1} became a \type {1.0}. It turned out that
+serializing has some side effects. And with some ad hoc prints for tracing (in
+the \LUATEX\ source) I could figure out what went on. How numbers are seen can
+(to some extent) be deduced from the \type {string.format} function, which is in
+\LUA\ a combination of parsing, splitting and concatenation combined with piping
+to the \CCODE\ \type {sprintf} function. \footnote {Actually, at some point I
+decided to write my own formatter on top of \type {format} and I ended up with
+splitting as well. It's only now that I realize why this is working out so well
+(in terms of performance): simple formats (single items) are passed more or less
+directly to \type {sprintf} and as \LUA\ itself is fast, due to some caching, the
+overhead is small compared to the built|-|in splitter method. And the \CONTEXT\
+formatter has many more options and is extensible.}
+
+\starttyping
+local a = 2 * (1/2) print(string.format("%s", a),math.type(a))
+local b = 2 * (1/2) print(string.format("%d", b),math.type(b))
+local c = 2 print(string.format("%d", c),math.type(c))
+local d = -2 print(string.format("%d", d),math.type(d))
+local e = 2 * (1/2) print(string.format("%i", e),math.type(e))
+local f = 2.1 print(string.format("%.0f",f),math.type(f))
+local g = 2.0 print(string.format("%.0f",g),math.type(g))
+local h = 2.1 print(string.format("%G", h),math.type(h))
+local i = 2.0 print(string.format("%G", i),math.type(i))
+local j = 2 print(string.format("%.0f",j),math.type(j))
+local k = -2 print(string.format("%.0f",k),math.type(k))
+\stoptyping
+
+This gives the following results:
+
+\starttabulate[|cBT|c|T|c|cT|]
+\BC a \NC 2 * (1/2)\NC s \NC 1.0 \NC float \NC \NR
+\BC b \NC 2 * (1/2)\NC d \NC 1 \NC float \NC \NR
+\BC c \NC 2 \NC d \NC 2 \NC integer \NC \NR
+\BC d \NC -2 \NC d \NC -2 \NC integer \NC \NR
+\BC e \NC 2 * (1/2)\NC i \NC 1 \NC float \NC \NR
+\BC f \NC 2.1 \NC .0f \NC 2 \NC float \NC \NR
+\BC g \NC 2.0 \NC .0f \NC 2 \NC float \NC \NR
+\BC h \NC 2.1 \NC G \NC 2.1 \NC float \NC \NR
+\BC i \NC 2.0 \NC G \NC 2 \NC float \NC \NR
+\BC j \NC 2 \NC .0f \NC 2 \NC integer \NC \NR
+\BC k \NC -2 \NC .0f \NC -2 \NC integer \NC \NR
+\stoptabulate
+
+This demonstrates that we have to be careful when we need these numbers
+represented as strings. In \CONTEXT\ the number of places where we had to check
+for that was not that large; in fact, only some hashing related to font sizes had
+to be done using explicit rounding.
+
+Another surprising side effect is the following. Instead of:
+
+\starttyping
+local n = 2^6
+\stoptyping
+
+we now need to use:
+
+\starttyping
+local n = 0x40
+\stoptyping
+
+or just:
+
+\starttyping
+local n = 64
+\stoptyping
+
+because we don't want this to be serialized to \type {64.0} which is due to the
+fact that a power results in a float. One can wonder if this makes sense when we
+apply it to an integer.
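The underlying rule is that \type {^} and \type {/} always produce floats, while \type {//} and the shift operators stay in the integer domain; a quick check:

```lua
print(2 ^ 6)          -- 64.0 : exponentiation always yields a float
print(1 << 6)         -- 64   : shifting stays in the integer domain
print(7 // 2, 7 / 2)  -- 3    3.5 : // floors, / always floats
print(math.type(math.floor(2 ^ 6)))  -- integer: floor converts back
```

So when a power or division feeds something that gets serialized, an explicit \type {math.floor} (or a shift) is what keeps the \type {64} from becoming \type {64.0}.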
+
+At any rate, once we could process a file, two documents were chosen for a
+performance test. Some experiments with loops and casts had demonstrated that we
+could expect a small performance hit and indeed, this was the case. Processing
+the \LUATEX\ manual takes 10.7 seconds with 5.2 on my 5-year-old laptop and 11.6
+seconds with 5.3. If we consider that \CONTEXT\ spends 50\% of its time in \LUA,
+then we see a 20\% performance penalty. Processing the \METAFUN\ manual (which
+has lots of \METAPOST\ images) went from less than 20 seconds (\LUAJITTEX\ does
+it in 16 seconds) up to more than 27 seconds. So there we lose more than 50\% on
+the \LUA\ end. When we observed these kinds of differences, Luigi and I
+immediately got into debugging mode, partly out of curiosity, but also because
+consistent performance is important to~us.
+
+Because these numbers made no sense, we traced different sub-mechanisms and
+eventually it became clear that the reason for the speed penalty was that the
+core \typ {string.format} function was behaving quite badly in the \type {mingw}
+cross-compiled binary, as seen by this test:
+
+\starttyping
+local t = os.clock()
+for i=1,1000*1000 do
+ -- local a = string.format("%.3f",1.23)
+ -- local b = string.format("%i",123)
+ local c = string.format("%s",123)
+end
+print(os.clock()-t)
+\stoptyping
+
+\starttabulate[|c|c|c|c|c|]
+\BC \BC lua 5.3 \BC lua 5.2 \BC texlua 5.3 \BC texlua 5.2 \BC \NR
+\BC a \NC 0.43 \NC 0.54 \NC 3.71 (0.47) \NC 0.53 \NC \NR
+\BC b \NC 0.18 \NC 0.24 \NC 3.78 (0.17) \NC 0.22 \NC \NR
+\BC c \NC 0.26 \NC 0.68 \NC 3.67 (0.29) \NC 0.66 \NC \NR
+\stoptabulate
+
+The 5.2 binaries perform the same but the stock \LUA\ 5.3 binary greatly
+outperforms \LUATEX, and so we had to figure out why. After all, all this integer
+optimization could bring some gain! It took us a while to figure this out. The
+numbers in parentheses are the results after fixing this.
+
+Because font internals are specified in integers one would expect a gain
+in running:
+
+\starttyping
+mtxrun --script font --reload force
+\stoptyping
+
+and indeed that is the case. On my machine a scan results in 2561 registered
+fonts from 4906 read files and with 5.2 that takes 9.1 seconds while 5.3 needs a
+bit less: 8.6 seconds (with the bad format performance) and even less once that
+was fixed. For a test:
+
+\starttyping
+\setupbodyfont[modern] \tf \bf \it \bs
+\setupbodyfont[pagella] \tf \bf \it \bs
+\setupbodyfont[dejavu] \tf \bf \it \bs
+\setupbodyfont[termes] \tf \bf \it \bs
+\setupbodyfont[cambria] \tf \bf \it \bs
+\starttext \stoptext
+\stoptyping
+
+This code needs 30\% more runtime so the question is: how often do we call \type
+{string.format} there? A first run (when we wipe the font cache) needs some
+715,000 calls while successive runs need 115,000 calls so that slowdown
+definitely comes from the bad handling of \type {string.format}. When we drop in
+a \LUA\ update or whatever other dependency we don't want this kind of impact. In
+fact, when one uses external libraries that are or can be compiled under the
+\TEX\ Live infrastructure and the impact is of this magnitude, it's bad advertising,
+especially when one considers the occasional complaint about \LUATEX\ being
+slower than other engines.
+
+The good news is that eventually Luigi was able to nail down this issue and we
+got a binary that performed well. It looks like \LUA\ 5.3.4 (cross|)|compiles
+badly with \GCC\ 5.3.0 and 6.3.0.
+
+So in the end caching the fonts takes:
+
+\starttabulate[||c|c|]
+\BC \BC caching \BC running \NC \NR
+\BC 5.2 stock \NC 8.3 \NC 1.2 \NC \NR
+\BC 5.3 bugged \NC 12.6 \NC 2.1 \NC \NR
+\BC 5.3 fixed \NC 6.3 \NC 1.0 \NC \NR
+\stoptabulate
+
+So indeed it looks like 5.3 is able to speed up \LUATEX\ a bit, given that one
+integrates it in the right way! Using a recent compiler is needed too, although
+one can wonder when a bad case will show up again. One can also wonder why such a
+slowdown can mostly go unnoticed, because for sure \LUATEX\ is not the only
+compiled program.
+
+The next examples are some edge cases that show you need to be aware
+that
+\startitemize[n,text,nostopper]
+ \startitem an integer has its limits, \stopitem
+ \startitem that hexadecimal numbers are integers and \stopitem
+ \startitem that \LUA\ and \LUAJIT\ can be different in details. \stopitem
+\stopitemize
+
+\starttabulate[||T|T|]
+\NC \NC \tx print(0xFFFFFFFFFFFFFFFF) \NC \tx print(0x7FFFFFFFFFFFFFFF) \NC \NR
+\HL
+\BC lua 5.2 \NC 1.844674407371e+019 \NC 9.2233720368548e+018 \NC \NR
+\BC luajit \NC 1.844674407371e+19 \NC 9.2233720368548e+18 \NC \NR
+\BC lua 5.3 \NC -1 \NC 9223372036854775807 \NC \NR
+\stoptabulate
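These results follow from 64-bit integers wrapping around instead of overflowing to floats; \type {math.maxinteger} and \type {math.mininteger} mark the boundaries:

```lua
print(math.maxinteger)                        -- 9223372036854775807
print(0x7FFFFFFFFFFFFFFF == math.maxinteger)  -- true: hexadecimals are integers
print(0xFFFFFFFFFFFFFFFF)                     -- -1  : all bits set, wrapped
print(math.maxinteger + 1 == math.mininteger) -- true: silent wraparound
print(math.maxinteger + 1.0)                  -- adding a float switches to float math
```

In 5.2 the same literals were doubles, which is why that row in the table shows (approximated) scientific notation instead.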
+
+So, to summarize the process. A quick test was relatively easy: move 5.3 into the
+code base, adapt a little bit of internals (there were some \LUATEX\ interfacing
+bits where explicit rounding was needed), run tests and eventually fix some
+issues related to the Makefile (compatibility) and \CCODE\ obscurities (the slow
+\type {sprintf}). Adapting \CONTEXT\ was also not much work, and the test suite
+uncovered some nasty side effects. For instance, the valid 5.2 solution:
+
+\starttyping
+local s = string.format("%02X",u/1024)
+local s = string.char (u/1024)
+\stoptyping
+
+now has to become (both 5.2 and 5.3):
+
+\starttyping
+local s = string.format("%02X",math.floor(u/1024))
+local s = string.char (math.floor(u/1024))
+\stoptyping
+
+or (both 5.2 and (emulated or real) 5.3):
+
+\starttyping
+local s = string.format("%02X",bit32.rshift(u,10))
+local s = string.char (bit32.rshift(u,10))
+\stoptyping
+
+or (only 5.3):
+
+\starttyping
+local s = string.format("%02X",u >> 10)
+local s = string.char (u >> 10)
+\stoptyping
+
+or (only 5.3):
+
+\starttyping
+local s = string.format("%02X",u//1024)
+local s = string.char (u//1024)
+\stoptyping
+
+A conditional section like:
+
+\starttyping
+if LUAVERSION >= 5.3 then
+ local s = string.format("%02X",u >> 10)
+ local s = string.char (u >> 10)
+else
+ local s = string.format("%02X",bit32.rshift(u,10))
+ local s = string.char (bit32.rshift(u,10))
+end
+\stoptyping
+
+will fail because (of course) the 5.2 parser doesn't like that. In \CONTEXT\ we
+have some experimental solutions for that but that is beyond this summary.
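One way around the parser problem, shown here as an illustration and not as the actual \CONTEXT\ mechanism, is to hide the 5.3-only syntax in a string and compile it at run time with \type {load}, so that a 5.2 parser never sees the \type {>>} token:

```lua
rshift = nil
if _VERSION >= "Lua 5.3" then
    -- only a 5.3+ parser ever reads the shift operator: it sits in a string
    rshift = load("return function(u, n) return u >> n end")()
else
    -- stock 5.2 (and LuaTeX) provide the bit32 library instead
    rshift = function(u, n) return bit32.rshift(u, n) end
end
print(rshift(0x4000, 10)) -- 16
```

The price is an extra indirection and somewhat less readable code, which is why this only makes sense in a few low-level helpers.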
+
+In the process a few \UTF\ helpers were added to the string library so that we
+have a common set for \LUAJIT\ and \LUA\ (the \type {utf8} library that was added
+to 5.3 is not that important for \LUATEX). For now we keep the \type {bit32}
+library on board. Of course we'll not mention all the details here.
+
+When we consider a gain in speed of 5-10\% with 5.3 that also means that the gain
+of \LUAJITTEX\ compared to 5.2 becomes less. For instance, in font processing both
+engines now perform about the same.
+
+As I write this, we've just entered 2018 and after a few months of testing
+\LUATEX\ with \LUA\ 5.3 we're confident that we can move the code to the
+experimental branch. This means that we will use this version in the \CONTEXT\
+distribution and likely will ship this version as 1.10 in 2019, where it becomes
+the default. The 2018 version of \TEX~Live will have 1.07 with \LUA\ 5.2 while
+intermediate versions of the \LUA\ 5.3 binary will end up on the \CONTEXT\
+garden, probably with numbers 1.08 and 1.09 (who knows what else we will add or
+change in the meantime).
+
+\stopchapter
+
+\stopcomponent
+
+% collectgarbage("count") -- two return values in 2
diff --git a/doc/context/sources/general/manuals/onandon/onandon-emoji.tex b/doc/context/sources/general/manuals/onandon/onandon-emoji.tex
index 1f67cc528..0a89727a1 100644
--- a/doc/context/sources/general/manuals/onandon/onandon-emoji.tex
+++ b/doc/context/sources/general/manuals/onandon/onandon-emoji.tex
@@ -75,8 +75,8 @@ mentions twice that amount. Currently in \CONTEXT\ we resolve such combinations
when requested.} so imagine what will happen in the future. But, instead of
making a picture for each variant, a different solution has been chosen. For
coloring this seguiemj font uses the (very flexible) stacking technology: a color
-shape is an overlay of colored symbols. The colors are organized in pallets and
-it's no big deal to add additional pallets if needed. Instead of adding
+shape is an overlay of colored symbols. The colors are organized in palettes and
+it's no big deal to add additional palettes if needed. Instead of adding
pre|-|composed shapes (as is needed with bitmaps and \SVG) snippets are used to
build alternative glyphs and these can be combined into new shapes by
substitution and positioning (for that kerns, mark anchoring and distance
@@ -186,11 +186,11 @@ account:
scale too.
\stopitem
\startitem
- How efficient is a shape constructed. In that respect a bitmap or \SVG\ image
+ How efficiently is a shape constructed? In that respect a bitmap or \SVG\ image
is just one entity.
\stopitem
\startitem
- How well can (semi) arbitrary combinations of emoji be provided. Here the
+ How well can (semi)arbitrary combinations of emoji be provided? Here the
glyph approach wins.
\stopitem
\startitem
@@ -203,17 +203,17 @@ account:
social political reasons.
\stopitem
\startitem
- Are black and white shapes provided alongside color shapes.
+ Are black and white shapes provided alongside color shapes?
\stopitem
\stopitemize
Maybe an \SVG\ or bitmap image can have a lot of detail compared to a stacked
-glyph but, when we're just using pictographic representations, the later is the
+glyph but, when we're just using pictographic representations, the latter is the
best choice.
When I was playing a bit with the skin tone variants and other combinations that
should result in some composed shape, I used the \UNICODE\ test files but I got
-the impression that there are some errors in the test suite, for instance with
+the impression that there were some errors in the test suite, for instance with
respect to modifiers. Maybe the fonts are just doing the wrong thing or maybe
some implement these sequences a bit inconsistently. This will probably improve
over time but the question is if we should intercept issues. I'm not in favour of
@@ -409,7 +409,7 @@ In case you wonder how some of the details above were typeset, there is a module
\NC \type {\ShowEmojiSnippets} \NC show the snippets of a given emoji \NC \NR
\NC \type {\ShowEmojiSnippetsOverlay} \NC show the overlayed snippets of a given emoji \NC \NR
\NC \type {\ShowEmojiGlyphs} \NC show the snippets of a typeset emoji \NC \NR
-\NC \type {\ShowEmojiPalettes} \NC show the color pallets in the current font \NC \NR
+\NC \type {\ShowEmojiPalettes} \NC show the color palettes in the current font \NC \NR
\stoptabulate
Examples of usage are:
diff --git a/doc/context/sources/general/manuals/onandon/onandon-execute.tex b/doc/context/sources/general/manuals/onandon/onandon-execute.tex
new file mode 100644
index 000000000..abb3b4d8a
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-execute.tex
@@ -0,0 +1,396 @@
+% language=uk
+
+\startcomponent onandon-execute
+
+\environment onandon-environment
+
+\startchapter[title={Executing \TEX}]
+
+Much of the \LUA\ code in \CONTEXT\ originates from experiments. When it survives
+in the source code it is probably used, waiting to be used or kept for
+educational purposes. The functionality that we describe here has already been
+present for a while in \CONTEXT, but improved a little starting with \LUATEX\
+1.08 due to an extra helper. The code shown here is generic and not used in
+\CONTEXT\ as such.
+
+Say that we have this code:
+
+\startbuffer
+for i=1,10000 do
+ tex.sprint("1")
+ tex.sprint("2")
+ for i=1,3 do
+ tex.sprint("3")
+ tex.sprint("4")
+ tex.sprint("5")
+ end
+ tex.sprint("\\space")
+end
+\stopbuffer
+
+\typebuffer
+
+% \ctxluabuffer
+
+When we call \type {\directlua} with this snippet we get some 30 pages of \type
+{12345345345}. The printed text is saved till the end of the \LUA\ call, so
+basically we pipe some 170,000 characters to \TEX\ that get interpreted as one
+paragraph.
+
+Now imagine this:
+
+\startbuffer
+\setbox0\hbox{xxxxxxxxxxx} \number\wd0
+\stopbuffer
+
+\typebuffer
+
+which gives \getbuffer. If we check the box in \LUA, with:
+
+\startbuffer
+tex.sprint(tex.box[0].width)
+tex.sprint("\\enspace")
+tex.sprint("\\setbox0\\hbox{!}")
+tex.sprint(tex.box[0].width)
+\stopbuffer
+
+\typebuffer
+
+the result is {\tttf \ctxluabuffer}, which is not what you would expect at first
+sight. However, if you consider that we just pipe to a \TEX\ buffer that gets
+parsed after the \LUA\ call, it will be clear that the reported width is the
+width that we started with. It will work all right if we say:
+
+\startbuffer
+tex.sprint(tex.box[0].width)
+tex.sprint("\\enspace")
+tex.sprint("\\setbox0\\hbox{!}")
+tex.sprint("\\directlua{tex.sprint(tex.box[0].width)}")
+\stopbuffer
+
+\typebuffer
+
+because now we get: {\tttf\ctxluabuffer}. It's not that complex to write some
+support code that makes this more convenient. This can work out quite well but
+there is a drawback. If we use this code:
+
+\startbuffer
+print(status.input_ptr)
+tex.sprint(tex.box[0].width)
+tex.sprint("\\enspace")
+tex.sprint("\\setbox0\\hbox{!}")
+tex.sprint("\\directlua{print(status.input_ptr)\
+ tex.sprint(tex.box[0].width)}")
+\stopbuffer
+
+\typebuffer
+
+Here we get \type {6} and \type {7} reported. You can imagine that when a lot of
+nested \type {\directlua} calls happen, we can get an overflow of the input level
+or (depending on what we do) the input stack size. Ideally we want to do a \LUA\
+call, temporarily go to \TEX, return to \LUA, etc.\ without needing to worry
+about nesting and possible crashes due to \LUA\ itself running into problems. One
+charming solution is to use so|-|called coroutines: independent \LUA\ threads
+that one can switch between --- you jump out from the current routine to another
+and from there back to the current one. However, when we use \type {\directlua}
+for that, we still have this nesting issue and what is worse, we keep nesting
+function calls too. This can be compared to:
+
+\starttyping
+\def\whatever{\ifdone\whatever\fi}
+\stoptyping
+
+where at some point \type {\ifdone} is false so we quit. But we keep nesting when
+the condition is met, so eventually we can end up with some nesting related
+overflow. The following:
+
+\starttyping
+\def\whatever{\ifdone\expandafter\whatever\fi}
+\stoptyping
+
+is less likely to overflow because there we have tail recursion which basically
+boils down to not nesting but continuing. Do we have something similar in
+\LUATEX\ for \LUA ? Yes, we do. We can register a function, for instance:
+
+\starttyping
+lua.get_functions_table()[1] = function() print("Hi there!") end
+\stoptyping
+
+and call that one with:
+
+\starttyping
+\luafunction 1
+\stoptyping
+
+This is a bit faster than calling a function like:
+
+\starttyping
+\directlua{HiThere()}
+\stoptyping
+
+which can also be achieved by
+
+\starttyping
+\directlua{print("Hi there!")}
+\stoptyping
+
+which sometimes can be more convenient. Anyway, a function call is what we can
+use for our purpose as it doesn't involve interpretation and effectively behaves
+like a tail call. The following snippet shows what we have in mind:
+
+\startbuffer[code]
+local stepper = nil
+local stack = { }
+local fid = 0xFFFFFF
+local goback = "\\luafunction" .. fid .. "\\relax"
+
+function tex.resume()
+ if coroutine.status(stepper) == "dead" then
+ stepper = table.remove(stack)
+ end
+ if stepper then
+ coroutine.resume(stepper)
+ end
+end
+
+lua.get_functions_table()[fid] = tex.resume
+
+function tex.yield()
+ tex.sprint(goback)
+ coroutine.yield()
+end
+
+function tex.routine(f)
+ table.insert(stack,stepper)
+ stepper = coroutine.create(f)
+ tex.sprint(goback)
+end
+\stopbuffer
+
+\ctxluabuffer[code]
+
+\startbuffer[demo]
+tex.routine(function()
+ tex.sprint(tex.box[0].width)
+ tex.sprint("\\enspace")
+ tex.sprint("\\setbox0\\hbox{!}")
+ tex.yield()
+ tex.sprint(tex.box[0].width)
+end)
+\stopbuffer
+
+\typebuffer[demo]
+We start a routine, jump out to \TEX\ in the middle, come back when we're done
+and continue. This gives us: \ctxluabuffer [demo], which is what we expect.
+
+\setbox0\hbox{xxxxxxxxxxx}
+
+\ctxluabuffer[demo]
+
+This mechanism permits efficient (nested) loops like:
+
+\startbuffer[demo]
+tex.routine(function()
+ for i=1,10000 do
+ tex.sprint("1")
+ tex.yield()
+ tex.sprint("2")
+ tex.routine(function()
+ for i=1,3 do
+ tex.sprint("3")
+ tex.yield()
+ tex.sprint("4")
+ tex.yield()
+ tex.sprint("5")
+ end
+ end)
+ tex.sprint("\\space")
+ tex.yield()
+ end
+end)
+\stopbuffer
+
+\typebuffer[demo]
+
+We do create coroutines, go back and forwards between \LUA\ and \TEX, but avoid
+memory being filled up with printed content. If we flush paragraphs (instead of
+e.g.\ the space) then the main difference is that instead of a small delay due to
+the loop unfolding in a large set of prints and accumulated content, we now get a
+steady flushing and processing.
+
+However, we can still have an overflow of input buffers because we still nest
+them: the limitation at the \TEX\ end has moved to a limitation at the \LUA\ end.
+How come? Here is the code that we use:
+
+\typebuffer[code]
+
+The \type {routine} creates a coroutine, and \type {yield} gives control to \TEX.
+The \type {resume} is done at the \TEX\ end when we're finished there. In
+practice this works fine and when you permit enough nesting and levels in \TEX\
+then you will not easily overflow.
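The dance between \type {routine}, \type {yield} and \type {resume} can be imitated in plain \LUA, with the \TEX\ side replaced by a token queue. This is a deliberately simplified model (a flat queue instead of \TEX's input stack, and no nesting), not something \LUATEX\ itself runs:

```lua
local queue  = { }  -- stands in for the tokens piped to TeX
local RESUME = { }  -- unique value standing in for the \luafunction token
local stepper, stack = nil, { }

local function sprint(s) queue[#queue + 1] = s end

local function resume()
    if coroutine.status(stepper) == "dead" then
        stepper = table.remove(stack)
    end
    if stepper then
        coroutine.resume(stepper)
    end
end

local function yield()
    sprint(RESUME)   -- like printing the goback token ...
    coroutine.yield()-- ... and then handing control to "TeX"
end

local function routine(f)
    stack[#stack + 1] = stepper
    stepper = coroutine.create(f)
    sprint(RESUME)
end

routine(function()
    sprint("one ")
    yield()          -- control goes back to the driver below
    sprint("two ")
    yield()
    sprint("three")
end)

-- this driver plays the role of TeX's main loop: emit text, act on RESUME
local out, i = { }, 1
while i <= #queue do
    if queue[i] == RESUME then resume() else out[#out + 1] = queue[i] end
    i = i + 1
end
print(table.concat(out)) -- one two three
```

Each resume runs the coroutine up to its next yield, the printed material gets processed in between, and the final output comes out in the expected order.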
+
+When I picked up this side project and wondered how to get around it, it suddenly
+struck me that if we could just quit the current input level then nesting would
+not be a problem. Adding a simple helper to the engine made that possible (of
+course figuring it out took a while):
+
+\startbuffer[code]
+local stepper = nil
+local stack = { }
+local fid = 0xFFFFFF
+local goback = "\\luafunction" .. fid .. "\\relax"
+
+function tex.resume()
+ if coroutine.status(stepper) == "dead" then
+ stepper = table.remove(stack)
+ end
+ if stepper then
+ coroutine.resume(stepper)
+ end
+end
+
+lua.get_functions_table()[fid] = tex.resume
+
+if texio.closeinput then
+ function tex.yield()
+ tex.sprint(goback)
+ coroutine.yield()
+ texio.closeinput()
+ end
+else
+ function tex.yield()
+ tex.sprint(goback)
+ coroutine.yield()
+ end
+end
+
+function tex.routine(f)
+ table.insert(stack,stepper)
+ stepper = coroutine.create(f)
+ tex.sprint(goback)
+end
+\stopbuffer
+
+\ctxluabuffer[code]
+
+\typebuffer[code]
+
+The trick is in \type {texio.closeinput}, a recent helper and one that should be
+used with care. We assume that the user knows what she or he is doing. On an old
+laptop with an i7-3840 processor running \WINDOWS\ 10 the following snippet takes
+less than 0.35 seconds with \LUATEX\ and 0.26 seconds with \LUAJITTEX.
+
+\startbuffer[code]
+tex.routine(function()
+ for i=1,10000 do
+ tex.sprint("\\setbox0\\hpack{x}")
+ tex.yield()
+ tex.sprint(tex.box[0].width)
+ tex.routine(function()
+ for i=1,3 do
+ tex.sprint("\\setbox0\\hpack{xx}")
+ tex.yield()
+ tex.sprint(tex.box[0].width)
+ end
+ end)
+ end
+end)
+\stopbuffer
+
+\typebuffer[code]
+
+% \testfeatureonce {1} {\setbox0\hpack{\ctxluabuffer[code]}} \elapsedtime
+
+Say that we run the bad snippet:
+
+\startbuffer[code]
+for i=1,10000 do
+ tex.sprint("\\setbox0\\hpack{x}")
+ tex.sprint(tex.box[0].width)
+ for i=1,3 do
+ tex.sprint("\\setbox0\\hpack{xx}")
+ tex.sprint(tex.box[0].width)
+ end
+end
+\stopbuffer
+
+\typebuffer[code]
+
+% \testfeatureonce {1} {\setbox0\hpack{\ctxluabuffer[code]}} \elapsedtime
+
+This time we need 0.12 seconds in both engines. So what if we run this:
+
+\startbuffer[code]
+\dorecurse{10000}{%
+ \setbox0\hpack{x}
+ \number\wd0
+ \dorecurse{3}{%
+ \setbox0\hpack{xx}
+ \number\wd0
+ }%
+}
+\stopbuffer
+
+\typebuffer[code]
+
+% \testfeatureonce {1} {\setbox0\hpack{\getbuffer[code]}} \elapsedtime
+
+Pure \TEX\ needs 0.30 seconds in both engines, but there we lose 0.13 seconds on
+the loop code. In the \LUA\ example where we yield, the loop code takes hardly
+any time. As we need only 0.05 seconds more it demonstrates that when we use the
+power of \LUA\ the performance hit of the switch is quite small: we yield 40,000
+times! In general, such differences are far exceeded by the overhead: the time
+needed to typeset the content (which \type {\hpack} doesn't do), breaking
+paragraphs into lines, constructing pages and other work involved in the run.
+In \CONTEXT\ we use a slightly different variant which adds some 0.30 seconds of
+overhead, but that is probably true for all \LUA\ usage in \CONTEXT; again, it
+disappears in the rest of the runtime.
+
+Here is another example:
+
+\startbuffer[code]
+\def\TestWord#1%
+ {\directlua{
+ tex.routine(function()
+ tex.sprint("\\setbox0\\hbox{\\tttf #1}")
+ tex.yield()
+ tex.sprint(math.round(100 * tex.box[0].width/tex.hsize))
+ tex.sprint(" percent of the hsize: ")
+ tex.sprint("\\box0")
+ end)
+ }}
+\stopbuffer
+
+\typebuffer[code] \getbuffer[code]
+
+\startbuffer
+The width of next word is \TestWord {inline}!
+\stopbuffer
+
+\typebuffer \getbuffer
+
+Now, in order to stay realistic, this macro can also be defined as:
+
+\startbuffer[code]
+\def\TestWord#1%
+ {\setbox0\hbox{\tttf #1}%
+ \directlua{
+ tex.sprint(math.round(100 * tex.box[0].width/tex.hsize))
+ } %
+ percent of the hsize: \box0\relax}
+\stopbuffer
+
+\typebuffer[code]
+
+We get the same result: \quotation {\getbuffer}.
+
+We have been using a \LUA|-|\TEX\ mix for over a decade now in \CONTEXT, and have
+never really needed this mixed model. There are a few places where we could
+(have) benefitted from it and we might use it in a few places, but so far we have
+done fine without it. In fact, in most cases typesetting can be done fine at the
+\TEX\ end. It's all a matter of imagination.
+
+\stopchapter
+
+\stopcomponent
diff --git a/doc/context/sources/general/manuals/onandon/onandon-expansion.tex b/doc/context/sources/general/manuals/onandon/onandon-expansion.tex
new file mode 100644
index 000000000..73a0b4953
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-expansion.tex
@@ -0,0 +1,307 @@
+% language=uk
+
+\startcomponent onandon-expansion
+
+\environment onandon-environment
+
+\startchapter[title={More (new) expansion trickery}]
+
+Contrary to what one might expect when looking at macro definitions, \TEX\ is
+pretty efficient. Occasionally I wonder if some extra built|-|in functionality
+could help me write better code, but when you program with a bit of care there is
+often not much to gain in terms of tokens and performance. \footnote {The long
+trip to the yearly Bacho\TEX\ meeting is always a good opportunity to ponder
+\TEX\ and its features. The new functionality discussed here is a side effect of
+the most recent trip.} Also, some possible extensions would probably only be
+applied a few times, which makes them low priority. When you look at the
+extensions brought by \ETEX\ the number is not that large, and \LUATEX\ only
+added a few that deal with the language, for instance \tex {expanded}, which is
+like an \tex {edef} without defining a macro and acts on a token list wrapped
+in (normally) curly braces. Just as a reference we mention some of the expansion
+related helpers.
+
+\starttabulate[|l|l|p|]
+\BC command \BC argument \BC
+ comment
+\NC \NR
+\HL
+\NC \tex {expandafter} \NC \type {token} \NC
+ The token after the next token gets expanded (one level only). In tricky
+ \TEX\ code you can often see multiple such commands in sequence which makes a
+ nice puzzle.
+\NC \NR
+\NC \tex {noexpand} \NC \type {token} \NC
+ The token after this command is not expanded in the context of expansion.
+\NC \NR
+\NC \tex {expanded} \NC \type {{tokens}} \NC
+ The given token list is expanded. This command showed up early in \LUATEX\
+ development and was taken from \ETEX\ follow|-|ups. I have mails from 2011
+ mentioning its presence in \PDFTEX\ 1.50 (which was targeted in 2008) but
+ somehow it never ended up in a production version at that time (and we're
+ still not at that version). In \CONTEXT\ we already had a command with that
+ name so there we use \tex {normalexpanded}. Users normally can just use the
+ \CONTEXT\ variant of \type {\expanded}.
+\NC \NR
+\NC \tex {unexpanded} \NC \type {{tokens}} \NC
+ The given token list is hidden from expansion. Again, in \CONTEXT\ we already
+ had a command serving as prefix for definitions so instead we use \tex
+ {normalunexpanded}. In the core of \CONTEXT\ this new \ETEX\ command is hardly
+ used.
+\NC \NR
+\NC \tex {detokenize} \NC \type {{tokens}} \NC
+ The given token list becomes (basically) verbatim \TEX\ code. We had something
+ like that in \CONTEXT\ but have no name clash. It is used in a few places. It's
+ also an \ETEX\ command.
+\NC \NR
+\NC \tex {scantokens} \NC \type {{tokens}} \NC
+ This primitive interprets its argument as a pseudo file. We don't really use it.
+\NC \NR %
+\NC \tex {scantextokens} \NC \type {{tokens}} \NC
+ This \LUATEX\ primitive does the same but has no end|-|of|-|file side
+ effects. This one is also not really used in \CONTEXT.
+\NC \NR
+\NC \tex {protected} \NC \type {\.def} \NC
+ The definition following this prefix, introduced in \ETEX, is unexpandable in
+ the context of expansion. We already used such a command in \CONTEXT\ but
+ with a completely different meaning so use \tex {normalprotected} as prefix
+ or \tex {unexpanded} which is an alias.
+\NC \NR
+\stoptabulate
+
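+As a small illustration of \tex {expanded}: it gives the effect of an \tex
+{edef} without introducing an intermediate macro. The names \type {\foo}, \type
+{\fooA} and \type {\fooB} in this sketch are made up:
+
+\starttyping
+\def\foo{foo}
+\edef\fooA{\foo}                    % \fooA gets the body: foo
+\expanded{\def\noexpand\fooB{\foo}} % \def stays put, \fooB is protected,
+                                    % \foo expands: we get \def\fooB{foo}
+\stoptyping
+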
+Here I will present two other extensions in \LUATEX\ that can come in handy, and
+they are there simply because their effect can hardly be realized otherwise
+(never say never in \TEX). One has to do with immediately applying a definition,
+the other with user defined conditions. The first one relates directly to
+expansion, the second one concerns conditions and relates more to parsing
+branches which on purpose avoids expansion.
+
+For the first one I use some silly examples. I must admit that although I can
+envision useful applications, I need to go over the large amount of \CONTEXT\
+source code to find a place where it really makes things better.
+Take the following definitions:
+
+\startbuffer
+\newcount\NumberOfCalls
+
+\def\TestMe{\advance\NumberOfCalls1 }
+
+\edef\Tested{\TestMe foo:\the\NumberOfCalls}
+\edef\Tested{\TestMe foo:\the\NumberOfCalls}
+\edef\Tested{\TestMe foo:\the\NumberOfCalls}
+
+\meaning\Tested
+\stopbuffer
+
+\typebuffer
+
+The result is a macro \tex {Tested} that not only has the unexpanded incrementing
+code in its body but also hasn't done any advancing:
+
+\getbuffer
+
+Of course when you're typesetting something, this kind of expansion normally is
+not needed. Instead of the above definition we can define \tex {TestMe} in a way
+that expands the assignment immediately. You need of course to be aware of
+preventing look|-|ahead interference by using a space or \tex {relax} (often an
+expression works better as it doesn't leave a \tex {relax}).
+
+\startbuffer
+\def\TestMe{\immediateassignment\advance\NumberOfCalls1 }
+
+\edef\Tested{\TestMe bar:\the\NumberOfCalls}
+\edef\Tested{\TestMe bar:\the\NumberOfCalls}
+\edef\Tested{\TestMe bar:\the\NumberOfCalls}
+
+\meaning\Tested
+\stopbuffer
+
+\typebuffer
+
+This time the counter gets updated and we don't see interference in the resulting
+\tex {Tested} macro:
+
+\getbuffer
+
+Here is a somewhat silly example of an expanded comparison of two \quote
+{strings}:
+
+\startbuffer
+\def\expandeddoifelse#1#2#3#4%
+ {\immediateassignment\edef\tempa{#1}%
+ \immediateassignment\edef\tempb{#2}%
+ \ifx\tempa\tempb
+ \immediateassignment\def\next{#3}%
+ \else
+ \immediateassignment\def\next{#4}%
+ \fi
+ \next}
+
+\edef\Tested
+ {(\expandeddoifelse{abc}{def}{yes}{nop}/%
+ \expandeddoifelse{abc}{abc}{yes}{nop})}
+
+\meaning\Tested
+\stopbuffer
+
+\typebuffer
+
+I don't remember many cases where I needed such an expanded comparison. We have a
+variant in \CONTEXT\ that uses \LUA\ but that one is not really used in the core.
+Anyway, the above code gives:
+
+\getbuffer
+
+You can do the same assignments as in preambles of \tex {halign} and after \tex
+{accent}, which means that assignments to box registers are blocked (boxing
+involves grouping and delayed assignments and so on). The error you will get when
+you use a non||assignment command refers to a prefix, because internally such
+commands are called prefixed commands. Leading spaces and \tex {relax} are
+ignored.
+
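+So, as a sketch, using \CONTEXT's scratch registers (the commented line shows a
+blocked case):
+
+\starttyping
+\immediateassignment\advance\scratchcounter 1 % fine: an assignment
+\immediateassignment\relax\scratchdimen 10pt  % fine: the \relax is ignored
+%\immediateassignment\setbox0\hbox{x}         % error: box assignments are blocked
+\stoptyping
+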
+In addition to this one|-|time immediate assignment a pseudo token list variant
+is provided, so the above could be rewritten to:
+
+\starttyping
+\def\expandeddoifelse#1#2#3#4%
+ {\immediateassigned {
+ \edef\tempa{#1}
+ \edef\tempb{#2}
+ }%
+ \ifx\tempa\tempb
+ \immediateassignment\def\next{#3}%
+ \else
+ \immediateassignment\def\next{#4}%
+ \fi
+ \next}
+\stoptyping
+
+While \tex {expanded} first builds a token list that then gets used, the \tex
+{immediateassigned} primitive just walks over the list delimited by curly braces.
+
+A next extension concerns conditions. If you have done a bit of extensive \TEX\
+programming you know that nested conditions need to be properly constructed in
+for instance macro bodies. This is because (for good reason) \TEX\ goes into a
+fast scanning mode when there is a match and it has to skip the \tex {else} up to
+\tex {fi} branch. In order to do that properly a nested \tex {if} in there needs
+to have a matching \tex {fi}.
+
+In practice this is no real problem and careful coding will never give trouble
+here: you can either hide nested code in a macro or somehow jump over nested
+conditions if really needed. Actually you only need to care when you pick up a
+token inside the branch because likely you don't want to pick up for instance a
+\tex {fi} but something that comes after it. Say that we have a sane conditional
+setup like this:
+
+\starttyping
+\newif\iffoo \foofalse
+\newif\ifbar \bartrue
+
+\iffoo
+ \ifbar \else \fi
+\else
+ \ifbar \else \fi
+\fi
+\stoptyping
+
+Here the \tex {iffoo} and \tex {ifbar} need to be equivalent to \tex {iftrue} or
+\tex {iffalse} in order to succeed well and that is what for instance \tex
+{footrue} and \tex {foofalse} will do: change the meaning of \tex {iffoo}.
+
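+Behind the scenes \tex {newif} defines these switchers more or less like this:
+
+\starttyping
+\def\footrue {\let\iffoo\iftrue }
+\def\foofalse{\let\iffoo\iffalse}
+\stoptyping
+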
+But imagine that you want something more complex. You want for instance to let
+\tex {ifbar} do some calculations. In that case you want it to behave a bit like
+what a so|-|called \type {vardef} in \METAPOST\ does: the end result is what
+matters. Now, because \TEX\ macros often are a complex mix of expandable and
+non|-|expandable this is not that trivial. One solution is a dedicated definer,
+say \tex {cdef} for defining a macro with conditional properties. I actually
+implemented such a definer a few years ago but left it so long in a folder with
+ideas that I only found it back after I had come up with another solution. It was
+probably proof that it was not that good an idea.
+
+The solution implemented in \LUATEX\ is just a special case of a test: \tex
+{ifcondition}. When looking at the next example, keep in mind that from the
+perspective of \TEX's scanner it only needs to know if something is a token that
+does some test and has a matching \tex {fi}. For that purpose you can consider
+\tex {ifcondition} to be \tex {iftrue}. When \TEX\ actually wants to do a test,
+which is the case in the true branch, then it will simply ignore this \tex
+{ifcondition} primitive and expands what comes after it (which is \TEX's natural
+behaviour). Effectively \tex {ifcondition} has no meaning except from when it has
+to be skipped, in which case it's a token flagged as \tex {if} kind of command.
+
+\starttyping
+\unexpanded\def\something#1#2%
+ {\edef\tempa{#1}%
+ \edef\tempb{#2}%
+ \ifx\tempa\tempb}
+
+\ifcondition\something{a}{b}%
+ \ifcondition\something{a}{a}%
+ true 1
+ \else
+ false 1
+ \fi
+\else
+ \ifcondition\something{a}{a}%
+ true 2
+ \else
+ false 2
+ \fi
+\fi
+\stoptyping
+
+Wrapped in a macro you can actually make this fully expandable when you use the
+previously mentioned immediate assignment. Here is another example:
+
+\starttyping
+\unexpanded\def\onoddpage
+ {\ifodd\count0 }
+
+\ifcondition\onoddpage odd \else even \fi page
+\stoptyping
+
+The previously defined comparison macro can now be rewritten as:
+
+\starttyping
+\def\equaltokens#1#2%
+ {\immediateassignment\edef\tempa{#1}%
+ \immediateassignment\edef\tempb{#2}%
+ \ifx\tempa\tempb}
+
+\def\expandeddoifelse#1#2#3#4%
+ {\ifcondition\equaltokens{#1}{#2}%
+ \immediateassignment\def\next{#3}%
+ \else
+ \immediateassignment\def\next{#4}%
+ \fi
+ \next}
+\stoptyping
+
+When used this way it will of course also work without the \tex {ifcondition},
+but when used nested you need it. This last example also demonstrates that
+this feature probably only makes sense in more complicated cases where more work
+is done in the \tex {onoddpage} or \tex {equaltokens} macro. And again, I am not
+sure if for instance in \CONTEXT\ I have a real use for it because there are only
+a few cases where nesting like this could benefit. I did some tests with a low
+level macro where it made the code look nicer. It was actually a bit faster but
+most core macros are not called that often. Although the overhead of this feature
+can be neglected, performance should not be the reason for using it: in \CONTEXT\
+for instance one can often only measure such possible speed|-|ups on macros that
+are called tens or hundreds of thousands of times and that seldom happens in a
+real run, and even then a change from say 0.827 seconds to 0.815 seconds for 10K
+calls of a complex case is just noise, as the opposite can also happen.
+
+Although not strictly necessary, these extensions might make some code look
+better, which is why they will officially be available in the 1.09 release of
+\LUATEX\ in fall 2018. It might eventually inspire me to go over some code and
+see where I can improve its look and feel.
+
+The last few years I have implemented some more ideas as local experiments, for
+instance a \tex {futurelet} variant or a simple (one level) \tex {expand}, but in
+the end rejected them because there is no real benefit in them (no better looking
+code, no gain in performance, hard to document, possible side effects, etc.), so
+it is very unlikely that we will have more extensions like this. After all, we
+could do more than 40 years without them. Although \unknown\ who knows what we
+will provide in \LUATEX\ version~2.
+
+\stopchapter
+
+\stopcomponent
diff --git a/doc/context/sources/general/manuals/onandon/onandon-fences.tex b/doc/context/sources/general/manuals/onandon/onandon-fences.tex
new file mode 100644
index 000000000..133b9bfeb
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-fences.tex
@@ -0,0 +1,499 @@
+% language=uk
+
+\startcomponent onandon-fences
+
+\environment onandon-environment
+
+% avoid context defaults:
+%
+% \mathitalicsmode \plusone % default in context
+% \mathdelimitersmode\plusseven % optional in context
+
+\def\UseMode#1{\appendtoks\mathdelimitersmode#1\to\everymathematics}
+
+\startchapter[title={Tricky fences}]
+
+Occasionally one of my colleagues notices some suboptimal rendering and asks me
+to have a look at it. Now, one can argue about \quotation {what is right} and
+indeed there is not always a best answer to it. Such questions can even be a
+nuisance; let's think of the following scenario. You have a project where \TEX\
+is practically the only solution. Let it be an \XML\ rendering project, which
+means that there are some boundary conditions. Speaking in 2017 we find that in
+most cases a project starts out with the assumption that everything is possible.
+
+Often such a project starts with a folio in mind and therefore with decent tagging
+to match the educational and esthetic design. When rendering is mostly automatic
+and concerns too many (variants) to check all rendering, some safeguards are used
+(an example will be given below). Then different authors, editors and designers
+come into play and their expectations, also about what is best, often conflict.
+Add to that rendering for the web, and devices and additional limitations show
+up: features get dropped and even more cases need to be compensated (the quality
+rules for paper are often much higher). But, all that defeats the earlier
+attempts to do well because suddenly it has to match the lesser format. This in
+turn makes investing in improving rendering very inefficient (read: a bottomless
+pit because it never gets paid and there is no way to gain back the investment).
+Quite often it is spacing that triggers discussions and questions what rendering
+is best. And inconsistency dominates these questions.
+
+So, in case you wonder why I bother with subtle aspects of rendering as discussed
+below, the answer is that it is not so much professional demand but users (like
+my colleagues or those on the mailing lists) that make me look into it and often
+something that looks trivial takes days to sort out (even for someone who knows
+his way around the macro language, fonts and the inner working of the engine).
+And one can be sure that more cases will pop up.
+
+All this being said, let's move on to a recent example. In \CONTEXT\ we support
+\MATHML\ although in practice we're forced to a mix of that standard and
+\ASCIIMATH. When we're lucky, we even get a mix with good old \TEX-encoded math.
+One problem with an automated flow and processing (other than raw \TEX) is that
+one can get anything and therefore we need to play safe. This means for instance
+that you can get input like this:
+
+\starttyping
+f(x) + f(1/x)
+\stoptyping
+
+or in more structured \TEX\ speak:
+
+\startbuffer
+$f(x) + f(\frac{1}{x})$
+\stopbuffer
+
+\typebuffer
+
+Using \TEX\ Gyre Pagella, this renders as: {\UseMode\zerocount\inlinebuffer}, and
+when seeing this a \TEX\ user will revert to:
+
+\startbuffer
+$f(x) + f\left(\frac{1}{x}\right)$
+\stopbuffer
+
+\typebuffer
+
+which gives: {\UseMode\zerocount \inlinebuffer}. So, in order to be robust we can
+always use the \type {\left} and \type {\right} commands, can't we?
+
+\startbuffer
+$f(x) + f\left(x\right)$
+\stopbuffer
+
+\typebuffer
+
+which gives {\UseMode\zerocount \inlinebuffer}, but let's blow up this result a
+bit showing some additional tracing from left to right, now in Latin Modern:
+
+\startbuffer[blownup]
+\startcombination[nx=3,ny=2,after=\vskip3mm]
+ {\scale[scale=4000]{\hbox{$f(x)$}}}
+ {just characters}
+ {\scale[scale=4000]{\ruledhbox{\showglyphs \showfontkerns \showfontitalics$f(x)$}}}
+ {just characters}
+ {\scale[scale=4000]{\ruledhbox{\showglyphs \showfontkerns \showfontitalics \showmakeup$f(x)$}}}
+ {just characters}
+ {\scale[scale=4000]{\hbox{$f\left(x\right)$}}}
+ {using delimiters}
+ {\scale[scale=4000]{\ruledhbox{\showglyphs \showfontkerns \showfontitalics$f\left(x\right)$}}}
+ {using delimiters}
+ {\scale[scale=4000]{\ruledhbox{\showglyphs \showfontkerns \showfontitalics \showmakeup$f\left(x\right)$}}}
+ {using delimiters}
+\stopcombination
+\stopbuffer
+
+\startlinecorrection
+\UseMode\zerocount
+\switchtobodyfont[modern]\getbuffer[blownup]
+\stoplinecorrection
+
+When we visualize the glyphs and kerns we see that there's a space instead of a
+kern when we use delimiters. This is because the delimited sequence is processed
+as a subformula and injected as a so|-|called inner object and as such gets
+spaced according to the ordinal (for the $f$) and inner (\quotation {fenced} with
+delimiters $x$) spacing rules. Such a difference normally will go unnoticed but
+as we mentioned authors, editors and designers being involved, there's a good
+chance that at some point one will magnify a \PDF\ preview and suddenly notice
+that the difference between the $f$ and $($ is a bit on the large side for simple
+unstacked cases, something that in print is likely to go unnoticed. So, even when
+we don't know how to solve this, we do need to have an answer ready.
+
+When I was confronted by this example of rendering I started wondering if there
+was a way out. It makes no sense to hard code a negative space before a fenced
+subformula because sometimes you don't want that, especially not when there's
+nothing before it. So, after some messing around I decided to have a look at the
+engine instead. I wondered if we could just give the non|-|scaled fence case the
+same treatment as the character sequence.
+
+Unfortunately here we run into the somewhat complex way the rendering takes
+place. Keep in mind that it is quite natural from the perspective of \TEX\
+because normally a user will explicitly use \type {\left} and \type {\right} as
+needed, while in our case the fact that we automate and therefore want a generic
+solution interferes (as usual in such cases).
+
+Once read in the sequence \type {f(x)} can be represented as a list:
+
+\starttyping
+list = {
+ {
+ id = "noad", subtype = "ord", nucleus = {
+ {
+ id = "mathchar", fam = 0, char = "U+00066",
+ },
+ },
+ },
+ {
+ id = "noad", subtype = "open", nucleus = {
+ {
+ id = "mathchar", fam = 0, char = "U+00028",
+ },
+ },
+ },
+ {
+ id = "noad", subtype = "ord", nucleus = {
+ {
+ id = "mathchar", fam = 0, char = "U+00078",
+ },
+ },
+ },
+ {
+ id = "noad", subtype = "close", nucleus = {
+ {
+ id = "mathchar", fam = 0, char = "U+00029",
+ },
+ },
+ },
+}
+\stoptyping
+
+The sequence \type {f \left( x \right)} is also a list but now it is a tree (we
+leave out some unset keys):
+
+\starttyping
+list = {
+ {
+ id = "noad", subtype = "ord", nucleus = {
+ {
+ id = "mathchar", fam = 0, char = "U+00066",
+ },
+ },
+ },
+ {
+ id = "noad", subtype = "inner", nucleus = {
+ {
+ id = "sub_mlist", head = {
+ {
+ id = "fence", subtype = "left", delim = {
+ {
+ id = "delim", small_fam = 0, small_char = "U+00028",
+ },
+ },
+ },
+ {
+ id = "noad", subtype = "ord", nucleus = {
+ {
+ id = "mathchar", fam = 0, char = "U+00078",
+ },
+ },
+ },
+ {
+ id = "fence", subtype = "right", delim = {
+ {
+ id = "delim", small_fam = 0, small_char = "U+00029",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+}
+\stoptyping
+
+So, the formula \type {f(x)} is just four characters and stays that way, but with
+some inter|-|character spacing applied according to the rules of \TEX\ math. The
+sequence \typ {f \left( x \right)} however becomes two components: the \type {f}
+is an ordinal noad,\footnote {Noads are the mathematical building blocks.
+Eventually they become nodes, the building blocks of paragraphs and boxed
+material.} and \typ {\left( x \right)} becomes an inner noad with a list as a
+nucleus, which gets processed independently. The way the code is written this is
+what (roughly) happens:
+
+\startitemize
+\startitem
+ A formula starts; normally this is triggered by one or two dollar signs.
+\stopitem
+\startitem
+ The \type {f} becomes an ordinal noad and \TEX\ goes~on.
+\stopitem
+\startitem
+ A fence is seen with a left delimiter and an inner noad is injected.
+\stopitem
+\startitem
+ That noad has a sub|-|math list that takes the left delimiter up to a
+ matching right one.
+\stopitem
+\startitem
+ When all is scanned a routine is called that turns a list of math noads into
+ a list of nodes.
+\stopitem
+\startitem
+ So, we start at the beginning, the ordinal \type {f}.
+\stopitem
+\startitem
+ Before moving on a check happens if this character needs to be kerned with
+ another (but here we have an ordinal|-|inner combination).
+\stopitem
+\startitem
+ Then we encounter the subformula (including fences) which triggers a nested
+ call to the math typesetter.
+\stopitem
+\startitem
+ The result eventually gets packaged into a hlist and we're back one level up
+ (here after the ordinal \type {f}).
+\stopitem
+\startitem
+ Processing a list happens in two passes and, to cut it short, it's the second
+ pass that deals with choosing fences and spacing.
+\stopitem
+\startitem
+ Each time when a (sub)list is processed a second pass over that list
+ happens.
+\stopitem
+\startitem
+ So, now \TEX\ will inject the right spaces between pairs of noads.
+\stopitem
+\startitem
+ In our case that is between an ordinal and an inner noad, which is quite
+ different from a sequence of ordinals.
+\stopitem
+\stopitemize
+
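+For the curious: in a bare \LUATEX\ run such noad lists can be watched by hooking
+into the conversion step mentioned above (in \CONTEXT\ callbacks are managed, so
+there one would use the provided wrappers instead). A sketch:
+
+\starttyping
+callback.register("mlist_to_hlist", function(head, style, penalties)
+    -- print the noads as they pass by, then do the normal conversion
+    for n in node.traverse(head) do
+        print(node.type(n.id), n.subtype)
+    end
+    return node.mlist_to_hlist(head, style, penalties)
+end)
+\stoptyping
+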
+It's these fences that demand a two-pass approach because we need to know the
+height and depth of the subformula. Anyway, do you see the complication? In our
+inner formula the fences are not scaled, but this is not communicated back in the
+sense that the inner noad can become an ordinal one, as in the simple \type {f(}
+pair. The information is not only lost, it is not even considered useful and the
+only way to somehow bubble it up in the processing so that it can be used in the
+spacing requires an extension. And even then we have a problem: the kerning that
+we see between \type {f(} is also lost. It must be noted that this kerning is
+optional and triggered by setting \type {\mathitalicsmode=1}. One reason for this
+is that fonts approach italic correction differently, and cheat with the
+combination of natural width and italic correction.
+
+Now, because such a workaround is definitely conflicting with the inner workings
+of \TEX, our experimenting demands another variable be created: \type
+{\mathdelimitersmode}. It might be a prelude to more manipulations but for now we
+stick to this one case. How messy it really is can be demonstrated when we render
+our example with Cambria.
+
+\startlinecorrection
+\UseMode\zerocount
+\switchtobodyfont[cambria]\getbuffer[blownup]
+\stoplinecorrection
+
+If you look closely you will notice that the parentheses are moved up a bit. Also
+notice the more accurate bounding boxes. Just to be sure we also show Pagella:
+
+\startlinecorrection
+\UseMode\zerocount
+\switchtobodyfont[pagella]\getbuffer[blownup]
+\stoplinecorrection
+
+When we really want the unscaled variant to be somewhat compatible with the
+fenced one we now need to take into account:
+
+\startitemize[packed]
+\startitem
+ the optional axis|-|and|-|height|/|depth related shift of the fence (bit 1)
+\stopitem
+\startitem
+ the optional kern between characters (bit 2)
+\stopitem
+\startitem
+ the optional space between math objects (bit 4)
+\stopitem
+\stopitemize
+
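+The bits can be added up, so every combination of options can be selected:
+
+\starttyping
+\mathdelimitersmode=1 % only the shift
+\mathdelimitersmode=6 % kerns (2) plus spacing (4)
+\mathdelimitersmode=7 % shift, kerns and spacing
+\stoptyping
+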
+Each option can be set (which is handy for testing) but here we will set them
+all, so, when \type {\mathdelimitersmode=7}, we want Cambria to come out as
+follows:
+
+\startlinecorrection
+\UseMode\plusseven
+\switchtobodyfont[cambria]\getbuffer[blownup]
+\stoplinecorrection
+
+When this mode is set the following happens:
+
+\startitemize
+\startitem
+ We keep track of the scaling and when we use the normal size this is
+ registered in the noad (we had space in the data structure for that).
+\stopitem
+\startitem
+ This information is picked up by the caller of the routine that does the
+ subformula and stored in the (parent) inner noad (again, we had space for
+ that).
+\stopitem
+\startitem
+ Kerns between a character (ordinal) and subformula (inner) are kept,
+ which can be bad for other cases but probably less than what we try
+ to solve here.
+\stopitem
+\startitem
+ When the fences are unscaled the inner property temporarily becomes
+ an ordinal one when we apply the inter|-|noad spacing.
+\stopitem
+\stopitemize
+
+Hopefully this is good enough but anything more fancy would demand drastic
+changes in one of the most sensitive mechanisms of \TEX. It might not always work
+out right, so for now I consider it an experiment, which means that it can be
+kept around, rejected or improved.
+
+In case one wonders if such an extension is truly needed, one should also take
+into account that automated typesetting (also of math) is probably one of the
+areas where \TEX\ can shine for a while. And while we can deal with much by using
+\LUA, this is one of the cases where the interwoven and integrated parsing,
+converting and rendering of the math machinery makes it hard. It also fits into a
+further opening up of the inner working by modes.
+
+\startbuffer[simple]
+\dontleavehmode
+\scale
+ [scale=3000]
+ {\ruledhbox
+ {\showglyphs
+ \showfontkerns
+ \showfontitalics
+ $f(x)$}}
+\stopbuffer
+
+\startbuffer[fenced]
+\dontleavehmode
+\scale
+ [scale=3000]
+ {\ruledhbox
+ {\showglyphs
+ \showfontkerns
+ \showfontitalics
+ $f\left(x\right)$}}
+\stopbuffer
+
+\def\TestMe#1%
+ {\bTR
+ \bTD[width=35mm,align=middle,toffset=3mm] \switchtobodyfont[#1]\UseMode\zerocount\getbuffer[simple] \eTD
+ \bTD[width=35mm,align=middle,toffset=3mm] \switchtobodyfont[#1]\UseMode\zerocount\getbuffer[fenced] \eTD
+ \bTD[width=35mm,align=middle,toffset=3mm] \switchtobodyfont[#1]\UseMode\plusseven\getbuffer[simple] \eTD
+ \bTD[width=35mm,align=middle,toffset=3mm] \switchtobodyfont[#1]\UseMode\plusseven\getbuffer[fenced] \eTD
+ \eTR
+ \bTR
+ \bTD[align=middle,nx=2] \type{\mathdelimitersmode=0} \eTD
+ \bTD[align=middle,nx=2] \type{\mathdelimitersmode=7} \eTD
+ \eTR
+ \bTR
+ \bTD[align=middle,nx=4] \switchtobodyfont[#1]\bf #1 \eTD
+ \eTR}
+
+\startbuffer
+\bTABLE[frame=off]
+ \TestMe{modern}
+ \TestMe{cambria}
+ \TestMe{pagella}
+\eTABLE
+\stopbuffer
+
+Another objection to such a solution can be that we should not alter the engine
+too much. However, fences already are an exception and treated specially (tests
+and jumps in the program) so adding this fits reasonably well into that part of
+the design.
+
+In the following examples we demonstrate the results for Latin Modern, Cambria
+and Pagella when \type {\mathdelimitersmode} is set to zero or seven. First we
+show the case where \type {\mathitalicsmode} is disabled:
+
+\startlinecorrection
+ \mathitalicsmode\zerocount\getbuffer
+\stoplinecorrection
+
+When we enable \type {\mathitalicsmode} we get:
+
+\startlinecorrection
+ \mathitalicsmode\plusone \getbuffer
+\stoplinecorrection
+
+So is this all worth the effort? I don't know, but at least I got the picture and
+hopefully now you do too. It might also lead to some more modes in future
+versions of \LUATEX.
+
+\startbuffer[simple]
+\dontleavehmode
+\scale
+ [scale=2000]
+ {\ruledhbox
+ {\showglyphs
+ \showfontkerns
+ \showfontitalics
+ $f(x)$}}
+\stopbuffer
+
+\startbuffer[fenced]
+\dontleavehmode
+\scale
+ [scale=2000]
+ {\ruledhbox
+ {\showglyphs
+ \showfontkerns
+ \showfontitalics
+ $f\left(x\right)$}}
+\stopbuffer
+
+\def\TestMe#1%
+ {\bTR
+ \dostepwiserecurse{0}{7}{1}{
+ \bTD[align=middle,toffset=3mm] \switchtobodyfont[#1]\UseMode##1\getbuffer[simple] \eTD
+ }
+ \eTR
+ \bTR
+ \dostepwiserecurse{0}{7}{1}{
+ \bTD[align=middle,toffset=3mm] \switchtobodyfont[#1]\UseMode##1\getbuffer[fenced] \eTD
+ }
+ \eTR
+ \bTR
+ \dostepwiserecurse{0}{7}{1}{
+ \bTD[align=middle]
+ \tttf
+ \ifcase##1\relax
+ \or ns % 1
+ \or it % 2
+ \or ns it % 3
+ \or or % 4
+ \or ns or % 5
+ \or it or % 6
+ \or ns it or % 7
+ \fi
+ \eTD
+ }
+ \eTR
+ \bTR
+ \bTD[align=middle,nx=8] \switchtobodyfont[#1]\bf #1 \eTD
+ \eTR}
+
+\startbuffer
+\bTABLE[frame=off,distance=2mm]
+ \TestMe{modern}
+ \TestMe{cambria}
+ \TestMe{pagella}
+\eTABLE
+\stopbuffer
+
+\startlinecorrection
+\getbuffer
+\stoplinecorrection
+
+In \CONTEXT, a regular document can specify \type {\setupmathfences
+[method=auto]}, but in \MATHML\ or \ASCIIMATH\ this feature is enabled by default
+(so that we can test it).
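+
+As an illustration of that user level interface, a regular document could enable
+the mechanism as follows (a minimal sketch, assuming an otherwise default math
+setup):
+
+\starttyping
+\setupmathfences[method=auto]
+
+\starttext
+    $ f \left( \frac{1}{1+x} \right) $
+\stoptext
+\stoptyping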
+
+We end with a summary of all the modes (assuming italics mode is enabled) in the
+table below.
+
+\stopcomponent
diff --git a/doc/context/sources/general/manuals/onandon/onandon-media.tex b/doc/context/sources/general/manuals/onandon/onandon-media.tex
new file mode 100644
index 000000000..f44c3bb19
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-media.tex
@@ -0,0 +1,220 @@
+% language=uk
+
+\startcomponent onandon-media
+
+\environment onandon-environment
+
+\startchapter[title={The state of \PDF}]
+
+\startsection[title={Introduction}]
+
+Below I will spend some words on the state of \PDF\ in \CONTEXT\ as of mid|-|2018.
+These are just some reflections, not an in|-|depth discussion of the state of
+affairs. I sometimes feel the need to wrap up.
+
+\stopsection
+
+\startsection[title={Media}]
+
+For over two decades \CONTEXT\ has supported fancy \PDF\ features like movies and
+sound. In fact, as has happened more often, the flexibility of \TEX\ made it
+possible to support such features right after they became available, often even
+before other applications supported them.
+
+The first approach to support such media clips was relatively easy. In \PDF\ one
+has the text flow, resulting from the typesetting process, whether or not enhanced
+with images that are referred to from the flow. In that respect images are an
+integral part of \PDF. On a separate layer there can be annotations. There are
+many kinds and they are originally a sort of extension mechanism that permits
+plugins to add features to a document. Examples of this are hyperlinks and the
+already mentioned media clips. Video was supported by the quicktime movie plugin.
+As far as I know in the meantime that plugin has been dropped as official part of
+Acrobat but one can still plug it in.
+
+Later an extra mechanism was introduced, tagged renditions. It separates the
+views from the media and was more complex. When I first played with it, quite
+some media were possible, and I made a demo that could handle mov, mp3, smi and
+swf files. But the last time I checked none of these really worked, apart from
+the swf file. One gets pop|-|ups for missing viewers and a look at the reader
+preferences makes one pessimistic about future support anyway. But one should be
+able to set up a list of usable players with this mechanism (although only an
+Adobe one seems to be okay, so we're back to where we started).
+
+At some point support for u3d was added. It is interesting that quite some
+infrastructure is described in the \PDF\ standard. Also something called rich media
+was introduced and that should replace the former video and audio annotations
+(definitely in \PDF\ version 2) and probably some day the renditions will no
+longer be supported either. Open source \PDF\ viewers just stuck to supporting
+text and static images.
+
+Now, do these rich media work well? Hardly. The standard leaves it to the viewer
+and provides ways to define viewers (although it's unclear to me how that works
+out in practice). Basically in \PDF\ version 2 there is no native support for
+simple straightforward video. One has to construct a complex set of related
+annotations.
+
+One can give arguments (like security risks) for not supporting all these fancy
+features but then why make rich media part of the specification at all? Browsers
+beat \PDF\ viewers in showing media and as browsers can operate in kiosk mode I
+suppose that it's not that hard to delegate showing whatever you want in an
+embedded window in the \PDF\ viewer. Or why not simply support videolan out of
+the box. All we need is the ability to view movies and control them (play, pause,
+stop, rewind, etc). Where \HTML\ evolved towards easier media support, \PDF\
+evolved to more obscurity.
+
+So, how bad is it really? There are \PDF\ files around that have video! Indeed,
+but the way they're supposed to do this is as follows: currently one actually has
+to embed a shockwave video player (a user interface around something built|-|in)
+and let that player show for instance an mp4 movie. However, support for
+shockwave (flash) will be dropped in 2020 and that renders documents that use it
+obsolete. This even makes one wonder about \JAVASCRIPT\ and widgets like form
+fields, also a rather moving and somewhat unstable target. (I must have a
+document being a calculator somewhere made in the previous century, in the early
+days of \PDF.)
+
+I think that the plugin model failed rather early in the history of \PDF, if only
+because it made no sense to develop plugins when in a next version of Acrobat
+the functionality would be copied into the core. In a similar fashion \JAVASCRIPT\
+support seems to have stalled.
+
+Unfortunately the open source viewers never caught on with media, forms and
+\JAVASCRIPT\ and therefore there has been no momentum created to keep things
+supported. It all makes efforts spent on supporting this kind of \PDF\ features a
+waste of time. It also makes one careful in using them: they only work in the
+short term.
+
+Don't get me wrong, I'm not talking of complex media like 3d or animations but of
+straightforward video support. I understand that the rich media framework tries
+to cover complex cases but it's the simple cases that carry the format. On the
+other hand, one can wonder why the \PDF\ format makes it possible to specify
+behaviour that in practice depends on \JAVASCRIPT\ and therefore could just as
+well have been delegated to \JAVASCRIPT\ entirely. It would probably have been
+much cleaner. \footnote {It looks like mu\PDF\ in 2018 got some support related
+to widgets aka fields but alas not for layers, which would be quite useful.}
+
+The \PDF\ version 2 specification mentions \type {3D}, \type {Video} and \type
+{Audio} as primary content types so maybe future viewers will support video out
+of the box. Who knows. We try to keep up in \CONTEXT\ because it's often not that
+complex to support \PDF\ features but with hardly any possibility to test them,
+they have a low priority. And with Acrobat moving to the cloud and thereby
+creating a more or less lifelong dependency on remote resources it doesn't become
+more interesting to explore those routes either.
+
+\stopsection
+
+\startsection[title={Accessibility}]
+
+A popular \PDF\ related topic is accessibility. One aspect of that is tagged
+\PDF. This substandard is in my opinion not something that deserves a prize for
+beauty. I know that there are \CONTEXT\ users who need to be compliant but I
+always wonder what a publisher really does with such a file. It's a bit like
+requiring \XML\ as source but at the same time sacrificing really richly encoded
+sources for tweaks that suit the current limitations of for instance browsers,
+tool|-|chains and competence. We've seen it happen.
+
+Support for tagged \PDF\ has been available in \CONTEXT\ already for a while but
+as far as I know only Acrobat professional can do something with it. The reason
+for tagging is that a document is then usable by (for instance) visually
+impaired users, but aren't they better served with a proper, complete and very
+structured source in some format that suitable tools can use? How many
+publishers distribute \PDF\ files while they can still make money on prints? How
+many are really interested in distributing enriched content that then can be
+reused somehow? And how many are willing to invest in tools instead of waiting
+for it to happen for free? It's a bit of a cheap trick to just expect authors
+(and, in the case of \TEX, their free tools) to suit a publisher's needs. Anyway,
+just as with advanced interactive documents or forms, I wonder if it will catch
+on. At least no publisher ever asked us, and by the time they might, the
+competition of web|-|based dissemination could have driven \PDF\ to the
+background. But, in
+\CONTEXT\ we will keep supporting such features anyway, if only because it's
+quite doable. But \unknown\ it's user demand that drives development, not the
+market, which means that the motivation for implementing such features depends on
+user input as well as challenging aspects that make it somewhat fun to spend time
+on them.
+
+\stopsection
+
+\startsection[title={Quality assurance}]
+
+Another aspect popping up occasionally is validation. I'm not entirely sure what
+drives that but delegating a problem can be one reason. Often we see publishers
+and printers use old versions of \PDF\ related tools. Also, some workflows are
+kind of ancient anyway and are more driven by \POSTSCRIPT\ history than \PDF\
+possibilities. I sometimes get the impression that it takes at least a decade for
+these things to catch on, and by that time it doesn't matter any more that \TEX\
+and friends were at the front: their users are harassed by what the market
+demands by then.
+
+Support for several standards related to validation has been part of \CONTEXT\
+for quite a while. For instance the bump from \PDF\ 1.7 to 2.0 was hardly worth
+noticing, simply because there are not that many fundamental changes. Adapting
+\LUATEX\ was trivial (and actually not really needed), and macro packages can
+provide what is needed without many problems. So, yes, we can support it without
+much hassle. Personally I never ran into a case where validation was really
+needed. The danger of validation is that it can give a false impression of
+quality. And, as with everything, quality control has created a market. As with other
+features it is users who drive the availability of support for this. After all,
+they are the ones testing it and figuring out the often fuzzy specifications.
+These are things that one can always look at in retrospect (like: it has to be
+done this or that way) while in practice in order to be an early adopter one has
+to gamble a bit and see where it fails or succeeds. Fortunately it's relatively
+easy to adapt macro packages and \CONTEXT\ users are willing to update so it's
+not really an issue.
+
+Putting a stamp of approval on a \PDF\ cannot hide the inconsistencies between,
+for instance, vector graphics produced by third parties. Validators also don't
+expose inconsistent use of color and fonts. The page streams produced by \LUATEX\
+are simple and clean enough not to give problems with validation. The problem
+lies more with resources coming from elsewhere. When you're phoned by a printing house
+about an issue with \RGB\ images in a file where there is no sign of \RGB\ being
+used but where a validator reports an issue, you're lucky when an experienced
+printer dating back decades then replies that he already had that impression and
+will contact the origin. There is no easy way out of this but educating users
+(authors) is an option. However, they are often dependent on the publishers and
+departments that deal with these matters, and those tend to come with directives
+that the authors cannot really argue with (or about).
+
+\stopsection
+
+\startsection[title={Interactivity}]
+
+This is an area where \TEX\ (and therefore also \CONTEXT) always had an edge.
+There is a lot possible and in principle all that \PDF\ provides can be
+supported. But the fancier one goes, the more one depends on Acrobat.
+Interactivity in \PDF\ evolved stepwise and is mostly market driven. As a result
+it is (or was) not always consistent. This is partly due to the fact that we have
+a chicken|-|and|-|egg issue: you need typesetting machinery, a viewer as well as a
+standard.
+
+Regular hyperlinks, page or name driven, are normally supported by viewers. Some
+predefined named destinations (like going to the next page, or going back in a
+chain of followed links) are not always. Launching applications, as it also
+relates to security, can be qualified as an unreliable mechanism. More advanced
+linking, for instance using \JAVASCRIPT, is hardly supported. In that respect
+\PDF\ viewers lag way behind \HTML\ browsers. I understand that there can be
+security risks involved. It's interesting to see that in Acrobat one can mess
+with the internals of files, which makes the \API\ large and complex, but if we
+stick to the useful core, the amount of interfacing needed is quite small. Lack
+of support in open source viewers (we're talking of about two decades now) made
+me lose interest in these features but they are and will be supported in \CONTEXT. We'll
+see if and when viewers catch up.
+
+Comments and attachments are also part of interactivity and of course we
+supported them right from the start. Some free viewers also support them by now.
+Personally I never use comments but they can be handy for popping up information
+or embedding snippets or (structured) sources (like \MATHML\ or bibliographic
+data). In \CONTEXT\ we can even support \PDF\ inclusion with (a reasonable)
+subset of these so|-|called annotations. As the \PDF\ standard no longer evolves
+much we can expect all these features to become stable.
+
+\stopsection
+
+\startsection[title={Summary}]
+
+We have always supported the fancy \PDF\ features and we will continue doing so
+in \CONTEXT. However, many of them depend on what viewers support, and after
+decades of \PDF\ that is still kind of disappointing, which is not that
+motivating. We'll see what happens.
+
+\stopsection
+
+\stopchapter
diff --git a/doc/context/sources/general/manuals/onandon/onandon-modern.tex b/doc/context/sources/general/manuals/onandon/onandon-modern.tex
new file mode 100644
index 000000000..65b5d0490
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-modern.tex
@@ -0,0 +1,1284 @@
+% language=uk
+
+% 284 instances, 234 shared in backend, 126 common vectors, 108 common hashes, load time 1.343 seconds
+
+%setupversion[alternative=concept,text={not corrected yet}]
+\setupversion[alternative=file,text={not corrected yet}]
+
+\definebodyfontenvironment[24pt]
+
+\usemodule[fonts-effects]
+
+\startcomponent onandon-modern
+
+\environment onandon-environment
+
+\startchapter[title={Modern Latin}]
+
+\startsection[title={Introduction}]
+
+In \CONTEXT, already in \MKII, we have a feature tagged \quote {effects} that can
+be used to render a font in outline or bolder versions. It uses some low level
+\PDF\ directives to accomplish this and it works quite well. When a user on the
+\CONTEXT\ list asked if we could also provide it as a font feature in the
+repertoire of additional features in \CONTEXT, I was a bit reluctant to provide
+that because it operates at another level than the glyph stream. Also, such a
+feature can be abused and result in a bad looking document. However, by adding a
+few simple options to the \LUATEX\ engine such a feature could actually be
+achieved rather easily: it was trivial to implement given that we can influence
+font handling at the \LUA\ end. In retrospect extended and pseudo|-|slanted fonts
+could be done this way too but there we have some historic ballast. Also, the
+backend now handles such transformations very efficiently because they are
+combined with font scaling. Anyway, by adding this feature in spite of possible
+objections, I could do some more advanced experiments.
+
+In the following pages I will demonstrate how we support effects as a feature in
+\CONTEXT. Instead of simply applying some magic \PDF\ text operators in the
+backend a more integrated approach is used. The difference with the normal effect
+mechanism is that the one described here is bound to a font instance while the
+normal mechanism operates on the glyph stream.
+
+\stopsection
+
+\startsection[title={The basics}]
+
+\definefontsynonym[DemoSerif][file:lmroman10-regular]
+
+Let's start with a basic boldening example. First we demonstrate a regular Latin
+Modern sample (using \type {ward.tex}):
+
+\startnarrower
+ \definedfont[DemoSerif*default]
+ \samplefile{ward}
+\stopnarrower
+
+This font looks rather thin (light). Next we define an effect of \type {0.2} and
+typeset the same sample:
+
+\startbuffer
+\definefontfeature
+ [effect-1]
+ [effect=.2]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \definedfont[DemoSerif*default,effect-1]
+ \samplefile{ward}
+\stopnarrower
+
+This simple call gives reasonable default results. But you can have more control
+than this. The previous example uses the following properties:
+
+{\definedfont[DemoSerif*default,effect-1] \showfonteffect}
+
+\startbuffer
+\definefontfeature
+ [effect-2]
+ [effect={width=.3}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \definedfont[DemoSerif*default,effect-2]
+ \samplefile{ward}
+\stopnarrower
+
+This time we use:
+
+{\definedfont[DemoSerif*default,effect-2] \showfonteffect}
+
+\startbuffer
+\definefontfeature
+ [effect-3]
+ [effect={width=.3,delta=0.4}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \showfontkerns
+ \definedfont[DemoSerif*default,effect-3]
+ \samplefile{ward}
+\stopnarrower
+
+We have now tweaked one more property and show the fontkerns in order to see what
+happens with them:
+
+{\definedfont[DemoSerif*default,effect-3] \showfonteffect}
+
+\startbuffer
+\definefontfeature
+ [effect-4]
+ [effect={width=.3,delta=0.4,factor=0.3}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \showfontkerns
+ \definedfont[DemoSerif*default,effect-4]
+ \samplefile{ward}
+\stopnarrower
+
+An additional parameter \type {factor} will influence the way (for instance)
+kerns get affected:
+
+{\definedfont[DemoSerif*effect-4] \showfonteffect}
+
+\stopsection
+
+\startsection[title=Outlines]
+
+There are four effects. Normally a font is rendered with effect \type {inner}.
+The \type {outer} effect just draws the outlines while \type {both} gives a
+rather fat result. The \type {hidden} effect hides the text.
+
+\startbuffer
+\definefontfeature
+ [effect-5]
+ [effect={width=0.2,delta=0.4,factor=0.3,effect=inner}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \showfontkerns
+ \definedfont[DemoSerif*default,effect-5]
+ \samplefile{ward}
+\stopnarrower
+
+An inner effect is rather useless unless you want to use the other properties of
+this mechanism.
+
+\startbuffer
+\definefontfeature
+ [effect-6]
+ [effect={width=.2,delta=0.4,factor=0.3,effect=outer}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \showfontkerns
+ \definedfont[DemoSerif*default,effect-6]
+ \samplefile{ward}
+\stopnarrower
+
+\startbuffer
+\definefontfeature
+ [effect-7]
+ [effect={width=.2,delta=0.4,factor=0.3,effect=both}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startnarrower
+ \showfontkerns
+ \definedfont[DemoSerif*default,effect-7]
+ \samplefile{ward}
+\stopnarrower
+
+\startbuffer
+\definefontfeature
+ [effect-8]
+ [effect={width=.2,delta=0.4,factor=0.3,effect=hidden},
+ boundingbox=yes] % to show something
+\stopbuffer
+
+\typebuffer \getbuffer
+
+We also show the bounding boxes of the glyphs here so that you can see what you're
+missing. Actually this text is still there and you can select it in the viewer.
+
+\startnarrower
+ \showfontkerns
+ \showglyphs
+ \definedfont[DemoSerif*default,effect-8]
+ \samplefile{ward}
+\stopnarrower
+
+\stopsection
+
+\startsection[title=The logic]
+
+In order to support this I had to make some choices. The calculations involved
+are best explained in terms of \CONTEXT\ font machinery.
+
+\startformula
+ \Delta _{\text{wd}} = \text{effect} _{\text{wdelta}}
+            \times \text{parameters}_{\text{hfactor}}
+ \times \text{effect} _{\text{width}}
+ \times 100
+\stopformula
+
+\startformula
+ \Delta _{\text{ht}} = \text{effect} _{\text{hdelta}}
+            \times \text{parameters}_{\text{vfactor}}
+ \times \text{effect} _{\text{width}}
+ \times 100
+\stopformula
+
+\startformula
+ \Delta _{\text{dp}} = \text{effect} _{\text{ddelta}}
+            \times \text{parameters}_{\text{vfactor}}
+ \times \text{effect} _{\text{width}}
+ \times 100
+\stopformula
+
+The factors in the parameter namespace are adapted according to:
+
+\startformula
+ \Delta _{\text{factor}} = \text{effect} _{\text{factor}}
+ \times \text{parameters}_{\text{factor}}
+\stopformula
+
+\startformula
+ \Delta _{\text{hfactor}} = \text{effect} _{\text{hfactor}}
+ \times \text{parameters}_{\text{hfactor}}
+\stopformula
+
+\startformula
+ \Delta _{\text{vfactor}} = \text{effect} _{\text{vfactor}}
+ \times \text{parameters}_{\text{vfactor}}
+\stopformula
+
+The horizontal and vertical scaling factors default to the normal factor, which
+itself defaults to zero, so by default there is no additional scaling of, for
+instance, kerns. The width (wd), height (ht) and depth (dp) of a glyph are
+adapted in relation to the line width. A glyph is shifted in its bounding box by
+half the width correction. The delta defaults to one.
+
+\stopsection
+
+\startsection[title=About features]
+
+This kind of boldening has limitations especially because some fonts use
+positioning features that closely relate to the visual font properties. Let's
+give some examples. The most common positioning is kerning. Take for instance
+these shapes:
+
+\startlinecorrection
+\startMPcode
+ def SampleShapes(expr dx, offset, pw, k) =
+ picture p ; p := image (
+ draw fullcircle scaled 1cm ;
+ draw fullsquare scaled 1cm shifted (dx+k,0) ;
+ draw point 8 of (fullcircle scaled 1cm) withcolor white ;
+ draw point 3.5 of (fullsquare scaled 1cm) shifted (dx+k,0) withcolor white ;
+ ) shifted (offset,0) ;
+ draw p withpen pencircle scaled pw ;
+ draw boundingbox p withcolor white ;
+ enddef ;
+ SampleShapes(15mm, 0mm,1mm,0mm) ;
+ SampleShapes(15mm, 40mm,2mm,0mm) ;
+ SampleShapes(17mm, 80mm,2mm,0mm) ;
+\stopMPcode
+\stoplinecorrection
+
+The first one is the one we start with. The circle and square have a line width
+of one unit and a distance (kern) of five units. The second pair has a line width
+of two units and the same distance while the third pair has a distance of seven
+units. So, in the last case we have increased the kern by a value relative to the
+increase of the line width.
+
+\startlinecorrection
+\startMPcode
+ SampleShapes(15mm, 0mm,1mm,0mm) ;
+ SampleShapes(15mm, 40mm,2mm,2mm) ;
+ SampleShapes(17mm, 80mm,2mm,2mm) ;
+\stopMPcode
+\stoplinecorrection
+
+In this example we have done the same but we started with a distance of zero. You
+can consider this a kind of anchoring. This happens in for instance cursive
+scripts where entry and exit points are used to connect shapes. In a Latin script
+you can think of a poor|-|man's attachment of a cedilla or ogonek. But what to do
+with, for instance, an accent on top of a character? In that case we could do the
+same as with kerning. However, when we mix styles we would like to have a
+consistent height, so maybe scaling is not a good idea there. This is why we can
+set the factors and deltas explicitly for vertical and horizontal movements.
+However, this will only work well when a font is consistent in how it applies
+these movements. In this case, if we could recognize cursive anchoring (the last
+pair in the example) we could compensate for it.
+
+\startMPinclusions
+ def SampleShapes(expr dx, offset, pw, k) =
+ picture p ; p := image (
+ draw fullcircle scaled 1cm ;
+ draw fullsquare scaled 1cm shifted (dx+k,0) ;
+ draw point 8 of (fullcircle scaled 1cm) withcolor white ;
+ draw point 3.5 of (fullsquare scaled 1cm) shifted (dx+k,0) withcolor white ;
+ ) shifted (offset,0) ;
+ draw p withpen pencircle scaled pw ;
+ draw boundingbox p withcolor white ;
+ enddef ;
+\stopMPinclusions
+
+\startlinecorrection
+\startMPcode
+ SampleShapes(10mm, 0mm,1mm,0mm) ;
+ SampleShapes(10mm, 40mm,1mm,1mm) ;
+ SampleShapes(10mm, 80mm,2mm,0mm) ;
+ SampleShapes(10mm,120mm,2mm,2mm) ;
+\stopMPcode
+\stoplinecorrection
+
+So, an interesting extension to the positioning part of the font handler could be
+to influence all the scaling factors: anchors, cursives, single and pairwise
+positioning in both directions (so eight independent factors). Technically this
+is no big deal so I might give it a go when I have a need for it.
+
+\stopsection
+
+\startsection[title=Some (extreme) examples]
+
+Over the last decade buying a font has become a bit of a nightmare simply because
+you have to choose the weights that you need. It's the business model not to
+stick to four shapes in a few weights but to offer a whole range, and each of
+course costs money.
+
+Latin Modern is based on Computer Modern and is meant for high resolution rendering.
+The design of the font is such that you can create instances but in practice that
+isn't done. One property that makes the font stand out is its bold, which runs
+rather wide. However, how about cooking up a variant? For this we will use a
+series of
+definitions:
+
+\startbuffer
+\definefontfeature[effect-2-0-0]
+ [effect={width=0.2,delta=0}]
+\definefontfeature[effect-2-3-0]
+ [effect={width=0.2,delta=0.3}]
+\definefontfeature[effect-2-6-0]
+ [effect={width=0.2,delta=0.6}]
+\definefontfeature[effect-4-0-0]
+ [effect={width=0.4,delta=0}]
+\definefontfeature[effect-4-3-0]
+ [effect={width=0.4,delta=0.3}]
+\definefontfeature[effect-4-6-0]
+ [effect={width=0.4,delta=0.6}]
+\definefontfeature[effect-8-0-0]
+ [effect={width=0.8,delta=0}]
+\definefontfeature[effect-8-3-0]
+ [effect={width=0.8,delta=0.3}]
+\definefontfeature[effect-8-6-0]
+ [effect={width=0.8,delta=0.6}]
+\definefontfeature[effect-8-6-2]
+ [effect={width=0.8,delta=0.6,factor=0.2}]
+\definefontfeature[effect-8-6-4]
+ [effect={width=0.8,delta=0.6,factor=0.4}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+And a helper macro:
+
+\startbuffer
+\starttexdefinition ShowOneSample #1#2#3#4
+ %\testpage[5]
+ %\startsubsubsubject[title=\type{#1}]
+ \start
+ \definedfont[#2*#3 @ 10pt]
+ \setupinterlinespace
+ \startlinecorrection
+ \showglyphs \showfontkerns
+ \scale[sx=#4,sy=#4]{effective n\"ots}
+ \stoplinecorrection
+ \blank[samepage]
+ \dontcomplain
+ \showfontkerns
+ \margintext{\tt\txx\maincolor#1}
+ \samplefile{ward}
+ \par
+ \stop
+ %\stopsubsubsubject
+\stoptexdefinition
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\starttexdefinition ShowSamples #1
+ \startsubsubject[title=#1]
+ \start
+ \ShowOneSample{no effect} {#1}{default} {5}
+ \ShowOneSample{width=0.2\\delta=0} {#1}{default,effect-2-0-0}{5}
+ \ShowOneSample{width=0.2\\delta=0.3} {#1}{default,effect-2-3-0}{5}
+ \ShowOneSample{width=0.2\\delta=0.6} {#1}{default,effect-2-6-0}{5}
+ \ShowOneSample{width=0.4\\delta=0} {#1}{default,effect-4-0-0}{5}
+ \ShowOneSample{width=0.4\\delta=0.3} {#1}{default,effect-4-3-0}{5}
+ \ShowOneSample{width=0.4\\delta=0.6} {#1}{default,effect-4-6-0}{5}
+ \ShowOneSample{width=0.8\\delta=0} {#1}{default,effect-8-0-0}{5}
+ \ShowOneSample{width=0.8\\delta=0.3} {#1}{default,effect-8-3-0}{5}
+ \ShowOneSample{width=0.8\\delta=0.6} {#1}{default,effect-8-6-0}{5}
+ \ShowOneSample{width=0.8\\delta=0.6\\factor=0.2}{#1}{default,effect-8-6-2}{5}
+ \ShowOneSample{width=0.8\\delta=0.6\\factor=0.4}{#1}{default,effect-8-6-4}{5}
+ \stop
+ \stopsubsubject
+\stoptexdefinition
+
+We show some extremes, using the font used in this document, so don't complain
+about beauty here.
+
+\texdefinition{ShowSamples}{Serif}
+\texdefinition{ShowSamples}{SerifBold}
+\texdefinition{ShowSamples}{SerifItalic}
+\texdefinition{ShowSamples}{SerifBoldItalic}
+\texdefinition{ShowSamples}{Sans}
+
+\start
+ \setupalign[flushleft,broad,nothyphenated,verytolerant]
+ \texdefinition{ShowSamples}{Mono}
+\stop
+
+\stopsection
+
+\startsection[title=Pitfall]
+
+The quality of the result depends on how the font is made. For instance,
+ligatures can be whole shapes, replaced glyphs and|/|or repositioned
+glyphs, or whatever the designer thinks reasonable. In \in {figure}
+[fig:ligature-effects-mess] this is demonstrated. We use the following
+feature sets:
+
+\startbuffer
+\definefontfeature
+ [demo-1]
+ [default]
+ [hlig=yes]
+
+\definefontfeature
+ [demo-2]
+ [demo-1]
+ [effect=0.5]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startplacefigure[title={The effects on ligatures.},reference=fig:ligature-effects-mess]
+ \startcombination[1*3]
+ { \scale [scale=5000] {
+ \definedfont[texgyrepagellaregular*demo-1]fist effe
+ \par
+ \definedfont[texgyrepagellaregular*demo-2]fist effe
+ } } {
+ texgyre pagella regular
+ } { \scale [scale=5000] {
+ \definedfont[cambria*demo-1]fist effe
+ \par
+ \definedfont[cambria*demo-2]fist effe
+ } } {
+ cambria
+ } { \scale [scale=5000] {
+ \definedfont[ebgaramond12regular*demo-1]fist effe
+ \par
+ \definedfont[ebgaramond12regular*demo-2]fist effe
+ } } {
+ ebgaramond 12 regular
+ }
+ \stopcombination
+\stopplacefigure
+
+Normally the artifacts (as in the fi ligature in ebgaramond as of 2018) will go
+unnoticed at small sizes. Also, when the user has a low|-|res display or printer,
+or when the publisher is one of those who print a scanned \PDF, the reader might
+not notice it at all. Most readers don't even know what to look at.
+
+\stopsection
+
+\startsection[title=A modern Modern]
+
+So how can we make an effective set of Latin Modern that fits in today's look and
+feel? Of course this is a very subjective experiment but we've seen experiments
+with these fonts before (like the cm super collections). Here is an example of
+a typescript definition:
+
+\starttyping
+\starttypescriptcollection[modernlatin]
+
+ \definefontfeature[lm-rm-regular][effect={width=0.15,delta=1.00}]
+ \definefontfeature[lm-rm-bold] [effect={width=0.30,delta=1.00}]
+ \definefontfeature[lm-ss-regular][effect={width=0.10,delta=1.00}]
+ \definefontfeature[lm-ss-bold] [effect={width=0.20,delta=1.00}]
+ \definefontfeature[lm-tt-regular][effect={width=0.15,delta=1.00}]
+ \definefontfeature[lm-tt-bold] [effect={width=0.30,delta=1.00}]
+ \definefontfeature[lm-mm-regular][effect={width=0.15,delta=1.00}]
+ \definefontfeature[lm-mm-bold] [effect={width=0.30,delta=1.00}]
+
+ \starttypescript [serif] [modern-latin]
+ \definefontsynonym
+ [Serif] [file:lmroman10-regular]
+ [features={default,lm-rm-regular}]
+ \definefontsynonym
+ [SerifItalic] [file:lmroman10-italic]
+ [features={default,lm-rm-regular}]
+ \definefontsynonym
+ [SerifSlanted] [file:lmromanslant10-regular]
+ [features={default,lm-rm-regular}]
+ \definefontsynonym
+ [SerifBold] [file:lmroman10-regular]
+ [features={default,lm-rm-bold}]
+ \definefontsynonym
+ [SerifBoldItalic] [file:lmroman10-italic]
+ [features={default,lm-rm-bold}]
+ \definefontsynonym
+ [SerifBoldSlanted] [file:lmromanslant10-regular]
+ [features={default,lm-rm-bold}]
+ \stoptypescript
+
+ \starttypescript [sans] [modern-latin]
+ \definefontsynonym
+ [Sans] [file:lmsans10-regular]
+ [features={default,lm-ss-regular}]
+ \definefontsynonym
+ [SansItalic] [file:lmsans10-oblique]
+ [features={default,lm-ss-regular}]
+ \definefontsynonym
+ [SansSlanted] [file:lmsans10-oblique]
+ [features={default,lm-ss-regular}]
+ \definefontsynonym
+ [SansBold] [file:lmsans10-regular]
+ [features={default,lm-ss-bold}]
+ \definefontsynonym
+ [SansBoldItalic] [file:lmsans10-oblique]
+ [features={default,lm-ss-bold}]
+ \definefontsynonym
+ [SansBoldSlanted] [file:lmsans10-oblique]
+ [features={default,lm-ss-bold}]
+ \stoptypescript
+
+ \starttypescript [mono] [modern-latin]
+ \definefontsynonym
+ [Mono] [file:lmmono10-regular]
+ [features={default,lm-tt-regular}]
+ \definefontsynonym
+ [MonoItalic] [file:lmmono10-italic]
+ [features={default,lm-tt-regular}]
+ \definefontsynonym
+ [MonoSlanted] [file:lmmonoslant10-regular]
+ [features={default,lm-tt-regular}]
+ \definefontsynonym
+ [MonoBold] [file:lmmono10-regular]
+ [features={default,lm-tt-bold}]
+ \definefontsynonym
+ [MonoBoldItalic] [file:lmmono10-italic]
+ [features={default,lm-tt-bold}]
+ \definefontsynonym
+ [MonoBoldSlanted] [file:lmmonoslant10-regular]
+ [features={default,lm-tt-bold}]
+ \stoptypescript
+
+ \starttypescript [math] [modern-latin]
+ \loadfontgoodies[lm]
+ \definefontsynonym
+ [MathRoman] [file:latinmodern-math-regular.otf]
+ [features={math\mathsizesuffix,lm-mm-regular,mathextra},
+ goodies=lm]
+ \definefontsynonym
+ [MathRomanBold] [file:latinmodern-math-regular.otf]
+ [features={math\mathsizesuffix,lm-mm-bold,mathextra},
+ goodies=lm]
+ \stoptypescript
+
+ \starttypescript [modern-latin]
+ \definetypeface [\typescriptone]
+ [rm] [serif] [modern-latin] [default]
+ \definetypeface [\typescriptone]
+ [ss] [sans] [modern-latin] [default]
+ \definetypeface [\typescriptone]
+ [tt] [mono] [modern-latin] [default]
+ \definetypeface [\typescriptone]
+ [mm] [math] [modern-latin] [default]
+ \quittypescriptscanning
+ \stoptypescript
+
+\stoptypescriptcollection
+\stoptyping
+
+We now show some more samples, for which we use \type {zapf.tex}.
+
+\startbuffer
+ {\tf\samplefile{zapf}}\blank {\bf\samplefile{zapf}}\blank
+ {\it\samplefile{zapf}}\blank {\bi\samplefile{zapf}}\blank
+ {\sl\samplefile{zapf}}\blank {\bs\samplefile{zapf}}\blank
+\stopbuffer
+
+\startsubsubsubject[title={\type{\switchtobodyfont[modern-latin,rm,10pt]}}]
+ \start
+ \switchtobodyfont[modern-latin,rm,10pt]
+ \getbuffer
+ \stop
+\stopsubsubsubject
+
+\startsubsubsubject[title={\type{\switchtobodyfont[modern-latin,ss,10pt]}}]
+ \start
+ \switchtobodyfont[modern-latin,ss,10pt]
+ \getbuffer
+ \stop
+\stopsubsubsubject
+
+\startsubsubsubject[title={\type{\switchtobodyfont[modern-latin,tt,10pt]}}]
+ \start
+ \switchtobodyfont[modern-latin,tt,10pt]
+ \setupalign[flushleft,broad,nothyphenated,verytolerant]
+ \getbuffer
+ \stop
+\stopsubsubsubject
+
+\stopsection
+
+\startsection[title=Finetuning]
+
+In practice we only need to compensate the width but can leave the height
+and depth untouched. In the following examples we see the normal bold next
+to the regular as well as the boldened version. For this we will use a couple
+of definitions:
+
+\startbuffer
+\definefontfeature[lm-bald][effect={width=0.25,effect=both}]
+\definefontfeature[pg-bald][effect={width=0.25,effect=both}]
+\definefontfeature[dj-bald][effect={width=0.35,effect=both}]
+
+\definefontfeature
+ [lm-bold]
+ [effect={width=0.25,hdelta=0,ddelta=0,effect=both},
+ extend=1.10]
+
+\definefontfeature
+ [pg-bold]
+ [effect={width=0.25,hdelta=0,ddelta=0,effect=both},
+ extend=1.00]
+
+\definefontfeature
+ [dj-bold]
+ [effect={width=0.35,hdelta=0,ddelta=0,effect=both},
+ extend=1.05]
+
+\definefont[lmbald][Serif*default,lm-bald sa d]
+\definefont[pgbald][Serif*default,pg-bald sa d]
+\definefont[djbald][Serif*default,dj-bald sa d]
+
+\definefont[lmbold][Serif*default,lm-bold sa d]
+\definefont[pgbold][Serif*default,pg-bold sa d]
+\definefont[djbold][Serif*default,dj-bold sa d]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+We can combine the extend and effect features to get a bold running as wide as a
+normal bold. We limit the height and depth so that we can use regular and bold in
+the same sentence. It's all a matter of taste, but some control is there.
+
+\starttabulate[|l|l|l|l|]
+\NC
+ \BC
+ \tt modern \BC
+ \tt pagella \BC
+ \tt dejavu \NC
+\NR
+\NC
+ \type{\tfd} \NC
+ \switchtobodyfont [modern,24pt]\strut\ruledhbox{\tfd ABC}\NC
+ \switchtobodyfont[pagella,24pt]\strut\ruledhbox{\tfd ABC}\NC
+ \switchtobodyfont [dejavu,24pt]\strut\ruledhbox{\tfd ABC}\NC
+\NR
+\NC
+ \type{\..bald} \NC
+ \switchtobodyfont [modern,24pt]\strut\ruledhbox{\lmbald ABC}\NC
+ \switchtobodyfont[pagella,24pt]\strut\ruledhbox{\pgbald ABC}\NC
+ \switchtobodyfont [dejavu,24pt]\strut\ruledhbox{\djbald ABC}\NC
+\NR
+\NC
+ \type{\bfd} \NC
+ \switchtobodyfont [modern,24pt]\strut\ruledhbox{\bfd ABC}\NC
+ \switchtobodyfont[pagella,24pt]\strut\ruledhbox{\bfd ABC}\NC
+ \switchtobodyfont [dejavu,24pt]\strut\ruledhbox{\bfd ABC}\NC
+\NR
+\NC
+ \type{\..bold} \NC
+ \switchtobodyfont [modern,24pt]\strut\ruledhbox{\lmbold ABC}\NC
+ \switchtobodyfont[pagella,24pt]\strut\ruledhbox{\pgbold ABC}\NC
+ \switchtobodyfont [dejavu,24pt]\strut\ruledhbox{\djbold ABC}\NC
+\NR
+\stoptabulate
+
+Let's take another go at Pagella. We define a few features, colors
+and fonts first:
+
+\startbuffer
+\definefontfeature
+ [pg-fake-1]
+ [effect={width=0.25,effect=both}]
+
+\definefontfeature
+ [pg-fake-2]
+ [effect={width=0.25,hdelta=0,ddelta=0,effect=both}]
+
+\definefont[pgregular] [Serif*default]
+\definefont[pgbold] [SerifBold*default]
+\definefont[pgfakebolda][Serif*default,pg-fake-1]
+\definefont[pgfakeboldb][Serif*default,pg-fake-2]
+
+\definecolor[color-pgregular] [t=.5,a=1,r=.6]
+\definecolor[color-pgbold] [t=.5,a=1,g=.6]
+\definecolor[color-pgfakebolda][t=.5,a=1,b=.6]
+\definecolor[color-pgfakeboldb][t=.5,a=1,r=.6,g=.6]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+When we apply these we get the results of \in {figure} [fig:pagella-compared]
+while we show the same overlayed in \in {figure} [fig:pagella-overlayed]. As you
+can see, the difference between real bold and fake bold is subtle: the inner
+shape of the \quote {o} differs. Also note that the position of the accents
+doesn't change in the vertical direction but moves along with the width.
+
+\def\SampleWord{\^o\"ep\c s}
+
+\startplacefigure[title={Four pagella style variants compared.},reference=fig:pagella-compared]
+ \startcombination[2*2]
+ {
+ \scale [scale=7500] {
+ \ruledhbox{\showglyphs\pgregular \SampleWord}
+ }
+ } {
+ regular (red)
+ } {
+ \scale [scale=7500] {
+ \ruledhbox{\showglyphs\pgbold \SampleWord}
+ }
+ } {
+ bold (green)
+ } {
+ \scale [scale=7500] {
+ \ruledhbox{\showglyphs\pgfakebolda \SampleWord}
+ }
+ } {
+ fakebolda (blue)
+ } {
+ \scale [scale=7500] {
+ \ruledhbox{\showglyphs\pgfakeboldb \SampleWord}
+ }
+ } {
+ fakeboldb (yellow)
+ }
+ \stopcombination
+\stopplacefigure
+
+\startplacefigure[title={Four pagella style variants overlayed.},reference=fig:pagella-overlayed]
+ \startcombination[2*3]
+ {
+ \scale [scale=7500] {
+ \startoverlay
+ {\color[color-pgregular] {\pgregular \SampleWord}}
+ {\color[color-pgbold] {\pgbold \SampleWord}}
+ \stopoverlay
+ }
+ } {
+ bold over regular
+ } {
+ \scale [scale=7500] {
+ \startoverlay
+ {\color[color-pgregular] {\pgregular \SampleWord}}
+        {\color[color-pgfakebolda]{\pgfakebolda \SampleWord}}
+ \stopoverlay
+ }
+ } {
+ fakebolda over regular
+ } {
+ \scale [scale=7500] {
+ \startoverlay
+ {\color[color-pgregular] {\pgregular \SampleWord}}
+        {\color[color-pgfakeboldb]{\pgfakeboldb \SampleWord}}
+ \stopoverlay
+ }
+ } {
+ fakeboldb over regular
+ } {
+ \scale [scale=7500] {
+ \startoverlay
+ {\color[color-pgbold] {\pgbold \SampleWord}}
+ {\color[color-pgfakeboldb]{\pgfakeboldb \SampleWord}}
+ \stopoverlay
+ }
+ } {
+ fakeboldb over bold
+ } {
+ \scale [scale=7500] {
+ \startoverlay
+ {\color[color-pgfakebolda]{\pgfakebolda \SampleWord}}
+ {\color[color-pgfakeboldb]{\pgfakeboldb \SampleWord}}
+ \stopoverlay
+ }
+ } {
+ fakeboldb over fakebolda
+ } {
+ \scale [scale=7500] {
+ \startoverlay
+ {\color[color-pgregular] {\pgregular \SampleWord}}
+ {\color[color-pgbold] {\pgbold \SampleWord}}
+ {\color[color-pgfakebolda]{\pgfakebolda \SampleWord}}
+ {\color[color-pgfakeboldb]{\pgfakeboldb \SampleWord}}
+ \stopoverlay
+ }
+ } {
+ all four overlayed
+ }
+ \stopcombination
+\stopplacefigure
+
+\stopsection
+
+\startsection[title=The code]
+
+The amount of code involved is not that large and is a nice illustration of what
+\LUATEX\ provides (I have omitted a few lines of tracing and error reporting).
+The only thing added elsewhere, in the font scaler, is that we pass the \type
+{mode} and \type {width} parameters to \TEX\ so that they get used in the
+backend to inject the few operators needed.
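+The \type {mode} values map one|-|to|-|one onto the \PDF\ text render modes, as
+set by the \type {Tr} operator (0~fill, 1~stroke, 2~fill then stroke,
+3~invisible), and the \type {width} ends up as the stroke width set by the
+\type {w} operator. A content stream fragment (just a sketch, with
+illustrative values) then looks like this:
+
+\starttyping
+0.3 w 2 Tr   % stroke width plus render mode: fill then stroke
+...          % the text showing operators
+0 Tr         % back to fill-only rendering
+\stoptyping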
+
+\starttyping
+local effects = {
+ inner = 0,
+ outer = 1,
+ both = 2,
+ hidden = 3,
+}
+
+local function initialize(tfmdata,value)
+ local spec
+ if type(value) == "number" then
+ spec = { width = value }
+ else
+ spec = utilities.parsers.settings_to_hash(value)
+ end
+ local effect = spec.effect or "both"
+ local width = tonumber(spec.width) or 0
+ local mode = effects[effect]
+ if mode then
+ local factor = tonumber(spec.factor) or 0
+        local hfactor = tonumber(spec.hfactor) or factor
+        local vfactor = tonumber(spec.vfactor) or factor
+ local delta = tonumber(spec.delta) or 1
+ local wdelta = tonumber(spec.wdelta) or delta
+ local hdelta = tonumber(spec.hdelta) or delta
+ local ddelta = tonumber(spec.ddelta) or hdelta
+ tfmdata.parameters.mode = mode
+ tfmdata.parameters.width = width * 1000
+ tfmdata.properties.effect = {
+ effect = effect, width = width,
+ wdelta = wdelta, factor = factor,
+ hdelta = hdelta, hfactor = hfactor,
+ ddelta = ddelta, vfactor = vfactor,
+ }
+ end
+end
+
+local function manipulate(tfmdata)
+ local effect = tfmdata.properties.effect
+ if effect then
+ local characters = tfmdata.characters
+ local parameters = tfmdata.parameters
+ local multiplier = effect.width * 100
+ local wdelta = effect.wdelta * parameters.hfactor * multiplier
+ local hdelta = effect.hdelta * parameters.vfactor * multiplier
+ local ddelta = effect.ddelta * parameters.vfactor * multiplier
+ local hshift = wdelta / 2
+ local factor = (1 + effect.factor) * parameters.factor
+ local hfactor = (1 + effect.hfactor) * parameters.hfactor
+ local vfactor = (1 + effect.vfactor) * parameters.vfactor
+ for unicode, char in next, characters do
+ local oldwidth = char.width
+ local oldheight = char.height
+ local olddepth = char.depth
+ if oldwidth and oldwidth > 0 then
+ char.width = oldwidth + wdelta
+ char.commands = {
+ { "right", hshift },
+ { "char", unicode },
+ }
+ end
+ if oldheight and oldheight > 0 then
+ char.height = oldheight + hdelta
+ end
+ if olddepth and olddepth > 0 then
+ char.depth = olddepth + ddelta
+ end
+ end
+ parameters.factor = factor
+ parameters.hfactor = hfactor
+ parameters.vfactor = vfactor
+ end
+end
+
+local specification = {
+ name = "effect",
+ description = "apply effects to glyphs",
+ initializers = {
+ base = initialize,
+ node = initialize,
+ },
+ manipulators = {
+ base = manipulate,
+ node = manipulate,
+ },
+}
+
+fonts.handlers.otf.features.register(specification)
+fonts.handlers.afm.features.register(specification)
+\stoptyping
+
+The real code is slightly more complex because we want to stack virtual features
+properly but the principle is the same.
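+As an aside, the core arithmetic of \type {manipulate} is easy to model in
+isolation. The following simplified fragment (a sketch, not the actual
+\CONTEXT\ code) shows what happens to one glyph: its width grows by a delta
+proportional to the effect width, and the glyph is recentered in its wider
+bounding box via a virtual \type {right} command:
+
+\starttyping
+-- hfactor is the font's horizontal scaling factor
+local function widen(char,unicode,width,wdelta,hfactor)
+    local delta = wdelta * hfactor * width * 100
+    char.width    = char.width + delta
+    char.commands = {
+        { "right", delta/2 }, -- shift right by half the delta
+        { "char", unicode },  -- then draw the original glyph
+    }
+end
+\stoptyping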
+
+\stopsection
+
+\startsection[title=Arabic]
+
+It is tempting to test effects with arabic, but we need to keep in mind that
+this asks for some more support in the \CONTEXT\ font handler. Let's define
+some features.
+
+\startbuffer
+\definefontfeature
+ [bolden-arabic-1]
+ [effect={width=0.4}]
+
+\definefontfeature
+ [bolden-arabic-2]
+ [effect={width=0.4,effect=outer}]
+
+\definefontfeature
+ [bolden-arabic-3]
+ [effect={width=0.5,wdelta=0.5,ddelta=.2,hdelta=.2,factor=.1}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\startbuffer
+
+\setupalign
+ [righttoleft]
+
+\setupinterlinespace
+ [1.5]
+
+\start
+ \definedfont[arabictest*arabic,bolden-arabic-1 @ 30pt]
+ \samplefile{khatt-ar}\par
+ \definedfont[arabictest*arabic,bolden-arabic-2 @ 30pt]
+ \samplefile{khatt-ar}\par
+ \definedfont[arabictest*arabic,bolden-arabic-3 @ 30pt]
+ \samplefile{khatt-ar}\par
+\stop
+\stopbuffer
+
+With \MICROSOFT\ Arabtype, the \type {khatt-ar.tex} sample looks as follows:
+
+\typebuffer \start \definefontsynonym[arabictest][arabtype] \getbuffer\stop
+
+And with Idris' Husayni we get:
+
+\typebuffer \start \definefontsynonym[arabictest][husayni] \getbuffer\stop
+
+Actually, the following settings are quite okay. We don't overdo the bold here,
+and to get a distinction we make the original thinner.
+
+\startbuffer
+\definefontfeature[effect-ar-thin] [effect={width=0.01,effect=inner}]
+\definefontfeature[effect-ar-thick][effect={width=0.20,extend=1.05}]
+\stopbuffer
+
+\typebuffer \getbuffer
+
+\start
+ \setupalign
+ [righttoleft]
+
+ \setupinterlinespace
+ [1.5]
+
+ \definedfont[husayni*arabic,effect-ar-thin @ 30pt]
+ \samplefile{khatt-ar}\par
+ \definedfont[husayni*arabic,effect-ar-thick @ 30pt]
+ \samplefile{khatt-ar}\par
+\stop
+
+The results are acceptable at small sizes but at larger sizes you will start to
+see kerning, anchoring and cursive artifacts. The outline examples show that the
+amount of overlap differs per font, and the more overlap we have, the better
+boldening will work.
+
+\startMPinclusions
+ def DrawShapes(expr how) =
+ def SampleShapes(expr offset, pw, xc, xs, xt, yc, ys, yt, txt, more) =
+ numeric l ; l := pw * mm ;
+ picture p ; p := image (
+ draw fullcircle scaled 10 ;
+ draw fullcircle scaled 3 shifted (-3+xc ,8+yc) withcolor "darkred" ;
+ draw fullsquare scaled 3 shifted ( 6+xs ,7+ys) withcolor "darkblue";
+ draw fulltriangle scaled 4 shifted ( 6+xt+5,6+yt) withcolor "darkgreen";
+ ) shifted (offset,0) scaled mm ;
+ draw p
+ withpen pencircle
+ if how = 2 :
+ xscaled l yscaled (l/2) rotated 30 ;
+ else :
+ scaled l ;
+ fi ;
+ draw boundingbox p
+ withcolor "darkyellow" ;
+ draw textext(txt)
+ shifted (xpart center p, -8mm) ;
+ draw textext(more)
+ shifted (xpart center p, -11mm) ;
+ enddef ;
+ SampleShapes( 0,1, 0,0,0, 0, 0, 0, "\tinyfont \setstrut \strut original", "\tinyfont \setstrut \strut ") ;
+ SampleShapes( 25,2, 0,0,0, 0, 0, 0, "\tinyfont \setstrut \strut instance", "\tinyfont \setstrut \strut ") ;
+ SampleShapes( 50,2,-1,1,0, 0, 0, 0, "\tinyfont \setstrut \strut mark", "\tinyfont \setstrut \strut x only") ;
+ SampleShapes( 75,2,-1,1,1, 0, 0, 0, "\tinyfont \setstrut \strut mark + mkmk","\tinyfont \setstrut \strut x only") ;
+ SampleShapes(100,2,-1,1,1, 1, 1, 1, "\tinyfont \setstrut \strut mark + mkmk","\tinyfont \setstrut \strut x and y") ;
+ SampleShapes(125,2,-1,2,2,-1/2,-1/2,-1/2,"\tinyfont \setstrut \strut mark + mkmk","\tinyfont \setstrut \strut x and -y") ;
+ enddef ;
+\stopMPinclusions
+
+In arabic (and sometimes latin) fonts the marks (or accents in latin) are
+attached to base shapes, and normally one will use the \type {mark} feature
+to anchor a mark to a base character or a specific component of a ligature.
+The \type {mkmk}
+feature is then used to anchor marks to other marks. Consider the following
+example.
+
+\startlinecorrection
+\scale
+ [width=\textwidth]
+ {\startMPcode DrawShapes(1) ; \stopMPcode}
+\stoplinecorrection
+
+We start with \type {original}: a base shape with three marks: the red circle and
+blue square anchor to the base and the green triangle anchors to the blue square.
+When we bolden, the shapes will start touching. In the case of latin scripts
+it's normal to keep the accents at the same height, which is why the third
+picture only shifts in the horizontal direction. The fourth picture demonstrates
+that we need to compensate the two bound marks. One can decide to move the lot
+up, as in the fifth picture, but that is not an option here.
+
+Matters can be even more complex when a non|-|circular pen is introduced. In
+that case a transformation from one font to another using the transformed
+\OPENTYPE\ positioning logic (values) is even more tricky, and unless one knows
+the properties (and usage) of a mark it makes no sense at all. Actually, the
+sixth variant is probably nicer here, but there we actually move the marks
+down!
+
+\startlinecorrection
+\scale
+ [width=\textwidth]
+ {\startMPcode DrawShapes(2) ; \stopMPcode}
+\stoplinecorrection
+
+For the effect feature this means that, when it gets applied to such a font,
+only small values work out well.
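+In practice that means staying on the conservative side for mark|-|heavy
+fonts, for instance something like this (the feature name and value are only
+illustrative):
+
+\starttyping
+\definefontfeature
+  [bolden-gently]
+  [effect={width=0.05,effect=both}]
+\stoptyping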
+
+\stopsection
+
+\startsection[title=Math]
+
+Math is dubious, as there are all kinds of positioning involved. Future
+versions might deal with this, although bolder math (math itself has bold, so
+actually we're talking of bold with some heavy) is needed for titling. If we
+keep that in mind we can actually just bolden math, and probably most will come
+out reasonably well. One of the potential troublemakers is the radical (root)
+sign that can be bound to a rule. Bumping the rules is no big deal, and neither
+is patching the relevant radical properties, so indeed we can do:
+
+\startbuffer[mathblob]
+2\times\sqrt{\frac{\sqrt{\frac{\sqrt{2}}{\sqrt{2}}}}
+ {\sqrt{\frac{\sqrt{2}}{\sqrt{2}}}}}
+\stopbuffer
+
+\startbuffer
+\switchtobodyfont [modernlatin,17.3pt]
+$
+ \mr \darkblue \getbuffer[mathblob] \quad
+ \mb \darkgreen \getbuffer[mathblob]
+$
+\stopbuffer
+
+\typebuffer \blank \start \getbuffer \stop \blank
+
+Where the \type {mathblob} buffer is:
+
+\typebuffer[mathblob]
+
+Here you also see a fraction rule that has been bumped. In display mode we
+get:
+
+\startbuffer
+\switchtobodyfont[modernlatin,17.3pt]
+\startformula
+ \mr \darkblue \getbuffer[mathblob] \quad
+ \mb \darkgreen \getbuffer[mathblob]
+\stopformula
+\stopbuffer
+
+\typebuffer \blank \start \getbuffer \stop \blank
+
+Extensibles behave well too:
+
+\startbuffer
+\switchtobodyfont [modernlatin,17.3pt]
+\dostepwiserecurse {1} {30} {5} {
+ $
+ \mr \sqrt{\blackrule[width=2mm,height=#1mm,color=darkblue]}
+ \quad
+ \mb \sqrt{\blackrule[width=2mm,height=#1mm,color=darkgreen]}
+ $
+}
+\stopbuffer
+
+\typebuffer \blank \start \getbuffer \stop \blank
+
+\definecolor[colormr] [t=.5,a=1,b=.6]
+\definecolor[colormb] [t=.5,a=1,g=.6]
+
+In \in {figure} [fig:regular-over-bold] we overlay regular and bold. The result
+doesn't look that bad after all, does it? It did, however, take a bit of
+experimenting and a fix in \LUATEX: pick up the value from the font instead of
+the currently used (but frozen) math parameter.
+
+\startplacefigure[title={Modern Latin regular over bold.},reference=fig:regular-over-bold]
+\switchtobodyfont[modernlatin,17.3pt]
+\scale[width=.25\textwidth]{\startoverlay
+ {\color[colormb]{$\mb\sqrt{\frac{1}{x}}$}}
+ {\color[colormr]{$ \sqrt{\frac{1}{x}}$}}
+\stopoverlay}
+\stopplacefigure
+
+In case you wonder how normal Latin Modern bold currently looks, here we go:
+
+\startbuffer
+\switchtobodyfont[latinmodern,17.3pt]
+\startformula
+ \mr \darkblue \getbuffer[mathblob] \quad
+ \mb \darkgreen \getbuffer[mathblob]
+\stopformula
+\stopbuffer
+
+\typebuffer \blank \start \getbuffer \stop \blank
+
+\unexpanded\def\ShowMathSample#1%
+ {\switchtobodyfont[#1,14.4pt]%
+ \mathematics{%
+ \mr \darkblue \getbuffer[mathblob] \quad
+ \mb \darkgreen \getbuffer[mathblob]
+ }}
+
+\unexpanded\def\ShowMathCaption#1%
+ {\switchtobodyfont[#1]%
+ #1:
+ $
+ {\mr2\enspace \scriptstyle2\enspace \scriptscriptstyle2}
+ \enspace
+ {\mb2\enspace \scriptstyle2\enspace \scriptscriptstyle2}
+ $}
+
+\startcombination[3*2]
+ {\ShowMathSample {dejavu}} {\ShowMathCaption{dejavu}}
+ {\ShowMathSample{pagella}} {\ShowMathCaption{pagella}}
+ {\ShowMathSample {termes}} {\ShowMathCaption{termes}}
+ {\ShowMathSample {bonum}} {\ShowMathCaption{bonum}}
+ {\ShowMathSample {schola}} {\ShowMathCaption{schola}}
+ {\ShowMathSample{cambria}} {\ShowMathCaption{cambria}}
+\stopcombination
+
+I must admit that I cheat a bit. In order to get better looking pseudo math we
+need to extend the shapes horizontally as well as squeeze them a bit
+vertically. So, the real effect definitions look more like this:
+
+\starttyping
+\definefontfeature
+ [boldened-30]
+ [effect={width=0.3,extend=1.15,squeeze=0.985,%
+ delta=1,hdelta=0.225,ddelta=0.225,vshift=0.225}]
+\stoptyping
+
+and because we can calculate the funny values sort of automatically, this gets
+simplified to:
+
+\starttyping
+\definefontfeature
+ [boldened-30]
+ [effect={width=0.30,auto=yes}]
+\stoptyping
+
+We leave it to your imagination to figure out what happens behind the scenes.
+Just think of some virtual font magic combined with the engine|-|supported \type
+{extend} and \type {squeeze} functions. And because we already support bold math
+in \CONTEXT, you will get it when you are doing bold titling.
+
+\startbuffer
+\def\MathSample
+ {\overbrace{2 +
+ \sqrt{\frac{\sqrt{\frac{\sqrt{2}}{\sqrt{2}}}}
+ {\sqrt{\frac{\sqrt{\underbar{2}}}{\sqrt{\overbar{2}}}}}}}}
+
+\definehead
+ [mysubject]
+ [subject]
+
+\setuphead
+ [mysubject]
+ [style=\tfc,
+ color=darkblue,
+ before=\blank,
+ after=\blank]
+
+\mysubject{Regular\quad$\MathSample\quad\mb\MathSample$}
+
+\setuphead
+ [mysubject]
+ [style=\bfc,
+ color=darkred]
+
+\mysubject{Bold \quad$\MathSample\quad\mb\MathSample$}
+\stopbuffer
+
+\typebuffer
+
+\getbuffer
+
+Of course one can argue about the right values for boldening and compensation
+of dimensions, so don't expect the current predefined related features to be
+frozen yet.
+
+For sure this mechanism will create more fonts than normal, but fortunately it
+can use the low|-|level optimizations for sharing instances, so in the end the
+overhead is not that large. This chapter uses 36 different fonts, creates 270
+font instances (different scaling and properties) of which 220 are shared in the
+backend. The load time is 5 seconds in \LUATEX\ and 1.2 seconds in \LUAJITTEX\ on
+a somewhat old laptop with an i7-3840QM processor running 64 bit \MSWINDOWS. Of
+course we load a lot of bodyfonts at different sizes so in a normal run the extra
+loading is limited to just a couple of extra instances for math (normally 3, one
+for each math size).
+
+\stopsection
+
+\startsection[title=Conclusion]
+
+So what can we conclude? When we started with \LUATEX, right from the start
+\CONTEXT\ supported true \UNICODE\ math by using virtual \UNICODE\ math fonts.
+One of the objectives of the \TEX Gyre project is to come up with a robust
+complete set of math fonts, text fonts with a bunch of useful symbols, and
+finally a subset bold math font for titling. Now we have real \OPENTYPE\ math
+fonts, although they are still somewhat experimental. Because we're impatient,
+we now provide bold math by using effects, but the future will tell to what
+extent the real bold math fonts will differ and be more pleasant to look at.
+After all, what we describe here is just an experiment that got a bit out of
+hand.
+
+% And if you wonder if this kind of messing with fonts is okay? Well, you don't
+% know what specs we sometimes get (and then ignore).
+
+\stopsection
+
+\stopchapter
+
+\stopcomponent
diff --git a/doc/context/sources/general/manuals/onandon/onandon-performance.tex b/doc/context/sources/general/manuals/onandon/onandon-performance.tex
index 279383a8c..b1b34443d 100644
--- a/doc/context/sources/general/manuals/onandon/onandon-performance.tex
+++ b/doc/context/sources/general/manuals/onandon/onandon-performance.tex
@@ -28,8 +28,8 @@ So what exactly does performance refer to? If you use \CONTEXT\ there are
probably only two things that matter:
\startitemize[packed]
-\startitem How long does one run take. \stopitem
-\startitem How many runs do I need. \stopitem
+\startitem How long does one run take? \stopitem
+\startitem How many runs do I need? \stopitem
\stopitemize
Processing speed is reported at the end of a run in terms of seconds spent on the
@@ -50,72 +50,74 @@ i7-3840QM as reference. A simple
\stoptext
\stoptyping
-document reports 0.4 seconds but as we wrap the run in an \type {mtxrun}
-management run we have an additional 0.3 overhead (auxiliary file handling, \PDF\
-viewer management, etc). This includes loading the Latin Modern font. With
-\LUAJITTEX\ these times are below 0.3 and 0.2 seconds. It might look like much
-overhead but in an edit|-|preview runs it feels snappy. One can try this:
+document reports 0.4 seconds but, as we wrap the run in an \type {mtxrun}
+management run, we have an additional 0.3 overhead (auxiliary file handling,
+\PDF\ viewer management, etc). This includes loading the Latin Modern font. With
+\LUAJITTEX, these times are below 0.3 and 0.2 seconds. It might look like a lot
+of overhead, but in an edit|-|preview run it feels snappy. One can try this:
\starttyping
\stoptext
\stoptyping
-which bring down the time to about 0.2 seconds for both engines but as it doesn't
-do anything useful that is is no practice.
+which brings down the time to about 0.2 seconds for both engines, but it doesn't
+do anything useful in practice.
-Finishing a document is not that demanding because most gets flushed as we go.
-The more (large) fonts we use, the longer it takes to finish a document but on
+Finishing a document is not that demanding, because most gets flushed as we go.
+The more (large) fonts we use, the longer it takes to finish a document, but on
the average that time is not worth noticing. The main runtime contribution comes
from processing the pages.
Okay, this is not always true. For instance, if we process a 400 page book from
2500 small \XML\ files with multiple graphics per page, there is a little
-overhead in loading the files and constructing the \XML\ tree as well as in
-inserting the graphics but in such cases one expects a few seconds more runtime. The
-\METAFUN\ manual has some 450 pages with over 2500 runtime generated \METAPOST\
-graphics. It has color, uses quite some fonts, has lots of font switches
-(verbatim too) but still one run takes only 18 seconds in stock \LUATEX\ and less
-that 15 seconds with \LUAJITTEX. Keep these numbers in mind if a non|-|\CONTEXT\
-users barks against the performance tree that his few page mediocre document
-takes 10 seconds to compile: the content, styling, quality of macros and whatever
-one can come up with all plays a role. Personally I find any rate between 10 and
-30 pages per second acceptable, and if I get the lower rate then I normally know
-pretty well that the job is demanding in all kind of aspects.
-
-Over time the \CONTEXT||\LUATEX\ combination, in spite of the fact that more
+overhead in loading the files and in constructing the \XML\ tree as well as in
+inserting the graphics, but in such cases one expects a few seconds longer
+runtime. The \METAFUN\ manual has some 450 pages with over 2500 runtime|-|generated
+\METAPOST\ graphics. It has color, uses quite some fonts, has lots of font
+switches (verbatim, too), but, still, one run takes only 18 seconds in stock
+\LUATEX\ and less than 15 seconds with \LUAJITTEX. Keep these numbers in
+mind if a non|-|\CONTEXT\ user barks against the performance tree that his few
+page mediocre document takes 10 seconds to compile: the content, styling, quality
+of macros and whatever one can come up with all play a role. Personally I find
+any rate between 10 and 30 pages per second acceptable, and, if I get the lower
+rate, then I normally know pretty well that the job is demanding in all kinds of
+aspects.
+
+Over time, the \CONTEXT||\LUATEX\ combination, in spite of the fact that more
functionality has been added, has not become slower. In fact, some subsystems
-have been sped up. For instance font handling is very sensitive for adding
+have been sped up. For instance, font handling is very sensitive to adding
functionality. However, each version so far performed a bit better. Whenever some
neat new trickery was added, at the same time improvements were made thanks to
-more insight in the matter. In practice we're not talking of changes in speed by
+more insight in the matter. In practice, we're not talking of changes in speed by
large factors but more by small percentages. I'm pretty sure that most \CONTEXT\
-users never noticed. Recently a 15\endash30\% speed up (in font handling) was
-realized (for more complex fonts) but only when you use such complex fonts and
-pages full of text you will see a positive impact on the whole run.
+users never noticed. Recently, a 15\endash30\% speed up (in font handling) was
+realized (for more complex fonts), but only when you use such complex fonts and
+pages full of text will you see a positive impact on the whole run.
There is one important factor I didn't mention yet: the efficiency of the
console. You can best check that by making a format (\typ {context --make en}).
When that is done by piping the messages to a file, it takes 3.2 seconds on my
laptop and about the same when done from the editor (\SCITE), maybe because the
\LUATEX\ run and the log pane run on a different thread. When I use the standard
-console it takes 3.8 seconds in Windows 10 Creative update (in older versions it
+console, it takes 3.8 seconds in Windows 10 Creators Update (in older versions it
took 4.3 and slightly less when using a console wrapper). The powershell takes
-3.2 seconds which is the same as piping to a file. Interesting is that in Bash on
-Windows it takes 2.8 seconds and 2.6 seconds when piped to a file. Normal runs
-are somewhat slower, but it looks like the 64 bit Linux binary is somewhat faster
-than the 64 bit mingw version. \footnote {Long ago we found that \LUATEX\ is very
-sensitive to for instance the \CPU\ cache so maybe there are some differences due
-to optimization flags and|/|or the fact that bash runs in one thread and all file
-\IO\ in the main windows instance. Who knows.} Anyway, it demonstrates that when
-someone yells a number you need to ask what the conditions where.
-
-At a \CONTEXT\ meeting there has been a presentation about possible speed|-|up of
-a run for instance by using a separate syntax checker to prevent a useless run.
-However, the use case concerned a document that took a minute on the machine
-used, while the same document took a few seconds on mine. At the same meeting we
-also did a comparison of speed for a \LATEX\ run using \PDFTEX\ and the same
-document migrated to \CONTEXT\ \MKIV\ using \LUATEX\ (Harald K\"onigs \XML\
-torture and compatibility test). Contrary to what one might expect, the
+3.2 seconds, which is the same as piping to a file. Interestingly, in Bash
+on Windows, it takes 2.8 seconds and 2.6 seconds when piped to a file. Normal
+runs are somewhat slower, but it looks like the 64 bit Linux binary is somewhat
+faster than the 64 bit mingw version. \footnote {Long ago, we found that \LUATEX\
+is very sensitive to for instance the \CPU\ cache, so maybe there are some
+differences due to optimization flags and|/|or the fact that bash runs in one
+thread, and all file \IO\ takes place in the main Windows instance. Who knows.}
+Anyway, it demonstrates that when someone yells a number you need to ask what the
+conditions were.
+
+At a \CONTEXT\ meeting, there has been a presentation about possible speed|-|up
+of a run by using, for instance, a separate syntax checker to prevent a
+useless run. However, the use case concerned a document that took a minute on the
+machine used, while the same document took a few seconds on mine. At the same
+meeting, we also did a comparison of speed for a \LATEX\ run using \PDFTEX\ and
+the same document migrated to \CONTEXT\ \MKIV\ using \LUATEX\ (Harald K\"onigs
+\XML\ torture and compatibility test). Contrary to what one might expect, the
\CONTEXT\ run was significantly faster; the resulting document was a few
gigabytes in size.
@@ -126,76 +128,77 @@ gigabytes in size.
I will discuss a few potential bottlenecks next. A complex integrated system like
\CONTEXT\ has lots of components and some can be quite demanding. However, when
something is not used, it has no (or hardly any) impact on performance. Even when
-we spend a lot of time in \LUA\ that is not the reason for a slow|-|down.
+we spend a lot of time in \LUA, that is not the reason for a slow|-|down.
Sometimes using \LUA\ results in a speedup, sometimes it doesn't matter. Complex
-mechanisms like natural tables for instance will not suddenly become less
+mechanisms like natural tables, for instance, will not suddenly become less
complex. So, let's focus on the \quotation {aspects} that come up in those
complaints: fonts and \LUA. Because I only use \CONTEXT\ and occasionally test
with the plain \TEX\ version that we provide, I will not explore the potential
-impact of using truckloads of packages, styles and such, which I'm sure of plays
-a role, but one neglected in the discussion.
+impact of using truckloads of packages, styles, and such, which I'm sure plays
+a role, but one neglected in my discussion.
\startsubsubject[title=Fonts]
-According to the principles of \LUATEX\ we process (\OPENTYPE) fonts using \LUA.
-That way we have complete control over any aspect of font handling, and can, as
+According to the principles of \LUATEX, we process (\OPENTYPE) fonts using \LUA.
+That way, we have complete control over any aspect of font handling, and can, as
to be expected in \TEX\ systems, provide users what they need, now and in the
-future. In fact, if we didn't had that freedom in \CONTEXT\ I'd probably already
+future. In fact, if we hadn't had that freedom in \CONTEXT, I'd probably have already
quit using \TEX\ a decade ago and found myself some other (programming) niche.
-After a font is loaded, part of the data gets passed to the \TEX\ engine so that
-it can do its work. For instance, in order to be able to typeset a paragraph,
-\TEX\ needs to know the dimensions of glyphs. Once a font has been loaded
-(that is, the binary blob) the next time it's fetched from a cache. Initial
-loading (and preparation) takes some time, depending on the complexity or size of
-the font. Loading from cache is close to instantaneous. After loading the
-dimensions are passed to \TEX\ but all data remains accessible for any desired
-usage. The \OPENTYPE\ feature processor for instance uses that data and \CONTEXT\
-for sure needs that data (fast accessible) for different purposes too.
-
-When a font is used in so called base mode, we let \TEX\ do the ligaturing and
+After a font has been loaded, part of the data gets passed to the \TEX\ engine,
+so that it can do its work. For instance, in order to be able to typeset a
+paragraph, \TEX\ needs to know the dimensions of glyphs. Once a font has been
+loaded (that is, the binary blob), it's fetched from a cache the next time.
+Initial loading (and preparation) takes some time, depending on the complexity
+and the size of the font. Loading from cache is close to instantaneous. After
+loading, the dimensions are passed to \TEX\ but all data remains accessible for
+any desired usage. The \OPENTYPE\ feature processor, for instance, uses that data
+and \CONTEXT\ certainly needs that data (quickly accessible) for different
+purposes, too.
+
+When a font is used in so|-|called base mode, we let \TEX\ do the ligaturing and
kerning. This is possible with simple fonts and features. If you have a critical
-workflow you might enable base mode, which can be done per font instance.
-Processing in node mode takes some time but how much depends on the font and
-script. Normally there is no difference between \CONTEXT\ and generic usage. In
-\CONTEXT\ we also have dynamic features, and the impact on performance depends on
-usage. In addition to base and node we also have plug mode but that is only used
+workflow, you might enable base mode, which can be done per font instance.
+Processing in node mode takes some time, but how much depends on the font and
+script. Normally, there is no difference between \CONTEXT\ and generic usage. In
+\CONTEXT, we also have dynamic features, and the impact on performance depends on
+usage. In addition to base and node, we also have plug mode, but that is only used
for testing and therefore not advertised.
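As an illustration, enabling base mode for one font instance boils down to
something like this (just a sketch; the feature set and font names are
illustrative):

\starttyping
% switch this instance to base mode, so that \TEX\ itself does the
% ligaturing and kerning (names are illustrative)
\definefontfeature[basedefault][default][mode=base]
\definefont[MyBaseFont][Serif*basedefault at 10pt]
\stoptyping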
Every \type {\hbox} and every paragraph goes through the font handler. Because
we support mixed modes, some analysis takes place, and because we do more in
-\CONTEXT, the generic analyzer is more light weight, which again can mean that a
+\CONTEXT, the generic analyzer is more lightweight, which again can mean that a
generic run is not slower than a similar \CONTEXT\ one.
Interesting is that added functionality for variable and|/|or color fonts had no
-impact on performance. Runtime added user features can have some impact but when
-defined well it can be neglected. I bet that when you add additional node list
-handling yourself, its impact on performance is larger. But in the end what
-counts is that the job gets done and the more you demand the higher the price you
-pay.
+impact on performance. Runtime|-|added user features can have some impact, but,
+when defined well, it is negligible. I bet that when you add additional node
+list handling yourself, its impact on performance will be larger. But, in the
+end, what counts is that the job gets done, and the more you demand, the higher
+the price you pay.
\stopsubsubject
\startsubsubject[title=\LUA]
The second possible bottleneck when using \LUATEX\ can be in using \LUA\ code.
-However, using that as argument for slow runs is laughable. For instance
-\CONTEXT\ \MKIV\ can easily spend half its time in \LUA\ and that is not making
+However, using that is laughable as an argument for slow runs. For instance,
+\CONTEXT\ \MKIV\ can easily spend half its time in \LUA, and that is not making
it any slower than \MKII\ using \PDFTEX\ doing equally complex things. For
-instance the embedded \METAPOST\ library makes \MKIV\ way faster than \MKII, and
+instance, the embedded \METAPOST\ library makes \MKIV\ way faster than \MKII, and
the built|-|in \XML\ processing capabilities in \MKIV\ can easily beat \MKII\
\XML\ handling, apart from the fact that it can do more, like filtering by path
and expression. In fact, files that take, say, half a minute in \MKIV, could as
well have taken 15 minutes or more in \MKII\ (and imagine multiple runs then).
-So, for \CONTEXT\ using \LUA\ to achieve its objectives is mandate. The
+So, for \CONTEXT, using \LUA\ to achieve its objectives is mandatory. The
combination of \TEX, \METAPOST\ and \LUA\ is pretty powerful! Each of these
components is really fast. If \TEX\ is your bottleneck, review your macros! When
\LUA\ seems to be the culprit, go over your code and make it better. Much of the
-\LUA\ code I see flying around doesn't look that efficient, which is okay because
+\LUA\ code I see flying around doesn't look that efficient, which is okay, because
the interpreter is really fast, but don't blame \LUA\ beforehand, blame your
coding (style) first. When \METAPOST\ is the bottleneck, well, sometimes not much
-can be done about it, but when you know that language well enough you can often
+can be done about it, but when you know that language well enough, you can often
make it perform better.
For the record: every additional mechanism that kicks in, like character spacing
@@ -210,26 +213,26 @@ gets pretty well obscured by other things happening, just that you know.
\startsection[title=Some timing]
-Next I will show some timings related to fonts. For this I use stock \LUATEX\
-(second column) as well as \LUAJITTEX\ (last column) which of course performs
-much better. The timings are given in 3 decimals but often (within a set of runs)
-and as the system load is normally consistent in a set of test runs the last two
-decimals only matter in relative comparison. So, for comparing runs over time
-round to the first decimal. Let's start with loading a bodyfont. This happens
-once per document and normally one has only one bodyfont active. Loading involves
-definitions as well as setting up math so a couple of fonts are actually loaded,
-even if they're not used later on. A setup normally involves a serif, sans, mono,
-and math setup (in \CONTEXT). \footnote {The timing for Latin Modern is so low
+Next, I will show some timings related to fonts. For this, I use stock \LUATEX\
+(second column) as well as \LUAJITTEX\ (last column), which, of course, performs
+much better. The timings are rounded to three decimal places, but, as the system
+load is usually only consistent in a set of test runs, the last two decimals only
+matter in relative comparison. So, for comparing runs over time, round to the
+first decimal. Let's start with loading a bodyfont. This happens once per
+document, and one usually only has one bodyfont active. Loading involves
+definitions as well as setting up math, so a couple of fonts are actually loaded
+even if they're not used later on. A setup normally involves a serif, sans, mono
+and math setup (in \CONTEXT). \footnote {The timing for Latin Modern is so low,
because that font is loaded already.}
\environment onandon-speed-000
\ShowSample{onandon-speed-000} % bodyfont
-There is a bit difference between the font sets but a safe average is 150 milli
-seconds and this is rather constant over runs.
+There is a bit of a difference between the font sets, but a safe average is 150
+milliseconds, and this is rather constant over runs.
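For the record, such a bodyfont setup amounts to a single call (a minimal
sketch; the typeface name is just an example):

\starttyping
% one call sets up serif, sans, mono and math; this is what the
% loading timings above measure
\setupbodyfont[pagella,10pt]
\stoptyping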
-An actual font switch can result in loading a font but this is a one time overhead.
+An actual font switch can result in loading a font, but this is a one|-|time overhead.
Loading four variants (regular, bold, italic and bold italic) roughly takes the
following time:
@@ -239,34 +242,32 @@ Using them again later on takes no time:
\ShowSample{onandon-speed-002} % four variants
-Before we start timing the font handler, first a few baseline benchmarks are
-shown. When no font is applied and nothing else is done with the node list we
-get:
+Before we start timing the font handler, a few baseline benchmarks are shown.
+When no font is applied and nothing else is done with the node list, we get:
\ShowSample{onandon-speed-009}
-A simple monospaced, no features applied, run takes a bit more:
+A simple monospaced run, with no features applied, takes a bit more:
\ShowSample{onandon-speed-010}
-Now we show a one font typesetting run. As the two benchmarks before, we just
-typeset a text in a \type {\hbox}, so no par builder interference happens. We use
-the \type {sapolsky} sample text and typeset it 100 times 4 (either of not with
-font switches).
+Now, we show a one|-|font typesetting run. As with the two benchmarks before, we
+just typeset a text in a \type {\hbox}, so no par builder interference happens.
+We use the \type {sapolsky} sample text and typeset it 100 times (times four
+when font switches are involved), here first without font switches.
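The kind of loop being timed can be sketched as follows (the box assignment
keeps the par builder out of the way, as mentioned above):

\starttyping
% typeset the sample 100 times in a box, so that only the font
% handler (and hyphenation) is measured
\dorecurse{100}{\setbox0\hbox{\input sapolsky }}
\stoptyping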
\ShowSample{onandon-speed-003}
-Much more runtime is needed when we typeset with four font switches. The garamond
-is most demanding. Actually we're not doing 4 fonts there because it has no bold,
-so the numbers are a bit lower than expected for this example. One reason for it
-being demanding is that it has lots of (contextual) lookups. The only comment I
-can make about that is that it also depends on the strategies of the font
-designer. Combining lookups saves space and time so complexity of a font is not
-always a good predictor for performance hits.
+Much more runtime is needed when we typeset with four font switches. Ebgaramond
+is the most demanding. Actually, we're not doing 4 fonts there because ebgaramond
+has no bold, so the numbers are a bit lower than expected for this example. One
+reason for it being demanding is that it has lots of (contextual) lookups.
+Combining lookups saves space and time, so complexity of a font is not always a
+good predictor for performance hits.
% \ShowSample{onandon-speed-004}
-If we typeset paragraphs we get this:
+If we typeset paragraphs, we get the following:
\ShowSample{onandon-speed-005}
@@ -274,11 +275,11 @@ We're talking of some 275 pages here.
\ShowSample{onandon-speed-006}
-There is of course overhead in handling paragraphs and pages:
+There is, of course, overhead in handling paragraphs and pages:
\ShowSample{onandon-speed-011}
-Before I discuss these numbers in more details two more benchmarks are
+Before I discuss these numbers in more detail, two more benchmarks are
shown. The next table concerns a paragraph with only a few (bold) words.
\ShowSample{onandon-speed-007}
@@ -290,11 +291,11 @@ typeset using \type{\type}.
When a node list (hbox or paragraph) is processed, each glyph is looked at. One
important property of \LUATEX\ (compared to \PDFTEX) is that it hyphenates the
-whole text, not only the most feasible spots. For the \type {sapolsky} snippet
-this results in 200 potential breakpoints, registered in an equal number of
-discretionary nodes. The snippet has 688 characters grouped into 125 words and
-because it's an English quote we're not hampered with composed characters or
-complex script handling. And, when we mention 100 runs then we actually mean
+whole text, not only the most feasible spots. For the \type {sapolsky} snippet,
+this results in 200 potential breakpoints registered in an equal number of
+discretionary nodes. The snippet has 688 characters grouped into 125 words and,
+because it's an English quote, we're not hampered with composed characters or
+complex script handling. And, when we mention 100 runs, we actually mean
400 when font switching and bodyfonts are compared.
\startnarrower
@@ -302,7 +303,7 @@ complex script handling. And, when we mention 100 runs then we actually mean
\input sapolsky \wordright{Robert M. Sapolsky}
\stopnarrower
-In order to get substitutions and positioning right we need not only to consult
+In order to get substitutions and positioning right, we need not only to consult
streams of glyphs but also combinations with preceding pre or replace, or
trailing post and replace texts. When a font has a bit more complex substitutions,
as ebgaramond has, multiple (sometimes hundreds of) passes over the list are made.
@@ -312,15 +313,15 @@ Another factor, one you could easily deduce from the benchmarks, is intermediate
font switches. Even a few such switches (in the last benchmarks) already result
in a runtime penalty. The four switch benchmarks show an impressive increase of
runtime, but it's good to know that such a situation seldom happens. It's also
-important not to confuse for instance a verbatim snippet with a bold one. The
+important not to confuse, for instance, a verbatim snippet with a bold one. The
bold one is indeed leading to a pass over the list, but verbatim is normally
-skipped because it uses a font that needs no processing. That verbatim or bold
+skipped, because it uses a font that needs no processing. That verbatim or bold
have the same penalty is mainly due to the fact that verbatim itself is costly:
the text is picked up using a different catcode regime and travels through \TEX\
and \LUA\ before it finally gets typeset. This relates to special treatments of
-spacing and syntax highlighting and such.
+spacing, syntax highlighting, and such.
-Also keep in mind that the page examples are quite unreal. We use a layout with
+Also, keep in mind that the page examples are quite unreal. We use a layout with
no margins, just text from edge to edge.
\placefigure
@@ -343,19 +344,19 @@ no margins, just text from edge to edge.
{\SampleTitle{onandon-speed-011}}
{\externalfigure[onandon-speed-011][frame=on,orientation=90,width=.45\textheight]}
-So what is a realistic example? That is hard to say. Unfortunately no one ever
-asked us to typeset novels. They are rather brain dead products for a machinery
-so they process fast. On the mentioned laptop 350 word pages in Dejavu fonts can
-be processed at a rate of 75 pages per second with \LUATEX\ and over 100 pages
-per second with \LUAJITTEX . On a more modern laptop or professional server
-performance is of course better. And for automated flows batch mode is your
-friend. The rate is not much worse for a document in a language with a bit more
-complex character handling, take accents or ligatures. Of course \PDFTEX\ is
-faster on such a dumb document but kick in some more functionality and the
-advantage quickly disappears. So, if someone complains that \LUATEX\ needs 10 or
-more seconds for a simple few page document \unknown\ you can bet that when the
-fonts are seen as reason, that the setup is pretty bad. Personally I'd not waste
-time on such a complaint.
+So, what is a realistic example? That is hard to say. Unfortunately, no one has
+ever asked us to typeset novels. They are rather brain|-|dead products for the
+machinery, so they process fast.
+Dejavu fonts can be processed at a rate of 75 pages per second with \LUATEX\ and
+over 100 pages per second with \LUAJITTEX . On a more modern laptop or a
+professional server, the performance is of course better. And, for automated
+flows, batch mode is your friend. The rate is not much worse for a document in a
+language with a bit more complex character handling, take accents or ligatures.
+Of course, \PDFTEX\ is faster on such a dumb document, but kick in some more
+functionality, and the advantage quickly disappears. So, if someone complains
+that \LUATEX\ needs 10 or more seconds for a simple few|-|page document \unknown\
+you can bet that, when the fonts are seen as the reason, the setup is pretty bad.
+Personally, I would not waste time on such a complaint.
\stopsection
@@ -366,74 +367,75 @@ about the slowness of \LUATEX:
\startsubsubject[title={What engines do you compare?}]
-If you come from \PDFTEX\ you come from an 8~bit world: input and font handling
-are based on bytes and hyphenation is integrated into the par builder. If you use
-\UTF-8\ in \PDFTEX, the input is decoded by \TEX\ macros which carries a speed
+If you come from \PDFTEX, you come from an 8-bit world: input and font handling
+are based on bytes, and hyphenation is integrated into the par builder. If you use
+\UTF-8\ in \PDFTEX, the input is decoded by \TEX\ macros, which carries a speed
penalty. Because in the wide engines macro names can also be \UTF\ sequences,
construction of macro names is less efficient too.
-When you try to use wide fonts, again there is a penalty. Now, if you use \XETEX\
-or \LUATEX\ your input is \UTF-8 which becomes something 32 bit internally. Fonts
-are wide so more resources are needed, apart from these fonts being larger and in
-need of more processing due to feature handling. Where \XETEX\ uses a library,
-\LUATEX\ uses its own handler. Does that have a consequence for performance? Yes
-and no. First of all it depends on how much time is spent on fonts at all, but
-even then the difference is not that large. Sometimes \XETEX\ wins, sometimes
-\LUATEX. One thing is clear: \LUATEX\ is more flexible as we can roll out our own
-solutions and therefore do more advanced font magic. For \CONTEXT\ it doesn't
-matter as we use \LUATEX\ exclusively and rely on the flexible font handler, also
-for future extensions. If really needed you can kick in a library based handler
-but it's (currently) not distributed as we loose other functionality which in
-turn would result in complaints about that fact (apart from conflicting with the
-strive for independence).
-
-There is no doubt that \PDFTEX\ is faster but for \CONTEXT\ it's an obsolete
-engine. The hard coded solutions engine \XETEX\ is also not feasible for
-\CONTEXT\ either. So, in practice \CONTEXT\ users have no choice: \LUATEX\ is
-used, but users of other macro packages can use the alternatives if they are not
-satisfied with performance. The fact that \CONTEXT\ users don't complain about
-speed is a clear signal that this is no issue. And, if you want more speed you
-can use \LUAJITTEX. \footnote {In plug mode we can actually test a library and
-experiments have shown that performance on the average is much worse but it can
+When you try to use wide fonts, there is, again, a penalty. Now, if you use
+\XETEX\ or \LUATEX, your input is \UTF-8, which becomes something 32-bit
+internally. Fonts are wide, so more resources are needed, apart from these fonts
+being larger and in need of more processing due to feature handling. Where
+\XETEX\ uses a library, \LUATEX\ uses its own handler. Does that have a
+consequence for performance? Yes and no. First of all, it depends on how much
+time is spent on fonts at all, but even then, the difference is not that large.
+Sometimes \XETEX\ wins, sometimes it's \LUATEX. One thing is clear: \LUATEX\ is
+more flexible as we can roll out our own solutions and therefore do more advanced
+font magic. For \CONTEXT, it doesn't matter as we use \LUATEX\ exclusively, and
+we rely on the flexible font handler, also for future extensions. If really
+needed, you can kick in a library|-|based handler, but it's (currently) not
+distributed, as we lose other functionality, which would, in turn, result in
+complaints about that fact (apart from conflicting with the drive for
+independence).
+
+There is no doubt that \PDFTEX\ is faster, but, for \CONTEXT, it's an obsolete
+engine. \XETEX, with its hard|-|coded solutions, is not feasible for \CONTEXT\
+either. So, in practice, \CONTEXT\ users have no choice: \LUATEX\ is used, but
+users of other macro packages can use the alternatives if they are not satisfied
+with performance. The fact that \CONTEXT\ users don't complain about speed is a
+clear signal that this is a non|-|issue. And, if you want more speed, you can always
+use \LUAJITTEX. \footnote {In plug mode, we can actually test a library and
+experiments have shown that performance is, on average, much worse, but it can
be a bit better for complex scripts, although a gain goes unnoticed in normal
documents. So, one can decide to use a library but at the cost of much other
-functionality that \CONTEXT\ offers, so we don't support it.} In the last section
+functionality that \CONTEXT\ offers, so we don't support it.} In the last section,
the different engines will be compared in more detail.
-Just that you know, when we do the four switches example in plain \TEX\ on my
-laptop I get a rate of 40 pages per second, and for one font 180 pages per
-second. There is of course a bit more going on in \CONTEXT\ in page building and
-so, but the difference between plain and \CONTEXT\ is not that large.
+Just so you know, when we do the four|-|switches example in plain \TEX\ on my
+laptop, I get a rate of 40 pages per second, and, for one font, 180 pages per
+second. There is, of course, a bit more going on in \CONTEXT\ in page building
+and so on, but the difference between plain and \CONTEXT\ is not that large.
\stopsubsubject
\startsubsubject[title={What macro package is used?}]
-If the answer is that when plain \TEX\ is used, a follow up question is: what
-variant? The \CONTEXT\ distribution ships with \type {luatex-plain} and that is
-our benchmark. If there really is a bottleneck it is worth exploring. But keep in
-mind that in order to be plain, not that much can be done. The \LUATEX\ part is
-just an example of an implementation. We already discussed \CONTEXT, and for
-\LATEX\ I don't want to speculate where performance hits might come from. When
-we're talking fonts, \CONTEXT\ can actually a bit slower than the generic (or
-\LATEX) variant because we can kick in more functionality. Also, when you compare
-macro packages, keep in mind that when node list processing code is added in that
-package the impact depends on interaction with other functionality and depends on
-the efficiency of the code. You can't compare mechanisms or draw general
-conclusions when you don't know what else is done!
+When plain \TEX\ is used, a follow|-|up question is: what variant? The \CONTEXT\
+distribution ships with \type {luatex-plain}, and that is our benchmark. If there
+really is a bottleneck, it is worth exploring, but keep in mind that, in order to
+be plain, not that much can be done. The \LUATEX\ part is just an example of an
+implementation. We already discussed \CONTEXT, and for \LATEX, I don't want to
+speculate where performance hits might come from. When we're talking fonts,
+\CONTEXT\ can actually be a bit slower than the generic (or \LATEX) variant, because
+we can kick in more functionality. Also, when you compare macro packages, keep in
+mind that, when node list processing code is added in that package, the impact
+depends on the interaction with other functionality and on the efficiency of the
+code. You can't compare mechanisms or draw general conclusions when you don't
+know what else is done!
\stopsubsubject
\startsubsubject[title={What do you load?}]
-Most \CONTEXT\ modules are small and load fast. Of course there can be exceptions
-when we rely on third party code; for instance loading tikz takes a a bit of
-time. It makes no sense to look for ways to speed that system up because it is
-maintained elsewhere. There can probably be gained a bit but again, no user
-complained so far.
+Most \CONTEXT\ modules are small and load fast. Of course, there can be exceptions
+when we rely on third|-|party code; for instance, loading tikz takes a bit of
+time. It makes no sense to look for ways to speed that system up, because it is
+maintained elsewhere. A bit could probably be gained, but, again, no user
+has complained so far.
-If \CONTEXT\ is not used, one probably also uses a large \TEX\ installations.
-File lookup in \CONTEXT\ is done differently and can can be faster. Even loading
+If \CONTEXT\ is not used, one probably also uses a large \TEX\ installation.
+File lookup in \CONTEXT\ is done differently, and can be faster. Even loading
can be more efficient in \CONTEXT, but it's hard to generalize that conclusion.
If one complains about loading fonts being an issue, just try to measure how much
time is spent on loading other code.
@@ -443,36 +445,36 @@ time is spent on loading other code.
\startsubsubject[title={Did you patch macros?}]
Not everyone is a \TEX pert. So, coming up with macros that are expanded many
-times and|/|or have inefficient user interfacing can have some impact. If someone
-complains about one subsystem being slow, then honestly demands to complain about
+times and|/|or have inefficient user interfacing can have some impact. If someone
+complains about one subsystem being slow, then honesty demands complaining about
other subsystems as well. You get what you ask for.
\stopsubsubject
\startsubsubject[title={How efficient is the code that you use?}]
-Writing super efficient code only makes sense when it's used frequently. In
-\CONTEXT\ most code is reasonable efficient. It can be that in one document fonts
-are responsible for most runtime, but in another document table construction can
+Writing super|-|efficient code only makes sense when it's used frequently. In
+\CONTEXT, most code is reasonably efficient. It can be that in one document, fonts
+are responsible for most runtime, but in another document, table construction can
be more demanding while yet another document puts some stress on interactive
-features. When hz or protrusion is enabled then you run substantially slower
-anyway so when you are willing to sacrifice 10\% or more runtime don't complain
-about other components. The same is true for enabling \SYNCTEX: if you are
-willing to add more than 10\% runtime for that, don't wither about the same
-amount for font handling. \footnote {In \CONTEXT\ we use a \SYNCTEX\ alternative
-that is somewhat faster but it remains a fact that enabling more and more
-functionality will make the penalty of for instance font processing relatively
-small.}
+features. When hz or protrusion is enabled, then you run substantially slower
+anyway, so when you are willing to sacrifice 10\% or more of runtime, don't
+complain about other components. The same is true for enabling \SYNCTEX: if you
+are willing to add more than 10\% of runtime for that, don't whine about the
+same amount for font handling. \footnote {In \CONTEXT, we use a \SYNCTEX\
+alternative that is somewhat faster, but it remains a fact that enabling more and
+more functionality will make the penalty of, for instance, font processing
+relatively small.}
\stopsubsubject
\startsubsubject[title={How efficient is the styling that you use?}]
-Probably the most easily overseen optimization is in switching fonts and color.
-Although in \CONTEXT\ font switching is fast, I have no clue about it in other
-macro packages. But in a style you can decide to use inefficient (massive) font
+Probably the most easily overlooked optimization is in switching fonts and colors.
+Although font switching in \CONTEXT\ is fast, I have no clue about it in other
+macro packages. But in a style, you can decide to use inefficient (massive) font
switches. The effects can easily be tested by commenting out bits and pieces. For
-instance sometimes you need to do a full bodyfont switch when changing a style,
+instance, sometimes you need to do a full bodyfont switch when changing a style,
like assigning \type {\small\bf} to the \type {style} key in \type {\setuphead},
but often using e.g.\ \type {\tfd} is much more efficient and works quite as
well. Just try it.
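The difference is easy to try; both lines below style the same head, but the
second avoids the full bodyfont switch (a sketch, using the example from
above):

\starttyping
\setuphead[section][style={\small\bf}] % full bodyfont switch
\setuphead[section][style=\tfd]        % one font switch, often enough
\stoptyping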
@@ -481,24 +483,24 @@ well. Just try it.
\startsubsubject[title={Are fonts really the bottleneck?}]
-We already mentioned that one can look in the wrong direction. Maybe once someone
+We already mentioned that one can look in the wrong direction. Maybe, once someone
is convinced that fonts are the culprit, it gets hard to look at the real issue.
-If a similar job in different macro packages has a significant different runtime
+If a similar job in different macro packages has a significantly different runtime,
one can wonder what happens indeed.
It is good to keep in mind that the amount of text is often not as large as you
-think. It's easy to do a test with hundreds of paragraphs of text but in practice
+think. It's easy to do a test with hundreds of paragraphs of text, but, in practice,
we have whitespace, section titles, half empty pages, floats, itemize and similar
-constructs, etc. Often we don't mix many fonts in the running text either. So, in
-the end a real document is the best test.
+constructs, etc. Often, we don't mix many fonts in the running text either. So,
+in the end, a real document is your best test.
\stopsubsubject
\startsubsubject[title={If you use \LUA, is that code any good?}]
You can gain from the faster virtual machine of \LUAJITTEX. Don't expect wonders
-from the jitting as that only pays of for long runs with the same code used over
-and over again. If the gain is high you can even wonder how well written your
+from the jitting, as that only pays off in long runs with the same code used over
+and over again. If the gain is high, you can even wonder how well-written your
\LUA\ code is anyway.
\stopsubsubject
@@ -506,18 +508,18 @@ and over again. If the gain is high you can even wonder how well written your
\startsubsubject[title={What if they don't believe you?}]
So, say that someone finds \LUATEX\ slow, what can be done about it? Just advise
-him or her to stick to tool used previously. Then, if arguments come that one
+them to stick to their previously|-|used tool. Then, if arguments come that one
also wants to use \UTF-8, \OPENTYPE\ fonts, a bit of \METAPOST, and is looking
forward to using \LUA\ at runtime, the only answer is: take it or leave it. You pay
-a price for progress, but if you do your job well, the price is not that large.
-Tell them to spend time on learning and maybe adapting and bark against their own
+a price for progress, but, if you do your job well, the price is not that high.
+Tell them to spend time on learning and maybe adapting and to bark against their own
tree before barking against those who took that step a decade ago. Most \CONTEXT\
users took that step and someone still using \LUATEX\ after a decade can't be
that stupid. It's always best to first wonder what one actually asks from \LUATEX,
and if the benefit of having \LUA\ on board has an advantage. If not, one can
just use another engine.
-Also think of this. When a job is slow, for me it's no problem to identify where
+Also think of this: when a job is slow, for me it's no problem to identify where
the problem is. The question then is: can something be done about it? Well, I
happily keep the answer for myself. After all, some people always need room to
complain, maybe if only to hide their ignorance or incompetence. Who knows.
@@ -529,13 +531,13 @@ complain, maybe if only to hide their ignorance or incompetence. Who knows.
\startsection[title={Comparing engines}]
The next comparison is to be taken with a grain of salt and concerns the state of
-affairs mid 2017. First of all, you cannot really compare \MKII\ with \MKIV: the
-later has more functionality (or a more advanced implementation of
-functionality). And as mentioned you can also not really compare \PDFTEX\ and the
-wide engines. Anyway, here are some (useless) tests. First a bunch of loads. Keep
+affairs mid-2017. First of all, you cannot really compare \MKII\ with \MKIV: the
+latter has more functionality (or a more advanced implementation of
+functionality). And, as mentioned, you can also not really compare \PDFTEX\ and the
+wide engines. Anyway, here are some (useless) tests. First, a bunch of loads. Keep
in mind that different engines also deal differently with reading files. For
-instance \MKIV\ uses \LUATEX\ callbacks to normalize the input and has its own
-readers. There is a bit more overhead in starting up a \LUATEX\ run and some
+instance, \MKIV\ uses \LUATEX\ callbacks to normalize the input and has its own
+readers. There is a bit more overhead in starting up a \LUATEX\ run, and some
functionality is enabled that is not present in \MKII. The format is also larger,
if only because we preload a lot of useful font, character and script related
data.
@@ -549,7 +551,7 @@ data.
\stoptext
\stoptyping
-When looking at the numbers one should realize that the times include startup and
+When looking at the numbers, one should realize that the times include startup and
job management by the runner scripts. We also run in batchmode to prevent logging
from influencing runtime. The average is calculated from 5 runs.
@@ -593,8 +595,7 @@ The second example does a few switches in a paragraph:
\HL
\stoptabulate
-The third examples does a few more, resulting in multiple subranges
-per style:
+The third example does more, resulting in multiple subranges per style:
\starttyping
\starttext
@@ -623,7 +624,7 @@ per style:
The last example adds some color. Enabling more functionality can have an impact
on performance. In fact, as \MKIV\ uses a lot of \LUA\ and is also more advanced
-that \MKII, one can expect a performance hit but in practice the opposite
+than \MKII, one can expect a performance hit, but, in practice, the opposite
happens, which can also be due to some fundamental differences deep down at the
macro level.
@@ -654,20 +655,20 @@ macro level.
\HL
\stoptabulate
-In these measurements the accuracy is a few decimals but a pattern is visible. As
-expected \PDFTEX\ wins on simple documents but starts loosing when things get
-more complex. For these tests I used 64 bit binaries. A 32 bit \XETEX\ with
-\MKII\ performs the same as \LUAJITTEX\ with \MKIV, but a 64 bit \XETEX\ is
-actually quite a bit slower. In that case the mingw cross compiled \LUATEX\
-version does pretty well. A 64 bit \PDFTEX\ is also slower (it looks) that a 32
-bit version. So in the end, there are more factors that play a role. Choosing
-between \LUATEX\ and \LUAJITTEX\ depends on how well the memory limited
+In these measurements, the accuracy is a few decimals, but a pattern is visible.
+As expected, \PDFTEX\ wins on simple documents but starts losing when things get
+more complex. For these tests, I used 64-bit binaries. A 32-bit \XETEX\ with
+\MKII\ performs the same as \LUAJITTEX\ with \MKIV, but a 64-bit \XETEX\ is
+actually quite a bit slower. In that case, the mingw cross|-|compiled \LUATEX\
+version does pretty well. A 64-bit \PDFTEX\ also seems slower than a
+32-bit version. So, in the end, there are more factors that play a role. Choosing
+between \LUATEX\ and \LUAJITTEX\ depends on how well the memory|-|limited
\LUAJITTEX\ variant can handle your documents and fonts.
Because in most of our recent styles we use \OPENTYPE\ fonts and (structural)
-features as well as recent \METAFUN\ extensions only present in \MKIV\ we cannot
+features as well as recent \METAFUN\ extensions only present in \MKIV, we cannot
compare engines using such documents. The mentioned performance of \LUATEX\ (or
-\LUAJITTEX) and \MKIV\ on the \METAFUN\ manual illustrate that in most cases this
+\LUAJITTEX) and \MKIV\ on the \METAFUN\ manual illustrates that, in most cases, this
combination is a clear winner.
\starttyping
@@ -703,8 +704,8 @@ That leaves the zero run:
\stoptext
\stoptyping
-This gives the following numbers. In longer runs the difference in overhead is
-neglectable.
+This gives the following numbers. In longer runs, the difference in overhead is
+negligible.
% sample 6, number of runs: 5
@@ -719,31 +720,31 @@ neglectable.
\HL
\stoptabulate
-It will be clear that when we use different fonts the numbers will also be
-different. And if you use a lot of runtime \METAPOST\ graphics (for instance for
-backgrounds), the \MKIV\ runs end up at the top. And when we process \XML\ it
+It will be clear that when we use different fonts, the numbers will also be
+different. And, if you use a lot of runtime \METAPOST\ graphics (for instance for
+backgrounds), the \MKIV\ runs end up at the top. And, when we process \XML, it
will be clear that going back to \MKII\ is no longer a realistic option. It must
-be noted that I occasionally manage to improve performance but we've now reached
+be noted that I occasionally manage to improve performance, but we've now reached
a state where there is not that much to gain. Some functionality is hard to
-compare. For instance in \CONTEXT\ we don't use much of the \PDF\ backend
-features because we implement them all in \LUA. In fact, even in \MKII\ already a
-done in \TEX, so in the end the speed difference there is not large and often in
+compare. For instance, in \CONTEXT, we don't use much of the \PDF\ backend
+features because we implement them all in \LUA. In fact, even in \MKII\ much of
+that was already done in \TEX, so, in the end, the speed difference there is not large and often in
favour of \MKIV.
-For the record I mention that shipping out the about 1250 pages has some overhead
-too: about 2 seconds. Here \LUAJITTEX\ is 20\% more efficient which is an
+For the record, I mention that shipping out the roughly 1250 pages has some overhead
+too: about 2 seconds. Here, \LUAJITTEX\ is 20\% more efficient, which is an
indication of quite some \LUA\ involvement. Loading the input files has an
-overhead of about half a second. Starting up \LUATEX\ takes more time that
+overhead of about half a second. Starting up \LUATEX\ takes more time than
\PDFTEX\ and \XETEX, but that disadvantage disappears with more pages. So, in the
-end there are quite some factors that blur the measurements. In practice what
-matters is convenience: does the runtime feel reasonable and in most cases it
+end, there are quite some factors that blur the measurements. In practice, what
+matters is convenience: does the runtime feel reasonable and, in most cases, it
does.
-If I would replace my laptop with a reasonable comparable alternative that one
+If I were to replace my laptop with a reasonably comparable alternative, that one
would be some 35\% faster (single threads on processors don't gain much per year).
-I guess that this is about the same increase in performance that \CONTEXT\
-\MKIV\ got in that period. I don't expect such a gain in the coming years so
-at some point we're stuck with what we have.
+I guess that this is about the same increase in performance as \CONTEXT\
+\MKIV\ got in that period. I don't expect such a gain in the upcoming years, so,
+at some point, we're stuck with what we have.
\stopsection
@@ -754,29 +755,29 @@ go back in time to when the first wide engines showed up, \OMEGA\ was considered
to be slow, although I never tested that myself. Then, when \XETEX\ showed up,
there was not much talk about speed, just about the fact that we could use
\OPENTYPE\ fonts and native \UTF\ input. If you look at the numbers, for sure you
-can say that it was much slower than \PDFTEX. So how come that some people
+can say that it was much slower than \PDFTEX. So, how come some people
complain about \LUATEX\ being so slow, especially when we take into account that
-it's not that much slower than \XETEX, and that \LUAJITTEX\ is often faster that
-\XETEX. Also, computers have become faster. With the wide engines you get more
+it's not that much slower than \XETEX, and that \LUAJITTEX\ is often faster than
+\XETEX? Also, computers have become faster. With the wide engines, you get more
functionality and that comes at a price. This was accepted for \XETEX\ and is
also acceptable for \LUATEX. But the price is not that high if you take into
account that hardware performs better: you just need to compare \LUATEX\ (and
\XETEX) runtime with \PDFTEX\ runtime 15 years ago.
As a comparison, look at games and video. Resolution became much higher as did
-color depth. Higher frame rates were in demand. Therefore the hardware had to
-become faster and it did, and as a result the user experience kept up. No user
+color depth. Higher frame rates were in demand. Therefore, the hardware had to
+become faster, and it did, and, as a result, the user experience kept up. No user
will say that a modern game is slower than an old one, because the old one does
500 frames per second compared to some 50 for the new game on the modern
hardware. In a similar fashion, the demands for typesetting became higher:
\UNICODE, \OPENTYPE, graphics, \XML, advanced \PDF, more complex (niche)
typesetting, etc. This happened more or less in parallel with computers becoming
more powerful. So, as with games, the user experience didn't degrade with
-demands. Comparing \LUATEX\ with \PDFTEX\ is like comparing a low res, low frame
-rate, low color game with a modern one. You need to have up to date hardware and
-even then, the writer of such programs need to make sure it runs efficient,
-simply because hardware no longer scales like it did decades ago. You need to
-look at the larger picture.
+demands. Comparing \LUATEX\ with \PDFTEX\ is like comparing a low|-|res,
+low|-|framerate, low|-|color game with a modern one. You need to have
+up|-|to|-|date hardware and even then, the writer of such programs needs to make
+sure that they run efficiently, simply because hardware no longer scales like it
+did decades ago. You need to look at the bigger picture.
\stopsection
diff --git a/doc/context/sources/general/manuals/onandon/onandon-runtoks.tex b/doc/context/sources/general/manuals/onandon/onandon-runtoks.tex
new file mode 100644
index 000000000..b3adeb4a5
--- /dev/null
+++ b/doc/context/sources/general/manuals/onandon/onandon-runtoks.tex
@@ -0,0 +1,531 @@
+% language=uk
+
+\startcomponent onandon-amputating
+
+\environment onandon-environment
+
+\startchapter[title={Amputating code}]
+
+\startsection[title={Introduction}]
+
+Because \CONTEXT\ is already rather old in terms of software life and because it
+evolves over time, code can get replaced by better code. Reasons for this can be:
+
+\startitemize[packed]
+\startitem a better understanding of the way \TEX\ and \METAPOST\ work \stopitem
+\startitem demand for more advanced options \stopitem
+\startitem a brainwave resulting in a better solution \stopitem
+\startitem new functionality provided in the \TEX\ engine used \stopitem
+\startitem the necessity to speed up a core process \stopitem
+\stopitemize
+
+Replacing code that in itself does a good job but is no longer the best option
+comes with sentiments. It can be rather satisfying to cook up a
+(conceptually as well as codewise) good solution and therefore removing code from
+a file can result in a somewhat bad feeling and even a feeling of losing
+something. Hence the title of this chapter.
+
+Here I will discuss one of the more complex subsystems: the one dealing with
+typeset text in \METAPOST\ graphics. I will stick to the principles and not
+present (much) code as that can be found in archives. This is not a tutorial,
+but more a sort of wrap|-|up for myself. It anyhow shows the thinking behind
+this mechanism. I'll also introduce a new \LUATEX\ feature here: subruns.
+
+\stopsection
+
+\startsection[title={The problem}]
+
+\METAPOST\ is meant for drawing graphics and adding text to them is not really
+part of the concept. It's a bit like how \TEX\ sees images: the dimensions matter,
+the content doesn't. This means that in \METAPOST\ a blob of text is an
+abstraction. The native way to create a typeset text picture is:
+
+\starttyping
+picture p ; p := btex some text etex ;
+\stoptyping
+
+In traditional \METAPOST\ this will create a temporary \TEX\ file with the words
+\type {some text} wrapped in a box that when typeset is just shipped out. The
+result is a \DVI\ file that with an auxiliary program will be transformed into a
+\METAPOST\ picture. That picture itself is made from multiple pictures, because
+each sequence of characters becomes a picture and kerns become shifts.
+
+There is also a primitive \type {infont} that takes a text and just converts it
+into a low level text object but no typesetting is done there: so no ligatures
+and no kerns are found there. In \CONTEXT\ this operator is redefined to do the
+right thing.
+
+In both cases, what ends up in the \POSTSCRIPT\ file are references to fonts and
+characters and the original idea is that \DVIPS\ understands what
+fonts to embed. Details are communicated via specials (comments) that \DVIPS\ is
+supposed to intercept and understand. This all happens in an 8~bit (font) universe.
+
+When we moved on to \PDF, a converter from \METAPOST's rather predictable and
+simple \POSTSCRIPT\ code to \PDF\ was written in \TEX. The graphic operators
+became \PDF\ operators and the text was retypeset using the font information and
+snippets of strings and injected at the right spot. The only complication was
+that a non|-|circular pen actually produced two paths of which one has to be
+transformed.
+
+At that moment it already had become clear that a tighter integration in
+\CONTEXT\ would happen and not only would that demand a more sophisticated
+handling of text, but it would also require more features not present in
+\METAPOST, like dealing with \CMYK\ colors, special color spaces, transparency,
+images, shading, and more. All this was implemented. In the next sections we will
+only discuss texts.
+
+\stopsection
+
+\startsection[title={Using the traditional method}]
+
+The \type {btex} approach was not that flexible because what happens is that
+\type {btex} triggers the parser to just grabbing everything upto the \type
+{etex} and pass that to an external program. It's special scanner mode and
+because because of that using macros for typesetting texts is a pain. So, instead
+of using this method in \CONTEXT\ we used \type {textext}. Before a run the
+\METAPOST\ file was scanned and for each \type {textext} the argument was copied
+to a file. The \type {btex} calls were scanned to and replaced by \type {textext}
+calls.
+
+For each processed snippet the dimensions were stored in order to be loaded at
+the start of the \METAPOST\ run. In fact, each text was just a rectangle with
+certain dimensions. The \PDF\ converter would use the real snippet (by
+typesetting it).
+
+Of course there had to be some housekeeping in order to make sure that the right
+snippets were used, because the order of definition (as picture) can be different
+from them being used. This mechanism evolved into reasonably robust text handling
+but of course was limited by the fact that the file was scanned for snippets. So,
+the string had to be a string and not an assembled one. This disadvantage was
+compensated by the fact that we could communicate relevant bits of the
+environment and apply all the usual context trickery in texts in a way that was
+consistent with the rest of the document.
+
+A later implementation could communicate the text via specials which is more
+flexible. Although we talk of this method in the past tense, it is still used in
+\MKII.
+
+\stopsection
+
+\startsection[title={Using the library}]
+
+When the \MPLIB\ library showed up in \LUATEX, the same approach was used but
+soon we moved on to a different approach. We already used specials to communicate
+extensions to the backend, using special colors and fake objects as signals. But
+at that time paths got pre- and postscript fields and those could be used to
+really carry information with objects because unlike specials, they were bound to
+that object. So, all extensions using specials as well as texts were rewritten to
+use these scripts.
+
+The \type {textext} macro changed its behaviour a bit too. Remember that a
+text effectively was just a rectangle with some transformation applied. However
+this time the postscript field carried the text and the prescript field some
+specifics, like the fact that we are dealing with text. Using the script made
+it possible to carry some more information around, like special color demands.
+
+\starttyping
+draw textext("foo") ;
+\stoptyping
+
+Among the prescripts are \typ {tx_index=trial} and \typ {tx_state=trial}
+(multiple prescripts are prepended) and the postscript is \type {foo}. In a
+second run the prescripts are \type {tx_index=trial} and \typ {tx_state=final}.
+After the first run we analyze all objects, collect the texts (those with \type
+{tx_} variables set) and typeset them. As part of the second run we pass the
+dimensions of each indexed text snippet. Internally before the first run we
+\quote {reset} states, then after the first run we \quote {analyze}, and after
+the second run we \quote {process} as part of the conversion of output to \PDF.
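+
+At the \LUA\ end the collecting step boils down to inspecting the objects of the
+processed figure. The following fragment is only a sketch of that idea, with
+invented helper names; the real carriers are the \type {prescript} and \type
+{postscript} fields that \MPLIB\ exposes on each object:
+
+\starttyping
+-- sketch: collect text snippets from a processed figure
+local texts = { }
+
+local function collecttexts(figure)
+    for _, object in ipairs(figure:objects()) do
+        local prescript = object.prescript
+        if prescript then
+            local index = string.match(prescript,"tx_index=(%S+)")
+            if index then
+                -- the postscript field carries the text to be typeset
+                texts[index] = object.postscript
+            end
+        end
+    end
+end
+\stoptyping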
+
+\stopsection
+
+\startsection[title={Using \type {runscript}}]
+
+When the \type {runscript} feature was introduced in the library we no longer
+needed to pass the dimensions via subscripted variables. Instead we could just
+run a \LUA\ snippet and ask for the dimensions of a text with some index. This
+is conceptually not much different but it saves us creating \METAPOST\ code that
+stored the dimensions, at the cost of potentially a bit more runtime due to the
+\type {runscript} calls. But the code definitely looks a bit cleaner this way. Of
+course we had to keep the dimensions at the \LUA\ end but we already did that
+because we stored the preprocessed snippets for final usage.
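+
+The idea can be illustrated with a small sketch. The name \type {mf_dimensions}
+and the table are invented for this example; the essential ingredient is the
+\type {run_script} callback passed to \type {mplib.new}, whose string result is
+fed back into \METAPOST\ (where a \type {scantokens} can interpret it):
+
+\starttyping
+-- sketch: dimensions kept at the Lua end, queried from MetaPost
+local dimensions = { } -- [index] = { width, height, depth }, filled elsewhere
+
+local instance = mplib.new {
+    run_script = function(code)
+        local index = string.match(code,"^mf_dimensions%((%d+)%)$")
+        if index then
+            local d = dimensions[tonumber(index)] or { 0, 0, 0 }
+            return "(" .. d[1] .. "," .. d[2] .. "," .. d[3] .. ")"
+        end
+        return ""
+    end,
+}
+\stoptyping
+
+At the \METAPOST\ end a call like \typ {runscript "mf_dimensions(4)"} then
+produces the stored triplet.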
+
+\stopsection
+
+\startsection[title={Using a sub \TEX\ run}]
+
+We now come to the current (post \LUATEX\ 1.08) solution. For reasons I will
+mention later a two pass approach is not optimal, but we can live with that,
+especially because \CONTEXT\ with \METAFUN\ (which is what we're talking about
+here) is quite efficient. More important is that it's kind of ugly to do all the
+not that special work twice. In addition to text we also have outlines, graphics
+and more mechanisms that needed two passes and all these became one pass
+features.
+
+A \TEX\ run is special in many ways. At some point after starting up, \TEX\
+enters the main loop and begins reading text and expanding macros. Normally you
+start with a file but soon a macro is seen, and a next level of input is entered,
+because as part of the expansion more text can be met, files can be opened,
+other macros be expanded. When a macro expands a token register, another level is
+entered and the same happens when a \LUA\ call is triggered. Such a call can
+print back something to \TEX\ and that has to be scanned as if it came from a
+file.
+
+When token lists (and macros) get expanded, some commands result in direct
+actions, others result in expansion only and processing later as one or more
+tokens can end up in the input stack. The internals of the engine operate in
+miraculous ways. All commands trigger a function call, but some have their own
+while others share one with a switch statement (in \CCODE\ speak) because they
+belong to a category of similar actions. Some are expanded directly, some get
+delayed.
+
+Does it sound complicated? Well, it is. It's even more so when you consider that
+\TEX\ uses nesting, which means pushing and popping local assignments, knows
+modes, like horizontal, vertical and math mode, keeps track of interrupts and at
+the same time triggers typesetting, par building, page construction and flushing
+to the output file.
+
+It is for this reason plus the fact that users can and will do a lot to influence
+that behaviour that there is just one main loop and in many aspects global state.
+There are some exceptions, for instance when the output routine is called, which
+creates a sort of closure: it interrupts the process and for that reason gets
+grouping enforced so that it doesn't influence the main run. But even then the
+main loop does the job.
+
+Starting with version 1.10 \LUATEX\ provides a way to do a local run. There are
+two ways provided: expanding a token register and calling a \LUA\ function. It
+took a bit of experimenting to reach an implementation that works out reasonably
+well, and many variants were tried. In the appendix we give an example of usage.
+
+The current variant is reasonably robust and does the job but care is needed.
+First of all, as soon as you start piping something to \TEX\ that gets typeset
+you'd better be in a valid mode. If not, then for instance glyphs can end up in a
+vertical list and \LUATEX\ will abort. In case you wonder why we don't intercept
+this: we can't because we don't know the user's intentions. We cannot enforce a
+mode for instance as this can have side effects, think of expanding \type
+{\everypar} or injecting an indentation box. Also, as soon as you start juggling
+nodes there is no way that \TEX\ can foresee what needs to be copied or
+discarded. Normally it works out okay but because in \LUATEX\ you can cheat in
+numerous ways with \LUA, you can get into trouble.
+
+So, what has this to do with \METAPOST ? Well, first of all we could now use a
+one pass approach. The \type {textext} macro calls \LUA, which then lets \TEX\ do
+some typesetting, and then gives back the dimensions to \METAPOST. The \quote
+{analyze} phase is now integrated in the run. For a regular text this works quite
+well because we just box some text and that's it. However, in the next section we
+will see where things get complicated.
+
+Let's summarize the one pass approach: the \type {textext} macro creates
+rectangle with the right dimensions and for doing passes the string to \LUA\
+using \type {runscript}. We store the argument of \type {textext} in a variable,
+then call \type {runtoks}, which expands the given token list, where we typeset a
+box with the stored text (that we fetch with a \LUA\ call), and the \type
+{runscript} passes back the three dimensions as fake \RGB\ color to \METAPOST\
+which applies a \type {scantokens} to the result. So, in principle there is no
+real conceptual difference except that we now analyze in|-|place instead of
+between runs. I will not show the code here because in \CONTEXT\ we use a wrapper
+around \type {runscript} so low level examples won't run well.
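+
+Stripped of all \CONTEXT\ specifics, the essence can still be sketched. The
+fragment below is schematic, not the actual code: a token list typesets the
+pending text into a box,
+
+\starttyping
+\toks0{\setbox0\hbox{\directlua{tex.sprint(CurrentText)}}}
+\stoptyping
+
+and the \LUA\ side of the \type {textext} handler then does something like:
+
+\starttyping
+CurrentText = str        -- the argument of textext
+tex.runtoks(0)           -- typeset it in a local (sub) run
+local b = tex.getbox(0)  -- measure the result
+return string.format("(%s,%s,%s)", b.width, b.height, b.depth)
+\stoptyping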
+
+\stopsection
+
+\startsection[title={Some aspects}]
+
+An important aspect of the text handling is that the whole text can be
+transformed. Normally this is only some scaling but rotation is also quite valid.
+In the first approach, the original \METAPOST\ one, we have pictures constructed
+of snippets and pictures transform well as long as the backend is not too
+confused, something that can happen when for instance very small or large font
+scales are used. There were some limitations with respect to the number of fonts
+and efficient inclusion when for instance randomization was used (I remember
+cases with thousands of font instances). The \PDF\ backend could handle most
+cases well, by just using one size and scaling at the \PDF\ level. All the \type
+{textext} approaches use rectangles as stubs which is very efficient and permits
+all transforms.
+
+How about color? Think of this situation:
+
+\starttyping
+\startMPcode
+ draw textext("some \color[red]{text}")
+ withcolor green ;
+\stopMPcode
+\stoptyping
+
+And what about the document color? Suffice it to say that this is all well
+supported. Of course using transparency, spot colors etc.\ also needs extensions.
+These are however not directly related to texts although we need to take them into
+account when dealing with the inclusion.
+
+\starttyping
+\startMPcode
+ draw textext("some \color[red]{text}")
+ withcolor "blue"
+ withtransparency (1,0.5) ;
+\stopMPcode
+\stoptyping
+
+What if you have a graphic with many small snippets of which many have the same
+content? These are by default shared, but if needed you can disable it. This makes
+sense if you have a case like this:
+
+\starttyping
+\useMPlibrary[dum]
+
+\startMPcode
+ draw textext("\externalfigure[unknown]") notcached ;
+ draw textext("\externalfigure[unknown]") notcached ;
+\stopMPcode
+\stoptyping
+
+Normally each unknown image gets a nice placeholder with some random properties.
+So, do we want these two to be the same or not? At least you can control it.
+
+When I said that things can get complicated with the one pass approach the
+previous code snippet is a good example. The dummy figure is generated by
+\METAPOST. So, as we have one pass, and jump temporarily back to \TEX,
+we have two problems: we reenter the \MPLIB\ instance in the middle of
+a run, and we might pipe back something to and|/|or from \TEX\ nested.
+
+The first problem could be solved by starting a new \MPLIB\ session. This
+normally is not a problem as both runs are independent of each other. In
+\CONTEXT\ we can have \METAPOST\ runs in many places and some produce a more
+or less stand|-|alone graphic in the text while other calls produce \PDF\ code in
+the backend that is used in a different way (for instance in a font). In the
+first case the result gets nicely wrapped in a box, while in the second case it
+might directly end up in the page stream. And, as \TEX\ has no knowledge of what
+is needed, it's here that we can get the complications that can lead to aborting
+a run when you are careless. But in any case, if you abort, then you can be sure
+you're doing the wrong thing. So, the second problem can only be solved by
+careful programming.
+
+When I ran the test suite on the new code, some older modules had to be fixed.
+They were doing the right thing from the perspective of intermediate runs and
+therefore independent box handling, putting a text in a box and collecting
+dimensions, but interwoven they demanded a bit more defensive programming. For
+instance, the multi|-|pass approach always made copies of snippets while the one
+pass approach does that only when needed. And that confused some old code in a
+module, which incidentally is never used today because we have better
+functionality built|-|in (the \METAFUN\ \type {followtext} mechanism).
+
+The two pass approach has special code for cases where a text is not used.
+Imagine this:
+
+\starttyping
+picture p ; p := textext("foo") ;
+
+draw boundingbox p ;
+\stoptyping
+
+Here the \quote {analyze} stage will never see the text because we don't flush p.
+However because \type {textext} is called it can also make sure we still know the
+dimensions. In the next case we do use the text but in two different ways. These
+subtle aspects are dealt with properly and could be made a bit simpler in the
+single pass approach.
+
+\starttyping
+picture p ; p := textext("foo") ;
+
+draw p rotated 90 withcolor red ;
+draw p withcolor green ;
+\stoptyping
+
+\stopsection
+
+\startsection[title=One or two runs]
+
+So are we better off now? One problem with two passes is that if you use the
+equation solver you need to make sure that you don't run into the redundant
+equation issue. So, you need to manage your variables well. In fact you need to
+do that anyway because you can call out to \METAPOST\ many times in a run so old
+variables can interfere anyway. So yes, we're better off here.
+
+Are we worse off now? The two runs with the text processing in between are very
+robust. There is no interference of nested runs and no interference of nested
+local \TEX\ calls. So, maybe we're also a bit worse off. You need to anyhow keep
+this in mind when you write your own low level \TEX|-|\METAPOST\ interaction
+trickery, but fortunately not many users do that. And if you did write your own
+plugins, you now need to make them single pass.
+
+The new code is conceptually cleaner but also still not trivial due to
+the mentioned complications. It's definitely less code but somehow amputating the
+old code does hurt a bit. Maybe I should keep it around as a reference of how text
+handling evolved over a few decades.
+
+\stopsection
+
+\startsection[title=Appendix]
+
+Because the single pass approach made me finally look into a (albeit somewhat
+limited) local \TEX\ run, I will show a simple example. For the sake of
+generality I will use \type {\directlua}. Say that you need the dimensions of a
+box while in \LUA:
+
+\startbuffer
+\directlua {
+ tex.sprint("result 1: <")
+
+ tex.sprint("\\setbox0\\hbox{one}")
+ tex.sprint("\\number\\wd0")
+
+ tex.sprint("\\setbox0\\hbox{\\directlua{tex.print{'first'}}}")
+ tex.sprint(",")
+ tex.sprint("\\number\\wd0")
+
+ tex.sprint(">")
+}
+\stopbuffer
+
+\typebuffer \getbuffer
+
+This looks ok, but only because all printed text is collected and pushed into a
+new input level once the \LUA\ call is done. So take this then:
+
+\startbuffer
+\directlua {
+ tex.sprint("result 2: <")
+
+ tex.sprint("\\setbox0\\hbox{one}")
+ tex.sprint(tex.getbox(0).width)
+
+ tex.sprint("\\setbox0\\hbox{\\directlua{tex.print{'first'}}}")
+ tex.sprint(",")
+ tex.sprint(tex.getbox(0).width)
+
+ tex.sprint(">")
+}
+\stopbuffer
+
+\typebuffer \getbuffer
+
+This time we get the widths of the box known at the moment that we are in \LUA,
+but we haven't typeset the content yet, so we get the wrong dimensions. This
+however will work okay:
+
+\startbuffer
+\toks0{\setbox0\hbox{one}}
+\toks2{\setbox0\hbox{first}}
+\directlua {
+ tex.forcehmode(true)
+
+ tex.sprint("<")
+
+ tex.runtoks(0)
+ tex.sprint(tex.getbox(0).width)
+
+ tex.runtoks(2)
+ tex.sprint(",")
+ tex.sprint(tex.getbox(0).width)
+
+ tex.sprint(">")
+}
+\stopbuffer
+
+\typebuffer \getbuffer
+
+as does this:
+
+\startbuffer
+\toks0{\setbox0\hbox{\directlua{tex.sprint(MyGlobalText)}}}
+\directlua {
+ tex.forcehmode(true)
+
+ tex.sprint("result 3: <")
+
+ MyGlobalText = "one"
+ tex.runtoks(0)
+ tex.sprint(tex.getbox(0).width)
+
+ MyGlobalText = "first"
+ tex.runtoks(0)
+ tex.sprint(",")
+ tex.sprint(tex.getbox(0).width)
+
+ tex.sprint(">")
+}
+\stopbuffer
+
+\typebuffer \getbuffer
+
+Here is a variant that uses functions:
+
+\startbuffer
+\directlua {
+ tex.forcehmode(true)
+
+ tex.sprint("result 4: <")
+
+ tex.runtoks(function()
+ tex.sprint("\\setbox0\\hbox{one}")
+ end)
+ tex.sprint(tex.getbox(0).width)
+
+ tex.runtoks(function()
+ tex.sprint("\\setbox0\\hbox{\\directlua{tex.print{'first'}}}")
+ end)
+ tex.sprint(",")
+ tex.sprint(tex.getbox(0).width)
+
+ tex.sprint(">")
+}
+\stopbuffer
+
+\typebuffer \getbuffer
+
+The \type {forcehmode} is needed when you do this in vertical mode. Otherwise the
+run aborts. Of course you can also force horizontal mode before the call. I'm
+sure that users will be surprised by side effects when they really use this
+feature but that is to be expected: you really need to be aware of the subtle
+interference of input levels and mix of input media (files, token lists, macros
+or \LUA) as well as the fact that \TEX\ often looks one token ahead, and often,
+when forced to typeset something, can also trigger builders. You're warned.
+
+\stopsection
+
+\stopchapter
+
+\stopcomponent
+
+% \starttext
+
+% \toks0{\hbox{test}} [\ctxlua{tex.runtoks(0)}]\par
+
+% \toks0{\relax\relax\hbox{test}\relax\relax}[\ctxlua{tex.runtoks(0)}]\par
+
+% \toks0{xxxxxxx} [\ctxlua{tex.runtoks(0)}]\par
+
+% \toks0{\hbox{(\ctxlua{context("test")})}} [\ctxlua{tex.runtoks(0)}]\par
+
+% \toks0{\global\setbox1\hbox{(\ctxlua{context("test")})}} [\ctxlua{tex.runtoks(0)}\box1]\par
+
+% \startluacode
+% local s = "[\\ctxlua{tex.runtoks(0)}\\box1]"
+% context("<")
+% context( function() context(s) end)
+% context( function() context(s) end)
+% context(">")
+% \stopluacode\par
+
+% \toks10000{\hbox{\red test1}}
+% \toks10002{\green\hbox{test2}}
+% \toks10004{\hbox{\global\setbox1\hbox to 1000sp{\directlua{context("!4!")}}}}
+% \toks10006{\hbox{\global\setbox3\hbox to 2000sp{\directlua{context("?6?")}}}}
+% \hbox{x\startluacode
+% local s0 = "(\\hbox{\\ctxlua{tex.runtoks(10000)}})"
+% local s2 = "[\\hbox{\\ctxlua{tex.runtoks(10002)}}]"
+% context("<!")
+% -- context( function() context(s0) end)
+% -- context( function() context(s0) end)
+% -- context( function() context(s2) end)
+% context(s0)
+% context(s0)
+% context(s2)
+% context("<")
+% tex.runtoks(10004)
+% context("X")
+% tex.runtoks(10006)
+% context(tex.box[1].width)
+% context("/")
+% context(tex.box[3].width)
+% context("!>")
+% \stopluacode x}\par
+
+
diff --git a/doc/context/sources/general/manuals/onandon/onandon-variable.tex b/doc/context/sources/general/manuals/onandon/onandon-variable.tex
index c73196cef..c308864e6 100644
--- a/doc/context/sources/general/manuals/onandon/onandon-variable.tex
+++ b/doc/context/sources/general/manuals/onandon/onandon-variable.tex
@@ -12,7 +12,7 @@
\startsubject[title=Introduction]
-History shows the tendency to recycle ideas. Often quite some effort is made by
+History shows the tendency to recycle ideas. Often, quite some effort is made by
historians to figure out what really happened, not just long ago, when nothing
was written down and we have to do with stories or pictures at most, but also in
recent times. Descriptions can be conflicting, puzzling, incomplete, partially
@@ -20,64 +20,64 @@ lost, biased, \unknown
Just as language was invented (or evolved) several times, so were scripts. The
same might be true for rendering scripts on a medium. Semaphores came and went
-within decades and how many people know now that they existed and that encryption
+within decades, and how many people know now that they existed and that encryption
was involved? Are the old printing presses truly the old ones, or are older
examples simply gone? One of the nice aspects of the internet is that one can now
-more easily discover similar solutions for the same problem, but with a different
+more easily discover similar solutions to the same problem but with a different
(and independent) origin.
-So, how about this \quotation {new big thing} in font technology: variable fonts.
-In this case, history shows that it's not that new. For most \TEX\ users the
-names \METAFONT\ and \METAPOST\ will ring bells. They have a very well documented
-history so there is not much left to speculation. There are articles, books,
+So, how about this \quotation {next big thing} in font technology: variable fonts?
+In this case, history shows that it's not that new. For most \TEX\ users, the
+names \METAFONT\ and \METAPOST\ will ring bells. They have a very well-documented
+history, so there is not much left to speculation. There are articles, books,
pictures, examples, sources, and more around for decades. So, the ability to
change the appearance of a glyph in a font depending on some parameters is not
new. What probably {\em is} new is that creating variable fonts is done in the
-natural environment where fonts are designed: an interactive program. The
-\METAFONT\ toolkit demands quite some insight in programming shapes in such a way
+natural environment, where fonts are designed: an interactive program. The
+\METAFONT\ toolkit demands quite some insight into programming shapes in such a way
that one can change look and feel depending on parameters. There are not that
-many meta fonts made and one reason is that making them requires a certain mind-
-and skill set. On the other hand, faster computers, interactive programs,
-evolving web technologies, where rea|l|-time rendering and therefore more or less
+many metafonts made, and one reason is that making them requires a certain mind-
+and skillset. On the other hand, faster computers, interactive programs,
+evolving web technologies, where real-time rendering and therefore more or less
real-time tweaking of fonts is a realistic option, all play a role in acceptance.
-But do interactive font design programs make this easier? You still need to be
-able to translate ideas into usable beautiful fonts. Taking the common shapes of
-glyphs, defining extremes and letting a program calculate some interpolations
-will not always bring good results. It's like morphing a picture of your baby's
-face into yours of old age (or that of your grandparent): not all intermediate
-results will look great. It's good to notice that variable fonts are a revival of
-existing techniques and ideas used in, for instance, multiple master fonts. The
-details might matter even more as they can now be exaggerated when some
-transformation is applied.
-
-There is currently (March 2017) not much information about these fonts so what I
+But do interactive font design programs make this easier? You still need to
+translate ideas into usable beautiful fonts. Taking the common shapes of glyphs,
+defining extremes and letting a program calculate some interpolations will not
+always give good results. It's like morphing a picture of your baby's face into
+yours of old age or that of your grandparent: not all intermediate results will
+look great. It's good to notice that variable fonts are a revival of existing
+techniques and ideas used in, for instance, multiple master fonts. The details
+might matter even more as they can now be exaggerated when transformations are
+applied.
+
+There is currently (March 2017) not much information about these fonts, so what I
say next may be partially wrong or at least different from what is intended. The
-perspective will be one from a \TEX\ user and coder. Whatever you think of them,
-these fonts will be out there and for sure there will be nice examples
-circulating soon. And so, when I ran into a few experimental fonts, with
+perspective will be one of a \TEX\ user and coder. Whatever you think of them,
+these fonts will be out there, and, for sure, there will be nice examples
+circulating soon. And so, when I ran into a few experimental fonts with
\POSTSCRIPT\ and \TRUETYPE\ outlines, I decided to have a look at what is inside.
-After all, because it's visual, it's also fun to play with. Let's stress that at
-the moment of this writing I only have a few simple fonts available, fonts that
-are designed for testing and not usage. Some recommended tables were missing and
-no complex \OPENTYPE\ features are used in these fonts.
+After all, because it's visual, it's also fun to play with. Let's stress that, at
+the moment of this writing, I only have a few simple fonts available, fonts that
+are designed for testing and not for usage. Some recommended tables were missing
+and no complex \OPENTYPE\ features were used in these fonts.
\stopsubject
\startsubject[title=The specification]
I'm not that good at reading specifications, first of all because I quickly fall
-asleep with such documents, but most of all because I prefer reading other stuff
+asleep with such documents, but mostly because I prefer reading other stuff
(I do have lots of books waiting to be read). I'm also someone who has to play
with something in order to understand it: trial and error is my modus operandi.
-Eventually it's my intended usage that drives the interface and that is when
+Eventually, it's my intended usage that drives the interface and that is when
everything comes together.
Exploring this technology comes down to: locate a font, get the \OPENTYPE\ 1.8
specification from the \MICROSOFT\ website, and try to figure out what is in the
-font. When I had a rough idea the next step was to get to the shapes and see if I
-could manipulate them. Of course it helped that in \CONTEXT\ we already can load
-fonts and play with shapes (using \METAPOST). I didn't have to install and learn
+font. When I had a rough idea, the next step was to get to the shapes and see if
+I could manipulate them. Of course, it helped that we can already load fonts and
+play with shapes in \CONTEXT using \METAPOST. I didn't have to install and learn
other programs. Once I could render them, in this case by creating a virtual font
with inline \PDF\ literals, a next step was to apply variation. Then came the
first experiments with a possible user interface. Seeing more variation then
@@ -88,18 +88,18 @@ The main extension to the data packaged in a font file concerns the (to be
discussed) axis along which variable fonts operate and deltas to be applied to
coordinates. The \type {gdef} table has been extended and contains information
that is used in \type {gpos} features. There are new \type {hvar}, \type {vvar}
-and \type {mvar} tables that influence the horizontal, vertical and general font
+and \type {mvar} tables that influence the horizontal, vertical, and general font
dimensions. The \type {gvar} table is used for \TRUETYPE\ variants, while the
\type {cff2} table replaces the \type {cff} table for \OPENTYPE\ \POSTSCRIPT\
outlines. The \type {avar} and \type {stat} tables contain some
-meta|-|information about the axes of variations.
+metainformation about the axes of variations.
-It must be said that because this is new technology the information in the
+It must be said that, because this is a new technology, the information in the
standard is not always easy to understand. The fact that we have two rendering
techniques, \POSTSCRIPT\ \type {cff} and \TRUETYPE\ \type {ttf}, also means that
we have different information and perspectives. But this situation is not much
-different from \OPENTYPE\ standards a few years ago: it takes time but in the end
-I will get there. And, after all, users also complain about the lack of
+different from \OPENTYPE\ standards a few years ago: it takes time, but, in the
+end, I will get there. And, after all, users also complain about the lack of
documentation for \CONTEXT, so who am I to complain? In fact, it will be those
\CONTEXT\ users who will provide feedback and make the implementation better in
the~end.
@@ -110,41 +110,41 @@ the~end.
Before we discuss some details, it will be useful to summarize what the font
loader does when a user requests a font at a certain size and with specific
-features enabled. When a font is used the first time, its binary format is
-converted into a form that makes it suitable for use within \CONTEXT\ and
-therefore \LUATEX. This conversion involves collecting properties of the font as
-a whole (official names, general dimensions like x-height and em-width, etc.), of
+features enabled. When a font is used for the first time, its binary format is
+converted into a form that makes it suitable for use in \CONTEXT\ and therefore
+in \LUATEX. This conversion involves collecting the properties of the font as a
+whole (official names, general dimensions like x-height and em-width, etc.), of
glyphs (dimensions, \UNICODE\ properties, optional math properties), and all
-kinds of information that relates to (contextual) replacements of glyphs (small
+kinds of information that relate to (contextual) replacements of glyphs (small
caps, oldstyle, scripts like Arabic) and positioning (kerning, anchoring marks,
-etc.). In the \CONTEXT\ font loader this conversion is done in \LUA.
+etc.). In the \CONTEXT\ font loader, this conversion is done in \LUA.
-The result is stored in a condensed format in a cache and the next time the font
-is needed it loads in an instant. In the cached version the dimensions are
-untouched, so a font at different sizes has just one copy in the cache. Often a
-font is needed at several sizes and for each size we create a copy with scaled
+The result is stored in a condensed format in a cache, and, the next time the font
+is needed, it loads in an instant. In the cached version, the dimensions are
+untouched, so a font at different sizes has just one copy in the cache. Often, a
+font is needed at several sizes, and for each size, we create a copy with scaled
glyph dimensions. The feature-related dimensions (kerning, anchoring, etc.)\ are
shared and scaled when needed. This happens when sequences of characters in the
node list get converted into sequences of glyphs. We could do the same with glyph
-dimensions but one reason for having a scaled copy is that this copy can also
-contain virtual glyphs and these have to be scaled beforehand. In practice there
+dimensions, but one reason for having a scaled copy is that this copy can also
+contain virtual glyphs, and these have to be scaled beforehand. In practice, there
are several layers of caching in order to keep the memory footprint within
-reasonable bounds. \footnote {In retrospect one can wonder if that makes sense;
+reasonable bounds. \footnote {In retrospect, one can wonder if that makes sense;
just look at how much memory a browser uses when it has been open for some time.
-In the beginning of \LUATEX\ users wondered about caching fonts, but again, just
+In the beginning of \LUATEX, users wondered about caching fonts, but again, just
look at what amounts browsers cache: it gets pretty close to the average amount
of writes that a \SSD\ can handle per day within its guarantee.}
When the font is actually used, interaction between characters is resolved using
-the feature|-|related information. When for instance two characters need to be
+the feature|-|related information. When, for instance, two characters need to be
kerned, a lookup results in the injection of a kern, scaled from general
dimensions to the current size of the font.
-When the outlines of glyphs are needed in \METAFUN\ the font is also converted
+When the outlines of glyphs are needed in \METAFUN, the font is also converted
from its binary form to something in \LUA, but this time we filter the shapes.
-For a \type {cff} this comes down to interpreting the \type {charstrings} and
-reducing the complexity to \type {moveto}, \type {lineto} and \type {curveto}
-operators. In the process subroutines are inlined. The result is something that
+For a \type {cff}, this comes down to interpreting the \type {charstrings} and
+reducing the complexity to \type {moveto}, \type {lineto}, and \type {curveto}
+operators. In the process, subroutines are inlined. The result is something that
\METAPOST\ is happy with but that also can be turned into a piece of a \PDF.
We now come to what a variable font actually is: a basic design which is
@@ -170,7 +170,7 @@ endfor ;
\stopMPcode
\stoplinecorrection
-Here we have a linear scaling but glyphs are not normally done that way. There
+Here we have linear scaling, but glyphs are not normally done that way. There
are font collections out there with lots of intermediate variants (say from light
to heavy) and it's more profitable to sell each variant independently. However,
there is often some logic behind it, probably supported by programs that
@@ -178,68 +178,69 @@ designers use, so why not build that logic into the font and have one file that
represents many intermediate forms. In fact, once we have multiple axes, even
when the designer has clear ideas of the intended usage, nothing will prevent
users from tinkering with the axis properties in ways that will fulfil their
-demands but hurt the designers eyes. We will not discuss that dilemma here.
+demands (and hurt the designer's eyes). I will not discuss that dilemma here.
When a variable font follows the route described above, we face a problem. When
-you load a \TRUETYPE\ font it will just work. The glyphs are packaged in the same
-format as static fonts. However, a variable font has axes and on each axis a
-value can be set. Each axis has a minimum, maximum and default. It can be that
-the default instance also assumes some transformations are applied. The standard
-recommends adding tables to describe these things but the fonts that I played
-with each lacked such tables. So that leaves some guesswork. But still, just
-loading a \TRUETYPE\ font gives some sort of outcome, although the dimensions
-(widths) might be weird due to lack of a (default) axis being applied.
+you load a \TRUETYPE\ font, it will just work. The glyphs are packaged in the
+same format as static fonts. However, a variable font has axes, and, on each
+axis, a value can be set. Each axis has a minimum, a maximum, and a default. It
+can be that the default instance also assumes some transformations are applied.
+The standard recommends adding tables to describe these things, but the fonts
+that I played with each lacked such tables. So that leaves some guesswork. But
+still, just loading a \TRUETYPE\ font gives some sort of outcome, although the
+dimensions (widths) might be weird due to the lack of a (default) axis being
+applied.
An \OPENTYPE\ font with \POSTSCRIPT\ outlines is different: the internal \type
-{cff} format has been upgraded to \type {cff2} which on the one hand is less
-complicated but on the other hand has a few new operators \emdash\ which results
-in programs that have not been adapted complaining or simply quitting on them.
+{cff} format has been upgraded to \type {cff2}, which on the one hand is less
+complicated, but on the other hand has a few new operators, so programs that
+have not been adapted either complain about them or simply crash.
One could argue that a font is just a resource and that one only has to pass it
-along but that's not what works well in practice. Take \LUATEX. We can of course
-load the font and apply axis vales so that we can process the document as we
-normally do. But at some point we have to create a \PDF. We can simply embed the
-\TRUETYPE\ files but no axis values are applied. This is because, even if we add
-the relevant information, there is no way in current \PDF\ formats to deal with
-it. For that, we should be able to pass all relevant axis|-|related information
-as well as specify what values to use along these axes. And for \TRUETYPE\ fonts
-this information is not part of the shape description so then we in fact need to
-filter and pass more. An \OPENTYPE\ \POSTSCRIPT\ font is much cleaner because
-there we have the information needed to transform the shape mostly in the glyph
-description. There we only need to carry some extra information on how to apply
-these so|-|called blend values. The region|/|axis model used there only demands
-passing a relatively simple table (stripped down to what we need). But, as said
-above, \type {cff2} is not backward-compatible so a viewer will (currently)
-simply not show anything.
-
-Recalling how we load fonts, how does that translate with variable changes? If we
+along, but that's not what works well in practice. Take \LUATEX. We can, of
+course, load the font and apply axis values, so that we can process the document
+as we normally do. But, at some point, we have to create a \PDF\ file. We can
+simply embed the \TRUETYPE\ files, but no axis values are applied. This is
+because, even if we add the relevant information, there is no way in the current
+\PDF\ formats to deal with it. For that, we should be able to pass all relevant
+axis|-|related information as well as specify what values to use along these
+axes. And, for \TRUETYPE\ fonts, this information is not part of the shape
+description so then we need to filter and pass more. An \OPENTYPE\ \POSTSCRIPT\
+font is much cleaner, because there we have the information needed to transform
+the shape mostly in the glyph description. There, we only need to carry some
+extra information on how to apply these so|-|called blend values. The
+region|/|axis model used there only demands passing a relatively simple table
+(stripped down to what we need). But, as said above, \type {cff2} is not
+backward-compatible, so a viewer will (currently) simply not show anything.
+
+Recalling how we load fonts, how does that change with variable fonts? If we
have two characters with glyphs that get transformed and that have a kern between
them, the kern may or may not transform. So, when we choose values on an axis,
-then not only glyph properties change but also relations. We no longer can share
-positional information and scale afterwards because each instance can have
+then not only glyph properties change but also relations. We can no longer share
+positional information and scale afterwards, because each instance can have
different values to start with. We could carry all that information around and
-apply it at runtime but because we're typesetting documents with a static design
+apply it at runtime, but, because we're typesetting documents with a static design,
it's more convenient to just apply it once and create an instance. We can use the
-same caching as mentioned before but each chosen instance (provided by the font
+same caching as mentioned before, but each chosen instance (provided by the font
or made up by user specifications) is kept in the cache. As a consequence, using
a variable font has no overhead, apart from initial caching.
So, having dealt with that, how do we proceed? Processing a font is not different
from what we already had. However, I would not be surprised if users are not
-always satisfied with, for instance, kerning, because in such fonts a lot of care
-has to be given to this by the designer. Of course I can imagine that programs
+always satisfied with, for instance, kerning, because in such fonts, a lot of care
+has to be given to this by the designer. Of course, I can imagine that programs
used to create fonts deal with this, but even then, there is a visual aspect to
-it too. The good news is that in \CONTEXT\ we can manipulate features so in
-theory one can create a so|-|called font goodie file for a specific instance.
+it, too. The good news is that in \CONTEXT\ we can manipulate features, so, in
+theory, one can create a so|-|called font goodie file for a specific instance.
\stopsubject
\startsubject[title=Shapes]
-For \OPENTYPE\ \POSTSCRIPT\ shapes we always have to do a dummy rendering in
-order to get the right bounding box information. For \TRUETYPE\ this information
+For \OPENTYPE\ \POSTSCRIPT\ shapes, we always have to do a dummy rendering in
+order to get the right bounding box information. For \TRUETYPE, this information
is already present but not when we use a variable instance, so I had to do a bit
-of coding for that. Here we face a problem. For \TEX\ we need the width, height
+of coding for that. Here we face a problem. For \TEX, we need the width, height
and depth of a glyph. Consider the following case:
\startlinecorrection
@@ -259,16 +260,16 @@ draw boundingbox currentpicture
The shape has a bounding box that fits the shape. However, its left corner is not
at the origin. So, when we calculate a tight bounding box, we cannot use it for
actually positioning the glyph. We do use it (for horizontal scripts) to get the
-height and depth but for the width we depend on an explicit value. In \OPENTYPE\
-\POSTSCRIPT\ we have the width available and how the shape is positioned relative
-to the origin doesn't much matter. In a \TRUETYPE\ shape a bounding box is part
-of the specification, as is the width, but for a variable font one has to use
-so-called phantom points to recalculate the width and the test fonts I had were
+height and depth, but for the width, we depend on an explicit value. In \OPENTYPE\
+\POSTSCRIPT, we have the width available, and how the shape is positioned relative
+to the origin doesn't much matter. In a \TRUETYPE\ shape, a bounding box is part
+of the specification, as is the width, but for a variable font, one has to use
+so|-|called phantom points to recalculate the width, and the test fonts I had were
not suitable for investigating this.
At any rate, once I could generate documents with typeset text using variable
-fonts it became time to start thinking about a user interface. A variable font
-can have predefined instances but of course a user also wants to mess with axis
+fonts, it was time to start thinking about a user interface. A variable font
+can have predefined instances, but, of course, a user also wants to mess with axis
values. Take one of the test fonts: Adobe Variable Font Prototype. It has several
instances:
@@ -310,7 +311,7 @@ The Avenir Next variable demo font (currently) provides:
\SampleFont {avenirnextvariable} {heavy condensed} {heavycondensed}
\stoptabulate
-Before we continue I will show a few examples of variable shapes. Here we use some
+Before we continue, I will show a few examples of variable shapes. Here, we use some
\METAFUN\ magic. Just take these definitions for granted.
\startbuffer[a]
@@ -357,15 +358,15 @@ Before we continue I will show a few examples of variable shapes. Here we use so
\typebuffer[a,b,c,d]
The results are shown in \in {figure} [fig:whatever:1]. What we see here is that
-as long as we fill the shape everything will look as expected but using an
-outline only won't. The crucial (control) points are moved to different locations
-and as a result they can end up inside the shape. Giving up outlines is the price
-we evidently need to pay. Of course this is not unique for variable fonts
-although in practice static fonts behave better. To some extent we're back to
+as long as we fill the shape, everything will look as expected, but using only
+an outline won't. The crucial (control) points are moved to different locations
+and, as a result, they can end up inside the shape. Giving up outlines is the price
+we evidently need to pay. Of course this is not unique for variable fonts,
+although, in practice, static fonts behave better. To some extent, we're back to
where we were with \METAFONT\ and (for instance) Computer Modern: because these
originate in bitmaps (and probably use similar design logic) we also can have
overlap and bits and pieces pasted together and no one will notice that. The
-first outline variants of Computer Modern also had such artifacts while in the
+first outline variants of Computer Modern also had such artifacts, while in the
static Latin Modern successors, outlines were cleaned up.
\startplacefigure[title=Four variants,reference=fig:whatever:1]
@@ -377,9 +378,10 @@ static Latin Modern successors, outlines were cleaned up.
\stopcombination
\stopplacefigure
-The fact that we need to preprocess an instance but only know how to do that when
-we have gotten the information about axis values from the font means that the
-font handler has to be adapted to keep caching correct. Another definition is:
+The fact that we need to preprocess an instance, but that we only know how to do
+that after we have retrieved information about axis values from the font means
+that the font handler has to be adapted to keep caching correct. Another
+definition is:
\starttyping
\definefontfeature
@@ -392,9 +394,9 @@ font handler has to be adapted to keep caching correct. Another definition is:
[name:adobevariablefontprototype*lightdefault]
\stoptyping
-Here the complication is that where normally features are dealt with after
+Here, the complication is that where normally features are dealt with after
loading, the axis feature is part of the preparation (and caching). If you want
-the virtual font solution you can do this:
+the virtual font solution, you can do this:
\starttyping
\definefontfeature
@@ -408,12 +410,12 @@ the virtual font solution you can do this:
[name:adobevariablefontprototype*inlinelightdefault]
\stoptyping
-When playing with these fonts it was hard to see if loading was done right. For
-instance not all values make sense. It is beyond the scope of this article, but
-axes like weight, width, contrast and italic values get applied differently to
-so|-|called regions (subspaces). So say that we have an $x$ coordinate with value
-$50$. This value can be adapted in, for instance, four subspaces (regions), so we
-actually get:
+When playing with these fonts, it was hard to see if loading was done right. For
+instance, not all values make sense. It is beyond the scope of this article, but
+axes like weight, width, contrast, and italic values get applied differently to
+so|-|called regions (subspaces). So, say that we have an $x$ coordinate with the
+value $50$. This value can be adapted in, for instance, four subspaces (regions),
+so we actually get:
\startformula
x^\prime = x
@@ -423,11 +425,11 @@ actually get:
+ s_4 \times x_4
\stopformula
-The (here) four scale factors $s_n$ are determined by the axis value. Each axis
+The (here four) scale factors $s_n$ are determined by the axis value. Each axis
has some rules about how to map the values $230$ for weight and $50$ for contrast
-to such a factor. And each region has its own translation from axis values to
-these factors. The deltas $x_1,\dots,x_4$ are provided by the font. For a
-\POSTSCRIPT|-|based font we find sequences like:
+to such a factor. Each region has its own translation from axis values to these
+factors. The deltas $x_1,\dots,x_4$ are provided by the font. In a
+\POSTSCRIPT|-|based font, we find sequences like:
\starttyping
1 <setvstore>
@@ -437,10 +439,10 @@ these factors. The deltas $x_1,\dots,x_4$ are provided by the font. For a
A store refers to a region specification. From there the factors are calculated
using the chosen values on the axis. The deltas are part of the glyphs
-specification. Officially there can be multiple region specifications, but how
+specification. Officially, there can be multiple region specifications, but how
likely it is that they will be used in real fonts is an open question.
-For \TRUETYPE\ fonts the deltas are not in the glyph specification but in a
+In \TRUETYPE\ fonts, the deltas are not in the glyph specification but in a
dedicated \type {gvar} table.
\starttyping
@@ -448,7 +450,7 @@ apply x deltas [10 -30 40 -60] to x 120
apply y deltas [30 -10 -30 20] to y 100
\stoptyping
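
As a numeric illustration (with made|-|up scale factors, since the real ones
follow from the chosen axis values and the region definitions): applying the
x~deltas above to $x = 120$ with factors $s = (0.5, 1, 0, 0.25)$ gives

\startformula
x^\prime = 120 + 0.5 \times 10 + 1 \times (-30) + 0 \times 40
 + 0.25 \times (-60) = 80
\stopformula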
-Here the deltas come from tables outside the glyph specification and their
+Here, the deltas come from tables outside the glyph specification and their
application is triggered by a combination of axis values and regions.
The following two examples use Avenir Next Variable and demonstrate that kerning
@@ -488,19 +490,19 @@ is adapted to the variant.
\startsubject[title=Embedding]
-Once we're done typesetting and a \PDF\ file has to be created there are three
+Once we're done typesetting and a \PDF\ file has to be created, there are three
possible routes:
\startitemize
\startitem
We can embed the shapes as \PDF\ images (inline literal) using virtual
- font technology. We cannot use so|-|called xforms here because we want to
+ font technology. We cannot use so|-|called xforms here, because we want to
support color selectively in text.
\stopitem
\startitem
- We can wait till the \PDF\ format supports such fonts, which might happen
- but even then we might be stuck for years with viewers getting there. Also
- documents need to get printed, and when printer support might
+ We can wait till the \PDF\ format supports such fonts, which might
+ happen, but even then we might be stuck for years with viewers getting
+ there. Also, documents need to be printed, and when printer support might
arrive is another unknown.
\stopitem
\startitem
@@ -512,43 +514,43 @@ possible routes:
Once I could interpret the right information in the font, the first route was the
way to go. A side effect of having a converter for both outline types meant that
it was trivial to create a virtual font at runtime. This option will stay in
-\CONTEXT\ as pseudo|-|feature \type {variableshapes}.
+\CONTEXT\ as a pseudo|-|feature \type {variableshapes}.
-When trying to support variable fonts I tried to limit the impact on the backend
+When trying to support variable fonts, I tried to limit the impact on the backend
code. Also, processing features and such was not touched. The inclusion of the
right shapes is done via a callback that requests the blob to be injected in the
-\type {cff} or \type {glyf} table. When implementing this I actually found out
-that the \LUATEX\ backend also does some juggling of charstrings, to serve the
-purpose of inlining subroutines. In retrospect I could have learned a few tricks
-faster by looking at that code but I never realized that it was there. Looking at
-the code again, it strikes me that the whole inclusion could be done with \LUA\
-code and some day I will give that a try.
+\type {cff} or \type {glyf} table. When implementing this, I actually found out
+that the \LUATEX\ backend also does some juggling of charstrings to inline
+subroutines. In retrospect, I could have learned a few tricks faster by looking
+at that code, but I never realized that it was there. Looking at the code again,
+it strikes me that the whole inclusion could be done with \LUA\ code, and, some
+day, I will give that a try.
\stopsubject
\startsubject[title=Conclusion]
-When I first heard about variable fonts I was confident that when they showed up
-they could be supported. Of course a specimen was needed to prove this. A first
-implementation demonstrates that indeed it's no big deal to let \CONTEXT\ with
-\LUATEX\ handle such fonts. Of course we need to fill in some gaps which can be
-done once we have complete fonts. And then of course users will demand more
-control. In the meantime the helper script that deals with identifying fonts by
-name has been extended and the relevant code has been added to the distribution.
-At some point the \CONTEXT\ Garden will provide the \LUATEX\ binary that has the
-callback.
-
-I end with a warning. On the one hand this technology looks promising but on the
-other hand one can easily get lost. Probably most such fonts operate over a
-well|-|defined domain of values but even then one should be aware of complex
+When I first heard about variable fonts, I was confident that when they showed
+up, they could be supported. Of course, a specimen was needed to prove this. A
+first implementation demonstrates that, indeed, it's no big deal to let \CONTEXT\
+with \LUATEX\ handle such fonts. Of course, we need to fill in some gaps, which
+can be done once we have complete fonts. And then, of course, users will demand
+more control. In the meantime, the helper script that deals with identifying
+fonts by name has been extended, and the relevant code has been added to the
+distribution. At some point, the \CONTEXT\ Garden will provide the \LUATEX\
+binary that has the callback.
+
+I end on a warning note. On the one hand, this technology looks promising, but on
+the other hand, one can easily get lost. Most such fonts probably operate over a
+well|-|defined domain of values, but, even then, one should be aware of complex
interactions with features like positioning or replacements. Not all combinations
-can be tested. It's probably best to stick to fonts that have all the relevant
+can be tested. It's probably best to stick with fonts that have all the relevant
tables and don't depend on properties of a specific rendering technology.
-Although support is now present in the core of \CONTEXT\ the official release
-will happen at the \CONTEXT\ meeting in 2017. By then I hope to have tested more
-fonts. Maybe the interface has also been extended by then because after all,
-\TEX\ is about control.
+Although support is now present in the core of \CONTEXT, the official release
+will happen at the \CONTEXT\ meeting in 2017. By then, I hope to have tested more
+fonts. Maybe the interface will also have been extended by then, because, after
+all, \TEX\ is about control.
\stopsubject
diff --git a/doc/context/sources/general/manuals/onandon/onandon.tex b/doc/context/sources/general/manuals/onandon/onandon.tex
index 60b626a5e..00b01f9ae 100644
--- a/doc/context/sources/general/manuals/onandon/onandon.tex
+++ b/doc/context/sources/general/manuals/onandon/onandon.tex
@@ -35,22 +35,18 @@
\startbodymatter
\component onandon-decade
\component onandon-ffi
- % \startchapter[title=Variable fonts] First published in user group magazines. \stopchapter
- \component onandon-variable
+ \component onandon-variable % first published in user group magazines
\component onandon-emoji
- \startchapter[title={Children of \TEX}] First published in user group magazines. \stopchapter
- % \component onandon-children
\component onandon-performance
\component onandon-editing
- \startchapter[title={Advertising \TEX}] First published in user group magazines. \stopchapter
- % \component onandon-perception
- \startchapter[title={Tricky fences}] First published in user group magazines. \stopchapter
- % \component onandon-fences
- % \component onandon-media
- \startchapter[title={From 5.2 to 5.3}] Maybe first published in user group magazines. \stopchapter
- % \component onandon-53
- \startchapter[title={Executing \TEX}] Maybe first published in user group magazines. \stopchapter
- % \component onandon-execute
+ \component onandon-fences % first published in user group magazines
+ \component onandon-media
+ \component onandon-53 % first published in user group magazines
+ \component onandon-execute % first published in user group magazines
+ \component onandon-modern
+ \component onandon-expansion
+ \component onandon-runtoks
+ \component onandon-110
\stopbodymatter
\stopproduct