From 059fc69b2c7853b937ddb4cfc9d36304dee07893 Mon Sep 17 00:00:00 2001
From: Hans Hagen
-
-We save positional information in the main utility table. Not only
-can we store much more information in
-
-We start with a registration system for attributes so that we can use the
-symbolic names later on. We reserve this one as we really want it to be
-always set (faster). Private attributes are used by the system and public
-ones are for users. We use dedicated ranges of numbers for them. Of course a
-
-This is a prelude to integrated bibliography support. This file just loads
-bibtex files and converts them to xml so that we can access the content
-in a convenient way. Actually handling the data takes place elsewhere.
-
-This module implements some methods and creates additional datastructures
-from the big character table that we use for all kind of purposes:
-
-We assume that at this point
-
-This converts a string (if given) into a number. At this point we assume
-that the big data table is loaded. From this table we derive a few more.
-
-Next comes a whole series of helper methods. These are (will be) part
-of the official
-
-Requesting lower and uppercase codes:
-
-In order to deal with 8-bit output, we need to find a way to go from
-
-This leaves us problems with characters that are specific to
-
-We get a more efficient variant of this when we integrate
-replacements in the collapser. This more or less renders the previous
-private code redundant. The following code is equivalent but the
-first snippet uses the relocated dollars.
-
-Instead of using a
-
-Setting the lccodes is also done in a loop over the data table.
-
-When a sequence of
-
-This module implements methods for collapsing and expanding
-
-We implement these manipulations as filters. One can run multiple filters
-over a string.
-
-The old code has now been moved to char-obs.lua which we keep around for
-educational purposes.
-
-It only makes sense to collapse at runtime, since we don't expect source code
-to depend on collapsing.
-
-The next code started out as an adaptation of code from Wolfgang Schuster as
-posted on the mailing list. The current version supports nested braces and
-unbraced integers as scripts.
-
-This module implements a bunch of conversions. Some are more
-efficient than their
-
-Some code may move to a module in the language namespace.
-
-This module provides a (multipass) container for arbitrary data. It
-replaces the twopass data mechanism. We also provide an efficient variant
-for page states.
-
-We save multi-pass information in the main utility table. This is a
-bit of a mess because we support old and new methods.
-
-A utility file has always been part of
-
-Variables are saved in the previously defined table and passed
-onto
-
-Once we found ourselves defining similar cache constructs several times,
-containers were introduced. Containers are used to collect tables in memory and
-reuse them when possible based on (unique) hashes (to be provided by the calling
-function).
-
-Caching to disk is disabled by default. Version numbers are stored in the
-saved table which makes it possible to change the table structures without
-bothering about the disk cache.
-
-Examples of usage can be found in the font related code. This code is not
-ideal but we need it in generic too so we compromise.
-
-We use a url syntax for accessing the tar file itself and a file in it:
-
-This module deals with caching data. It sets up the paths and implements
-loaders and savers for tables. Best is to set the following variable. When not
-set, the usual paths will be checked. Personally I prefer the (users) temporary
-path.
-
-Currently we do no locking when we write files. This is no real problem
-because most caching involves fonts and the chance of them being written at the
-same time is small. We also need to extend luatools with a recache feature.
-
-We use a url syntax for accessing the zip file itself and a file in it:
-
-It's more convenient to manipulate filenames (paths) in
-
-For ligatures, only characters with a code smaller than 128 make sense,
-anything larger is encoding dependent. An interesting complication is that a
-character can be in an encoding twice but is hashed once.
-
-Here we only implement a few helper functions.
-
-We need to normalize the scale factor (in scaled points). This has to
-do with the fact that
-
-Beware, the boundingbox is passed as reference so we may not overwrite it
-in the process; numbers are of course copies. Here 65536 equals 1pt. (Due to
-excessive memory usage in CJK fonts, we no longer pass the boundingbox.)
-
-The reason why the scaler was originally split, is that for a while we
-experimented with a helper function. However, in practice the
-
-A unique hash value is generated by:
-
-In principle we can share tfm tables when we are in need for a font, but then
-we need to define a font switch as an id/attr switch which is no fun, so in that
-case users can best use dynamic features ... so, we will not use that speedup. Okay,
-when we get rid of base mode we can optimize even further by sharing, but then we
-lose our testcases for
-
-We need to check for default features. For this we provide
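Several fragments above describe the containers mechanism: tables collected in memory, reused via a (unique) hash supplied by the caller, with a version number stored in the saved table so that structure changes invalidate old caches. A minimal sketch of that idea, with invented function names (not the real ConTeXt containers API):

```lua
-- Illustrative only: a container keyed by hash, invalidated by version bumps.
local function newcontainer(version)
    return { version = version, storage = { } }
end

local function read(container, hash)
    local stored = container.storage[hash]
    if stored and stored.version == container.version then
        return stored.data
    end
    return nil -- absent, or saved by an older version of the table structure
end

local function write(container, hash, data)
    container.storage[hash] = { version = container.version, data = data }
    return data
end

local c = newcontainer(1.001)
write(c, "somefont@10pt", { checksum = 123 })
-- read(c, "somefont@10pt") now returns the stored table; bumping
-- c.version makes the stored entry stale so read returns nil again
```

The version check is what lets the table format change "without bothering about the disk cache": stale entries simply fail validation and get rebuilt.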
-a helper function.
-
-So far we haven't really dealt with features (or whatever we want
-to pass along with the font definition. We distinguish the following
-situations:
-name:xetex like specs
-name@virtual font spec
-name*context specification
-
---ldx]]--
-
--- currently fonts are scaled while constructing the font, so we
--- have to do scaling of commands in the vf at that point using e.g.
--- "local scale = g.parameters.factor or 1" after all, we need to
--- work with copies anyway and scaling needs to be done at some point;
--- however, when virtual tricks are used as feature (makes more
--- sense) we scale the commands in fonts.constructors.scale (and set the
--- factor there)
+-- So far we haven't really dealt with features (or whatever we want to pass along
+-- with the font definition. We distinguish the following situations:
+--
+-- name:xetex like specs
+-- name@virtual font spec
+-- name*context specification
+--
+-- Currently fonts are scaled while constructing the font, so we have to do scaling
+-- of commands in the vf at that point using e.g. "local scale = g.parameters.factor
+-- or 1" after all, we need to work with copies anyway and scaling needs to be done
+-- at some point; however, when virtual tricks are used as feature (makes more
+-- sense) we scale the commands in fonts.constructors.scale (and set the factor
+-- there).
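The scaling remark above ("local scale = g.parameters.factor or 1") can be sketched as follows. The command shapes mimic virtual font commands, but the function and the surrounding plumbing are invented for illustration, not the actual fonts.constructors.scale code:

```lua
-- Hypothetical sketch: scale the movement commands of a virtual font by the
-- font's scale factor; non-movement commands are kept as-is.
local function scalecommands(commands, factor)
    local scaled = { }
    for i, cmd in ipairs(commands) do
        if cmd[1] == "right" or cmd[1] == "down" then
            scaled[i] = { cmd[1], cmd[2] * factor } -- distances scale
        else
            scaled[i] = cmd                         -- e.g. { "char", n } stays
        end
    end
    return scaled
end

local scaled = scalecommands({ { "right", 10 }, { "char", 65 } }, 2)
-- scaled[1][2] is 20, scaled[2] is the untouched char command
```

Since we work with copies anyway, doing this once at construction time (with the factor set there) avoids rescaling on every use.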
local loadfont = definers.loadfont
@@ -2385,10 +2378,8 @@ dimenfactors.em = nil
dimenfactors["%"] = nil
dimenfactors.pct = nil
---[[ldx--
-Before a font is passed to
-
-Here we deal with defining fonts. We do so by intercepting the
-default loader that only handles
-
-We hardly gain anything when we cache the final (pre scaled)
-
-We can prefix a font specification by
-
-The following function split the font specification into components
-and prepares a table that will move along as we proceed.
---ldx]]--
+-- We hardly gain anything when we cache the final (pre scaled) TFM table. But it
+-- can be handy for debugging, so we no longer carry this code along. Also, we now
+-- have quite some references to other tables so we would end up with lots of
+-- catches.
+--
+-- We can prefix a font specification by "name:" or "file:". The first case will
+-- result in a lookup in the synonym table.
+--
+--     [ name: | file: ] identifier [ separator [ specification ] ]
+--
+-- The following function splits the font specification into components and
+-- prepares a table that will move along as we proceed.

 -- beware, we discard additional specs
 --
@@ -164,9 +156,7 @@ if context then
 end

---[[ldx--
-We can resolve the filename using the next function:
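The "[ name: | file: ] identifier [ separator [ specification ] ]" shape can be sketched roughly as below. The field names are illustrative, not the definer's real specification table, and the default lookup is assumed to be "name":

```lua
-- Rough sketch of splitting a font specification into its components.
local function splitspec(str)
    local lookup, rest = str:match("^(name):(.*)$")
    if not lookup then
        lookup, rest = str:match("^(file):(.*)$")
    end
    if not lookup then
        lookup, rest = "name", str -- assumed default lookup
    end
    local name, method, detail = rest:match("^([^:@*]+)([:@*])(.*)$")
    if not name then
        name, method, detail = rest, "", ""
    end
    return { lookup = lookup, name = name, method = method, detail = detail }
end

local t = splitspec("file:lmroman10-regular*default")
-- t.lookup is "file", t.name is "lmroman10-regular", t.detail is "default"
```

A table like this can then "move along as we proceed", accumulating resolved filenames and features without reparsing the string.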
---ldx]]--
+-- We can resolve the filename using the next function:

 definers.resolvers = definers.resolvers or { }
 local resolvers    = definers.resolvers
@@ -258,23 +248,17 @@ function definers.resolve(specification)
     return specification
 end

---[[ldx--
-The main read function either uses a forced reader (as determined by
-a lookup) or tries to resolve the name using the list of readers.
-
-We need to cache when possible. We do cache raw tfm data (from
-
-Watch out, here we do load a font, but we don't prepare the
-specification yet.
---ldx]]--
-
--- very experimental:

+-- The main read function either uses a forced reader (as determined by a lookup) or
+-- tries to resolve the name using the list of readers.
+--
+-- We need to cache when possible. We do cache raw tfm data (from TFM, AFM or OTF).
+-- After that we can cache based on specification (name) and size, that is, TeX only
+-- needs a number for already loaded fonts. However, it may make sense to cache
+-- fonts before they're scaled as well (store TFM's with applied methods and
+-- features). However, there may be a relation between the size and features (esp in
+-- virtual fonts) so let's not do that now.
+--
+-- Watch out, here we do load a font, but we don't prepare the specification yet.

 function definers.applypostprocessors(tfmdata)
     local postprocessors = tfmdata.postprocessors
@@ -439,17 +423,13 @@ function constructors.readanddefine(name,size) -- no id -- maybe a dummy first
     return fontdata[id], id
 end

---[[ldx--
-So far the specifiers. Now comes the real definer. Here we cache
-based on id's. Here we also intercept the virtual font handler. Since
-it evolved stepwise I may rewrite this bit (combine code).
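The two-level caching described in the comment above (raw data cached once per name, scaled instances cached per name and size) can be sketched like this; the names and plumbing are made up for illustration:

```lua
-- Illustrative sketch: raw font data is loaded once, scaled instances are
-- cached per (name, size) so TeX only needs a number afterwards.
local rawcache, scaledcache = { }, { }

local function loadraw(name, loader)
    local data = rawcache[name]
    if data == nil then
        data = loader(name)     -- expensive: parse TFM/AFM/OTF
        rawcache[name] = data
    end
    return data
end

local function define(name, size, loader, scaler)
    local key      = name .. "@" .. size
    local instance = scaledcache[key]
    if instance == nil then
        instance = scaler(loadraw(name, loader), size)
        scaledcache[key] = instance
    end
    return instance
end

local calls = 0
local function loader(name) calls = calls + 1 return { name = name } end
local function scaler(data, size) return { data = data, size = size } end

define("test", 10, loader, scaler)
define("test", 12, loader, scaler)
-- calls is 1: the raw data was loaded once but scaled twice
```

Caching pre-scaled fonts with features applied would add a third level, but as the comment notes, size and features can interact (especially in virtual fonts), which is why that step is skipped.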
-
-In the previously defined reader (the one resulting in a
-
-We overload the
-
-Because encodings are going to disappear, we don't bother defining
-them in tables. But we may do so some day, for consistency.
---ldx]]--
+-- Because encodings are going to disappear, we don't bother defining them in
+-- tables. But we may do so some day, for consistency.

 local report_encoding = logs.reporter("fonts","encoding")
@@ -43,24 +41,19 @@ function encodings.is_known(encoding)
     return containers.is_valid(encodings.cache,encoding)
 end

---[[ldx--
-An encoding file looks like this:
-
-Beware! The generic encoding files don't always apply to the ones that
-ship with fonts. This has to do with the fact that names follow (slightly)
-different standards. However, the fonts where this applies to (for instance
-Latin Modern or
-
-There is no unicode encoding but for practical purposes we define
-one.
---ldx]]--
+-- There is no unicode encoding but for practical purposes we define one.

 -- maybe make this a function:

diff --git a/tex/context/base/mkiv/font-fbk.lua b/tex/context/base/mkiv/font-fbk.lua
index b6c9a430d..da04b50a8 100644
--- a/tex/context/base/mkiv/font-fbk.lua
+++ b/tex/context/base/mkiv/font-fbk.lua
@@ -10,10 +10,6 @@ local cos, tan, rad, format = math.cos, math.tan, math.rad, string.format
 local utfbyte, utfchar = utf.byte, utf.char
 local next = next

---[[ldx--
-This is very experimental code!
---ldx]]--
-
 local trace_visualize = false  trackers.register("fonts.composing.visualize", function(v) trace_visualize = v end)
 local trace_define    = false  trackers.register("fonts.composing.define",    function(v) trace_define    = v end)

diff --git a/tex/context/base/mkiv/font-imp-tex.lua b/tex/context/base/mkiv/font-imp-tex.lua
index b4b9a7b69..87a1ae3aa 100644
--- a/tex/context/base/mkiv/font-imp-tex.lua
+++ b/tex/context/base/mkiv/font-imp-tex.lua
@@ -13,36 +13,31 @@ local otf = fonts.handlers.otf
 local registerotffeature = otf.features.register
 local addotffeature      = otf.addfeature

--- tlig (we need numbers for some fonts so ...)
+-- We provide a few old and obsolete compatibility input features. We need numbers
+-- for some fonts so no names here. Do we also need them for afm fonts?

-local specification = {
+local tlig = {
     type    = "ligature",
     order   = { "tlig" },
     prepend = true,
     data    = {
-        -- endash        = "hyphen hyphen",
-        -- emdash        = "hyphen hyphen hyphen",
-        [0x2013] = { 0x002D, 0x002D },
-        [0x2014] = { 0x002D, 0x002D, 0x002D },
-        -- quotedblleft  = "quoteleft quoteleft",
-        -- quotedblright = "quoteright quoteright",
-        -- quotedblleft  = "grave grave",
-        -- quotedblright = "quotesingle quotesingle",
-        -- quotedblbase  = "comma comma",
+        [0x2013] = { 0x002D, 0x002D },
+        [0x2014] = { 0x002D, 0x002D, 0x002D },
     },
 }

-addotffeature("tlig",specification)
-
-registerotffeature {
-    -- this makes it a known feature (in tables)
-    name        = "tlig",
-    description = "tex ligatures",
+local tquo = {
+    type    = "ligature",
+    order   = { "tquo" },
+    prepend = true,
+    data    = {
+        [0x201C] = { 0x0060, 0x0060 },
+        [0x201D] = { 0x0027, 0x0027 },
+        [0x201E] = { 0x002C, 0x002C },
+    },
 }

--- trep
-
-local specification = {
+local trep = {
     type    = "substitution",
     order   = { "trep" },
     prepend = true,
@@ -53,13 +48,13 @@ local specification = {
     },
 }

-addotffeature("trep",specification)
+addotffeature("trep",trep) -- last
+addotffeature("tlig",tlig)
+addotffeature("tquo",tquo) -- first

-registerotffeature {
-    -- this makes it a known feature (in tables)
-    name        = "trep",
-    description = "tex replacements",
-}
+registerotffeature { name = "tlig", description = "tex ligatures" }
+registerotffeature { name = "tquo", description = "tex quotes" }
+registerotffeature { name = "trep", description = "tex replacements" }

 -- some day this will be moved to font-imp-scripts.lua

diff --git a/tex/context/base/mkiv/font-ini.lua b/tex/context/base/mkiv/font-ini.lua
index 8bab6d902..201cc69f4 100644
--- a/tex/context/base/mkiv/font-ini.lua
+++ b/tex/context/base/mkiv/font-ini.lua
@@ -6,9 +6,7 @@ if not modules then modules = { } end modules ['font-ini'] = {
     license   = "see context related readme files"
 }

---[[ldx--
-Not much is happening here.
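The tlig/tquo tables above map a target character onto the input sequence it is built from (endash from two hyphens, and so on). A tiny sketch of applying such data to a list of codepoints; the real feature handler rewrites node lists, not arrays, so this is purely illustrative:

```lua
-- Illustrative only: greedily replace known input sequences by their target,
-- trying longer sequences first (so emdash wins over endash).
local tligdata = {
    [0x2013] = { 0x002D, 0x002D },         -- endash <- hyphen hyphen
    [0x2014] = { 0x002D, 0x002D, 0x002D }, -- emdash <- hyphen hyphen hyphen
}

local function applyligatures(codes, data)
    local map = { }
    for target, sequence in pairs(data) do
        map[#map+1] = { target = target, sequence = sequence }
    end
    table.sort(map, function(a,b) return #a.sequence > #b.sequence end)
    local result, i = { }, 1
    while i <= #codes do
        local matched = false
        for _, m in ipairs(map) do
            local s, ok = m.sequence, true
            for j = 1, #s do
                if codes[i+j-1] ~= s[j] then ok = false break end
            end
            if ok then
                result[#result+1] = m.target
                i, matched = i + #s, true
                break
            end
        end
        if not matched then
            result[#result+1] = codes[i]
            i = i + 1
        end
    end
    return result
end

local r = applyligatures({ 0x41, 0x2D, 0x2D, 0x2D, 0x42 }, tligdata)
-- r is { 0x41, 0x2014, 0x42 }: "A---B" became "A<emdash>B"
```

Keying the table by target (rather than by sequence) matches the diff above and keeps one entry per produced character.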
---ldx]]--
+-- Not much is happening here.

 local allocate   = utilities.storage.allocate
 local sortedhash = table.sortedhash

diff --git a/tex/context/base/mkiv/font-log.lua b/tex/context/base/mkiv/font-log.lua
index 092b5a62e..96b5864fd 100644
--- a/tex/context/base/mkiv/font-log.lua
+++ b/tex/context/base/mkiv/font-log.lua
@@ -19,12 +19,9 @@ fonts.loggers = loggers
 local usedfonts   = utilities.storage.allocate()
 ----- loadedfonts = utilities.storage.allocate()

---[[ldx--
-The following functions are used for reporting about the fonts
-used. The message itself is not that useful in regular runs but since
-we now have several readers it may be handy to know what reader is
-used for which font.
---ldx]]--
+-- The following functions are used for reporting about the fonts used. The message
+-- itself is not that useful in regular runs but since we now have several readers
+-- it may be handy to know what reader is used for which font.

 function loggers.onetimemessage(font,char,message,reporter)
     local tfmdata = fonts.hashes.identifiers[font]

diff --git a/tex/context/base/mkiv/font-nod.lua b/tex/context/base/mkiv/font-nod.lua
index a7dcfd9b0..1e39784d9 100644
--- a/tex/context/base/mkiv/font-nod.lua
+++ b/tex/context/base/mkiv/font-nod.lua
@@ -7,11 +7,6 @@ if not modules then modules = { } end modules ['font-nod'] = {
     license   = "see context related readme files"
 }

---[[ldx--
-This is rather experimental. We need more control and some of this
-might become a runtime module instead. This module will be cleaned up!
---ldx]]--
-
 local utfchar = utf.char
 local concat, fastcopy = table.concat, table.fastcopy
 local match, rep = string.match, string.rep

diff --git a/tex/context/base/mkiv/font-one.lua b/tex/context/base/mkiv/font-one.lua
index 829f52ea0..25efc2a04 100644
--- a/tex/context/base/mkiv/font-one.lua
+++ b/tex/context/base/mkiv/font-one.lua
@@ -7,18 +7,16 @@ if not modules then modules = { } end modules ['font-one'] = {
     license   = "see context related readme files"
 }

---[[ldx--
-Some code may look a bit obscure but this has to do with the fact that we also use
-this code for testing and much code evolved in the transition from
-
-The following code still has traces of intermediate font support where we handled
-font encodings. Eventually font encoding went away but we kept some code around in
-other modules.
-
-This version implements a node mode approach so that users can also more easily
-add features.
---ldx]]--

+-- Some code may look a bit obscure but this has to do with the fact that we also
+-- use this code for testing and much code evolved in the transition from TFM to AFM
+-- to OTF.
+--
+-- The following code still has traces of intermediate font support where we handled
+-- font encodings. Eventually font encoding went away but we kept some code around
+-- in other modules.
+--
+-- This version implements a node mode approach so that users can also more easily
+-- add features.

 local fonts, logs, trackers, containers, resolvers =
       fonts, logs, trackers, containers, resolvers

@@ -71,15 +69,13 @@ local overloads = fonts.mappings.overloads

 local applyruntimefixes = fonts.treatments and fonts.treatments.applyfixes

---[[ldx--
-We cache files. Caching is taken care of in the loader. We cheat a bit by adding
-ligatures and kern information to the afm derived data. That way we can set them faster
-when defining a font.
- -We still keep the loading two phased: first we load the data in a traditional
-fashion and later we transform it to sequences. Then we apply some methods also
-used in opentype fonts (like
-
-These helpers extend the basic table with extra ligatures, texligatures
-and extra kerns. This saves quite some lookups later.
---ldx]]--

+-- These helpers extend the basic table with extra ligatures, texligatures and extra
+-- kerns. This saves quite some lookups later.

 local addthem = function(rawdata,ligatures)
     if ligatures then
@@ -349,17 +343,14 @@ local function enhance_add_ligatures(rawdata)
     addthem(rawdata,afm.helpdata.ligatures)
 end

---[[ldx--
-We keep the extra kerns in separate kerning tables so that we can use
-them selectively.
---ldx]]--
-
--- This is rather old code (from the beginning when we had only tfm). If
--- we unify the afm data (now we have names all over the place) then
--- we can use shcodes but there will be many more looping then. But we
--- could get rid of the tables in char-cmp then. Als, in the generic version
--- we don't use the character database. (Ok, we can have a context specific
--- variant).

+-- We keep the extra kerns in separate kerning tables so that we can use them
+-- selectively.
+--
+-- This is rather old code (from the beginning when we had only tfm). If we unify
+-- the afm data (now we have names all over the place) then we can use shcodes but
+-- there will be many more looping then. But we could get rid of the tables in
+-- char-cmp then. Also, in the generic version we don't use the character database.
+-- (Ok, we can have a context specific variant).

 local function enhance_add_extra_kerns(rawdata) -- using shcodes is not robust here
     local descriptions = rawdata.descriptions
@@ -440,9 +431,7 @@ local function enhance_add_extra_kerns(rawdata) -- using shcodes is not robust h
     do_it_copy(afm.helpdata.rightkerned)
 end

---[[ldx--
-The copying routine looks messy (and is indeed a bit messy).
---ldx]]--
+-- The copying routine looks messy (and is indeed a bit messy).

 local function adddimensions(data) -- we need to normalize afm to otf i.e. indexed table instead of name
     if data then
@@ -619,11 +608,9 @@ end
     return nil
 end

---[[ldx--
-Originally we had features kind of hard coded for
-
-As soon as we could intercept the
-
-We have the usual two modes and related features initializers and processors.
---ldx]]--

+-- We have the usual two modes and related features initializers and processors.

 registerafmfeature {
     name = "mode",

diff --git a/tex/context/base/mkiv/font-onr.lua b/tex/context/base/mkiv/font-onr.lua
index 9e5a012bd..6234742a3 100644
--- a/tex/context/base/mkiv/font-onr.lua
+++ b/tex/context/base/mkiv/font-onr.lua
@@ -7,18 +7,16 @@ if not modules then modules = { } end modules ['font-onr'] = {
     license   = "see context related readme files"
 }

---[[ldx--
-Some code may look a bit obscure but this has to do with the fact that we also use
-this code for testing and much code evolved in the transition from
-
-The following code still has traces of intermediate font support where we handled
-font encodings. Eventually font encoding went away but we kept some code around in
-other modules.
-
-This version implements a node mode approach so that users can also more easily
-add features.
---ldx]]--

+-- Some code may look a bit obscure but this has to do with the fact that we also
+-- use this code for testing and much code evolved in the transition from TFM to AFM
+-- to OTF.
+--
+-- The following code still has traces of intermediate font support where we handled
+-- font encodings. Eventually font encoding went away but we kept some code around
+-- in other modules.
+--
+-- This version implements a node mode approach so that users can also more easily
+-- add features.

 local fonts, logs, trackers, resolvers = fonts, logs, trackers, resolvers

@@ -44,12 +42,9 @@ afm.readers = readers

 afm.version = 1.513 -- incrementing this number one up will force a re-cache

---[[ldx--
-We start with the basic reader which we give a name similar to the built in
-
-We use a new (unfinished) pfb loader but I see no differences between the old
-and new vectors (we actually had one bad vector with the old loader).
---ldx]]--

+-- We start with the basic reader which we give a name similar to the built in TFM
+-- and OTF reader. We use a PFB loader but I see no differences between the old and
+-- new vectors (we actually had one bad vector with the old loader).

 local get_indexes, get_shapes

@@ -305,11 +300,10 @@ do

 end

---[[ldx--
-We start with the basic reader which we give a name similar to the built in
-
-Analyzers run per script and/or language and are needed in order to
-process features right.
---ldx]]--

+-- Analyzers run per script and/or language and are needed in order to process
+-- features right.

 local setstate = nuts.setstate
 local getstate = nuts.getstate

diff --git a/tex/context/base/mkiv/font-ots.lua b/tex/context/base/mkiv/font-ots.lua
index 6d7c5fb25..48f85c365 100644
--- a/tex/context/base/mkiv/font-ots.lua
+++ b/tex/context/base/mkiv/font-ots.lua
@@ -7,92 +7,90 @@ if not modules then modules = { } end modules ['font-ots'] = { -- sequences
     license   = "see context related readme files",
 }

---[[ldx--
- -This module is a bit more split up that I'd like but since we also want to test
-with plain
The specification of OpenType is (or at least decades ago was) kind of vague. -Apart from a lack of a proper free specifications there's also the problem that -Microsoft and Adobe may have their own interpretation of how and in what order to -apply features. In general the Microsoft website has more detailed specifications -and is a better reference. There is also some information in the FontForge help -files. In the end we rely most on the Microsoft specification.
- -Because there is so much possible, fonts might contain bugs and/or be made to -work with certain rederers. These may evolve over time which may have the side -effect that suddenly fonts behave differently. We don't want to catch all font -issues.
- -After a lot of experiments (mostly by Taco, me and Idris) the first implementation
-was already quite useful. When it did most of what we wanted, a more optimized version
-evolved. Of course all errors are mine and of course the code can be improved. There
-are quite some optimizations going on here and processing speed is currently quite
-acceptable and has been improved over time. Many complex scripts are not yet supported
-yet, but I will look into them as soon as
The specification leaves room for interpretation. In case of doubt the Microsoft -implementation is the reference as it is the most complete one. As they deal with -lots of scripts and fonts, Kai and Ivo did a lot of testing of the generic code and -their suggestions help improve the code. I'm aware that not all border cases can be -taken care of, unless we accept excessive runtime, and even then the interference -with other mechanisms (like hyphenation) are not trivial.
- -Especially discretionary handling has been improved much by Kai Eigner who uses complex -(latin) fonts. The current implementation is a compromis between his patches and my code -and in the meantime performance is quite ok. We cannot check all border cases without -compromising speed but so far we're okay. Given good test cases we can probably improve -it here and there. Especially chain lookups are non trivial with discretionaries but -things got much better over time thanks to Kai.
- -Glyphs are indexed not by unicode but in their own way. This is because there is no
-relationship with unicode at all, apart from the fact that a font might cover certain
-ranges of characters. One character can have multiple shapes. However, at the
-
The initial data table is rather close to the open type specification and also not
-that different from the one produced by
This module is sparsely documented because it is has been a moving target. The -table format of the reader changed a bit over time and we experiment a lot with -different methods for supporting features. By now the structures are quite stable
- -Incrementing the version number will force a re-cache. We jump the number by one -when there's a fix in the reader or processing code that can result in different -results.
- -This code is also used outside context but in context it has to work with other -mechanisms. Both put some constraints on the code here.
- ---ldx]]-- - --- Remark: We assume that cursives don't cross discretionaries which is okay because it --- is only used in semitic scripts. +-- I need to check the description at the microsoft site ... it has been improved so +-- maybe there are some interesting details there. Most below is based on old and +-- incomplete documentation and involved quite a bit of guesswork (checking with the +-- abstract uniscribe of those days. But changing things is tricky! +-- +-- This module is a bit more split up that I'd like but since we also want to test +-- with plain TeX it has to be so. This module is part of ConTeXt and discussion +-- about improvements and functionality mostly happens on the ConTeXt mailing list. +-- +-- The specification of OpenType is (or at least decades ago was) kind of vague. +-- Apart from a lack of a proper free specifications there's also the problem that +-- Microsoft and Adobe may have their own interpretation of how and in what order to +-- apply features. In general the Microsoft website has more detailed specifications +-- and is a better reference. There is also some information in the FontForge help +-- files. In the end we rely most on the Microsoft specification. +-- +-- Because there is so much possible, fonts might contain bugs and/or be made to +-- work with certain rederers. These may evolve over time which may have the side +-- effect that suddenly fonts behave differently. We don't want to catch all font +-- issues. +-- +-- After a lot of experiments (mostly by Taco, me and Idris) the first +-- implementation was already quite useful. When it did most of what we wanted, a +-- more optimized version evolved. Of course all errors are mine and of course the +-- code can be improved. There are quite some optimizations going on here and +-- processing speed is currently quite acceptable and has been improved over time. 
+-- Many complex scripts are not yet supported yet, but I will look into them as soon +-- as ConTeXt users ask for it. +-- +-- The specification leaves room for interpretation. In case of doubt the Microsoft +-- implementation is the reference as it is the most complete one. As they deal with +-- lots of scripts and fonts, Kai and Ivo did a lot of testing of the generic code +-- and their suggestions help improve the code. I'm aware that not all border cases +-- can be taken care of, unless we accept excessive runtime, and even then the +-- interference with other mechanisms (like hyphenation) are not trivial. +-- +-- Especially discretionary handling has been improved much by Kai Eigner who uses +-- complex (latin) fonts. The current implementation is a compromis between his +-- patches and my code and in the meantime performance is quite ok. We cannot check +-- all border cases without compromising speed but so far we're okay. Given good +-- test cases we can probably improve it here and there. Especially chain lookups +-- are non trivial with discretionaries but things got much better over time thanks +-- to Kai. +-- +-- Glyphs are indexed not by unicode but in their own way. This is because there is +-- no relationship with unicode at all, apart from the fact that a font might cover +-- certain ranges of characters. One character can have multiple shapes. However, at +-- the TeX end we use unicode so and all extra glyphs are mapped into a private +-- space. This is needed because we need to access them and TeX has to include then +-- in the output eventually. +-- +-- The initial data table is rather close to the open type specification and also +-- not that different from the one produced by Fontforge but we uses hashes instead. +-- In ConTeXt that table is packed (similar tables are shared) and cached on disk so +-- that successive runs can use the optimized table (after loading the table is +-- unpacked). 
+-- +-- This module is sparsely documented because it is has been a moving target. The +-- table format of the reader changed a bit over time and we experiment a lot with +-- different methods for supporting features. By now the structures are quite stable +-- +-- Incrementing the version number will force a re-cache. We jump the number by one +-- when there's a fix in the reader or processing code that can result in different +-- results. +-- +-- This code is also used outside ConTeXt but in ConTeXt it has to work with other +-- mechanisms. Both put some constraints on the code here. +-- +-- Remark: We assume that cursives don't cross discretionaries which is okay because +-- it is only used in semitic scripts. -- -- Remark: We assume that marks precede base characters. -- --- Remark: When complex ligatures extend into discs nodes we can get side effects. Normally --- this doesn't happen; ff\d{l}{l}{l} in lm works but ff\d{f}{f}{f}. +-- Remark: When complex ligatures extend into discs nodes we can get side effects. +-- Normally this doesn't happen; ff\d{l}{l}{l} in lm works but ff\d{f}{f}{f}. -- -- Todo: check if we copy attributes to disc nodes if needed. -- --- Todo: it would be nice if we could get rid of components. In other places we can use --- the unicode properties. We can just keep a lua table. +-- Todo: it would be nice if we could get rid of components. In other places we can +-- use the unicode properties. We can just keep a lua table. -- --- Remark: We do some disc juggling where we need to keep in mind that the pre, post and --- replace fields can have prev pointers to a nesting node ... I wonder if that is still --- needed. +-- Remark: We do some disc juggling where we need to keep in mind that the pre, post +-- and replace fields can have prev pointers to a nesting node ... I wonder if that +-- is still needed. 
--
-- Remark: This is not possible:
--
@@ -1038,10 +1036,8 @@ function handlers.gpos_pair(head,start,dataset,sequence,kerns,rlmode,skiphash,st
     end
 end

---[[ldx--
-We get hits on a mark, but we're not sure if it has to be applied so
-we need to explicitly test for basechar, baselig and basemark entries.
---ldx]]--
+-- We get hits on a mark, but we're not sure if it has to be applied so we need
+-- to explicitly test for basechar, baselig and basemark entries.

 function handlers.gpos_mark2base(head,start,dataset,sequence,markanchors,rlmode,skiphash)
     local markchar = getchar(start)
@@ -1236,10 +1232,8 @@ function handlers.gpos_cursive(head,start,dataset,sequence,exitanchors,rlmode,sk
     return head, start, false
 end

---[[ldx--
-I will implement multiple chain replacements once I run into a font that uses
-it. It's not that complex to handle.
---ldx]]-- +-- I will implement multiple chain replacements once I run into a font that uses it. +-- It's not that complex to handle. local chainprocs = { } @@ -1292,29 +1286,22 @@ end chainprocs.reversesub = reversesub ---[[ldx-- -This chain stuff is somewhat tricky since we can have a sequence of actions to be -applied: single, alternate, multiple or ligature where ligature can be an invalid -one in the sense that it will replace multiple by one but not neccessary one that -looks like the combination (i.e. it is the counterpart of multiple then). For -example, the following is valid:
- -Therefore we we don't really do the replacement here already unless we have the -single lookup case. The efficiency of the replacements can be improved by deleting -as less as needed but that would also make the code even more messy.
---ldx]]-- - ---[[ldx-- -Here we replace start by a single variant.
---ldx]]-- - --- To be done (example needed): what if > 1 steps - --- this is messy: do we need this disc checking also in alternates? +-- This chain stuff is somewhat tricky since we can have a sequence of actions to be +-- applied: single, alternate, multiple or ligature where ligature can be an invalid +-- one in the sense that it will replace multiple by one but not necessarily one that +-- looks like the combination (i.e. it is the counterpart of multiple then). For +-- example, the following is valid: +-- +-- xxxabcdexxx [single a->A][multiple b->BCD][ligature cde->E] xxxABCDExxx +-- +-- Therefore we don't really do the replacement here already unless we have the +-- single lookup case. The efficiency of the replacements can be improved by +-- deleting as little as needed but that would also make the code even more messy. +-- +-- Here we replace start by a single variant. +-- +-- To be done: what if > 1 steps (example needed) +-- This is messy: do we need this disc checking also in alternates? local function reportzerosteps(dataset,sequence) logwarning("%s: no steps",cref(dataset,sequence)) @@ -1390,9 +1377,7 @@ function chainprocs.gsub_single(head,start,stop,dataset,sequence,currentlookup,r return head, start, false end ---[[ldx-- -Here we replace start by new glyph. First we delete the rest of the match.
---ldx]]-- +-- Here we replace start by new glyph. First we delete the rest of the match. -- char_1 mark_1 -> char_x mark_1 (ignore marks) -- char_1 mark_1 -> char_x @@ -1444,9 +1429,7 @@ function chainprocs.gsub_alternate(head,start,stop,dataset,sequence,currentlooku return head, start, false end ---[[ldx-- -Here we replace start by a sequence of new glyphs.
---ldx]]-- +-- Here we replace start by a sequence of new glyphs. function chainprocs.gsub_multiple(head,start,stop,dataset,sequence,currentlookup,rlmode,skiphash,chainindex) local mapping = currentlookup.mapping @@ -1470,11 +1453,9 @@ function chainprocs.gsub_multiple(head,start,stop,dataset,sequence,currentlookup return head, start, false end ---[[ldx-- -When we replace ligatures we use a helper that handles the marks. I might change -this function (move code inline and handle the marks by a separate function). We -assume rather stupid ligatures (no complex disc nodes).
---ldx]]-- +-- When we replace ligatures we use a helper that handles the marks. I might change +-- this function (move code inline and handle the marks by a separate function). We +-- assume rather stupid ligatures (no complex disc nodes). -- compare to handlers.gsub_ligature which is more complex ... why @@ -2532,7 +2513,7 @@ local function handle_contextchain(head,start,dataset,sequence,contexts,rlmode,s -- fonts can have many steps (each doing one check) or many contexts -- todo: make a per-char cache so that we have small contexts (when we have a context - -- n == 1 and otherwise it can be more so we can even distingish n == 1 or more) + -- n == 1 and otherwise it can be more so we can even distinguish n == 1 or more) local nofcontexts = contexts.n -- #contexts diff --git a/tex/context/base/mkiv/font-syn.lua b/tex/context/base/mkiv/font-syn.lua index e80d57f41..9fba3d8d4 100644 --- a/tex/context/base/mkiv/font-syn.lua +++ b/tex/context/base/mkiv/font-syn.lua @@ -56,10 +56,8 @@ local trace_rejections = false trackers.register("fonts.rejections", fu local report_names = logs.reporter("fonts","names") ---[[ldx-- -This module implements a name to filename resolver. Names are resolved -using a table that has keys filtered from the font related files.
---ldx]]-- +-- This module implements a name to filename resolver. Names are resolved using a +-- table that has keys filtered from the font related files. fonts = fonts or { } -- also used elsewhere @@ -88,10 +86,6 @@ local autoreload = true directives.register("fonts.autoreload", function(v) autoreload = toboolean(v) end) directives.register("fonts.usesystemfonts", function(v) usesystemfonts = toboolean(v) end) ---[[ldx-- -A few helpers.
---ldx]]-- - -- -- what to do with these -- -- -- -- thin -> thin @@ -305,10 +299,8 @@ local function analyzespec(somename) end end ---[[ldx-- -It would make sense to implement the filters in the related modules, -but to keep the overview, we define them here.
---ldx]]-- +-- It would make sense to implement the filters in the related modules, but to keep +-- the overview, we define them here. filters.afm = fonts.handlers.afm.readers.getinfo filters.otf = fonts.handlers.otf.readers.getinfo @@ -412,11 +404,9 @@ filters.ttc = filters.otf -- end -- end ---[[ldx-- -The scanner loops over the filters using the information stored in -the file databases. Watch how we check not only for the names, but also -for combination with the weight of a font.
---ldx]]-- +-- The scanner loops over the filters using the information stored in the file +-- databases. Watch how we check not only for the names, but also for combination +-- with the weight of a font. filters.list = { "otf", "ttf", "ttc", "afm", -- no longer dfont support (for now) @@ -1402,11 +1392,8 @@ local function is_reloaded() end end ---[[ldx-- -The resolver also checks if the cached names are loaded. Being clever -here is for testing purposes only (it deals with names prefixed by an -encoding name).
---ldx]]-- +-- The resolver also checks if the cached names are loaded. Being clever here is for +-- testing purposes only (it deals with names prefixed by an encoding name). local function fuzzy(mapping,sorted,name,sub) -- no need for reverse sorted here local condensed = gsub(name,"[^%a%d]","") diff --git a/tex/context/base/mkiv/font-tfm.lua b/tex/context/base/mkiv/font-tfm.lua index 945421a42..81f94532b 100644 --- a/tex/context/base/mkiv/font-tfm.lua +++ b/tex/context/base/mkiv/font-tfm.lua @@ -50,21 +50,18 @@ constructors.resolvevirtualtoo = false -- wil be set in font-ctx.lua fonts.formats.tfm = "type1" -- we need to have at least a value here fonts.formats.ofm = "type1" -- we need to have at least a value here ---[[ldx-- -The next function encapsulates the standard
We provide a simple treatment mechanism (mostly because I want to demonstrate -something in a manual). It's one of the few places where an lfg file gets loaded -outside the goodies manager.
---ldx]]-- +-- We provide a simple treatment mechanism (mostly because I want to demonstrate +-- something in a manual). It's one of the few places where an lfg file gets loaded +-- outside the goodies manager. local treatments = fonts.treatments or { } fonts.treatments = treatments diff --git a/tex/context/base/mkiv/font-vir.lua b/tex/context/base/mkiv/font-vir.lua index c3071cac0..6142ddafd 100644 --- a/tex/context/base/mkiv/font-vir.lua +++ b/tex/context/base/mkiv/font-vir.lua @@ -6,9 +6,8 @@ if not modules then modules = { } end modules ['font-vir'] = { license = "see context related readme files" } ---[[ldx-- -This is very experimental code! Not yet adapted to recent changes. This will change.
---ldx]]-- +-- This is very experimental code! Not yet adapted to recent changes. This will +-- change. Actually we moved on. -- present in the backend but unspecified: -- @@ -25,10 +24,8 @@ local constructors = fonts.constructors local vf = constructors.handlers.vf vf.version = 1.000 -- same as tfm ---[[ldx-- -We overload the
Hyphenating
Callbacks are the real asset of
When you (temporarily) want to install a callback function, and after a -while wants to revert to the original one, you can use the following two -functions. This only works for non-frozen ones.
---ldx]]-- +-- When you (temporarily) want to install a callback function, and after a while +-- want to revert to the original one, you can use the following two functions. +-- This only works for non-frozen ones. local trace_callbacks = false trackers.register("system.callbacks", function(v) trace_callbacks = v end) local trace_calls = false -- only used when analyzing performance and initializations @@ -47,13 +43,12 @@ local list = callbacks.list local permit_overloads = false local block_overloads = false ---[[ldx-- -By now most callbacks are frozen and most provide a way to plug in your own code. For instance -all node list handlers provide before/after namespaces and the file handling code can be extended -by adding schemes and if needed I can add more hooks. So there is no real need to overload a core -callback function. It might be ok for quick and dirty testing but anyway you're on your own if -you permanently overload callback functions.
---ldx]]-- +-- By now most callbacks are frozen and most provide a way to plug in your own code. +-- For instance all node list handlers provide before/after namespaces and the file +-- handling code can be extended by adding schemes and if needed I can add more +-- hooks. So there is no real need to overload a core callback function. It might be +-- ok for quick and dirty testing but anyway you're on your own if you permanently +-- overload callback functions. -- This might become a configuration file only option when it gets abused too much. @@ -279,65 +274,50 @@ end) -- callbacks.freeze("read_.*_file","reading file") -- callbacks.freeze("open_.*_file","opening file") ---[[ldx-- -The simple case is to remove the callback:
- -
-callbacks.push('linebreak_filter')
-... some actions ...
-callbacks.pop('linebreak_filter')
-
-
-Often, in such case, another callback or a macro call will pop -the original.
- -In practice one will install a new handler, like in:
- -
-callbacks.push('linebreak_filter', function(...)
- return something_done(...)
-end)
-
-
-Even more interesting is:
- -
-callbacks.push('linebreak_filter', function(...)
- callbacks.pop('linebreak_filter')
- return something_done(...)
-end)
-
-
-This does a one-shot.
---ldx]]-- - ---[[ldx-- -Callbacks may result in
At some point in the development we did some tests with counting -nodes (in this case 121049).
- -setstepmul | seconds | megabytes |
200 | 24.0 | 80.5 |
175 | 21.0 | 78.2 |
150 | 22.0 | 74.6 |
160 | 22.0 | 74.6 |
165 | 21.0 | 77.6 |
125 | 21.5 | 89.2 |
100 | 21.5 | 88.4 |
The following code is kind of experimental. In the documents
-that describe the development of
We cannot load anything yet. However what we will do us reserve a few tables. -These can be used for runtime user data or third party modules and will not be -cluttered by macro package code.
---ldx]]-- +-- We cannot load anything yet. However what we will do is reserve a few tables. +-- These can be used for runtime user data or third party modules and will not be +-- cluttered by macro package code. userdata = userdata or { } -- for users (e.g. functions etc) thirddata = thirddata or { } -- only for third party modules diff --git a/tex/context/base/mkiv/lxml-aux.lua b/tex/context/base/mkiv/lxml-aux.lua index fc17371e5..217f81c13 100644 --- a/tex/context/base/mkiv/lxml-aux.lua +++ b/tex/context/base/mkiv/lxml-aux.lua @@ -110,11 +110,7 @@ function xml.processattributes(root,pattern,handle) return collected end ---[[ldx-- -The following functions collect elements and texts.
---ldx]]-- - --- are these still needed -> lxml-cmp.lua +-- The following functions collect elements and texts. function xml.collect(root, pattern) return xmlapplylpath(root,pattern) @@ -153,9 +149,7 @@ function xml.collect_tags(root, pattern, nonamespace) end end ---[[ldx-- -We've now arrived at the functions that manipulate the tree.
---ldx]]-- +-- We've now arrived at the functions that manipulate the tree. local no_root = { no_root = true } @@ -780,9 +774,7 @@ function xml.remapname(root, pattern, newtg, newns, newrn) end end ---[[ldx-- -Helper (for q2p).
---ldx]]-- +-- Helper (for q2p). function xml.cdatatotext(e) local dt = e.dt @@ -879,9 +871,7 @@ end -- xml.addentitiesdoctype(x,"hexadecimal") -- print(x) ---[[ldx-- -Here are a few synonyms.
---ldx]]-- +-- Here are a few synonyms: xml.all = xml.each xml.insert = xml.insertafter diff --git a/tex/context/base/mkiv/lxml-ent.lua b/tex/context/base/mkiv/lxml-ent.lua index df80a7985..1d6d058b6 100644 --- a/tex/context/base/mkiv/lxml-ent.lua +++ b/tex/context/base/mkiv/lxml-ent.lua @@ -10,14 +10,10 @@ local next = next local byte, format = string.byte, string.format local setmetatableindex = table.setmetatableindex ---[[ldx-- -We provide (at least here) two entity handlers. The more extensive
-resolver consults a hash first, tries to convert to
We do things different now but it's still somewhat experimental
---ldx]]-- +-- We provide (at least here) two entity handlers. The more extensive resolver +-- consults a hash first, tries to convert to UTF next, and finally calls a handler +-- when defined. When this all fails, the original entity is returned. We do things +-- differently now but it's still somewhat experimental. local trace_entities = false trackers.register("xml.entities", function(v) trace_entities = v end) diff --git a/tex/context/base/mkiv/lxml-lpt.lua b/tex/context/base/mkiv/lxml-lpt.lua index 78a9fca2e..d242b07de 100644 --- a/tex/context/base/mkiv/lxml-lpt.lua +++ b/tex/context/base/mkiv/lxml-lpt.lua @@ -20,28 +20,21 @@ local formatters = string.formatters -- no need (yet) as paths are cached anyway -- beware, this is not xpath ... e.g. position is different (currently) and -- we have reverse-sibling as reversed preceding sibling ---[[ldx-- -This module can be used stand alone but also inside
If I can get in the mood I will make a variant that is XSLT compliant -but I wonder if it makes sense.
---ldx]]-- - ---[[ldx-- -Expecially the lpath code is experimental, we will support some of xpath, but
-only things that make sense for us; as compensation it is possible to hook in your
-own functions. Apart from preprocessing content for
We've now arrived at an interesting part: accessing the tree using a subset
-of
This is the main filter function. It returns whatever is asked for.
---ldx]]-- + +-- This is the main filter function. It returns whatever is asked for. function xml.filter(root,pattern) -- no longer funny attribute handling here return applylpath(root,pattern) @@ -1525,21 +1515,16 @@ expressions.tag = function(e,n) -- only tg end end ---[[ldx-- -Often using an iterators looks nicer in the code than passing handler
-functions. The
The following helper functions best belong to the
The parser used here is inspired by the variant discussed in the lua book, but
-handles comment and processing instructions, has a different structure, provides
-parent access; a first version used different trickery but was less optimized to we
-went this route. First we had a find based parser, now we have an
First a hack to enable namespace resolving. A namespace is characterized by
-a
The next function associates a namespace prefix with an
The next function also registers a namespace, but this time we map a
-given namespace prefix onto a registered one, using the given
-
Next we provide a way to turn an
A namespace in an element can be remapped onto the registered
-one efficiently by using the
This version uses
Next comes the parser. The rather messy doctype definition comes in many
-disguises so it is no surprice that later on have to dedicate quite some
-
The code may look a bit complex but this is mostly due to the fact that we -resolve namespaces and attach metatables. There is only one public function:
- -An optional second boolean argument tells this function not to create a root -element.
- -Valid entities are:
- -Packaging data in an xml like table is done with the following -function. Maybe it will go away (when not used).
---ldx]]-- +-- Packaging data in an xml like table is done with the following function. Maybe it +-- will go away (when not used). function xml.is_valid(root) return root and root.dt and root.dt[1] and type(root.dt[1]) == "table" and not root.dt[1].er @@ -1354,11 +1326,8 @@ end xml.errorhandler = report_xml ---[[ldx-- -We cannot load an
When we inject new elements, we need to convert strings to -valid trees, which is what the next function does.
---ldx]]-- +-- When we inject new elements, we need to convert strings to valid trees, which is +-- what the next function does. local no_root = { no_root = true } @@ -1398,11 +1365,9 @@ function xml.toxml(data) end end ---[[ldx-- -For copying a tree we use a dedicated function instead of the -generic table copier. Since we know what we're dealing with we -can speed up things a bit. The second argument is not to be used!
---ldx]]-- +-- For copying a tree we use a dedicated function instead of the generic table +-- copier. Since we know what we're dealing with we can speed up things a bit. The +-- second argument is not to be used! -- local function copy(old) -- if old then @@ -1466,13 +1431,10 @@ end xml.copy = copy ---[[ldx-- -In
At the cost of some 25% runtime overhead you can first convert the tree to a string -and then handle the lot.
---ldx]]-- +-- At the cost of some 25% runtime overhead you can first convert the tree to a +-- string and then handle the lot. -- new experimental reorganized serialize @@ -1711,21 +1671,18 @@ newhandlers { } } ---[[ldx-- -How you deal with saving data depends on your preferences. For a 40 MB database -file the timing on a 2.3 Core Duo are as follows (time in seconds):
- -Beware, these were timing with the old routine but measurements will not be that -much different I guess.
---ldx]]-- +-- How you deal with saving data depends on your preferences. For a 40 MB database +-- file the timings on a 2.3 Core Duo are as follows (time in seconds): +-- +-- 1.3 : load data from file to string +-- 6.1 : convert string into tree +-- 5.3 : saving in file using xmlsave +-- 6.8 : converting to string using xml.tostring +-- 3.6 : saving converted string in file +-- +-- Beware, these were timings with the old routine but measurements will not be that +-- much different I guess. -- maybe this will move to lxml-xml @@ -1827,10 +1784,8 @@ xml.newhandlers = newhandlers xml.serialize = serialize xml.tostring = xmltostring ---[[ldx-- -The next function operated on the content only and needs a handle function -that accepts a string.
---ldx]]-- +-- The next function operates on the content only and needs a handle function that +-- accepts a string. local function xmlstring(e,handle) if not handle or (e.special and e.tg ~= "@rt@") then @@ -1849,9 +1804,7 @@ end xml.string = xmlstring ---[[ldx-- -A few helpers:
---ldx]]-- +-- A few helpers: --~ xmlsetproperty(root,"settings",settings) @@ -1899,11 +1852,9 @@ function xml.name(root) end end ---[[ldx-- -The next helper erases an element but keeps the table as it is, -and since empty strings are not serialized (effectively) it does -not harm. Copying the table would take more time. Usage:
---ldx]]-- +-- The next helper erases an element but keeps the table as it is, and since empty +-- strings are not serialized (effectively) it does not harm. Copying the table +-- would take more time. function xml.erase(dt,k) if dt then @@ -1915,13 +1866,9 @@ function xml.erase(dt,k) end end ---[[ldx-- -The next helper assigns a tree (or string). Usage:
- -The next helper assigns a tree (or string). Usage:
-Remapping mathematics alphabets.
---ldx]]-- - --- oldstyle: not really mathematics but happened to be part of --- the mathematics fonts in cmr --- --- persian: we will also provide mappers for other --- scripts - --- todo: alphabets namespace --- maybe: script/scriptscript dynamic, - --- superscripped primes get unscripted ! - --- to be looked into once the fonts are ready (will become font --- goodie): --- --- (U+2202,U+1D715) : upright --- (U+2202,U+1D715) : italic --- (U+2202,U+1D715) : upright --- --- plus add them to the regular vectors below so that they honor \it etc +-- persian: we will also provide mappers for other scripts +-- todo : alphabets namespace +-- maybe : script/scriptscript dynamic, +-- check : (U+2202,U+1D715) : upright +-- (U+2202,U+1D715) : italic +-- (U+2202,U+1D715) : upright +-- add them to the regular vectors below so that they honor \it etc local type, next = type, next local merged, sortedhash = table.merged, table.sortedhash diff --git a/tex/context/base/mkiv/meta-fun.lua b/tex/context/base/mkiv/meta-fun.lua index ddbbd9a52..aa388b0ca 100644 --- a/tex/context/base/mkiv/meta-fun.lua +++ b/tex/context/base/mkiv/meta-fun.lua @@ -13,15 +13,18 @@ local format, load, type = string.format, load, type local context = context local metapost = metapost -metapost.metafun = metapost.metafun or { } -local metafun = metapost.metafun +local metafun = metapost.metafun or { } +metapost.metafun = metafun function metafun.topath(t,connector) context("(") if #t > 0 then + if not connector then + connector = ".." + end for i=1,#t do if i > 1 then - context(connector or "..") + context(connector) end local ti = t[i] if type(ti) == "string" then @@ -39,12 +42,15 @@ end function metafun.interpolate(f,b,e,s,c) local done = false context("(") - for i=b,e,(e-b)/s do - local d = load(format("return function(x) return %s end",f)) - if d then - d = d() + local d = load(format("return function(x) return %s end",f)) + if d then + d = d() + if not c then + c = "..." 
+ end + for i=b,e,(e-b)/s do if done then - context(c or "...") + context(c) else done = true end diff --git a/tex/context/base/mkiv/mlib-fio.lua b/tex/context/base/mkiv/mlib-fio.lua index 51c88eb22..39a709505 100644 --- a/tex/context/base/mkiv/mlib-fio.lua +++ b/tex/context/base/mkiv/mlib-fio.lua @@ -54,8 +54,18 @@ local function validftype(ftype) end end +local remapped = { + -- We don't yet have an interface for adding more here but when needed + -- there will be one. + ["hatching.mp"] = "mp-remapped-hatching.mp", + ["boxes.mp"] = "mp-remapped-boxes.mp", + ["hatching"] = "mp-remapped-hatching.mp", + ["boxes"] = "mp-remapped-boxes.mp", +} + finders.file = function(specification,name,mode,ftype) - return resolvers.findfile(name,validftype(ftype)) + local usedname = remapped[name] or name + return resolvers.findfile(usedname,validftype(ftype)) end local function i_finder(name,mode,ftype) -- fake message for mpost.map and metafun.mpvi diff --git a/tex/context/base/mkiv/mlib-run.lua b/tex/context/base/mkiv/mlib-run.lua index 602d6f36c..82426668f 100644 --- a/tex/context/base/mkiv/mlib-run.lua +++ b/tex/context/base/mkiv/mlib-run.lua @@ -6,28 +6,12 @@ if not modules then modules = { } end modules ['mlib-run'] = { license = "see context related readme files", } --- cmyk -> done, native --- spot -> done, but needs reworking (simpler) --- multitone -> --- shade -> partly done, todo: cm --- figure -> done --- hyperlink -> low priority, easy - --- new * run --- or --- new * execute^1 * finish - --- a*[b,c] == b + a * (c-b) - ---[[ldx-- -The directional helpers and pen analysis are more or less translated from the
-
Most of the code that had accumulated here is now separated in modules.
---ldx]]-- - --- I need to clean up this module as it's a bit of a mess now. The latest luatex --- has most tables but we have a few more in luametatex. Also, some are different --- between these engines. We started out with hardcoded tables, that then ended --- up as comments and are now gone (as they differ per engine anyway). +-- Most of the code that had accumulated here is now separated in modules. local next, type, tostring = next, type, tostring local gsub = string.gsub local concat, remove = table.concat, table.remove local sortedhash, sortedkeys, swapped = table.sortedhash, table.sortedkeys, table.swapped ---[[ldx-- -Access to nodes is what gives
When manipulating node lists in
First of all, we noticed that the bottleneck is more with excessive callbacks
-(some gets called very often) and the conversion from and to
This resulted in two special situations in passing nodes back to
Insertion is handled (at least in
When we collapse (something that we only do when really needed), we also -ignore the empty nodes. [This is obsolete!]
---ldx]]-- +-- Access to nodes is what gives LuaTeX its power. Here we implement a few helper +-- functions. These functions are rather optimized. +-- +-- When manipulating node lists in ConTeXt, we will remove nodes and insert new +-- ones. While node access was implemented, we did quite some experiments in order +-- to find out if manipulating nodes in Lua was feasible from the perspective of +-- performance. +-- +-- First of all, we noticed that the bottleneck is more with excessive callbacks +-- (some get called very often) and the conversion from and to TeX's +-- datastructures. However, at the Lua end, we found that inserting and deleting +-- nodes in a table could become a bottleneck. +-- +-- This resulted in two special situations in passing nodes back to TeX: a table +-- entry with value 'false' is ignored, and when instead of a table 'true' is +-- returned, the original table is used. +-- +-- Insertion is handled (at least in ConTeXt) as follows. When we need to insert a +-- node at a certain position, we replace the node at that position by a dummy node, +-- tagged 'inline' which itself has_attribute the original node and one or more new +-- nodes. Before we pass back the list we collapse the list. Of course collapsing +-- could be built into the TeX engine, but this is a not so natural extension. + +-- When we collapse (something that we only do when really needed), we also ignore +-- the empty nodes. [This is obsolete!] -- local gf = node.direct.getfield -- local n = table.setmetatableindex("number") diff --git a/tex/context/base/mkiv/node-res.lua b/tex/context/base/mkiv/node-res.lua index 5c669f9da..f2c6e97e9 100644 --- a/tex/context/base/mkiv/node-res.lua +++ b/tex/context/base/mkiv/node-res.lua @@ -9,11 +9,6 @@ if not modules then modules = { } end modules ['node-res'] = { local type, next = type, next local gmatch, format = string.gmatch, string.format ---[[ldx-- -The next function is not that much needed but in
This is rather experimental. We need more control and some of this -might become a runtime module instead. This module will be cleaned up!
---ldx]]-- +-- Some of the code here might become a runtime module instead. This old module will +-- be cleaned up anyway! local next = next local utfchar = utf.char diff --git a/tex/context/base/mkiv/pack-obj.lua b/tex/context/base/mkiv/pack-obj.lua index 445085776..dda828749 100644 --- a/tex/context/base/mkiv/pack-obj.lua +++ b/tex/context/base/mkiv/pack-obj.lua @@ -6,10 +6,8 @@ if not modules then modules = { } end modules ['pack-obj'] = { license = "see context related readme files" } ---[[ldx-- -We save object references in the main utility table. jobobjects are -reusable components.
---ldx]]-- +-- We save object references in the main utility table; job objects are reusable +-- components. local context = context local codeinjections = backends.codeinjections diff --git a/tex/context/base/mkiv/pack-rul.lua b/tex/context/base/mkiv/pack-rul.lua index 98117867c..20db028ec 100644 --- a/tex/context/base/mkiv/pack-rul.lua +++ b/tex/context/base/mkiv/pack-rul.lua @@ -7,10 +7,6 @@ if not modules then modules = { } end modules ['pack-rul'] = { license = "see context related readme files" } ---[[ldx-- -An explanation is given in the history document
This is a prelude to integrated bibliography support. This file just loads -bibtex files and converts them to xml so that the we access the content -in a convenient way. Actually handling the data takes place elsewhere.
---ldx]]-- - if not characters then dofile(resolvers.findfile("char-utf.lua")) dofile(resolvers.findfile("char-tex.lua")) diff --git a/tex/context/base/mkiv/publ-ini.lua b/tex/context/base/mkiv/publ-ini.lua index dac0ab441..aa96dd8bc 100644 --- a/tex/context/base/mkiv/publ-ini.lua +++ b/tex/context/base/mkiv/publ-ini.lua @@ -296,7 +296,8 @@ do local checksum = nil local username = file.addsuffix(file.robustname(formatters["%s-btx-%s"](prefix,name)),"lua") if userdata and next(userdata) then - if job.passes.first then + if environment.currentrun == 1 then + -- if job.passes.first then local newdata = serialize(userdata) checksum = md5.HEX(newdata) io.savedata(username,newdata) diff --git a/tex/context/base/mkiv/publ-ini.mkiv b/tex/context/base/mkiv/publ-ini.mkiv index 6e34d3ab5..05d93ef85 100644 --- a/tex/context/base/mkiv/publ-ini.mkiv +++ b/tex/context/base/mkiv/publ-ini.mkiv @@ -342,7 +342,7 @@ \newtoks\t_btx_cmd \newbox \b_btx_cmd -\t_btx_cmd{\global\setbox\b_btx_cmd\hpack{\clf_btxcmdstring}} +\t_btx_cmd{\global\setbox\b_btx_cmd\hbox{\clf_btxcmdstring}} % no \hpack, otherwise prerolling --- doesn't work \let\btxcmd\btxcommand diff --git a/tex/context/base/mkiv/regi-ini.lua b/tex/context/base/mkiv/regi-ini.lua index 2a3b2caaf..460d97d5e 100644 --- a/tex/context/base/mkiv/regi-ini.lua +++ b/tex/context/base/mkiv/regi-ini.lua @@ -6,11 +6,8 @@ if not modules then modules = { } end modules ['regi-ini'] = { license = "see context related readme files" } ---[[ldx-- -Regimes take care of converting the input characters into
-
We will hook regime handling code into the input methods.
---ldx]]-- +-- We will hook regime handling code into the input methods. local trace_translating = false trackers.register("regimes.translating", function(v) trace_translating = v end) diff --git a/tex/context/base/mkiv/sort-ini.lua b/tex/context/base/mkiv/sort-ini.lua index 98f516c22..a375d7057 100644 --- a/tex/context/base/mkiv/sort-ini.lua +++ b/tex/context/base/mkiv/sort-ini.lua @@ -6,49 +6,45 @@ if not modules then modules = { } end modules ['sort-ini'] = { license = "see context related readme files" } --- It took a while to get there, but with Fleetwood Mac's "Don't Stop" --- playing in the background we sort of got it done. - ---[[The code here evolved from the rather old mkii approach. There -we concatinate the key and (raw) entry into a new string. Numbers and -special characters get some treatment so that they sort ok. In -addition some normalization (lowercasing, accent stripping) takes -place and again data is appended ror prepended. Eventually these -strings are sorted using a regular string sorter. The relative order -of character is dealt with by weighting them. It took a while to -figure this all out but eventually it worked ok for most languages, -given that the right datatables were provided.
- -Here we do follow a similar approach but this time we don't append -the manipulated keys and entries but create tables for each of them -with entries being tables themselves having different properties. In -these tables characters are represented by numbers and sorting takes -place using these numbers. Strings are simplified using lowercasing -as well as shape codes. Numbers are filtered and after getting an offset -they end up at the right end of the spectrum (more clever parser will -be added some day). There are definitely more solutions to the problem -and it is a nice puzzle to solve.
- -In the future more methods can be added, as there is practically no -limit to what goes into the tables. For that we will provide hooks.
- -Todo: decomposition with specific order of accents, this is -relatively easy to do.
- -Todo: investigate what standards and conventions there are and see -how they map onto this mechanism. I've learned that users can come up -with any demand so nothing here is frozen.
- -Todo: I ran into the Unicode Collation document and noticed that -there are some similarities (like the weights) but using that method -would still demand extra code for language specifics. One option is -to use the allkeys.txt file for the uc vectors but then we would also -use the collapsed key (sq, code is now commented). In fact, we could -just hook those into the replacer code that we reun beforehand.
- -In the future index entries will become more clever, i.e. they will -have language etc properties that then can be used.
-]]-- +-- It took a while to get there, but with Fleetwood Mac's "Don't Stop" playing in +-- the background we sort of got it done. +-- +-- The code here evolved from the rather old mkii approach. There we concatenate the +-- key and (raw) entry into a new string. Numbers and special characters get some +-- treatment so that they sort ok. In addition some normalization (lowercasing, +-- accent stripping) takes place and again data is appended or prepended. +-- Eventually these strings are sorted using a regular string sorter. The relative +-- order of characters is dealt with by weighting them. It took a while to figure +-- this all out but eventually it worked ok for most languages, given that the right +-- datatables were provided. +-- +-- Here we do follow a similar approach but this time we don't append the +-- manipulated keys and entries but create tables for each of them with entries +-- being tables themselves having different properties. In these tables characters +-- are represented by numbers and sorting takes place using these numbers. Strings +-- are simplified using lowercasing as well as shape codes. Numbers are filtered and +-- after getting an offset they end up at the right end of the spectrum (a more +-- clever parser will be added some day). There are definitely more solutions to the +-- problem and it is a nice puzzle to solve. +-- +-- In the future more methods can be added, as there is practically no limit to what +-- goes into the tables. For that we will provide hooks. +-- +-- Todo: decomposition with specific order of accents, this is relatively easy to +-- do. +-- +-- Todo: investigate what standards and conventions there are and see how they map +-- onto this mechanism. I've learned that users can come up with any demand so +-- nothing here is frozen. 
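The table-based scheme described in the comment above can be illustrated with a standalone toy sketch. The weight table, names, and values below are invented for illustration; they are not the actual sort-ini vectors, which come from language-specific definition files.

```lua
-- Hypothetical weight table: each character maps to a number; sorting then
-- compares the resulting numeric vectors instead of raw strings.
local weights = { a = 1, ["à"] = 1.1, b = 2, c = 3 }

local function tovector(str)
    local v = { }
    -- iterate utf-8 characters with a classic byte-range pattern
    for c in string.gmatch(str, "[%z\1-\127\194-\244][\128-\191]*") do
        v[#v+1] = weights[c] or 1000 -- unknown characters sort last
    end
    return v
end

local function compare(a, b)
    local va, vb = a.vector, b.vector
    for i = 1, math.min(#va, #vb) do
        if va[i] ~= vb[i] then
            return va[i] < vb[i]
        end
    end
    return #va < #vb -- shorter entry wins on a tie
end

local entries = { }
for _, s in ipairs { "ba", "ab", "àa", "aa" } do
    entries[#entries+1] = { entry = s, vector = tovector(s) }
end
table.sort(entries, compare)
-- "à" weighs 1.1, so every plain a-word precedes every à-word,
-- and all of them precede the b-words
```

Giving an accented character a fractional weight close to its base letter is what lets the relative order of characters be tuned per language without touching the sorter itself.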
+-- +-- Todo: I ran into the Unicode Collation document and noticed that there are some +-- similarities (like the weights) but using that method would still demand extra +-- code for language specifics. One option is to use the allkeys.txt file for the uc +-- vectors but then we would also use the collapsed key (sq, code is now commented). +-- In fact, we could just hook those into the replacer code that we run beforehand. +-- +-- In the future index entries will become more clever, i.e. they will have language +-- etc properties that then can be used. local gsub, find, rep, sub, sort, concat, tohash, format = string.gsub, string.find, string.rep, string.sub, table.sort, table.concat, table.tohash, string.format local utfbyte, utfchar, utfcharacters = utf.byte, utf.char, utf.characters diff --git a/tex/context/base/mkiv/status-files.pdf b/tex/context/base/mkiv/status-files.pdf index de994239b..476b1642f 100644 Binary files a/tex/context/base/mkiv/status-files.pdf and b/tex/context/base/mkiv/status-files.pdf differ diff --git a/tex/context/base/mkiv/status-lua.pdf b/tex/context/base/mkiv/status-lua.pdf index e6773acf4..734e7705c 100644 Binary files a/tex/context/base/mkiv/status-lua.pdf and b/tex/context/base/mkiv/status-lua.pdf differ diff --git a/tex/context/base/mkiv/syst-con.lua b/tex/context/base/mkiv/syst-con.lua index 6a11fa8d3..f0ea8546a 100644 --- a/tex/context/base/mkiv/syst-con.lua +++ b/tex/context/base/mkiv/syst-con.lua @@ -20,10 +20,9 @@ local implement = interfaces.implement local formatters = string.formatters ---[[ldx-- -For raw 8 bit characters, the offset is 0x110000 (bottom of plane 18) at
-the top of
Internally
A conversion function that takes a number, unit (string) and optional -format (string) is implemented using this table.
---ldx]]-- +-- A conversion function that takes a number, unit (string) and optional format +-- (string) is implemented using this table. local f_none = formatters["%s%s"] local f_true = formatters["%0.5F%s"] @@ -110,9 +106,7 @@ local function numbertodimen(n,unit,fmt) -- will be redefined later ! end end ---[[ldx-- -We collect a bunch of converters in the
More interesting it to implement a (sort of) dimen datatype, one
-that permits calculations too. First we define a function that
-converts a string to scaledpoints. We use
We use a metatable to intercept errors. When no key is found in -the table with factors, the metatable will be consulted for an -alternative index function.
---ldx]]-- +-- We use a metatable to intercept errors. When no key is found in the table with +-- factors, the metatable will be consulted for an alternative index function. setmetatableindex(dimenfactors, function(t,s) -- error("wrong dimension: " .. (s or "?")) -- better a message return false end) ---[[ldx-- -We redefine the following function later on, so we comment it -here (which saves us bytecodes.
---ldx]]-- +-- We redefine the following function later on, so we comment it here (which saves +-- us bytecodes). -- function string.todimen(str) -- if type(str) == "number" then @@ -182,44 +170,38 @@ here (which saves us bytecodes. local stringtodimen -- assigned later (commenting saves bytecode) local amount = S("+-")^0 * R("09")^0 * S(".,")^0 * R("09")^0 -local unit = P("pt") + P("cm") + P("mm") + P("sp") + P("bp") + P("in") + - P("pc") + P("dd") + P("cc") + P("nd") + P("nc") +local unit = P("pt") + P("cm") + P("mm") + P("sp") + P("bp") + + P("es") + P("ts") + P("pc") + P("dd") + P("cc") + + P("in") + -- + P("nd") + P("nc") local validdimen = amount * unit lpeg.patterns.validdimen = validdimen ---[[ldx-- -This converter accepts calls like:
- -With this in place, we can now implement a proper datatype for dimensions, one -that permits us to do this:
- -We create a local metatable for this new type:
---ldx]]-- +-- This converter accepts calls like: +-- +-- string.todimen("10") +-- string.todimen(".10") +-- string.todimen("10.0") +-- string.todimen("10.0pt") +-- string.todimen("10pt") +-- string.todimen("10.0pt") +-- +-- With this in place, we can now implement a proper datatype for dimensions, one +-- that permits us to do this: +-- +-- s = dimen "10pt" + dimen "20pt" + dimen "200pt" +-- - dimen "100sp" / 10 + "20pt" + "0pt" +-- +-- We create a local metatable for this new type: local dimensions = { } ---[[ldx-- -The main (and globally) visible representation of a dimen is defined next: it is -a one-element table. The unit that is returned from the match is normally a number -(one of the previously defined factors) but we also accept functions. Later we will -see why. This function is redefined later.
---ldx]]-- +-- The main (and globally) visible representation of a dimen is defined next: it is +-- a one-element table. The unit that is returned from the match is normally a +-- number (one of the previously defined factors) but we also accept functions. +-- Later we will see why. This function is redefined later. -- function dimen(a) -- if a then @@ -241,11 +223,9 @@ see why. This function is redefined later. -- end -- end ---[[ldx-- -This function return a small hash with a metatable attached. It is -through this metatable that we can do the calculations. We could have -shared some of the code but for reasons of speed we don't.
---ldx]]-- +-- This function returns a small hash with a metatable attached. It is through this +-- metatable that we can do the calculations. We could have shared some of the code +-- but for reasons of speed we don't. function dimensions.__add(a, b) local ta, tb = type(a), type(b) @@ -281,20 +261,16 @@ function dimensions.__unm(a) return setmetatable({ - a }, dimensions) end ---[[ldx-- -It makes no sense to implement the power and modulo function but -the next two do make sense because they permits is code like:
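The one-element-table design with a shared metatable can be sketched in isolation. Everything below is illustrative (the names `factors`, `tosp` and `dimen` are invented here, and only two units are supported), not the actual ConTeXt definitions:

```lua
-- Amounts are stored in scaled points (65536 sp = 1 pt) in a one-element table.
local factors = { pt = 65536, sp = 1 }

local dimensions = { } -- the local metatable that carries the arithmetic

local function tosp(v) -- coerce dimen tables, raw numbers (sp) and strings
    if type(v) == "table" then
        return v[1]
    elseif type(v) == "number" then
        return v
    else
        local amount, unit = string.match(v, "^([%d%.]+)(%a*)$")
        return math.floor(tonumber(amount) * (factors[unit] or 65536))
    end
end

dimensions.__add = function(a, b)
    return setmetatable({ tosp(a) + tosp(b) }, dimensions)
end

dimensions.__div = function(a, b)
    return setmetatable({ math.floor(tosp(a) / b) }, dimensions)
end

dimensions.__tostring = function(a)
    return string.format("%0.5fpt", a[1] / 65536) -- render as points
end

local function dimen(a)
    return setmetatable({ tosp(a) }, dimensions)
end

-- mixed operands work because the metamethods coerce strings and numbers:
local s = dimen "10pt" + "20pt" + dimen "100sp" / 10
```

Because every result is wrapped in a fresh one-element table with the same metatable, expressions chain naturally, which is exactly why the calculations live in metamethods rather than in shared helper functions.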
- -We also need to provide a function for conversion to string (so that
-we can print dimensions). We print them as points, just like
Since it does not take much code, we also provide a way to access -a few accessors
- -In the converter from string to dimension we support functions as
-factors. This is because in
The previous code is rather efficient (also thanks to
When we cache converted strings this becomes 16.3 seconds. In order not -to waste too much memory on it, we tag the values of the cache as being -week which mean that the garbage collector will collect them in a next -sweep. This means that in most cases the speed up is mostly affecting the -current couple of calculations and as such the speed penalty is small.
- -We redefine two previous defined functions that can benefit from -this: ---ldx]]-- +-- In the converter from string to dimension we support functions as factors. This +-- is because in TeX we have a few more units: 'ex' and 'em'. These are not constant +-- factors but depend on the current font. They are not defined by default, but need +-- an explicit function call. This is because at the moment that this code is +-- loaded, the relevant tables that hold the functions needed may not yet be +-- available. + + dimenfactors["ex"] = 4 /65536 -- 4pt + dimenfactors["em"] = 10 /65536 -- 10pt +-- dimenfactors["%"] = 4 /65536 -- 400pt/100 + dimenfactors["eu"] = (9176/129)/65536 -- 1es + +-- The previous code is rather efficient (also thanks to LPEG) but we can speed it +-- up by caching converted dimensions. On my machine (2008) the following loop takes +-- about 25.5 seconds. +-- +-- for i=1,1000000 do +-- local s = dimen "10pt" + dimen "20pt" + dimen "200pt" +-- - dimen "100sp" / 10 + "20pt" + "0pt" +-- end +-- +-- When we cache converted strings this becomes 16.3 seconds. In order not to waste +-- too much memory on it, we tag the values of the cache as being weak, which means +-- that the garbage collector will collect them in a next sweep. This means that in +-- most cases the speed up is mostly affecting the current couple of calculations +-- and as such the speed penalty is small. +-- +-- We redefine two previously defined functions that can benefit from this: local known = { } setmetatable(known, { __mode = "v" }) @@ -436,14 +398,10 @@ function number.toscaled(d) return format("%0.5f",d/0x10000) -- 2^16 end ---[[ldx-- -In a similar fashion we can define a glue datatype. In that case we -probably use a hash instead of a one-element table.
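The weak-value cache mentioned above (`setmetatable(known, { __mode = "v" })`) works as in this minimal sketch; `tocached` is an invented stand-in for the real string-to-dimension conversion:

```lua
-- A cache whose values are weak: once nothing else references a cached
-- table, the garbage collector is free to reclaim it on a later sweep.
local known = setmetatable({ }, { __mode = "v" })

local function tocached(str)
    local hit = known[str]
    if not hit then
        hit = { value = str } -- stand-in for the expensive conversion
        known[str] = hit
    end
    return hit
end

local a = tocached("10pt")
local b = tocached("10pt") -- while a is alive, the same table is returned
```

The trade-off matches the comment: repeated conversions inside one stretch of calculations hit the cache, while long-term memory use stays bounded because the collector can drop unreferenced entries.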
---ldx]]-- +-- In the converter from string to dimension we support functions as factors. This +-- is because in TeX we have a few more units: 'ex' and 'em'. These are not constant +-- factors but depend on the current font. They are not defined by default, but need +-- an explicit function call. This is because at the moment that this code is +-- loaded, the relevant tables that hold the functions needed may not yet be +-- available. + + dimenfactors["ex"] = 4 /65536 -- 4pt + dimenfactors["em"] = 10 /65536 -- 10pt +-- dimenfactors["%"] = 4 /65536 -- 400pt/100 + dimenfactors["eu"] = (9176/129)/65536 -- 1es + +-- The previous code is rather efficient (also thanks to LPEG) but we can speed it +-- up by caching converted dimensions. On my machine (2008) the following loop takes +-- about 25.5 seconds. +-- +-- for i=1,1000000 do +-- local s = dimen "10pt" + dimen "20pt" + dimen "200pt" +-- - dimen "100sp" / 10 + "20pt" + "0pt" +-- end +-- +-- When we cache converted strings this becomes 16.3 seconds. In order not to waste +-- too much memory on it, we tag the values of the cache as being week which mean +-- that the garbage collector will collect them in a next sweep. This means that in +-- most cases the speed up is mostly affecting the current couple of calculations +-- and as such the speed penalty is small. +-- +-- We redefine two previous defined functions that can benefit from this: local known = { } setmetatable(known, { __mode = "v" }) @@ -436,14 +398,10 @@ function number.toscaled(d) return format("%0.5f",d/0x10000) -- 2^16 end ---[[ldx-- -In a similar fashion we can define a glue datatype. In that case we -probably use a hash instead of a one-element table.
---ldx]]-- - ---[[ldx-- -Goodie:s
---ldx]]-- +-- In a similar fashion we can define a glue datatype. In that case we probably use +-- a hash instead of a one-element table. +-- +-- A goodie: function number.percent(n,d) -- will be cleaned up once luatex 0.30 is out d = d or texget("hsize") diff --git a/tex/context/base/mkiv/util-fmt.lua b/tex/context/base/mkiv/util-fmt.lua index fe80c6420..4da4ef985 100644 --- a/tex/context/base/mkiv/util-fmt.lua +++ b/tex/context/base/mkiv/util-fmt.lua @@ -11,7 +11,7 @@ utilities.formatters = utilities.formatters or { } local formatters = utilities.formatters local concat, format = table.concat, string.format -local tostring, type = tostring, type +local tostring, type, unpack = tostring, type, unpack local strip = string.strip local lpegmatch = lpeg.match @@ -21,12 +21,15 @@ function formatters.stripzeros(str) return lpegmatch(stripper,str) end -function formatters.formatcolumns(result,between) +function formatters.formatcolumns(result,between,header) if result and #result > 0 then - between = between or " " - local widths, numbers = { }, { } - local first = result[1] - local n = #first + local widths = { } + local numbers = { } + local templates = { } + local first = result[1] + local n = #first + between = between or " " + -- for i=1,n do widths[i] = 0 end @@ -35,13 +38,6 @@ function formatters.formatcolumns(result,between) for j=1,n do local rj = r[j] local tj = type(rj) --- if tj == "number" then --- numbers[j] = true --- end --- if tj ~= "string" then --- rj = tostring(rj) --- r[j] = rj --- end if tj == "number" then numbers[j] = true rj = tostring(rj) @@ -55,29 +51,59 @@ function formatters.formatcolumns(result,between) end end end + if header then + for i=1,#header do + local h = header[i] + for j=1,n do + local hj = tostring(h[j]) + h[j] = hj + local w = #hj + if w > widths[j] then + widths[j] = w + end + end + end + end for i=1,n do local w = widths[i] if numbers[i] then if w > 80 then - widths[i] = "%s" .. between - else - widths[i] = "%0" .. w .. 
"i" .. between + templates[i] = "%s" .. between + else + templates[i] = "% " .. w .. "i" .. between end else if w > 80 then - widths[i] = "%s" .. between - elseif w > 0 then - widths[i] = "%-" .. w .. "s" .. between + templates[i] = "%s" .. between + elseif w > 0 then + templates[i] = "%-" .. w .. "s" .. between else - widths[i] = "%s" + templates[i] = "%s" end end end - local template = strip(concat(widths)) + local template = strip(concat(templates)) for i=1,#result do local str = format(template,unpack(result[i])) result[i] = strip(str) end + if header then + for i=1,n do + local w = widths[i] + if w > 80 then + templates[i] = "%s" .. between + elseif w > 0 then + templates[i] = "%-" .. w .. "s" .. between + else + templates[i] = "%s" + end + end + local template = strip(concat(templates)) + for i=1,#header do + local str = format(template,unpack(header[i])) + header[i] = strip(str) + end + end end - return result + return result, header end diff --git a/tex/context/base/mkiv/util-seq.lua b/tex/context/base/mkiv/util-seq.lua index 35839f230..49952dd98 100644 --- a/tex/context/base/mkiv/util-seq.lua +++ b/tex/context/base/mkiv/util-seq.lua @@ -6,15 +6,13 @@ if not modules then modules = { } end modules ['util-seq'] = { license = "see context related readme files" } ---[[ldx-- -Here we implement a mechanism for chaining the special functions
-that we use in
We start with a registration system for atributes so that we can use the -symbolic names later on.
---ldx]]-- +-- We start with a registration system for attributes so that we can use the symbolic +-- names later on. local nodes = nodes local context = context @@ -71,17 +69,13 @@ trackers.register("attributes.values", function(v) trace_values = v end) -- end -- end ---[[ldx-- -We reserve this one as we really want it to be always set (faster).
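A name/number registry of the kind this comment describes can be sketched as follows. This is a simplified illustration, not the actual attribute allocator (which also reserves system ranges and number 0 for `fontdynamic`):

```lua
-- Two-way mapping between symbolic attribute names and their numbers.
local names, numbers, last = { }, { }, 0

local function registerattribute(name)
    local n = numbers[name]
    if not n then
        last = last + 1
        n = last
        numbers[name] = n -- name -> number
        names[n] = name   -- number -> name
    end
    return n
end

local colorattr = registerattribute("color")
```

Registration is idempotent, so any module can safely ask for a name it needs and get the same number every time.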
---ldx]]-- +-- We reserve this one as we really want it to be always set (faster). names[0], numbers["fontdynamic"] = "fontdynamic", 0 ---[[ldx-- -private attributes are used by the system and public ones are for users. We use dedicated
-ranges of numbers for them. Of course a the
In order to deal with 8-bit output, we need to find a way to go from
This leaves us problems with characters that are specific to
We get a more efficient variant of this when we integrate -replacements in collapser. This more or less renders the previous -private code redundant. The following code is equivalent but the -first snippet uses the relocated dollars.
- -Instead of using a
Setting the lccodes is also done in a loop over the data table.
---ldx]]-- - implement { name = "chardescription", arguments = "integer", diff --git a/tex/context/base/mkxl/cont-new.mkxl b/tex/context/base/mkxl/cont-new.mkxl index 9a6fc93da..53ccef0b6 100644 --- a/tex/context/base/mkxl/cont-new.mkxl +++ b/tex/context/base/mkxl/cont-new.mkxl @@ -13,7 +13,7 @@ % \normalend % uncomment this to get the real base runtime -\newcontextversion{2023.03.20 15:42} +\newcontextversion{2023.04.01 09:28} %D This file is loaded at runtime, thereby providing an excellent place for hacks, %D patches, extensions and new features. There can be local overloads in cont-loc diff --git a/tex/context/base/mkxl/context.mkxl b/tex/context/base/mkxl/context.mkxl index 1a07772eb..6f4b7d052 100644 --- a/tex/context/base/mkxl/context.mkxl +++ b/tex/context/base/mkxl/context.mkxl @@ -29,7 +29,7 @@ %D {YYYY.MM.DD HH:MM} format. \immutable\edef\contextformat {\jobname} -\immutable\edef\contextversion{2023.03.20 15:42} +\immutable\edef\contextversion{2023.04.01 09:28} %overloadmode 1 % check frozen / warning %overloadmode 2 % check frozen / error @@ -215,8 +215,9 @@ \loadmkxlfile{unic-ini} -\loadmkxlfile{core-two} +%loadmkxlfile{core-two} % retired, not in testsuite, not on garden, not in styles \loadmkxlfile{core-dat} +\loadmkxlfile{core-pag} \loadmkxlfile{colo-ini} \loadmkxlfile{colo-nod} @@ -647,26 +648,26 @@ % we will definitely freeze mkiv and then use lmt files for futher development % of lmtx. We also no longer use the macro feature to replace 5.3 compatible % function calls by native 5.4 features as lmt files assume 5.4 anyway. This -% makes format generation a little faster (not that it's that slow). It might \ +% makes format generation a little faster (not that it's that slow). It might % take a while before we dealt with all of them because I'll also clean them -% up a bit when doing. +% up a bit when doing. Some will probably always be shared, like char-def.lua. 
% % % luat-bas.mkxl l-macro-imp-optimize % this is no longer used -% c:/data/develop/context/sources/buff-imp-default.lua -% c:/data/develop/context/sources/buff-imp-escaped.lua -% c:/data/develop/context/sources/buff-imp-lua.lua -% c:/data/develop/context/sources/buff-imp-mp.lua -% c:/data/develop/context/sources/buff-imp-nested.lua -% c:/data/develop/context/sources/buff-imp-parsed-xml.lua -% c:/data/develop/context/sources/buff-imp-tex.lua -% c:/data/develop/context/sources/buff-imp-xml.lua - % c:/data/develop/context/sources/buff-par.lua % c:/data/develop/context/sources/buff-ver.lua +% +% c:/data/develop/context/sources/buff-imp-default.lua % shared +% c:/data/develop/context/sources/buff-imp-escaped.lua % shared +% c:/data/develop/context/sources/buff-imp-lua.lua % shared +% c:/data/develop/context/sources/buff-imp-mp.lua % shared +% c:/data/develop/context/sources/buff-imp-nested.lua % shared +% c:/data/develop/context/sources/buff-imp-parsed-xml.lua % shared +% c:/data/develop/context/sources/buff-imp-tex.lua % shared +% c:/data/develop/context/sources/buff-imp-xml.lua % shared % c:/data/develop/context/sources/char-cjk.lua -% c:/data/develop/context/sources/char-def.lua +% c:/data/develop/context/sources/char-def.lua % shared data file, a real big one % c:/data/develop/context/sources/char-enc.lua % c:/data/develop/context/sources/char-ent.lua % c:/data/develop/context/sources/char-fio.lua @@ -680,7 +681,7 @@ % c:/data/develop/context/sources/cldf-com.lua % c:/data/develop/context/sources/cldf-ini.lua -% c:/data/develop/context/sources/cldf-prs.lua % use in chemistry +% c:/data/develop/context/sources/cldf-prs.lua % used in chemistry % c:/data/develop/context/sources/cldf-scn.lua % c:/data/develop/context/sources/cldf-stp.lua % c:/data/develop/context/sources/cldf-ver.lua @@ -690,8 +691,6 @@ % c:/data/develop/context/sources/core-con.lua % c:/data/develop/context/sources/core-ctx.lua -% c:/data/develop/context/sources/core-dat.lua -% 
c:/data/develop/context/sources/core-two.lua % data... @@ -700,7 +699,7 @@ % c:/data/develop/context/sources/file-res.lua % c:/data/develop/context/sources/font-afk.lua -% c:/data/develop/context/sources/font-agl.lua +% c:/data/develop/context/sources/font-agl.lua % shared data file % c:/data/develop/context/sources/font-aux.lua % c:/data/develop/context/sources/font-cid.lua % c:/data/develop/context/sources/font-enc.lua @@ -724,16 +723,16 @@ % c:/data/develop/context/sources/font-trt.lua % c:/data/develop/context/sources/font-web.lua % proof of concept, never used -% c:/data/develop/context/sources/font-imp-combining.lua % shared, like typescript -% c:/data/develop/context/sources/font-imp-dimensions.lua % idem -% c:/data/develop/context/sources/font-imp-italics.lua % idem -% c:/data/develop/context/sources/font-imp-notused.lua % idem -% c:/data/develop/context/sources/font-imp-properties.lua % idem -% c:/data/develop/context/sources/font-imp-reorder.lua % idem -% c:/data/develop/context/sources/font-imp-spacekerns.lua % idem -% c:/data/develop/context/sources/font-imp-tex.lua % idem -% c:/data/develop/context/sources/font-imp-tweaks.lua % idem -% c:/data/develop/context/sources/font-imp-unicode.lua % idem +% c:/data/develop/context/sources/font-imp-combining.lua % shared +% c:/data/develop/context/sources/font-imp-dimensions.lua % shared +% c:/data/develop/context/sources/font-imp-italics.lua % shared +% c:/data/develop/context/sources/font-imp-notused.lua % shared +% c:/data/develop/context/sources/font-imp-properties.lua % shared +% c:/data/develop/context/sources/font-imp-reorder.lua % shared +% c:/data/develop/context/sources/font-imp-spacekerns.lua % shared +% c:/data/develop/context/sources/font-imp-tex.lua % shared +% c:/data/develop/context/sources/font-imp-tweaks.lua % shared +% c:/data/develop/context/sources/font-imp-unicode.lua % shared % c:/data/develop/context/sources/good-ctx.lua % c:/data/develop/context/sources/good-ini.lua @@ -749,26 +748,26 @@ 
% c:/data/develop/context/sources/java-ini.lua -% c:/data/develop/context/sources/lang-cnt.lua -% c:/data/develop/context/sources/lang-def.lua % these are data files -% c:/data/develop/context/sources/lang-txt.lua % these are data files +% c:/data/develop/context/sources/lang-cnt.lua % shared data file +% c:/data/develop/context/sources/lang-def.lua % shared data file +% c:/data/develop/context/sources/lang-txt.lua % shared data file % c:/data/develop/context/sources/lang-wrd.lua % c:/data/develop/context/sources/luat-exe.lua % c:/data/develop/context/sources/luat-iop.lua % c:/data/develop/context/sources/luat-mac.lua % will become lmt -% c:/data/develop/context/sources/lxml-aux.lua -% c:/data/develop/context/sources/lxml-css.lua -% c:/data/develop/context/sources/lxml-dir.lua -% c:/data/develop/context/sources/lxml-ent.lua -% c:/data/develop/context/sources/lxml-ini.lua -% c:/data/develop/context/sources/lxml-lpt.lua -% c:/data/develop/context/sources/lxml-mis.lua -% c:/data/develop/context/sources/lxml-sor.lua -% c:/data/develop/context/sources/lxml-tab.lua -% c:/data/develop/context/sources/lxml-tex.lua -% c:/data/develop/context/sources/lxml-xml.lua +% c:/data/develop/context/sources/lxml-aux.lua % the xml interface is rather stable +% c:/data/develop/context/sources/lxml-css.lua % and is also provided/used in lua so +% c:/data/develop/context/sources/lxml-dir.lua % might as well share these because they +% c:/data/develop/context/sources/lxml-ent.lua % are unlikely to change +% c:/data/develop/context/sources/lxml-ini.lua % +% c:/data/develop/context/sources/lxml-lpt.lua % +% c:/data/develop/context/sources/lxml-mis.lua % +% c:/data/develop/context/sources/lxml-sor.lua % +% c:/data/develop/context/sources/lxml-tab.lua % +% c:/data/develop/context/sources/lxml-tex.lua % +% c:/data/develop/context/sources/lxml-xml.lua % % c:/data/develop/context/sources/meta-blb.lua % c:/data/develop/context/sources/meta-fun.lua @@ -788,16 +787,16 @@ % 
c:/data/develop/context/sources/page-pst.lua % c:/data/develop/context/sources/publ-aut.lua % shared -% c:/data/develop/context/sources/publ-dat.lua -% c:/data/develop/context/sources/publ-fnd.lua -% c:/data/develop/context/sources/publ-inc.lua -% c:/data/develop/context/sources/publ-ini.lua -% c:/data/develop/context/sources/publ-jrn.lua -% c:/data/develop/context/sources/publ-oth.lua -% c:/data/develop/context/sources/publ-reg.lua -% c:/data/develop/context/sources/publ-sor.lua -% c:/data/develop/context/sources/publ-tra.lua -% c:/data/develop/context/sources/publ-usr.lua +% c:/data/develop/context/sources/publ-dat.lua % shared +% c:/data/develop/context/sources/publ-fnd.lua % shared +% c:/data/develop/context/sources/publ-inc.lua % shared +% c:/data/develop/context/sources/publ-ini.lua % shared +% c:/data/develop/context/sources/publ-jrn.lua % shared +% c:/data/develop/context/sources/publ-oth.lua % shared +% c:/data/develop/context/sources/publ-reg.lua % shared +% c:/data/develop/context/sources/publ-sor.lua % shared +% c:/data/develop/context/sources/publ-tra.lua % shared +% c:/data/develop/context/sources/publ-usr.lua % shared % c:/data/develop/context/sources/scrn-but.lua % c:/data/develop/context/sources/scrn-fld.lua @@ -828,6 +827,3 @@ % c:/data/develop/context/sources/trac-lmx.lua % c:/data/develop/context/sources/trac-par.lua % c:/data/develop/context/sources/trac-tex.lua - -% c:/data/develop/context/sources/typo-cln.lua -- wrong name for what it does -% c:/data/develop/context/sources/typo-dha.lua diff --git a/tex/context/base/mkxl/core-dat.lmt b/tex/context/base/mkxl/core-dat.lmt new file mode 100644 index 000000000..fd8aa0fb6 --- /dev/null +++ b/tex/context/base/mkxl/core-dat.lmt @@ -0,0 +1,225 @@ +if not modules then modules = { } end modules ['core-dat'] = { + version = 1.001, + comment = "companion to core-dat.mkiv", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context 
related readme files" +} + +-- This module provides a (multipass) container for arbitrary data. It replaces the +-- twopass data mechanism. + +local tonumber, tostring, type = tonumber, tostring, type + +local context = context + +local trace_datasets = false trackers.register("job.datasets" , function(v) trace_datasets = v end) + +local report_dataset = logs.reporter("dataset") + +local allocate = utilities.storage.allocate +local settings_to_hash = utilities.parsers.settings_to_hash + +local texgetcount = tex.getcount +local texsetcount = tex.setcount + +local v_yes = interfaces.variables.yes + +local new_latelua = nodes.pool.latelua + +local implement = interfaces.implement + +local c_realpageno = tex.iscount("realpageno") + +local collected = allocate() +local tobesaved = allocate() + +local datasets = { + collected = collected, + tobesaved = tobesaved, +} + +job.datasets = datasets + +local function initializer() + collected = datasets.collected + tobesaved = datasets.tobesaved +end + +job.register('job.datasets.collected', tobesaved, initializer, nil) + +local sets = { } + +table.setmetatableindex(tobesaved, function(t,k) + local v = { } + t[k] = v + return v +end) + +table.setmetatableindex(sets, function(t,k) + local v = { + index = 0, + order = 0, + } + t[k] = v + return v +end) + +local function setdata(settings) + local name = settings.name + local tag = settings.tag + local data = settings.data + local list = tobesaved[name] + if settings.convert and type(data) == "string" then + data = settings_to_hash(data) + end + if type(data) ~= "table" then + data = { data = data } + end + if not tag then + tag = #list + 1 + else + tag = tonumber(tag) or tag -- autonumber saves keys + end + list[tag] = data + if settings.delay == v_yes then + local set = sets[name] + local index = set.index + 1 + set.index = index + data.index = index + data.order = index + data.realpage = texgetcount(c_realpageno) + if trace_datasets then + report_dataset("action %a, name %a, tag 
%a, index %a","assign delayed",name,tag,index) + end + elseif trace_datasets then + report_dataset("action %a, name %a, tag %a","assign immediate",name,tag) + end + return name, tag, data +end + +datasets.setdata = setdata + +function datasets.extend(name,tag) + if type(name) == "table" then + name, tag = name.name, name.tag + end + local set = sets[name] + local order = set.order + 1 + local realpage = texgetcount(c_realpageno) + set.order = order + local t = tobesaved[name][tag] + t.realpage = realpage + t.order = order + if trace_datasets then + report_dataset("action %a, name %a, tag %a, page %a, index %a","flush by order",name,tag,t.index or 0,order,realpage) + end +end + +function datasets.getdata(name,tag,key,default) + local t = collected[name] + if t == nil then + if trace_datasets then + report_dataset("error: unknown dataset, name %a",name) + end + elseif type(t) ~= "table" then + return t + else + t = t[tag] or t[tonumber(tag)] + if not t then + if trace_datasets then + report_dataset("error: unknown dataset, name %a, tag %a",name,tag) + end + elseif key then + return t[key] or default + else + return t + end + end + return default +end + +local function setdataset(settings) + settings.convert = true + local name, tag = setdata(settings) + if settings.delay ~= v_yes then + -- + else + context(new_latelua { action = job.datasets.extend, name = name, tag = tag }) + end +end + +local cache = table.setmetatableindex(function(t,k) + local v = table.load(k..".tuc") + if v then + v = v.job + if v then + v = v.datasets + if v then + v = v.collected + end + end + end + if not v then + v = { } + if trace_datasets then + report_dataset("error: unknown dataset job %a",k) + end + end + t[k] = v + return v +end) + +local function datasetvariable(name,tag,key,cache) + local t = (cache or collected)[name] + if t == nil then + if trace_datasets then + report_dataset("error: unknown dataset, name %a, tag %a, not passed to tex",name) -- no tag + end + elseif type(t) ~= 
"table" then + context(tostring(t)) + else + t = t and (t[tag] or t[tonumber(tag)]) + if not t then + if trace_datasets then + report_dataset("error: unknown dataset, name %a, tag %a, not passed to tex",name,tag) + end + elseif type(t) == "table" then + local s = t[key] + if type(s) ~= "table" then + context(tostring(s)) + elseif trace_datasets then + report_dataset("error: unknown dataset, name %a, tag %a, not passed to tex",name,tag) + end + end + end +end + +local function datasetvariablefromjob(jobnname,name,tag,key) + datasetvariable(name,tag,key,cache[jobnname]) +end + +implement { + name = "setdataset", + actions = setdataset, + arguments = { + { + { "name" }, + { "tag" }, + { "delay" }, + { "data" }, + } + } +} + +implement { + name = "datasetvariable", + actions = datasetvariable, + arguments = "3 strings", +} + +implement { + name = "datasetvariablefromjob", + arguments = { "string", "string", "string", "string" }, + actions = datasetvariablefromjob +} diff --git a/tex/context/base/mkxl/core-dat.mkxl b/tex/context/base/mkxl/core-dat.mkxl index ab40d874c..6d7d1bd14 100644 --- a/tex/context/base/mkxl/core-dat.mkxl +++ b/tex/context/base/mkxl/core-dat.mkxl @@ -1,6 +1,6 @@ %D \module %D [ file=core-dat, -%D version=20122.04.17, % replaces core-two from 1997.03.31, +%D version=2021.04.17, % replaces core-two from 1997.03.31, %D title=\CONTEXT\ Core Macros, %D subtitle=Multipass Datasets, %D author=Hans Hagen, @@ -42,7 +42,7 @@ \unprotect -\registerctxluafile{core-dat}{} +\registerctxluafile{core-dat}{autosuffix} \installcorenamespace{dataset} @@ -78,50 +78,4 @@ \expandafter\clf_datasetvariable \fi} -\installcorenamespace{pagestate} -\installcorenamespace{pagestatecounter} - -\installcommandhandler \??pagestate {pagestate} \??pagestate - -\def\syst_pagestates_allocate - {\expandafter\newinteger\csname\??pagestatecounter\currentpagestate\endcsname} - -\appendtoks - \syst_pagestates_allocate -\to \everydefinepagestate - -\setuppagestate - [\c!delay=\v!yes] - 
-\permanent\tolerant\protected\def\setpagestate[#1]#*[#2]% - {\begingroup - \edef\currentpagestate{#1}% - \ifcsname\??pagestatecounter\currentpagestate\endcsname - \scratchcounter\lastnamedcs - \advanceby\scratchcounter\plusone - \else - \scratchcounter\plusone - \syst_pagestates_allocate - \fi - \global\csname\??pagestatecounter\currentpagestate\endcsname\scratchcounter - \clf_setpagestate - name {\currentpagestate}% - tag {\ifparameter#2\or#2\else\number\scratchcounter\fi}% - delay {\pagestateparameter\c!delay}% - \relax - \endgroup} - -\permanent\protected\def\autosetpagestate#1% - {\setpagestate[#1]\relax} - -\permanent\def\autopagestatenumber#1{\begincsname\??pagestatecounter#1\endcsname} - -\permanent\def\pagestaterealpage #1#2{\clf_pagestaterealpage {#1}{#2}} -\permanent\def\setpagestaterealpageno#1#2{\clf_setpagestaterealpageno{#1}{#2}} -\permanent\def\pagestaterealpageorder#1#2{\clf_pagestaterealpageorder{#1}#2\relax} - -\permanent\def\autopagestaterealpage #1{\clf_pagestaterealpage {#1}{\number\autopagestatenumber{#1}}} -\permanent\def\setautopagestaterealpageno#1{\clf_setpagestaterealpageno{#1}{\number\autopagestatenumber{#1}}} -\permanent\def\autopagestaterealpageorder#1{\clf_pagestaterealpageorder{#1}\numexpr\autopagestatenumber{#1}\relax} - \protect diff --git a/tex/context/base/mkxl/core-pag.lmt b/tex/context/base/mkxl/core-pag.lmt new file mode 100644 index 000000000..219171d42 --- /dev/null +++ b/tex/context/base/mkxl/core-pag.lmt @@ -0,0 +1,160 @@ +if not modules then modules = { } end modules ['core-dat'] = { + version = 1.001, + comment = "companion to core-dat.mkiv", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files" +} + +-- This module provides a (multipass) container for arbitrary data. It replaces the +-- twopass data mechanism. 
+ +local tonumber = tonumber + +local context = context +local ctx_latelua = context.latelua + +local trace_pagestates = false trackers.register("job.pagestates", function(v) trace_pagestates = v end) + +local report_pagestate = logs.reporter("pagestate") + +local allocate = utilities.storage.allocate + +local texgetcount = tex.getcount +local texsetcount = tex.setcount + +local new_latelua = nodes.pool.latelua + +local implement = interfaces.implement +local getnamespace = interfaces.getnamespace + +local c_realpageno = tex.iscount("realpageno") +local c_realpagestateno = tex.iscount("realpagestateno") + +local collected = allocate() +local tobesaved = allocate() + +local pagestates = { + collected = collected, + tobesaved = tobesaved, +} + +job.pagestates = pagestates + +local function initializer() + collected = pagestates.collected + tobesaved = pagestates.tobesaved +end + +job.register("job.pagestates.collected", tobesaved, initializer, nil) + +table.setmetatableindex(tobesaved, "table") + +local function setstate(settings) + local name = settings.name + local tag = settings.tag + local list = tobesaved[name] + if not tag then + tag = #list + 1 + else + tag = tonumber(tag) or tag -- autonumber saves keys + end + local realpage = texgetcount(c_realpageno) + local data = realpage + list[tag] = data + if trace_pagestates then + report_pagestate("action %a, name %a, tag %a, preset %a","set",name,tag,realpage) + end + return name, tag, data +end + +local function extend(name,tag) + local realpage = texgetcount(c_realpageno) + if trace_pagestates then + report_pagestate("action %a, name %a, tag %a, preset %a","synchronize",name,tag,realpage) + end + tobesaved[name][tag] = realpage +end + +local function realpage(name,tag,default) + local t = collected[name] + if t then + t = t[tag] or t[tonumber(tag)] + if t then + return tonumber(t or default) + elseif trace_pagestates then + report_pagestate("error: unknown dataset, name %a, tag %a",name,tag) + end + elseif 
trace_pagestates then + report_pagestate("error: unknown dataset, name %a, tag %a",name) -- nil + end + return default +end + +local function realpageorder(name,tag) + local t = collected[name] + if t then + local p = t[tag] + if p then + local n = 1 + for i=tag-1,1,-1 do + if t[i] == p then + n = n +1 + end + end + return n + end + end + return 0 +end + +pagestates.setstate = setstate +pagestates.extend = extend +pagestates.realpage = realpage +pagestates.realpageorder = realpageorder + +function pagestates.countervalue(name) + return name and texgetcount(getnamespace("pagestatecounter") .. name) or 0 +end + +local function setpagestate(settings) + local name, tag = setstate(settings) + -- context(new_latelua(function() extend(name,tag) end)) + ctx_latelua(function() extend(name,tag) end) +end + +local function setpagestaterealpageno(name,tag) + local t = collected[name] + t = t and (t[tag] or t[tonumber(tag)]) + texsetcount("realpagestateno",t or texgetcount(c_realpageno)) +end + +implement { + name = "setpagestate", + actions = setpagestate, + arguments = { + { + { "name" }, + { "tag" }, + { "delay" }, + } + } +} + +implement { + name = "pagestaterealpage", + actions = { realpage, context }, + arguments = "2 strings", +} + +implement { + name = "setpagestaterealpageno", + actions = setpagestaterealpageno, + arguments = "2 strings", +} + +implement { + name = "pagestaterealpageorder", + actions = { realpageorder, context }, + arguments = { "string", "integer" } +} diff --git a/tex/context/base/mkxl/core-pag.mkxl b/tex/context/base/mkxl/core-pag.mkxl new file mode 100644 index 000000000..43b398b16 --- /dev/null +++ b/tex/context/base/mkxl/core-pag.mkxl @@ -0,0 +1,68 @@ +%D \module +%D [ file=core-pag, +%D version=2023.03.23, % moved from core-dat +%D title=\CONTEXT\ Core Macros, +%D subtitle=Multipass Pagestate, +%D author=Hans Hagen, +%D date=\currentdate, +%D copyright={PRAGMA ADE \& \CONTEXT\ Development Team}] +%C +%C This module is part of the \CONTEXT\ 
macro||package and is +%C therefore copyrighted by \PRAGMA. See mreadme.pdf for +%C details. + +\writestatus{loading}{ConTeXt Core Macros / Multipass Pagestate} + +\unprotect + +\newinteger\realpagestateno + +\registerctxluafile{core-pag}{autosuffix} + +\installcorenamespace{pagestate} +\installcorenamespace{pagestatecounter} + +\installcommandhandler \??pagestate {pagestate} \??pagestate + +\def\syst_pagestates_allocate + {\expandafter\newinteger\csname\??pagestatecounter\currentpagestate\endcsname} + +\appendtoks + \syst_pagestates_allocate +\to \everydefinepagestate + +\setuppagestate + [\c!delay=\v!yes] + +\permanent\tolerant\protected\def\setpagestate[#1]#*[#2]% + {\begingroup + \edef\currentpagestate{#1}% + \ifcsname\??pagestatecounter\currentpagestate\endcsname + \scratchcounter\lastnamedcs + \advanceby\scratchcounter\plusone + \else + \scratchcounter\plusone + \syst_pagestates_allocate + \fi + \global\csname\??pagestatecounter\currentpagestate\endcsname\scratchcounter + \clf_setpagestate + name {\currentpagestate}% + tag {\ifparameter#2\or#2\else\number\scratchcounter\fi}% + delay {\pagestateparameter\c!delay}% + \relax + \endgroup} + +\permanent\protected\def\autosetpagestate#1% + {\setpagestate[#1]\relax} + +\permanent\def\autopagestatenumber#1{\begincsname\??pagestatecounter#1\endcsname} + +\permanent\def\pagestaterealpage #1#2{\clf_pagestaterealpage {#1}{#2}} +\permanent\def\setpagestaterealpageno#1#2{\clf_setpagestaterealpageno{#1}{#2}} +\permanent\def\pagestaterealpageorder#1#2{\clf_pagestaterealpageorder{#1}#2\relax} + +\permanent\def\autopagestaterealpage #1{\clf_pagestaterealpage {#1}{\number\autopagestatenumber{#1}}} +\permanent\def\setautopagestaterealpageno#1{\clf_setpagestaterealpageno{#1}{\number\autopagestatenumber{#1}}} +\permanent\def\autopagestaterealpageorder#1{\clf_pagestaterealpageorder{#1}\numexpr\autopagestatenumber{#1}\relax} + +\protect diff --git a/tex/context/base/mkxl/core-two.lmt b/tex/context/base/mkxl/core-two.lmt new file 
mode 100644 index 000000000..7ea42374e --- /dev/null +++ b/tex/context/base/mkxl/core-two.lmt @@ -0,0 +1,210 @@ +if not modules then modules = { } end modules ['core-two'] = { + version = 1.001, + comment = "companion to core-two.mkiv", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files" +} + +-- This is actually one of the oldest MkIV files and basically a port of MkII but +-- the old usage has long been phased out. Also, the public part is now handled by +-- datasets which makes this a more private store. + +-- local next = next +-- local remove, concat = table.remove, table.concat + +local allocate = utilities.storage.allocate + +local collected = allocate() +local tobesaved = allocate() + +local jobpasses = { + collected = collected, + tobesaved = tobesaved, +} + +job.passes = jobpasses + +local function initializer() + collected = jobpasses.collected + tobesaved = jobpasses.tobesaved +end + +job.register('job.passes.collected', tobesaved, initializer, nil) + +function jobpasses.getcollected(id) + return collected[id] or { } +end + +function jobpasses.gettobesaved(id) + local t = tobesaved[id] + if not t then + t = { } + tobesaved[id] = t + end + return t +end + +-- local function define(id) +-- local p = tobesaved[id] +-- if not p then +-- p = { } +-- tobesaved[id] = p +-- end +-- return p +-- end +-- +-- local function save(id,str,index) +-- local jti = define(id) +-- if index then +-- jti[index] = str +-- else +-- jti[#jti+1] = str +-- end +-- end +-- +-- local function savetagged(id,tag,str) +-- local jti = define(id) +-- jti[tag] = str +-- end +-- +-- local function getdata(id,index,default) +-- local jti = collected[id] +-- local value = jti and jti[index] +-- return value ~= "" and value or default or "" +-- end +-- +-- local function getfield(id,index,tag,default) +-- local jti = collected[id] +-- jti = jti and jti[index] +-- local value = jti and jti[tag] 
+-- return value ~= "" and value or default or "" +-- end +-- +-- local function getcollected(id) +-- return collected[id] or { } +-- end +-- +-- local function gettobesaved(id) +-- return define(id) +-- end +-- +-- local function get(id) +-- local jti = collected[id] +-- if jti and #jti > 0 then +-- return remove(jti,1) +-- end +-- end +-- +-- local function first(id) +-- local jti = collected[id] +-- return jti and jti[1] +-- end +-- +-- local function last(id) +-- local jti = collected[id] +-- return jti and jti[#jti] +-- end +-- +-- local function find(id,n) +-- local jti = collected[id] +-- return jti and jti[n] or nil +-- end +-- +-- local function count(id) +-- local jti = collected[id] +-- return jti and #jti or 0 +-- end +-- +-- local function list(id) +-- local jti = collected[id] +-- if jti then +-- return concat(jti,',') +-- end +-- end +-- +-- local function inlist(id,str) +-- local jti = collected[id] +-- if jti then +-- for _, v in next, jti do +-- if v == str then +-- return true +-- end +-- end +-- end +-- return false +-- end +-- +-- local check = first +-- +-- jobpasses.define = define +-- jobpasses.save = save +-- jobpasses.savetagged = savetagged +-- jobpasses.getdata = getdata +-- jobpasses.getfield = getfield +-- jobpasses.getcollected = getcollected +-- jobpasses.gettobesaved = gettobesaved +-- jobpasses.get = get +-- jobpasses.first = first +-- jobpasses.last = last +-- jobpasses.find = find +-- jobpasses.list = list +-- jobpasses.count = count +-- jobpasses.check = check +-- jobpasses.inlist = inlist +-- +-- -- interface +-- +-- local implement = interfaces.implement +-- +-- implement { name = "gettwopassdata", actions = { get, context }, arguments = "string" } +-- implement { name = "getfirsttwopassdata",actions = { first, context }, arguments = "string" } +-- implement { name = "getlasttwopassdata", actions = { last, context }, arguments = "string" } +-- implement { name = "findtwopassdata", actions = { find, context }, arguments = "2 
strings" } +-- implement { name = "gettwopassdatalist", actions = { list, context }, arguments = "string" } +-- implement { name = "counttwopassdata", actions = { count, context }, arguments = "string" } +-- implement { name = "checktwopassdata", actions = { check, context }, arguments = "string" } +-- +-- implement { +-- name = "definetwopasslist", +-- actions = define, +-- arguments = "string" +-- } +-- +-- implement { +-- name = "savetwopassdata", +-- actions = save, +-- arguments = "2 strings", +-- } +-- +-- implement { +-- name = "savetaggedtwopassdata", +-- actions = savetagged, +-- arguments = "3 strings", +-- } +-- +-- implement { +-- name = "doifelseintwopassdata", +-- actions = { inlist, commands.doifelse }, +-- arguments = "2 strings", +-- } +-- +-- -- local ctx_latelua = context.latelua +-- +-- -- implement { +-- -- name = "lazysavetwopassdata", +-- -- arguments = "3 strings", +-- -- public = true, +-- -- actions = function(a,b,c) +-- -- ctx_latelua(function() save(a,c) end) +-- -- end, +-- -- } +-- +-- -- implement { +-- -- name = "lazysavetaggedtwopassdata", +-- -- arguments = "3 strings", +-- -- public = true, +-- -- actions = function(a,b,c) +-- -- ctx_latelua(function() savetagged(a,b,c) end) +-- -- end, +-- -- } diff --git a/tex/context/base/mkxl/core-two.mkxl b/tex/context/base/mkxl/core-two.mkxl index 38f03c7c4..10a7eec9e 100644 --- a/tex/context/base/mkxl/core-two.mkxl +++ b/tex/context/base/mkxl/core-two.mkxl @@ -1,6 +1,6 @@ %D \module %D [ file=core-two, % moved from core-uti -%D version=1997.03.31, +%D version=1997.03.31, % stripped down 2023-03-21 %D title=\CONTEXT\ Core Macros, %D subtitle=Two Pass Data, %D author=Hans Hagen, @@ -11,102 +11,110 @@ %C therefore copyrighted by \PRAGMA. See mreadme.pdf for %C details. -\writestatus{loading}{ConTeXt Core Macros / Two Pass Data} +%D The public interface is replaced by datasets and two pass data is now private +%D to the engine. For the moment we keep some commands commented. 
The unused +%D (second) argument is an inheritance from \MKII. If needed we can bring back +%D a compatible interface. -%D This is a rather old mechanism which has not changed much over time, apart from -%D adding a few more selectors. This code used to be part of \type {core-uti}. The -%D following examples demonstrate the interface. -%D -%D \startbuffer -%D \definetwopasslist{test-1} -%D -%D \gettwopassdatalist{test-1} [\twopassdatalist=] -%D \checktwopassdata {test-1} [\twopassdata=] -%D \checktwopassdata {test-1} [\twopassdata=] -%D \gettwopassdata {test-1} [\twopassdata=] -%D \gettwopassdata {test-1} [\twopassdata=] -%D -%D \definetwopasslist{test-2} -%D -%D \lazysavetwopassdata{test-2}{1}{x} -%D \lazysavetwopassdata{test-2}{2}{y} -%D \lazysavetwopassdata{test-2}{3}{z} -%D -%D \gettwopassdatalist{test-2} [\twopassdatalist=x,y,z] -%D \checktwopassdata {test-2} [\twopassdata=x] -%D \checktwopassdata {test-2} [\twopassdata=x] -%D \gettwopassdata {test-2} [\twopassdata=x] -%D \gettwopassdata {test-2} [\twopassdata=y] -%D \gettwopassdata {test-2} [\twopassdata=z] -%D \gettwopassdata {test-2} [\twopassdata=] -%D -%D \definetwopasslist{test-3} -%D -%D \lazysavetaggedtwopassdata{test-3}{1}{x}{a} -%D \lazysavetaggedtwopassdata{test-3}{2}{y}{b} -%D \lazysavetaggedtwopassdata{test-3}{3}{z}{c} -%D -%D \findtwopassdata{test-3}{x} [\twopassdata=a] -%D \findtwopassdata{test-3}{y} [\twopassdata=b] -%D \findtwopassdata{test-3}{z} [\twopassdata=c] -%D \findtwopassdata{test-3}{w} [\twopassdata=] -%D -%D \definetwopasslist{test-4} -%D -%D \lazysavetwopassdata{test-4}{1}{A} -%D \lazysavetwopassdata{test-4}{2}{B} -%D \lazysavetwopassdata{test-4}{3}{C} -%D -%D \getfirsttwopassdata{test-4} [\twopassdata=A] -%D \getlasttwopassdata {test-4} [\twopassdata=C] -%D \getfirsttwopassdata{test-4} [\twopassdata=A] -%D \getlasttwopassdata {test-4} [\twopassdata=C] -%D \getfromtwopassdata {test-4}{1} [\twopassdata=A] -%D \getfromtwopassdata {test-4}{3} [\twopassdata=C] -%D \getfromtwopassdata 
{test-4}{2} [\twopassdata=B] -%D \stopbuffer -%D -%D \getbuffer \typebuffer +\writestatus{loading}{ConTeXt Core Macros / Two Pass Data} \unprotect -\registerctxluafile{core-two}{} - -\permanent\def\immediatesavetwopassdata #1#2#3{\normalexpanded{\noexpand\clf_savetwopassdata{#1}{#3}}} -\permanent\def \lazysavetwopassdata #1#2#3{\normalexpanded{\noexpand\ctxlatecommand{savetwopassdata("#1","#3")}}} -\permanent\let \savetwopassdata \lazysavetwopassdata -\permanent\def \savetaggedtwopassdata#1#2#3#4{\normalexpanded{\noexpand\clf_savetaggedtwopassdata{#1}{#3}{#4}}} -\permanent\def\lazysavetaggedtwopassdata#1#2#3#4{\normalexpanded{\noexpand\ctxlatecommand{savetaggedtwopassdata("#1",'#3',"#4")}}} - -% temp hack: needs a proper \starteverytimeluacode - -\setfalse\twopassdatafound - -\mutable\lettonothing\twopassdata -\mutable\lettonothing\twopassdatalist - -\mutable\let\noftwopassitems\!!zeropoint - -\def\syst_twopass_check % can be delegated to lua once obsolete is gone - {\ifempty\twopassdata - \setfalse\twopassdatafound - \else - \settrue\twopassdatafound - \fi} - -\permanent\protected\def\definetwopasslist #1{\clf_definetwopasslist{#1}} -\permanent\protected\def\gettwopassdata #1{\edef\twopassdata {\clf_gettwopassdata {#1}}\syst_twopass_check} -\permanent\protected\def\checktwopassdata #1{\edef\twopassdata {\clf_checktwopassdata {#1}}\syst_twopass_check} -\permanent\protected\def\findtwopassdata #1#2{\edef\twopassdata {\clf_findtwopassdata {#1}{#2}}\syst_twopass_check} -\permanent\protected\def\getfirsttwopassdata #1{\edef\twopassdata {\clf_getfirsttwopassdata {#1}}\syst_twopass_check} -\permanent\protected\def\getlasttwopassdata #1{\edef\twopassdata {\clf_getlasttwopassdata {#1}}% - \edef\noftwopassitems{\clf_counttwopassdata {#1}}\syst_twopass_check} -\permanent\protected\def\getnamedtwopassdatalist#1#2{\edef #1{\clf_gettwopassdatalist {#2}}} -\permanent\protected\def\gettwopassdatalist #1{\edef\twopassdatalist{\clf_gettwopassdatalist {#1}}} - 
-\permanent\protected\def\doifelseintwopassdata #1#2{\clf_doifelseintwopassdata{#1}{#2}} +\registerctxluafile{core-two}{autosuffix} -\aliased\let\doifintwopassdataelse\doifelseintwopassdata -\aliased\let\getfromtwopassdata \findtwopassdata +% %D This is a rather old mechanism which has not changed much over time, apart from +% %D adding a few more selectors. This code used to be part of \type {core-uti}. The +% %D following examples demonstrate the interface. +% %D +% %D \startbuffer +% %D \definetwopasslist{test-1} +% %D +% %D \gettwopassdatalist{test-1} [\twopassdatalist=] +% %D \checktwopassdata {test-1} [\twopassdata=] +% %D \checktwopassdata {test-1} [\twopassdata=] +% %D \gettwopassdata {test-1} [\twopassdata=] +% %D \gettwopassdata {test-1} [\twopassdata=] +% %D +% %D \definetwopasslist{test-2} +% %D +% %D \lazysavetwopassdata{test-2}{1}{x} +% %D \lazysavetwopassdata{test-2}{2}{y} +% %D \lazysavetwopassdata{test-2}{3}{z} +% %D +% %D \gettwopassdatalist{test-2} [\twopassdatalist=x,y,z] +% %D \checktwopassdata {test-2} [\twopassdata=x] +% %D \checktwopassdata {test-2} [\twopassdata=x] +% %D \gettwopassdata {test-2} [\twopassdata=x] +% %D \gettwopassdata {test-2} [\twopassdata=y] +% %D \gettwopassdata {test-2} [\twopassdata=z] +% %D \gettwopassdata {test-2} [\twopassdata=] +% %D +% %D \definetwopasslist{test-3} +% %D +% %D \lazysavetaggedtwopassdata{test-3}{1}{x}{a} +% %D \lazysavetaggedtwopassdata{test-3}{2}{y}{b} +% %D \lazysavetaggedtwopassdata{test-3}{3}{z}{c} +% %D +% %D \findtwopassdata{test-3}{x} [\twopassdata=a] +% %D \findtwopassdata{test-3}{y} [\twopassdata=b] +% %D \findtwopassdata{test-3}{z} [\twopassdata=c] +% %D \findtwopassdata{test-3}{w} [\twopassdata=] +% %D +% %D \definetwopasslist{test-4} +% %D +% %D \lazysavetwopassdata{test-4}{1}{A} +% %D \lazysavetwopassdata{test-4}{2}{B} +% %D \lazysavetwopassdata{test-4}{3}{C} +% %D +% %D \getfirsttwopassdata{test-4} [\twopassdata=A] +% %D \getlasttwopassdata {test-4} [\twopassdata=C] +% %D 
\getfirsttwopassdata{test-4} [\twopassdata=A] +% %D \getlasttwopassdata {test-4} [\twopassdata=C] +% %D \getfromtwopassdata {test-4}{1} [\twopassdata=A] +% %D \getfromtwopassdata {test-4}{3} [\twopassdata=C] +% %D \getfromtwopassdata {test-4}{2} [\twopassdata=B] +% %D \stopbuffer +% %D +% %D \getbuffer \typebuffer +% +% %D The next code can be simplified (read: defined at the \LUA\ end) but we never use this +% %D mechanism which has been replaced by datasets so it's not worth the effort. +% +% \permanent\def\immediatesavetwopassdata #1#2#3{\normalexpanded{\noexpand\clf_savetwopassdata{#1}{#3}}} +% \permanent\def \lazysavetwopassdata #1#2#3{\normalexpanded{\noexpand\ctxlatecommand{savetwopassdata("#1","#3")}}} +% \permanent\let \savetwopassdata \lazysavetwopassdata +% \permanent\def \savetaggedtwopassdata#1#2#3#4{\normalexpanded{\noexpand\clf_savetaggedtwopassdata{#1}{#3}{#4}}} +% \permanent\def\lazysavetaggedtwopassdata#1#2#3#4{\normalexpanded{\noexpand\ctxlatecommand{savetaggedtwopassdata("#1","#3","#4")}}} +% +% % temp hack: needs a proper \starteverytimeluacode +% +% \setfalse\twopassdatafound +% +% \mutable\lettonothing\twopassdata +% \mutable\lettonothing\twopassdatalist +% +% \mutable\let\noftwopassitems\!!zeropoint +% +% \def\syst_twopass_check % can be delegated to lua once obsolete is gone +% {\ifempty\twopassdata +% \setfalse\twopassdatafound +% \else +% \settrue\twopassdatafound +% \fi} +% +% \permanent\protected\def\definetwopasslist #1{\clf_definetwopasslist{#1}} +% \permanent\protected\def\gettwopassdata #1{\edef\twopassdata {\clf_gettwopassdata {#1}}\syst_twopass_check} +% \permanent\protected\def\checktwopassdata #1{\edef\twopassdata {\clf_checktwopassdata {#1}}\syst_twopass_check} +% \permanent\protected\def\findtwopassdata #1#2{\edef\twopassdata {\clf_findtwopassdata {#1}{#2}}\syst_twopass_check} +% \permanent\protected\def\getfirsttwopassdata #1{\edef\twopassdata {\clf_getfirsttwopassdata {#1}}\syst_twopass_check} +% 
\permanent\protected\def\getlasttwopassdata #1{\edef\twopassdata {\clf_getlasttwopassdata {#1}}% +% \edef\noftwopassitems{\clf_counttwopassdata {#1}}\syst_twopass_check} +% \permanent\protected\def\getnamedtwopassdatalist#1#2{\edef #1{\clf_gettwopassdatalist {#2}}} +% \permanent\protected\def\gettwopassdatalist #1{\edef\twopassdatalist{\clf_gettwopassdatalist {#1}}} +% +% \permanent\protected\def\doifelseintwopassdata #1#2{\clf_doifelseintwopassdata{#1}{#2}} +% +% \aliased\let\doifintwopassdataelse\doifelseintwopassdata +% \aliased\let\getfromtwopassdata \findtwopassdata \protect \endinput diff --git a/tex/context/base/mkxl/core-uti.lmt b/tex/context/base/mkxl/core-uti.lmt index 966428b36..e4b6606e3 100644 --- a/tex/context/base/mkxl/core-uti.lmt +++ b/tex/context/base/mkxl/core-uti.lmt @@ -6,16 +6,13 @@ if not modules then modules = { } end modules ['core-uti'] = { license = "see context related readme files" } --- todo: keep track of changes here (hm, track access, and only true when --- accessed and changed) - ---[[ldx-- -A utility file has always been part of
Variables are saved using in the previously defined table and passed
-onto
It's more convenient to manipulate filenames (paths) in
-
Here we only implement a few helper functions.
---ldx]]-- +-- Watch out: no negative depths and negative heights are permitted in regular +-- fonts. Also, the code in LMTX is a bit different. Here we only implement a +-- few helper functions. local fonts = fonts local constructors = fonts.constructors or { } @@ -53,11 +51,9 @@ constructors.loadedfonts = loadedfonts ----- scalecommands = fonts.helpers.scalecommands ---[[ldx-- -We need to normalize the scale factor (in scaled points). This has to
-do with the fact that
Beware, the boundingbox is passed as reference so we may not overwrite it -in the process; numbers are of course copies. Here 65536 equals 1pt. (Due to -excessive memory usage in CJK fonts, we no longer pass the boundingbox.)
---ldx]]-- - --- The scaler is only used for otf and afm and virtual fonts. If a virtual font has italic --- correction make sure to set the hasitalics flag. Some more flags will be added in the --- future. - ---[[ldx-- -The reason why the scaler was originally split, is that for a while we experimented
-with a helper function. However, in practice the
A unique hash value is generated by:
---ldx]]-- +-- A unique hash value is generated by: local hashmethods = { } constructors.hashmethods = hashmethods @@ -1069,13 +1059,11 @@ hashmethods.normal = function(list) end end ---[[ldx-- -In principle we can share tfm tables when we are in need for a font, but then
-we need to define a font switch as an id/attr switch which is no fun, so in that
-case users can best use dynamic features ... so, we will not use that speedup. Okay,
-when we get rid of base mode we can optimize even further by sharing, but then we
-loose our testcases for
We need to check for default features. For this we provide -a helper function.
---ldx]]-- +-- We need to check for default features. For this we provide a helper function. function constructors.checkedfeatures(what,features) local defaults = handlers[what].features.defaults diff --git a/tex/context/base/mkxl/font-ctx.lmt b/tex/context/base/mkxl/font-ctx.lmt index 77953d64a..1d59ad728 100644 --- a/tex/context/base/mkxl/font-ctx.lmt +++ b/tex/context/base/mkxl/font-ctx.lmt @@ -529,19 +529,13 @@ do end ---[[ldx-- -So far we haven't really dealt with features (or whatever we want -to pass along with the font definition. We distinguish the following -situations:
-situations: - -
-name:xetex like specs
-name@virtual font spec
-name*context specification
-
---ldx]]--
-
+-- So far we haven't really dealt with features (or whatever we want to pass along
+-- with the font definition. We distinguish the following situations:
+--
+-- name:xetex like specs
+-- name@virtual font spec
+-- name*context specification
+--
-- Currently fonts are scaled while constructing the font, so we have to do scaling
-- of commands in the vf at that point using e.g. "local scale = g.parameters.factor
-- or 1" after all, we need to work with copies anyway and scaling needs to be done
@@ -2269,10 +2263,8 @@ dimenfactors.em = nil
dimenfactors["%"] = nil
dimenfactors.pct = nil
---[[ldx--
-Before a font is passed to
Here we deal with defining fonts. We do so by intercepting the
-default loader that only handles
We hardly gain anything when we cache the final (pre scaled)
-
We can prefix a font specification by
The following function split the font specification into components -and prepares a table that will move along as we proceed.
---ldx]]-- +-- We hardly gain anything when we cache the final (pre scaled) TFM table. But it +-- can be handy for debugging, so we no longer carry this code along. Also, we now +-- have quite some reference to other tables so we would end up with lots of +-- catches. +-- +-- We can prefix a font specification by "name:" or "file:". The first case will +-- result in a lookup in the synonym table. +-- +-- [ name: | file: ] identifier [ separator [ specification ] ] +-- +-- The following function split the font specification into components and prepares +-- a table that will move along as we proceed. -- beware, we discard additional specs -- @@ -166,9 +158,7 @@ do end ---[[ldx-- -We can resolve the filename using the next function:
---ldx]]-- +-- We can resolve the filename using the next function: definers.resolvers = definers.resolvers or { } local resolvers = definers.resolvers @@ -261,23 +251,17 @@ function definers.resolve(specification) return specification end ---[[ldx-- -The main read function either uses a forced reader (as determined by -a lookup) or tries to resolve the name using the list of readers.
- -We need to cache when possible. We do cache raw tfm data (from
Watch out, here we do load a font, but we don't prepare the -specification yet.
---ldx]]-- - --- very experimental: +-- The main read function either uses a forced reader (as determined by a lookup) or +-- tries to resolve the name using the list of readers. +-- +-- We need to cache when possible. We do cache raw tfm data (from TFM, AFM or OTF). +-- After that we can cache based on specification (name) and size, that is, TeX only +-- needs a number for an already loaded font. However, it may make sense to cache +-- fonts before they're scaled as well (store TFM's with applied methods and +-- features). However, there may be a relation between the size and features (esp in +-- virtual fonts) so let's not do that now. +-- +-- Watch out, here we do load a font, but we don't prepare the specification yet. function definers.applypostprocessors(tfmdata) local postprocessors = tfmdata.postprocessors @@ -431,17 +415,13 @@ function constructors.readanddefine(name,size) -- no id -- maybe a dummy first return fontdata[id], id end ---[[ldx-- -So far the specifiers. Now comes the real definer. Here we cache -based on id's. Here we also intercept the virtual font handler. Since -it evolved stepwise I may rewrite this bit (combine code).
- -In the previously defined reader (the one resulting in aThis is very experimental code!
---ldx]]-- - local trace_visualize = false trackers.register("fonts.composing.visualize", function(v) trace_visualize = v end) local trace_define = false trackers.register("fonts.composing.define", function(v) trace_define = v end) diff --git a/tex/context/base/mkxl/font-fil.mklx b/tex/context/base/mkxl/font-fil.mklx index 79535ea11..73348645d 100644 --- a/tex/context/base/mkxl/font-fil.mklx +++ b/tex/context/base/mkxl/font-fil.mklx @@ -294,7 +294,7 @@ % pre-expansion. \def\font_helpers_update_font_class_parameters - {\edef\m_font_class_direction {\begincsname\??fontclass\fontclass\fontstyle\s!direction \endcsname}% + {%edef\m_font_class_direction {\begincsname\??fontclass\fontclass\fontstyle\s!direction \endcsname}% \edef\m_font_class_features {\begincsname\??fontclass\fontclass\fontstyle\s!features \endcsname}% \edef\m_font_class_fallbacks {\begincsname\??fontclass\fontclass\fontstyle\s!fallbacks \endcsname}% \edef\m_font_class_goodies {\begincsname\??fontclass\fontclass\fontstyle\s!goodies \endcsname}% diff --git a/tex/context/base/mkxl/font-ini.lmt b/tex/context/base/mkxl/font-ini.lmt index bc68fa83d..dcec8594e 100644 --- a/tex/context/base/mkxl/font-ini.lmt +++ b/tex/context/base/mkxl/font-ini.lmt @@ -6,10 +6,6 @@ if not modules then modules = { } end modules ['font-ini'] = { license = "see context related readme files" } ---[[ldx-- -Not much is happening here.
---ldx]]-- - local sortedhash, setmetatableindex = table.sortedhash, table.setmetatableindex local allocate = utilities.storage.allocate diff --git a/tex/context/base/mkxl/font-ini.mklx b/tex/context/base/mkxl/font-ini.mklx index 6efae2ae1..ea727bde4 100644 --- a/tex/context/base/mkxl/font-ini.mklx +++ b/tex/context/base/mkxl/font-ini.mklx @@ -755,6 +755,16 @@ \immutable\dimensiondef\d_font_default_size 10pt +%lettonothing\m_font_class_direction % no longer used +\lettonothing\m_font_class_features +\lettonothing\m_font_class_fallbacks +\lettonothing\m_font_class_goodies + +\lettonothing\m_font_direction +\lettonothing\m_font_features +\lettonothing\m_font_fallbacks +\lettonothing\m_font_goodies + \protected\def\font_helpers_low_level_define {\ifconditional\c_font_compact \expandafter\font_helpers_low_level_define_compact diff --git a/tex/context/base/mkxl/font-mat.mklx b/tex/context/base/mkxl/font-mat.mklx index 76f6f87b9..54473a347 100644 --- a/tex/context/base/mkxl/font-mat.mklx +++ b/tex/context/base/mkxl/font-mat.mklx @@ -337,15 +337,17 @@ %D 0 while in rl mode 0 is a copy of 1. There is no real overhead involved in this. %D This also permits different font definitions for normal and mixed. 
-\lettonothing\m_font_class_direction -\lettonothing\m_font_class_features -\lettonothing\m_font_class_fallbacks -\lettonothing\m_font_class_goodies - -\lettonothing\m_font_direction -\lettonothing\m_font_features -\lettonothing\m_font_fallbacks -\lettonothing\m_font_goodies +% moved to ini +% +% \lettonothing\m_font_class_direction +% \lettonothing\m_font_class_features +% \lettonothing\m_font_class_fallbacks +% \lettonothing\m_font_class_goodies +% +% \lettonothing\m_font_direction +% \lettonothing\m_font_features +% \lettonothing\m_font_fallbacks +% \lettonothing\m_font_goodies \appendtoks \font_helpers_set_math_family\c_font_fam_mr\s!mr diff --git a/tex/context/base/mkxl/font-one.lmt b/tex/context/base/mkxl/font-one.lmt index 453f61192..71694dcca 100644 --- a/tex/context/base/mkxl/font-one.lmt +++ b/tex/context/base/mkxl/font-one.lmt @@ -7,18 +7,16 @@ if not modules then modules = { } end modules ['font-one'] = { license = "see context related readme files" } ---[[ldx-- -Some code may look a bit obscure but this has to do with the fact that we also use
-this code for testing and much code evolved in the transition from
The following code still has traces of intermediate font support where we handles -font encodings. Eventually font encoding went away but we kept some code around in -other modules.
- -This version implements a node mode approach so that users can also more easily -add features.
---ldx]]-- +-- Some code may look a bit obscure but this has to do with the fact that we also +-- use this code for testing and much code evolved in the transition from TFM to AFM +-- to OTF. +-- +-- The following code still has traces of intermediate font support where we handled +-- font encodings. Eventually font encoding went away but we kept some code around +-- in other modules. +-- +-- This version implements a node mode approach so that users can also more easily +-- add features. local fonts, logs, trackers, containers, resolvers = fonts, logs, trackers, containers, resolvers @@ -71,15 +69,13 @@ local overloads = fonts.mappings.overloads local applyruntimefixes = fonts.treatments and fonts.treatments.applyfixes ---[[ldx-- -We cache files. Caching is taken care of in the loader. We cheat a bit by adding -ligatures and kern information to the afm derived data. That way we can set them faster -when defining a font.
- -We still keep the loading two phased: first we load the data in a traditional
-fashion and later we transform it to sequences. Then we apply some methods also
-used in opentype fonts (like
These helpers extend the basic table with extra ligatures, texligatures -and extra kerns. This saves quite some lookups later.
---ldx]]-- +-- These helpers extend the basic table with extra ligatures, texligatures and extra +-- kerns. This saves quite some lookups later. local addthem = function(rawdata,ligatures) if ligatures then @@ -349,17 +343,14 @@ local function enhance_add_ligatures(rawdata) addthem(rawdata,afm.helpdata.ligatures) end ---[[ldx-- -We keep the extra kerns in separate kerning tables so that we can use -them selectively.
---ldx]]--
-
--- This is rather old code (from the beginning when we had only tfm). If
--- we unify the afm data (now we have names all over the place) then
--- we can use shcodes but there will be many more looping then. But we
--- could get rid of the tables in char-cmp then. Als, in the generic version
--- we don't use the character database. (Ok, we can have a context specific
--- variant).
+-- We keep the extra kerns in separate kerning tables so that we can use them
+-- selectively.
+--
+-- This is rather old code (from the beginning when we had only tfm). If we unify
+-- the afm data (now we have names all over the place) then we can use shcodes but
+-- there will be many more looping then. But we could get rid of the tables in
+-- char-cmp then. Also, in the generic version we don't use the character database.
+-- (Ok, we can have a context specific variant).
local function enhance_add_extra_kerns(rawdata) -- using shcodes is not robust here
local descriptions = rawdata.descriptions
@@ -440,9 +431,7 @@ local function enhance_add_extra_kerns(rawdata) -- using shcodes is not robust h
do_it_copy(afm.helpdata.rightkerned)
end
---[[ldx--
-The copying routine looks messy (and is indeed a bit messy).
---ldx]]-- +-- The copying routine looks messy (and is indeed a bit messy). local function adddimensions(data) -- we need to normalize afm to otf i.e. indexed table instead of name if data then @@ -619,11 +608,9 @@ end return nil end ---[[ldx-- -Originally we had features kind of hard coded for
As soon as we could intercept the
We have the usual two modes and related features initializers and processors.
---ldx]]-- +-- We have the usual two modes and related features initializers and processors. registerafmfeature { name = "mode", diff --git a/tex/context/base/mkxl/font-onr.lmt b/tex/context/base/mkxl/font-onr.lmt index d28c247df..04f9d3bb2 100644 --- a/tex/context/base/mkxl/font-onr.lmt +++ b/tex/context/base/mkxl/font-onr.lmt @@ -7,18 +7,16 @@ if not modules then modules = { } end modules ['font-onr'] = { license = "see context related readme files" } ---[[ldx-- -Some code may look a bit obscure but this has to do with the fact that we also use
-this code for testing and much code evolved in the transition from
The following code still has traces of intermediate font support where we handles -font encodings. Eventually font encoding went away but we kept some code around in -other modules.
- -This version implements a node mode approach so that users can also more easily -add features.
---ldx]]--
+-- Some code may look a bit obscure but this has to do with the fact that we also
+-- use this code for testing and much code evolved in the transition from TFM to AFM
+-- to OTF.
+--
+-- The following code still has traces of intermediate font support where we handled
+-- font encodings. Eventually font encoding went away but we kept some code around
+-- in other modules.
+--
+-- This version implements a node mode approach so that users can also more easily
+-- add features.
local fonts, logs, trackers, resolvers = fonts, logs, trackers, resolvers
@@ -49,12 +47,9 @@ pfb.version = 1.002
local readers = afm.readers or { }
afm.readers = readers
---[[ldx--
-We start with the basic reader which we give a name similar to the built in
We use a new (unfinished) pfb loader but I see no differences between the old -and new vectors (we actually had one bad vector with the old loader).
---ldx]]-- +-- We start with the basic reader which we give a name similar to the built in TFM +-- and OTF reader. We use a PFB loader but I see no differences between the old and +-- new vectors (we actually had one bad vector with the old loader). local get_indexes, get_shapes @@ -71,7 +66,7 @@ do -- local plain = bxor(cipher,rshift(r,8)) local plain = (cipher ~ ((r >> 8) & 0xFFFFFFFF)) -- r = ((cipher + r) * c1 + c2) % 65536 - r = ((cipher + r) * c1 + c2) % 0x10000 + r = ((cipher + r) * c1 + c2) % 0x10000 return char(plain) end @@ -366,11 +361,10 @@ do end ---[[ldx-- -We start with the basic reader which we give a name similar to the built in
Analyzers run per script and/or language and are needed in order to -process features right.
---ldx]]-- +-- Analyzers run per script and/or language and are needed in order to process +-- features right. local setstate = nuts.setstate local getstate = nuts.getstate diff --git a/tex/context/base/mkxl/font-ots.lmt b/tex/context/base/mkxl/font-ots.lmt index e7fcfc576..0e99de6d1 100644 --- a/tex/context/base/mkxl/font-ots.lmt +++ b/tex/context/base/mkxl/font-ots.lmt @@ -7,92 +7,90 @@ if not modules then modules = { } end modules ['font-ots'] = { -- sequences license = "see context related readme files", } ---[[ldx-- -I need to check the description at the microsoft site ... it has been improved -so maybe there are some interesting details there. Most below is based on old and -incomplete documentation and involved quite a bit of guesswork (checking with the -abstract uniscribe of those days. But changing things is tricky!
- -This module is a bit more split up that I'd like but since we also want to test
-with plain
The specification of OpenType is (or at least decades ago was) kind of vague. -Apart from a lack of a proper free specifications there's also the problem that -Microsoft and Adobe may have their own interpretation of how and in what order to -apply features. In general the Microsoft website has more detailed specifications -and is a better reference. There is also some information in the FontForge help -files. In the end we rely most on the Microsoft specification.
- -Because there is so much possible, fonts might contain bugs and/or be made to -work with certain rederers. These may evolve over time which may have the side -effect that suddenly fonts behave differently. We don't want to catch all font -issues.
- -After a lot of experiments (mostly by Taco, me and Idris) the first implementation
-was already quite useful. When it did most of what we wanted, a more optimized version
-evolved. Of course all errors are mine and of course the code can be improved. There
-are quite some optimizations going on here and processing speed is currently quite
-acceptable and has been improved over time. Many complex scripts are not yet supported
-yet, but I will look into them as soon as
The specification leaves room for interpretation. In case of doubt the Microsoft -implementation is the reference as it is the most complete one. As they deal with -lots of scripts and fonts, Kai and Ivo did a lot of testing of the generic code and -their suggestions help improve the code. I'm aware that not all border cases can be -taken care of, unless we accept excessive runtime, and even then the interference -with other mechanisms (like hyphenation) are not trivial.
- -Especially discretionary handling has been improved much by Kai Eigner who uses complex -(latin) fonts. The current implementation is a compromis between his patches and my code -and in the meantime performance is quite ok. We cannot check all border cases without -compromising speed but so far we're okay. Given good test cases we can probably improve -it here and there. Especially chain lookups are non trivial with discretionaries but -things got much better over time thanks to Kai.
- -Glyphs are indexed not by unicode but in their own way. This is because there is no
-relationship with unicode at all, apart from the fact that a font might cover certain
-ranges of characters. One character can have multiple shapes. However, at the
-
The initial data table is rather close to the open type specification and also not
-that different from the one produced by
This module is sparsely documented because it is has been a moving target. The -table format of the reader changed a bit over time and we experiment a lot with -different methods for supporting features. By now the structures are quite stable
- -Incrementing the version number will force a re-cache. We jump the number by one -when there's a fix in the reader or processing code that can result in different -results.
- -This code is also used outside context but in context it has to work with other -mechanisms. Both put some constraints on the code here.
- ---ldx]]-- - --- Remark: We assume that cursives don't cross discretionaries which is okay because it --- is only used in semitic scripts. +-- I need to check the description at the microsoft site ... it has been improved so +-- maybe there are some interesting details there. Most below is based on old and +-- incomplete documentation and involved quite a bit of guesswork (checking with the +-- abstract uniscribe of those days. But changing things is tricky! +-- +-- This module is a bit more split up that I'd like but since we also want to test +-- with plain TeX it has to be so. This module is part of ConTeXt and discussion +-- about improvements and functionality mostly happens on the ConTeXt mailing list. +-- +-- The specification of OpenType is (or at least decades ago was) kind of vague. +-- Apart from a lack of a proper free specifications there's also the problem that +-- Microsoft and Adobe may have their own interpretation of how and in what order to +-- apply features. In general the Microsoft website has more detailed specifications +-- and is a better reference. There is also some information in the FontForge help +-- files. In the end we rely most on the Microsoft specification. +-- +-- Because there is so much possible, fonts might contain bugs and/or be made to +-- work with certain rederers. These may evolve over time which may have the side +-- effect that suddenly fonts behave differently. We don't want to catch all font +-- issues. +-- +-- After a lot of experiments (mostly by Taco, me and Idris) the first +-- implementation was already quite useful. When it did most of what we wanted, a +-- more optimized version evolved. Of course all errors are mine and of course the +-- code can be improved. There are quite some optimizations going on here and +-- processing speed is currently quite acceptable and has been improved over time. 
+-- Many complex scripts are not yet supported, but I will look into them as soon
+-- as ConTeXt users ask for it.
+--
+-- The specification leaves room for interpretation. In case of doubt the Microsoft
+-- implementation is the reference as it is the most complete one. As they deal with
+-- lots of scripts and fonts, Kai and Ivo did a lot of testing of the generic code
+-- and their suggestions help improve the code. I'm aware that not all border cases
+-- can be taken care of, unless we accept excessive runtime, and even then the
+-- interference with other mechanisms (like hyphenation) is not trivial.
+--
+-- Especially discretionary handling has been improved much by Kai Eigner who uses
+-- complex (latin) fonts. The current implementation is a compromise between his
+-- patches and my code and in the meantime performance is quite ok. We cannot check
+-- all border cases without compromising speed but so far we're okay. Given good
+-- test cases we can probably improve it here and there. Especially chain lookups
+-- are non trivial with discretionaries but things got much better over time thanks
+-- to Kai.
+--
+-- Glyphs are indexed not by unicode but in their own way. This is because there is
+-- no relationship with unicode at all, apart from the fact that a font might cover
+-- certain ranges of characters. One character can have multiple shapes. However, at
+-- the TeX end we use unicode and all extra glyphs are mapped into a private
+-- space. This is needed because we need to access them and TeX has to include them
+-- in the output eventually.
+--
+-- The initial data table is rather close to the open type specification and also
+-- not that different from the one produced by Fontforge but we use hashes instead.
+-- In ConTeXt that table is packed (similar tables are shared) and cached on disk so
+-- that successive runs can use the optimized table (after loading the table is
+-- unpacked).
+--
+-- This module is sparsely documented because it has been a moving target. The
+-- table format of the reader changed a bit over time and we experiment a lot with
+-- different methods for supporting features. By now the structures are quite stable.
+--
+-- Incrementing the version number will force a re-cache. We jump the number by one
+-- when there's a fix in the reader or processing code that can result in different
+-- results.
+--
+-- This code is also used outside ConTeXt but in ConTeXt it has to work with other
+-- mechanisms. Both put some constraints on the code here.
+--
+-- Remark: We assume that cursives don't cross discretionaries which is okay because
+-- it is only used in semitic scripts.
--
-- Remark: We assume that marks precede base characters.
--
--- Remark: When complex ligatures extend into discs nodes we can get side effects. Normally
--- this doesn't happen; ff\d{l}{l}{l} in lm works but ff\d{f}{f}{f}.
+-- Remark: When complex ligatures extend into discs nodes we can get side effects.
+-- Normally this doesn't happen; ff\d{l}{l}{l} in lm works but ff\d{f}{f}{f} doesn't.
--
-- Todo: check if we copy attributes to disc nodes if needed.
--
--- Todo: it would be nice if we could get rid of components. In other places we can use
--- the unicode properties. We can just keep a lua table.
+-- Todo: it would be nice if we could get rid of components. In other places we can
+-- use the unicode properties. We can just keep a lua table.
--
--- Remark: We do some disc juggling where we need to keep in mind that the pre, post and
--- replace fields can have prev pointers to a nesting node ... I wonder if that is still
--- needed.
+-- Remark: We do some disc juggling where we need to keep in mind that the pre, post
+-- and replace fields can have prev pointers to a nesting node ... I wonder if that
+-- is still needed.
-- -- Remark: This is not possible: -- @@ -1092,10 +1090,8 @@ function handlers.gpos_pair(head,start,dataset,sequence,kerns,rlmode,skiphash,st end end ---[[ldx-- -We get hits on a mark, but we're not sure if the it has to be applied so -we need to explicitly test for basechar, baselig and basemark entries.
---ldx]]--
+-- We get hits on a mark, but we're not sure if it has to be applied so we need
+-- to explicitly test for basechar, baselig and basemark entries.
function handlers.gpos_mark2base(head,start,dataset,sequence,markanchors,rlmode,skiphash)
local markchar = getchar(start)
@@ -1292,10 +1288,8 @@ function handlers.gpos_cursive(head,start,dataset,sequence,exitanchors,rlmode,sk
return head, start, false
end
---[[ldx--
-I will implement multiple chain replacements once I run into a font that uses
-it. It's not that complex to handle.
---ldx]]-- +-- I will implement multiple chain replacements once I run into a font that uses it. +-- It's not that complex to handle. local chainprocs = { } @@ -1348,29 +1342,22 @@ end chainprocs.reversesub = reversesub ---[[ldx-- -This chain stuff is somewhat tricky since we can have a sequence of actions to be -applied: single, alternate, multiple or ligature where ligature can be an invalid -one in the sense that it will replace multiple by one but not neccessary one that -looks like the combination (i.e. it is the counterpart of multiple then). For -example, the following is valid:
- -Therefore we we don't really do the replacement here already unless we have the -single lookup case. The efficiency of the replacements can be improved by deleting -as less as needed but that would also make the code even more messy.
---ldx]]-- - ---[[ldx-- -Here we replace start by a single variant.
---ldx]]--
-
--- To be done (example needed): what if > 1 steps
-
--- this is messy: do we need this disc checking also in alternates?
+-- This chain stuff is somewhat tricky since we can have a sequence of actions to be
+-- applied: single, alternate, multiple or ligature where ligature can be an invalid
+-- one in the sense that it will replace multiple by one but not necessarily one that
+-- looks like the combination (i.e. it is the counterpart of multiple then). For
+-- example, the following is valid:
+--
+-- xxxabcdexxx [single a->A][multiple b->BCD][ligature cde->E] xxxABCDExxx
+--
+-- Therefore we don't really do the replacement here already unless we have the
+-- single lookup case. The efficiency of the replacements can be improved by
+-- deleting as little as needed but that would also make the code even more messy.
+--
+-- Here we replace start by a single variant.
+--
+-- To be done : what if > 1 steps (example needed)
+-- This is messy: do we need this disc checking also in alternates?
local function reportzerosteps(dataset,sequence)
logwarning("%s: no steps",cref(dataset,sequence))
@@ -1446,9 +1433,7 @@ function chainprocs.gsub_single(head,start,stop,dataset,sequence,currentlookup,r
return head, start, false
end
---[[ldx--
-Here we replace start by new glyph. First we delete the rest of the match.
---ldx]]-- +-- Here we replace start by new glyph. First we delete the rest of the match. -- char_1 mark_1 -> char_x mark_1 (ignore marks) -- char_1 mark_1 -> char_x @@ -1500,9 +1485,7 @@ function chainprocs.gsub_alternate(head,start,stop,dataset,sequence,currentlooku return head, start, false end ---[[ldx-- -Here we replace start by a sequence of new glyphs.
---ldx]]-- +-- Here we replace start by a sequence of new glyphs. function chainprocs.gsub_multiple(head,start,stop,dataset,sequence,currentlookup,rlmode,skiphash,chainindex) local mapping = currentlookup.mapping @@ -1526,11 +1509,9 @@ function chainprocs.gsub_multiple(head,start,stop,dataset,sequence,currentlookup return head, start, false end ---[[ldx-- -When we replace ligatures we use a helper that handles the marks. I might change -this function (move code inline and handle the marks by a separate function). We -assume rather stupid ligatures (no complex disc nodes).
---ldx]]-- +-- When we replace ligatures we use a helper that handles the marks. I might change +-- this function (move code inline and handle the marks by a separate function). We +-- assume rather stupid ligatures (no complex disc nodes). -- compare to handlers.gsub_ligature which is more complex ... why diff --git a/tex/context/base/mkxl/font-tfm.lmt b/tex/context/base/mkxl/font-tfm.lmt index 9fce8fc5f..d6857b39e 100644 --- a/tex/context/base/mkxl/font-tfm.lmt +++ b/tex/context/base/mkxl/font-tfm.lmt @@ -50,21 +50,18 @@ constructors.resolvevirtualtoo = false -- wil be set in font-ctx.lua fonts.formats.tfm = "type1" -- we need to have at least a value here fonts.formats.ofm = "type1" -- we need to have at least a value here ---[[ldx-- -The next function encapsulates the standard
Hyphenating
Callbacks are the real asset of
When you (temporarily) want to install a callback function, and after a -while wants to revert to the original one, you can use the following two -functions. This only works for non-frozen ones.
---ldx]]--
+-- When you (temporarily) want to install a callback function, and after a while
+-- want to revert to the original one, you can use the following two functions.
+-- This only works for non-frozen ones.
local trace_callbacks = false trackers.register("system.callbacks", function(v) trace_callbacks = v end)
local trace_calls = false -- only used when analyzing performance and initializations
@@ -47,13 +43,12 @@ local list = callbacks.list
local permit_overloads = false
local block_overloads = false
---[[ldx--
-By now most callbacks are frozen and most provide a way to plug in your own code. For instance
-all node list handlers provide before/after namespaces and the file handling code can be extended
-by adding schemes and if needed I can add more hooks. So there is no real need to overload a core
-callback function. It might be ok for quick and dirty testing but anyway you're on your own if
-you permanently overload callback functions.
---ldx]]-- +-- By now most callbacks are frozen and most provide a way to plug in your own code. +-- For instance all node list handlers provide before/after namespaces and the file +-- handling code can be extended by adding schemes and if needed I can add more +-- hooks. So there is no real need to overload a core callback function. It might be +-- ok for quick and dirty testing but anyway you're on your own if you permanently +-- overload callback functions. -- This might become a configuration file only option when it gets abused too much. diff --git a/tex/context/base/mkxl/luat-cod.mkxl b/tex/context/base/mkxl/luat-cod.mkxl index ed4a13981..322076aa1 100644 --- a/tex/context/base/mkxl/luat-cod.mkxl +++ b/tex/context/base/mkxl/luat-cod.mkxl @@ -42,7 +42,7 @@ \toksapp \everydump {% \permanent\let\ctxlatelua \latelua \permanent\def\ctxlatecommand#1{\latelua{commands.#1}}% - \aliased\let\lateluacode \ctxlatelua + \aliased\let\lateluacode \ctxlatelua } % no \appendtoks yet \protect \endinput diff --git a/tex/context/base/mkxl/luat-ini.lmt b/tex/context/base/mkxl/luat-ini.lmt index 3202ea42b..56e3bd1c1 100644 --- a/tex/context/base/mkxl/luat-ini.lmt +++ b/tex/context/base/mkxl/luat-ini.lmt @@ -6,11 +6,9 @@ if not modules then modules = { } end modules ['luat-ini'] = { license = "see context related readme files" } ---[[ldx-- -We cannot load anything yet. However what we will do us reserve a few tables. -These can be used for runtime user data or third party modules and will not be -cluttered by macro package code.
---ldx]]-- +-- We cannot load anything yet. However what we will do is reserve a few tables. +-- These can be used for runtime user data or third party modules and will not be +-- cluttered by macro package code. userdata = userdata or { } -- for users (e.g. functions etc) thirddata = thirddata or { } -- only for third party modules diff --git a/tex/context/base/mkxl/math-act.lmt b/tex/context/base/mkxl/math-act.lmt index 0c75147f6..4a46baff9 100644 --- a/tex/context/base/mkxl/math-act.lmt +++ b/tex/context/base/mkxl/math-act.lmt @@ -533,7 +533,7 @@ do k = mathgaps[k] or k local character = targetcharacters[k] if character then --- if not character.tweaked then -- todo: add a force + -- if not character.tweaked then -- todo: add a force local t = type(v) if t == "number" then v = list[v] @@ -666,7 +666,7 @@ do else report_mathtweak("invalid dimension entry %U",k) end --- character.tweaked = true + -- character.tweaked = true if v.all then local nxt = character.next if nxt then @@ -680,7 +680,7 @@ do end end end --- end + -- end else report_tweak("no character %U",target,original,k) end @@ -1938,63 +1938,178 @@ do -- vfmath.builders.extension(target) local rbe = newprivateslot("radical bar extender") + local fbe = newprivateslot("fraction bar extender") + + local frp = { + newprivateslot("flat rule left piece"), + newprivateslot("flat rule middle piece"), + newprivateslot("flat rule right piece"), + } + + local rrp = { + newprivateslot("radical rule middle piece"), + newprivateslot("radical rule right piece"), + } + + local mrp = { + newprivateslot("minus rule left piece"), + newprivateslot("minus rule middle piece"), + newprivateslot("minus rule right piece"), + } - local function useminus(unicode,characters,parameters) + local function useminus(target,unicode,characters,parameters,skipfirst,what) local minus = characters[0x2212] - local xoffset = parameters.xoffset or .075 - local yoffset = parameters.yoffset or .9 - local xscale = parameters.xscale or 1 - local
yscale = parameters.yscale or 1 - local xwidth = parameters.width or (1 - 2*xoffset) - local xheight = parameters.height or (1 - yoffset) - local mheight = minus.height - local mwidth = minus.width - local height = xheight*mheight - local xshift = xoffset * mwidth - local yshift = yoffset * mheight - local advance = xwidth * mwidth - local step = mwidth / 2 - characters[unicode] = { - height = height, - depth = height, - width = advance, - commands = { - push, - leftcommand[xshift], - downcommand[yshift], - -- slotcommand[0][0x2212], - { "slot", 0, 0x2212, xscale, yscale }, - pop, - }, - unicode = unicode, - -- parts = { - -- { extender = 0, glyph = first, ["end"] = fw/2, start = 0, advance = fw }, - -- { extender = 1, glyph = middle, ["end"] = mw/2, start = mw/2, advance = mw }, - -- { extender = 0, glyph = last, ["end"] = 0, start = lw/2, advance = lw }, - -- }, - parts = { - { extender = 0, glyph = unicode, ["end"] = step, start = 0, advance = advance }, - { extender = 1, glyph = unicode, ["end"] = step, start = step, advance = advance }, - }, - partsorientation = "horizontal", - } + local parts = minus.parts + if parameters == true then + parameters = { } + end + if parts then + parts = copytable(parts) + local xscale = parameters.xscale or 1 + local yscale = parameters.yscale or 1 + local mheight = minus.height + local height = (parameters.height or 1) * mheight + local yshift = (parameters.yoffset or 0) * mheight + if skipfirst then + table.remove(parts,1) + end + height = height / 2 + yshift = yshift + height + for i=1,#parts do + local part = parts[i] + local glyph = part.glyph + local gdata = characters[glyph] + local width = gdata.width + local xshift = 0 + if i == 1 and parameters.leftoffset then + xshift = (parameters.leftoffset) * width + width = width - xshift + elseif i == #parts and parameters.rightoffset then + width = (1 + parameters.rightoffset) * width + end + characters[what[i]] = { + height = height, + depth = height, + width = width, + 
commands = { + leftcommand[xshift], + downcommand[yshift], +-- slotcommand[0][glyph], + { "slot", 0, glyph, xscale, yscale }, + }, + } + part.glyph = what[i] + part.advance = width + end + characters[unicode] = { + height = height, + depth = height, + width = advance, + commands = { + downcommand[yshift], +-- slotcommand[0][0x2212], + { "slot", 0, 0x2212, xscale, yscale }, + }, + unicode = unicode, + parts = parts, + partsorientation = "horizontal", + } + end + end + + -- add minus parts if not there and create clipped clone + + local function checkminus(target,unicode,characters,parameters,skipfirst,what) + local minus = characters[unicode] + local parts = minus.parts + if parameters == true then + parameters = { } + end + local p_normal = 0 + local p_flat = 0 + local mwidth = minus.width + local height = minus.height + local depth = minus.depth + local loffset = parameters.leftoffset or 0 + local roffset = parameters.rightoffset or 0 + local lshift = mwidth * loffset + local rshift = mwidth * roffset + local width = mwidth - lshift - rshift + if parts then + -- print("minus has parts") + if lshift ~= 0 or width ~= mwidth then + parts = copytable(parts) + for i=1,#parts do + local part = parts[i] + local glyph = part.glyph + local gdata = characters[glyph] + local width = gdata.width + local advance = part.advance + local lshift = 0 + if i == 1 and loffset ~= 0 then + lshift = loffset * width + width = width - lshift + advance = advance - lshift + elseif i == #parts and roffset ~= 0 then + width = width - rshift + advance = advance - rshift + end + characters[what[i]] = { + height = height, + depth = depth, + width = width, + commands = { + leftcommand[lshift], + slotcommand[0][glyph], + }, + } + part.glyph = what[i] + part.advance = advance + end + minus.parts = parts + minus.partsorientation = "horizontal" + + end + else + local f_normal = formatters["M-NORMAL-%H"](unicode) + -- local p_normal = hasprivate(main,f_normal) + p_normal = addprivate(target,f_normal,{ +
height = height, + width = width, + commands = { + push, + leftcommand[lshift], + slotcommand[0][unicode], + pop, + }, + }) + local step = width/2 + minus.parts = { + { extender = 0, glyph = p_normal, ["end"] = step, start = 0, advance = width }, + { extender = 1, glyph = p_normal, ["end"] = step, start = step, advance = width }, + { extender = 0, glyph = p_normal, ["end"] = 0, start = step, advance = width }, + } + minus.partsorientation = "horizontal" + end end function mathtweaks.replacerules(target,original,parameters) local characters = target.characters + local minus = parameters.minus local fraction = parameters.fraction local radical = parameters.radical + local stacker = parameters.stacker + if minus then + checkminus(target,0x2212,characters,minus,false,mrp) + end if fraction then - local template = fraction.template - if template == 0x2212 or template == "minus" then - useminus(0x203E,characters,fraction) - end + useminus(target,fbe,characters,fraction,false,frp) end if radical then - local template = radical.template - if template == 0x2212 or template == "minus" then - useminus(rbe,characters,radical) - end + useminus(target,rbe,characters,radical,true,rrp) + end + if stacker then + useminus(target,0x203E,characters,stacker,false,frp) end end @@ -2110,6 +2225,7 @@ do return { -- [0x002D] = { { left = slack, right = slack, glyph = 0x2212 }, single }, -- rel +-- [0x2212] = { { left = slack, right = slack, glyph = 0x2212 }, single }, -- rel -- [0x2190] = leftsingle, -- leftarrow [0x219E] = leftsingle, -- twoheadleftarrow @@ -3091,59 +3207,6 @@ do local doubleRemapping mathematics alphabets.
---ldx]]-- - --- oldstyle: not really mathematics but happened to be part of --- the mathematics fonts in cmr --- --- persian: we will also provide mappers for other --- scripts - --- todo: alphabets namespace --- maybe: script/scriptscript dynamic, - --- superscripped primes get unscripted ! - --- to be looked into once the fonts are ready (will become font --- goodie): --- --- (U+2202,U+1D715) : upright --- (U+2202,U+1D715) : italic --- (U+2202,U+1D715) : upright --- --- plus add them to the regular vectors below so that they honor \it etc +-- persian: we will also provide mappers for other scripts +-- todo : alphabets namespace +-- maybe : script/scriptscript dynamic, +-- check : (U+2202,U+1D715) : upright +-- (U+2202,U+1D715) : italic +-- (U+2202,U+1D715) : upright +-- add them to the regular vectors below so that they honor \it etc local type, next = type, next local merged, sortedhash = table.merged, table.sortedhash diff --git a/tex/context/base/mkxl/math-noa.lmt b/tex/context/base/mkxl/math-noa.lmt index 4a0cb5744..f64783ed9 100644 --- a/tex/context/base/mkxl/math-noa.lmt +++ b/tex/context/base/mkxl/math-noa.lmt @@ -890,39 +890,43 @@ do local data = fontdata[font] local characters = data.characters local olddata = characters[oldchar] --- local oldheight = olddata.height or 0 --- local olddepth = olddata.depth or 0 - local template = olddata.varianttemplate - local newchar = mathematics.big(data,template or oldchar,size,method) - local newdata = characters[newchar] - local newheight = newdata.height or 0 - local newdepth = newdata.depth or 0 - if template then --- local ratio = (newheight + newdepth) / (oldheight + olddepth) --- setheight(pointer,ratio * oldheight) --- setdepth(pointer,ratio * olddepth) - setheight(pointer,newheight) - setdepth(pointer,newdepth) - if not olddata.extensible then - -- check this on bonum and antykwa - setoptions(pointer,0) - end - if trace_fences then --- report_fences("replacing %C using method %a, size %a, template %C and 
ratio %.3f",newchar,method,size,template,ratio) - report_fences("replacing %C using method %a, size %a and template %C",newchar,method,size,template) - end - else - -- 1 scaled point is a signal, for now - if ht == 1 then + if olddata then +-- local oldheight = olddata.height or 0 +-- local olddepth = olddata.depth or 0 + local template = olddata.varianttemplate + local newchar = mathematics.big(data,template or oldchar,size,method) + local newdata = characters[newchar] + local newheight = newdata.height or 0 + local newdepth = newdata.depth or 0 + if template then +-- local ratio = (newheight + newdepth) / (oldheight + olddepth) +-- setheight(pointer,ratio * oldheight) +-- setdepth(pointer,ratio * olddepth) setheight(pointer,newheight) - end - if dp == 1 then setdepth(pointer,newdepth) + if not olddata.extensible then + -- check this on bonum and antykwa + setoptions(pointer,0) + end + if trace_fences then +-- report_fences("replacing %C using method %a, size %a, template %C and ratio %.3f",newchar,method,size,template,ratio) + report_fences("replacing %C using method %a, size %a and template %C",newchar,method,size,template) + end + else + -- 1 scaled point is a signal, for now + if ht == 1 then + setheight(pointer,newheight) + end + if dp == 1 then + setdepth(pointer,newdepth) + end + setchar(delimiter,newchar) + if trace_fences then + report_fences("replacing %C by %C using method %a and size %a",oldchar,char,method,size) + end end - setchar(delimiter,newchar) - if trace_fences then - report_fences("replacing %C by %C using method %a and size %a",oldchar,char,method,size) - end + elseif trace_fences then + report_fences("not replacing %C using method %a and size %a",oldchar,method,size) end end end diff --git a/tex/context/base/mkxl/math-rad.mklx b/tex/context/base/mkxl/math-rad.mklx index 863bb2128..ee91243e0 100644 --- a/tex/context/base/mkxl/math-rad.mklx +++ b/tex/context/base/mkxl/math-rad.mklx @@ -378,6 +378,12 @@ \integerdef\delimitedrightanutityuc 
\privatecharactercode{delimited right annuity} \integerdef\radicalbarextenderuc \privatecharactercode{radical bar extender} +%D We now default to nice bars: + +\setupmathradical + [\c!rule=\v!symbol, + \c!top=\radicalbarextenderuc] + \definemathradical [rannuity] [\c!left=\zerocount, diff --git a/tex/context/base/mkxl/math-spa.lmt b/tex/context/base/mkxl/math-spa.lmt index d2927ff58..a575b1714 100644 --- a/tex/context/base/mkxl/math-spa.lmt +++ b/tex/context/base/mkxl/math-spa.lmt @@ -41,6 +41,7 @@ local getnormalizedline = node.direct.getnormalizedline local getbox = nuts.getbox local setoffsets = nuts.setoffsets local addxoffset = nuts.addxoffset +local setattrlist = nuts.setattrlist local nextglue = nuts.traversers.glue local nextlist = nuts.traversers.list @@ -48,7 +49,9 @@ local nextboundary = nuts.traversers.boundary local nextnode = nuts.traversers.node local insertafter = nuts.insertafter +local insertbefore = nuts.insertbefore local newkern = nuts.pool.kern +local newstrutrule = nuts.pool.strutrule local texsetdimen = tex.setdimen local texgetdimen = tex.getdimen @@ -68,6 +71,10 @@ local d_strc_math_first_height = texisdimen("d_strc_math_first_height") local d_strc_math_last_depth = texisdimen("d_strc_math_last_depth") local d_strc_math_indent = texisdimen("d_strc_math_indent") +local report = logs.reporter("mathalign") + +local trace = false trackers.register("mathalign",function(v) trace = v end ) + local function moveon(s) for n, id, subtype in nextnode, getnext(s) do s = n @@ -138,15 +145,20 @@ stages[1] = function(specification,stage) p = getprev(p) end end - -- we use a hangindent so we need to treat the first one - local f = found[1] - local delta = f[2] - max - if delta ~= 0 then - insertafter(head,moveon(head),newkern(-delta)) - end - for i=2,#found do + for i=1,#found do local f = found[i] - insertafter(head,moveon(f[3]),newkern(-f[2])) -- check head + local w = f[2] + local d = i == 1 and (max-w) or -w + local k = newkern(d) + local r = 
newstrutrule(0,2*65536,2*65536) + local s = moveon(f[3]) + if trace then + report("row %i, width %p, delta %p",i,w,d) + end + setattrlist(r,head) + setattrlist(k,head) + insertbefore(head,s,r) + insertafter(head,r,k) end end texsetdimen("global",d_strc_math_indent,max) diff --git a/tex/context/base/mkxl/math-stc.mklx b/tex/context/base/mkxl/math-stc.mklx index fdad71978..5a701426a 100644 --- a/tex/context/base/mkxl/math-stc.mklx +++ b/tex/context/base/mkxl/math-stc.mklx @@ -1043,7 +1043,7 @@ \definemathstackers [\v!medium] [\v!mathematics] [\c!hoffset=1.5\mathemwidth] \definemathstackers [\v!big] [\v!mathematics] [\c!hoffset=2\mathemwidth] -\definemathextensible [\v!reverse] [xrel] ["002D] +\definemathextensible [\v!reverse] [xrel] ["2212] % ["002D] \definemathextensible [\v!reverse] [xequal] ["003D] \definemathextensible [\v!reverse] [xleftarrow] ["2190] % ["27F5] \definemathextensible [\v!reverse] [xrightarrow] ["2192] % ["27F6] @@ -1066,7 +1066,7 @@ \definemathextensible [\v!reverse] [xrightleftharpoons] ["21CC] \definemathextensible [\v!reverse] [xtriplerel] ["2261] -\definemathextensible [\v!mathematics] [mrel] ["002D] +\definemathextensible [\v!mathematics] [mrel] ["2212] % ["002D] \definemathextensible [\v!mathematics] [mequal] ["003D] \definemathextensible [\v!mathematics] [mleftarrow] ["2190] % ["27F5] \definemathextensible [\v!mathematics] [mrightarrow] ["2192] % ["27F6] @@ -1089,7 +1089,7 @@ \definemathextensible [\v!mathematics] [mrightleftharpoons] ["21CC] \definemathextensible [\v!mathematics] [mtriplerel] ["2261] -\definemathextensible [\v!text] [trel] ["002D] +\definemathextensible [\v!text] [trel] ["2212] % ["002D] \definemathextensible [\v!text] [tequal] ["003D] \definemathextensible [\v!text] [tmapsto] ["21A6] \definemathextensible [\v!text] [tleftarrow] ["2190] % ["27F5] @@ -1168,9 +1168,9 @@ %D in the backend (okay, we still need to deal with some cut and paste issues but at %D least we now know what we deal with. 
-\definemathoverextensible [\v!vfenced] [overbar] ["203E] -\definemathunderextensible [\v!vfenced] [underbar] ["203E] % ["0332] -\definemathdoubleextensible [\v!vfenced] [doublebar] ["203E] ["203E] % ["0332] +\definemathoverextensible [\v!vfenced] [overbar] ["203E] % todo: private +\definemathunderextensible [\v!vfenced] [underbar] ["203E] % todo: private +\definemathdoubleextensible [\v!vfenced] [doublebar] ["203E] ["203E] % todo: private \definemathoverextensible [\v!vfenced] [overbrace] ["23DE] \definemathunderextensible [\v!vfenced] [underbrace] ["23DF] @@ -1186,13 +1186,13 @@ %D For mathml: -\definemathdoubleextensible [\v!both] [overbarunderbar] ["203E] ["203E] +\definemathdoubleextensible [\v!both] [overbarunderbar] ["203E] ["203E] % todo: private \definemathdoubleextensible [\v!both] [overbraceunderbrace] ["23DE] ["23DF] \definemathdoubleextensible [\v!both] [overparentunderparent] ["23DC] ["23DD] \definemathdoubleextensible [\v!both] [overbracketunderbracket] ["23B4] ["23B5] -\definemathovertextextensible [\v!bothtext] [overbartext] ["203E] -\definemathundertextextensible [\v!bothtext] [underbartext] ["203E] +\definemathovertextextensible [\v!bothtext] [overbartext] ["203E] % todo: private +\definemathundertextextensible [\v!bothtext] [underbartext] ["203E] % todo: private \definemathovertextextensible [\v!bothtext] [overbracetext] ["23DE] \definemathundertextextensible [\v!bothtext] [underbracetext] ["23DF] \definemathovertextextensible [\v!bothtext] [overparenttext] ["23DC] @@ -1285,8 +1285,8 @@ \permanent\tolerant\protected\def\defineextensiblefiller[#1]#*[#2]% {\frozen\instance\edefcsname#1\endcsname{\mathfiller{\number#2}}} -%defineextensiblefiller [barfill] ["203E] % yet undefined -\defineextensiblefiller [relfill] ["002D] +%defineextensiblefiller [barfill] ["203E] % % todo: private +\defineextensiblefiller [relfill] ["2212] % ["002D] \defineextensiblefiller [equalfill] ["003D] \defineextensiblefiller [leftarrowfill] ["2190] \defineextensiblefiller 
[rightarrowfill] ["2192] diff --git a/tex/context/base/mkxl/math-twk.mkxl b/tex/context/base/mkxl/math-twk.mkxl index 6ffb36818..6e015d3de 100644 --- a/tex/context/base/mkxl/math-twk.mkxl +++ b/tex/context/base/mkxl/math-twk.mkxl @@ -95,5 +95,12 @@ \permanent\protected\def\minute{\iffontchar\font\textminute\textminute\else\mathminute\fi} \permanent\protected\def\second{\iffontchar\font\textsecond\textsecond\else\mathsecond\fi} +% \startsetups[math:rules] +% \letmathfractionparameter\c!rule\v!symbol +% \setmathfractionparameter\c!middle{"203E}% +% \letmathradicalparameter \c!rule\v!symbol +% \setmathradicalparameter \c!top{\radicalbarextenderuc}% +% \setmathfenceparameter \c!alternative{1}% +% \stopsetups \protect diff --git a/tex/context/base/mkxl/math-vfu.lmt b/tex/context/base/mkxl/math-vfu.lmt index 0a2b440a1..1639517b5 100644 --- a/tex/context/base/mkxl/math-vfu.lmt +++ b/tex/context/base/mkxl/math-vfu.lmt @@ -83,27 +83,37 @@ nps("flat double rule left piece") nps("flat double rule middle piece") nps("flat double rule right piece") +nps("minus rule left piece") +nps("minus rule middle piece") +nps("minus rule right piece") + do - local function horibar(main,unicode,rule,left,right,normal) + -- this overlaps with math-act + + local function horibar(main,unicode,rule,left,right,normal,force,m,l,r) local characters = main.characters - if not characters[unicode] then + local data = characters[unicode] + if force or not data then local height = main.mathparameters.defaultrulethickness or 4*65536/10 - local f_rule = rule and formatters["M-HORIBAR-RULE-%H"](rule) - local p_rule = rule and hasprivate(main,f_rule) + local f_rule = rule and formatters["M-HORIBAR-M-%H"](rule) + local p_rule = rule and hasprivate(main,f_rule) + local ndata = normal and characters[normal] if rule and left and right and normal then - local ldata = characters[left] - local mdata = characters[rule] - local rdata = characters[right] - local ndata = characters[normal] + local ldata = 
characters[l or left] + local mdata = characters[m or rule] + local rdata = characters[r or right] local lwidth = ldata.width or 0 local mwidth = mdata.width or 0 local rwidth = rdata.width or 0 local nwidth = ndata.width or 0 local down = (mdata.height / 2) - height - -- - local f_left = right and formatters["M-HORIBAR-LEFT-%H"](right) - local f_right = right and formatters["M-HORIBAR-RIGHT-%H"](right) +if unicode == normal then + height = ndata.height + down = 0 +end -- + local f_left = left and formatters["M-HORIBAR-L-%H"](left) + local f_right = right and formatters["M-HORIBAR-R-%H"](right) local p_left = left and hasprivate(main,f_left) local p_right = right and hasprivate(main,f_right) -- @@ -116,7 +126,7 @@ do push, leftcommand[.025*mwidth], downcommand[down], - slotcommand[0][rule], + slotcommand[0][m or rule], pop, }, }) @@ -130,7 +140,7 @@ do push, leftcommand[.025*lwidth], downcommand[down], - slotcommand[0][left], + slotcommand[0][l or left], pop, }, }) @@ -144,48 +154,72 @@ do push, leftcommand[.025*rwidth], downcommand[down], - slotcommand[0][right], + slotcommand[0][r or right], pop, }, }) end - characters[unicode] = { - keepvirtual = true, - partsorientation = "horizontal", - height = height, - width = nwidth, --- keepvirtual = true, - commands = { +if unicode ~= normal then + data = { + unicode = unicode, + height = height, + width = nwidth, + commands = { downcommand[down], slotcommand[0][normal] }, - parts = { - { glyph = p_left, ["end"] = 0.4*lwidth }, - { glyph = p_rule, extender = 1, ["start"] = mwidth, ["end"] = mwidth }, - { glyph = p_right, ["start"] = 0.6*rwidth }, - } + } + characters[unicode] = data +end + data.parts = { + { glyph = p_left, ["end"] = 0.4*lwidth }, + { glyph = p_rule, extender = 1, ["start"] = mwidth, ["end"] = mwidth }, + { glyph = p_right, ["start"] = 0.6*rwidth }, } else - local width = main.parameters.quad/4 or 4*65536 + local width = main.parameters.quad/2 or 4*65536 -- 3 if not characters[p_rule] then - p_rule = 
addprivate(main,f_rule,{ - height = height, - width = width, --- keepvirtual = true, - commands = { push, { "rule", height, width }, pop }, - }) + if unicode == normal then + p_rule = addprivate(main,f_rule,{ + height = ndata.height, + width = width, + commands = { + push, + upcommand[(ndata.height - height)/2], + { "rule", height, width }, + pop + }, + }) + else + p_rule = addprivate(main,f_rule,{ + height = height, + width = width, + commands = { + push, + { "rule", height, width }, + pop + }, + }) + end end - characters[unicode] = { - height = height, - width = nwidth, --- keepvirtual = true, - partsorientation = "horizontal", - parts = { - { glyph = p_rule }, - { glyph = p_rule, extender = 1, ["start"] = width/2, ["end"] = width/2 }, +if unicode ~= normal then + data = { + unicode = unicode, + height = height, + width = width, + commands = { + slotcommand[0][p_rule] } } + characters[unicode] = data +end + data.parts = { + { glyph = p_rule, ["start"] = width/2, ["end"] = width/2 }, + { glyph = p_rule, extender = 1, ["start"] = width/2, ["end"] = width/2 }, + } end + data.keepvirtual = true -- i need to figure this out + data.partsorientation = "horizontal" end end @@ -205,8 +239,8 @@ do local nwidth = ndata.width or 0 local down = (mdata.height / 2) - height -- - local f_rule = rule and formatters["M-ROOTBAR-RULE-%H"](rule) - local f_right = right and formatters["M-ROOTBAR-RIGHT-%H"](right) + local f_rule = rule and formatters["M-ROOTBAR-M-%H"](rule) + local f_right = right and formatters["M-ROOTBAR-R-%H"](right) local p_rule = rule and hasprivate(main,f_rule) local p_right = right and hasprivate(main,f_right) -- diff --git a/tex/context/base/mkxl/meta-imp-newmath.mkxl b/tex/context/base/mkxl/meta-imp-newmath.mkxl new file mode 100644 index 000000000..af49f82ac --- /dev/null +++ b/tex/context/base/mkxl/meta-imp-newmath.mkxl @@ -0,0 +1,76 @@ +%D \module +%D [ file=meta-imp-newmath, +%D version=2023.04.01, +%D title=\METAPOST\ Graphics, +%D subtitle=New Math 
Symbols, +%D author=Mikael Sundqvist & Hans Hagen, +%D date=\currentdate, +%D copyright={PRAGMA ADE \& \CONTEXT\ Development Team}] +%C +%C This module is part of the \CONTEXT\ macro||package and is +%C therefore copyrighted by \PRAGMA. See mreadme.pdf for +%C details. + +%D In this file we will collect solutions for special math symbols. When such symbols +%D are used in publications the CMS will contact the Unicode Consortium to suggest that +%D they get a slot, because then we have proof of usage. We also consider old obsolete +%D symbols because they can be treated like some ancient out|-|of|-|use script and fit +%D into the \type {ancient math script}. + +\startMPextensions + vardef math_ornament_hat(expr w,h,d,o,l) = + image ( path p ; p := + (w/2,h + 10l) -- + (o + w,h + o) -- + (w/2,h + 7l) -- + (-o,h + o) -- + cycle ; + fill p randomized o ; + setbounds currentpicture to (-o,0) -- (w+o,0) -- (w+o,h+2o) -- (-o,h+2o) -- cycle ; + ) + enddef ; +\stopMPextensions + +\startuniqueMPgraphic{math:ornament:hat} + draw + math_ornament_hat( + OverlayWidth, + OverlayHeight, + OverlayDepth, + OverlayOffset, + OverlayLineWidth + ) + withpen + pencircle + xscaled (2OverlayLineWidth) + yscaled (3OverlayLineWidth/4) + rotated 30 + withcolor + OverlayLineColor ; +% draw boundingbox currentpicture; +\stopuniqueMPgraphic + +\definemathornament [widerandomhat] [mp=math:ornament:hat] + +\continueifinputfile{meta-imp-newmath.mkxl} + +\starttext + +This symbol was designed for one of Mikael's students working on a thesis on +probability. This student needed to typeset the characteristic function of a +random variable \im {X} with density function \im {f_{X}}, and insisted on +using a notation other than the (wide) hat, which was already used for something +else. 
For this reason the \tex {widerandomhat} was introduced, + +\startformula + E[\ee^{\ii tX}] = \widerandomhat{f_{X}}(t)\mtp{,} + E[\ee^{\ii t(X_1+X_2)}] = \widerandomhat{f_{X_1} \ast f_{X_2}}(t)\mtp{.} +\stopformula + +Naturally, it is automatically scaled, just like the ordinary wide hat + +\startformula + \widehat{a+b+c+d+e+f} \neq \widerandomhat{a+b+c+d+e+f} +\stopformula + +\stoptext diff --git a/tex/context/base/mkxl/mlib-run.lmt b/tex/context/base/mkxl/mlib-run.lmt index 0e955818e..de5ceb1db 100644 --- a/tex/context/base/mkxl/mlib-run.lmt +++ b/tex/context/base/mkxl/mlib-run.lmt @@ -6,28 +6,16 @@ if not modules then modules = { } end modules ['mlib-run'] = { license = "see context related readme files", } --- cmyk -> done, native --- spot -> done, but needs reworking (simpler) --- multitone -> --- shade -> partly done, todo: cm --- figure -> done --- hyperlink -> low priority, easy - --- new * run --- or --- new * execute^1 * finish - --- a*[b,c] == b + a * (c-b) - ---[[ldx-- -The directional helpers and pen analysis are more or less translated from the
-
Most of the code that had accumulated here is now separated in modules.
---ldx]]-- - local next, type, tostring = next, type, tostring local gsub = string.gsub local concat, remove = table.concat, table.remove local sortedhash, sortedkeys, swapped = table.sortedhash, table.sortedkeys, table.swapped ---[[ldx-- -Access to nodes is what gives
The next function is not that much needed but in
This is rather experimental. We need more control and some of this -might become a runtime module instead. This module will be cleaned up!
---ldx]]-- +-- Some of the code here might become a runtime module instead. This old module will +-- be cleaned up anyway! local next = next local utfchar = utf.char diff --git a/tex/context/base/mkxl/pack-obj.lmt b/tex/context/base/mkxl/pack-obj.lmt index 1e22515b9..a18f5e7e7 100644 --- a/tex/context/base/mkxl/pack-obj.lmt +++ b/tex/context/base/mkxl/pack-obj.lmt @@ -6,10 +6,8 @@ if not modules then modules = { } end modules ['pack-obj'] = { license = "see context related readme files" } ---[[ldx-- -We save object references in the main utility table. jobobjects are -reusable components.
---ldx]]-- +-- We save object references in the main utility table; job objects are reusable +-- components. local context = context local codeinjections = backends.codeinjections diff --git a/tex/context/base/mkxl/pack-rul.lmt b/tex/context/base/mkxl/pack-rul.lmt index 12d131c88..62a904901 100644 --- a/tex/context/base/mkxl/pack-rul.lmt +++ b/tex/context/base/mkxl/pack-rul.lmt @@ -7,10 +7,6 @@ if not modules then modules = { } end modules ['pack-rul'] = { license = "see context related readme files" } ---[[ldx-- -An explanation is given in the history document
Regimes take care of converting the input characters into
-
Some backgrounds are discussed in
This module is a stripped down version of libraries that are used
-by
Let's silently quit and make sure that no one loads it
- manually in
We create a namespace and some variables to it. If a namespace is - already defined it wil not be initialized. This permits hooking - in code beforehand.
+ -- We create a namespace and some variables to it. If a namespace is already + -- defined it will not be initialized. This permits hooking in code beforehand.

-We don't make a format automatically. After all, distributions
- might have their own preferences and normally a format (mem) file will
- have some special place in the
A few helpers, taken from
We use the
You can use your own reported if needed, as long as it handles multiple - arguments and formatted strings.
- --ldx]]-- + -- You can use your own reporter if needed, as long as it handles multiple + -- arguments and formatted strings. + metapost.report = metapost.report or function(...) if logs.report then @@ -89,11 +78,9 @@ else end end - --[[ldx-- -The rest of this module is not documented. More info can be found in the
-
We removed some message and tracing code. We might even remove the flusher
- --ldx]]-- + -- We removed some message and tracing code. We might even remove the + -- flusher. local function pdf_startfigure(n,llx,lly,urx,ury) tex.sprint(format("\\startMPLIBtoPDF{%s}{%s}{%s}{%s}",llx,lly,urx,ury)) @@ -443,9 +429,7 @@ else return t end - --[[ldx-- -Support for specials has been removed.
- --ldx]]-- + -- Support for specials has been removed. function metapost.flush(result,flusher) if result then diff --git a/tex/generic/context/luatex/luatex-preprocessor.lua b/tex/generic/context/luatex/luatex-preprocessor.lua index 8faa0b47e..b1debcd5c 100644 --- a/tex/generic/context/luatex/luatex-preprocessor.lua +++ b/tex/generic/context/luatex/luatex-preprocessor.lua @@ -6,11 +6,9 @@ if not modules then modules = { } end modules ['luatex-preprocessor'] = { license = "see context related readme files" } ---[[ldx -This is a stripped down version of the preprocessor. In
-
We provide a few commands.
---ldx]] - -- local texkpse local function find_file(...) diff --git a/tex/latex/context/ppchtex/m-ch-de.sty b/tex/latex/context/ppchtex/m-ch-de.sty deleted file mode 100644 index d35f8cf2d..000000000 --- a/tex/latex/context/ppchtex/m-ch-de.sty +++ /dev/null @@ -1,19 +0,0 @@ -\ProvidesPackage{m-ch-de}[2004/07/30 package wrapper for m-ch-de.tex] - -\newif\ifPPCH@PSTRICKS - -\DeclareOption{pstricks}{\PPCH@PSTRICKStrue} -\DeclareOption{pictex}{\PPCH@PSTRICKSfalse} - -\ExecuteOptions{pictex} -\ProcessOptions\relax - -\ifPPCH@PSTRICKS - \RequirePackage{pstricks,pst-plot} -\else - \RequirePackage{m-pictex} -\fi - -\input{m-ch-de.tex} - -\endinput \ No newline at end of file diff --git a/tex/latex/context/ppchtex/m-ch-en.sty b/tex/latex/context/ppchtex/m-ch-en.sty deleted file mode 100644 index e93a49867..000000000 --- a/tex/latex/context/ppchtex/m-ch-en.sty +++ /dev/null @@ -1,19 +0,0 @@ -\ProvidesPackage{m-ch-en}[2004/07/30 package wrapper for m-ch-en.tex] - -\newif\ifPPCH@PSTRICKS - -\DeclareOption{pstricks}{\PPCH@PSTRICKStrue} -\DeclareOption{pictex}{\PPCH@PSTRICKSfalse} - -\ExecuteOptions{pictex} -\ProcessOptions\relax - -\ifPPCH@PSTRICKS - \RequirePackage{pstricks,pst-plot} -\else - \RequirePackage{m-pictex} -\fi - -\input{m-ch-en.tex} - -\endinput \ No newline at end of file diff --git a/tex/latex/context/ppchtex/m-ch-nl.sty b/tex/latex/context/ppchtex/m-ch-nl.sty deleted file mode 100644 index 6e2b8d43d..000000000 --- a/tex/latex/context/ppchtex/m-ch-nl.sty +++ /dev/null @@ -1,19 +0,0 @@ -\ProvidesPackage{m-ch-nl}[2004/07/30 package wrapper for m-ch-nl.tex] - -\newif\ifPPCH@PSTRICKS - -\DeclareOption{pstricks}{\PPCH@PSTRICKStrue} -\DeclareOption{pictex}{\PPCH@PSTRICKSfalse} - -\ExecuteOptions{pictex} -\ProcessOptions\relax - -\ifPPCH@PSTRICKS - \RequirePackage{pstricks,pst-plot} -\else - \RequirePackage{m-pictex} -\fi - -\input{m-ch-nl.tex} - -\endinput \ No newline at end of file diff --git a/tex/latex/context/ppchtex/m-pictex.sty 
b/tex/latex/context/ppchtex/m-pictex.sty deleted file mode 100644 index a967b362d..000000000 --- a/tex/latex/context/ppchtex/m-pictex.sty +++ /dev/null @@ -1,5 +0,0 @@ -\ProvidesPackage{m-pictex}[2004/07/30 package wrapper for m-pictex.tex] - -\input{m-pictex.mkii} - -\endinput
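The new pieces in this commit can be exercised together with a small standalone file. The sketch below is not part of the commit: it assumes a ConTeXt LMTX with these patches applied, and only uses commands that appear in the diff (`\widerandomhat` from meta-imp-newmath.mkxl, the `mathalign` tracker registered in math-spa.lmt) plus standard ConTeXt formula markup.

```tex
% Sketch of a test file (assumes this commit is applied).
% \enabletrackers[mathalign] switches on the tracker registered in
% math-spa.lmt; \widerandomhat is the MetaPost ornament defined in
% meta-imp-newmath.mkxl and scales like \widehat.
\enabletrackers[mathalign]
\starttext
\startformula
    \widehat{a+b+c+d+e+f} \neq \widerandomhat{a+b+c+d+e+f}
\stopformula
\stoptext
```

Running this through `context` should report the per-row kerns from stage one of the math alignment code on the console while typesetting the two hats for visual comparison.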